Module 4 - Data Security in the Cloud, Part 3
Firebase
"Firebase" is the back-end service for running Mobile applications. Firebase is a platform used for developing and running a mobile application. Using Firebase, high-quality Android, web, and iOS applications can be developed. A serverless framework called "Cloud Functions" is used to run JavaScript code to handle events triggered by Firebase Firebase has a database called "Firebase Realtime Database" to store and synchronize the data between the users. Firebase has features such as * auto-synchronization of data * messaging * authentication * file storage services, and many more. It is backed up by the GCP. Using cloud storage, it is easy for the developers to store and provide content to users. Regardless of the network quality, security is provided for Firebase apps file uploads and downloads. To handle security and authentication, Firebase provides "Firebase Authentication." It supports phone authentication, Google and social media login, and email and password accounts. The Firebase Security Rules can also be used to handle authorization and validation requests
Solutions for Successful IRM Implementation
* Automatic policy provisioning: IRM implementation becomes easier with automatic policy assignment based on file location, document origin, and keywords. Organizations can assign such policies to certain folders, content, or any information that reaches the production stage. * Dynamic policy control: IRM policies that govern the content must evolve with changes in the business; file permissions can be changed at any time. * Audit trail: An audit trail is an unalterable log of access to the system. It also records modifications, additions, and deletions of any information managed by the system. An audit trail makes it easy to track user activity and serves as proof of security implementation for compliance requirements. * Integration with third parties: IRM can be integrated with third-party security software, which helps ensure that a company meets compliance requirements. * Automatic expiration: When enabled by the owner, IRM can automatically expire permissions to files and emails at any point in time. * Other security capabilities: → restricting printing of the entire document or a portion of it → disabling screenshots → disabling copy/paste → watermarking pages if print access is granted.
Conditions to be considered while deleting data in the cloud
* Fine-grained deletion: Only a portion of the data is deleted, while the other portions remain accessible to users. * Availability of service: The data-deletion service should remain available to users. * Consistency: Data deletion should take effect in all other storage locations where the original data is stored for backup or caching. * Error handling: The deletion should take place without errors and within a specified timeframe. * Acknowledgment: Once the data is deleted, the cloud user should receive an acknowledgment that the deletion completed successfully.
Interacting Tools in Google Cloud Storage
* Google Cloud Console: The Google Cloud Console is a web-based interface used to manage data in Google Cloud. In the Console, a user can create buckets, upload objects into the buckets, and control access to the buckets. * gsutil: a Python CLI tool used to interact with Google Cloud Storage (GCS). gsutil communicates securely with GCS over HTTPS/TLS. * Client libraries: Google Cloud Storage client libraries in many programming languages can be used to interact with GCS. * REST API: the JSON API and XML API can be used, including for migration from other storage providers to Google Cloud Storage.
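A minimal client-library sketch using the google-cloud-storage package; the bucket and object names are hypothetical, and credentials are assumed to come from Application Default Credentials.

```python
from google.cloud import storage

client = storage.Client()                  # uses Application Default Credentials
bucket = client.bucket("example-bucket")   # hypothetical bucket name

blob = bucket.blob("reports/summary.txt")
blob.upload_from_filename("summary.txt")   # upload is performed over HTTPS

print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```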
Ways a cloud security administrator can control the level of access to data stored in Google Cloud Storage (buckets and objects)
* IAM: Using IAM, the cloud security professional can control access to Google Cloud resources such as objects and buckets. Permissions can be granted to users and groups using IAM policies. -> Permissions: the set of actions that can be performed on Google Cloud Storage. -> Roles: sets of permissions assigned to users or groups for accessing objects and buckets. -> Members: the users or groups to whom the roles are assigned. * Encryption: By default, data in Google Cloud is encrypted using server-side encryption (SSE). * Authentication: All actions performed on Google Cloud are authenticated. * Bucket Lock: The cloud user can set how long an object must stay in the bucket using a retention policy; this prevents users from modifying or deleting the data for that period. * Object versioning: Google Cloud Storage provides additional protection by enabling users to retrieve deleted or replaced objects in the storage bucket (see the sketch below).
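A short sketch, assuming a hypothetical existing bucket, that turns on object versioning and applies a 30-day retention policy (Bucket Lock) with the google-cloud-storage library.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-bucket")   # hypothetical existing bucket

# Object versioning: keep noncurrent generations of overwritten/deleted objects
bucket.versioning_enabled = True

# Retention policy (Bucket Lock): objects cannot be deleted or replaced
# until they are at least 30 days old
bucket.retention_period = 30 * 24 * 60 * 60    # seconds
bucket.patch()
```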
Components of data-retention policy
* Legislation, regulation, and standards requirements: Data-retention periods are set based on the compliance requirements associated with each data type, e.g., a financial record may have a retention period of 10 years before deletion. * Mapping: Data mapping is used to understand the data types, data formats, and data locations. * Data classification: Based on the classification, the organization chooses the appropriate retention procedures. * Data-retention procedure: the steps for implementing the data-retention policy associated with a data type, e.g., a specific procedure for retaining financial data. The policy should be implemented through a data-retention procedure, which should also cover backup options and restore procedures. Retention procedure ⇒ data type + data-retention policy. A data-retention policy consists of: → how long the data should be stored → the physical location where the data is stored → how it is stored (database, HDD, SSD, etc.). * Monitoring and maintenance: The data-retention policy should be monitored to ensure the process keeps working as intended.
Features of AWS Storage
* Performance: S3 performs best when the S3 resources and the consuming services are in the same region. → Amazon Glacier is the archival service: retrieval is extremely slow, but the storage is low-cost. * Cost-effective: Customers pay only for what they use. * Security: Highly secure. * Scalability and elasticity: AWS storage provides a high level of scalability and elasticity automatically. * Disaster recovery: AWS storage supports disaster recovery for business continuity.
The 5 types of storage services in Azure Storage are
- Azure Blob - Azure Files - Azure Queue - Azure Tables - Azure Disks
Azure Type Explained -- Table Storage
- It stores large amounts of structured, non-relational data - It can process data queries quickly using a clustered index
Azure Storage
- It offers storage services that are highly available, durable, scalable, and secure. - It allows users to store and access data of any format from the storage services. - Azure storage services are billed on a pay-as-you-go model. - The user's Azure subscription determines the level of access the user has to Azure services. FEATURES OF AZURE STORAGE - High availability and durability: Data is redundantly backed up for disaster recovery and high availability. - Scalability: The data stored in Azure Storage scales based on the customer's requirements. - Security: A shared access signature (SAS) can restrict data access, using the shared key model to authenticate a user. - Accessibility: Azure is accessible via * HTTP * HTTPS * Azure PowerShell * Azure Portal - Cost-effective: Azure Hybrid Benefit reduces the financial burden; shutting down unused instances means paying only for what you need. - Performance: Azure Storage delivers high performance and low latency. - Elasticity: Azure storage services provide elasticity. - Disaster recovery: Data is backed up to a secondary location and used when a disaster breaks out.
Event attributes and event logging should be integrated into the event data to audit and trace the data effectively.
- The event date and time (which may differ from the log time) - Log date and time - Interactive identifier - Service name and protocol - Geolocation - Application address and an identifier such as server address, port number, name, version, etc. - Code location - Source address along with the user's IP address and the user's device identifier - User identity - Type and severity of the event - Security-related event flags Additional event attributes that can also be integrated into the event data: - GPS event date and time - Action and object - Result status, to check whether the action performed on the object succeeded - HTTP status code - Responses seen by the users, such as session termination and custom text messages
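An illustrative sketch of emitting these attributes as one structured (JSON) log record with Python's standard logging module; the field names and the sample values are assumptions chosen to mirror the list above.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("audit")

def log_event(event_type, severity, user_id, source_ip, result, **extra):
    """Emit one audit event as a single structured (JSON) log record."""
    record = {
        "event_time": datetime.now(timezone.utc).isoformat(),  # event date and time
        "log_time": datetime.now(timezone.utc).isoformat(),    # log date and time
        "event_type": event_type,
        "severity": severity,
        "user_id": user_id,
        "source_ip": source_ip,
        "result": result,                                       # result status
        **extra,                                                # e.g. http_status
    }
    log.info(json.dumps(record))

log_event("object.read", "INFO", "user-42", "203.0.113.7", "success", http_status=200)
```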
4 Stages of Chain of Custody
1. Data collection: At the collection stage, the examiner provides detailed documentation about the data, such as who created it, when and how it was created, and where it is stored and accessed. 2. Examination: During the examination stage, all information related to the chain of custody is documented, including all the forensic steps taken in the process. It is vital to take screenshots to show that the tasks were completed. 3. Analysis: This is the result of the examination stage. Using legal methods and techniques, useful information is extracted. 4. Reporting: This is the stage where the chain of custody is documented, including the source of data, the extraction methods used, the analysis process, the tools used, the issues faced, and how they were handled. This document should show that the chain of custody was maintained throughout the process so that the submitted evidence is legally acceptable.
elements to be included in a data archiving policy.
1. Data encryption: a procedure must be implemented so that the right policies are applied to the encryption keys. 2. Data tracking: a procedure must be implemented to properly monitor the locations where archived data is distributed. 3. Granular retrieval: data must be easy to retrieve using search features. 4. Backup and disaster recovery: the plan should be properly documented, implemented, and updated regularly. 5. Data formats: ensure the right data format is selected based on the data type, e.g., Name ⇒ character. 6. Data restoration: the procedure must be tested in a different environment to ensure archived data can be restored easily. The data archiving procedure uses automated software that moves the data to the archival storage system based on the policies set by the cloud administrator. The archived data remains accessible to users at any time and from anywhere.
AWS storage services and their uses
1. Object storage: Amazon Simple Storage Service (S3) 2. File storage: Amazon Elastic File System, Amazon FSx for Windows File Server, Amazon FSx for Lustre 3. Block storage: Amazon Elastic Block Store (EBS volume) 4. Backup: AWS Backup 5. Data transfer: AWS Storage Gateway, AWS DataSync, AWS Transfer Family, AWS Snow Family 6. Edge computing and storage: AWS Snow Family
Deletion Methods
1. Overwriting: the data to be deleted is overwritten with new data. (A less-adopted method: it requires internal modifications and is time-consuming.) 2. Data scrubbing: the process of deleting or rectifying data. This method ensures the deletion of all copies of the data. 3. Crypto-shredding: the most widely adopted method of deleting data. In crypto-shredding, the encryption keys used to encrypt the original data are deleted in order to dispose of the data, which makes it unreadable and unrecoverable (a minimal illustration follows).
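A minimal crypto-shredding illustration using the cryptography package's Fernet cipher; in a real system the key would live in a KMS/HSM, so the local variable here is only a stand-in.

```python
from cryptography.fernet import Fernet

# Encrypt the data with a key (in practice the key lives in a KMS/HSM)
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive customer record")

# Crypto-shredding: destroying the key makes every stored copy of the
# ciphertext (including backups and caches) unreadable and unrecoverable.
key = None   # stand-in for securely destroying the key in the KMS

# Any later attempt to decrypt `ciphertext` now fails, because the key is gone.
```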
Event Sources from cloud service models - SaaS - PaaS - IaaS
1. SaaS event sources: SaaS provides no visibility into infrastructure logs. Cloud security engineers must make sure that all data-access requirements are stated in the Service Level Agreement (SLA) to maintain proper investigation capabilities, auditability, and traceability of data. Data sources used for event investigation, monitoring, and visibility include * application server logs * web server logs * guest operating system logs * database and host access logs * SaaS portal logs * billing records * network captures. 2. PaaS event sources: users have access to and control over the event and diagnostic data. Cloud security professionals can view a few infrastructure-level logs and complete application logs. The cloud security professionals and the development team work together to understand the features of the applications and to design and implement monitoring regimes that maintain auditability and traceability of data. 3. IaaS event sources: all infrastructure and application-level logs are visible to the cloud security professional (the user has complete log visibility). NB: It is recommended that cloud users specify the data-access requirements in the SLA so that they can maintain auditability and traceability over data. An effective audit can be performed by examining the following logs: DNS server logs; API access logs; Virtual Machine Monitor (VMM) logs; management portal logs; billing records; cloud network logs.
Google's block storage, called Persistent Disk, is similar to EBS volumes in AWS.
=> Google offers a high-performance block storage service called Persistent Disk. => A Persistent Disk is block storage attached to VMs in Google Cloud. => Persistent Disks can be attached to and detached from VMs. NOTE: Since VMs and Persistent Disks are managed separately, the data on a Persistent Disk remains safe even after the VM instance is deleted. Persistent Disk media types: ⇒ HDD: cost-effective and used for bulk throughput ⇒ SSD: high performance for random workloads and bulk throughput. Both storage types can be extended up to 64 TB. Persistent Disk locations: ⇒ Regional: high availability and data redundancy (multiple zones within one region), e.g., R-1 (Za, Zb, Zc) ⇒ Zonal: less expensive, provides high availability, and stores data in one zone (one data center) ⇒ Local: hardware on the host where the VM runs, with the best I/O performance. Local disks are less recommended because they offer low availability and redundancy. A single Persistent Disk can be attached to multiple virtual machines (one disk to many VMs). Snapshots of a Persistent Disk can be taken to back up data periodically and protect cloud consumers from data loss. Persistent Disk snapshots are geo-replicated, can be used for restoration, and are automatically encrypted using system-defined or customer-supplied keys.
Example and use case of Amazon CloudFront: CloudFront is mainly used for static content such as audio, video, or other media files. CloudFront can also deliver both static and dynamic web content.
For example, when a user sends a request for data that is served with CloudFront, the request goes to the nearest edge location to reduce latency and deliver the content as fast as possible with low network traffic. If the edge location has a cached copy of the requested content, CloudFront delivers it immediately with the lowest latency. If the content is not available at the edge location, CloudFront retrieves it from the origin and caches it at the edge location for a certain period. CloudFront is designed to deliver content with low latency and high bandwidth. It speeds up content distribution by sending end-user requests to the nearest edge location instead of routing them to the original source. It also reduces the number of networks a user request must pass through, thereby improving performance. Smaller files can be downloaded at higher speed, so make sure that the Amazon CloudFront CDN compresses objects automatically; serving compressed files is less costly than serving uncompressed files. To mitigate DDoS attacks, the cloud security administrator can whitelist or blacklist users from specific locations from accessing the web-application content by enabling geo-restriction (see the sketch below).
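A hedged boto3 sketch of enabling a geo-restriction allow list on an existing distribution; the distribution ID and country codes are hypothetical placeholders.

```python
import boto3

cloudfront = boto3.client("cloudfront")
distribution_id = "EDFDVBD6EXAMPLE"   # hypothetical distribution ID

# The current config and its ETag are both needed for an update
resp = cloudfront.get_distribution_config(Id=distribution_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Allow (whitelist) only viewers from the listed countries
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 2,
        "Items": ["US", "CA"],
    }
}

cloudfront.update_distribution(
    Id=distribution_id,
    IfMatch=etag,
    DistributionConfig=config,
)
```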
Lock Storage Account to Prevent Accidental Account Deletion
A cloud account is usually handled by multiple users with different kinds of access privileges. An administrator is the one who can control everything and can lock a resource group or individual resources to prevent accidental deletion. To prevent deletion and to avoid unwanted modifications, an administrator can also set a read-only lock. If the lock type is set to Delete, users with access can still read and modify data but cannot delete it. If the lock is set to Read-only, authorized users can read data but cannot modify or delete it.
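A sketch of creating a CanNotDelete lock on a resource group with the azure-mgmt-resource ManagementLockClient; the subscription ID, resource-group name, and the dict-style lock parameters are assumptions and may need adjusting to the SDK version in use.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient

# Hypothetical subscription ID
client = ManagementLockClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

# "CanNotDelete": users can still read and modify resources but cannot delete them;
# "ReadOnly" would additionally block modifications.
client.management_locks.create_or_update_at_resource_group_level(
    "example-storage-rg",                 # hypothetical resource group
    "storage-delete-lock",                # lock name
    {"level": "CanNotDelete", "notes": "Prevent accidental deletion"},
)
```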
Check for Overly Permissive Stored Access Policies
A stored access policy is an additional layer of control for service-level SASs. Stored access policies control groups of shared access signatures by changing or overwriting parameters such as validity (start time and expiry time) and permissions (read, write, list, delete, add, and create) as required. Shared access signatures should not be given full access to storage account resources such as blobs, tables, files, and queues through stored access policies. It is recommended to verify the permissions given and make the necessary changes.
AWS Snowball
AWS Snowball is used to move large amounts of data from an on-premises data center to AWS without using the Internet, via a physical storage device. The Snowball appliance is waterproof, dustproof, and lightweight, and it is available in all AWS regions. Using AWS Snowball helps save on ⇒ network costs ⇒ security concerns (data tampering) ⇒ the long duration of data transfer. The back of the Snowball device contains two SFP+ 10 GigE network connections, one RJ45 1 GigE connection, and a power cord. The front of the device contains a control panel and an E Ink display. AWS Snowball is used to transfer data such as ⇒ databases ⇒ medical records ⇒ backups ⇒ content for distribution ⇒ disaster-recovery data. Snowball devices: ⇒ Snowball ⇒ Snowball Edge ⇒ Snowmobile. A Snowball can transfer up to 80 TB of data and is simple, safe, fast, and secure. The root user can create an IAM user with Snowball actions/permissions. All Snowball devices come with tamper-evident enclosures, 256-bit encryption, and industry-standard Trusted Platform Modules (TPM) for secure data transfer. Data in a Snowball is encrypted through AWS KMS, and the encryption keys are stored there.
Enable Active Geo-Replication of Azure SQL Database
Active geo-replication is a feature of Azure SQL Database that allows the creation of a readable secondary database on a SQL Database server in the same or a different region. Active geo-replication is a business-continuity strategy for quick disaster recovery of individual databases during a regional disaster. Geo-replication supports up to four secondary databases, which can be used for read-only queries. Enabling geo-replication allows the application to initiate failover to a secondary database. After failover, the secondary database becomes the primary database with a different connection endpoint. Benefits of geo-replication: used in disaster recovery; used in database migration to move databases from one server to another with minimum downtime; used during application upgrades as a failback copy.
Amazon EC2 Instance Storage
Amazon EC2 instance store volumes are ephemeral drives. ⇒ Instance store is temporary block storage for several types of EC2 instances. EC2 instance storage is cost-effective, easy to use, and flexible. The instance store is provided by the disks of the host machine where the EC2 instance is deployed. Instance storage is used by applications for processing data rather than long-term storage. It is highly suitable for data that changes frequently, such as: → caches → temporary data → buffers → logs → scratch information. Amazon EC2 instance storage can provide more than one instance store volume. NB: the number of instance store volumes depends on the EC2 instance type; larger instance types get larger and more numerous instance store volumes. EC2 instance storage is not suitable for storing long-term or important data. ⇒ Instance store volumes cannot be detached from one EC2 instance and attached to another. ⇒ Instance store data remains available only while the EC2 instance is running. ⇒ Data in the instance store is lost when there is a disk failure or the EC2 instance is stopped or terminated. ⇒ Data in the instance store survives an EC2 reboot. If an AMI is created from an EC2 instance, the data in its instance store is not preserved and will not appear in the instance store of a new instance launched from that AMI. Types of EC2 instance store include ⇒ ephemeral HDD ⇒ non-volatile memory express (NVMe) SSD ⇒ SSD with TRIM support. NB: they differ in their performance capabilities. The number of instance store volumes is fixed and cannot be increased or decreased for a single instance, but storage elasticity (increase and decrease) can be achieved by running multiple EC2 instances. IAM can be used to control who can deploy and manage EC2 instances on AWS.
Amazon Simple Storage Service (Amazon S3)
Amazon S3 is a simple storage service that provides highly scalable, cost-effective, low-latency data storage infrastructure. A user can store an unlimited number of objects in Amazon S3 buckets and can read, write, or delete objects ranging from 1 byte to 5 terabytes. ⇒ Multiple users can read and write data to S3 at the same time. ⇒ Amazon S3's horizontal scalability enables concurrent access to S3 without latency (delay in data transmission). ⇒ The S3 versioning feature protects against accidental deletion or application failure, and versioning plus replication provide extra data protection for retention and archiving. ⇒ Versioning allows an application or data to be rolled back to a previous version (see the sketch below); it also helps in backing up critical data and delivering disaster-recovery solutions for business continuity. Payment for S3 is based on usage ⇒ pay per use. ⇒ S3 offers high durability for mission-critical data. Files that are backed up include → DB snapshots → files → volumes; these backups enhance disaster recovery. ⇒ Data backups, for example ETL job outputs, are sent to S3 periodically. Backups can also take the form of cross-region replication (across different regions), which enhances disaster recovery. ⇒ Amazon Glacier is used for infrequently accessed data (long-term backups). Data backed up in Amazon Glacier supports disaster recovery but takes several hours to retrieve. ⇒ Amazon Glacier is therefore not suitable for disaster recovery that requires quick data retrieval. ⇒ There is no limit to Amazon Glacier's storage capacity. ⇒ There is no minimum fee to use Amazon Glacier. ⇒ Amazon Glacier scales storage up and down automatically. ⇒ Amazon Glacier is as durable as Amazon S3.
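A minimal boto3 sketch of turning on versioning and uploading a backup object; the bucket name, file name, and object key are hypothetical.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-backups"   # hypothetical bucket name

# Turn on versioning so overwritten or deleted objects keep their prior versions
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Store a database snapshot; re-uploading the same key later adds a new version
s3.upload_file("db-snapshot.sql.gz", bucket, "backups/db-snapshot.sql.gz")
```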
What is an Object in Cloud Storage ?
An object can be of any format, such as audio, video, or text files. All objects are stored in containers called buckets. Google Cloud Storage offers scalable storage; the objects are accessible globally and the service offers high durability. Google Cloud Storage provides four storage classes that a cloud user can choose for a bucket: * regularly/frequently accessed data ⇒ Standard storage class; * infrequently accessed data stored for a long duration ⇒ Nearline storage (minimum 30-day duration) and Coldline storage (minimum 90 days); the fourth class, Archive storage, is covered in the archival-storage notes below. NB: data can be stored in a single location or across multiple locations; multi-region storage offers high availability and high performance.
Azure Type Explained -- Blob Storage
Azure Blob Storage is designed to store massive amounts of unstructured data such as images, documents, log files, backup and restore data, and audio and video streaming files. All the stored unstructured data can be accessed from anywhere through HTTP/HTTPS and through Azure tools such as Azure PowerShell and the Azure Storage client libraries. Blob storage has three types: ⇒ Block blobs: used to store images, log files, and more. ⇒ Append blobs: used to log data from VMs (VM log data is stored in append blobs). ⇒ Page blobs: store the images of virtual hard drives (VHD). NB: Blob storage has three access tiers: ⇒ hot access tier: for frequently accessed data ⇒ cool access tier: for infrequently accessed data (stores data for at least 30 days) ⇒ archive access tier: for rarely used data kept for a longer period (a minimum of about 6 months, i.e., 180 days), such as backups that are not used frequently and compliance/audit data that is accessed rarely. Ways to move data (including on-premises data) to Blob Storage: * AzCopy * Azure Storage Data Movement Library * Azure Data Factory
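A sketch using the azure-storage-blob package to upload a block blob straight into the Cool tier; the connection string, container, and file names are hypothetical, and the existence of the container is assumed.

```python
from azure.storage.blob import BlobServiceClient, StandardBlobTier

# Hypothetical connection string taken from the storage account's access keys
conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=examplestorage;"
    "AccountKey=<account-key>;EndpointSuffix=core.windows.net"
)
service = BlobServiceClient.from_connection_string(conn_str)

blob = service.get_blob_client(container="logs", blob="app-2024-01.log")
with open("app-2024-01.log", "rb") as data:
    # Block blob written directly into the Cool tier (infrequently accessed data)
    blob.upload_blob(data, standard_blob_tier=StandardBlobTier.Cool)
```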
Azure Type Explained -- Azure Files:
Azure Files is a shared network file service in which files are accessed via SMB or the Network File System (NFS) protocol. Azure Files storage can be used as a replacement for, or additional storage alongside, on-premises file servers or NAS devices. % SMB file shares ⇒ Windows % NFS file shares ⇒ Linux and macOS. Data at rest is encrypted, and data in transit is protected using SMB encryption and HTTPS.
VM log data is stored in which type of Azure Blob storage? A. Block Blob B. Append Blob C. Page Blob D. Comic Blob
B. Append Blob. Append blobs are used to log data from VMs; an append blob is composed of several blocks of different sizes.
Use BYOK for Storage Account Encryption
BYOK (bring your own key) is the customer-managed key (CMK) option in Azure. The data stored in Azure storage accounts is encrypted by default with Microsoft-managed keys. Microsoft also allows customers to encrypt data-at-rest using customer-managed keys, which are stored in Azure Key Vault. It is recommended to use customer-managed keys to have more control over the encryption and decryption of data stored in Azure Storage.
NOTE THIS
Business-critical data that needs to be stored in a WORM (write once, read many) state can be configured using immutability policies and legal holds. This is available for general-purpose v2 and Blob storage accounts, but not for other storage account types. Once created, these blobs cannot be modified but can be read as many times as required during a specified retention period.
With AWS Backup, the cloud security administrator can assign policies to a backup vault and its related resources. These policies grant users access to create backup plans and can also restrict their ability to delete recovery points.
By using resource-based access policies with an AWS Backup vault, the administrator can specify who can access the backups in the vault and what actions they can perform on them. Using these policies, the administrator can also restrict users from deleting backups in the vault. The use of resource-based access policies with AWS Backup ensures that backups are protected and can be accessed only by the right users (see the sketch below).
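A hedged boto3 sketch that attaches a vault access policy denying recovery-point deletion to everyone; the vault name is hypothetical and the policy would normally be scoped more precisely.

```python
import json
import boto3

backup = boto3.client("backup")

# Resource-based policy that denies recovery-point deletion to all principals
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Action": "backup:DeleteRecoveryPoint",
        "Resource": "*",
    }],
}

backup.put_backup_vault_access_policy(
    BackupVaultName="example-vault",     # hypothetical vault name
    Policy=json.dumps(policy),
)
```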
____________ is an attempt to preserve and protect the digital evidence from the time it is collected to submission in court
Chain of custody. The chain of custody also documents the details of the people who collected, handled, or transferred the evidence, including the date and time of these activities. For digital evidence to be acceptable in a court, there must be proper documentation showing the collection, possession, transfer, and access of an item, as well as the analyses performed on the collected data, up to the date of final disposition. NOTE: It can be challenging for cloud security professionals to create a verifiable chain of custody on cloud platforms because there are multiple data centers across the world. The best way to provide the chain of custody is to include it in the SERVICE LEVEL AGREEMENT; this enables the CSP to provide it when requested.
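An illustrative Python sketch of one building block of a chain-of-custody record: hashing a collected evidence file and logging who handled it and when. The file path and analyst name are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def custody_record(path: str, collected_by: str) -> dict:
    """Hash a collected evidence file and record who handled it and when."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "file": path,
        "sha256": digest.hexdigest(),    # re-hashing later proves the copy is unaltered
        "collected_by": collected_by,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(custody_record("disk-image.dd", "analyst-01"), indent=2))
```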
Enable Logging for Azure Storage Queue Service
Check the logs of any service to verify the proper functioning and availability of the configured services. Logs record successful and failed requests for the selected categories. It is recommended to enable Azure Storage Queue service monitoring with the read, write, and delete categories so that StorageRead, StorageWrite, and StorageDelete information is logged. The cloud security professional can use the Azure Storage Queue service log information to check individual requests and to identify issues related to the Queue service.
GCP Cloud Audit Logs: Cloud Audit Logs also integrate with incident-management tools to monitor and act on incidents.
Cloud Audit Logs for Google Cloud Storage help administrator teams maintain audit details in GCP. The Cloud Audit Logs are stored in highly secured storage space and are encrypted at rest. - The security team can export the audit log details to other Google Cloud services or to Cloud Logging. - Cloud Audit Logs are retained for a period of time for compliance before being deleted. Data-access logs can be customized to meet the organization's needs. * Admin activity logs: include administrative activities. * Data-access logs: contain user access to cloud data and calls that read the metadata of resources. * System event logs: include Google Cloud actions that modify the configuration of resources. * Policy-denied logs: include entries where Google Cloud denied a user access to resources due to policy violations.
Enable Cloud KMS to Encrypt Cloud Storage Data. Cloud KMS is fully managed by Google, and it enables cloud users to take custody of their data and manage the encryption keys in the cloud.
Cloud KMS ensures the confidentiality of data on GCP by allowing users to manage their cryptographic keys. Users can generate, rotate, and destroy their own keys. Cloud KMS provides additional benefits by monitoring and controlling where keys are stored through the choice of regions and durability. Cloud KMS is integrated with IAM to protect against unauthorized users, and ACLs allow a user with an authorized role to decrypt data in Google services. - Envelope encryption uses one key, the key-encryption key (KEK), to wrap (envelop) another key, the data-encryption key (DEK) that encrypts the data. - The KEKs are stored and used by Google's central KMS. - Customer data is encrypted using AES-256 or AES-128. (An illustrative sketch of envelope encryption follows.)
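A minimal envelope-encryption illustration using the cryptography package's Fernet cipher; the locally generated KEK here is only a stand-in for a key that Cloud KMS would hold and never release.

```python
from cryptography.fernet import Fernet

# Key-encryption key (KEK): in practice this is held and used inside Cloud KMS
kek = Fernet(Fernet.generate_key())

# Data-encryption key (DEK): generated locally and used to encrypt the data itself
dek_key = Fernet.generate_key()
ciphertext = Fernet(dek_key).encrypt(b"customer record")

# Only the *wrapped* DEK is stored alongside the ciphertext
wrapped_dek = kek.encrypt(dek_key)

# To decrypt: unwrap the DEK with the KEK, then decrypt the data with the DEK
plaintext = Fernet(kek.decrypt(wrapped_dek)).decrypt(ciphertext)
assert plaintext == b"customer record"
```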
Amazon CloudFront
CloudFront is a content delivery network (CDN) web service provided by Amazon that offers fast and secure delivery and distribution of data to customers throughout the world. Amazon CloudFront offers low-latency, high-speed delivery of content and can be integrated with other AWS services such as → S3 → Route 53 → EC2. CloudFront can deliver all types of files and content that can be served over HTTP. CloudFront delivers content to users through a global network of edge locations: distributed physical data centers containing cached copies of users' content. NB: Edge locations help reduce the latency of data delivery to end users. A cloud security professional can restrict access to content distributed through CloudFront using the geo-restriction capability. CloudFront provides NETWORK- and APPLICATION-LEVEL security and uses AWS Shield Standard to protect against DDoS attacks. Amazon CloudFront is cost-effective: cloud users pay only for the content delivered through the network, with no hidden, minimum, or up-front charges.
An S3 access point is very useful when there is a large amount of data in an S3 bucket that needs to be accessed by different users. S3 access points are used to restrict access to S3 resources, working together with ⇒ IAM policies ⇒ S3 bucket policies.
Create S3 access points with a VPC so that users can have dedicated, individual access points, where each access point can have separate policies. S3 access points enforce a least-privilege access model by restricting access to S3 resources together with IAM policies and S3 bucket policies.
______________ is a method of identifying inactive data and moving that data from the current system to long-term archival storage.
Data archiving. The archived data can be stored in any location; it remains accessible and its integrity is preserved. When data archiving is implemented, ⇒ it increases the read/write performance of primary storage ⇒ it enables quick access to archived data when needed. It is mainly used by organizations that update information regularly and still need to retain the old content.
Google Cloud Data Transfer Services
Data migration or transfer to Google Cloud is done using Google Cloud data transfer services. Whether the source is on-premises or online, cloud consumers can use these services to handle large-scale data transfers easily and efficiently. With Google Cloud data transfer services, data can be transferred from other CSPs' storage, such as AWS S3 and Azure Blob Storage, into Google Cloud Storage. The transfer services come with built-in ⇒ data validation ⇒ encryption and ⇒ fault-tolerance features. They are reliable and secure, so transfers are not impacted by an agent failure. Data in flight is encrypted with TLS for both public and private connections. TLS encryption and IAM roles such as ⇒ "Storage Transfer Administrator" ⇒ "Storage Transfer User" are used to restrict and control transfer operations.
___________________ refers to storing data while maintaining its safety and integrity.
Data preservation. Evidence preservation helps ensure that evidence will be acceptable in court, since it can easily be destroyed or modified. The backlog at forensic laboratories can range from six months to one year. Fake event logs are prevented by a tamper-proof event-log mechanism. Using the right tools to analyze, monitor, and test new implementations improves the performance of the product; some of these tools include Looker, Tableau, Periscope, and Mode Analytics. Maintaining event data and logs is recommended, as the data can be used in various scenarios such as examining firewalls, validating access controls, and diagnosing an application installation error.
__________ policies are a set of guidelines used by an organization to determine how long data should be available and when it needs to be deleted.
Data-retention. A data-retention policy is established for future use and easy retrieval of data when needed, and it ensures data is deleted when it is no longer useful or needed. A data-retention policy must specify the retention period, the data types (such as names, addresses, and financial information), data security, and data-retrieval procedures. A data-retention policy is implemented by first classifying the data an organization deals with based on its data types. ⇒ Data-retention policies ensure that local and federal regulations/requirements are met. Steps: create a data-retention policy ⇒ data-retention procedure ⇒ monitoring and maintenance.
To minimize the risk of a confidential or sensitive data breach and to reduce the monthly AWS account bill, identify the unused EBS volumes present in your AWS account and delete them.
Deleting unused EBS volumes helps ⇒ reduce the monthly bill ⇒ minimize the risk of data leakage or data breaches. A minimal lookup sketch follows.
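A boto3 sketch that lists volumes in the "available" state (i.e., not attached to any instance); deletion is left commented out so it only happens after review.

```python
import boto3

ec2 = boto3.client("ec2")

# Volumes in the "available" state are not attached to any EC2 instance
unused = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for vol in unused:
    print(f"Unattached volume: {vol['VolumeId']} ({vol['Size']} GiB)")
    # After reviewing (and snapshotting if required), delete the volume:
    # ec2.delete_volume(VolumeId=vol["VolumeId"])
```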
Azure Type Explained -- Disk Storage
Disk Storage can be attached to virtual machines and comes as ⇒ SSD and HDD. SSDs perform faster operations than HDDs. The variants are Azure Standard Hard-Disk Drives (HDD), Azure Standard Solid-State Drives (SSD), Azure Premium SSD, and Azure Ultra Disk. Managed disks are encrypted by default using SSE (server-side encryption). The disks used for VMs (OS and data disks) can be encrypted using Azure Disk Encryption. Azure Key Vault can be used to store the encryption keys for data-at-rest; encrypting VM disks requires creating an Azure Key Vault to hold the keys. Either platform-managed keys or customer-managed keys can be used for encryption.
Limit Storage Account Access by IP Address
To safeguard the data stored in Azure storage services such as blobs, files, tables, and queues against unauthorized access and to prevent data exposure, the cloud security professional should limit access to trusted or safelisted IP addresses. Ensure the security of the storage accounts by reviewing the IP addresses that have been added to the safe list.
Amazon Elastic File System (EFS)
Amazon Elastic File System (EFS) is used with applications and workloads on AWS. An EFS file system can have several EC2 instances connected to it, and all the connected instances can perform read and write operations on the files stored in EFS. EFS storage grows and shrinks automatically as files are added and removed (it is elastic). With EFS, multiple EC2 instances can share data sources with each other. EBS ⇒ one EC2 instance to many EBS volumes. EFS ⇒ one EFS file system to many EC2 instances. NB: EFS is designed for both AWS cloud services and on-premises use. ⇒ EFS provides high availability and durability to all EC2 instances connected to it. ⇒ Files in EFS are distributed for redundancy across different AZs in a region. ⇒ EFS also provides periodic backup of files for disaster recovery. ⇒ Customers do not have to make advance storage allocations and pay only for the storage used. ⇒ Users do not pay for deleted files. EFS storage classes: ⇒ Standard storage class: higher cost, stores frequently accessed data. ⇒ Infrequent Access (IA) storage class: lower price, stores data or files that are accessed less frequently. EFS performance modes differ: ⇒ the default general-purpose performance mode is limited to 7,000 file operations per second, which is suitable for content management systems (CMS) and general file serving. A cloud security professional can control access to EFS using: * Portable Operating System Interface (POSIX) permissions * IAM (Identity and Access Management) * security groups for EC2 instances * Amazon VPC. NOTE: EFS can be secured using different encryption methods; a minimal creation sketch follows.
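A hedged boto3 sketch that creates an encrypted, general-purpose EFS file system; the creation token is a hypothetical idempotency value, and mount targets/security groups would still need to be configured before instances can mount it.

```python
import boto3

efs = boto3.client("efs")

# Create an encrypted, general-purpose file system; EC2 instances in the VPC
# can then mount it (via mount targets) and share the same files.
fs = efs.create_file_system(
    CreationToken="shared-app-data",     # hypothetical idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,                      # encryption at rest with AWS KMS
)
print("File system ID:", fs["FileSystemId"])
```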
To protect sensitive information in EBS volumes and to make stored data inaccessible to unauthorized users, the cloud security professional should create an ____________________ EBS volume.
Encrypted. An encrypted EBS volume protects corporate secrets, classified information, confidential information, etc. It also provides protection against identity theft.
_______________ are actions that take place in a cloud environment.
Events. Events are triggered by the different cloud services that are associated with data. This event data is collected and analyzed to identify security threats or vulnerabilities. Cloud security professionals can filter the events that take place on the cloud infrastructure and focus on the relevant and important ones.
______________ refers to the assurance that the owner of the information provides valid proof and cannot deny its validity. It also refers to the service that provides proof of the origin and integrity of data.
Non-repudiation. It also provides protection against a person who denies having performed a particular action, such as creating data, sending or receiving a message, or approving information. Non-repudiation makes it difficult to deny who created the data, where it was collected from, and its authenticity. USE CASE: non-repudiation is used in formal contracts, communications, and data transfers.
In GCP, the ________________ service is used to import data from cloud sources such as AWS S3 and Azure Blob Storage into Google Cloud Storage by scheduling recurring transfers.
Google Cloud Data Transfer Services, specifically the Storage Transfer Service. The Storage Transfer Service can also be used to transfer on-premises data to Google Cloud Storage and to transfer data from one bucket to another. With it, data can be moved from other CSPs' storage, such as AWS S3 and Azure Blob Storage, into Google Cloud Storage simply, quickly, and safely. Customers are charged when they transfer data from other cloud providers; charges also apply for transferring data from one Cloud Storage bucket to another.
Google Cloud File Storage
Google Cloud Filestore offers fully managed NAS for use with Google Compute Engine. Because Filestore is fully managed, cloud users can easily mount its volumes on Compute Engine virtual machines. It provides fast performance for file-based workloads and delivers disk storage over the network. Filestore provides high performance with low latency, which is highly beneficial for latency-sensitive workloads such as * electronic design automation (EDA) * data analytics * media rendering. Filestore also eases application migration to the cloud without rewriting the applications. Google Cloud users can choose the IOPS and storage capacity, which lets them tune the file system for each type of workload, and performance remains consistent for that workload.
What should be done with event data collected from event logs?
Once the event data is collected, analyze the data to gain potential insights into users' behavior and to improve the performance of the product or website.
Google Cloud Storage is a cloud storage service used for storing and retrieving data on the Google Cloud Platform.
Google Cloud Storage is similar to Amazon S3. GCS is used to: → store primary and least-accessed data → host website content → distribute data across multiple locations. (Data) files are objects ⇒ stored in buckets (containers).
Enable In-transit Encryption for PostgreSQL Database Servers
The cloud security professional should enable in-transit encryption using the SSL/TLS protocol between the client application and Microsoft Azure PostgreSQL Database servers to protect data-in-motion against man-in-the-middle (MITM) attacks and to fulfill compliance requirements. In-transit encryption on Microsoft Azure PostgreSQL Database servers protects confidential information from unauthorized users.
Google Workspace
Google Workspace is used to handle collaboration, communication, and file storage. Google Workspace provides business email and collaboration tools such as Meet, Chat, Drive, Docs, Sheets, Slides, and many more. Google Workspace enhances * collaboration * communication * file storage. If a team needs to interact for a business discussion, Google Meet provides simple and reliable video conferencing. Up to 150 members can participate in a meeting, and the meeting can be recorded and uploaded to Google Drive; the recordings can be used by team members who did not attend the meeting. NB: Users can sync files on their PCs or laptops to Google Drive, work offline on files, and sync them later when connected to the Internet. - Using IAM policies, access to files and the actions allowed on them can be restricted. - MFA and security keys are used for additional security. - Google ensures customer data is secured and not shared with third parties.
Enable Blob Storage Lifecycle Management
To meet compliance requirements in terms of security and cost optimization, the cloud security professional should configure a lifecycle-management policy for Microsoft Azure Blob Storage data. Depending on the lifecycle-management policy defined at the storage-account level, blob data can be automatically deleted or transitioned to the appropriate storage tier. Azure Blob Storage lifecycle management acts according to customized rules. Some data is accessed frequently, and some may be accessed only after longer durations; lifecycle management helps store data in the appropriate tiers. Consider data that is accessed frequently when it is created, then only once every two months, and later quarterly. Lifecycle management allows users to set up rules to match: for example, move the data to cool storage 30 days after the last modification, move it to archive after 90 days, and set up deletion as well.
Tape gateway provides virtual tape storage. It is cost-effective and cloud customers can durably archive their backup data in AWS storage such as S3 Glacier and Deep Archive.
For this, cloud customers should consider using a KMS customer master key (CMK) instead of AWS-managed keys for encrypting tape data. For existing AWS Storage Gateway virtual tapes, CMK encryption cannot be enabled; to encrypt stored tape data with customer-managed keys, the tapes must be recreated.
Example of AWS Snowball. NB: Data in AWS Snowball can be imported to or exported from AWS S3.
For example, consider a cloud user with 80 TB of data and a slow internet connection. Using AWS Snowball, the cloud user can send the data to AWS on a physical device, and Amazon transfers the data into AWS storage using its own high-speed connection. Amazon provides the appliance, and the user only needs to load the data onto it. The data in the Snowball can be imported to and exported from AWS S3 buckets, and from S3 buckets the data can be moved to other AWS storage services. Cloud users can rely on Amazon to transfer their data securely and cost-effectively. USE CASE: Snowball is highly suitable when users do not want to spend money on upgrading to a high-speed internet connection.
Google Cloud Archival Storage Example
For example, consider an organization that has a bulk amount of data that must be retained in compliance with all regulations, including data protection. The Archive storage class can be used along with Bucket Lock to ensure that objects are stored without any modification until the specified time. Bucket Lock plus the Archive storage class are especially used by industries where legal retention requirements must be met. If availability and durability are the major requirements, cloud customers can choose the Archive storage class with multi-region or dual-region locations. Though Archive storage carries higher data-access costs, it is beneficial because the data can be retrieved in milliseconds (essentially instantly). Industries that use the Archive storage class include: ⇒ the educational sector, for storing research data ⇒ government, for storing data for a long time ⇒ the financial industry, to store transactions and audit logs ⇒ the healthcare industry, to store medical records ⇒ insurance, for storing claims and billing ⇒ media and entertainment, for storing raw production footage ⇒ the security industry, for storing video-surveillance data ⇒ telecommunications, for retaining call records, billing, and customer-service records
Google Storage provides this option using lifecycle management rules. Example
For example, if the buckets are used to store application logs, the data needs to be readily available only for the first few months; later, it needs less availability, and in the end only a copy of the logs may need to be retained. By using lifecycle-management rules, the storage classes can be modified as per requirement and cost efficiency (see the sketch below). Users pay only for the data stored in the cloud; pricing is based on the reads and writes performed on objects plus network charges for data movement. Access to buckets and objects can be controlled using ACLs (access control lists). Using IAM, the cloud administrator can create users and groups and set permissions for them.
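A short google-cloud-storage sketch of lifecycle rules matching the log-bucket example; the bucket name and the 30/365-day thresholds are illustrative assumptions.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-logs-bucket")   # hypothetical bucket

# After 30 days move objects to Coldline; delete them after 365 days
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()
```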
Google Cloud Archival Storage
Google Cloud archival storage securely stores data that is infrequently accessed, for a long period of time. The minimum storage duration for the Archive class is one year. Access and management are performed with the same APIs used for the other storage classes. Archive storage can be kept in multiple regions, which helps with disaster recovery. Data in Archive storage can be protected from modification using mechanisms such as the Archive storage class combined with Bucket Lock or other Google data-locking mechanisms. => Bucket Lock plus a retention period offers additional security and high durability. => Archive storage class + Bucket Lock. >> Data in Archive storage is encrypted by default, both at rest and in transit.
Google Cloud has invested heavily in protecting customer data across all Google products. The data stored in Google Cloud Storage is secured using robust security controls.
In Google Cloud Storage, users can set permissions on objects and buckets using IAM or ACLs; either method can be used to grant permissions to users. Uniform bucket-level access can also be used to add additional security to the data stored in Google Cloud Storage: robust security controls for Cloud Storage = IAM or ACLs + uniform bucket-level access. By enabling uniform bucket-level access, ACLs are disabled and all permissions on the buckets and objects are granted only through IAM. Uniform bucket-level access can be enabled while creating a new bucket or on an existing bucket; migrate all ACL permissions to IAM before enabling it. Summary: when uniform bucket-level access is enabled, ACLs are automatically disabled but are saved for 90 days. Before disabling uniform bucket-level access, ensure any IAM conditions on the bucket are removed; once it is disabled, the saved ACLs are automatically re-enabled and enforced on the bucket. Uniform bucket-level access is highly suitable for enterprises that provide financial services. (A minimal sketch for enabling it follows.)
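A minimal google-cloud-storage sketch, assuming a hypothetical existing bucket, that enables uniform bucket-level access so permissions come from IAM only.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-bucket")    # hypothetical bucket

# Disable ACLs and rely on IAM alone for bucket and object permissions
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()
```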
_________________ technology is used to protect sensitive information from an attacker by encrypting the data and by including user permissions inside the file containing the information.
Information Rights Management (IRM). In IRM, sensitive data is protected by - encryption - setting user permissions (read, write, and execute) on the file. NB: the owner can restrict specific actions on the file. Implementing IRM is very important for organizations because it prevents the exposure or leakage of sensitive data. ⇒ IRM ensures only authorized users can view the data. ⇒ IRM can protect emails, web pages, database rows and columns, and other data. ⇒ IRM uses ACLs with files or documents; the ACL set on a file states who has permission, when, and where. ⇒ IRM-protected data can have an expiration date (it becomes unviewable after the set time) and is protected throughout the data lifecycle, whether at rest, in motion, or in use.
AWS Storage Gateway
AWS Storage Gateway is an AWS cloud storage service that provides secure integration between AWS storage (S3) and an organization's on-premises IT environment. The storage gateway ⇒ secures on-premises data by encrypting it before uploading it to Amazon S3, and it offers low latency. The AWS Storage Gateway software appliance (a VM image) is downloaded and installed in the customer's on-premises data center, then linked to an AWS account where a gateway volume is created (either VTL or cached volume). AWS Storage Gateway is used for ⇒ sharing corporate files ⇒ backing up on-premises application data to Amazon S3 (primary backup) ⇒ data mirroring ⇒ disaster recovery. The gateway uses internet bandwidth to upload data from on-premises to Amazon S3; NB: only modified data is uploaded. AWS Storage Gateway provides disaster recovery with three configurations: 1. Gateway-cached volumes: best for frequently accessed data; frequently accessed data is cached on local (on-premises) storage while the primary data is moved to Amazon S3. 2. Gateway-stored volumes: the primary data in the local data center is backed up to Amazon S3 asynchronously (synced at a later time). 3. Gateway virtual tape library: enables storage of virtual tapes in S3 using a virtual tape library (VTL), or in Amazon Glacier using a virtual tape shelf (VTS). NOTE: AWS Storage Gateway stores application data for extended periods by uploading it to Amazon S3, which provides robust scalability and elasticity automatically. S3 performs systematic data-integrity checks and automatically self-heals. EBS ⇒ one EC2 instance to many EBS volumes. EFS ⇒ one EFS file system to many EC2 instances.
Create Alert for "Delete Azure SQL Database" Events
It is highly recommended to create alerts for critical resources. If a "Delete Azure SQL Database" rule is created, then when someone tries to delete an Azure SQL database, the alert is triggered once it meets the condition Category='Administrative', Signal name='Delete Azure SQL Database (Microsoft.Sql/servers/databases)'. Alert logs are useful in the prevention of accidental or intentional deletion of Azure SQL databases.
Athena is an interactive query service provided by AWS. It allows the usage of standard SQL for data analysis in Amazon S3. By default, it is protected by SSL/TLS; however, query results at rest are not encrypted.
It is highly recommended to enable encryption for AWS Athena query results. Enabling encryption allows organizations to meet compliance requirements. AWS Athena supports three types of S3 encryption: SSE with an Amazon S3-managed key (SSE-S3), SSE with an AWS Key Management Service customer-managed key (SSE-KMS), and client-side encryption with an AWS KMS customer-managed key (CSE-KMS). A minimal sketch follows.
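A boto3 sketch that runs an Athena query with SSE-KMS encryption on the results; the table name, results bucket, and KMS key ARN are hypothetical placeholders.

```python
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT * FROM access_logs LIMIT 10",        # hypothetical table
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/",    # hypothetical bucket
        "EncryptionConfiguration": {
            "EncryptionOption": "SSE_KMS",
            "KmsKey": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
        },
    },
)
```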
Google Cloud Transfer Appliance
Large amounts of data can be shipped to Google Cloud Storage using a hardware device called the Google Transfer Appliance: a high-capacity device used to transfer and ship business data to a Google Cloud Storage upload facility, from where it is transferred into Google Cloud Storage. Users request a Transfer Appliance, load their data onto it (up to 25 days), and then ship it back to Google. Google cloud administrators upload the data to GCS and wipe the data from the appliance; customers can request a certificate confirming that the data has been wiped. It takes up to 25 days to load data onto the Transfer Appliance and up to another 25 days before the data is accessible in Google Cloud Storage, for a total of about 50 days to transfer (25) and upload to GCS (25). NOTE: Tamper-evident tags are attached to the Transfer Appliance to show that it is new and has not been used by anyone, and customers can verify the identity of the appliance online before uploading data onto it. With Transfer Appliance, customers can trust that their data is safe: the data is encrypted during upload and after it is uploaded to Google Cloud Storage. Cloud customers can also restrict access to the data on the appliance by applying an IP filter that allows only specific hosts on their network to access it.
Enable Trusted Microsoft Services for Storage Account Access
One of the basic security steps in network protection is handling firewall rules properly. Incoming requests to the storage account from Azure services may be blocked when a firewall is used. To make sure that connections from Azure services are not dropped (which would cause intended actions to fail), trusted Microsoft services should be allowed. Virtual-network configuration and firewall rules should be applied carefully, as network misconfiguration leads to malfunctions or risks. In the Azure storage account configuration settings, make sure that the exception "Allow trusted Microsoft services to access storage account" is enabled. If firewall rules are enabled for the storage account, incoming requests for data are blocked, including those from Azure services; enabling the exception grants Azure services access to the storage account resources.
Hierarchy of Google Cloud Storage
Organization ⇒ Project ⇒ Buckets ⇒ Objects. 1. Organization: the name of the company, e.g., Test.org. 2. Projects: the various applications an organization builds belong to various projects. 3. Buckets: the containers used for storing data in the form of objects. 4. Objects: files of any format, such as images, videos, audio files, etc. Features of Google Cloud Storage: 1. Accessibility: can be accessed from any location. 2. Cost-effective: pay as you use. 3. Durability and availability: storage can be accessed 24/7 and the data is highly durable. 4. Scalability and elasticity: scalable based on requirements. 5. Consistency: objects in Cloud Storage can be read immediately after upload. 6. Interoperability: Google Cloud Storage can be used with other cloud storage tools and libraries, e.g., those built for Amazon S3. 7. Resumable data transfer: uploads can resume after a communication interruption stops them. 8. Security: Google Cloud Storage uses multiple layers of security.
Azure Storage Types Explained -- Queue Storage
Queue Storage is used to store a large number of messages that are processed one at a time {FIFO} and accessed from anywhere through HTTP/HTTPS. It can contain millions of messages, up to the storage limit assigned to the account. Queue Storage reduces the possibility of data loss due to timeouts on a data store or long-running processes. The maximum size of a message is 64 KB, and it can remain in a queue for a maximum of 7 days. >>>>>>>>DATA BOX>>>>>>> A storage device designed to transfer terabytes of data, with a maximum capacity of 80 TB. It can be used to either import data from the servers to the Data Box or export data from it. It features AES 256-bit encryption for safer transit of data. ###############OTHER TYPES OF DATA BOX STORAGE ⇒ Data Box Disk: 8 TB SSDs sent to users in a pack of 5 (a total of 40 TB), using AES 128-bit encryption. ⇒ Data Box Heavy: a high-capacity appliance (about 1 PB) used to move data from on-premises to the cloud. ⇒ Data Box Gateway: a virtual appliance service used to transfer data to and from Azure. ⇒ Azure NetApp Files: a Microsoft Azure native, high-performance file storage service for business applications. ⇒ Azure HPC Cache: improves application performance by presenting users' hot data in a single directory structure so that client complexity can be reduced.
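A minimal sketch using the azure-storage-queue Python SDK to enqueue and dequeue a message; the connection string and queue name are placeholders:

```python
from azure.storage.queue import QueueClient

# Connect to a queue in the storage account (placeholder connection string and queue name).
queue = QueueClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net",
    queue_name="tasks",
)
queue.create_queue()

# Messages are processed roughly first-in, first-out; each message can be up to 64 KB.
queue.send_message("process-order-1001")

# Receive each message and delete it once it has been handled.
for msg in queue.receive_messages():
    print(msg.content)
    queue.delete_message(msg)
```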
Configure Azure Defender for Storage
To provide an additional layer of protection for storage accounts, Azure Defender for Storage is used. It detects unusual or malicious attempts to access or exploit storage accounts. This additional layer of security can be used without any expertise in security monitoring. Security alerts are sent to the emails of subscription administrators and contain details of the unusual activity along with remediation recommendations. The related storage activity can also be reviewed through Storage Analytics logging.
Allow Shared Access Signature (SAS) Tokens Over HTTPS Only
SAS grants limited access to Azure storage features. It can be shared with people who require access to the storage without compromising security and privacy, and it is the best way to provide temporary access to storage account features. A SAS token contains information about the storage services, resources, permissions, and validity of the token, and it authorizes the user to access the permitted storage services. To establish a secure connection and to transfer data securely from Azure storage, it is recommended to allow SAS tokens over HTTPS only. Any previously generated SAS tokens that accept both HTTP and HTTPS should be replaced with newly generated tokens that accept the HTTPS-only protocol.
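A minimal sketch using the azure-storage-blob Python SDK to generate an HTTPS-only SAS token; the account, container, blob names, and account key are placeholders:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Generate a read-only SAS token that is valid for one hour and accepted over HTTPS only.
sas_token = generate_blob_sas(
    account_name="mystorageaccount",          # placeholder account
    container_name="reports",                 # placeholder container
    blob_name="summary.pdf",                  # placeholder blob
    account_key="<storage-account-key>",      # placeholder key
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    protocol="https",                         # reject plain-HTTP requests
)
print(sas_token)
```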
To safeguard sensitive information stored on AWS Elastic Block Store, ensure that EBS volume snapshots are not publicly shared with other AWS account users. EBS snapshots contain mirrors of the application as well as its data; thus, sharing EBS volume snapshots publicly allows other AWS account users to copy the snapshot and create a volume from it. How does a security engineer go about this?
Share the EBS volume snapshot privately by entering the AWS account number of the intended user. NOTE The snapshot shared with a specific AWS account user will be an unencrypted snapshot. If needed, you can share your EBS snapshots with a particular AWS account user. It is recommended to restrict public access to EBS snapshots; instead, provide private access to the EBS volume snapshot by entering the AWS account number of the user with whom you want to share the snapshot.
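A minimal sketch using boto3; the snapshot ID and the target AWS account number are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Grant createVolumePermission only to a specific AWS account instead of making the snapshot public.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",      # placeholder snapshot ID
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333"],                 # placeholder AWS account number
)

# Verify that the snapshot is not shared with the "all" group (i.e., not public).
attrs = ec2.describe_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
)
print(attrs["CreateVolumePermissions"])
```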
Use Soft Delete for Containers
Soft delete enables users to restore accidentally deleted containers within the retention period. The retention period provided by Azure ranges from 1 day to 365 days; by default, it is 7 days. The Undelete Container option recovers a deleted container. Once the retention period is over, the container is deleted permanently. If soft delete for containers is later disabled, previously soft-deleted containers are not purged immediately; they are permanently deleted only after the retention period that was set at the time of enabling has elapsed.
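A minimal sketch using the azure-mgmt-storage Python SDK to enable container soft delete with a 30-day retention period; the subscription ID, resource group, and account name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobServiceProperties, DeleteRetentionPolicy

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder subscription
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Enable container soft delete with a 30-day retention period (allowed range: 1-365 days).
client.blob_services.set_service_properties(
    "my-resource-group",      # placeholder resource group
    "mystorageaccount",       # placeholder storage account
    BlobServiceProperties(
        container_delete_retention_policy=DeleteRetentionPolicy(enabled=True, days=30)
    ),
)
```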
Select Longer Soft Deleted Data Retention Period
Soft delete provides a facility to recover accidentally deleted data within a particular period, known as the retention period. Azure provides a retention period of 1 to 365 days; customers can choose one as per their requirements. Selecting a longer retention period is one of the best ways to comply with data-retention regulations, and it helps manage the data more effectively in case of failures or data loss.
Providing public access to S3 buckets can allow the attacker to modify, upload, view, or delete the S3 objects. TRUE OR FALSE
TRUE. If public access to the bucket is not blocked by default, or the block was accidentally removed, then block public access to the S3 bucket. WAYS 2 BLOCK PUBLIC ACCESS TO S3 BUCKET 1. Enable the "Block all Public Access" feature 2. Restrict public access to S3 buckets via bucket policies ⇒ When restricting access to an S3 bucket via bucket policy, the bucket policy is either modified to allow only a specific AWS IAM user/AWS account to access the bucket, or the entire bucket policy is deleted.
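A minimal sketch using boto3 to enable the "Block all Public Access" feature; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four public-access block settings for the bucket.
s3.put_public_access_block(
    Bucket="my-private-bucket",               # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```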
T / F Providing public access to S3 buckets through bucket policies can allow the attacker to modify, upload, view, or delete the S3 objects.
TRUE. If there is no restriction in the bucket policy, then anyone can download, upload, delete, modify, or list the objects within the bucket ⇒ Granting public access to S3 buckets can lead to exploitation of S3 objects ⇒ To protect against unauthorized access, restrict public access to S3 buckets through the bucket policy NB: it is better to restrict access to specific AWS accounts rather than allowing access to everyone on the internet.
TRUE OR FALSE Amazon S3 does not enable logging by default; the service user needs to enable logging for S3 buckets
TRUE. When the "server access logging" feature is ENABLED, requests made to the S3 bucket are logged, which helps to → audit the logs → take measures to protect the bucket from unauthorized access. S3 delivers logs for the source bucket to a target bucket, and both must reside in the same region; note that no retention period is configured for these logs by default, so the service user should configure one if log retention needs to be limited.
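A minimal sketch using boto3 to enable server access logging; the bucket names are placeholders, and the target bucket must already grant the S3 logging service permission to write:

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket to a target bucket in the same region.
s3.put_bucket_logging(
    Bucket="my-source-bucket",                          # placeholder source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",        # placeholder target bucket
            "TargetPrefix": "access-logs/",
        }
    },
)
```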
NOTE
The cloud consumer can use their own encryption keys by utilizing server-side encryption with customer-managed CMKs. For fine-grained control of the Amazon S3 data-at-rest encryption and decryption process, make sure that Amazon S3 buckets are encrypted with customer-managed AWS KMS CMKs. Safeguard sensitive information at rest in S3 buckets by implementing SSE. The cloud security professional can enable SSE by using an access policy or default encryption.
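A minimal sketch using boto3 to set default bucket encryption with a customer-managed KMS key; the bucket name and key ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Apply SSE-KMS with a customer-managed CMK as the bucket's default encryption.
s3.put_bucket_encryption(
    Bucket="my-sensitive-bucket",             # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```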
Data-at-rest encryption compliance requirements can be fulfilled by enabling AWS RDS encryption.
Encryption and decryption in RDS do not need any additional action from the user or application. It helps in complying with different standards such as PCI DSS, HIPAA, GDPR, etc. With RDS encryption enabled, production databases with sensitive and crucial data can be protected from unauthorized access. RDS encryption encrypts the data stored on the instance's underlying storage as well as its read replicas, backups, and snapshots. It uses the AES-256 algorithm with keys managed by AWS KMS. AWS RDS encryption is not available for all types of database instances.
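A minimal sketch using boto3 to create an encrypted RDS instance; the identifier, credentials, and KMS key are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Encryption must be chosen at creation time; it cannot be enabled on an existing unencrypted instance.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",                  # placeholder identifier
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="<strong-password>",            # placeholder credential
    StorageEncrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder CMK
)
```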
AWS Storage Gateway File Share is a service that provides on-premises applications access to virtually unlimited cloud storage. It makes use of SSL/TLS to encrypt the data while it is transferred from the customer gateway to AWS storage. Cloud customers can also configure their gateway to use SSE with an AWS KMS CMK for encrypting the data.
The Storage Gateway is integrated with CloudTrail. All API calls to the Storage Gateway are captured by CloudTrail. By collecting the information from CloudTrail, cloud customers can learn about the requests made to the Storage Gateway, the IP address of the location from which each request was made, the time, etc.
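A minimal sketch using boto3 to pull recent Storage Gateway API calls out of CloudTrail, filtering on the storagegateway service event source:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent management events whose source is the Storage Gateway service.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "storagegateway.amazonaws.com"}
    ],
    MaxResults=10,
)

for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```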
Check for Key Vault Full Administrator Permissions
To enforce the principle of least privilege and to implement security best practice, the cloud security professional should ensure that no Microsoft Azure user, group, or application has full administrator privileges for accessing and managing Azure Key Vaults. Since Azure Key Vault stores sensitive and business-critical data, it is the responsibility of cloud security professionals to secure the key vault and the stored data. It is recommended to grant limited access to the key vault based on requirements. As an Azure Key Vault access policy applies at the vault level, if an Azure user is given permission to create, delete, or update keys, then the user can perform those operations across the entire key vault. Limited permission to perform particular operations on the key vault should be given to the user to avoid severe security issues, data loss, or data breach.
Disable Anonymous Access to Blob Containers
To prevent anonymous access to Blob containers, the cloud security administrator should change the public access level to private. With a public access level configured, an anonymous user can construct the container URL and access the containers without using any credentials such as SAS. Hence, when Blob container access is changed from public to private, anonymous users will no longer be allowed to access the Blob containers. A container can be changed to public or private only if the storage-account-level access is public. If the storage account level is set to private, then all the containers become private by default and cannot be changed to public access. Set containers to private access unless there is a requirement to change the setting. It is recommended to allow access using a shared access signature token, which helps provide controlled access.
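A minimal sketch using the azure-storage-blob Python SDK to switch a container to private access; the connection string and container name are placeholders:

```python
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net",
    container_name="reports",                 # placeholder container
)

# public_access=None removes anonymous (public) access; clients must now authenticate, e.g., with a SAS token.
container.set_container_access_policy(signed_identifiers={}, public_access=None)
```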
Enable In-transit Encryption for MySQL Servers
To safeguard data in motion against man-in-the-middle (MitM) attacks and to fulfill compliance requirements, the cloud security professional should enforce in-transit encryption using SSL/TLS between the client application and the MySQL database server. In-transit encryption on Microsoft Azure MySQL database servers safeguards sensitive information from unauthorized users.
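A minimal sketch of a client connection that verifies the server certificate, using the mysql-connector-python package; the host, credentials, database, and CA certificate path are placeholders (Azure publishes the root CA certificate to use):

```python
import mysql.connector

# Connect to the Azure Database for MySQL server over TLS and verify the server certificate.
conn = mysql.connector.connect(
    host="myserver.mysql.database.azure.com",   # placeholder server name
    user="dbadmin@myserver",                    # placeholder user
    password="<strong-password>",               # placeholder credential
    database="appdb",                           # placeholder database
    ssl_ca="/path/to/azure-mysql-root-ca.pem",  # placeholder path to the published CA certificate
    ssl_verify_cert=True,
)
print(conn.is_connected())
conn.close()
```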
Enable Transparent Data Encryption for Azure SQL Database
Transparent Data Encryption (TDE) is available for services like SQL Server and Azure SQL Database. It prevents data loss to malicious attackers by encrypting data at rest, providing real-time encryption and decryption of the databases. The encryption uses a database encryption key (DEK), which is a symmetric key stored in the database boot record so that it can be used during recovery. To protect sensitive information against unauthorized users, the cloud security administrator should encrypt the Azure SQL Database at rest with TDE
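A minimal sketch, assuming the standard T-SQL route for turning on TDE for a single database via pyodbc; the server, database, and credentials are placeholders, and the statement is typically issued while connected to the logical server's master database:

```python
import pyodbc

# ALTER DATABASE cannot run inside a transaction, so connect with autocommit enabled.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"        # placeholder server
    "Database=master;Uid=sqladmin;Pwd=<strong-password>;"   # placeholder credentials
    "Encrypt=yes;",
    autocommit=True,
)

# Turn on transparent data encryption for the target database (placeholder name).
conn.cursor().execute("ALTER DATABASE [appdb] SET ENCRYPTION ON;")
conn.close()
```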
Misconception about Data Retention TRUE OR FALSE Most companies have the misconception that storing data for longer than required is a secure approach because the data can be accessed when needed in the future.
TRUE - and the belief is wrong. Storing data for a long time increases the chance of data breaches. Data that is no longer needed by an organization should be deleted.
To enhance the security of web application content, the cloud security professional should encrypt the communication between the Amazon CloudFront CDN distribution and end users using HTTPS. For encrypting data in motion, the administrator should configure the viewer protocol policy either to redirect HTTP to HTTPS or to HTTPS only, based on the requirement.
Utilizing HTTPS for the CloudFront CDN distribution assures that the encrypted traffic between the CloudFront distribution and the end users cannot be decrypted by an adversary, even if the adversary intercepts packets sent on the CDN distribution network.
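A minimal sketch using boto3 to switch an existing distribution's default cache behavior to redirect HTTP to HTTPS; the distribution ID is a placeholder:

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1234567890ABC"   # placeholder distribution ID

# Fetch the current configuration together with its ETag (required for the update call).
resp = cloudfront.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# Force viewers onto HTTPS ("https-only" is the stricter alternative).
config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] = "redirect-to-https"

cloudfront.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])
```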
Secure Google Cloud Data with VPC Service Controls
VPC Service Controls is known as a FIREWALL for GCP resources, because it allows users to specify a security perimeter around GCP resources and projects at the organization level. When enabled, VPC Service Controls allows access to the resources only from authorized IP addresses and verified client devices. Using VPC Service Controls, enterprises can set access control to cloud resources based on user identity and IP address. - VPC Service Controls logs access denials, which helps identify malicious activity against cloud resources. - Flow logs contain information about IP traffic from Compute Engine instances. - VPC Service Controls can be enabled to monitor communication between virtual machines and Google Cloud resources. With the storage buckets residing behind VPC Service Controls, an enterprise can keep its Google Cloud data private. ################################# VPC SERVICE CONTROLS protects the organization from risks such as: * Access to the resources from unauthorized networks * Data exfiltration * Exposure of private data as public due to wrong IAM policies
A private connection can be established from on-premises to Google Cloud by using _______________
VPC Service Controls. An organization can secure its confidential data and maintain it privately using VPC Service Controls, which allows security teams to define perimeter controls and achieve security for Google Cloud resources and projects.
Enforce Domain-Restricted Sharing Policy Domain-restricted sharing is based on a list of allowed domain (customer) IDs. Using this domain-restricted sharing policy, companies can control which domains can be granted access to Google Cloud resources.
When "Uniform Bucket-level Access" is enabled, a feature called "domain-restricted sharing" is enabled to prevent accidental sharing of company's data to third-party or public sharing. domain-restricted sharing + uniform bucket-level access = Strong Access Control Policies. >>>>>>>>>> When the feature "domain-restricted sharing" is enable on Google Workspace, it applies to all users in the workspace. This feature is enabled under Organization policies.
Key NOTE while creating Bucket in GCP
When creating a bucket, assign ⇒ a unique name (cannot be modified after creation, but the name of a deleted bucket can be reused) ⇒ a location to store the data (cannot be modified after creation) ⇒ a default storage class (can be modified after creation)
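A minimal sketch with the google-cloud-storage Python client showing which of these settings can change after creation; the bucket name and location are placeholders:

```python
from google.cloud import storage

client = storage.Client()

# Name and location are fixed at creation time.
bucket = storage.Bucket(client, name="my-unique-bucket-name-67890")  # placeholder name
bucket.storage_class = "STANDARD"
client.create_bucket(bucket, location="us-central1")                 # placeholder location

# The default storage class can be changed after creation.
bucket.storage_class = "NEARLINE"
bucket.patch()
```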
The _____________ procedure involves deleting data in a safe and secure manner when the data is no longer required.
Data deletion procedure/policy. It is designed so that no files or pointers remain after deletion, which ensures that the data cannot be restored. Otherwise, unauthorized access to or deletion of data can lead to a data breach or compliance failures. NOTE Many organizations use a data deletion policy to ensure that certain records are properly disposed of by following all rules and regulations. A data deletion procedure can be implemented after the encryption of data; e.g., after the data is encrypted, the deletion procedure takes effect on the plaintext data.
To safeguard the communication between S3 buckets and clients against eavesdropping and man-in-the-middle attacks, what can the cloud security engineer do?
Restrict non-SSL access to all the objects in the Amazon S3 bucket.
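A minimal sketch using boto3 to attach the standard aws:SecureTransport deny statement; the bucket name is a placeholder:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-sensitive-bucket"   # placeholder bucket name

# Deny every S3 action on the bucket and its objects when the request is not made over HTTPS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonSSLRequests",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```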
Challenges with Information Rights Management (IRM)
⇒ All the users and data to be protected with IRM should share the same encryption keys. ⇒ Each resource has an access policy, and each user should have an account and keys to access the resources (eased by automatic policy provisioning). ⇒ Implementing the right RBAC is challenging. ⇒ Relying on a locally downloaded IRM client can limit other implementation features. ⇒ Readers who open IRM files on mobile devices may face compatibility issues. ⇒ Readers of IRM-protected files need IRM-aware reader software; Microsoft has an IRM product, but users on other platforms may face trouble. ⇒ A centralized identity key vault is needed; the admin should choose the right method based on the security requirements.
Amazon Glacier
⇒ Amazon S3 Glacier offers durable (long-lasting) storage and low storage cost for data backup and archival ⇒ Amazon S3 Glacier is best used for storing data that is not frequently accessed ⇒ In S3 Glacier, compressed files are uploaded as a single file for archival ⇒ The AWS IAM service can be used to control access to archives organized in vaults ⇒ AWS Glacier can be used with various AWS services, but it is mostly used with Amazon S3 ⇒ Data is moved between Amazon S3 (frequently accessed) ⇒ Amazon Glacier (infrequently accessed archive)
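A minimal sketch using boto3 to add a lifecycle rule that moves objects under an archive/ prefix from S3 into the Glacier storage class after 30 days; the bucket name and prefix are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the archive/ prefix to the GLACIER storage class after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",                # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```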
Amazon Elastic Block Store (Amazon EBS) Volume
⇒ EBS volumes are designed as storage for EC2 instances (VMs) ⇒ An EBS volume is an independent, network-attached block storage volume that can be attached to and detached from an EC2 instance → EBS is used to boot an EC2 instance → many EBS volumes can be attached to one EC2 instance ⇒ EBS is designed for frequently changing data with long-term utilization ⇒ EBS can be used as primary storage for databases and file systems ⇒ Amazon EBS offers two volume types, → standard volumes → provisioned IOPS volumes ⇒ The volume types differ in price and performance → standard volumes (low-cost storage) {moderate input/output requirements} → provisioned IOPS (higher-cost storage for high-performance, I/O-intensive workloads) ⇒ Backups of EBS volumes are created by taking SNAPSHOTS of the data volume; these snapshots, stored redundantly in a region, enable fast and reliable disaster recovery. HOW IT WORKS snapshots are incremental ⇒ the first snapshot is a complete copy of the EBS volume ⇒ subsequent snapshots contain only the recent changes to the volume (incremental-level changes) ⇒ EBS volume durability depends on the volume size and the most recent snapshot taken; the AWS user is responsible for the durability and availability of the EBS volume by making sure snapshots are created frequently. BACKUP The AWS user can also create backups by sharing snapshots with other AWS accounts. EBS volume snapshots help in disaster recovery, backup, and sharing of volumes (data).
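A minimal sketch using boto3 to snapshot an EBS volume and copy the snapshot to another region for disaster recovery; the volume ID and regions are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")          # placeholder source region

# Take an incremental snapshot of the EBS volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                        # placeholder volume ID
    Description="Nightly backup of the data volume",
)
snapshot_id = snapshot["SnapshotId"]

# Copy the snapshot to a second region; the copy call is made from the destination region's client.
ec2_dr = boto3.client("ec2", region_name="us-west-2")        # placeholder destination region
ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot_id,
    Description="DR copy of the nightly backup",
)
```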