DP-900
Azure Files key benefits:
- Shared access - Fully managed - Scripting and tooling - Resiliency - Familiar programmability
Azure Synapse Analytics
-Fully managed DW (PaaS), single interface
-Advanced analytics and ML engine
-Integral security at every level of scale, at no extra cost
-Based on Spark and notebooks (like Databricks)
-Includes the following: 1. Pipelines: using the ADF integration engine 2. SQL: optimized for DW workloads 3. Synapse Link for Cosmos DB (HTAP) 4. Synapse Studio (editor)
-Supports: 1. Apache Spark, SQL, PolyBase, C#, Python, Scala 2. CSV, JSON, XML, Parquet, ORC, Avro
-Azure Synapse Data Explorer: high-perf DA solution, optimized for real-time querying of log/telemetry data using KQL
Azure Cosmos DB
-Global scale, non-relational (NoSQL)
-Supports multiple APIs; encryption at rest and in motion
-Store JSON documents, key-value pairs, column families, and graphs
-Uses indexes and partitioning for fast R/W: <10 ms reads and writes at the 99th percentile, 99.99% availability SLA
-Can scale to massive volumes; can enable multi-region writes
-Automatically allocates space in a container for partitions; each partition can grow up to 10 GB in size
-Indexes are created and maintained automatically
-MSFT products that use Cosmos DB: Skype, Xbox, Microsoft 365, Azure
-Used in retail and mktg for storing catalog data, and for event sourcing in order-processing pipelines
-Used in gaming: fast, and able to handle massive spikes in request rates during new game launches and feature updates
-Used in web and mobile apps: well suited for modeling social interactions, integrating with third-party services, and building rich personalized experiences
-Cosmos DB SDKs can be used to build rich iOS and Android applications using the popular Xamarin framework
Benefits of Azure DB for PostgreSQL
-HA: built-in failure detection and failover
-Predictable performance, using inclusive pay-as-you-go pricing
-Elastic scaling within seconds
-Entp-grade security and compliance to protect data at rest and in motion
-Monitoring and automation to simplify management, including monitoring for large-scale deployments
-Use pgAdmin to manage and monitor the DB; some server-focused functionality isn't available from it (e.g. backup/restore)
-Records query information in the "azure_sys" DB (use the query_store.qs_view view to see/monitor user queries) - valuable for fine-tuning queries
Blob Storage Tiers
-Hot: accessed frequently (e.g. a website's image files). Can be set at a/c level
-Cool: accessed infrequently and stored for at least 30 days (e.g. customer invoices). Can be set at a/c level
-Cold: accessed infrequently, stored for at least 90 days. Can be set at a/c level
-Archive: rarely accessed and stored for at least 180 days, with flexible latency requirements (e.g. long-term backups). Offline storage; least storage cost, highest cost to rehydrate and access
-All tiers can be set at blob level - during or after upload
Azure HDInsight
-Managed analytics service providing Azure-hosted clusters for Apache big data technologies
-Apache Spark, Apache Hadoop, Apache HBase, Apache Kafka, MapReduce, Hive, Storm, R, etc.
-Stores data on data lake storage, integrates w/ Azure
Azure Tables
-NoSQL database; very fast, robust (several TBs)
-Structured/semi-structured data with a schema-less design; makes use of tables containing key/value data items
-Each item is represented by a row that contains columns for the data fields that need to be stored
-Rows must have a unique key (composed of a partition key and a row key)
-A timestamp column records the time of modification
-No concept of foreign keys, relationships, stored procedures, views, or other relational objects
-Denormalized data, with each row holding the entire data for a logical entity
-Each table is split into partitions (a mechanism for grouping related rows, based on a common property or partition key) - helps to organize data and improve scalability and performance
-Including the partition key in the search criteria improves performance
-Ideal for fast data ingestion (IoT/telematics)
-Multiple read replicas, but not multiple write regions (use Cosmos DB for this)
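The entity model above can be sketched in a few lines of plain Python - this is a minimal in-memory stand-in (not the Azure Tables SDK) showing the mandatory PartitionKey/RowKey pair, the service-maintained timestamp, and a point query:

```python
from datetime import datetime, timezone

# In-memory sketch of the Azure Table model: each entity is a row of
# key/value fields plus the mandatory PartitionKey and RowKey; the real
# service stamps a Timestamp on every modification.
table = {}

def upsert_entity(entity):
    """Insert or replace an entity; (PartitionKey, RowKey) must be unique."""
    key = (entity["PartitionKey"], entity["RowKey"])
    entity["Timestamp"] = datetime.now(timezone.utc).isoformat()
    table[key] = entity
    return key

def point_query(partition_key, row_key):
    """Fastest lookup: both keys given, so exactly one row in one partition."""
    return table.get((partition_key, row_key))

# Denormalized rows: the whole logical entity lives in one row.
upsert_entity({"PartitionKey": "device-eu", "RowKey": "sensor-001", "temp": 21.5})
upsert_entity({"PartitionKey": "device-eu", "RowKey": "sensor-002", "temp": 19.0})

print(point_query("device-eu", "sensor-001")["temp"])  # → 21.5
```

The entity names and field values are illustrative only; the point is that the partition key plus row key uniquely identifies a row, which is why including them in a query is fast.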
Azure DB for PostgreSQL
-PaaS implementation of PostgreSQL; fully managed
-Same availability, perf, scaling, security and admin benefits as the MySQL service
-Some on-prem features unavailable: e.g. extensions that users can add to the DB for specialized tasks, like writing SPs in various PLs (only pgsql is supported) or interacting directly w/ the OS
-A core set of frequently used extensions is available, and the list is reviewed continuously
-Flexible server deployment: provides a high level of control and server config; provides cost optimization controls
Azure SQL Managed Instance
-PaaS, hosted instance w/ automated admin (backup, patching, monitoring etc.)
-Flexible config, fully controllable SQL instance (security, resource allocation) - more admin responsibility for owner
-Includes automated s/w update management, backups, and other maintenance tasks
-Near-100% compatibility: most on-prem instances can be migrated w/ minimal code change, via Azure DB Migration Service
-Instance pools can be used to share resources efficiently across smaller instances
-Uses Azure Storage for backups, Azure Event Hubs for telemetry, MS Entra ID for auth, Azure Key Vault for Transparent Data Encryption (TDE)
-All comm encrypted w/ certificates, using certificate revocation lists
-Features like Linked Servers, Service Broker, and DB Mail aren't available in Azure SQL DB, but are available here
-Supports DB logins and Entra ID logins
Azure SQL editions
-Single DB: greenfield projects; not fully compatible w/ SQL Server
-Elastic Pool: allows DBs to share a resource pool; ideal for SaaS applns
-Managed Instance: highest compatibility, but still PaaS; supports linked servers, DB Mail, cross-DB queries
Blob Storage
-Unstructured: like a file system
-Used for images, documents, video, audio; perform big data analytics
-Common uses: 1. streaming and random-access scenarios 2. access application data from anywhere 3. build an enterprise data lake on Azure and perform big data analytics 4. storing data for backup/restore, DR, and archiving
-Supports massive amounts of data (along w/ metadata)
-ADLS Gen2 runs on this
-Data is encrypted (supports BYOK)
Azure File Sync
-A tool that lets you centralize your file shares in Azure Files while keeping the flexibility, performance, and compatibility of a Windows file server
-Synchronizes locally cached copies of shared files with the data in Azure File Storage
-Similar to turning your Windows file server into a miniature content delivery network
-Once installed on a local m/c, it automatically stays bi-directionally synced with your files in Azure
-Use any protocol available: SMB, NFS, and FTPS
-Have as many caches as you need across the world
-Replace a failed local server by installing Azure File Sync on a new server in the same datacenter
-Configure cloud tiering so the most frequently accessed files are replicated locally, while infrequently accessed files are kept in the cloud until requested
Azure Data factory
-Define and schedule data pipelines: ingestion / transformation (mapping data flows) / orchestration
-Can integrate w/ other Azure services: can ingest from cloud data stores, process using cloud compute, and persist to a data store
-Integration runtime: the ADF compute engine
-Has connectors for: 1. Azure, AWS, GCP 2. dozens of DBs/DWs 3. SaaS applns 4. several file formats
Azure SQL Database
-Fully managed PaaS; highly scalable service
-Includes the core DB-level capabilities of on-prem SQL Server
-Good for when you need to create a new app in the cloud
-Available as a single DB or an elastic pool
-Biz-critical tier w/ extra features: lower latency, HA, read replicas for reporting
-DMA (Data Migration Assistant) helps with migration strategy
-Limitations vs on-prem: e.g. you can't shut down the server or install arbitrary s/w
-Advanced threat protection (ATP)
Azure Files
-Fully managed file shares
-Accessible via the industry-standard Server Message Block (SMB 3.0) protocol; NFS (Linux) in preview
-A way to create cloud-based network shares
-Share up to 100 TB of data in a single storage account; maximum size of a single file is 1 TB; you can set quotas to limit the size of each share
-Currently up to 2000 concurrent connections per shared file are supported
-Upload using the Azure portal or AzCopy
-2 perf tiers: Standard (HDD), Premium (SSD)
-Main use: migrating file shares from on-prem Windows servers
Steps - getting started w/ Azure
1. Create account
2. Create subscription - think of one "Dev" subscription that encumbers all of a department's databases/servers (e.g. a Dev instance for all CSA Oracle databases/servers)
3. Create resource groups under the subscription, then create resources in them - an RG is like a folder (e.g. one folder for 5 dev cloud-Oracle servers for the CSA department, separate folders for the UAT and Prod servers)
Azure SQL DB benefits
1. Always running the latest and most stable version of SQL Server
2. Scale up and down w/o manual upgrade
3. HA guarantee: 99.995%
4. Point-in-time restore, replicated across regions - resilient, with disaster recovery
5. Advanced security: vulnerability assessments, threat protection, continuous monitoring for suspicious activities, injection attacks, and anomalous DB access patterns; sends alerts including recommended actions
6. Auditing tracks DB events and writes an audit log to a storage account - helps maintain regulatory compliance, gain insights into discrepancies/anomalies, and spot suspected security violations
7. Secures data - encryption at rest (stored data) and in motion (during transfer)
Azure Table key
1. Partition key that identifies the partition containing the row
2. Row key that is unique to each row in the same partition
Items in each partition are sorted on row key - this facilitates point queries (identify a single row) and range queries (a contiguous block of rows).
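Why the sort order matters can be shown with a toy partition - purely illustrative (the real service does this server-side): because rows within a partition are kept sorted on row key, a range query over a contiguous block of row keys is a simple ordered scan.

```python
# One partition, sketched as RowKey -> entity fields. Row key values
# below are illustrative (date strings sort lexicographically).
partition = {
    "2024-01-01": {"reading": 10},
    "2024-01-02": {"reading": 12},
    "2024-01-03": {"reading": 11},
    "2024-01-05": {"reading": 14},
}

def range_query(start_row_key, end_row_key):
    """Return row keys in [start, end], in sorted (storage) order."""
    return [rk for rk in sorted(partition) if start_row_key <= rk <= end_row_key]

print(range_query("2024-01-02", "2024-01-04"))  # → ['2024-01-02', '2024-01-03']
```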
storage account naming
3-24 lower case characters or digits, unique within Azure
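The naming rule above translates directly to a regular expression - a local sketch only (uniqueness within Azure can only be checked by the service itself):

```python
import re

# Storage account name rule: 3-24 characters, lowercase letters and
# digits only. Uniqueness across Azure is NOT reproducible locally.
NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

print(is_valid_storage_account_name("mystorage01"))  # → True
print(is_valid_storage_account_name("My-Storage"))   # → False (uppercase, hyphen)
print(is_valid_storage_account_name("ab"))           # → False (too short)
```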
Azure Queues
A messaging store for reliable messaging between application components. Individual messages can be up to 64 KB. Used to create a backlog of work to process asynchronously. Can be combined with compute such as Azure Functions to take an action when a message is received.
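The backlog pattern can be sketched with a local FIFO queue - a stand-in for illustration, not the azure-storage-queue SDK. The 64 KB cap is the documented per-message limit; the message bodies are made up:

```python
from collections import deque

MAX_MESSAGE_BYTES = 64 * 1024  # Azure Queue Storage per-message limit

class WorkQueue:
    """Local FIFO sketch of a queue store: producers enqueue a backlog
    of work; a consumer (e.g. an Azure Function) processes it later."""

    def __init__(self):
        self._messages = deque()

    def enqueue(self, body: str):
        if len(body.encode("utf-8")) > MAX_MESSAGE_BYTES:
            raise ValueError("message exceeds the 64 KB limit")
        self._messages.append(body)

    def dequeue(self):
        return self._messages.popleft() if self._messages else None

q = WorkQueue()
q.enqueue("resize-image:photo1.png")
q.enqueue("resize-image:photo2.png")
print(q.dequeue())  # → resize-image:photo1.png (FIFO backlog)
```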
Data Migration Assistant
A stand-alone tool to assess SQL Servers. It helps pinpoint potential problems blocking migration. It identifies unsupported features, new features that can benefit you after migration, and the right path for database migration.
Apache Spark
An open-source, distributed data processing system for large, heterogeneous data sets, providing capabilities from analytics to the maintenance of broadly distributed data storage systems. Supports multiple PLs and APIs, e.g. Java, Python, Scala and SQL.
Hive
Apache DW system, supports fast data summarization and querying over large datasets.
append blobs
Append blobs are made up of blocks like block blobs, but are optimized for append operations - updates/deletes are not allowed. Ideal for scenarios such as logging data from virtual machines. 1 block can be up to 4 MiB; the max append blob size is about 195 GiB.
ACID
Atomicity, Consistency, Isolation, Durability
Which Azure Cosmos DB API should you use to work with data in which entities and their relationships to one another are represented in a graph using vertices and edges?
Azure Cosmos DB for Apache Gremlin The API for Gremlin is used to manage a network of nodes (vertices) and the relationships between them (edges).
___ are an efficient way to store data files in a format that is optimized for cloud-based storage, and applications can read and write them using an Azure API
BLOBs, applications can read and write them by using the Azure blob storage API.
Azure Databricks
Collaborative Apache Spark-based analytics service that can be integrated with other big data services in Azure. An Azure-integrated version of Databricks: combines the Apache Spark platform w/ SQL DB semantics and an integrated management interface for large-scale data analytics.
Azure SQL
Collective name for: Azure SQL database SQL managed instance SQL VM
Parquet
Columnar format, created by Cloudera and Twitter. Contains row groups (RG): within a row group, the data for each column is stored together. Each file includes metadata that applns can use to quickly locate the correct chunk for a given set of rows. Specializes in storing and processing nested data types efficiently. Very efficient compression and encoding schemes.
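The row-group idea can be illustrated with a toy in plain Python - this is NOT a real Parquet reader/writer (use a library like pyarrow for that), just a sketch of how per-chunk metadata lets a reader skip row groups:

```python
# Toy data: 4 rows, 2 columns. Values are illustrative.
rows = [{"id": i, "city": c} for i, c in enumerate(["Oslo", "Pune", "Lima", "Kiev"])]

def to_row_groups(rows, group_size):
    """Split rows into row groups; within each group, store column-wise,
    and record min/max metadata for the 'id' column chunk."""
    groups = []
    for start in range(0, len(rows), group_size):
        chunk = rows[start:start + group_size]
        columns = {k: [r[k] for r in chunk] for k in chunk[0]}
        meta = {"id_min": min(columns["id"]), "id_max": max(columns["id"])}
        groups.append({"columns": columns, "meta": meta})
    return groups

groups = to_row_groups(rows, group_size=2)

def find_group(groups, id_value):
    """Use metadata to locate the right chunk without scanning all data."""
    for g in groups:
        if g["meta"]["id_min"] <= id_value <= g["meta"]["id_max"]:
            return g
    return None

print(find_group(groups, 3)["columns"]["city"])  # → ['Lima', 'Kiev']
```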
Storage account endpoints
Combination of the account name and the Azure Storage service endpoint forms the endpoints for your storage account
Azure DB for Maria DB
-Compatible w/ Oracle DB
-Runs on the Community edition; fully managed and controlled - no admin required once provisioned
-Built-in HA w/ no additional cost
-Predictable perf, pay-as-you-go; scaling w/in seconds
-Secured protection of data in motion and at rest
-Auto backups, point-in-time restore up to 35 days
-Entp-grade security and compliance
Azure Physical Infrastructure
Data Centers->Regions->Region pairs->Sovereign Regions
Hot Tier Storage
Default Blob storage access tier. Accessed frequently, stored on high-perf media. Read latency of a few milliseconds.
Azure Migrate: Discovery and assessment
Discover and assess on-premises servers running on VMware, Hyper-V, and physical servers in preparation for migration to Azure.
Apache Hadoop
Distributed system, uses MapReduce jobs to process large data volumes efficiently, across multiple cluster nodes.
Azure Storage features
Durable(Redundancy), HA(Replication), Secure(encryption), Scalable, Managed (h/w, updates etc), accessible (HTTP, HTTPS, multi-languages, APIs, PowerShell, CLI, Azure Portal, Azure Storage explorer)
How can you enable globally distributed users to work with their own local replica of a Cosmos DB database?
Enable multi-region writes and add the regions where you have users, so each set of users can work with the data in a nearby replica.
Structured data
Fixed schema, all data has same fields/attributes. Schema is tabular
Azure Free Student Account
Free access to certain Azure services for 12 months. A credit to use in the first 12 months. Free access to certain software developer tools.
GRS
Geo-Redundant Storage is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year by replicating your data to a secondary region that is hundreds of miles away from the primary region. If your storage account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region is not recoverable. GRS is similar to running LRS in two regions
GZRS
Geo-zone-redundant-storage GZRS is similar to running ZRS in the primary region and LRS in the secondary region. combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. copied across three Azure availability zones in the primary region (similar to ZRS) and is also replicated to a secondary geographic region, using LRS, for protection from regional disasters. Recommended for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
Cool tier storage
Incurs less storage cost; for data accessed infrequently. Can create new blobs in the hot tier and migrate them to cool as they become infrequently used, and vice versa when needed. Read latency of a few milliseconds.
JSON
Javascript object notation
KQL
Kusto Query language
Archive tier storage
Least expensive, maximum latency. Used for historic data that is rarely accessed - effectively offline storage. Read latency can be hours. First change the tier to cool or hot; the blob is then rehydrated and can be read once rehydration completes.
Create a ___ policy to automatically move hot tier blobs to cool and then archive as it ages (based on # of days since last modification). Policy can also auto delete outdated blobs
Lifecycle management policies
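A lifecycle policy is expressed as a JSON document. The sketch below builds one in Python following the documented rule schema (name/enabled/type/definition, with filters and baseBlob actions); the rule name and the day thresholds are illustrative, not prescriptive:

```python
import json

# Illustrative lifecycle management policy: move block blobs to cool
# after 30 days without modification, to archive after 90, delete after
# 365. Thresholds and the rule name are made-up examples.
policy = {
    "rules": [
        {
            "name": "age-out-blobs",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```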
LRS
Locally redundant storage: data replicated 3 times in a single data center in the primary region. 11 9s of data durability in a year. Lowest cost, least durable. Protects against server, rack, or drive failures.
Apache Kafka
Message broker for data stream processing
Azure Migrate: Server Migration
Migrate VMware VMs, Hyper-V VMs, physical servers, other virtualized servers, and public cloud VMs to Azure.
Azure Database Migration Service
Migrate on-premises databases to Azure VMs running SQL Server, Azure SQL Database, or SQL Managed Instances.
Maria DB
Newer DBMS created by the original MySQL developers - has been rewritten and optimized for higher perf. Compatible w/ Oracle. Built-in support for temporal data - a table can hold several versions of data, giving a point-in-time view.
Non relational
NoSQL databases (e.g. DynamoDB, MongoDB)
Key-value databases: value is an opaque item looked up by a unique key
Document databases: value is a JSON document
Column family databases: tabular data - columns can be divided into groups called column families (logically related columns)
Graph databases: entities are nodes with links to define relationships b/w them
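The four models above can be sketched as tiny Python structures - all names and values are illustrative, and the graph traversal at the end previews the vertices-and-edges idea that the Gremlin API exposes:

```python
# Key-value: opaque value looked up by a unique key.
kv_store = {"session:42": b"serialized-state"}

# Document: value is a JSON-like document; fields can vary per item.
doc_store = {"cust-1": {"name": "Avni", "orders": [101, 102]}}

# Column family: logically related columns grouped together.
col_family = {"row-1": {"identity": {"name": "Avni"}, "contact": {"email": "a@x.io"}}}

# Graph: entities are vertices, relationships are edges between them.
vertices = {"p1": "Avni", "p2": "Ben"}
edges = [("p1", "worksWith", "p2")]

def neighbors(vertex, relation):
    """Follow edges of the given relation out of a vertex."""
    return [dst for src, rel, dst in edges if src == vertex and rel == relation]

print(neighbors("p1", "worksWith"))  # → ['p2']
```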
Availability Zone
One or more discrete data centers, each with redundant power, networking and connectivity -housed in separate facilities -an isolation boundary -are connected through high-speed, private fiber-optic networks. -a minimum of three are present in all availability zone-enabled regions. -not all Azure Regions currently support availability zones. -are primarily for VMs, managed disks, load balancers, and SQL databases.
Azure DB for MySQL
-Open-source DBMS, used in Linux, Apache, MySQL, PHP (LAMP) stack apps
-Includes HA at no additional cost; scalability as required
-Automatic backups, point-in-time restore; pay-as-you-go
-Connection security to enforce firewall rules, optional SSL connections
-Configurable server settings: lock mode, max count of connections, timeouts
-Global DB, automatic scale-up; Azure manages certain operations like security and admin
-Benefits: HA, predictable perf, easy scaling, secure data (at rest and in motion), auto backups, point-in-time restore (last 35 days), entp-level security and compliance w/ legislation
-Can enable alerts and view metrics and logs
Apache HBase
Open source-large scale NoSQL data storage and querying
Single DB - Azure SQL Database
Owner configures the DB, creates tables, etc. Scale the DB by adding storage space, memory, or CPU power; resources are pre-allocated and charged by the hour. Can use a serverless config: a Microsoft-created server shared w/ other subscribers (privacy ensured), with automatic scaling - resources are allocated and deallocated on demand.
Page BLOBs
Page blobs store random access files up to 8 TB in size. Page blobs store the virtual hard drive (VHD) files that serve as disks for Azure virtual machines (to implement virtual disk storage for virtual machines) organized as a collection of fixed size 512-byte pages. Support random R/W
Azure Data Box
Physical migration service that helps transfer large amounts of data in a quick, inexpensive, and reliable way. Microsoft ships you a proprietary Data Box storage device - capacity 80 TB - transported to and from your datacenter via a regional carrier. A rugged case protects and secures the Data Box from damage during transit. Ordered and tracked E2E via the Azure portal.
Suited for: 1. data > 40 TB w/ limited or no network connectivity 2. offline tape library to online media library 3. migrating a VM farm, SQL Server, and applns 4. moving historical data for in-depth analysis and reporting using HDInsight 5. initial bulk transfer 6. periodic bulk uploads 7. export from Azure to on-prem 8. security requirements: exporting data out of Azure due to government or security requirements 9. migrating to another cloud provider
Premium block blobs
Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates or that use smaller objects or require consistently low storage latency.
Premium file shares
Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares.
Premium page blobs
Premium storage account type for page blobs only.
RA-GZRS
Read-Access Geo-Zone-Redundant Storage. With GZRS alone, the secondary copy can be read only after a customer- or Microsoft-initiated failover from the primary to the secondary region; if you enable read access to the secondary region (RA-GZRS), your data is always available to read, even when the primary region is running optimally.
RA-GRS
Read-access geo-redundant storage
RPO
Recovery Point Objective: identifies a point in time up to which data loss is acceptable. Related to the RTO; a business impact analysis (BIA) often includes both RTOs and RPOs. Typically less than 15 min - but no SLAs.
Azure SQL Edge
SQL engine optimized for IoT scenarios that need to work w/ streaming time-series data
File sharing protocols - Azure file storage
Server Message Block (SMB) - commonly used across multiple operating systems (Windows, Linux, macOS). Network File System (NFS) - used by some Linux and macOS versions. To create an NFS share, you must use a premium tier storage account and create and configure a virtual network through which access to the share can be controlled.
Elastic Pool - Azure SQL database
Similar to the single DB config, except that by default resources like memory, storage and CPU are shared through multiple tenancy across multiple DBs. The shared resources are AKA a pool - you can create one so only your DBs use it. Useful when your requirements vary over time - reduces cost: use pool resources when needed and release them once processing/load reduces. Not fully compatible w/ on-prem installations; often used in new cloud projects when the appln design can accommodate changes. The DB Migration Assistant helps detect compatibility issues. Used for: modern cloud apps using the latest stable SQL Server features, HA apps, and variable-load apps (scale up/down quickly).
Standard general purpose v2 storage account
Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type.
Performance tiers in Azure file storage
Standard tier uses hard disk-based hardware in a datacenter Premium tier uses solid-state disks; offers greater throughput, but is charged at a higher rate.
Which API should you use to store and query JSON documents in Azure Cosmos DB?
Azure Cosmos DB for NoSQL. The API for NoSQL is designed to store and query JSON documents.
Azure Storage
Types: 1. Blob containers: scalable, cost-effective storage for binary files 2. File shares: network shares (SMB protocol) 3. Tables: key-value storage for applns that need quick R/W 4. Queues: for messaging
Azure Storage is used to host data lakes - blob storage w/ a hierarchical namespace, which enables files to be organized in a folder system.
IoT and telematics
Use cosmos DB ingest large amounts of data in frequent bursts of activity can then use data for analytics services, such as Azure Machine Learning, Azure HDInsight, and Power BI can process data in real-time using Azure Functions triggered as data arrives in the database.
Azure Synapse
Used by data analysts, SQL and spark pools - interactive workbooks - explore/analyze data Integrated w/ Azure ML and PBI for creating data models and extracting insights
SQL Server on Azure VM
-VM w/ SQL Server installed; maximum configurability, full management responsibility
-IaaS - no h/w, cabling
-Best for lift and shift - on-prem to cloud; 100% compatible w/ on-prem physical and virtualized installations
-Need to manage system and SQL updates, config, backup and other maintenance
-Useful when you need OS features unsupported at the PaaS level
-Can be used with on-prem to extend existing applns as a hybrid deployment
-Rapid dev and QA; scale up quickly - more memory, CPU, disk space w/o reinstallation
Azure Disks
Virtualized disks Used with Azure VM's OS Disk (boot volume), Data Disks (Customer-defined) and temporary disks
ZRS
Zone-Redundant Storage: data replicated synchronously across three availability zones in the primary region. If a zone becomes unavailable, Azure handles DNS repointing etc. Recommended for restricting replication of data within a country or region to meet data governance requirements. Durability of 12 9s over a year.
Microsoft purview
a family of data governance, risk, and compliance solutions that helps you get a single, unified view into your data Create a map of data, track data lineage across data sources and systems enables finding trustworthy data for analysis and reporting
MySQL
a popular open-source DBMS product that is license-free for most applications Community edition: free, under Linux - also available for Windows. Std edition: higher perf, diff tech for storing data, not free, used by commercial apps Entp Edition: comprehensive set of tools and features: enhanced sec, availability, scalability - not free - used by commercial apps
Azure Storage Explorer
a standalone app that provides a graphical interface to manage files and blobs in your Azure Storage Account works on Windows, macOS, and Linux uses AzCopy on the backend upload to Azure, download from Azure, or move between storage accounts.
Block BLOBs
Are made up of blocks of data that can be managed individually. A block can be up to 4000 MiB in size - the smallest R/W unit. A block blob can contain up to 50,000 blocks (4000 MiB x 50,000), giving a maximum size of about 190.7 TiB. Used to store discrete, large, binary objects that change infrequently.
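The limit arithmetic above checks out directly:

```python
# Block-blob maximum size: 50,000 blocks x 4000 MiB per block.
MIB = 1024 ** 2
max_block_bytes = 4000 * MIB
max_blob_bytes = 50_000 * max_block_bytes

tib = max_blob_bytes / 1024 ** 4
print(f"{tib:.1f} TiB")  # → 190.7 TiB
```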
Storage services naming
blob: https://<storage-account-name>.blob.core.windows.net gen2 data lake storage: https://<storage-account-name>.dfs.core.windows.net azure files: https://<storage-account-name>.file.core.windows.net queue storage: https://<storage-account-name>.queue.core.windows.net table storage: https://<storage-account-name>.table.core.windows.net
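The endpoint pattern above is just the account name prefixed to a per-service domain; a small sketch (the account name "mystorageacct" is a made-up example):

```python
# Per-service endpoint domains for an Azure Storage account.
SERVICE_DOMAINS = {
    "blob": "blob.core.windows.net",
    "dfs": "dfs.core.windows.net",    # Data Lake Storage Gen2
    "file": "file.core.windows.net",
    "queue": "queue.core.windows.net",
    "table": "table.core.windows.net",
}

def endpoint(account: str, service: str) -> str:
    """Build https://<account>.<service-domain> for a given service."""
    return f"https://{account}.{SERVICE_DOMAINS[service]}"

print(endpoint("mystorageacct", "blob"))
# → https://mystorageacct.blob.core.windows.net
```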
MapReduce Jobs
Can be written in Java or abstracted by interfaces like Hive.
unstructured data
documents, images, audio, video, binary files
PostgreSQL
Hybrid relational-object DB: stores data in relational tables, but can also store custom data types w/ their own non-relational properties. Extensible: you can add code modules to the DB, which can be run by queries. Ability to store and manipulate geometric data, e.g. lines, circles, polygons. pgsql: a variant of SQL w/ features that enable you to write SPs that run inside the DB.
Azure Gen2 Data lake store
integrated into Azure Storage; enabling you to take advantage of the scalability of blob storage and the cost-control of storage tiers Adds blob storage and storage tiers to Gen1 capabilities. Hadoop in Azure HDInsight, Azure Databricks, and Azure Synapse Analytics can mount a distributed file system in Azure Data Lake Store Gen2 and use it to process huge volumes of data. must enable the Hierarchical Namespace option of an Azure Storage account before using. Can upgrade Gen 1 to 2: one-way process - after upgrading a storage account to support a hierarchical namespace for blob storage, you can't revert it to a flat namespace.
Region
-Geographical area that contains at least 1 data center
-Data centers are networked together w/ a low-latency network; Azure handles load balancing
-Some services/VM features are region-specific
-Some Azure services don't require a region to be selected, e.g. Microsoft Entra ID, Azure Traffic Manager, and Azure DNS
ORC (optimized row columnar format)
Organized in columns - created by Hortonworks. Optimized for R/W in Apache Hive. Contains stripes of data - each stripe holds the data for one column or a set of columns. A stripe contains an index into the rows in the stripe, the data for each row, and a footer that holds statistical information (sum, count, max, min etc.).
Azure Free Account
Provides subscribers with a $200 USD Azure credit that they can use for paid Azure products during a 30-day trial period. Once you use that $200 USD credit or reach your trial's end, Azure suspends your account unless you sign up for a paid account. If you upgrade to a Pay-As-You-Go subscription within the 30-day trial period by providing your credit or debit card details, you can use a limited selection of free services for 12 months. After 12 months, you will be billed for the services and products in use on your account at the pay-as-you-go rate.
-Access to certain products that are always free
-Only 1 free Azure account per customer
-Has a spending limit
-Has a limit on the amount of data that can be uploaded
-Cannot host an unlimited number of web applications
Azure Migrate service
provides tools for discovering, assessing, and migrating applications, infrastructure and data from on-premises data center to Azure unified platform, Range of tools (discover, assess, server migration) Also integrates w/ independent software vendor (ISV) offerings
Azure Stream analytics
real-time stream processing engine captures a stream of data from an input, applies a query to extract and manipulate data from the input stream, and writes the results to an output for analysis or further processing.
AVRO
Row-based format, an Apache project. Each record has a JSON header; the data itself is binary. Good for compressing data and minimizing the storage and n/w bandwidth required.
Azure Gen 1 Data lake store
separate service for hierarchical data storage for analytical data lakes, often used by big data analytical solutions that work with structured, semi-structured, and unstructured data stored in files.
Semi structured data
some structure, but allows some variation between entity instances. e.g.JSON
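A quick sketch of what "variation between entity instances" means - two valid JSON documents, same collection, but the second carries a field the first lacks (the field names and values are made up):

```python
import json

# Semi-structured data: both documents parse as JSON, but the schema
# varies per instance - doc_b has a 'phone' field that doc_a lacks.
doc_a = json.loads('{"name": "Avni", "email": "a@x.io"}')
doc_b = json.loads('{"name": "Ben", "email": "b@x.io", "phone": "555-0101"}')

print(set(doc_b) - set(doc_a))  # → {'phone'}
```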
Azure data explorer
standalone service that offers the same high-performance querying of log and telemetry data as the Azure Synapse Data Explorer runtime in Azure Synapse Analytics.
Azure app service migration assistant
standalone tool to assess on-premises websites for migration to Azure App Service. Used to migrate .NET and PHP web apps to Azure.
File format depends upon
type of data being stored: structured, semi/un structured applications/services that need to R/W or process data Readability by humans or efficient storage and processing
MS fabric
unified SaaS analytics platform based on Open and governed lakehouse. Supports following: Data ingestion, ETL, Lake and DW analytics, DS and ML, Realtime analytics, data visualization, Data Governance and management.
azCopy utility
A command-line utility for moving data: upload files, download files, copy files between storage accounts, and even synchronize files; can also move files back and forth between cloud providers. (Files can also be uploaded to Azure File Storage via the Azure portal.) Synchronizing blobs or files with AzCopy is one-direction synchronization (designated source and target).
MS learn sandbox
uses a technology called the sandbox, which creates a temporary subscription that's added to your Azure account. This temporary subscription allows you to create Azure resources during a Learn module. Learn automatically cleans up the temporary resources for you after you've completed the module.