Azure Data Fundamentals


Where to use Data Lake Storage Gen2

Data Lake Storage is designed to store massive amounts of data for big-data analytics.

Common architecture for enterprise-scale analytics

1. Data files may be stored in a central data lake for analysis.
2. An extract, transform, and load (ETL) process copies data from files and OLTP databases into a data warehouse that is optimized for read activity. Commonly, a data warehouse schema is based on fact tables that contain numeric values you want to analyze (for example, sales amounts), with related dimension tables that represent the entities by which you want to measure them (for example, customer or product).
3. Data in the data warehouse may be aggregated and loaded into an online analytical processing (OLAP) model, or cube. Aggregated numeric values (measures) from fact tables are calculated for intersections of dimensions from dimension tables. For example, sales revenue might be totaled by date, customer, and product.
4. The data in the data lake, data warehouse, and analytical model can be queried to produce reports, visualizations, and dashboards.
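
The fact/dimension pattern described above can be sketched with an in-memory SQLite database standing in for a real data warehouse. The table and column names here are hypothetical illustrations, not part of any Azure service.

```python
import sqlite3

# Minimal star-schema sketch: a fact table of numeric measures keyed to
# a dimension table. Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DimProduct (ProductKey INTEGER PRIMARY KEY, ProductName TEXT);
    CREATE TABLE FactSales (
        SalesKey INTEGER PRIMARY KEY,
        ProductKey INTEGER REFERENCES DimProduct(ProductKey),
        SalesAmount REAL
    );
    INSERT INTO DimProduct VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO FactSales VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

# Aggregate a measure (SalesAmount) by a dimension attribute (ProductName).
rows = conn.execute("""
    SELECT p.ProductName, SUM(f.SalesAmount) AS TotalSales
    FROM FactSales AS f
    JOIN DimProduct AS p ON f.ProductKey = p.ProductKey
    GROUP BY p.ProductName
    ORDER BY p.ProductName
""").fetchall()
print(rows)  # [('Gadget', 75.0), ('Widget', 150.0)]
```

Because the measures live in the fact table and the descriptive attributes in the dimension table, adding a new dimension (for example, date) only adds another small lookup table and join.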

Data Analyst

A data analyst enables businesses to maximize the value of their data assets. They're responsible for exploring data to identify trends and relationships, designing and building analytical models, and enabling advanced analytics capabilities through reports and visualizations. A data analyst processes raw data into relevant insights based on identified business requirements.

Data Engineer

A data engineer collaborates with stakeholders to design and implement data-related workloads, including data ingestion pipelines, cleansing and transformation activities, and data stores for analytical workloads. They use a wide range of data platform technologies, including relational and non-relational databases, file stores, and data streams. They're also responsible for ensuring that data privacy is maintained, both in the cloud and in data stores that span from on-premises to the cloud. They own the management and monitoring of data pipelines to ensure that data loads perform as expected.

Database Administrator

A database administrator is responsible for the design, implementation, maintenance, and operational aspects of on-premises and cloud-based database systems. They're responsible for the overall availability and consistent performance and optimizations of databases. They work with stakeholders to implement policies, tools, and processes for backup and recovery plans to recover following a natural disaster or human-made error. The database administrator is also responsible for managing the security of the data in the database, granting privileges over the data, granting or denying access to users as appropriate.

What is Apache Spark notebook?

A notebook is a collection of cells. These cells are run to execute code, to render formatted text, or to display graphical visualizations. You can use Apache Spark notebooks to:
- Read and process huge files and data sets
- Query, explore, and visualize data sets
- Join disparate data sets found in data lakes
- Train and evaluate machine learning models
- Process live streams of data
- Perform analysis on large graph data sets and social networks

Storage Account

A storage account is a container that groups a set of Azure Storage services together. Only data services from Azure Storage can be included in a storage account (Azure Blobs, Azure Files, Azure Queues, and Azure Tables). Combining data services into a single storage account enables you to manage them as a group. The settings you specify when you create the account, or any changes that you make after creation, apply to all services in the storage account. Deleting a storage account deletes all of the data stored inside it. A storage account is an Azure resource and is part of a resource group.

online analytical processing (OLAP)

An OLAP model is an aggregated type of data storage that is optimized for analytical workloads. Data aggregations are across dimensions at different levels, enabling you to drill up/down to view aggregations at multiple hierarchical levels; for example to find total sales by region, by city, or for an individual address. Because OLAP data is pre-aggregated, queries to return the summaries it contains can be run quickly.
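
As a rough sketch of pre-aggregation, the following pure-Python example totals hypothetical sales at two hierarchy levels, so that both the region-level ("drill up") and city-level ("drill down") summaries become simple lookups instead of scans. This is a conceptual illustration, not how an OLAP engine is implemented.

```python
from collections import defaultdict

# Hypothetical raw sales rows: (region, city, amount).
sales = [
    ("West", "Seattle", 100),
    ("West", "Portland", 50),
    ("East", "Boston", 70),
    ("East", "Boston", 30),
]

# Pre-aggregate at two hierarchy levels, as an OLAP model would, so that
# summaries at either level can be answered without rescanning the rows.
by_city = defaultdict(int)
by_region = defaultdict(int)
for region, city, amount in sales:
    by_city[(region, city)] += amount
    by_region[region] += amount

print(by_region["West"])            # 150  (drill up)
print(by_city[("East", "Boston")])  # 100  (drill down)
```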

ACID Semantics

Atomicity - each transaction is treated as a single unit, which succeeds completely or fails completely. For example, a transaction that involved debiting funds from one account and crediting the same amount to another account must complete both actions. If either action can't be completed, then the other action must fail.

Consistency - transactions can only take the data in the database from one valid state to another. To continue the debit and credit example above, the completed state of the transaction must reflect the transfer of funds from one account to the other.

Isolation - concurrent transactions cannot interfere with one another, and must result in a consistent database state. For example, while the transaction to transfer funds from one account to another is in-process, another transaction that checks the balance of these accounts must return consistent results - the balance-checking transaction can't retrieve a value for one account that reflects the balance before the transfer, and a value for the other account that reflects the balance after the transfer.

Durability - when a transaction has been committed, it will remain committed. After the account transfer transaction has completed, the revised account balances are persisted so that even if the database system were to be switched off, the committed transaction would be reflected when it is switched on again.
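
Atomicity can be sketched with SQLite transactions standing in for a full OLTP system. The account schema and the CHECK-constraint overdraft rule are illustrative assumptions, not part of any particular product.

```python
import sqlite3

# Sketch of atomicity: either both the debit and the credit are applied,
# or neither is. The schema and overdraft rule are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "balance REAL NOT NULL CHECK (balance >= 0))"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 0.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # the with-block commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except sqlite3.IntegrityError:
        pass  # overdraft violates the CHECK constraint; nothing is applied

transfer(conn, 1, 2, 60.0)   # succeeds: balances become 40 / 60
transfer(conn, 1, 2, 500.0)  # fails: the debit is rolled back, balances unchanged
balances = conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(40.0,), (60.0,)]
```

The second call shows the key property: the debit statement runs, violates the constraint, and the whole unit is rolled back rather than leaving a half-applied transfer.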

Optimized File Formats

Avro: Avro is a row-based format created by Apache. Each record contains a header that describes the structure of the data in the record. This header is stored as JSON. The data is stored as binary information. An application uses the information in the header to parse the binary data and extract the fields it contains. Avro is a good format for compressing data and minimizing storage and network bandwidth requirements.

ORC: ORC (Optimized Row Columnar format) organizes data into columns rather than rows. It was developed by Hortonworks for optimizing read and write operations in Apache Hive (Hive is a data warehouse system that supports fast data summarization and querying over large datasets). An ORC file contains stripes of data. Each stripe holds the data for a column or set of columns. A stripe contains an index into the rows in the stripe, the data for each row, and a footer that holds statistical information (count, sum, max, min, and so on) for each column.

Parquet: Parquet is another columnar data format, created by Cloudera and Twitter. A Parquet file contains row groups. Data for each column is stored together in the same row group. Each row group contains one or more chunks of data. A Parquet file includes metadata that describes the set of rows found in each chunk. An application can use this metadata to quickly locate the correct chunk for a given set of rows, and retrieve the data in the specified columns for these rows. Parquet specializes in storing and processing nested data types efficiently. It supports very efficient compression and encoding schemes.
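
The row-versus-columnar distinction can be illustrated conceptually in plain Python. This is a sketch of the two layouts and of footer-style column statistics, not the actual binary encodings of Avro, ORC, or Parquet; the records are hypothetical.

```python
# Conceptual contrast between row-based storage (like Avro) and columnar
# storage (like ORC or Parquet), using plain Python structures.
records = [
    {"id": 1, "product": "Widget", "amount": 100.0},
    {"id": 2, "product": "Gadget", "amount": 75.0},
]

# Row-based: each record's fields are stored together - efficient for
# reading or writing whole records at a time.
row_layout = records

# Columnar: values for each column are stored together - efficient for
# scanning one column across many rows.
col_layout = {
    "id": [r["id"] for r in records],
    "product": [r["product"] for r in records],
    "amount": [r["amount"] for r in records],
}

# Columnar formats keep per-column statistics (like an ORC stripe footer)
# so readers can skip data that can't match a query.
stats = {"amount": {"min": min(col_layout["amount"]),
                    "max": max(col_layout["amount"])}}
print(stats)  # {'amount': {'min': 75.0, 'max': 100.0}}
```

A query such as "total amount" touches only the `amount` column in the columnar layout, while the row layout would force a scan of every full record.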

Azure Cosmos DB

Azure Cosmos DB is a global-scale non-relational (NoSQL) database system that supports multiple application programming interfaces (APIs), enabling you to store and manage data as JSON documents, key-value pairs, column-families, and graphs.

Azure Data Explorer

Azure Data Explorer is a standalone service that offers the same high-performance querying of log and telemetry data as the Azure Synapse Data Explorer runtime in Azure Synapse Analytics. Data analysts can use Azure Data Explorer to query and analyze data that includes a timestamp attribute, such as is typically found in log files and Internet-of-things (IoT) telemetry data.

Azure Data Factory

Azure Data Factory is an Azure service that enables you to define and schedule data pipelines to transfer and transform data. You can integrate your pipelines with other Azure services, enabling you to ingest data from cloud data stores, process the data using cloud-based compute, and persist the results in another data store. Azure Data Factory is used by data engineers to build extract, transform, and load (ETL) solutions that populate analytical data stores with data from transactional systems across the organization.

Azure Databricks

Azure Databricks is a fully managed, cloud-based big data and machine learning platform that empowers developers to accelerate AI and innovation by simplifying the process of building enterprise-grade production data applications. Azure Databricks provides data science and engineering teams with a single platform for big data processing and machine learning. What does Databricks offer that isn't in open-source Spark?
- Databricks Workspace - Interactive Data Science & Collaboration
- Databricks Workflows - Production Jobs & Workflow Automation
- Databricks Runtime
- Databricks I/O (DBIO) - Optimized Data Access Layer
- Databricks Serverless - Fully Managed Auto-Tuning Platform
- Databricks Enterprise Security (DBES) - End-To-End Security & Compliance

Azure HDInsight

Azure HDInsight is an Azure service that provides Azure-hosted clusters for popular Apache open-source big data processing technologies, including:
- Apache Spark - a distributed data processing system that supports multiple programming languages and APIs, including Java, Scala, Python, and SQL.
- Apache Hadoop - a distributed system that uses MapReduce jobs to process large volumes of data efficiently across multiple cluster nodes. MapReduce jobs can be written in Java or abstracted by interfaces such as Apache Hive - a SQL-based API that runs on Hadoop.
- Apache HBase - an open-source system for large-scale NoSQL data storage and querying.
- Apache Kafka - a message broker for data stream processing.
- Apache Storm - an open-source system for real-time data processing through a topology of spouts and bolts.

Azure Purview

Azure Purview provides a solution for enterprise-wide data governance and discoverability. You can use Azure Purview to create a map of your data and track data lineage across multiple data sources and systems, enabling you to find trustworthy data for analysis and reporting. Data engineers can use Azure Purview to enforce data governance across the enterprise and ensure the integrity of data used to support analytical workloads.

Azure SQL

Azure SQL is the collective name for a family of relational database solutions based on the Microsoft SQL Server database engine. Specific Azure SQL services include:
- Azure SQL Database - a fully managed platform-as-a-service (PaaS) database hosted in Azure. It supports most core database-level capabilities of SQL Server, and the database can be scaled if needed. You can also specify a serverless configuration, where the server might be shared by databases belonging to other Azure subscribers and the database is scaled automatically. With an elastic pool, multiple databases can share the same resources - a model that is useful if you have databases whose resource requirements vary over time.
- Azure SQL Managed Instance - a hosted instance of SQL Server with automated maintenance, which allows more flexible configuration than Azure SQL Database but with more administrative responsibility for the owner. It offers near-100% compatibility with SQL Server.
- Azure SQL VM - a virtual machine (IaaS) with an installation of SQL Server, allowing maximum configurability with full management responsibility. It is fully compatible with on-premises physical and virtualized installations, so applications and databases can easily be migrated "lift and shift" without change.
- Azure SQL Edge - a SQL engine that is optimized for Internet-of-things (IoT) scenarios that need to work with streaming time-series data.

Azure Storage

Azure Storage is a core Azure service that enables you to store data in:
- Blob containers - scalable, cost-effective storage for text and binary files.
- File shares - network file shares such as you typically find in corporate networks.
- Tables - NoSQL schemaless key-value storage for applications that need to read and write data values quickly.
- Queues - a messaging store for reliable messaging between application components.
Microsoft Azure Storage is a managed service that provides durable, secure, and scalable storage in the cloud.

Azure Stream Analytics

Azure Stream Analytics is a real-time stream processing engine that captures a stream of data from an input, applies a query to extract and manipulate data from the input stream, and writes the results to an output for analysis or further processing. Data engineers can incorporate Azure Stream Analytics into data analytics architectures that capture streaming data for ingestion into an analytical data store or for real-time visualization.

Azure Synapse Analytics

Azure Synapse Analytics is a comprehensive, unified data analytics solution that provides a single service interface for multiple analytical capabilities, including:
- Pipelines - based on the same technology as Azure Data Factory.
- SQL - a highly scalable SQL database engine, optimized for data warehouse workloads.
- Apache Spark - an open-source distributed data processing system that supports multiple programming languages and APIs, including Java, Scala, Python, and SQL.
- Azure Synapse Data Explorer - a high-performance data analytics solution that is optimized for real-time querying of log and telemetry data using Kusto Query Language (KQL).
Data engineers can use Azure Synapse Analytics to create a unified data analytics solution that combines data ingestion pipelines, data warehouse storage, and data lake storage through a single service. Data analysts can use SQL and Spark pools through interactive notebooks to explore and analyze data, and take advantage of integration with services such as Azure Machine Learning and Microsoft Power BI to create data models and extract insights from the data.

Azure Database for open-source relational databases

Azure includes managed services for popular open-source relational database systems, including:
- Azure Database for MySQL - a simple-to-use open-source database management system (PaaS) that is commonly used in Linux, Apache, MySQL, and PHP (LAMP) stack apps. Based on the Community release.
- Azure Database for MariaDB (PaaS) - a newer database management system, created by the original developers of MySQL. The database engine has since been rewritten and optimized to improve performance. MariaDB offers compatibility with Oracle Database. Based on the Community release.
- Azure Database for PostgreSQL (PaaS) - a hybrid relational-object database. You can store data in relational tables, but a PostgreSQL database also enables you to store custom data types, with their own non-relational properties. Azure Database for PostgreSQL has three deployment options: Single Server, Flexible Server, and Hyperscale.

Data Analysis

Data analysis is the process of identifying, cleaning, transforming, and modeling data to discover meaningful and useful information.

Data Lakes

Data lakes are common in modern data analytical processing scenarios, where a large volume of file-based data must be collected and analyzed.

Data Warehouse

Data warehouses are an established way to store data in a relational schema that is optimized for read operations - primarily queries to support reporting and data visualization. The data warehouse schema may require some denormalization of data in an OLTP data source (introducing some duplication to make queries perform faster).

File Storage

Common file formats include:
- Delimited text files
- JavaScript Object Notation (JSON)
- Extensible Markup Language (XML)
- Binary Large Object (BLOB)
Some file formats, however, particularly for unstructured data, store the data as raw binary that must be interpreted by applications and rendered. Common types of data stored as binary include images, video, audio, and application-specific documents. When working with data like this, data professionals often refer to the data files as BLOBs (Binary Large Objects).
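
Two of these formats, delimited text and JSON, can be demonstrated with the Python standard library. The file contents here are hypothetical sample data.

```python
import csv
import io
import json

# Delimited text (CSV): a header row names the columns, and every value
# is read back as a string.
csv_text = "id,product,amount\n1,Widget,100.0\n2,Gadget,75.0\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows[0]["product"])  # Widget

# JSON: the same records as a self-describing, semi-structured document.
json_text = json.dumps(rows)
round_tripped = json.loads(json_text)
print(round_tripped[1]["amount"])  # 75.0 (still a string - CSV has no types)
```

The last comment highlights a practical difference: delimited text carries no type information, so numeric typing has to be applied by the reading application.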

Microsoft Power BI

Microsoft Power BI is a platform for analytical data modeling and reporting that data analysts can use to create and share interactive data visualizations. Power BI reports can be created by using the Power BI Desktop application, then published and delivered through web-based reports and apps in the Power BI service, as well as in the Power BI mobile app.

Non-Relational Databases

Non-relational databases are data management systems that don't apply a relational schema to the data. Non-relational databases are often referred to as NoSQL databases, even though some support a variant of the SQL language. There are four common types of non-relational database in use:
- Key-value databases, in which each record consists of a unique key and an associated value, which can be in any format.
- Document databases, which are a specific form of key-value database in which the value is a JSON document (which the system is optimized to parse and query).
- Column family databases, which store tabular data comprising rows and columns, but you can divide the columns into groups known as column families. Each column family holds a set of columns that are logically related.
- Graph databases, which store entities as nodes with links to define relationships between them.
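
The key-value and document models can be sketched with plain Python structures. Real services (for example, Azure Cosmos DB) add indexing, partitioning, and query engines on top of these shapes; the keys and documents here are hypothetical.

```python
import json

# Key-value model: each record is an opaque value looked up by a unique
# key - the store doesn't interpret the value at all.
kv_store = {
    "user:1": b"...any bytes...",
    "user:2": b"...any bytes...",
}

# Document model: the value is a JSON document that the system is
# optimized to parse and query; fields may vary per document.
doc_store = {
    "user:1": json.dumps({"name": "Ana", "emails": ["ana@example.com"]}),
    "user:2": json.dumps({"name": "Ben"}),  # no "emails" field at all
}

doc = json.loads(doc_store["user:1"])
print(doc["emails"][0])  # ana@example.com
```

Note the schema flexibility: `user:2` simply omits the `emails` field, which a relational table could not do without NULLs or a separate table.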

Data Normalization

Normalization is a term used by database professionals for a schema design process that minimizes data duplication and enforces data integrity. The process for data normalization:
1. Separate each entity into its own table.
2. Separate each discrete attribute into its own column.
3. Uniquely identify each entity instance (row) using a primary key.
4. Use foreign key columns to link related entities.
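
The steps above can be sketched with SQLite; the customer/order schema is a hypothetical illustration.

```python
import sqlite3

# Normalization sketch: each entity in its own table, each attribute in
# its own column, a primary key per row, and a foreign key linking them.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE Customer (
        CustomerID INTEGER PRIMARY KEY,   -- unique identifier per row
        Name TEXT NOT NULL                -- each discrete attribute is a column
    );
    CREATE TABLE SalesOrder (
        OrderID INTEGER PRIMARY KEY,
        CustomerID INTEGER NOT NULL REFERENCES Customer(CustomerID),
        Amount REAL
    );
    INSERT INTO Customer VALUES (1, 'Ana');
    INSERT INTO SalesOrder VALUES (101, 1, 42.0);
""")

# The customer's name is stored once and joined in, never duplicated
# on each order row.
row = conn.execute("""
    SELECT c.Name, o.Amount
    FROM SalesOrder AS o JOIN Customer AS c ON o.CustomerID = c.CustomerID
""").fetchone()
print(row)  # ('Ana', 42.0)
```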

Unstructured data

Not all data is structured or even semi-structured. For example, documents, images, audio and video data, and binary files might not have a specific structure. This kind of data is referred to as unstructured data.

Online Transactional Processing (OLTP)

OLTP solutions rely on a database system in which data storage is optimized for both read and write operations in order to support transactional workloads in which data records are created, retrieved, updated, and deleted (often referred to as CRUD operations). These operations are applied transactionally, in a way that ensures the integrity of the data stored in the database. To accomplish this, OLTP systems enforce transactions that support so-called ACID semantics.

Relational Databases

Relational databases are commonly used to store and query structured data. The data is stored in tables that represent entities, such as customers, products, or sales orders. Each instance of an entity is assigned a primary key that uniquely identifies it; and these keys are used to reference the entity instance in other tables. This use of keys to reference data entities enables a relational database to be normalized; which in part means the elimination of duplicate data values. The tables are managed and queried using Structured Query Language (SQL), which is based on an ANSI standard, so it's similar across multiple database systems.

SQL

SQL stands for Structured Query Language, and is used to communicate with a relational database. It's the standard language for relational database management systems. SQL statements are used to perform tasks such as updating data in a database or retrieving data from a database. Some common relational database management systems that use SQL include Microsoft SQL Server, MySQL, PostgreSQL, MariaDB, and Oracle. Some popular dialects of SQL include:
- Transact-SQL (T-SQL) - the version of SQL used by Microsoft SQL Server and Azure SQL services.
- pgSQL - the dialect, with extensions, implemented in PostgreSQL.
- PL/SQL - the dialect used by Oracle. PL/SQL stands for Procedural Language/SQL.

SQL statement types

SQL statements are grouped into three main logical groups:
- Data Definition Language (DDL) - use DDL statements to create, modify, and remove tables and other objects in a database (tables, stored procedures, views, and so on).
- Data Control Language (DCL) - use DCL statements to manage access to objects in a database by granting, denying, or revoking permissions to specific users or groups.
- Data Manipulation Language (DML) - use DML statements to manipulate the rows in tables. These statements enable you to retrieve (query) data, insert new rows, and delete or modify existing rows.
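
DDL and DML can be demonstrated with SQLite; note that SQLite has no DCL statements such as GRANT or REVOKE (permissions are handled by the host system), so that group appears only as a comment. The table and values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define and remove objects in the database.
conn.execute("CREATE TABLE Product (ID INTEGER PRIMARY KEY, Name TEXT)")

# DML: manipulate the rows in tables.
conn.execute("INSERT INTO Product VALUES (1, 'Widget')")
conn.execute("UPDATE Product SET Name = 'Widget v2' WHERE ID = 1")
names = [r[0] for r in conn.execute("SELECT Name FROM Product")]
print(names)  # ['Widget v2']

# DCL (not supported by SQLite) would look like:
#   GRANT SELECT ON Product TO some_user;

# DDL again: remove the object.
conn.execute("DROP TABLE Product")
```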

Semi-structured data

Semi-structured data is information that has some structure, but which allows for some variation between entity instances. For example, while most customers may have an email address, some might have multiple email addresses, and some might have none at all. One common format for semi-structured data is JavaScript Object Notation (JSON).

Structured data

Structured data is data that adheres to a fixed schema, so all of the data has the same fields or properties. Most commonly, the schema for structured data entities is tabular - in other words, the data is represented in one or more tables that consist of rows to represent each instance of a data entity, and columns to represent attributes of the entity. Structured data is often stored in a database in which multiple tables can reference one another by using key values in a relational model.

Data stores

There are two broad categories of data store in common use: File stores Databases

Data Lake Storage Key Features

- Unlimited scalability
- Hadoop compatibility
- Security support for both access control lists (ACLs) and POSIX permissions
- An optimized Azure Blob File System (ABFS) driver that's designed for big-data analytics
- Zone-redundant storage
- Geo-redundant storage

