Snowflake SnowPro Core Udemy Practice Questions


Question 76: Incorrect Snowflake has a default file format if none is explicitly set or declared in SQL syntax. (TRUE / FALSE) -TRUE -FALSE

-TRUE True, Snowflake has a default file format (CSV) if none is explicitly set or declared in SQL syntax.

Can a database overlap across two Snowflake accounts? 1.No 2.Yes

1.No A database cannot overlap across two Snowflake accounts.

A secure view can be used to hide its definition, but its performance can be degraded. (TRUE / FALSE) 1.TRUE 2.FALSE

1. TRUE Secure views should not be used for views that are defined for query convenience, such as views created for simplifying querying data for which users do not need to understand the underlying data representation. This is because the Snowflake query optimizer, when evaluating secure views, bypasses certain optimizations used for regular views. This might result in some impact on query performance for secure views.

Decreasing the size of a running warehouse removes compute resources from the warehouse. When the compute resources are removed, the cache associated with those resources is dropped. 1.TRUE 2.FALSE

1. TRUE https://docs.snowflake.com/en/user-guide/warehouses-considerations.html#warehouse-resizing-improves-performance When the compute resources are removed, the cache associated with those resources is dropped, which can impact performance in the same way that suspending the warehouse can impact performance after it is resumed.

Question 45: Correct Generally, what is the size of a micro-partition in compressed format? 1.16 MB 2.16 GB 3.500 MB 4.50 MB

1.16 MB Micro-partitions are 50 MB to 500 MB in uncompressed format and roughly 16 MB in compressed format.

Snowflake follows the staged release process for new releases. How many days does it normally take to complete all the stages of the staged release? 1.2 days 2.1 day 3.3 days 4.7 days

1.2 days Once a full release has been deployed, Snowflake does not move all accounts to the release at the same time. Accounts are moved to the release using a three-stage approach over two (or more) days. https://docs.snowflake.com/en/user-guide/intro-releases.html#staged-release-process

!!!Question 53: Incorrect Which of these does COPY INTO support while unloading from Snowflake tables to internal or external stages? 1.Column reordering 2.Casts 3.Column omission 4.Join 5.String Truncation

1.Column reordering 2.Casts 3.Column omission 4.Join 5.String Truncation All of these. Unloading is more flexible than loading. JOIN is not supported while loading but supported for unloading.

Please choose all the securable objects from the given options. 1.DATABASE 2.EXTERNAL TABLE 3.SCHEMA 4.STORED PROCEDURE 5.UDF

1.DATABASE 2.EXTERNAL TABLE 3.SCHEMA 4.STORED PROCEDURE 5.UDF All other securable objects (such as TABLE, FUNCTION, FILE FORMAT, STAGE, SEQUENCE, STORED PROCEDURE, UDF, EXTERNAL TABLE, etc.) are contained within a SCHEMA object within a DATABASE.

!!!Question 27: Correct Which objects are securable objects in Snowflake? 1.Database 2.Table 3.Warehouse 4.File Format

1.Database 2.Table 3.Warehouse 4.File Format All of these are securable objects in Snowflake. Securable Object is an entity to which access can be granted. Unless allowed by a grant, access will be denied.

Question 25: Correct What are the features of Column-level security? (Select 2) 1.Dynamic Data Masking 2.External Tokenization 3.Internal Tokenization 4.Column Masking

1.Dynamic Data Masking 2.External Tokenization Column-level security in Snowflake allows the application of a masking policy to a column within a table or view. Currently, column-level security comprises two features: - Dynamic Data Masking - External Tokenization Dynamic Data Masking is a column-level security feature that uses masking policies to selectively mask plain-text data in table and view columns at query time. External Tokenization enables accounts to tokenize data before loading it into Snowflake and detokenize the data at query runtime.

Question 24: Correct Dynamic Data Masking is supported by: 1.Enterprise Edition 2.Business Critical 3.VPS 4.Standard Edition

1.Enterprise Edition 2.Business Critical 3.VPS Explanation Dynamic Data Masking starts with the Enterprise edition.

Question 64: Incorrect All data is transparently & synchronously replicated across a minimum of 2 availability zones. (TRUE / FALSE) 1.FALSE 2.TRUE

1.FALSE This is covered in Snowflake's Level Up training: Snowflake is designed to provide high availability and fault tolerance by deploying across 3 availability zones. Data and metadata are replicated across all three zones, and Global Services runs across all 3 zones. In the event a zone becomes unavailable, the only impact is that queries or loads running in that zone will be automatically restarted.

Question 65: Correct Snowflake provides standard and powerful features that ensure the highest levels of security for your account and users if used properly. Which are the true statements about Snowflake Security? 1.Federated authentication in Snowflake is compliant with SAML 2.0 2.Tri-secret requires that customers manage their own keys 3.Snowflake supports user-based access control

1.Federated authentication in Snowflake is compliant with SAML 2.0 2.Tri-secret requires that customers manage their own keys Explanation Along with Tri-Secret and federated authentication, Snowflake supports ROLE-based (not user-based) access control.

!!!When a database or schema is cloned, what object is not cloned? 1.Internal Named Stages 2.Sequences 3.Internal Stages 4.Stored Procedures 5.Pipes

1.Internal Named Stages https://docs.snowflake.com/en/user-guide/object-clone.html#cloning-and-stages When cloning a database or schema: -External named stages that were present in the source when the cloning operation started are cloned. -Tables are cloned, which means the internal stage associated with each table is also cloned. Any data files that were present in a table stage in the source database or schema are not copied to the clone (i.e. the cloned table stages are empty). -Internal named stages are not cloned.

Question 7: Correct Snowflake supports many methods of authentication. Which are the supported authentication methods in ALL Snowflake Editions? 1.MFA (Multi-factor authentication) 2.SSO 3.Only MFA is supported by all the Snowflake editions 4.OAuth 5.Only MFA and SSO are supported by all the Snowflake editions

1.MFA (Multi-factor authentication) 2.SSO 4.OAuth

Question 48: Incorrect You set up a Snowflake account, choosing AWS as your cloud platform provider. What stages can you use to load data files? (Check all that apply) 1.NAMED EXTERNAL - using Azure BLOB storage 2.NAMED INTERNAL 3.NAMED EXTERNAL - using GCS/GCP Buckets 4.TABLE 5.USER 6.NAMED EXTERNAL - using S3 Buckets

1.NAMED EXTERNAL - using Azure BLOB storage 2.NAMED INTERNAL 3.NAMED EXTERNAL - using GCS/GCP Buckets 4.TABLE 5.USER 6.NAMED EXTERNAL - using S3 Buckets https://docs.snowflake.com/en/user-guide/data-load-overview.html#internal-stages User: A user stage is allocated to each user for storing files. This stage type is designed to store files that are staged and managed by a single user but can be loaded into multiple tables. User stages cannot be altered or dropped. Table: A table stage is available for each table created in Snowflake. This stage type is designed to store files that are staged and managed by one or more users but only loaded into a single table. Table stages cannot be altered or dropped. Note that a table stage is not a separate database object; rather, it is an implicit stage tied to the table itself. Named: A named internal stage is a database object created in a schema. This stage type can store files that are staged and managed by one or more users and loaded into one or more tables.

Question 5: Correct What data structure types can be ingested into a VARIANT column in a Snowflake table? (Check all that apply) 1.ORC 2.PARQUET 3.JSON 4.AVRO 5.XML

1.ORC 2.PARQUET 3.JSON 4.AVRO 5.XML Snowflake supports all these types of Semi-Structured data. -JSON (JavaScript Object Notation) is a lightweight, plain-text, data-interchange format based on a subset of the JavaScript Programming Language. -Avro is an open-source data serialization and RPC framework originally developed for use with Apache Hadoop. It utilizes schemas defined in JSON to produce serialized data in a compact binary format. -ORC is used to store Hive data, the ORC (Optimized Row Columnar) file format was designed for efficient compression and improved performance for reading, writing, and processing data over earlier Hive file formats. -Parquet is a compressed, efficient columnar data representation designed for projects in the Hadoop ecosystem. The file format supports complex nested data structures and uses Dremel record shredding and assembly algorithms. -XML (Extensible Markup Language) is a markup language that defines a set of rules for encoding documents. It was originally based on SGML, another markup language developed for standardizing the structure and elements that comprise a document.

Question 88: Incorrect When setting the password policy, what options are available? 1.Snowflake password policy cannot be changed 2.Force the use of a special character in a password 3.All of the above 4.Set the required length for a password 5.Prevent the reuse of a previous password

1.Snowflake password policy cannot be changed Snowflake enforces the following password policy when creating a user, setting a new/initial password for a user, or resetting the existing password for a user: - Must be at least 8 characters long. - Must contain at least 1 digit. - Must contain at least 1 uppercase letter and 1 lowercase letter. You cannot change this policy, but you are advised to set an even stronger password, which must also satisfy this policy.

Question 61: Incorrect Which are the correct statements about STREAMS? 1.Streams are used to identify and act on changed table records 2.STREAMS are used to schedule SQL execution 3.It is used for Change Data Capture (CDC) 4.It cannot be used with TASKS

1.Streams are used to identify and act on changed table records 3.It is used for Change Data Capture (CDC) Explanation -Tasks are used to schedule SQL execution. -A stream records data manipulation language (DML) changes made to a table, including information about inserts, updates, and deletes. -It can be combined with TASKS to design valuable solutions.
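A minimal sketch of a stream used for CDC (table and stream names are hypothetical):
-- Track DML changes on a source table
CREATE OR REPLACE TABLE src_orders (id NUMBER, amount NUMBER);
CREATE OR REPLACE STREAM orders_stream ON TABLE src_orders;
INSERT INTO src_orders VALUES (1, 100);
-- The stream exposes the changed rows plus METADATA$ACTION / METADATA$ISUPDATE columns
SELECT * FROM orders_stream;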

Question 66: Incorrect Which are the correct statements about TASKS? 1.TASKS are used to schedule SQL execution 2.Cannot be triggered manually 3.It is used for Change Data Capture (CDC) 4.TASKS are used to identify and act on changed table records 5.It can be used with STREAMS

1.TASKS are used to schedule SQL execution 2.Cannot be triggered manually 5.It can be used with STREAMS A task can execute a single SQL statement, including a call to a stored procedure. Tasks can be combined with table streams for continuous ELT workflows to process recently changed table rows. Streams ensure exactly-once semantics for new or changed data in a table. Tasks can also be used independently to generate periodic reports by inserting or merging rows into a report table or to perform other periodic work. Tasks cannot be triggered manually.
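A minimal sketch of a task consuming the stream from the previous example (warehouse, task, and target table names are hypothetical):
-- Runs every 5 minutes, but only when the stream has data
CREATE OR REPLACE TASK process_orders_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO orders_target SELECT id, amount FROM orders_stream;
-- Tasks are created in a suspended state; resume to start the schedule
ALTER TASK process_orders_task RESUME;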

Question 68: STRIP_NULL_VALUE converts a JSON "null" value to a SQL NULL value. (TRUE / FALSE) 1.TRUE 2.FALSE

1.TRUE https://docs.snowflake.com/en/sql-reference/functions/strip_null_value.html Converts a JSON "null" value to a SQL NULL value. All other variant values are passed unchanged.
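A quick, self-contained illustration of the difference between a JSON null and a SQL NULL:
SELECT STRIP_NULL_VALUE(PARSE_JSON('{"a": null}'):a) AS sql_null,   -- returns SQL NULL
       PARSE_JSON('{"a": null}'):a                   AS json_null;  -- returns the variant value null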

Question 54: Each share contains a single database, and all other objects included in the share must be from this same database. 1.TRUE 2.FALSE

1.TRUE Snowflake data providers can share data that resides in different databases by using secure views. A secure view can reference objects such as schemas, tables, and other views from one or more databases, as long as these databases belong to the same account.

Question 98: Incorrect Scaling up can be configured as automated process. 1.TRUE 2.FALSE

2.FALSE Scaling up means increasing the warehouse size, which is a manual process.

Question 8: Incorrect What is the default Time-Travel retention period of Enterprise Edition? 1.90 days 2.1 day 3.7 days 4.0 days

2.1 day The default retention period enabled for all the Snowflake editions is 1 day (24 hours)

Question 43: Correct Each Snowflake account comes with two shared databases. One is a set of sample data and the other contains Account Usage information. Check all true statements about these shared databases. 1.ACCOUNT_USAGE is a schema filled with external tables 2.ACCOUNT USAGE is a schema filled with secure views 3.SNOWFLAKE_SAMPLE_DATA contains a schema called ACCOUNT_USAGE 4.SNOWFLAKE contains a table called ACCOUNT_USAGE 5.SNOWFLAKE contains a schema called ACCOUNT_USAGE

2.ACCOUNT USAGE is a schema filled with secure views 5.SNOWFLAKE contains a schema called ACCOUNT_USAGE ACCOUNT_USAGE is a schema, not a table. It contains many VIEWS which help a customer get all the relevant details about usage.

!!! Question 58: Incorrect All files stored in stages (for data loading/unloading) are automatically encrypted (using either AES 256 standard or 512 strong encryption). (TRUE/FALSE) 1.TRUE 2.FALSE

2.FALSE Explanation All files stored in stages (for data loading/unloading) are automatically encrypted (using either AES-128 standard or AES-256 strong encryption). https://docs.snowflake.com/en/user-guide/intro-summary-loading.html#label-summary-loading-encrypting-staged-files -When staging unencrypted files in a Snowflake internal location, the files are automatically encrypted using 128-bit keys. 256-bit keys can be enabled (for stronger encryption); however, additional configuration is required. -Files that are already encrypted can be loaded into Snowflake from external cloud storage; the user-supplied key used to encrypt the files must be provided to Snowflake.

Question 69: Correct Semi-Structured data must be transformed prior to loading the data into a Snowflake table. (TRUE / FALSE) 1.TRUE 2.FALSE

2.FALSE Explanation Snowflake natively supports semi-structured data; it is not necessary to transform semi-structured data prior to loading it into a Snowflake table.

Question 22: Correct John wants to create a multi-cluster warehouse and wants to make sure that the virtual warehouse starts the additional clusters only if the system estimates there's enough query load to keep the cluster busy for at least 6 minutes. How should he configure the Warehouse? 1.Configure as SCALE-MAX so that the warehouse is always using maximum number of specified clusters 2.Set the SCALING POLICY as ECONOMY 3.Snowflake takes care of this automatically so, John does not have to worry about it 4.Set the SCALING POLICY as STANDARD

2.Set the SCALING POLICY as ECONOMY If a multi-cluster warehouse is configured with SCALING policy as ECONOMY it starts cluster only if the system estimates there's enough query load to keep the cluster busy for at least 6 minutes.
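A minimal sketch of such a multi-cluster warehouse (name and size are hypothetical):
CREATE OR REPLACE WAREHOUSE john_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'ECONOMY';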

!!!Question 26: Correct SNOWPIPE AUTO_INGEST method only works with External Stages. (TRUE/FALSE) 1.FALSE 2.TRUE

2.TRUE https://docs.snowflake.com/en/sql-reference/sql/create-pipe.html AUTO_INGEST = TRUE | FALSE Specifies whether to automatically load data files from the specified external stage and optional path when event notifications are received from a configured message service. -TRUE enables automatic data loading; Snowpipe auto-ingest supports loading from external stages. -FALSE disables automatic data loading; you must make calls to the Snowpipe REST API endpoints to load data files. Snowpipe via the REST API supports loading from internal stages (i.e. Snowflake named stages or table stages, but not user stages) or external stages.
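A minimal sketch of an auto-ingest pipe (pipe, table, and stage names are hypothetical):
CREATE OR REPLACE PIPE my_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO my_table
  FROM @my_external_stage
  FILE_FORMAT = (TYPE = 'JSON');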

Question 62: Correct What is the recommended data file size for SNOWPIPE continuous loading? 1.Same as for Bulk Loading (10 MB - 100 MB uncompressed) 2.If a file takes more than a minute to load, split it into more files 3.1 GB Compressed 4.Same as for Bulk Loading (100 MB - 250 MB compressed)

2.If a file takes more than a minute to load, split it into more files 4.Same as for Bulk Loading (100 MB - 250 MB compressed) Explanation Snowpipe is designed to load new data typically within a minute after a file notification is sent. Follow the best practices as per bulk loading for file sizes (100 MB - 250 MB compressed), but split into more files if a file takes more than a minute. https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html#:~:text=To%20optimize%20the%20number%20of,or%20larger)%20in%20size%20compressed. General File Sizing Recommendations The number of load operations that run in parallel cannot exceed the number of data files to be loaded. To optimize the number of parallel operations for a load, we recommend aiming to produce data files roughly 100-250 MB (or larger) in size compressed.

Question 90: Incorrect Please select the best option. 1.Internal Table Stage is referenced as @% 2.Internal Named Stage is referenced as @ 3.All of these 4.Internal user stage is referenced using @~

3.All of these Explanation -Internal User Stage - It is allocated to each user for storing files. Managed by a single user. Can't be altered or dropped. User stages are referenced using @~. -Internal Table Stage - It is available for each table created in Snowflake and available to one or more users, but only loaded into a single table. Can't be altered or dropped. The stage is referenced as @%. When copying data from files in a table stage, the FROM clause can be omitted because Snowflake automatically checks for files in the table stage. -Internal Named Stage - A named internal stage is a database object created in a schema. The stage is referenced as @.
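A short sketch of the three reference styles (table and stage names are hypothetical):
LIST @~;             -- user stage
LIST @%my_table;     -- table stage of MY_TABLE
LIST @my_stage;      -- named internal stage
COPY INTO my_table;  -- FROM clause omitted: Snowflake checks the table stage automatically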

Question 28: Correct Which Snowflake edition allows for customer-managed key encryption? 1.Only VPS 2.Standard (or higher) 3.Business Critical (or higher) 4.Enterprise (or higher)

3.Business Critical (or higher) Explanation The Business Critical (or higher) edition supports encrypting data using customer-managed keys, as part of Tri-Secret Secure.

Question 59: Correct Which three objects did we explicitly refer to using the COPY INTO command in the lesson on using external stages? 1.DATABASE 2.SCHEMA 3.FILE FORMAT 4.STAGE 5.TABLE 6.VIEW

3.FILE FORMAT 4.STAGE 5.TABLE

!!!Question 18: Correct Stages which do not support setting up File Formats are: 1.External Named Stage 2.Internal named Stage 3.Internal Table Stage 4.Internal User Stage

3.Internal Table Stage 4.Internal User Stage Explanation Table Stage and User Stage are created automatically whenever a table is created or a new user is added into the system respectively. They don't support setting up the file format.

Question 57: Incorrect Which parameter does help in loading files whose metadata has expired? 1.Set LAST_MODIFIED_DATE to more than 64 days 2.set LAST_MODIFIED_DATE to within 64 days 3.set LOAD_UNCERTAIN_FILES to TRUE 4.set LOAD_EXPIRED_FILES to TRUE

3.set LOAD_UNCERTAIN_FILES to TRUE To load files whose metadata has expired, set the LOAD_UNCERTAIN_FILES copy option to true. https://docs.snowflake.com/en/user-guide/data-load-considerations-load.html#loading-older-files Workarounds To load files whose metadata has expired, set the LOAD_UNCERTAIN_FILES copy option to true. The copy option references load metadata, if available, to avoid data duplication, but also attempts to load files with expired load metadata. Alternatively, set the FORCE option to load all files, ignoring load metadata if it exists. Note that this option reloads files, potentially duplicating data in a table
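A minimal sketch of the copy option in use (table and stage names are hypothetical):
COPY INTO my_table
  FROM @my_stage
  LOAD_UNCERTAIN_FILES = TRUE;  -- also attempts files whose load metadata has expired (older than 64 days)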

Question 82: Incorrect What is the recommendation for file size for Parquet files for loading? 1.16 MB 2.2 GB 3.3 GB 4.1 GB

4.1 GB Explanation Currently, data loads of large Parquet files (e.g. greater than 3 GB) could time out. Split large files into files 1 GB in size (or smaller) for loading.

Question 91: Incorrect Snowflake provides native support for semi-structured data. Select the true options about Snowflake's native support for semi-structured data. 1.Flexible-Schema data types for loading semi-structured data without transformation 2.Database Optimization for fast and efficient SQL querying. 3.Automatic conversion of data to optimize internal storage format 4.All of these

4.All of these

Question 50: Correct How can you remove the outer array structure of JSON data? 1.Set STRIP_OUTER_ARRAY = 1 2.Set STRIP_INNER_ARRAY = TRUE 3.Set STRIP_OUTER_BRACKET = TRUE 4.Set STRIP_OUTER_ARRAY = TRUE

4.Set STRIP_OUTER_ARRAY = TRUE Explanation STRIP_OUTER_ARRAY = TRUE in COPY INTO command removes the outer array structure and load the records into separate table rows.
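A minimal sketch of the option in a COPY INTO command (table, stage, and file names are hypothetical):
COPY INTO json_table
  FROM @my_stage/data.json
  FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);  -- each array element becomes its own row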

Question 92: Incorrect When using a LATERAL FLATTEN in a semi-structured data query, what does the LATERAL do? 1.Causes the FLATTEN command to be recursive and drill down into all levels of the record 2.Modifies the data in an array to a variant data type so it will work with the FLATTEN command 3.Works as a self join to join the current record key / value pairs to an array in the same record 4.Causes the FLATTEN command to read an array in reverse order 5.Breaks an array into multiple elements

5.Breaks an array into multiple elements Explanation LATERAL specifies an inline view within the FROM clause. It breaks an array into multiple elements
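A minimal sketch of a LATERAL FLATTEN query (table and column names are hypothetical; v is a VARIANT column holding an "items" array):
SELECT t.id, f.value AS item
FROM json_table t,
     LATERAL FLATTEN(INPUT => t.v:items) f;  -- one output row per array element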

Question 2: Incorrect Which are the options available for SECURITYADMIN in the Snowflake UI ribbon? (Choose all applicable) -Data Exchange -Notifications -Account -Warehouses -History

-Account -Warehouses -History Explanation The Notifications option is only available for ACCOUNTADMIN.

JDBC

ODBC is an SQL-based Application Programming Interface (API) created by Microsoft that is used by Windows software applications to access databases via SQL. JDBC is an SQL-based API created by Sun Microsystems to enable Java applications to use SQL for database access. Java Database Connectivity (JDBC) is an application programming interface (API) for the programming language Java, which defines how a client may access a database. It is a Java-based data access technology used for Java database connectivity. It is part of the Java Standard Edition platform, from Oracle Corporation Java™ database connectivity (JDBC) is the JavaSoft specification of a standard application programming interface (API) that allows Java programs to access database management systems. The JDBC API consists of a set of interfaces and classes written in the Java programming language.

ODBC

Open Database Connectivity (ODBC) is a specification for an application programming interface (API) that enables applications to access multiple database management systems using Structured Query Language (SQL). ODBC is an SQL-based Application Programming Interface (API) created by Microsoft that is used by Windows software applications to access databases via SQL. JDBC is an SQL-based API created by Sun Microsystems to enable Java applications to use SQL for database access.

Tri-Secret Secure Tri-Secret Secure is the combination of a Snowflake-maintained key and a customer-managed key in the cloud provider platform that hosts your Snowflake account to create a composite master key to protect your Snowflake data. The composite master key acts as an account master key and wraps all of the keys in the hierarchy; however, the composite master key never encrypts raw data. If the customer-managed key in the composite master key hierarchy is revoked, your data can no longer be decrypted by Snowflake, providing a level of security and control above Snowflake's standard encryption. This dual-key encryption model, together with Snowflake's built-in user authentication, enables the three levels of data protection offered by Tri-Secret Secure.

Requires Business Critical Edition (or higher).

SHOW GRANTS Lists all access control privileges that have been explicitly granted to roles, users, and shares. For more information about privileges and roles, see Access Control in Snowflake. For more information about shares, see Introduction to Secure Data Sharing.

SHOW GRANTS
SHOW GRANTS ON ACCOUNT
SHOW GRANTS ON <object_type> <object_name>
=> List all the roles granted to a user (e.g. the demo user) or all privileges granted to a role (e.g. the analyst role):
SHOW GRANTS TO { ROLE <role_name> | USER <user_name> | SHARE <share_name> }
=> List all roles and users who have been granted the role (or share):
SHOW GRANTS OF ROLE <role_name>
SHOW GRANTS OF SHARE <share_name>
=> List all privileges granted on future objects in the schema or database:
SHOW FUTURE GRANTS IN SCHEMA { <schema_name> }
SHOW FUTURE GRANTS IN DATABASE { <database_name> }

Question 47: Correct Which command will help you get the lists of pipes for which you have access privileges? LIST PIPE LIST PIPES SHOW PIPE SHOW PIPES SELECT PIPE()

SHOW PIPES https://docs.snowflake.com/en/sql-reference/sql/show-pipes.html Lists the pipes for which you have access privileges. This command can be used to list the pipes for a specified database or schema (or the current database/schema for the session), or your entire account. See also: ALTER PIPE , CREATE PIPE , DESCRIBE PIPE , DROP PIPE

Question 66: Incorrect Which SQL command will give the list of all the warehouses in an account? DISPLAY WAREHOUSE; LIST WAREHOUSE; LIST WAREHOUSES; SHOW WAREHOUSES; SHOW WAREHOUSE;

SHOW WAREHOUSES; Explanation SHOW WAREHOUSES is the right command and you can add LIKE as well to narrow the list.

Question 100: Correct Normally a warehouse begins to consume credits once all the servers are provisioned for the warehouse. FALSE TRUE

TRUE - A warehouse begins to consume credits once all the compute resources are provisioned for the warehouse. -In a rare instance when some of the compute resources fail to provision, the warehouse only consumes credits for the provisioned compute resources. -Once the remaining compute resources are successfully provisioned, the warehouse starts consuming credits for all requested compute resources. Snowflake does not begin executing SQL statements submitted to a warehouse until all of the compute resources for the warehouse are successfully provisioned, unless any of the resources fail to provision: -If any of the compute resources for the warehouse fail to provision during start-up, Snowflake attempts to repair the failed resources. -During the repair process, the warehouse starts processing SQL statements once 50% or more of the requested compute resources are successfully provisioned.

Question 17: Correct Compute can be scaled up, down, out, or in and there is no effect on storage used. TRUE FALSE

TRUE Explanation Compute and Storage are decoupled in Snowflake.

Does Snowflake have a recommended file size for loading?

To optimize the number of parallel operations for a load, we recommend aiming to produce data files roughly 100MB to 250MB in size, compressed. Splitting large files into a greater number of smaller files distributes the load among the servers in an active warehouse and increases performance.

Question 12: Incorrect Which of these must return a value? -Both -User-Defined Function -Stored Procedure

User-Defined Function A Stored Procedure may or may not return a value, but a UDF must return a value.
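A minimal sketch of a SQL UDF, which must declare and return a value (function name is hypothetical):
CREATE OR REPLACE FUNCTION add_one(x NUMBER)
  RETURNS NUMBER
  AS 'x + 1';
SELECT add_one(41);  -- returns 42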

Question 7:Snowflake supports many methods of authentication. Which are the supported authentication methods in ALL Snowflake Editions? 1. MFA (Multi-factor authentication) 2.SSO 3.Only MFA is supported by all the Snowflake editions 4.OAuth 5.Only MFA and SSO are supported by all the Snowflake editions

1. MFA (Multi-factor authentication) 2.SSO 4.OAuth https://docs.snowflake.com/en/user-guide/admin-security.html Account/user authentication for all editions: -Key Pair Authentication & Key Pair Rotation for increased security with client authentication. -MFA (multi-factor authentication) for increased security for account access by users. -OAuth for authorized account access without sharing or storing user login credentials. -Support for user SSO (single sign-on) through federated authentication. -Key-Pair Authentication as an alternative to basic authentication (i.e. username and password) and key-pair rotation to support multiple active keys.

??Question 16: Incorrect How many files can each server in a cluster process in parallel? 1.8 Files 2.16 Files 3.Depends on how large the file is. If the file is 32 MB in size, the server can process 4 such files in parallel 4.4 Files 5.1 File

1.8 Files Explanation Each server in a cluster can process 8 files in parallel. E.g., an XS size (1 server) warehouse can process 8 files; an M size (4 servers) warehouse can process 32 files. If you are ingesting fewer than 8 files per server, you are wasting compute resources.

Question 72: Which statements accurately describes the Snowflake Data Sharing? 1.A share can contain more than one database 2.When creating views in a share, secured views are required 3.A share can't be cloned by Consumer 4.Data sharing is only supported between accounts in the same snowflake region

1.A share can contain more than one database 2.When creating views in a share, secured views are required 3.A share can't be cloned by Consumer 4.Data sharing is only supported between accounts in the same snowflake region https://docs.snowflake.com/en/user-guide/data-sharing-provider.html#:~:text=A%20share%20can%20include%20data%20from%20multiple%20databases. https://docs.snowflake.com/en/user-guide/data-sharing-mutiple-db.html According to the Snowflake docs, a share can contain data from more than one database via secure views.

Who can create shares? 1.ACCOUNTADMIN 2.ROLE with CREATE SHARES global Privilege

1.ACCOUNTADMIN 2.ROLE with CREATE SHARES global Privilege You must use the ACCOUNTADMIN role or a role granted the CREATE SHARES global privilege.

Question 26: Incorrect Which SQL statement will not consume Warehouse credit? 1.ALTER WAREHOUSE MY_WAREHOUSE SET WAREHOUSE_SIZE = "SMALL"; 2.SHOW WAREHOUSES; 3.DROP WAREHOUSE MY_WAREHOUSE; 4.SHOW WAREHOUSE LIKE '%DEMO';

1.ALTER WAREHOUSE MY_WAREHOUSE SET WAREHOUSE_SIZE = "SMALL"; 2.SHOW WAREHOUSES; 3.DROP WAREHOUSE MY_WAREHOUSE; 4.SHOW WAREHOUSE LIKE '%DEMO'; None of these SQL statements needs a running warehouse, as the results come from the metadata in the cloud services layer.

Question 76: Incorrect Which of the following is a common use case for Cloning in Snowflake? (Select all that apply) 1.Agile Release and Development 2.Data Life Cycle Management 3.Data Encryption Protection 4."Point in Time" Snapshot

1.Agile Release and Development 2.Data Life Cycle Management 4."Point in Time" Snapshot A clone is a "point in time version" of the table data as of the time the clone was made. Cloning is an efficient and cost-effective approach for code migration for Agile Release Management. Cloning also helps in data life cycle management.

Question 94: A user can connect Snowflake in multiple ways. Please select 1.Browser-based web interface 2.Command line clients (e.g. SnowSQL) 3.Any 3rd-party partner that can connect to Snowflake 4.Any client application connected via JDBC or ODBC

1.Browser-based web interface 2.Command line clients (e.g. SnowSQL) 3.Any 3rd-party partner that can connect to Snowflake 4.Any client application connected via JDBC or ODBC https://docs.snowflake.com/en/user-guide-connecting.html These topics provide general information about the 3rd-party tools and technologies that form the extended ecosystem for connecting to Snowflake. They also provide detailed installation, configuration, and usage information for the Snowflake-provided clients (CLI, connectors, and drivers) that form the core of the Snowflake ecosystem Connectors & Drivers - Detailed instructions for installing, configuring, and using the Snowflake-provided drivers and connectors for Python, Spark, JDBC, ODBC, and other clients:

Question 99: Correct Which of the following are valid context functions? 1.CURRENT_SESSION( ) 2.CURRENT_REGION( ) 3.CURRENT_CLIENT( ) 4.CURRENT_WORKSHEET( ) 5.CURRENT_CLOUD_INFRASTRUCTURE( )

1.CURRENT_SESSION( ) 2.CURRENT_REGION( ) 3.CURRENT_CLIENT( ) CURRENT_WORKSHEET() and CURRENT_CLOUD_INFRASTRUCTURE() are not valid context functions.
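A quick check of the three valid context functions:
SELECT CURRENT_SESSION(), CURRENT_REGION(), CURRENT_CLIENT();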

Question 53: If you find a data-related tool that is not listed as part of the Snowflake ecosystem, what industry standard options could you check for as a way to easily connect to Snowflake? (Select 2) 1.Check to see if the tool can connect to other solutions via JDBC 2.Check to see if the tool can connect to other solutions via ODBC 3.Check to see if there is a petition in the community to create a driver 4.Check to see if you can develop a driver and put it on GitHub

1.Check to see if the tool can connect to other solutions via JDBC 2.Check to see if the tool can connect to other solutions via ODBC ODBC (Open Database Connectivity) and JDBC (JAVA Database Connectivity) are the industry standard options to connect Snowflake easily.

Question 41: Incorrect What are the security layers that Snowflake takes care of? 1.Data Protection 2.Access 3.Authorization 4.Infrastructure 5.Authentication

1.Data Protection 2.Access 3.Authorization 5.Authentication Explanation Infrastructure security is managed by the cloud provider.

Which of the following workloads is Snowflake suited for? 1.Data Sharing and Exchanges 2.Data Engineering 3.Data Warehouse 4.Data Applications 5.Data Lakes 6.Data Science

1.Data Sharing and Exchanges 2.Data Engineering 3.Data Warehouse 4.Data Applications 5.Data Lakes 6.Data Science Snowflake is not just a Data Warehouse platform. It is a data platform which can handle all of these workloads.

Question 98: Snowflake calculates usage of data stored in the system for: 1.Database tables, including historical data for Time Travel 2.Files staged for bulk data loading/unloading 3.Fail-safe for database tables 4.Cloud storage used in External Stage 5.Clones of database tables that reference data deleted in the table that owns the clones

1.Database tables, including historical data for Time Travel 2.Files staged for bulk data loading/unloading 3.Fail-safe for database tables 5.Clones of database tables that reference data deleted in the table that owns the clones https://docs.snowflake.com/en/user-guide/admin-usage-billing.html https://docs.snowflake.com/en/user-guide/tables-storage-considerations.html#cloning-tables-schemas-and-databases For example, when a clone is created of a table, the clone utilizes no data storage because it shares all the existing micro-partitions of the original table at the time it was cloned; however, rows can then be added, deleted, or updated in the clone independently from the original table. Each change to the clone results in new micro-partitions that are owned exclusively by the clone and are protected through Continuous Data Protection (CDP).

Question 6: Incorrect How do you choose the right warehouse size to achieve the best results for query processing? 1.Execute relatively homogenous queries on the same warehouse 2.Execute varieties of queries on the same warehouse to achieve the best result

1.Execute relatively homogenous queries on the same warehouse To achieve the best results, try to execute relatively homogeneous queries (size, complexity, data sets, etc.) on the same warehouse; executing queries of widely-varying size and/or complexity on the same warehouse makes it more difficult to analyze warehouse load, which can make it more difficult to select the best size to match the size, composition, and number of queries in your workload.

Question 2: Correct Increasing the size of a warehouse always improves data loading performance. 1.FALSE 2.TRUE

1.FALSE Increasing the size of a warehouse does not always improve data loading performance. Data loading performance is influenced more by the number of files being loaded (and the size of each file) than the size of the warehouse.

Which of the following object types are child objects within schemas? 1.File Formats 2.Stored Procedures 3.Roles 4.User Defined Functions 5.Sequences 6.Stages

1.File Formats 2.Stored Procedures 4.User Defined Functions 5.Sequences 6.Stages

Question 9: Incorrect Which are the correct statements about Snowflake data Storage? 1.Hybrid Columnar Storage 2.Natural data clustering and optimization 3.Automatic Micro-partitioning 4.Encryption on demand

1.Hybrid Columnar Storage 2.Natural data clustering and optimization 3.Automatic Micro-partitioning All data at rest or in motion is always encrypted, so encryption is not "on demand".

Question 69: Incorrect John wants to create a warehouse which should not start immediately. Which property will help him create a warehouse in suspended mode? 1.INITIALLY_SUSPENDED = TRUE 2.AUTO_SUSPEND = INITIAL 3.INITIATE = FALSE 4.AUTO_START = FALSE

1.INITIALLY_SUSPENDED = TRUE INITIALLY_SUSPENDED = TRUE will not start the warehouse after creation. By default it is set to FALSE. INITIALLY_SUSPENDED = TRUE | FALSE Specifies whether the warehouse is created initially in the 'Suspended' state.
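A minimal sketch (warehouse name and size are hypothetical):
CREATE OR REPLACE WAREHOUSE john_wh
  WAREHOUSE_SIZE = 'XSMALL'
  INITIALLY_SUSPENDED = TRUE;  -- warehouse is created but does not start running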

Stored Procedure supports 1.JavaScript 2.SQL 3.Java (Using Snowpark) 4.Go 5.Python(using Snowpark) 6.Scala (using Snowpark)

1.JavaScript 2.SQL 3.Java (Using Snowpark) 5.Python(using Snowpark) 6.Scala (using Snowpark) https://docs.snowflake.com/en/sql-reference/stored-procedures-overview.html

Question 57: Incorrect In what order are servers removed when a warehouse is suspended or resized? 1.LIFO ("LAST IN, FIRST OUT") 2.FIFO ("FIRST IN, FIRST OUT") 3.LILO ("LAST IN, LAST OUT") 4.FILO ("FIRST IN, LAST OUT")

1.LIFO ("LAST IN, FIRST OUT") Explanation Each server in a warehouse cluster has a position in the warehouse that is maintained, even when the warehouse is suspended or resized. This position impacts how servers are added and removed because servers are always removed in reverse order of when they were added (aka LIFO, "Last In, First Out").

Question 83: Incorrect Which of the following are performed by the Cloud Services layer? (Select 4) 1.Metadata Storage 2.Metadata Management 3.User Authentication 4.Data Security 5.Availability Zone Management.

1.Metadata Storage 2.Metadata Management 3.User Authentication 4.Data Security Availability zone management is handled by the cloud provider (AWS, Azure, GCP), not by the Snowflake Cloud Services layer.

Question 40: Incorrect Micro-partitions are IMMUTABLE. What features make micro-partitions IMMUTABLE? 1.Micro-Partitions are not editable. 2.When new data gets ingested Snowflake tries to insert data in existing micro-partition to save the storage. 3.Snowflake creates new micro-partitions every time there is data change. 4.Snowflake purges the micro-partitions as soon as you delete any record from table.

1.Micro-Partitions are not editable. 3.Snowflake creates new micro-partitions every time there is a data change. Snowflake creates new micro-partitions in case of any data change or newly ingested data and also keeps the old versions of micro-partitions (they are not purged immediately when records are deleted). The old versions of micro-partitions are used for Time-Travel and Fail-safe. The services layer stores metadata about every micro-partition, such as MIN/MAX ranges of values in each column, number of distinct values, row count, etc.

When choosing a geographic deployment region, what factors might an enrollee consider? 1.Number of availability zones within a region 2.Additional fees charged for regions with geo-political unrest 3.Proximity to the point of service 4.End-user perceptions of glamorous or trendy geographic locations

1.Number of availability zones within a region 3.Proximity to the point of service It is better to choose the nearest region, with a higher number of availability zones, to avoid lag or latency.

Question 19: Incorrect Once the Snowflake account is provisioned, a user can access Snowflake using which of the following methods? 1.Browser-based web interface 2.Any client application connected via JDBC or ODBC 3.Any 3rd-party partner that can connect to Snowflake 4.SnowSQL, the Snowflake command line client

1.Browser-based web interface 2.Any client application connected via JDBC or ODBC 3.Any 3rd-party partner that can connect to Snowflake 4.SnowSQL, the Snowflake command line client All of these methods can be used to access Snowflake once the account is provisioned.

Question 22: Incorrect Which file formats are supported by Snowflake for unloading data from Snowflake? 1.PARQUET 2.CSV, TSV 3.AVRO 4.XML 5.JSON

1.PARQUET 2.CSV, TSV 5.JSON https://docs.snowflake.com/en/user-guide/intro-summary-unloading.html

Question 12: Correct What types of Infrastructure Security are available to ensure customer data is secured? 1.Regional data centers 2.Cloud provider's redundancy 3.Cloud Provider's physical security 4.Customer Admin takes the backup of data into local data disk

1.Regional data centers 2.Cloud provider's redundancy 3.Cloud Provider's physical security Explanation Snowflake leverages all the cloud provider's security measures, such as physical and digital security, data replication across multiple regions, etc.

!!!Question 105: Correct Which cache type gets purged regularly? 1.Results Cache 2.Metadata Cache 3.Warehouse Cache

1.Results Cache The Results Cache is purged after 24 hours unless the query is re-run within 24 hours; the retention can extend up to 31 days. If you also see an option for Warehouse and it is mentioned that AUTO_SUSPEND is set, then Warehouse Cache will be your best answer.

Question 47: Incorrect Which type of the object key is only used for decryption? 1.Retired Key 2.None of these 3.Active key 4.Destroyed key

1.Retired Key Explanation Active Key is used for both encryption and decryption. Retired Key is used for decryption only. Destroyed Key is no longer used.

Question 68: Incorrect As an ACCOUNTADMIN, how can you find the credit usage of a warehouse? 1.Run SQL query on WAREHOUSE_METERING_HISTORY view under ACCOUNT_USAGE Schema 2.Run SQL query on METERING_HISTORY view under ACCOUNT_USAGE Schema 3.Run SQL query on ACCOUNT_USAGE table under Snowflake Database 4.Using Web interface > Account > Usage

1.Run SQL query on WAREHOUSE_METERING_HISTORY view under ACCOUNT_USAGE Schema 2.Run SQL query on METERING_HISTORY view under ACCOUNT_USAGE Schema 4.Using Web interface > Account > Usage -using SQL - ACCOUNT_USAGE: - Query the METERING_HISTORY to view hourly usage for an account. - Query the METERING_DAILY_HISTORY to view daily usage for an account. - Query the WAREHOUSE_METERING_HISTORY to view usage for a warehouse. - Query the QUERY_HISTORY to view usage for a job. -INFORMATION_SCHEMA: - Query the QUERY_HISTORY table function.
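A minimal sketch of option 1, showing credit usage per warehouse over the last 7 days (requires a role with access to the SNOWFLAKE database):
SELECT warehouse_name, SUM(credits_used) AS credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits DESC;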

Question 44: Incorrect We loaded an XML file that included a header entity called <dataset>. In order to bypass this entity and treat each EMPLOYEE object as a separate record (loading each into a separate row), what FILE_FORMAT property did we change? 1.STRIP OUTER ELEMENT 2.IGNORE UTF-8 ERROR 3.PRESERVE SPACE 4.DISABLE SNOWFLAKE DATA 5.DISABLE AUTO CONVERT

1.STRIP OUTER ELEMENT https://docs.snowflake.com/en/user-guide/data-load-external-tutorial-create-file-format.html For XML, enable the STRIP_OUTER_ELEMENT file format option for the COPY INTO <table> command to remove the outer element (here, <dataset>) and load the records into separate table rows; the analogous option for JSON is STRIP_OUTER_ARRAY.
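A minimal sketch for the XML case (format, table, stage, and file names are hypothetical):
CREATE OR REPLACE FILE FORMAT my_xml_format
  TYPE = 'XML'
  STRIP_OUTER_ELEMENT = TRUE;  -- drops the outer <dataset> element
COPY INTO xml_table
  FROM @my_stage/employees.xml
  FILE_FORMAT = (FORMAT_NAME = 'my_xml_format');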

Select all the features Snowflake supports: 1.Semi-Structured data 2.Dedicated resource for compute 3.Unstructured data 4.SQL-based transformation 5.A graphical UI to build data pipelines

1.Semi-Structured data 2.Dedicated resource for compute 3.Unstructured data 4.SQL-based transformation https://docs.snowflake.com/en/user-guide/unstructured.html This topic introduces key concepts and provides instructions for accessing and sharing unstructured data files. Snowflake supports the following actions: Securely access data files located in cloud storage. Share file access URLs with collaborators and partners. Load file access URLs and other file metadata into Snowflake tables.

Security validations: 1.Soc 1 Type II and Soc 2 Type II compliance. 2.Support for HIPAA compliance. 3.PCI DSS compliance. 4.HITRUST CSF compliance (see supported regions). 5.FedRAMP Moderate compliance (in the US government regions). 6.IRAP Protected compliance (in specified Asia Pacific regions). 7.Cloud GBDQ

1.Soc 1 Type II and Soc 2 Type II compliance. 2.Support for HIPAA compliance. 3.PCI DSS compliance. 4.HITRUST CSF compliance (see supported regions). 5.FedRAMP Moderate compliance (in the US government regions). 6.IRAP Protected compliance (in specified Asia Pacific regions).

Question 92: Correct Which of the following have drivers/connectors (or information about where to find them) available via Help->Downloads in the Snowflake WebUI? (Select 4) 1.Spark 2.Node.js 3.Go 4.C 5.Kafka 6.JDBC

1.Spark 2.Node.js 3.Go 6.JDBC It also includes the Python connector, the ODBC driver, and SnowSQL. Spark is also a connector.

Question 18: Incorrect ??? Which are the types of Virtual Warehouses in Snowflake? 1.Standard 2.Premier 3.Multi-Cluster 4.Enterprise

1.Standard 3.Multi-Cluster 1 - Standard Warehouse - a single compute cluster warehouse. 2 - Multi-Cluster Warehouse - can spawn additional compute clusters (scale out) to manage changes in user and concurrency needs. A virtual warehouse, often referred to simply as a "warehouse", is a cluster of compute resources in Snowflake. It provides the required resources, such as CPU, memory, and temporary storage, to perform operations in a Snowflake session.

Question 97: Incorrect Zero Copy Cloning allows users to have multiple copies of your data without the additional cost of storage usually associated with replicating data. Which other statements about the Cloning features in Snowflake are True? 1.The clone is a pointer to the original table data 2.Clone is a "point in time version" of the table data as of the time the clone was made 3.Any new record in the parent table gets available in the cloned table 4.Cloning is an efficient and cost effective approach for code migration for Agile Release Management

1.The clone is a pointer to the original table data 2.Clone is a "point in time version" of the table data as of the time the clone was made 4.Cloning is an efficient and cost effective approach for code migration for Agile Release Management

Question 4: Incorrect Which are the different ways a user can get permissions to a database object? 1.The user is granted a role that inherits another role with permission to the object 2.Grant the role with the permissions on the database object to the user 3.The user creates the objects and the current role becomes the owner of the object 4.Grant the user permissions to the object

1.The user is granted a role that inherits another role with permission to the object 2.Grant the role with the permissions on the database object to the user 3.The user creates the objects and the current role becomes the owner of the object https://docs.snowflake.com/en/user-guide/security-access-control-considerations.html Following best practices for role hierarchies, grant the highest-level functional roles in a role hierarchy to the system administrator (SYSADMIN) role. Snowflake supports Role-Based Access control. When creating roles that will serve as the owners of objects in the system, Snowflake recommends creating a hierarchy of custom roles, with the top-most custom role assigned to the system role SYSADMIN. This role structure allows system administrators to manage all objects in the account, such as warehouses and database objects, while restricting management of users and roles to the SECURITYADMIN or ACCOUNTADMIN roles.

Question 90: Incorrect How does Snowflake provide continuous availability? 1.Time Travel Storage 2.Snowflake synchronizes data across availability zones 3.Fail Safe Storage 4.Customer has to backup the data locally out of cloud

1.Time Travel Storage 2.Snowflake synchronizes data across availability zones 3.Fail Safe Storage Snowflake transparently synchronizes data across availability zones, which are geographically separated and on separate power grids. There is no downtime during updates and patches. Time-Travel and Fail-safe features help customers recover data within a defined retention period.

Question 28: Incorrect What are the best practices for JOIN on unique keys? 1.Use distinct keys 2.Avoid MANY-TO-MANY Join 3.Avoid unintentional cross join

1.Use distinct keys 2.Avoid MANY-TO-MANY Join 3.Avoid unintentional cross join

Question 62: Correct Which is not a function of Metadata Management in the Cloud Services Layer? 1.Used for storing physical micro-partitions 2.Used for Time Travel and Cloning 3.Stores metadata as data is loaded into the system 4.Handles queries that can be processed completely from metadata

1.Used for storing physical micro partition

Question 61: Incorrect Which are the correct statements for a Virtual Warehouse? 1.Warehouse are Sized in "t-shirt" sizing 2.Size determines the number of servers that comprise each cluster in a warehouse 3.Each larger size is double the preceding, in both VMs in the cluster and in Snowflake credits consumed 4.Standard Virtual Warehouse can Scale-Out

1.Warehouses are sized in "t-shirt" sizing 2.Size determines the number of servers that comprise each cluster in a warehouse 3.Each larger size is double the preceding, in both VMs in the cluster and in Snowflake credits consumed A Standard virtual warehouse is a single-cluster warehouse, so it can't scale out; it can only scale up to a bigger size. All other options mentioned here are true.

Question 47: Can you share data with multiple consumers with row based access control? 1.YES 2.NO

1.YES Secure View with Account mapping using CURRENT_ACCOUNT(); https://docs.snowflake.com/en/user-guide/data-sharing-secure-views.html#step-1-create-data-and-mapping-tables-in-private-schema A "mapping table" (also in the "private" schema), which is only required if you wish to share the data in the base table with multiple consumer accounts and share specific rows in the table with specific accounts.

Question 59: Correct You have a LARGE sized warehouse 'MY_WAREHOUSE'. You want to change the size to SMALL. Which SQL statement will help you to change the size of warehouse? 1.UPDATE MY_WAREHOUSE SET SIZE = "SMALL"; 2.ALTER WAREHOUSE MY_WAREHOUSE SET WAREHOUSE_SIZE = "SMALL"; 3.ALTER MY_WAREHOUSE SET SIZE = "SMALL"; 4.ALTER WAREHOUSE MY_WAREHOUSE SET SIZE = "SMALL";

2.ALTER WAREHOUSE MY_WAREHOUSE SET WAREHOUSE_SIZE = "SMALL";

Question 29: Incorrect At what places can you specify a File Format? 1.As part of an Internal Table stage 2.As part of a table definition 3.All of these 4.As part of a named stage or pipe definition 5.In the COPY INTO command

2.As part of a table definition 4.As part of a named stage or pipe definition 5.In the COPY INTO command A File Format can be specified in the COPY INTO command, as part of a named stage or pipe definition, as well as part of a table definition. If file format options are specified in multiple locations, the load operation applies the options in the following order of precedence: 1.COPY INTO statement 2.Stage definition 3.Table definition
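A minimal sketch of the three places (format, stage, and table names are hypothetical; the COPY INTO setting wins if all are specified):
CREATE OR REPLACE FILE FORMAT my_csv_format TYPE = 'CSV' SKIP_HEADER = 1;
CREATE OR REPLACE STAGE my_stage FILE_FORMAT = my_csv_format;                                       -- stage definition
CREATE OR REPLACE TABLE my_table (c1 STRING) STAGE_FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');   -- table definition
COPY INTO my_table FROM @my_stage FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');                    -- COPY INTO statement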

Question 46: Correct SQL clause which helps define the clustering key: 1.CLUSTERING ON 2.CLUSTER BY 3.CLUSTERING BY 4.CLUSTER ON

2.CLUSTER BY create or replace table t1 (c1 date, c2 string, c3 number) cluster by (c1, c2);

???Question 49: Incorrect What are the key considerations for using warehouses effectively and efficiently? 1.Start with smallest warehouse always and Scale up if the performance is poor 2.Experiment with different types of queries and different warehouse sizes to determine the combinations that best meet your specific query needs and workload. 3.Don't focus on warehouse size. Snowflake utilizes per-second billing, so you can run larger warehouses (Large, X-Large, 2X-Large, etc.) and simply suspend them when not in use 4.Snowflake utilizes per-hour billing so, customer doesn't have to worry about the warehouse size.

2.Experiment with different types of queries and different warehouse sizes to determine the combinations that best meet your specific query needs and workload. 3.Don't focus on warehouse size. Snowflake utilizes per-second billing, so you can run larger warehouses (Large, X-Large, 2X-Large, etc.) and simply suspend them when not in use

Question 83: Snowflake does not begin executing SQL statements submitted to a warehouse until all of the servers for the warehouse are successfully provisioned. If any server fails to provision, Snowflake attempts to repair the failed server. The warehouse starts processing statements once 100% of the requested servers are successfully provisioned. 1.TRUE 2.FALSE

2.FALSE Starting/Resuming a Warehouse Warehouses consume credits while running: -A warehouse begins to consume credits once all the compute resources are provisioned for the warehouse. ->In a rare instance when some of the compute resources fail to provision, the warehouse only consumes credits for the provisioned compute resources. ->Once the remaining compute resources are successfully provisioned, the warehouse starts consuming credits for all requested compute resources. Snowflake does not begin executing SQL statements submitted to a warehouse until all of the servers for the warehouse are successfully provisioned, unless any of the servers fail to provision: - If any of the servers for the warehouse fail to provision during start-up, Snowflake attempts to repair the failed server(s) . - During the repair process, the warehouse starts processing SQL statements once 50% or more of the requested servers are successfully provisioned. https://docs.snowflake.com/en/user-guide/warehouses-tasks.html

Question 69: Correct New or modified data in tables in a share are not immediately available to all consumers who have created a database from the share. 1.TRUE 2.FALSE

2.FALSE New or modified data in tables in a share are immediately available to all consumers who have created a database from a share. You must grant usage on new objects created in a database in a share in order for them to be available to consumers.

Question 48: Correct John has the SECURITYADMIN role. He created a custom DBA_ROLE. He granted the SYSADMIN role to DBA_ROLE. He created a user 'Monica'. John granted DBA_ROLE to Monica. Monica creates a database Monica_DB. Monica then created a table T1 in Monica_DB under the PUBLIC schema. What should John do to access table T1 created by Monica? 1.USE ROLE dba_role; USE DATABASE monica_db; Select * from t1; 2.GRANT ROLE DBA_ROLE TO John; USE ROLE DBA_ROLE; USE DATABASE monica_db; Select * from t1; 3.USE ROLE SECURITYADMIN; USE DATABASE monica_db; Select * from t1; 4.GRANT ROLE DBA_ROLE TO John; USE DATABASE monica_db; Select * from t1;

2.GRANT ROLE DBA_ROLE TO John; USE ROLE DBA_ROLE; USE DATABASE monica_db; Select * from t1; It doesn't matter that John created the DBA_ROLE. If John wants to access an object created by DBA_ROLE, he needs to GRANT DBA_ROLE to himself and then use that role.

Question 37: Incorrect The BI group is complaining about their queries taking too long to run. Checking the virtual warehouse information shows the queued time is pretty high. What is the best way to fix this issue? 1.Determine which users have the high priority queries and set the other users 2.Increase the virtual warehouse MAX_CLUSTER_COUNT property 3.STATEMENT_QUEUED_TIMEOUT_IN_SECONDS parameter to a low value to cancel those queries if they get in the queue 4.Increase the virtual warehouse MAX_CONCURRENCY_LEVEL parameter 5.Provide a virtual warehouse for every user in the group 6.Increase the size of the virtual warehouse

2.Increase the virtual warehouse MAX_CLUSTER_COUNT property Remember, queuing can be solved by scaling out (adding new clusters). MAX_CLUSTER_COUNT sets the maximum number of clusters a warehouse can run in case of high workload.

Question 80: Correct John has the same query which runs for around 40 minutes every hour. Most of the time the data in the table remains the same. What would be the best option for John: keep the warehouse running or set AUTO_SUSPEND? 1.John should set AUTO_SUSPEND to 40 minutes 2.John should not set AUTO_SUSPEND 3.John should set AUTO_SUSPEND to 10 minutes 4.John doesn't have to set AUTO_SUSPEND. Snowflake will take care of suspension as needed.

2.John should not set AUTO_SUSPEND Explanation It is better not to suspend the warehouse, as the cache gets dropped on suspension. The query is the same and the table data is not changing frequently, so it is better to utilize the warehouse cache as much as possible.

Question 23: Incorrect If another object gets added to a shared database, will the consumer be able to see it immediately? 1.Yes 2.No

2.No The consumer will only see objects that were granted to the share by the provider. For example, if the provider creates a new table and then runs GRANT SELECT on that table to the share, the consumer will be able to see the new object. See the sketch below.
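For illustration, a minimal sketch with hypothetical database, table, and share names showing how a provider exposes a newly created table:
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.new_orders TO SHARE sales_share;   -- only now is the table visible to consumers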

Question 16: Correct If you are defining a multi-column clustering key for a table, the order in which the columns are specified in the CLUSTER BY clause is important. As a general rule, Snowflake recommends: 1.Ordering the columns from highest cardinality to lowest cardinality 2.Ordering the columns from lowest cardinality to highest cardinality 3.Order doesn't matter

2.Ordering the columns from lowest cardinality to highest cardinality As a general rule, Snowflake recommends ordering the columns from lowest cardinality to highest cardinality. Putting a higher cardinality column before a lower cardinality column will generally reduce the effectiveness of clustering on the latter column.
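For illustration, a minimal sketch with hypothetical columns ordered from lowest to highest cardinality (region has few distinct values, customer_id the most):
ALTER TABLE orders CLUSTER BY (region, order_date, customer_id);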

Snowflake offers a rich set of drivers and connectors for connecting with external applications. Select the connectors from the list. (Choose 3) 1.Node.js 2.Python 3.Spark 4.JDBC 5.Kafka

2.Python 3.Spark 5.Kafka JDBC and Node.js are drivers, not connectors. Snowflake also provides the Kafka connector. The Kafka connector is not available in the Help > Download menu; its JAR file can be downloaded from https://www.confluent.io/hub/ or https://mvnrepository.com/artifact/com.snowflake

Q49.What is the most effective way to test whether clustering a table helped performance? 1.Use SYSTEM$CLUSTERING_INFORMATION. Check the total_constant_partition_count 2.Run a sample query before clustering and after to compare the results 3.Use SYSTEM$CLUSTERING_INFORMATION. Check the average_overlaps 4.Use SYSTEM$CLUSTERING_INFORMATION. Check the average_depth 5.Use the SYSTEM$CLUSTERING_DEPTH and check the depth of each column

2.Run a sample query before clustering and after to compare the results Also, Snowflake strongly recommends that you test a representative set of queries on the table to establish some performance baselines.
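For illustration, a minimal sketch under assumed names (a hypothetical SALES table and a representative query): run the query, cluster the table, then re-run the same query and compare elapsed time and partitions scanned in the Query Profile.
SELECT region, SUM(amount) FROM sales WHERE sale_date BETWEEN '2023-01-01' AND '2023-01-31' GROUP BY region;   -- baseline run
ALTER TABLE sales CLUSTER BY (sale_date);
-- after reclustering completes, re-run the same SELECT and compare the results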

Question 90: Correct Which SQL statements will need a running Warehouse and consume credit? Consider table created as - CREATE OR REPLACE TABLE T1 ( ID number(2), Name varchar(20), Salary number (11,2) ); 1.SELECT COUNT(*) FROM T1; 2.SELECT AVG(ID) FROM T1; 3.SELECT MIN(ID) FROM T1; 4.SELECT MAX(ID) FROM T1; 5.SELECT MIN(ID) FROM T1 WHERE NAME ='JOHN';

2.SELECT AVG(ID) FROM T1; 5.SELECT MIN(ID) FROM T1 WHERE NAME ='JOHN'; =>The Cloud Services layer does not store the average value of a column, so AVG gets computed and requires a running warehouse. =>The Cloud Services layer can answer queries like COUNT(*), MAX, and MIN from metadata without using a virtual warehouse, as long as there are no conditional filters. If there is a filter, a running warehouse is needed.

Question 67: Correct Which objects can be shared? 1.Standard View 2.Secure UDF 3.Table 4.Secure View

2.Secure UDF 3.Table 4.Secure View Data sharing is meant for secure access, so a standard view is not allowed to be shared.

Question 94: Correct The columns in the query history include the QueryID, the SQL Text, the Warehouse name, the Warehouse Size, the Session ID and others. Which column is a good indicator of whether a Warehouse was used (and Compute costs incurred) by a query? 1.Session ID 2.Size 3.Warehouse Name 4.QueryID

2.Size Explanation If the SIZE column is blank, the query didn't consume compute credits; the result was returned from cache or handled by the Cloud Services layer.

Question 13: Correct Decreasing the size of a running warehouse removes servers from the warehouse. When the servers are removed, the cache associated with the servers is dropped. 1.FALSE 2.TRUE

2.TRUE Decreasing the size of a running warehouse removes servers from the warehouse. When the servers are removed, the cache associated with the servers is dropped, which can impact performance in the same way that suspending the warehouse can impact performance after it is resumed. Keep this in mind when choosing whether to decrease the size of a running warehouse or keep it at the current size. In other words, there is a trade-off with regards to saving credits versus maintaining the server cache.

Question 63: Correct When new data is loaded into Snowflake, Snowflake automatically analyzes and compresses the data in the table. 1.FALSE 2.TRUE

2.TRUE Explanation Snowflake automatically analyzes and compresses data on load, finding the optimal compression scheme for each data type.

Question 93: Objects that are dropped from a shared database and then recreated with the same name are not immediately available in the share; you must execute grant usage on the objects to make them available 1.FALSE 2.TRUE

2.TRUE True, you need to GRANT privileges on a newly created object, even if it has the same name as an object you dropped. https://docs.snowflake.com/en/user-guide/data-sharing-provider.html#:~:text=A%20share%20can%20include%20data%20from%20multiple%20databases. =>New and modified rows in tables in a share (or in tables referenced by a view in a share) are available immediately to all consumers who have created a database from the share. Keep this in mind when updating these tables. =>A new object created in a database in a share is not automatically available to consumers. To make the object available to consumers, you must use the GRANT <privilege> ... TO SHARE command to explicitly add the object to the share. =>This also applies to objects that have been dropped from a database and then recreated with the same name in the database; the recreated object is treated as a new object and is, therefore, not accessible until the object has been explicitly granted the necessary privileges in the share.

Question 84: Incorrect Which tables will experience the most benefit from clustering? 1.Tables with sizes between the range of 1 GB to 10 GB compressed 2.Tables in the multi-terabyte (TB) range 3.Tables with sizes between the range of 100 MB to 1 GB compressed 4.All sizes of tables

2.Tables in the multi-terabyte (TB) range In general, tables in the multi-terabyte (TB) range will experience the most benefit from clustering, particularly if DML is performed regularly/continually on these tables. Such a table contains a large number of micro-partitions; typically, this means the table contains multiple terabytes (TB) of data. Before choosing to cluster a table, Snowflake strongly recommends that you test a representative set of queries on the table to establish some performance baselines. https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html#label-clustering-keys

Question 82: If you want to view query history older than 14 days, where can you go to view it? Choose one path and one "term" commonly used. 1.History-> Profile -> Account_Usage 2.The Account Usage Share 3.Account->Usage->History 4.The Usage Sharing Monitor 5.SNOWFLAKE (Database) -> ACCOUNT_USAGE (Schema) -> QUERY_HISTORY (Secure View)

2.The Account Usage Share 5.SNOWFLAKE (Database) -> ACCOUNT_USAGE (Schema) -> QUERY_HISTORY (Secure View) History is available for only 14 days through the UI. Older history can be found in ACCOUNT_USAGE.
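For illustration, a minimal sketch of querying history older than 14 days from the Account Usage share (keep in mind that ACCOUNT_USAGE views can lag real time):
SELECT query_id, query_text, warehouse_name, start_time
FROM snowflake.account_usage.query_history
WHERE start_time < DATEADD(day, -14, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;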

Question 7: Incorrect The compute resource used by Snowflake for data loading jobs can be provided by: 1.Hardware provisioned by user directly from cloud providers 2.User managed virtual warehouse 3.Snowflake managed service

2.User managed virtual warehouse 3.Snowflake managed service When loading with Snowpipe, the compute is provided by a Snowflake-managed service; for any other loading, it is provided by a user-managed virtual warehouse.

What is the maximum number of columns (or expressions) recommended for a clustering key? 1.7 to 8 2.12 to 16 3.3 to 4 4.The higher the number of columns (or expressions) in the key, the better the performance

3.3 to 4 A single clustering key can contain one or more columns or expressions. For most tables, Snowflake recommends a maximum of 3 or 4 columns (or expressions) per key. Adding more than 3-4 columns tends to increase costs more than benefits.

Question 101: Incorrect Monica is an admin and wants to see the usage of Fail-Safe. How can she get details on Fail-Safe usage? 1.Monica should contact Snowflake support 2.Fail Safe usage details are never available to customer 3.Admins can view Fail-safe use in the Snowflake Web UI under Account > Billing & Usage 4.None of these

3.Admins can view Fail-safe use in the Snowflake Web UI under Account > Billing & Usage

Question 53: Incorrect Snowflake data storage costs include which types of data? 1.Metadata 2.Cached Results 3.Data retained to enable data recovery (time travel and fail-safe) 4.Semi-Structured data - additional fees 5.Persistent data stored in permanent tables

3.Data retained to enable data recovery (time travel and fail-safe) 5.Persistent data stored in permanent tables All storage within Snowflake is billable in compressed format including the persistent data stored in permanent tables, time-travel and fail-safe. There is no additional fee for handling Semi-Structured data.

Question 56: Correct John has table T1 with the Time-Travel retention period set to 20 days. He increases the retention period by 10 days to make it 30 days. What impact will this have on the table data? Please select 2. 1.Changes will be effective ONLY for new data coming into Time-Travel; data that moved to Fail-safe after 20 days will now be available in Time-Travel for an additional 10 days 2.No impact on existing data which moved from the table to Time-Travel before the increase of the Time-Travel retention period 3.Data that would have been removed after 20 days is now retained for an additional 10 days before moving into Fail-safe 4.No impact on any data that is more than 20 days old and has already moved into Fail-safe

3.Data that would have been removed after 20 days is now retained for an additional 10 days before moving into Fail-safe 4.No impact on any data that is more than 20 days old and has already moved into Fail-safe Increasing the retention period causes data currently in Time Travel to be retained for the longer period; new data is retained for the increased retention period as well. Data that has already moved into Fail-safe is not affected.
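For illustration, the retention change John made could be applied as:
ALTER TABLE t1 SET DATA_RETENTION_TIME_IN_DAYS = 30;   -- retention beyond 1 day requires Enterprise Edition or higher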

!!!Which is not a function of the optimization activities of the Cloud Services layer? 1.Pruning using metadata about micro-partitions 2.Automatic statistics gathering 3.Handling queries that can be processed completely from metadata 4.Automatic JOIN order optimization

3.Handling queries that can be processed completely from metadata The Optimizer service has the following main functions: -Cost-based optimization (CBO), i.e., the SQL optimizer -Automatic JOIN order optimization -Automatic statistics gathering -Pruning using metadata about micro-partitions =>The Metadata Management service takes care of handling queries that can be processed completely from metadata, like SELECT MAX(ID) FROM T1;

Question 6: Incorrect Which vendors support Snowflake natively for federated authentication and SSO? 1.Google G Suite 2.Onelogin 3.Okta 4.Microsoft Azure Active Directory 5.Microsoft ADFS

3.Okta 5.Microsoft ADFS Explanation Okta and Microsoft ADFS (Active Directory Federation Services) provide native Snowflake support for federated authentication and SSO. The others are not native, but Snowflake supports them as long as they are SAML 2.0-compliant.

!!!Which of these can access database objects and issue SQL statements? 1.Both 2.User-Defined Function 3.Stored Procedure

3.Stored Procedure DDL and DML operations are not permitted in a UDF. A stored procedure can access database objects and issue SQL statements. User-defined functions (UDFs) let you extend the system to perform operations that are not available through the built-in, system-defined functions provided by Snowflake.

Question 2: Incorrect A user's default role is 1.The name used to log in to the WebUI 2.The user's default role is always PUBLIC 3.The role a user is set to each time the user logs in to the system 4.The user's default role changes each time the user logs in to the system.

3.The role a user is set to each time the user logs in to the system

??Question 1: How can you create a user in Snowflake without a password? 1.User creation without a password is not possible 2.Using Salesforce Web interface 3.Using SQL Command - CREATE USER

3.Using SQL Command - CREATE USER https://docs.snowflake.com/en/user-guide/admin-user-management.html#label-snowflake-password-policy Additionally, Snowflake allows creating users without an initial password to support business processes in which new users are not allowed to log into the system. If this occurs, the user's PASSWORD property value will be NULL. However, as a general rule, Snowflake expects that users are created with initial passwords.
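For illustration, a minimal sketch with a hypothetical user name:
CREATE USER svc_etl;   -- no PASSWORD property specified, so the user's PASSWORD value is NULL
-- a password can be added later if needed:
ALTER USER svc_etl SET PASSWORD = 'Str0ng#Passw0rd' MUST_CHANGE_PASSWORD = TRUE;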

Question 14: Correct How can you disable auto-suspend for a warehouse? 1.You can not disable AUTO SUSPEND in standard warehouse 2.You can not disable AUTO SUSPEND in multi-cluster warehouse 3.Using UI, selecting NEVER against AUTO SUSPEND 4.Specifying NULL in SQL

3.Using UI, selecting NEVER against AUTO SUSPEND 4.Specifying NULL in SQL
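For illustration, a minimal sketch with a hypothetical warehouse name:
ALTER WAREHOUSE etl_wh SET AUTO_SUSPEND = NULL;   -- disables auto-suspend (equivalent to choosing NEVER in the UI)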

If snowflake hostname is https://bbx99999.us-east-1.snowflakecomputing.com, what is the customer Full account name/account locator? 1.bbx99999 2.bbx99999.us-east-1.snowflakecomputing 3.bbx99999.us-east-1 4.us-east-1 5.snowflakecomputing

3.bbx99999.us-east-1 https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#label-account-name If you have any accounts that existed before the Organizations feature was enabled, Format 2 (Legacy): Account Locator in a Region is used as the account name: account_locator.cloud_region_id or account_locator.cloud_region_id.cloud, e.g. xy12345.us-east-2.aws.snowflakecomputing.com. The organization name is a name chosen by you at the time when your first Snowflake account is provisioned for your organization. The name must be unique across all other organizations in Snowflake. It can include uppercase letters and numbers, but not underscores or other delimiters. Each account name must be unique within your organization. You specify an account name when you create the account (see Creating an Account). While an account name uniquely identifies an account within your organization, it is not a unique identifier of an account across Snowflake organizations. To uniquely identify an account in Snowflake, you must prepend your organization name to the account name: organization_name-account_name

Question 23: Incorrect What is the best practice after creating a custom role in a Snowflake account? 1.Grant ownership of the role to itself so a member of the role can control access to the role 2.Grant the role to the USERADMIN role so users can be added to the role. 3.Grant PUBLIC to the role so all database objects owned by PUBLIC will be available to the new role 4.Grant ownership to the SYSADMIN role or a child role under SYSADMIN

4.Grant ownership to the SYSADMIN role or a child role under SYSADMIN Explanation A custom role is usually created for specific access to specific objects. As a best practice, grant ownership of the custom role to SYSADMIN or a child role under SYSADMIN so that SYSADMIN can control access to the objects.
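For illustration, a minimal sketch with a hypothetical role name:
USE ROLE USERADMIN;
CREATE ROLE reporting_role;
GRANT ROLE reporting_role TO ROLE SYSADMIN;                -- place the role in the hierarchy under SYSADMIN
GRANT OWNERSHIP ON ROLE reporting_role TO ROLE SYSADMIN;   -- let SYSADMIN control the role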

What type of privilege does a data consumer need to administer shares? 1.SHOW SHARE 2.EXPORT SHARE 3.LIST SHARE 4.IMPORT SHARES

4.IMPORT SHARES Data Consumer must use ACCOUNTADMIN role or a role granted the IMPORT SHARES global privilege to administer shares. https://docs.snowflake.com/en/user-guide/security-access-privileges-shares.html

Question 23: Incorrect Which privilege is needed to be able to monitor a warehouse? 1.OPERATE 2.USAGE 3.MODIFY 4.MONITOR

4.MONITOR Virtual warehouse privileges: MONITOR - Enables viewing current and past queries executed on a warehouse as well as usage statistics on that warehouse. USAGE - Enables using a virtual warehouse and, as a result, executing queries on the warehouse. If the warehouse is configured to auto-resume when a SQL statement (e.g. query) is submitted to it, the warehouse resumes automatically and executes the statement.
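For illustration, a minimal sketch with hypothetical warehouse and role names:
GRANT MONITOR ON WAREHOUSE bi_wh TO ROLE support_role;   -- allows viewing queries and usage statistics on bi_wh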

Question 106: Correct In the History page, a query shows Bytes Scanned with Assigned Partitions: 110, Scanned Partitions: 58, and Original Partitions: 110. Why did the optimizer show fewer partitions scanned than assigned? 1.The query was using an XLarge warehouse and could scan the partitions in parallel 2.One of the tables in the query was an external table and didn't have micro-partitions; the metadata for the table was out of date and there were really only 58 partitions total 3.During the execution of the query, new data was added to the table and the optimizer had to add those micro-partitions into the scan. 4.The static optimization determined the number of possible micro-partitions would be 110 but the dynamic optimization was able to prune some of the partitions from a joined table

4.The static optimization determined the number of possible micro-partitions would be 110 but the dynamic optimization was able to prune some of the partitions from a joined table Snowflake stores table data in well-clustered micro-partitions and targets only those micro-partitions that fall within the range of the query criteria. If a table is clustered well, Snowflake scans only a few of the micro-partitions.

Encryption Key Rotation: How often are Snowflake-managed keys automatically rotated by Snowflake?

All Snowflake-managed keys are automatically rotated by Snowflake when they are more than 30 days old. Active keys are retired, and new keys are created. When Snowflake determines the retired key is no longer needed, the key is automatically destroyed. When active, a key is used to encrypt data and is available for usage by the customer. When retired, the key is used solely to decrypt data and is only available for accessing the data.

Question 19: Correct Which SQL command will give the list of all the warehouses in an account?

SHOW WAREHOUSES;

Context Functions

This family of functions allows gathering information about the context in which the statement is executed. These functions are evaluated at most once per statement. select current_warehouse(), current_database(), current_schema(); select current_date, current_time, current_timestamp; CURRENT_CLIENT() Returns the version of the client from which the function was called. If called from an application using the JDBC or ODBC driver to connect to Snowflake, returns the version of the driver. CURRENT_ACCOUNT() Returns the account locator used by the user's current session. If you want to find the account name rather than the account locator, use SHOW ORGANIZATION ACCOUNTS.

Question 14: Incorrect If an account has federated authentication enabled, can Snowflake admins still maintain user IDs and passwords in Snowflake? No Yes

Yes https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#managing-users-with-federated-authentication-enabled -With federated authentication enabled for your account, Snowflake still allows maintaining and using Snowflake user credentials (login name and password). In other words: -Account and security administrators can still create users with passwords maintained in Snowflake. -Users can still log into Snowflake using their Snowflake credentials. -However, if federated authentication is enabled for your account, Snowflake does not recommend maintaining user passwords in Snowflake. Instead, user passwords should be maintained solely in your IdP (Identity Provider). -If you create a user with no password (or alter an existing user and remove their password), this effectively disables Snowflake authentication for the user. Without a password in Snowflake, a user cannot log in using Snowflake authentication and must use federated authentication instead. -Note that you cannot use the Snowflake web interface to create users with no passwords or remove passwords from existing users. You must use CREATE USER or ALTER USER.
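For illustration, a minimal sketch with a hypothetical user name showing how a password can be removed so the user must use federated authentication:
ALTER USER jsmith UNSET PASSWORD;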

