MISC

Following are some general recommendations when you are working with files and filegroups:

1. Most databases will work well with a single data file and a single transaction log file.
2. If you use multiple data files, create a second filegroup for the additional files and make that filegroup the default filegroup. In this way, the primary file will contain only system tables and objects (a sketch follows this list).
3. To maximize performance, create files or filegroups on as many different available disks as possible. Put objects that compete heavily for space in different filegroups.
4. Use filegroups to enable placement of objects on specific physical disks. Put different tables used in the same join queries in different filegroups. This improves performance through parallel disk I/O when searching for joined data.
5. Put heavily accessed tables and the nonclustered indexes that belong to those tables on different filegroups. This improves performance through parallel I/O if the files are located on different physical disks.
6. Do not put the transaction log file(s) on the same physical disk that holds the other files and filegroups.
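A minimal sketch of recommendation 2, using hypothetical database, file, and path names:

CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_Primary, FILENAME = 'D:\Data\Sales_Primary.mdf' ),
FILEGROUP UserData
    ( NAME = Sales_UserData1, FILENAME = 'E:\Data\Sales_UserData1.ndf' )
LOG ON
    ( NAME = Sales_Log, FILENAME = 'F:\Log\Sales_Log.ldf' );
GO
-- Make the user filegroup the default so new user objects stay out of the primary file.
ALTER DATABASE Sales MODIFY FILEGROUP UserData DEFAULT;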

Filegroups

1. Primary
2. Memory Optimized Data
3. Filestream
4. User-defined

The following workloads may benefit from batch mode on rowstore:

1. A significant part of the workload consists of analytical queries (as a rule of thumb, queries with operators such as joins or aggregates processing hundreds of thousands of rows or more), AND
2. The workload is CPU bound (if the bottleneck is IO, it is still recommended to consider a columnstore index, if possible), AND
3. Creating a columnstore index adds too much overhead to the transactional part of your workload, OR creating a columnstore index is not feasible because your application depends on a feature that is not yet supported with columnstore indexes.

The COLLATE clause can be specified at several levels. These include the following:

1. Creating or altering a database.
2. Creating or altering a table column.
3. Casting the collation of an expression.
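A brief sketch of the three levels, using hypothetical database, table, and collation choices:

-- 1. Database level
CREATE DATABASE MyDb COLLATE French_CI_AS;
-- 2. Column level
CREATE TABLE dbo.Customers ( Name nvarchar(50) COLLATE Latin1_General_CS_AS );
-- 3. Expression level (casts the collation for this sort only)
SELECT Name FROM dbo.Customers ORDER BY Name COLLATE Latin1_General_CI_AI;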

Using memory-optimized tables for

1. ETL (Extract Transform Load)
2. Tempdb object replacement
3. Caching and session state
4. Data ingestion, including IoT (Internet-of-Things)
5. High-throughput and low-latency transaction processing

https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/overview-and-usage-scenarios?view=sql-server-2017#in-memory-oltp-overview

The behavior of READ COMMITTED depends on the setting of the READ_COMMITTED_SNAPSHOT database option:

1. If READ_COMMITTED_SNAPSHOT is set to OFF (the default on SQL Server), the Database Engine uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation. The shared locks also block the statement from reading rows modified by other transactions until the other transaction is completed. The shared lock type determines when it will be released. Row locks are released before the next row is processed. Page locks are released when the next page is read, and table locks are released when the statement finishes.
2. If READ_COMMITTED_SNAPSHOT is set to ON (the default on Azure SQL Database), the Database Engine uses row versioning to present each statement with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used to protect the data from updates by other transactions.
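For example, switching a hypothetical database to row versioning for READ COMMITTED (the option change needs exclusive access, which ROLLBACK IMMEDIATE forces):

ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;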

Constructs that inhibit parallelism include:

1. Scalar UDFs
2. Remote Query
3. Dynamic cursors
4. Recursive queries
5. Table Valued Functions (TVFs)
6. TOP keyword

Memory Optimized Data

A memory-optimized filegroup is based on the filestream filegroup.

Partitioned Views

A partitioned view joins horizontally partitioned data from a set of member tables across one or more servers, making the data appear as if it came from one table. A view that joins member tables on the same instance of SQL Server is a local partitioned view.

Primary Key Constraints

A table can contain only one primary key constraint. A primary key cannot exceed 16 columns and a total key length of 900 bytes. The index generated by a primary key constraint cannot cause the number of indexes on the table to exceed 999 nonclustered indexes and 1 clustered index. If clustered or nonclustered is not specified for a primary key constraint, clustered is used if there is no clustered index on the table. All columns defined within a primary key constraint must be defined as not null. If nullability is not specified, all columns participating in a primary key constraint have their nullability set to not null. If a primary key is defined on a CLR user-defined type column, the implementation of the type must support binary ordering.

Foreign Key Constraints

A table can reference a maximum of 253 other tables and columns as foreign keys (outgoing references). SQL Server 2016 (13.x) increases the limit for the number of other tables and columns that can reference columns in a single table (incoming references) from 253 to 10,000. (Requires at least 130 compatibility level.) The increase has the following restrictions:
- Greater than 253 foreign key references are only supported for DELETE DML operations. UPDATE and MERGE operations are not supported.
- A table with a foreign key reference to itself is still limited to 253 foreign key references.
- Greater than 253 foreign key references are not currently available for columnstore indexes, memory-optimized tables, Stretch Database, or partitioned foreign key tables.

How to control transaction durability

ALTER DATABASE ... SET DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }
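For example, permitting (but not forcing) delayed durability on a hypothetical database:

ALTER DATABASE Sales SET DELAYED_DURABILITY = ALLOWED;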

avg_fragmentation_in_percent: > 30%

ALTER INDEX REBUILD WITH (ONLINE = ON)

Rebuilding an index can be executed online or offline. Reorganizing an index is always executed online. To achieve availability similar to the reorganize option, you should rebuild indexes online.
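A complete statement might look like the following; the index and table names are hypothetical, and online rebuilds require an edition that supports online index operations:

ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);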

avg_fragmentation_in_percent: > 5% and < = 30%

ALTER INDEX REORGANIZE
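A sketch of the full workflow, with hypothetical object names: check fragmentation first, then reorganize when it falls in the 5-30% band:

-- LIMITED mode scans only the parent-level pages, which is the cheapest option.
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED');

ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;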

When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of the following Transact-SQL statements begins a new transaction. It is equivalent to an unseen BEGIN TRANSACTION being executed first:

ALTER TABLE, BEGIN TRANSACTION, CREATE, DELETE, DROP, FETCH, GRANT, INSERT, OPEN, REVOKE, SELECT (see exception below), TRUNCATE TABLE, UPDATE
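A small demonstration, assuming a hypothetical dbo.Orders table:

SET IMPLICIT_TRANSACTIONS ON;
-- @@TRANCOUNT is 0, so this INSERT implicitly starts a transaction.
INSERT INTO dbo.Orders (OrderDate) VALUES (GETDATE());
SELECT @@TRANCOUNT;   -- returns 1
COMMIT TRANSACTION;   -- nothing is committed until you say so
SET IMPLICIT_TRANSACTIONS OFF;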

APPLOCK_MODE syntax

APPLOCK_MODE( 'database_principal' , 'resource_name' , 'lock_owner' )
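For example, with a hypothetical resource name (the default lock owner for sp_getapplock is the current transaction):

BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Form1', @LockMode = 'Shared';
SELECT APPLOCK_MODE('public', 'Form1', 'Transaction');  -- returns 'Shared'
COMMIT TRANSACTION;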

Triggers and Cascading Referential Actions

All the cascading referential actions directly caused by the original DELETE or UPDATE are performed first. If there are any AFTER triggers defined on the affected tables, these triggers fire after all cascading actions are performed. These triggers fire in the opposite order of the cascading action. If there are multiple triggers on a single table, they fire in random order, unless there is a dedicated first or last trigger for the table. This order is as specified by using sp_settriggerorder.

If multiple cascading chains originate from the table that was the direct target of an UPDATE or DELETE action, the order in which these chains fire their respective triggers is unspecified. However, one chain always fires all its triggers before another chain starts firing.

An AFTER trigger on the table that is the direct target of an UPDATE or DELETE action fires regardless of whether any rows are affected. There are no other tables affected by cascading in this case.

If any one of the previous triggers performs UPDATE or DELETE operations on other tables, these actions can start secondary cascading chains. These secondary chains are processed for each UPDATE or DELETE operation one at a time after all triggers on all primary chains fire. This process may be recursively repeated for subsequent UPDATE or DELETE operations.

Performing CREATE, ALTER, DELETE, or other data definition language (DDL) operations inside the triggers may cause DDL triggers to fire. This may subsequently perform DELETE or UPDATE operations that start additional cascading chains and triggers.

If an error is generated inside any particular cascading referential action chain, an error is raised, no AFTER triggers are fired in that chain, and the DELETE or UPDATE operation that created the chain is rolled back.

A table that has an INSTEAD OF trigger cannot also have a REFERENCES clause that specifies a cascading action. However, an AFTER trigger on a table targeted by a cascading action can execute an INSERT, UPDATE, or DELETE statement on another table or view that fires an INSTEAD OF trigger defined on that object.

Cascading Referential Integrity: SET NULL

All the values that make up the foreign key are set to NULL when the corresponding row in the parent table is updated or deleted. For this constraint to execute, the foreign key columns must be nullable. Cannot be specified for tables that have INSTEAD OF UPDATE triggers.

Cascading Referential Integrity: SET DEFAULT

All the values that make up the foreign key are set to their default values if the corresponding row in the parent table is updated or deleted. For this constraint to execute, all foreign key columns must have default definitions. If a column is nullable, and there is no explicit default value set, NULL becomes the implicit default value of the column. Cannot be specified for tables that have INSTEAD OF UPDATE triggers.

Indexed Views

An indexed view is a view that has been materialized. This means the view definition has been computed and the resulting data stored just like a table. You index a view by creating a unique clustered index on it. Indexed views can dramatically improve the performance of some types of queries. Indexed views work best for queries that aggregate many rows. They are not well-suited for underlying data sets that are frequently updated.

Query Capture Mode: All

Analyze your workload thoroughly in terms of all query shapes, their execution frequencies, and other statistics. Identify new queries in your workload. Detect whether ad-hoc queries are used, to identify opportunities for user or auto parameterization.

User-defined

Any filegroup that is specifically created by the user when the user first creates or later modifies the database.

AVG_RANGE_ROWS

Average number of rows with duplicate column values within a histogram step, excluding the upper bound. When DISTINCT_RANGE_ROWS is greater than 0, AVG_RANGE_ROWS is calculated by dividing RANGE_ROWS by DISTINCT_RANGE_ROWS. When DISTINCT_RANGE_ROWS is 0, AVG_RANGE_ROWS returns 1 for the histogram step.

COLLATE syntax

COLLATE { <collation_name> | database_default }
<collation_name> ::= { Windows_collation_name } | { SQL_collation_name }

The COMMIT syntax is extended so you can force delayed transaction durability. If DELAYED_DURABILITY is DISABLED or FORCED at the database level (see above) this COMMIT option is ignored.

COMMIT [ { TRAN | TRANSACTION } [ transaction_name | @tran_name_variable ] ] [ WITH ( DELAYED_DURABILITY = { OFF | ON } ) ]
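For example, assuming the database-level setting is ALLOWED and a hypothetical audit table:

BEGIN TRANSACTION;
INSERT INTO dbo.AuditLog (EventTime) VALUES (SYSDATETIME());
-- Commit returns without waiting for the log flush; durability is delayed.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);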

Performance: High resource (CPU, I/O, network, or memory) usage.

CPU: Natively compiled stored procedures can lower CPU usage significantly because they require significantly fewer instructions to execute a Transact-SQL statement compared to interpreted stored procedures. In-Memory OLTP can help reduce the hardware investment in scaled-out workloads, because one server can potentially deliver the throughput of five to ten servers.

I/O: If you encounter an I/O bottleneck from processing to data or index pages, In-Memory OLTP may reduce the bottleneck. Additionally, the checkpointing of In-Memory OLTP objects is continuous and does not lead to sudden increases in I/O operations. However, if the working set of the performance-critical tables does not fit in memory, In-Memory OLTP will not improve performance, because it requires data to be memory resident. If you encounter an I/O bottleneck in logging, In-Memory OLTP can reduce the bottleneck because it does less logging. If one or more memory-optimized tables are configured as non-durable tables, you can eliminate logging for data.

Memory: In-Memory OLTP does not offer any performance benefit. In-Memory OLTP can put extra pressure on memory, as the objects need to be memory resident.

Network: In-Memory OLTP does not offer any performance benefit. The data needs to be communicated from the data tier to the application tier.

Atomic block level control - Natively Compiled Stored Procedures

The following code goes inside the atomic block.

CREATE PROCEDURE <procedureName>
...
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    ( DELAYED_DURABILITY = ON,
      TRANSACTION ISOLATION LEVEL = SNAPSHOT,
      LANGUAGE = N'English'
      ... )
END
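A fuller sketch with hypothetical procedure, table, and parameter names; any table referenced must be memory-optimized:

CREATE PROCEDURE dbo.usp_InsertReading
    @SensorId int, @Value float
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    ( DELAYED_DURABILITY = ON,
      TRANSACTION ISOLATION LEVEL = SNAPSHOT,
      LANGUAGE = N'English' )
    -- dbo.SensorReadings is assumed to be a memory-optimized table.
    INSERT INTO dbo.SensorReadings (SensorId, ReadingValue, ReadingTime)
    VALUES (@SensorId, @Value, SYSDATETIME());
END;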

Recovery Model: Simple: Recover to point in time?

Can recover only to the end of a backup.

Recovery Model: Full: Recover to point in time?

Can recover to a specific point in time, assuming that your backups are complete up to that point in time.

Recovery Model: Bulk logged: Recover to point in time?

Can recover to the end of any backup. Point-in-time recovery is not supported.

Recovery Model: Simple: Work loss exposure

Changes since the most recent backup are unprotected. In the event of a disaster, those changes must be redone.

sys.stats

Contains a row for each statistics object that exists for the tables, indexes, and indexed views in the database in SQL Server. Every index will have a corresponding statistics row with the same name and ID (index_id = stats_id), but not every statistics row has a corresponding index.

Cascading Referential Integrity: Cascade

Corresponding rows are updated or deleted in the referencing table when that row is updated or deleted in the parent table. CASCADE cannot be specified if a timestamp column is part of either the foreign key or the referenced key. ON DELETE CASCADE cannot be specified for a table that has an INSTEAD OF DELETE trigger. ON UPDATE CASCADE cannot be specified for tables that have INSTEAD OF UPDATE triggers.
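For example, a hypothetical child table whose rows are deleted automatically with their parent (dbo.Orders with primary key OrderId is assumed to exist):

CREATE TABLE dbo.OrderLines
(
    OrderLineId int PRIMARY KEY,
    OrderId int NOT NULL
        REFERENCES dbo.Orders (OrderId) ON DELETE CASCADE
);
-- DELETE FROM dbo.Orders WHERE OrderId = 1; now also removes its order lines.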

Trace flag 1211

Disables lock escalation based on memory pressure, or based on number of locks. The SQL Server Database Engine will not escalate row or page locks to table locks. Using this trace flag can generate excessive numbers of locks. This can slow the performance of the Database Engine, or cause 1204 errors (unable to allocate lock resource) because of insufficient memory. If both trace flag 1211 and 1224 are set, 1211 takes precedence over 1224. However, because trace flag 1211 prevents escalation in every case, even under memory pressure, we recommend that you use 1224. This helps avoid "out-of-locks" errors when many locks are being used.
Scope: global or session

Trace flag 1224

Disables lock escalation based on the number of locks. However, memory pressure can still activate lock escalation. The Database Engine escalates row or page locks to table (or partition) locks if the amount of memory used by lock objects exceeds one of the following conditions:
1. Forty percent of the memory that is used by the Database Engine. This is applicable only when the locks parameter of sp_configure is set to 0.
2. Forty percent of the lock memory that is configured by using the locks parameter of sp_configure.
If both trace flag 1211 and 1224 are set, 1211 takes precedence over 1224. However, because trace flag 1211 prevents escalation in every case, even under memory pressure, we recommend that you use 1224. This helps avoid "out-of-locks" errors when many locks are being used.
Note: Lock escalation to the table-level or HoBT-level granularity can also be controlled by using the LOCK_ESCALATION option of the ALTER TABLE statement.
Scope: global or session

sp_autostats

Displays or changes the automatic statistics update option, AUTO_UPDATE_STATISTICS, for an index, a statistics object, a table, or an indexed view. If the specified index is disabled, or the specified table has a disabled clustered index, an error message is displayed. AUTO_UPDATE_STATISTICS is always OFF for memory-optimized tables.

Accent-sensitive (_AS)

Distinguishes between accented and unaccented characters. For example, 'a' is not equal to 'ấ'. If this option is not selected, the collation is accent-insensitive. That is, SQL Server considers the accented and unaccented versions of letters to be identical for sorting purposes. You can explicitly select accent insensitivity by specifying _AI.

Width-sensitive (_WS)

Distinguishes between full-width and half-width characters. If this option is not selected, SQL Server considers the full-width and half-width representation of the same character to be identical for sorting purposes. Omitting this option is the only method of specifying width-insensitivity.

Kana-sensitive (_KS)

Distinguishes between the two types of Japanese kana characters: Hiragana and Katakana. If this option is not selected, the collation is Kana-insensitive. That is, SQL Server considers Hiragana and Katakana characters to be equal for sorting purposes. Omitting this option is the only method of specifying Kana-insensitivity.

Case-sensitive (_CS)

Distinguishes between uppercase and lowercase letters. If selected, lowercase letters sort ahead of their uppercase versions. If this option is not selected, the collation is case-insensitive. That is, SQL Server considers the uppercase and lowercase versions of letters to be identical for sorting purposes. You can explicitly select case insensitivity by specifying _CI.

Variation-selector-sensitive (_VSS)

Distinguishes between various ideographic variation selectors in Japanese collations Japanese_Bushu_Kakusu_140 and Japanese_XJIS_140 first introduced in SQL Server 2017 (14.x). A variation sequence consists of a base character plus an additional variation selector. If this _VSS option is not selected, the collation is variation selector insensitive, and the variation selector is not considered in the comparison. That is, SQL Server considers characters built upon the same base character with differing variation selectors to be identical for sorting purposes. See also Unicode Ideographic Variation Database. Variation selector sensitive (_VSS) collations are not supported in Full-text search indexes. Full-text search indexes support only Accent-Sensitive (_AS), Kana-sensitive (_KS), and Width-sensitive (_WS) options. SQL Server XML and CLR engines do not support (_VSS) Variation selectors.

UTF-8 (_UTF8)

Enables UTF-8 encoded data to be stored in SQL Server. If this option is not selected, SQL Server uses the default non-Unicode encoding format for the applicable data types.

EQ_ROWS

Estimated number of rows whose column value equals the upper bound of the histogram step.

RANGE_ROWS

Estimated number of rows whose column value falls within a histogram step, excluding the upper bound.

DISTINCT_RANGE_ROWS

Estimated number of rows with a distinct column value within a histogram step, excluding the upper bound.

Query Capture Mode: Auto

Focus your attention on relevant and actionable queries; those queries that execute regularly or that have significant resource consumption.

Recovery Model: Bulk logged: Work loss exposure

If the log is damaged or bulk-logged operations occurred since the most recent log backup, changes since that last backup must be redone. Otherwise, no work is lost.

Reducing Locking and Escalation

In most cases, the Database Engine delivers the best performance when operating with its default settings for locking and lock escalation. If an instance of the Database Engine generates a lot of locks and is seeing frequent lock escalations, consider reducing the amount of locking by:
- Using an isolation level that does not generate shared locks for read operations: the READ COMMITTED isolation level when the READ_COMMITTED_SNAPSHOT database option is ON; the SNAPSHOT isolation level; or the READ UNCOMMITTED isolation level, which can only be used for systems that can operate with dirty reads. Note: changing the isolation level affects all tables on the instance of the Database Engine.
- Using the PAGLOCK or TABLOCK table hints to have the Database Engine use page, heap, or index locks instead of row locks. Using this option, however, increases the problems of users blocking other users attempting to access the same data, and should not be used in systems with more than a few concurrent users.
- For partitioned tables, using the LOCK_ESCALATION option of ALTER TABLE to escalate locks to the HoBT level instead of the table, or to disable lock escalation (see the sketch after this list).
You can also use trace flags 1211 and 1224 to disable all or some lock escalations. For more information, see Trace Flags (Transact-SQL). Also, monitor lock escalation by using the SQL Server Profiler Lock:Escalation event; see also Using SQL Server Profiler.
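A minimal sketch of the LOCK_ESCALATION option on a hypothetical table:

-- Escalate to the partition (HoBT) level on a partitioned table.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);
-- Or disable lock escalation for this table entirely.
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);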

Reorganizing and Rebuild: Limitations and Restrictions

Indexes with more than 128 extents are rebuilt in two separate phases: logical and physical. In the logical phase, the existing allocation units used by the index are marked for deallocation, the data rows are copied and sorted, then moved to new allocation units created to store the rebuilt index. In the physical phase, the allocation units previously marked for deallocation are physically dropped in short transactions that happen in the background, and do not require many locks. For more information about extents, refer to the Pages and Extents Architecture Guide.

The ALTER INDEX REORGANIZE statement requires the data file containing the index to have space available, because the operation can only allocate temporary work pages on the same file, not another file within the filegroup. So although the filegroup might have free pages available, the user can still encounter error 1105: Could not allocate space for object '###' in database '###' because the '###' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

Creating and rebuilding non-aligned indexes on a table with more than 1,000 partitions is possible, but is not recommended. Doing so may cause degraded performance or excessive memory consumption during these operations.

An index cannot be reorganized or rebuilt if the filegroup in which it is located is offline or set to read-only. When the keyword ALL is specified and one or more indexes are in an offline or read-only filegroup, the statement fails.

Key range lock for the key value 12 will not be held. Why?

Key-range locks are taken only under the SERIALIZABLE isolation level, not under REPEATABLE READ.

Scalability: Most scaling issues in SQL Server applications are caused by concurrency issues such as contention in locks, latches, and spinlocks.

Latch Contention: A typical scenario is contention on the last page of an index when inserting rows concurrently in key order. Because In-Memory OLTP does not take latches when accessing data, the scalability issues related to latch contention are fully removed.

Spinlock Contention: Because In-Memory OLTP does not take latches when accessing data, the scalability issues related to spinlock contention are fully removed.

Locking Related Contention: If your database application encounters blocking issues between read and write operations, In-Memory OLTP removes the blocking issues because it uses a new form of optimistic concurrency control to implement all transaction isolation levels. In-Memory OLTP does not use TempDB to store row versions. If the scaling issue is caused by conflict between two write operations, such as two concurrent transactions trying to update the same row, In-Memory OLTP lets one transaction succeed and fails the other transaction. The failed transaction must be re-submitted either explicitly or implicitly, re-trying the transaction. In either case, you need to make changes to the application. If your application experiences frequent conflicts between two write operations, the value of optimistic locking is diminished, and the application is not suitable for In-Memory OLTP. Most OLTP applications don't have write conflicts unless the conflict is induced by lock escalation.

Recovery Model: Simple

No log backups. Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space. For information about database backups under the simple recovery model, see Full Database Backups (SQL Server). Operations that require transaction log backups are not supported by the simple recovery model. The following features cannot be used under the simple recovery model:
- Log shipping
- Always On or Database mirroring
- Media recovery without data loss
- Point-in-time restores

A shared lock on the dbo.Employees table will not be held. Why?

No shared locks are taken at the table level for this query.

INCLUDE (column [ ,... n ] ): Limitations and Restrictions

- Nonkey columns can only be defined on nonclustered indexes.
- All data types except text, ntext, and image can be used as nonkey columns.
- Computed columns that are deterministic and either precise or imprecise can be nonkey columns.
- Computed columns derived from image, ntext, and text data types can be nonkey columns, as long as the computed column data type is allowed as a nonkey index column.
- Nonkey columns cannot be dropped from a table unless that table's index is dropped first.
- Nonkey columns cannot be changed, except to change the nullability of the column from NOT NULL to NULL, or to increase the length of varchar, nvarchar, or varbinary columns.

Recovery Model: Full: Work loss exposure

Normally none. If the tail of the log is damaged, changes since the most recent log backup must be redone.

Delayed Transaction Durability Guarantees

Once transaction commit succeeds, the changes made by the transaction are visible to the other transactions in the system. Transaction durability is guaranteed only following a flush of the in-memory transaction log to disk. The in-memory transaction log is flushed to disk when:
- A fully durable transaction in the same database makes a change in the database and successfully commits.
- The user executes the system stored procedure sp_flush_log successfully.
If a fully durable transaction or sp_flush_log successfully commits, all previously committed delayed durability transactions are guaranteed to have been made durable. SQL Server does attempt to flush the log to disk both based on log generation and on timing, even if all the transactions are delayed durable. This usually succeeds if the IO device is keeping up. However, SQL Server does not provide any hard durability guarantees other than durable transactions and sp_flush_log.

Using non-parameterized queries when that is not absolutely necessary (for example, in the case of ad-hoc analysis) is not a best practice. Cached plans cannot be reused, which forces the Query Optimizer to compile every unique query text. Also, Query Store can rapidly exceed its size quota because of a potentially large number of different query texts, and consequently a large number of different execution plans with similar shape. As a result, the performance of your workload will be sub-optimal, and Query Store might switch to read-only mode or might be constantly deleting data trying to keep up with the incoming queries. Consider the following options:

- Parameterize queries where applicable; for example, wrap queries inside a stored procedure or sp_executesql (a sketch follows this list). For more information, see Parameters and Execution Plan Reuse.
- Use the Optimize for Ad Hoc Workloads option if your workload contains many single-use ad-hoc batches with different query plans. Compare the number of distinct query_hash values with the total number of entries in sys.query_store_query. If the ratio is close to 1, your ad-hoc workload generates different queries.
- Apply forced parameterization for the database or for a subset of queries if the number of different query plans is not large. Use a plan guide to force parameterization only for the selected query. Configure forced parameterization using the Parameterization database option if there are a small number of different query plans in your workload: when the ratio between the count of distinct query_hash values and the total number of entries in sys.query_store_query is much less than 1.
- Set the Query Capture Mode to AUTO to automatically filter out ad-hoc queries with small resource consumption.

https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-2017
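A minimal sketch of the first option, parameterizing an ad-hoc query with sp_executesql (table, column, and parameter names are hypothetical):

EXEC sp_executesql
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = 42;
-- Re-running with a different @CustomerId reuses the same cached plan.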

Recovery Model: Bulk logged

Requires log backups. An adjunct of the full recovery model that permits high-performance bulk copy operations. Reduces log space usage by using minimal logging for most bulk operations.

Recovery Model: Full

Requires log backups. No work is lost due to a lost or damaged data file. Can recover to an arbitrary point in time (for example, prior to application or user error).

Stats_date

Returns the date of the most recent update for statistics on a table or indexed view.

APPLOCK_MODE

Returns the lock mode held by the lock owner on a particular application resource. Lock mode can have any one of these values: NoLock, Update, Shared, Exclusive, IntentShared, IntentExclusive, UpdateIntentExclusive*, SharedIntentExclusive*.

*This lock mode is a combination of other lock modes and sp_getapplock cannot explicitly acquire it.

Function properties: Nondeterministic, Nonindexable, Nonparallelizable.

Trace flag 1204

Returns the resources and types of locks participating in a deadlock and also the current command affected. For more information, see this Microsoft Support article. Scope: global only

Trace flag 1222

Returns the resources and types of locks that are participating in a deadlock and also the current command affected, in an XML format that does not comply with any XSD schema. Scope: global only

Optimized bulk load operations on heaps block queries that are running under the following isolation levels:

SNAPSHOT, READ UNCOMMITTED, and READ COMMITTED using row versioning.

Stats_date syntax

STATS_DATE ( object_id , stats_id )
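For example, listing the last update date of every statistics object on a hypothetical table:

SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.Orders');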

Secondary

Secondary data files are optional, are user-defined, and store user data. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow. The recommended file name extension for secondary data files is .ndf.

Best Practice: Delete less relevant queries when maximum size is reached.

Setting: Activate size-based cleanup policy.

Best Practice: Filter out non-relevant queries.

Setting: Configure Query Capture Mode to Auto.

Best Practice: Limit retained historical data.

Setting: Configure time-based policy to activate auto-cleanup.

SNAPSHOT

Specifies that data read by any statement in a transaction will be the transactionally consistent version of the data that existed at the start of the transaction. The transaction can only recognize data modifications that were committed before the start of the transaction. Data modifications made by other transactions after the start of the current transaction are not visible to statements executing in the current transaction. The effect is as if the statements in a transaction get a snapshot of the committed data as it existed at the start of the transaction.

Except when a database is being recovered, SNAPSHOT transactions do not request locks when reading data. SNAPSHOT transactions reading data do not block other transactions from writing data. Transactions writing data do not block SNAPSHOT transactions from reading data. During the roll-back phase of a database recovery, SNAPSHOT transactions will request a lock if an attempt is made to read data that is locked by another transaction that is being rolled back. The SNAPSHOT transaction is blocked until that transaction has been rolled back. The lock is released immediately after it has been granted.

The ALLOW_SNAPSHOT_ISOLATION database option must be set to ON before you can start a transaction that uses the SNAPSHOT isolation level. If a transaction using the SNAPSHOT isolation level accesses data in multiple databases, ALLOW_SNAPSHOT_ISOLATION must be set to ON in each database.

A transaction cannot be set to SNAPSHOT isolation level if it started with another isolation level; doing so will cause the transaction to abort. If a transaction starts in the SNAPSHOT isolation level, you can change it to another isolation level and then back to SNAPSHOT. A transaction starts the first time it accesses data.

A transaction running under SNAPSHOT isolation level can view changes made by that transaction. For example, if the transaction performs an UPDATE on a table and then issues a SELECT statement against the same table, the modified data will be included in the result set.
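A minimal sketch of enabling and using the level, with a hypothetical database and table:

ALTER DATABASE Sales SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders;  -- reads the snapshot; takes no shared locks
COMMIT TRANSACTION;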

READ UNCOMMITTED

Specifies that statements can read rows that have been modified by other transactions but not yet committed. Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions from modifying data read by the current transaction. READ UNCOMMITTED transactions are also not blocked by exclusive locks that would prevent the current transaction from reading rows that have been modified but not committed by other transactions. When this option is set, it is possible to read uncommitted modifications, which are called dirty reads. Values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the isolation levels. In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of uncommitted data modifications using either: 1. The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to ON. 2. The SNAPSHOT isolation level.

READ COMMITTED

Specifies that statements cannot read data that has been modified but not committed by other transactions. This prevents dirty reads. Data can be changed by other transactions between individual statements within the current transaction, resulting in nonrepeatable reads or phantom data.

REPEATABLE READ

Specifies that statements cannot read data that has been modified but not yet committed by other transactions and that no other transactions can modify data that has been read by the current transaction until the current transaction completes. Shared locks are placed on all data read by each statement in the transaction and are held until the transaction completes. This prevents other transactions from modifying any rows that have been read by the current transaction. Other transactions can insert new rows that match the search conditions of statements issued by the current transaction. If the current transaction then retries the statement it will retrieve the new rows, which results in phantom reads. Because shared locks are held to the end of a transaction instead of being released at the end of each statement, concurrency is lower than the default READ COMMITTED isolation level. Use this option only when necessary.

SERIALIZABLE

Specifies the following:
1. Statements cannot read data that has been modified but not yet committed by other transactions.
2. No other transactions can modify data that has been read by the current transaction until the current transaction completes.
3. Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement executed in a transaction. This blocks other transactions from updating or inserting any rows that would qualify for any of the statements executed by the current transaction. This means that if any of the statements in a transaction are executed a second time, they will read the same set of rows. The range locks are held until the transaction completes. This is the most restrictive of the isolation levels because it locks entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.

INCLUDE (column [ ,... n ] )

Specifies the non-key columns to be added to the leaf level of the nonclustered index. The nonclustered index can be unique or non-unique. Column names cannot be repeated in the INCLUDE list and cannot be used simultaneously as both key and non-key columns. Nonclustered indexes always contain the clustered index columns if a clustered index is defined on the table. For more information, see Create Indexes with Included Columns. All data types are allowed except text, ntext, and image. The index must be created or rebuilt offline (ONLINE = OFF) if any one of the specified non-key columns is a varchar(max), nvarchar(max), or varbinary(max) data type. Computed columns that are deterministic and either precise or imprecise can be included columns. Computed columns derived from image, ntext, text, varchar(max), nvarchar(max), varbinary(max), and xml data types can be included as non-key columns, as long as the computed column data type is allowable as an included column.
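For example, a covering index with hypothetical names: CustomerId is the key, while OrderDate and TotalDue are carried only at the leaf level:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalDue);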

System Views

System views expose catalog metadata. You can use system views to return information about the instance of SQL Server or the objects defined in the instance. For example, you can query the sys.databases catalog view to return information about the user-defined databases available in the instance. For more information, see System Views (Transact-SQL)

Cascading Referential Integrity: NO ACTION

The Database Engine raises an error and the delete or update action on the row in the parent table is rolled back.

Following are some general recommendations when you are working with transaction log files:

The automatic growth (autogrow) increment of the transaction log, as set by the FILEGROWTH option, must be large enough to stay ahead of the needs of the workload transactions. The file growth increment on a log file should be sufficiently large to avoid frequent expansion. A good pointer to properly size a transaction log is monitoring the amount of log occupied during:
- The time required to execute a full backup, because log backups cannot occur until it finishes.
- The time required for the largest index maintenance operations.
- The time required to execute the largest batch in a database.

When setting autogrow for data and log files using the FILEGROWTH option, it might be preferred to set it in size instead of percentage, to allow better control of the growth ratio, as percentage is an ever-growing amount. Keep in mind that transaction logs cannot leverage Instant File Initialization, so extended log growth times are especially critical. As a best practice, do not set the FILEGROWTH option value above 1,024 MB for transaction logs. The default values for the FILEGROWTH option are:
- Starting with SQL Server 2016 (13.x): data files 64 MB, log files 64 MB.
- Starting with SQL Server 2005 (9.x): data files 1 MB, log files 10%.
- Prior to SQL Server 2005 (9.x): data files 10%, log files 10%.

A small growth increment can generate too many small VLFs and can reduce performance. A large growth increment can generate too few, large VLFs and can also affect performance. To determine the optimal VLF distribution for the current transaction log size of all databases in a given instance, and the required growth increments to achieve the required size, see this script.

Even with autogrow enabled, you can receive a message that the transaction log is full if it cannot grow fast enough to satisfy the needs of your query. For more information on changing the growth increment, see ALTER DATABASE (Transact-SQL) File and Filegroup options.

Having multiple log files in a database does not enhance performance in any way, because the transaction log files do not use proportional fill like data files in the same filegroup.

Log files can be set to shrink automatically. However, this is not recommended, and the auto_shrink database property is set to FALSE by default. If auto_shrink is set to TRUE, automatic shrinking reduces the size of a file only when more than 25 percent of its space is unused. The file is shrunk either to the size at which only 25 percent of the file is unused space or to the original size of the file, whichever is larger. For information about changing the setting of the auto_shrink property, see View or Change the Properties of a Database and ALTER DATABASE SET Options (Transact-SQL).

https://docs.microsoft.com/en-us/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file?view=sql-server-2017#Recommendations

Primary (filegroup)

The filegroup that contains the primary file. All system tables are allocated to the primary filegroup.

Primary (data file)

The primary data file contains the startup information for the database and points to the other files in the database. User data and objects can be stored in this file or in secondary data files. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.

Transaction Log

The transaction log files hold the log information that is used to recover the database. There must be at least one log file for each database. The recommended file name extension for transaction logs is .ldf.

MIN_MEMORY_PERCENT and MAX_MEMORY_PERCENT

These settings are the minimum and maximum amount of memory reserved for the resource pool that cannot be shared with other resource pools. The memory referenced here is query execution grant memory, not buffer pool memory (for example, data and index pages). Setting a minimum memory value for a pool means that you are ensuring that the percentage of memory specified will be available for any requests that might run in this resource pool.

MIN_CPU_PERCENT and MAX_CPU_PERCENT

These settings are the minimum and maximum guaranteed average CPU bandwidth for all requests in the resource pool when there is CPU contention. You can use these settings to establish predictable CPU resource usage for multiple workloads that is based on the needs of each workload.

MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME

These settings are the minimum and maximum physical IO operations per second (IOPS) per disk volume for a resource pool. You can use these settings to control the physical IOs issued for user threads for a given resource pool.

sys.dm_db_index_operational_stats

This dynamic management object does not accept correlated parameters from CROSS APPLY and OUTER APPLY. You can use sys.dm_db_index_operational_stats to track the length of time that users must wait to read or write to a table, index, or partition, and to identify the tables or indexes that are encountering significant I/O activity or hot spots. Use the following columns to identify areas of contention.

To analyze a common access pattern to the table or index partition, use these columns: leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count, range_scan_count, singleton_lookup_count.

To identify latching and locking contention, use these columns:
- page_latch_wait_count and page_latch_wait_in_ms: indicate whether there is latch contention on the index or heap, and the significance of the contention.
- row_lock_count and page_lock_count: indicate how many times the Database Engine tried to acquire row and page locks.
- row_lock_wait_in_ms and page_lock_wait_in_ms: indicate whether there is lock contention on the index or heap, and the significance of the contention.

To analyze statistics of physical I/Os on an index or heap partition:
- page_io_latch_wait_count and page_io_latch_wait_in_ms: indicate whether physical I/Os were issued to bring the index or heap pages into memory, and how many I/Os were issued.
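A hypothetical query surfacing latch and lock contention for the current database:

SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.page_latch_wait_count,
       ios.page_latch_wait_in_ms,
       ios.row_lock_wait_in_ms,
       ios.page_lock_wait_in_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
ORDER BY ios.page_latch_wait_in_ms DESC;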

AFFINITY

This setting lets you affinitize a resource pool to one or more schedulers or NUMA nodes for greater isolation of CPU resources.

CAP_CPU_PERCENT

This setting is a hard cap limit on the CPU bandwidth for all requests in the resource pool. Workloads associated with the pool can use CPU capacity above the value of MAX_CPU_PERCENT if it is available, but not above the value of CAP_CPU_PERCENT.

Trace flag 7745

Trace flag 7745 prevents the default behavior where Query Store writes data to disk before SQL Server can be shut down. This means that Query Store data that has been collected but not yet persisted to disk will be lost.

RANGE_HI_KEY

Upper bound column value for a histogram step. The column value is also called a key value.

Control transaction log file growth

Use the ALTER DATABASE (Transact-SQL) File and Filegroup options statement to manage the growth of a transaction log file. Note the following:
- To change the current file size in KB, MB, GB, and TB units, use the SIZE option.
- To change the growth increment, use the FILEGROWTH option. A value of 0 indicates that automatic growth is set to off and no additional space is permitted.
- To control the maximum size of a log file in KB, MB, GB, and TB units, or to set growth to UNLIMITED, use the MAXSIZE option.
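For example, applying each option to a hypothetical log file (the MODIFY FILE clause changes one property at a time, hence three statements):

ALTER DATABASE Sales MODIFY FILE ( NAME = Sales_Log, SIZE = 8GB );
ALTER DATABASE Sales MODIFY FILE ( NAME = Sales_Log, FILEGROWTH = 512MB );
ALTER DATABASE Sales MODIFY FILE ( NAME = Sales_Log, MAXSIZE = 64GB );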

When OFF, each of the preceding T-SQL statements is bounded by an unseen BEGIN TRANSACTION and an unseen COMMIT TRANSACTION statement. When OFF, we say the transaction mode is autocommit. If your T-SQL code visibly issues a BEGIN TRANSACTION, we say the transaction mode is explicit. There are several clarifying points to understand:

- When the transaction mode is implicit, no unseen BEGIN TRANSACTION is issued if @@TRANCOUNT > 0 already. However, any explicit BEGIN TRANSACTION statements still increment @@TRANCOUNT.
- When your INSERT statements and anything else in your unit of work is finished, you must issue COMMIT TRANSACTION statements until @@TRANCOUNT is decremented back down to 0. Or you can issue one ROLLBACK TRANSACTION.
- SELECT statements that do not select from a table do not start implicit transactions. For example, SELECT GETDATE(); or SELECT 1, 'ABC'; do not require transactions.
- Implicit transactions may unexpectedly be ON due to ANSI defaults. For details, see SET ANSI_DEFAULTS (Transact-SQL).
- IMPLICIT_TRANSACTIONS ON is not popular. In most cases where IMPLICIT_TRANSACTIONS is ON, it is because the choice of SET ANSI_DEFAULTS ON has been made.
- The SQL Server Native Client OLE DB Provider for SQL Server, and the SQL Server Native Client ODBC driver, automatically set IMPLICIT_TRANSACTIONS to OFF when connecting. SET IMPLICIT_TRANSACTIONS defaults to OFF for connections with the SQLClient managed provider, and for SOAP requests received through HTTP endpoints.
- To view the current setting for IMPLICIT_TRANSACTIONS, run the following query:

DECLARE @IMPLICIT_TRANSACTIONS VARCHAR(3) = 'OFF';
IF ( (2 & @@OPTIONS) = 2 ) SET @IMPLICIT_TRANSACTIONS = 'ON';
SELECT @IMPLICIT_TRANSACTIONS AS IMPLICIT_TRANSACTIONS;

Query Capture Mode: None

You have already captured the query set that you want to monitor in runtime, and you want to eliminate the distractions that other queries may introduce. None is suitable for testing and bench-marking environments. None is also appropriate for software vendors who ship a Query Store configuration configured to monitor their application workload. None should be used with caution, as you might miss the opportunity to track and optimize important new queries. Avoid using None unless you have a specific scenario that requires it.

Cascading Referential Integrity: CASCADE, SET NULL, SET DEFAULT and NO ACTION

can be combined on tables that have referential relationships with each other. If the Database Engine encounters NO ACTION, it stops and rolls back related CASCADE, SET NULL and SET DEFAULT actions. When a DELETE statement causes a combination of CASCADE, SET NULL, SET DEFAULT and NO ACTION actions, all the CASCADE, SET NULL and SET DEFAULT actions are applied before the Database Engine checks for any NO ACTION.

Trace flag 7752

enables asynchronous load of Query Store. This allows a database to become online and queries to be executed before the Query Store has been fully recovered. The default behavior is to do synchronous load of Query Store. The default behavior prevents queries from executing before the Query Store has been recovered, but also prevents any queries from being missed in the data collection.

Resource Governor

enables you to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use within the resource pool. Each resource pool can contain one or more workload groups. When a session is started, the Resource Governor classifier assigns the session to a specific workload group, and the session must run using the resources assigned to the workload group.
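A minimal sketch with hypothetical pool, group, login, and function names; the classifier function must be created in the master database:

CREATE RESOURCE POOL ReportPool
    WITH (MIN_CPU_PERCENT = 0, MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
-- Classifier: route a hypothetical reporting login to ReportGroup.
CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'report_user'
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;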

The COLLATE clause can be applied only

for the char, varchar, text, nchar, nvarchar, and ntext data types.

Activity Monitor

is a tabbed document window with the following expandable and collapsible panes: Overview, Active User Tasks, Resource Waits, Data File I/O, and Recent Expensive Queries. When any pane is expanded, Activity Monitor queries the instance for information. When a pane is collapsed, all querying activity stops for that pane. You can expand one or more panes at the same time to view different kinds of activity on the instance.

Batch mode on rowstore can only help by

reducing CPU consumption. If your bottleneck is IO-related, and data is not already cached ("cold" cache), batch mode on rowstore will NOT improve elapsed time. Similarly, if there is not enough memory on the machine to cache all data, a performance improvement is unlikely.

sp_autostats syntax

sp_autostats [ @tblname = ] 'table_or_indexed_view_name' [ , [ @flagc = ] 'stats_value' ] [ , [ @indname = ] 'statistics_name' ]
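For example, disabling automatic statistics updates for every statistics object on a hypothetical table:

EXEC sp_autostats @tblname = 'dbo.Orders', @flagc = 'OFF';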

