
Chapter 2: Optimizing the Performance of Queries

Microsoft Exam Objectives Covered in this Chapter:
- Troubleshoot and maintain query performance.
  - Identify poorly performing queries.
  - Analyze a query plan to detect inefficiencies in query logic.
  - Maintain and optimize indexes.
  - Enforce appropriate stored procedure logging and output.
- Troubleshoot concurrency issues.

As a continuation of Chapter 1, "Optimizing the Performance of Databases and Database Servers," you'll now turn your attention specifically to optimizing the performance of queries within the SQL Server 2005 environment. The DBA often has to optimize the performance of queries in response to users of the database solution complaining about slow performance or unresponsive systems. However, this does not necessarily mean that the root cause has to do with poorly written queries or poor indexing strategies. It is important to appreciate the concurrent nature of SQL Server 2005 and realize that the problem might have to do with concurrency or contention instead. Consequently, you'll look at troubleshooting concurrency in this chapter as well.

Troubleshooting and Maintaining Query Performance


In any large SQL Server 2005-based database solution, there are typically quite a large number of queries, potentially executing at any given point in time. Likewise, there might be hundreds of tables per database. Consequently, it can be difficult to troubleshoot query performance. Again, some structured approach or methodology is required when troubleshooting your query performance. Of course, a lack of resources, such as processors or memory, can be the cause of poor query performance, but we covered this in Chapter 1. What we are interested in talking about here are the causes of poor query performance internal to the SQL Server 2005 database engine. So, let's go through a list of the possible causes of poor query performance that you need to check:

T-SQL code  Poor query performance can obviously be attributed to the T-SQL code that has been written. In certain cases, such as whether to use the IN versus the OR construct, there is no difference; in other cases, such as using set-based queries versus cursors, there will be. Believe it or not, database developers should write efficient T-SQL code!

Note  Inefficient T-SQL code is commonly detected by looking at the query plan.

Indexes  Perhaps the main cause of poor query performance will be a lack of indexes or an inappropriate indexing strategy. This invariably means that the query optimizer will be generating query plans that needlessly perform expensive operations such as table scans, hash joins, excessive bookmark lookups, and/or sorts.

Note  As with inefficient T-SQL code, inappropriate indexing is usually determined by examining the query plan.

Database options  This represents a subtler category, but you should not forget to check the database options as part of your methodology for troubleshooting query performance. Certain database options such as AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS will have a direct impact on query performance within your database solution. If either of these options has been turned off, the query optimizer might not be generating optimal query plans. Other database options, such as READ_ONLY or SINGLE_USER, can improve performance because the lock manager does not have to maintain locks for that particular database.

Statistics  Inaccurate or out-of-date statistics can dramatically affect the performance of queries in SQL Server 2005. Without accurate statistics, the query optimizer may not generate optimal query plans for your queries. Consequently, it is important to check whether the statistics are up-to-date as part of your query troubleshooting methodology.

Note  For completeness' sake, we should add that you can also potentially improve query performance by taking advantage of persisted computed columns and indexed views. Obviously there are other techniques involving redesigning the database's schema; however, these fall under the auspices of changing the database design, not optimizing query performance, for the purposes of this chapter.

The first step, of course, is to identify potentially poorly performing queries. Microsoft has certainly made that process a lot easier in SQL Server 2005, as we will discuss in a moment.

Database Tuning Advisor

Why do things the hard way? The Database Tuning Advisor (DTA) is a replacement for the Index Tuning Wizard that you might have experience with from SQL Server 2000.
The DTA can analyze SQL Server Profiler traces or a T-SQL workload script against your database solution and can recommend various performance tuning enhancements:

- Creating additional indexes
- Dropping existing indexes
- Implementing indexed views
- Implementing a partitioning strategy

The DTA in SQL Server 2005 can recommend a number of performance enhancements across a number of databases simultaneously from a single trace/workload file. You can limit DTA in a number of ways:

- Time spent tuning your database solution
- Partitioning strategy to use
  - Aligned partitions
  - Full partitions
- Physical design structures to use in the database
  - Indexes
  - Indexed views
  - Nonclustered indexes
- Physical design structures to keep in the database

The DTA generates a list of recommendations that can be converted into an XML script or T-SQL scripts. It also produces a number of reports that summarize different aspects of its analysis of your database solution. You can evaluate these recommendations and decide whether you want to implement them. Simple!

In fact, you'll now go through the DTA and see how it works in Exercise 2.1.

Exercise 2.1: Using the Database Tuning Advisor

The DTA represents an easy way to tune your SQL Server 2005 database solution, quickly and easily. Don't be under the illusion that you can do better than the DTA. Although that might be the case, depending on your skill and knowledge of the environment, we still recommend running the DTA to see whether its recommendations mirror yours. This exercise assumes you have captured a SQL Server Profiler/SQL Trace workload file, as discussed in Chapter 1.

1. Use the Windows Start menu, and choose All Programs > Microsoft SQL Server 2005 > Performance Tools > Database Engine Tuning Advisor.

2. Connect to your SQL Server 2005 instance using Windows authentication.
3. Type in a session name and the location of the previously captured workload file. Click the database that you want to tune, and filter out any tables as appropriate.

4. Click the Tuning Options tab. The Tuning Options tab allows you to further refine what you want the DTA to perform during the tuning session. Review and configure the DTA tuning options appropriately. If in doubt, leave the default settings.

5. Click the Advanced Options button. The Advanced Tuning Options dialog box allows you to further refine the tuning options. Review and configure the DTA advanced tuning options appropriately. If in doubt, leave the default settings. Click the OK button when you have finished.

6. Click the Start Analysis button on the toolbar. The DTA should start the tuning process.

7. After the DTA has completed, it will generate a Recommendations tab where you can analyze the indexing and partitioning recommendations it has made. Notice the DTA will also generate an estimated improvement in performance. You can ignore recommendations by deselecting the appropriate check box.

Note: Remember that the quality of the DTA's recommendations is based on the quality of the workload file you have captured. So, ensure that the workload file is a correct representation of the expected database activity.

8. Click the recommendations hyperlink, located at the far right of the Recommendations tab, to see the T-SQL script that would implement the recommendation. Click the Close button when you have finished.

9. Click the Reports tab. The Reports tab will show you a summary of this DTA session. Click the Select Report drop-down list to see what tuning reports are available.

10. Review the various tuning reports available.
11. You can save the results of the DTA's session through the File menu for later analysis as required.
12. When you have finished, you can exit the DTA.

Identifying Poorly Performing Queries


Traditionally, DBAs would have to rely on user feedback to identify poorly performing queries. They would wait until users of a database solution complained about a specific performance problem and then investigate whether it was a true performance issue or a contention issue. Not anymore! SQL Server 2005 offers some innovative new techniques that allow you to identify poorly performing queries that are being run in your environment. This allows you to tune these poorly performing queries without having to rely on user complaints. An additional benefit is that, once tuned, these queries typically have less of an impact on the other queries being executed, so the overall database solution performs better. Otherwise, you can still use the more traditional tools to identify poorly performing queries in SQL Server 2005. We'll run through a summary of these techniques in the following sections.

Query Plan

Once a query has been identified for potential performance tuning, be it through user experience or its relative importance in the SQL Server 2005 database solution, the main technique you would use to analyze its performance is to analyze its query plan. There are also a few other metrics to consider:

- STATISTICS IO
- STATISTICS TIME
- STATISTICS PROFILE

A new feature of SSMS allows you to capture and display execution metrics of your queries. These metrics can be compared between multiple executions and averaged, as shown in Figure 2.1, which can be useful to exclude environmental anomalies when testing.

Figure 2.1: Client statistics
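The STATISTICS options listed above are session-scoped SET options that report I/O and timing metrics on the Messages tab. As a minimal sketch (the table name assumes the AdventureWorks sample database and is purely illustrative):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Any query of interest; this one is purely illustrative.
SELECT COUNT(*) FROM Sales.SalesOrderDetail;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The Messages tab will then report the logical, physical, and read-ahead reads per table, along with the CPU and elapsed times for the compile and execution phases.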

To generate these client statistics, you need to click the Include Client Statistics button on the Standard toolbar before executing the query.

Note  These client statistics are not automatically reset for a given user connection in SSMS. If you change your query, don't forget to reset the client statistics via the Reset Client Statistics option in the Query menu.

But as we indicated, analysis of the query's query plan will allow you to determine whether the query is performing optimally. We will be discussing query plans and their analysis in more detail in a moment.

SQL Server Traces

In Chapter 1, we covered how you can capture a trace of the queries being executed on your SQL Server 2005 instance either through SQL traces or through SQL Server Profiler. The captured trace information obviously provides a means of identifying potentially poorly performing queries. When analyzing your SQL traces, look for events that indicate the following:

- Excessive recompilations/compilations
- Expensive operations:
  - Table scans
  - Hash operations
  - Join operations
  - Sort operations

You can capture quite a number of SQL Server Profiler trace events for your analysis:

- Errors and Warnings event classes
  - Execution Warnings
  - Hash Warning
  - Missing Column Statistics
  - Missing Join Predicate
  - Sort Warnings
- Stored Procedures event classes
  - SP:Recompile
- TSQL event classes
  - SQL:StmtRecompile

Note  Of course, you can also create a trace based on the TSQL and Stored Procedure event classes. You can then analyze your trace based on the longest Duration values: the slowest queries, or, rather, the queries that are taking the longest to run. But in our book, time is always an imprecise measurement. For obvious reasons, the queries that are taking the longest to run might not necessarily be inefficient. But

they can be. So we suppose it's a worthwhile method to adopt.

Dynamic Management Views

SQL Server 2005 offers a number of DMVs that you can use to identify poorly performing queries. We will go through them in more detail as required, but it is worth listing them all here. A query will not be performing optimally if the indexes that it uses are heavily fragmented or otherwise inefficient. To monitor index fragmentation, usage, overhead, and hotspots, you can take advantage of the following DMVs:

- sys.dm_db_index_usage_stats
- sys.dm_db_index_operational_stats
- sys.dm_db_index_physical_stats

We will cover the sys.dm_db_index_physical_stats DMV and go through an example in more detail later in this chapter. A particularly exciting set of DMVs in SQL Server 2005 provides information about potentially missing indexes that could enhance query performance:

- sys.dm_db_missing_index_group_stats
- sys.dm_db_missing_index_groups
- sys.dm_db_missing_index_details
- sys.dm_db_missing_index_columns

The sys.dm_db_missing_index_details DMV returns detailed information about missing indexes, whereas the sys.dm_db_missing_index_columns(index_handle) DMV returns information about database table columns that are missing an index. Potentially missing indexes are applicable to the following types of query predicates:

- Equality predicates
- Inequality predicates (which represent any operator other than an equality)
- Included columns (for covering queries)

These missing-index DMVs are not intended as fine-tuning mechanisms, and they don't provide an order for the columns of an index. They are intended to be used as guidelines for the indexes that the SQL Server 2005 database engine considers you need for a database. Be particularly careful with inequality predicate recommendations.

Note  Because DMVs are memory-only structures, the missing index information is deleted when the SQL Server 2005 instance is shut down.
Consequently, you should export the missing index information if required before shutting down your SQL Server 2005 instance. The following example queries the missing index DMVs to determine whether there are any missing indexes that would result in poorly performing queries:
SELECT *
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
JOIN sys.dm_db_missing_index_groups AS mig

    ON mig.index_handle = mid.index_handle

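If you want to go a step further and weigh the missing-index suggestions by their estimated benefit, you can join in the group statistics DMV as well. This is a sketch only; the column choices are ours, not a prescribed recipe:

```sql
SELECT  mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact DESC ;
```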
The following set of execution-related DMVs can also be queried to help you identify potentially poorly performing queries:

- sys.dm_exec_query_stats
- sys.dm_exec_query_plan
- sys.dm_exec_sql_text
- sys.dm_exec_cached_plans

The sys.dm_exec_query_stats DMV is particularly useful because it returns aggregate performance metrics about the cached execution plans. This allows you to quite easily query your SQL Server 2005 instance for the queries that have taken up the most resources, be they logical reads, physical writes, CLR time, and so forth. For example, you might decide to investigate the most processor-intensive queries that have been executed on your SQL Server 2005 instance. In this case, you could query the sys.dm_exec_query_stats DMV:
SELECT *
FROM sys.dm_exec_query_stats AS eqs
CROSS APPLY sys.dm_exec_query_plan(eqs.plan_handle)
ORDER BY total_worker_time DESC ;

Alternatively, you might decide to look for queries that use processor-intensive operators such as hash matches and sorts. In this case, you could query the sys.dm_exec_cached_plans DMV, filtering for either 'Hash Match' or 'Sort' on the query_plan column from the following base query:
SELECT *
FROM sys.dm_exec_cached_plans AS ecp
CROSS APPLY sys.dm_exec_query_plan(ecp.plan_handle) ;
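Because the query_plan column is typed as xml, a simple (if crude) way to apply that filter is to cast the plan to a string first. A hedged sketch:

```sql
SELECT ecp.plan_handle, qp.query_plan
FROM sys.dm_exec_cached_plans AS ecp
CROSS APPLY sys.dm_exec_query_plan(ecp.plan_handle) AS qp
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE N'%Hash Match%'
   OR CAST(qp.query_plan AS nvarchar(max)) LIKE N'%Sort%' ;
```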

The sys.dm_os_wait_stats DMV returns aggregate statistics about the waits encountered by executing threads and thus can be queried to help diagnose potential query performance problems. We will cover the sys.dm_os_wait_stats DMV in more detail later in this chapter. The sys.dm_tran_locks DMV returns the database resources that are currently locked, along with requests for those same resources. Likewise, we will cover the sys.dm_tran_locks DMV in more detail later. There are other DMVs of course, so make sure you are comfortable with them. Welcome to the brave new world!
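Before moving on, here is a quick sketch of querying the wait-statistics DMV mentioned above for the most significant cumulative waits:

```sql
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC ;
```

Keep in mind that these counters are cumulative since the instance last started (or since the statistics were cleared).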

Analyzing a Query Plan to Detect Inefficiencies in Query Logic


A query plan basically describes how the query optimizer executed a T-SQL statement within a batch. The query plan shows the different types of operations that needed to be performed and the order in which they were performed. It also shows the data access method used to retrieve data from the tables, be it an index scan, index seek, or table scan. It shows which steps consumed the most resources and/or time within both the T-SQL statement and the batch.

Note  The query plan is more commonly referred to as the execution plan.

The SQL Server Management Studio (SSMS) environment has the ability to display the execution plan that SQL Server 2005's query optimizer used in the execution of your T-SQL batch. This ability can help you

determine whether your queries are executing efficiently. If you decide they are not efficient, you can take corrective measures, such as rewriting the query or redesigning your indexing strategy. Sometimes you will have to override the optimizer through query/table hints.

Generating Query Plans

One of the features we really like in SQL Server 2005 is its ability to easily generate query plans. Understanding them might be another matter, of course! There is no longer any mystery as to how the database engine is executing a particular query, unlike with many other relational database products. Of course, the amount of information and metrics that query plans return can be a bit daunting at first. You can generate a query plan in two basic ways in SQL Server 2005. The first way, through T-SQL, which will generate a text version of the query plan, is to use one of the following SET options:

SHOWPLAN_TEXT  The SET SHOWPLAN_TEXT ON statement will show the execution plan for the T-SQL statements executed. Note that the T-SQL will not actually be executed.

SHOWPLAN_ALL  The SET SHOWPLAN_ALL ON statement will return the same information that the SHOWPLAN_TEXT option does, with additional information about estimated resource utilization. Again, the T-SQL will not actually be executed.

SHOWPLAN_XML  The SET SHOWPLAN_XML ON statement will return the same information that the SHOWPLAN_TEXT option does, but in an XML format. Again, the T-SQL will not actually be executed.

Note  The query_plan column of the sys.dm_exec_query_plan DMV returns the same information as the SHOWPLAN_XML option.

STATISTICS PROFILE  The SET STATISTICS PROFILE ON statement will return the same information that SET SHOWPLAN_ALL ON does, as well as execution statistics. The difference is that the T-SQL will be executed.

STATISTICS XML  The SET STATISTICS XML ON statement will return the same information that the STATISTICS PROFILE option does, but in an XML format. Again, the T-SQL will be executed.
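As a minimal sketch of the first of these options (the query itself is just an illustrative example against the AdventureWorks sample database):

```sql
SET SHOWPLAN_TEXT ON ;
GO
SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
WHERE SalesOrderID = 43659 ;
GO
SET SHOWPLAN_TEXT OFF ;
GO
```

The SELECT is not executed; instead, the plan's operator tree is returned as rows of text.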
These SET options are designed mainly for database developers and/or DBAs who want to send an execution plan in a text format to someone else, as in the case of an email. The second and more popular way, as shown in Figure 2.2, is graphically through the SSMS environment. SSMS has the ability to display both the actual and estimated execution plans. With the estimated execution plan, your T-SQL script is only parsed and an execution plan estimated based on the best efforts of the query optimizer. The actual execution plan, on the other hand, can be generated only when your T-SQL script is actually executed.

Figure 2.2: Query plan Warning Be careful of drawing conclusions from the estimated execution plan because SQL Server 2005 does not guarantee that it will be the same as the actual execution plan at runtime. Database developers typically use the estimated execution plan as an indication of how their T-SQL query is going to perform without consuming the resources of the SQL Server instance, which could have a

dramatic impact on performance in a production environment. To get more information about a particular operation of the query plan, you can move your mouse over that icon until a ToolTip comes up, as shown in Figure 2.3.

Note  The graphical execution plan contains a lot of information that is often overlooked. For example, the width of the arrows linking the nodes indicates the amount of data that was passed across. You can determine the number of rows passed across by holding the mouse over the arrow.

Figure 2.3: Additional query plan operation information

Tip  For even more detailed information about a particular operation of the execution plan, you can view the Properties window by pressing F4.

You'll now look at the different elements that make up the graphical execution plan. Note that not all of these elements will be visible for every operation in an execution plan.

Physical Operation  The physical operation performed by the query, such as a bookmark lookup, hash join, nested loop, and so on. Physical operators correspond to an execution algorithm and have costs associated with them.

Tip  You should watch out for physical operators in the query execution plan that are displayed in red, because they typically indicate some sort of a problem, such as missing statistics.

Logical Operation  The relational algebraic operation that matches the physical operation; typically, various physical operators can implement a logical operation.

Actual Number of Rows  The actual number of rows returned by this operation.

Estimated I/O Cost  The estimated cost of all I/O resources for this operation.

Tip  The estimated I/O cost should be as low as possible.

Estimated CPU Cost  The estimated cost of all CPU resources for this operation.

Estimated Operator Cost  The estimated cost of performing this operation.

Note  The estimated operator cost is also represented as a percentage of the overall cost of the query in

parentheses.

Estimated Subtree Cost  The estimated cost of performing this operation and all preceding operations in its subtree.

Estimated Number of Rows  The estimated number of rows returned by the operation.

Tip  You should watch out for a large discrepancy between the Estimated Number of Rows value and the Actual Number of Rows value.

Estimated Row Size  The estimated size of the rows, in bytes, retrieved by the operation.

Actual Rebinds/Actual Rewinds  The number of times the physical operator needed to initialize itself and set up any internal data structures. A rebind indicates that the input parameters changed and a reevaluation was done. A rewind indicates that existing structures were reused.

Ordered  Whether the rows returned by this operation are ordered.

Node ID  A unique identifier for the node.

Object/Remote Object  The database object that this operation accessed.

Output List  The list of outputs for this particular operation.

Note  For more information about the different types of logical operators, search for the topic "Graphical Execution Plan Icons (SQL Server Management Studio)" in SQL Server 2005 Books Online.

Understanding the execution plan comes with experience, so start using execution plans as soon as you can. We always advise students to turn on the option to see them in SSMS and observe them as you are developing queries. It's all about exposure to them. You'll examine the different ways in which you can generate an execution plan in Exercise 2.2.

Exercise 2.2: Generating Query Plans

In this exercise, you will examine the different ways in which you can generate an execution plan for analysis.

1. Open SQL Server Management Studio, and connect to your SQL Server 2005 instance using Windows authentication.
2. Click the New Query button, connect to your SQL Server 2005 instance, and choose the AdventureWorks database.
3. Type the following query into the query pane:
SELECT MIN(SalesOrderID), MAX(SalesOrderID) FROM Sales.SalesOrderHeader

4. Click the Include Actual Execution Plan button on the toolbar.
5. Execute the query.
6. Click the Execution Plan tab in the bottom pane. Examine the various components of the execution plan, as shown here.

7. When you have finished, deselect the Include Actual Execution Plan button on the toolbar. 8. Modify the query as follows:
SET SHOWPLAN_ALL ON
GO
SELECT MIN(SalesOrderID), MAX(SalesOrderID)
FROM Sales.SalesOrderHeader

9. Execute the query.
10. Examine the various columns of information in the execution plan.
11. Exit SQL Server Management Studio when you have finished.

Analyzing Query Plans

It's all about picking your battles! In other words, you could almost argue that every query will have a bottleneck or some component that is the slowest. Consequently, you need to know where to look and what to look for. You should start with analyzing the query plans of the most resource-expensive and/or the slowest queries of your SQL Server 2005 database solution. So, you should follow this methodology:

1. Identify the most expensive batches (or queries) that are running on your SQL Server 2005 instance. Identify the batches (or queries) that are running the slowest on your SQL Server 2005 instance.
2. Identify what queries are taking up the highest percentage of the resources of the batches identified previously through the query plan.
3. Identify the needless or expensive operations within the query plan of the queries identified previously.

When analyzing the query plans, which can for obvious reasons be quite complex, you can watch out for the following potentially expensive and/or needless operations:

Table scans  Table scans on large tables can be a particularly expensive operation in a SQL Server 2005 instance, especially where memory is a limited resource. Don't forget that if you can avoid using table scans, you free up the buffer pool's memory for more useful data.

Tip  You should not bother with table scans on smaller tables. In fact, in certain cases table scans can be more efficient than index seeks. So, do not get carried away with eliminating table scans,

especially if you are confident that you have an appropriate indexing strategy on that table.

Hash operations  In certain cases, typically where no useful indexes exist, SQL Server 2005 will execute an operation such as a JOIN or a GROUP BY by using a hash operation. Hash operations are particularly compute-intensive. They can also consume a lot of memory for the internal structures generated. Hash recursion or hash bailout will further reduce performance.

Tip  Hash operations typically suggest the need for an index.

Bookmark lookups  Excessive bookmark lookups can potentially contribute to poor query performance. A bookmark lookup is where SQL Server 2005, in the execution of a query, used a bookmark (row ID or clustering key) from a nonclustered index to look up the corresponding data row in the table (heap or clustered index). If you have a lot of bookmark lookups for a query, this jumping between the leaf level of the nonclustered index and the table can be inefficient.

Tip  Excessive bookmark lookups typically suggest the need for a covering index.

Note  As an aside, bookmark lookups are no longer technically called that in SQL Server 2005. They are now called either clustered index seeks or RID lookups. To confuse you further, in SQL Server 2005 Service Pack 2, the key lookup operator replaces the clustered index seek.

Sorts  Sorting operations can likewise needlessly waste your server resources. SQL Server 2005 has to stream the query's result set into the tempdb system database, sort it, and then stream it back to the query.

Filtering  As with sorting, filtering operations can needlessly consume server resources.

Once you have analyzed the query plan and identified potentially inefficient operations, you will need to resolve these potential performance issues.

Resolving Query Performance Issues

Resolving query performance issues can be particularly challenging, especially for the DBA.
Not only do you have to know SQL Server 2005's architecture and how it works, but you also need to know two crucial points:

- How your database users are using the database solution
- The nature of the data being queried:
  - Statistics
  - Selectivity
  - Density

These two points are critical to know in SQL Server 2005 because it uses a cost-based optimization model. Unfortunately, the user and data patterns are typically more part of the developer's (or business analyst's) domain. In any case, you should consider the following techniques for resolving your query performance issues.
Run the Database Tuning Advisor

This should be obvious by now.


Implement Appropriate Indexes

Perhaps the most common reason for poor query performance is a poor indexing strategy, or a complete lack of an indexing strategy, as we have witnessed so many times. (We just saw it again at a television channel where one of us was performing a SQL Server health check last week.)

As a DBA, you will most likely have to liaise with database developers and database users so as to be able to implement an appropriate indexing strategy. For completeness' sake, we should also mention that you should check to see whether indexes are disabled (a new feature in SQL Server 2005), in which case you will have to enable them.
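Both points can be sketched in T-SQL. The index name and column choices below are purely illustrative, assuming the AdventureWorks sample database; a real covering index must be driven by your actual workload:

```sql
-- A covering index to eliminate bookmark lookups for a hypothetical query
-- that filters on CustomerID and returns OrderDate and TotalDue:
CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_Customer_Covering
ON Sales.SalesOrderHeader (CustomerID)
INCLUDE (OrderDate, TotalDue) ;

-- Find any disabled indexes; rebuilding a disabled index re-enables it:
SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name
FROM sys.indexes
WHERE is_disabled = 1 ;
```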
Rewrite the Query

Another obvious technique is to rewrite the query. This generally falls on the database developer, though, so go and harass him or her! As a DBA, it is usually sufficient to identify the poorly performing queries. You can employ all sorts of considerations and techniques when writing T-SQL queries: use joins versus subqueries, appropriately use user-defined functions, avoid cursors, and so on.
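To illustrate the cursor point, here is a contrived sketch (the table, column, and status values are illustrative only) of the same logical update written both ways; the single set-based statement is almost always the better choice:

```sql
-- Cursor version: processes one row at a time.
DECLARE @id int ;
DECLARE order_cur CURSOR FOR
    SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE Status = 1 ;
OPEN order_cur ;
FETCH NEXT FROM order_cur INTO @id ;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Sales.SalesOrderHeader SET Status = 2 WHERE SalesOrderID = @id ;
    FETCH NEXT FROM order_cur INTO @id ;
END
CLOSE order_cur ;
DEALLOCATE order_cur ;

-- Set-based version: one statement, one pass.
UPDATE Sales.SalesOrderHeader SET Status = 2 WHERE Status = 1 ;
```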
Update Statistics

Another common cause of poor query performance that is commonly overlooked is inaccurate or out-of-date statistics. Accurate statistical information about your data is almost as important as an appropriate indexing strategy. Without accurate statistics, the query optimizer can choose suboptimal query plans. You can run UPDATE STATISTICS on the appropriate table, index, or view. Otherwise, you can execute the sp_updatestats system stored procedure, which will update statistics for all the appropriate objects in the database.

Note  You can use the sp_autostats system stored procedure to display or change the automatic updating of statistics for tables and indexes.

In particular, do not forget to check the database options as part of your troubleshooting methodology:

- AUTO_CREATE_STATISTICS
- AUTO_UPDATE_STATISTICS

We will cover statistics in more detail when we look at maintaining and optimizing indexes later in this chapter.
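A short sketch of these statements and checks (the table name is illustrative, from the AdventureWorks sample database):

```sql
-- Refresh statistics for one table, sampling every row:
UPDATE STATISTICS Sales.SalesOrderHeader WITH FULLSCAN ;

-- Or refresh statistics database-wide:
EXEC sp_updatestats ;

-- Check when index statistics were last updated:
SELECT name, STATS_DATE(object_id, index_id) AS stats_last_updated
FROM sys.indexes
WHERE object_id = OBJECT_ID('Sales.SalesOrderHeader') ;

-- Confirm the statistics-related database options are on:
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases ;
```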
Use the RECOMPILE Option

In certain cases, SQL Server 2005's ability to cache and reuse execution plans leads to suboptimal performance. Your queries might be using atypical or temporary values for their parameters, for which the cached execution plan is suboptimal. In this case, you can use the RECOMPILE option. You can force SQL Server 2005 to recompile the query when it is next executed at a number of different levels:
As part of the stored procedure's definition, so that the stored procedure's cached execution plan is never reused
As part of the EXECUTE statement
As an optimizer hint (which we will cover shortly)
Through the sp_recompile system stored procedure
Note Don't forget to document your reasons for adding the RECOMPILE option.
The following example shows the [uspGetBillOfMaterials] stored procedure being altered in the AdventureWorks database so that its query plan will never be reused:


USE AdventureWorks ;
GO
ALTER PROCEDURE [dbo].[uspGetBillOfMaterials]
    @StartProductID [int],
    @CheckDate [datetime]
WITH RECOMPILE
AS
BEGIN
    SET NOCOUNT ON;
    ...
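The remaining levels listed above can be sketched as follows (our own hypothetical examples, with made-up parameter values for the AdventureWorks stored procedure):

```sql
-- Recompile for this execution only, without touching the procedure definition
EXEC [dbo].[uspGetBillOfMaterials] 718, '2004-06-26' WITH RECOMPILE ;
GO
-- Mark the procedure so its cached plan is discarded and a new plan
-- is compiled the next time the procedure executes
EXEC sp_recompile N'dbo.uspGetBillOfMaterials' ;
GO
```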

Statement-Level Compilation
Statement-level recompilation is a new feature in SQL Server 2005. When SQL Server 2005 recompiles a stored procedure, it recompiles only the statement that caused the recompilation, not the entire stored procedure (or batch). The obvious benefit is that SQL Server 2005 has less work to do when a recompilation is triggered.
It is important to realize that you cannot compare recompilation counts between SQL Server 2000 and SQL Server 2005. In SQL Server 2000, the SP:Recompile event class indicated that a stored procedure, trigger, or user-defined function had been recompiled, so it was at the batch level. In SQL Server 2005, the same SP:Recompile event class indicates a recompilation within the batch at the statement level. Because of this change, the SP:Recompile event class is being deprecated. You should use the SQL:StmtRecompile event class instead.

Use Query Hints

When all else fails, you should consider using query hints. Query hints basically allow you to override the query optimizer by telling it what strategy it should use in a component of the execution plan.
We cannot stress enough that using query hints is considered a last resort, so you should first investigate why SQL Server 2005 is executing a query in a particular way that you consider to be incorrect. With all due respect, the chances are that you, not the query optimizer, will be wrong. So often in the field we have encountered scenarios where people have utilized query hints that were not optimal, primarily because the developers did not understand indexes and how SQL Server 2005 really works internally. Having said that, there are always exceptions.
Note Don't forget to document your reasons for adding the query hint.
You can direct a number of query hints at the query optimizer that control how the query optimizer will process the query:
<query_hint> ::=
{ { HASH | ORDER } GROUP
  | { CONCAT | HASH | MERGE } UNION
  | { LOOP | MERGE | HASH } JOIN
  | FAST number_rows
  | FORCE ORDER
  | MAXDOP number_of_processors
  | OPTIMIZE FOR ( @variable_name = literal_constant [ , ...n ] )
  | PARAMETERIZATION { SIMPLE | FORCED }
  | RECOMPILE
  | ROBUST PLAN
  | KEEP PLAN
  | KEEPFIXED PLAN
  | EXPAND VIEWS
  | MAXRECURSION number
  | USE PLAN N'xml_plan'
}

We'll go through some of the more commonly used query hints:
The { HASH | ORDER } GROUP query hint indicates that aggregations generated by operations such as GROUP BY and DISTINCT should use hashing or ordering.
The { CONCAT | HASH | MERGE } UNION query hint indicates that UNION operations should either concatenate, hash, or merge the UNION sets.
The { LOOP | MERGE | HASH } JOIN query hint indicates the type of join to be performed by the query.
The FAST number_rows query hint indicates that the query should be optimized for the first number_rows rows.
Note The FASTFIRSTROW table hint is equivalent to OPTION (FAST 1).
The FORCE ORDER query hint indicates that the tables should be joined in the order in which they appear in the query.
The MAXDOP query hint indicates the maximum number of processors to use when executing the query.
The OPTIMIZE FOR ( @variable_name = literal_constant [ , ...n ] ) query hint indicates that the query should be optimized for the specific parameter value specified.
The EXPAND VIEWS query hint indicates that indexed views should not be considered when executing the query.
The RECOMPILE query hint indicates that any cached execution plans should be ignored so that a new and (ideally) optimal query plan can be generated.
Tip The RECOMPILE query hint is particularly useful for queries (or stored procedures) that execute with greatly different parameter values.
Warning Be careful with the RECOMPILE query hint because recompilations consume processor resources.
The USE PLAN query hint indicates that a particular query plan (specified in XML) should be used when executing the query.
You can also tell the query optimizer to use a particular index. The INDEX ( index_val [ ,...n ] ) table hint indicates that a particular index should be used, based on its name or index_id value (which can be determined from sys.indexes).
Note A table hint of INDEX(0) indicates that the query should use a table scan.
Note A table hint of INDEX(1) indicates that the query should use the clustered index.
We will cover the rest of the table hints in more detail in the Troubleshooting Concurrency Issues section later in this chapter, because the topic is more appropriate there.
Tip Don't forget that poor query performance might also be attributed to inappropriate optimizer hints. So, you might have to rewrite the queries and remove the query hints.
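As an illustration of the syntax, the following are hypothetical examples of ours against the AdventureWorks [Production].[Product] table, not code from the chapter:

```sql
-- Query hints: optimize the plan for a specific (typical) parameter value
-- and limit the query to a single processor
DECLARE @Price money ;
SET @Price = 100 ;
SELECT Name, ListPrice
FROM Production.Product
WHERE ListPrice > @Price
OPTION (OPTIMIZE FOR (@Price = 1000), MAXDOP 1) ;
GO
-- Table hint: force the use of a particular index by name
SELECT Name, ListPrice
FROM Production.Product WITH (INDEX (AK_Product_Name))
WHERE Name LIKE 'B%' ;
```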
Use Plan Guides

What about the scenario where you cannot change the query in your SQL Server 2005 database solution, because it is a third-party solution and you do not have access to the source code? More and more independent software vendors are using SQL Server as the database engine for their software solutions, unfortunately often with little knowledge of SQL Server itself. So, in a lot of cases, these third-party solutions are giving SQL Server 2005 a bad reputation! In these circumstances, when you cannot or do not want to change the query code, you can take advantage of plan guides in SQL Server 2005.
Note You can take advantage of plan guides in SQL Server 2005 in other circumstances as well, such as when you want a query to behave in a consistent manner, as is the case for development or benchmarking purposes.
A plan guide is a mechanism that allows you to attach query hints to queries as SQL Server 2005 is executing them. So when a query is executed, if SQL Server 2005 cannot find a cached execution plan, it will generate a new one, taking into account the query hints provided by the plan guide.
Note Plan guides are available in the Developer, Standard, and Enterprise Editions of SQL Server 2005.
There are three different scopes for plan guides in SQL Server 2005:
OBJECT An OBJECT plan guide is used for queries that execute in T-SQL stored procedures, scalar functions, multistatement table-valued functions, or T-SQL DML triggers.
SQL A SQL plan guide is used for ad hoc T-SQL statements or batches.
TEMPLATE A TEMPLATE plan guide is used for stand-alone parameterized queries.
SQL Server 2005 plan guides support the following query hints:
{HASH | ORDER} GROUP
{CONCAT | HASH | MERGE} UNION
{LOOP | MERGE | HASH} JOIN
FAST number_rows
FORCE ORDER
MAXDOP number_of_processors
OPTIMIZE FOR ( @variable_name = literal_constant ) [ ,...n ]
RECOMPILE
ROBUST PLAN
KEEP PLAN
KEEPFIXED PLAN
EXPAND VIEWS
MAXRECURSION number
USE PLAN <xmlplan>
Plan guides have the following restrictions in SQL Server 2005:
Plan guides cannot be created against encrypted objects.
Plan guides cannot be created against DDL triggers.
Plan guides apply only within the database in which they were created.
The query text matching used by the SQL and TEMPLATE scopes has to be exact, including comments and whitespace.
SQL Server 2005 provides two stored procedures to create and manage plan guides:
sp_create_plan_guide The sp_create_plan_guide system stored procedure is used to create a plan guide.
sp_control_plan_guide The sp_control_plan_guide system stored procedure is used to enable, disable, or drop a plan guide.
The following example shows a SQL plan guide being created for the SELECT TOP 10 PERCENT * FROM Production.Product ORDER BY ListPrice DESC T-SQL statement. In this case, you want to ensure that the T-SQL statement will never use more than one processor because it is not an important query:
EXEC sp_create_plan_guide
    @name = 'PlatypusPlanGuide',
    @stmt = 'SELECT TOP 10 PERCENT * FROM Production.Product ORDER BY ListPrice DESC',
    @type = 'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = 'OPTION (MAXDOP 1)' ;

SQL Server 2005 provides the sys.plan_guides database catalog view to show what plan guides exist in a database. Note For more information about plan guides, search for the Optimizing Queries in Deployed Applications by Using Plan Guides topic in SQL Server 2005 Books Online.
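Continuing the example, inspecting and managing the guide might look like the following sketch ('PlatypusPlanGuide' is the plan guide created above):

```sql
-- List the plan guides defined in the current database
SELECT name, scope_type_desc, is_disabled, hints
FROM sys.plan_guides ;
GO
-- Disable, re-enable, and finally drop the plan guide
EXEC sp_control_plan_guide N'DISABLE', N'PlatypusPlanGuide' ;
EXEC sp_control_plan_guide N'ENABLE',  N'PlatypusPlanGuide' ;
EXEC sp_control_plan_guide N'DROP',    N'PlatypusPlanGuide' ;
```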

Maintaining and Optimizing Indexes


Another SQL Server book, another section about indexes. The fundamentals of indexes have not changed since SQL Server 7.0 (since SQL Server 6.0, really, although that can be debated because there have been some minor architectural changes). But you really should focus on the physical side of indexing, because you are predominantly concerned with the maintenance and optimization of the index structures.
Note Of course, SQL Server 2005 has new XML indexes, improved full-text indexes, and enhancements to nonclustered indexes (included columns), but they are beyond the scope of this book.
Index Architecture
Indexes are fundamentally structures internal to SQL Server, called balanced trees (B-trees), which predominantly improve performance for data access because they allow the database engine to reach the data more quickly by traversing the B-tree. They are also the mechanism that SQL Server uses to enforce uniqueness. It's as simple as that!
It helps to understand what a B-tree is, so Figure 2.4 shows an index on the [CustomerID] field of a [Customers] table. The top of our B-tree is known as the root, and the bottom-most level is known as the leaf level.


Figure 2.4: SQL Server index (B-tree)
The disadvantage of indexes is that they consume space and can potentially slow down the performance of your Data Manipulation Language (DML) operations, especially if you create too many of them, because SQL Server has to maintain the B-tree structures in real time. So, optimizing indexes is about finding a balance between the performance gains and the overhead on the database engine.
Ultimately, and not knowing this is a common mistake, the most important factors are your users and the data usage patterns. We know you didn't want to hear this, but the point is, you might have created what you think is the world's best indexing strategy, but it's based on assumptions about what data you think your users will be accessing.
Note Generally, you should not be overly concerned about indexes during your database design. You really should be determining your indexing strategy after determining your users' data usage patterns. In the real world this rarely happens, so a basic indexing strategy is usually incorporated into the initial database design.
The syntax for creating an index is as follows:
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
    ON <object> ( column [ ASC | DESC ] [ ,...n ] )
    [ INCLUDE ( column_name [ ,...n ] ) ]
    [ WITH ( <relational_index_option> [ ,...n ] ) ]
    [ ON { partition_scheme_name ( column_name )
         | filegroup_name
         | default } ]
[ ; ]

<object> ::=
{
    [ database_name. [ schema_name ] . | schema_name. ]
    table_or_view_name
}

<relational_index_option> ::=
{
    PAD_INDEX = { ON | OFF }
  | FILLFACTOR = fillfactor
  | SORT_IN_TEMPDB = { ON | OFF }
  | IGNORE_DUP_KEY = { ON | OFF }
  | STATISTICS_NORECOMPUTE = { ON | OFF }
  | DROP_EXISTING = { ON | OFF }
  | ONLINE = { ON | OFF }
  | ALLOW_ROW_LOCKS = { ON | OFF }
  | ALLOW_PAGE_LOCKS = { ON | OFF }
  | MAXDOP = max_degree_of_parallelism
}

Note You can create an index incorporating multiple columns, but you still have the 16-column limit. This has not changed from previous versions.
For completeness' sake, the syntax for creating an XML index is as follows:
CREATE [ PRIMARY ] XML INDEX index_name
    ON <object> ( xml_column_name )
    [ USING XML INDEX xml_index_name
        [ FOR { VALUE | PATH | PROPERTY } ] ]
    [ WITH ( <xml_index_option> [ ,...n ] ) ]
[ ; ]

<object> ::=
{
    [ database_name. [ schema_name ] . | schema_name. ]
    table_name
}

<xml_index_option> ::=
{
    PAD_INDEX = { ON | OFF }
  | FILLFACTOR = fillfactor
  | SORT_IN_TEMPDB = { ON | OFF }
  | STATISTICS_NORECOMPUTE = { ON | OFF }
  | DROP_EXISTING = { ON | OFF }
  | ALLOW_ROW_LOCKS = { ON | OFF }
  | ALLOW_PAGE_LOCKS = { ON | OFF }
  | MAXDOP = max_degree_of_parallelism
}

The main option to be aware of is the type of index you're creating, clustered or nonclustered; you will look at this shortly. Realistically, you'll probably never have to implement this, but you should be familiar with what the FILLFACTOR option does. Basically, the FILLFACTOR option stipulates how full SQL Server makes the 8KB pages of the B-tree when it initially creates the index.
Note To get a deeper understanding of where to use the FILLFACTOR option (and page splits), search for the Fill Factor topic in SQL Server Books Online.
The reason not to use all of the available space in a page during the initial index creation is to lessen the impact on the performance of future DML operations. So, you would implement a FILLFACTOR setting only for indexes on volatile tables where you anticipate or are experiencing heavy OLTP activity and suspect that it is the cause of performance problems.
Note A FILLFACTOR setting of 0 or 100 is identical.
The PAD_INDEX option applies to the FILLFACTOR option and basically dictates whether the fill factor setting should also be applied to the nonleaf levels of the B-tree.
SQL Server 2005 includes a new option when creating indexes to improve performance. We will discuss this INCLUDE option shortly when we cover nonclustered indexes.
Note Don't forget that a fill factor will internally fragment your data. We discuss that in more detail in Chapter 5, Designing a Strategy to Maintain a Database Solution.
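For instance, an index on a volatile table might be created with 20 percent free space left in every page (a sketch of ours; the [Orders] table and index name are hypothetical, not from the chapter):

```sql
-- Leave each page 80 percent full at creation time; PAD_INDEX = ON applies
-- the same fill factor to the nonleaf levels of the B-tree as well
CREATE NONCLUSTERED INDEX [NCI_Orders_OrderDate]
ON Orders(OrderDate)
WITH (FILLFACTOR = 80, PAD_INDEX = ON) ;
```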
Clustered Indexes


Creating a clustered index on a table has the effect of rearranging the data in your table so that the index order and the physical order are one and the same. Structurally, this is because the leaf level of your clustered index consists of the actual data rows that make up the table.
Note A table without a clustered index on it is known as a heap.
So if you create a clustered index based on the [LastName] field of your [Customer] table, your table will be physically stored in order of the customers' surnames. Alternatively, if you create a clustered index on the [FirstName] field, all the customers will be stored in order of the customers' first names. Simple! Because the clustered index determines the physical order in which the table is stored, you can have only one clustered index per table. After all, the clustered index is the table.
Clustered indexes work well where the data is highly selective but also where the data is dense.
Note Think of selectivity as the degree of uniqueness of your data. So, the [LastName] field should have a higher selectivity when compared to the [FirstName] field.
Note Density refers more to the number of duplicates in your data. So, the Smith value will probably be denser than the Isakov value in the [LastName] field.
Functionally, this translates into the fact that clustered indexes work well for point queries, range queries, join operations (which are really either point or range queries), or pretty much most queries. The following example shows a clustered index being created on the [LastName] field of the [Customer] table:
CREATE CLUSTERED INDEX [CLI_LastName] ON Customer(LastName)

Nonclustered Indexes

Nonclustered indexes are the default in SQL Server 2005 when you are creating indexes. They are B-tree structures separate from the table. Consequently, you can have more than one nonclustered index. In fact, you should be able to create 249 nonclustered indexes on a table (though we've never tried it).
Tip Don't create 249 nonclustered indexes on a table!
Nonclustered indexes work well where the data is highly selective, but they have limited value when the data is dense. So, an inappropriate nonclustered index might carry all of the overhead with none of the performance benefits. Functionally, nonclustered indexes work well in point queries but have limited value for range queries. Why? Well, simply put, the data is not clustered together but all over the place, unlike with clustered indexes. The following example shows a nonclustered index being created on the [FirstName] field of the [Customer] table:
CREATE NONCLUSTERED INDEX [NCI_FirstName] ON Customer(FirstName)

You also have the capability of creating nonclustered indexes on multiple columns. These are commonly referred to as composite or compound indexes. There are two primary reasons for doing this. The first is to facilitate queries that have search arguments based on multiple columns from the same table. The second is to reduce the overall number of indexes that SQL Server 2005 has to maintain. So, instead of creating separate nonclustered indexes on the [LastName] and [FirstName] fields, because they are frequently searched together, we should consider creating a composite index on the [LastName, FirstName] combination. The following example shows a nonclustered index being created on multiple fields of the [Customer] table:
CREATE NONCLUSTERED INDEX [NCI_CompundIndex] ON Customer(LastName, FirstName)

Tip When creating nonclustered indexes, remember that the more columns you add, the less likely the index will be used, because it is getting too wide. Keep the index-to-table-size ratio in mind. Of course, there are exceptions, called covering indexes, and you will examine them shortly.
SQL Server 2005 has a new index option that allows you to improve nonclustered indexes by including nonkey columns in the index DDL. The columns that are included by the INCLUDE clause are stored only in the leaf level of the index and consequently are not subject to the 16-column limit. This allows for the creation of larger covering indexes. Here are the other restrictions:
At least one key column must be defined.
A maximum of 1,023 columns can be included.
Included columns cannot be repeated in the INCLUDE list.
Columns cannot be defined in both the nonclustered index key and the INCLUDE list.
The following example shows a nonclustered index being created on the [Country] field of the [Customer] table, which includes three more fields:
CREATE NONCLUSTERED INDEX [NCI_PassportNumber] ON Customer(Country) INCLUDE (Address, Region, Postcode)

Did we mention covering indexes earlier? They represent the real art of indexing in SQL Server. The idea behind a covering index is to create an index that can cover important queries, thereby avoiding the need for the queries to go to the underlying table.
Real World Scenario: Covering Indexes
To illustrate the concept of covering indexes, let's assume you have some sort of a [Customers] table that has a large number of columns, say more than 500, that contain demographic and other statistical information. So, the partial table definition would be as follows:
CREATE TABLE [Customer] (
    [CustomerID]     INT          NOT NULL,
    [Name]           VARCHAR(20)  NULL,
    [SurName]        VARCHAR(50)  NULL,
    [PhoneNumber]    VARCHAR(20)  NULL,
    [FaxNumber]      VARCHAR(20)  NULL,
    [PassportNumber] CHAR(8)      NULL,
    [Address]        VARCHAR(50)  NULL,
    [Region]         VARCHAR(20)  NULL,
    [PostCode]       VARCHAR(10)  NULL,
    [Country]        VARCHAR(20)  NULL,
    [Sex]            CHAR(1)      NULL,
    [MarriageStatus] CHAR(1)      NULL,
    [Salary]         MONEY        NULL,
    [Dogs]           TINYINT      NULL,
    [Cats]           TINYINT      NULL,
    [Platypus]       BIGINT       NULL,
    [Kids]           TINYINT      NULL,
    ...
    CONSTRAINT [PK_CustomerNumber] PRIMARY KEY CLUSTERED ([CustomerID])
) ;
GO

This table has more than 20,000,000 records, so as you would expect, queries against this table are going to be slow. Call-center personnel frequently want to call up a particular customer, so they run the following queries:
SELECT * FROM [Customer] WHERE CustomerID = @ID ;
SELECT * FROM [Customer] WHERE Surname = @Surname ;
SELECT * FROM [Customer] WHERE Surname = @Surname AND Name = @Name ;

You want to improve performance, so you could create two separate indexes for the [Surname] and [Name] fields. To reduce the number of indexes, a better choice might be to create a single composite index on the [Surname] and [Name] fields and, because the queries also return the phone number, include the [PhoneNumber] field as a nonkey column:

CREATE INDEX [NCI_Surname_Name]
ON Customer(Surname, Name)
INCLUDE (PhoneNumber) ;
GO

This will most likely improve performance for the original set of queries. But you can substantially improve performance further by reworking the queries to the following:

SELECT CustomerID, Name, Surname, PhoneNumber FROM [Customer] WHERE CustomerID = @ID ;
SELECT CustomerID, Name, Surname, PhoneNumber FROM [Customer] WHERE Surname = @Surname ;
SELECT CustomerID, Name, Surname, PhoneNumber FROM [Customer] WHERE Surname = @Surname AND Name = @Name ;

Why? Because you have now created a covering index! In other words, everything requested by the queries is located within the B-tree of the nonclustered index: [Surname] and [Name] are key columns, [PhoneNumber] is an included column, and [CustomerID], as the clustering key, is carried in every nonclustered index row. There is no need for SQL Server to go through the additional step of traversing the table. Considering the underlying size of the table, you have substantially improved performance because your queries can be serviced by a much smaller B-tree.
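One way to verify that a query is covered is to compare logical reads before and after creating the index (a sketch of ours; the exact read counts depend on your data):

```sql
-- Show I/O statistics for each statement in this session
SET STATISTICS IO ON ;
-- A covered query touches only the (much smaller) nonclustered index
-- B-tree, so the reported logical reads should drop dramatically
SELECT Surname, Name
FROM [Customer]
WHERE Surname = 'Smith' ;
SET STATISTICS IO OFF ;
```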

So, that's SQL Server 2005 indexes in a nutshell. It's quite a large nutshell! Now we'll cover the maintenance and optimization issues.
Index Optimization
More than 10 years ago, there was a common misconception that you needed to reindex your indexes because they were not up-to-date. Nothing could be further from the truth, because SQL Server always keeps the indexes up-to-date in real time. (That's why you can degrade the performance of your online transaction processing [OLTP] databases if you overindex them.) The real reason for reindexing has to do with fragmentation and its impact on performance.
Table/Index Fragmentation

Fragmentation basically exists when the logical ordering of pages in the leaf level of the B-tree does not match the physical ordering of the pages in the database data file. The leaf pages of the index have pointers to the next and the previous page in their headers (as do intermediate pages). Figure 2.5 shows this doubly linked list, which is the logical order that we are talking about.

Figure 2.5: Doubly linked list at the leaf level of the index (B-tree)
As data in your tables is modified through DML operations, the data pages of the index will have to be split to accommodate new records or records that have increased in size. This process of page splitting involves SQL Server's storage engine allocating a new page to this particular index, which might not be located on a contiguous extent, let alone a contiguous page. So after a period of time, the physical order of the doubly linked list from Figure 2.5 might look something like Figure 2.6.

Figure 2.6: Physical location of pages that make up the doubly linked list in Figure 2.5
Because of this fragmentation, queries that invoke table scans or partial scans can be substantially slower. Because the physical order of the doubly linked list does not match the logical order, disk throughput degrades: the disk drive heads must jump back and forth, reading the various extents on which the next page is located, instead of sequentially reading all of the pages as they could if the pages were contiguous.
Note This external fragmentation affects disk I/O performance only. It does not adversely affect queries whose data pages reside in SQL Server's buffer pool. We discuss the difference between external and internal fragmentation, and how to determine it, in Chapter 5.
When you rebuild the index, SQL Server 2005 basically creates a new B-tree before wiping out the old version of it, so effectively everything becomes contiguous again. Figure 2.7 shows the result of such an operation.

Figure 2.7: Physical location of pages after the index has been rebuilt
Detecting Fragmentation

As we indicated earlier, you should no longer use the DBCC SHOWCONTIG command to determine the level of fragmentation, because the command is being deprecated. Instead, you should use the sys.dm_db_index_physical_stats DMV to determine the level of fragmentation for an index or table, as the case might be.


Table 2.1 describes the important columns of the sys.dm_db_index_physical_stats DMV to watch.

Table 2.1: Description of sys.dm_db_index_physical_stats Columns

Column                        | Description
------------------------------|-----------------------------------------------------------
avg_fragmentation_in_percent  | Percent of logical fragmentation (pages out of order)
fragment_count                | The number of fragments in the index (a fragment is a set of physically consecutive leaf pages)
avg_fragment_size_in_pages    | Average number of pages in one fragment of an index

The following example shows the sys.dm_db_index_physical_stats DMV being queried for the [Production].[Product] table in the AdventureWorks database:
USE AdventureWorks ;
GO
SELECT i.index_id, name, avg_fragmentation_in_percent,
       avg_fragment_size_in_pages, fragment_count
FROM sys.dm_db_index_physical_stats
    (DB_ID(), OBJECT_ID('Production.Product'), NULL, NULL, NULL) AS dmv
JOIN sys.indexes AS i
    ON dmv.object_id = i.object_id AND dmv.index_id = i.index_id ;
GO

It has the following output:


index_id name                  avg_fragmentation_in_percent
-------- --------------------- ----------------------------
1        PK_Product_ProductID  23.0769230769231
2        AK_Product_ProductNum 50.0
3        AK_Product_Name       66.6666666666667
4        AK_Product_rowguid    50.0

(4 row(s) affected)

Of course, the million-dollar question is, what constitutes a bad level of fragmentation? Well, that's a bit of a tough one, because it depends. However, Microsoft recommends the following guidelines:
If the avg_fragmentation_in_percent value is greater than 30 percent, you should rebuild the index.
If the avg_fragmentation_in_percent value is between 5 percent and 30 percent, you should reorganize the index.
If the avg_fragmentation_in_percent value is less than 5 percent, don't bother.
You'll now look at the difference between reorganizing and rebuilding the index.
Maintaining Indexes

Basically, you can employ three techniques to solve your index fragmentation problem:
Reorganize the index.
Rebuild the index.
Drop and re-create the index.
Tip To defragment a heap, you have to create a clustered index on the heap and then drop the clustered index.
In SQL Server 2005, the statement you generally use to perform the majority of these actions (well, two of them, anyway) is the ALTER INDEX statement. The syntax for altering an index is as follows:
ALTER INDEX { index_name | ALL }
    ON <object>
    { REBUILD
        [ [ WITH ( <rebuild_index_option> [ ,...n ] ) ]
          | [ PARTITION = partition_number
                [ WITH ( <single_partition_rebuild_index_option> [ ,...n ] ) ] ] ]
    | DISABLE
    | REORGANIZE
        [ PARTITION = partition_number ]
        [ WITH ( LOB_COMPACTION = { ON | OFF } ) ]
    | SET ( <set_index_option> [ ,...n ] )
    }
[ ; ]

<object> ::=
{
    [ database_name. [ schema_name ] . | schema_name. ]
    table_or_view_name
}

<rebuild_index_option> ::=
{
    PAD_INDEX = { ON | OFF }
  | FILLFACTOR = fillfactor
  | SORT_IN_TEMPDB = { ON | OFF }
  | IGNORE_DUP_KEY = { ON | OFF }
  | STATISTICS_NORECOMPUTE = { ON | OFF }
  | ONLINE = { ON | OFF }
  | ALLOW_ROW_LOCKS = { ON | OFF }
  | ALLOW_PAGE_LOCKS = { ON | OFF }
  | MAXDOP = max_degree_of_parallelism
}

<single_partition_rebuild_index_option> ::=
{
    SORT_IN_TEMPDB = { ON | OFF }
  | MAXDOP = max_degree_of_parallelism
}

<set_index_option> ::=
{
    ALLOW_ROW_LOCKS = { ON | OFF }
  | ALLOW_PAGE_LOCKS = { ON | OFF }
  | IGNORE_DUP_KEY = { ON | OFF }
  | STATISTICS_NORECOMPUTE = { ON | OFF }
}

REORGANIZING THE INDEX
Reorganizing an index basically defragments the leaf level of the index by physically reordering the leaf-level pages to match their logical order. Reorganizing also compacts the index pages based on the fill factor setting.
Reorganizing an index is an online operation and uses the least system resources. It does not hold long-term blocking locks, so it doesn't block running queries or updates. The REORGANIZE clause is used in the ALTER INDEX statement to reorganize an index.
Warning You should no longer use the DBCC INDEXDEFRAG statement because it is being deprecated by Microsoft.
The following example shows the [AK_Product_Name] index in the AdventureWorks database being reorganized:
USE AdventureWorks ;
GO
ALTER INDEX [AK_Product_Name] ON [Production].[Product]
REORGANIZE ;

REBUILDING THE INDEX
Rebuilding an index basically drops the index and creates a new index in its place. The new index will initially be completely contiguous, which, as discussed, will translate to good disk performance. The REBUILD clause is used in the ALTER INDEX statement to rebuild an index.
Warning You should no longer use the DBCC DBREINDEX statement because it is being deprecated by Microsoft.
The following example shows the [AK_Product_Name] index in the AdventureWorks database being rebuilt using the REBUILD clause with a new fill factor setting of 69:
USE AdventureWorks ;
GO
ALTER INDEX [AK_Product_Name] ON [Production].[Product]
REBUILD WITH (FILLFACTOR = 69) ;

Alternatively, you can rebuild an index by using the CREATE INDEX statement with the DROP_EXISTING clause. The difference between using this technique to the previous one includes the following: The CREATE INDEX ... WITH DROP_EXISTING statement allows you to change the index definition: Add key columns. Remove key columns. Change the column order. Change the column sort order. The CREATE INDEX ... WITH DROP_EXISTING statement allows you to move the index to another file group. The CREATE INDEX ... WITH DROP_EXISTING statement allows you to repartition a partitioned index. The ALTER INDEX ... REBUILD statement allows you to rebuild more than one index in a single transaction.


The ALTER INDEX ... REBUILD statement allows you to rebuild a single index partition.

The following example shows the [AK_Product_Name] index in the AdventureWorks database being rebuilt using the DROP_EXISTING clause with a new fill factor setting of 69:
USE AdventureWorks ;
GO
CREATE INDEX [AK_Product_Name] ON [Production].[Product](Name)
WITH (FILLFACTOR = 69, DROP_EXISTING = ON) ;
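As noted above, the ALTER INDEX ... REBUILD form can rebuild more than one index in a single transaction. A hedged sketch (not from the original text) using the ALL keyword to rebuild every index on the table at once:

```sql
USE AdventureWorks ;
GO
-- Rebuild all indexes on Production.Product in one statement.
ALTER INDEX ALL ON [Production].[Product]
REBUILD WITH (FILLFACTOR = 69) ;
```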

DROPPING AND RE-CREATING THE INDEX

For completeness' sake, you can also drop an index using the DROP INDEX statement and re-create it with the CREATE INDEX statement. Microsoft does not recommend this technique for a number of reasons, including the resources required and locks acquired. The following example shows the [AK_Product_Name] index in the AdventureWorks database being re-created using this nonrecommended technique:
USE AdventureWorks ;
GO
DROP INDEX [AK_Product_Name] ON [Production].[Product] ;
GO
CREATE INDEX [AK_Product_Name] ON [Production].[Product](Name)

It's important for you to thoroughly understand the various index maintenance commands, so you'll examine them in more detail through Exercise 2.3.

Exercise 2.3: Maintaining Indexes

As we've said, it is important to get some exposure to what the various index maintenance commands do. The first step in this exercise is to look at the properties of the index and its potential level of fragmentation.

1. Open SQL Server Management Studio, and connect to your SQL Server 2005 instance using Windows authentication.

2. Expand the Databases, AdventureWorks, Tables, Production.Product, and Indexes folders in Object Explorer. Right-click the AK_Product_ProductNumber index, and choose Properties. Click the Options page. Examine and record the fill factor setting. You should see a window similar to the one shown here.


3. Click the Fragmentation page to examine the level of fragmentation. Examine and record the number of pages taken up by the [AK_Product_ProductNumber] nonclustered index. You should see a dialog box similar to the one shown here. Close the Index Properties window.

4. Back in SQL Server Management Studio, click the New Query toolbar button to open a new query window.

5. Let's first rebuild the index using the ALTER INDEX statement. Use a new fill factor setting of 30 percent. Type the following T-SQL code, execute it, and observe the results:
USE AdventureWorks ;
GO
ALTER INDEX [AK_Product_ProductNumber] ON [Production].[Product]
REBUILD WITH (FILLFACTOR = 30)

6. Examine the Fragmentation page again by following steps 2 and 3 again. You should see a dialog box similar to the one shown here. Examine the new value for the number of pages taken up by the [AK_Product_ProductNumber] nonclustered index. You should see that the nonclustered index now consumes more pages because of the 30 percent fill factor setting used. Close the Index Properties window.


7. Let's now rebuild the index using the CREATE INDEX statement. You will use the original fill factor setting of 100 percent. Type the following T-SQL code, execute it, and observe the results:
USE AdventureWorks ;
GO
CREATE UNIQUE NONCLUSTERED INDEX [AK_Product_ProductNumber]
ON [Production].[Product](ProductNumber)
WITH (FILLFACTOR = 100, DROP_EXISTING = ON)

8. If you want to check how many pages the index now takes up, you know what to do now.

9. Now, you'll update the statistics for the [Product] table using a full scan. Type the following T-SQL code, execute it, and observe the results:
USE AdventureWorks ;
GO
UPDATE STATISTICS [Production].[Product] WITH FULLSCAN ;

10. Finally, you'll examine the statistics for the [AK_Product_ProductNumber] nonclustered index. Type the following T-SQL code, execute it, and observe the results:
USE AdventureWorks ;
GO
DBCC SHOW_STATISTICS ('[Production].[Product]', [AK_Product_ProductNumber])

11. As you would expect, you have high selectivity/low density for the [AK_Product_ProductNumber] nonclustered index. After all, it is unique! You should also see the Updated date reflecting when you updated the statistics a couple of seconds ago.

Optimizing Statistics

Another potentially important component of your index maintenance strategy is to ensure that statistics are up-to-date. We are certainly not going to discuss statistics in detail here, but suffice it to say that accurate statistics are critical to query performance in SQL Server 2005. Up-to-date statistics allow SQL Server's query optimizer to accurately determine a high-quality execution plan. You can view the current distribution statistics with the DBCC SHOW_STATISTICS command. Good luck with understanding it all, though!


The syntax for the DBCC SHOW_STATISTICS command is as follows:


DBCC SHOW_STATISTICS ( 'table_name' | 'view_name' , target )
[ WITH [ NO_INFOMSGS ] < option > [ , n ] ]

< option > :: =
    STAT_HEADER | DENSITY_VECTOR | HISTOGRAM

The following example shows the statistics for the [AK_Product_Name] unique nonclustered index in the [Production].[Product] table of the AdventureWorks database:
USE AdventureWorks;
GO
DBCC SHOW_STATISTICS ('Production.Product', 'AK_Product_Name') ;
GO

This has the example output shown here.

Note Eric Hanson has an excellent paper on statistics titled "Statistics Used by the Query Optimizer in Microsoft SQL Server 2005" that is located at http://www.microsoft.com/technet/prodtechnol/sql/2005/qrystats.mspx.

In most cases, you will not have to bother about updating statistics because they are usually automatically updated as required by the query optimizer. However, in certain cases, you might decide to turn off this behavior through the database option:
ALTER DATABASE database_name SET AUTO_UPDATE_STATISTICS OFF

Why would you do this? Well, in some rare cases, you would rather that SQL Server did not update statistics in the middle of your work hours, because that can potentially impact performance. Alternatively, you might want to fine-tune the sampling rate that is being used. In any case, what you would do in these cases is to schedule the updating of statistics to ensure optimal performance. We have, for example, seen some environments where statistics are updated nightly and indexes are rebuilt on the weekend when the maintenance window is longer.

Tip Don't make the mistake of scheduling a reindex and an update of statistics on the same index at the same time. Reindexing by its nature will ensure that the statistics are up-to-date. You will not believe how many times we have seen this in the field!

The syntax for updating statistics is as follows:
UPDATE STATISTICS table | view
    [
        { { index | statistics_name }
      | ( { index | statistics_name } [ ,...n ] ) }
    ]
    [ WITH
        [
            FULLSCAN
          | SAMPLE number { PERCENT | ROWS }
          | RESAMPLE
          | <update_stats_stream_option> [ ,...n ]
        ]
        [ [ , ] [ ALL | COLUMNS | INDEX ] ]
        [ [ , ] NORECOMPUTE ]
    ] ;

<update_stats_stream_option> ::=
    [ STATS_STREAM = stats_stream ]
    [ ROWCOUNT = numeric_constant ]
    [ PAGECOUNT = numeric_constant ]

Here's how the syntax breaks down:

The FULLSCAN option indicates that all rows in the table should be read to gather statistics.
The SAMPLE option indicates that a percentage of the table or a number of rows should be read to gather statistics.
Note The default behavior for SQL Server 2005 is to take a sample based on the size of the table.
The NORECOMPUTE option indicates that statistics that become out-of-date should not be automatically updated (recomputed).

The following example shows all statistics being updated for the [Production].[Product] table by scanning all rows:
USE AdventureWorks ;
GO
UPDATE STATISTICS [Production].[Product] WITH FULLSCAN ;
GO

AUTO_UPDATE_STATISTICS_ASYNC DATABASE OPTION

SQL Server 2005 supports a new AUTO_UPDATE_STATISTICS_ASYNC database option that controls how statistics are automatically updated. Usually when a query triggers an automatic updating of statistics, the query has to wait until the statistics are updated before continuing. In other words, it is a synchronous process. You can use the AUTO_UPDATE_STATISTICS_ASYNC database option to turn off this wait, so the query does not wait until the statistics are updated before continuing with its execution. However, it will be using out-of-date statistics and consequently might not be using an optimal execution plan, unlike subsequent queries.

Note The AUTO_UPDATE_STATISTICS_ASYNC database option has no effect if the AUTO_UPDATE_STATISTICS database option is off.
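As a sketch of turning the option on and then confirming the current settings (the sys.databases column names below are the standard SQL Server 2005 ones):

```sql
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON ;
GO
-- Confirm both auto-update statistics settings for the database.
SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = 'AdventureWorks' ;
```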

Enforcing Appropriate Stored Procedure Logging and Output


We're a bit confused about what the "Enforce appropriate stored procedure logging and output" exam objective is getting at. Is it talking about the logging of a stored procedure execution, in which case you can look at the relevant SQL trace event classes? Or is it talking about the logging of stored procedures internally, in which case this is done programmatically? In any case, enforcing appropriate stored procedure logging, whatever that might be, is another matter altogether.


Cristian Lefter's suggestion of the new TRY/CATCH construct in SQL Server 2005 is worth investigating. So, let's get at it: SQL Server 2005 has a new structured exception handling technique through the TRY/CATCH construct. The BEGIN TRY/END TRY block contains the T-SQL code where an error might be generated. The BEGIN CATCH/END CATCH block contains the exception handler.

Note The CATCH block should immediately follow the TRY block.

The following errors cannot be handled by the TRY/CATCH construct:
Compile errors
Statement-level recompilation errors

The following functions are available in the CATCH block:
ERROR_NUMBER()
ERROR_MESSAGE()
ERROR_SEVERITY()
ERROR_STATE()
ERROR_LINE()
ERROR_PROCEDURE()

The following pseudocode shows an example of how to use the TRY/CATCH construct:
...
BEGIN TRY
    ... -- T-SQL code representing what you are trying to do ...
END TRY
BEGIN CATCH
    IF (ERROR_NUMBER() = 1)
    BEGIN
        -- Perform appropriate action
    END
    ELSE IF (ERROR_NUMBER() = 2)
    BEGIN
        -- Perform appropriate action
    END
    ELSE IF (ERROR_NUMBER() = 3)
    BEGIN
        -- Perform appropriate action
    END
    ELSE
    BEGIN
        -- Perform appropriate action
    END
END CATCH
...
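To make the pattern concrete, the following sketch logs error details from the CATCH block to a table. The dbo.MyErrorLog table and its columns are purely hypothetical; create something similar in your own database before using this pattern:

```sql
BEGIN TRY
    -- The statement being attempted.
    UPDATE [Production].[Product]
    SET ListPrice = ListPrice * 1.10
    WHERE ProductID = 1 ;
END TRY
BEGIN CATCH
    -- Record the error details (dbo.MyErrorLog is a hypothetical table).
    INSERT INTO dbo.MyErrorLog (ErrorNumber, ErrorMessage, ErrorProcedure, LoggedAt)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), ERROR_PROCEDURE(), GETDATE()) ;
END CATCH
```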

So, of course your CATCH block could log certain exceptions and write them to an error log or send some sort of notification. Enforcing this, as we discussed, is another matter.

An Iterative Process

We'd like to finish this section of the chapter by saying that troubleshooting and maintaining query performance are iterative processes. Obviously through a database solution's life cycle, the tables and/or


queries might change, which will necessitate reinvestigation. But don't forget that SQL Server 2005's query optimizer uses a cost-based model, which means that over a period of time, as the data in your database solution changes and grows, your previous indexing strategy, optimizer hints, and other optimization techniques might no longer be applicable. They could in fact be causing performance problems. This should also highlight the need to document what you have done previously and maintain a change management log. But, again, an iterative process is required.

Troubleshooting Concurrency Issues


Concurrency issues can account for performance issues. In other words, whether a user of your database solution is experiencing poor query performance because of inadequate hardware resources or contention, the end result is the same: poor query performance. Your role as a DBA is to isolate whether the poor query performance is due to inadequate hardware resources, as covered in Chapter 1, or due to a concurrency issue.

So, where do you start? Well, what are your database users doing when there are concurrency issues? Waiting. So, why not start with SQL Server 2005 waits? When a process connected to SQL Server 2005 tries to access a resource that is unavailable, it has to wait. The process is placed in a resource wait list until the resource is available. You can see this information in a number of ways.

The sys.sysprocesses system table returns information about the various processes connected to your SQL Server 2005 instance and returns the following information about the waits of these processes:
waittype
waittime
lastwaittype
waitresource

Note The sys.sysprocesses system table is being deprecated in a future release of SQL Server. Microsoft recommends you use the sys.dm_exec_connections, sys.dm_exec_sessions, and sys.dm_exec_requests DMVs instead.

In SQL Server 2005 Microsoft has added a DMV, called sys.dm_os_wait_stats, that will allow you to view aggregated information about all the waits experienced by all processes connected to your SQL Server 2005 instance since it was started. It makes it easier to diagnose potential concurrency issues. The sys.dm_os_wait_stats DMV returns the following statistical information:
wait_type
waiting_tasks_count


wait_time_ms
max_wait_time_ms
signal_wait_time_ms

Note signal_wait_time_ms represents the difference between the time the waiting thread was signaled and when it started running.

Don't forget that the statistical information is cumulative since either the SQL Server 2005 instance was started or the statistical information was reset.

Tip You can reset the wait statistics through the DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) statement. So if you want to keep the statistical information for benchmarking/baseline purposes, you should export the statistical information from the sys.dm_os_wait_stats DMV to a table or some other format, such as a Microsoft Excel spreadsheet, before restarting your SQL Server 2005 instance.

So, how do you recognize a concurrency issue? Database users of course will be complaining, but they are always complaining anyway! At the database engine you should be able to observe some of the following:
Nonzero values in the blocked column of the sys.sysprocesses system table.
Nonzero values for the BlkBy column of the sp_who2 system stored procedure.
Large values for the waittype column of the sys.sysprocesses system table.
Attention events.

Your SQL Server 2005 instance might not appear to be under stress as processes are waiting (which does not consume hardware resources). In this case, you might observe some of the following:
Low values for the processor- and memory-related performance object counters
Low processor utilization
Low disk utilization
Low values for the CPUTime and DiskIO columns of the sp_who2 system stored procedure
Low values for the cpu and physical_io columns of the sys.sysprocesses system table

All of these are indicative of some sort of blocking going on. If you are unsure, you can use a blocker script, such as sp_blocker_pss80, to find out.

PSS Head Blocker Script

Microsoft Product Support Services (PSS) provides a stored procedure called sp_blocker_pss80 that you can use to troubleshoot concurrency issues.
When executed, the stored procedure gathers the following information:
Start time
Connections (sys.sysprocesses system table)
Lock resources (sys.syslockinfo system table)


Resource waits (DBCC SQLPERF(WAITSTATS))
Blocked and blocking processes (DBCC INPUTBUFFER)
End time

When you suspect a concurrency issue, you can execute the sp_blocker_pss80 stored procedure in an infinite loop to help you resolve the potential concurrency issue:
WHILE (666=666)
BEGIN
    EXECUTE master.dbo.sp_blocker_pss80
    WAITFOR DELAY '00:00:15'
END

You can download the T-SQL script for the sp_blocker_pss80 stored procedure from the "How to monitor blocking in SQL Server 2005 and in SQL Server 2000" Knowledge Base article located at http://support.microsoft.com/kb/271509.
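To see which waits dominate on an instance, you can also simply rank the contents of sys.dm_os_wait_stats; a minimal sketch:

```sql
-- Show the ten wait types with the most accumulated wait time.
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC ;
```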

SQL Server 2005 uses two different mechanisms to control concurrency in the database engine:
Latches
Locks

You'll now learn how to troubleshoot latches and locks, before finishing up with troubleshooting deadlocks.

Troubleshooting Latches
Think of a latch as a lightweight synchronization object that is used by SQL Server 2005 internally to control access to internal data structures, control index concurrency, and control access to rows in data pages. Latches do not have the overhead of locks, so you have less overhead on the SQL Server 2005 database engine. Latches are categorized into different classes such as ACCESS_METHODS_HOBT, BUFFER, DATABASE_CHECKPOINT, FILE_MANAGER, LOG_MANAGER, and MSQL_TRANSACTION_MANAGER.

Tip To see the different latch classes, you can execute SELECT latch_class FROM sys.dm_os_latch_stats.

Latches are held only for the duration of the operation required, unlike locks, which can be held for the duration of the transaction. Latch waits occur when a latch request cannot be granted to a thread because another thread holds an incompatible latch on the same resource.

Monitoring Latches

Monitoring latches to determine user activity and resource usage can help you identify performance bottlenecks. You should examine the relative number of latch waits and wait times to determine whether there is excessive latch contention.
Performance Object Counters

The SQLServer:Latches performance object has a number of counters that you can use to monitor latches:


Average Latch Wait Time (ms)
Latch Waits/sec
Number of SuperLatches
SuperLatch Demotions/sec
SuperLatch Promotions/sec
Total Latch Wait Time (ms)
DMVs

The sys.dm_os_latch_stats DMV returns information about latch waits organized by latch class. The sys.dm_os_latch_stats DMV tracks only latch waits, so if a latch request was immediately granted or failed, it will not contribute to the DMV's statistics. The sys.dm_os_latch_stats DMV returns the following statistical information:
latch_class
waiting_request_count
wait_time_ms
max_wait_time_ms

Tip A high value for the max_wait_time_ms value might indicate an internal deadlock.

An important consideration is that the statistical information is cumulative since either the SQL Server 2005 instance was started or the statistical information was reset.

Tip You can reset the latch wait statistics through the DBCC SQLPERF ('sys.dm_os_latch_stats', CLEAR) statement. So if you want to keep the statistical information for benchmarking/baseline purposes, you should export the statistical information from the sys.dm_os_latch_stats DMV to a table or some other format, such as an Excel spreadsheet, before restarting your SQL Server 2005 instance.

The sys.dm_os_wait_stats DMV covered earlier also contains statistical information relevant to latches. There are 24 different latch wait types that are monitored (see Table 2.2).

Tip Another way to see the different latch wait types available to you is by executing SELECT wait_type FROM sys.dm_os_wait_stats WHERE wait_type LIKE '%latch%'.

Table 2.2: SQL Server 2005 Wait Types

Wait Type         Description
LATCH_DT          Destroy latch
LATCH_EX          Exclusive latch
LATCH_KP          Keep latch
LATCH_NL          Null latch
LATCH_SH          Shared latch
LATCH_UP          Update latch
PAGEIOLATCH_DT    Destroy buffer page I/O latch
PAGEIOLATCH_EX    Exclusive buffer page I/O latch
PAGEIOLATCH_KP    Keep buffer page I/O latch
PAGEIOLATCH_NL    Null buffer page I/O latch
PAGEIOLATCH_SH    Shared buffer page I/O latch
PAGEIOLATCH_UP    Update buffer page I/O latch
PAGELATCH_DT      Destroy buffer page latch
PAGELATCH_EX      Exclusive buffer page latch
PAGELATCH_KP      Keep buffer page latch
PAGELATCH_NL      Null buffer page latch
PAGELATCH_SH      Shared buffer page latch
PAGELATCH_UP      Update buffer page latch
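The per-class latch statistics can be ranked in the same way as the wait statistics; a minimal sketch using the columns listed above:

```sql
-- Rank latch classes by accumulated wait time.
SELECT latch_class, waiting_request_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC ;
```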

So let's finish up with an example. The following query shows page I/O latch wait statistics:
SELECT wait_type, waiting_tasks_count, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_type

This has the following example output:


wait_type         waiting_tasks_count  max_wait_time_ms
----------------  -------------------  ----------------
PAGEIOLATCH_DT    0                    0
PAGEIOLATCH_EX    5419                 690
PAGEIOLATCH_KP    0                    0
PAGEIOLATCH_NL    0                    0
PAGEIOLATCH_SH    18961                8332
PAGEIOLATCH_UP    520                  312

Resolving Latch Issues

Latch-specific problems tend to be reasonably rare. A lot of the time they indicate a lack of resources, but they could also indicate some sort of hardware problem. Watch out for latch timeouts for I/O operations that are taking too long to complete. If you get any latch-specific error messages or you have determined that your SQL Server 2005 instance is experiencing excessive latch waits or timeouts, you can try the following:
Determine whether your SQL Server 2005 instance is experiencing any hardware bottlenecks.
Check for any logged errors: error logs, event logs, and hardware vendor logs.
Run any hardware vendor diagnostic tools.
Reduce the workload on your SQL Server 2005 instance:
Tune your queries.
Turn off database autoshrink.


Turn off data autogrowth.
Check to see whether turning off any of the following SQL Server 2005 configuration parameters helps:
Lightweight pooling
Priority boost
Set working set size

Otherwise, you'll be calling Microsoft Product Support!

Troubleshooting Locks
The key to troubleshooting locks is to understand the different types of lockable objects and types of locks that are available in SQL Server 2005. Although the basics for the lock manager and locking have not substantially changed since SQL Server 2000, each iteration of SQL Server has evolved from the previous release, so ensure you are familiar with SQL Server 2005's locking architecture, and don't assume anything from your knowledge of SQL Server 2000.

SQL Server 2005 Lock Architecture

The lock manager in SQL Server 2005 manages locks (funny that). It does this through an internal in-memory structure known as the lock hash table. The amount of memory allocated to this lock hash table is based on the amount of memory available on your SQL Server 2005 instance. Of course, this is a simplified view; a very simplified one.

Note You can of course override the default amount of memory allocated to locking if you need to do so. You do this via trace flags, a topic not covered in detail in this book.

One of the more important decisions the lock manager has to make is what lock grain to use for locking data. What are the trade-offs between using a finer-grained lock versus a coarser-grained lock? As an example, let's imagine you want to modify a hundred records in a table that contains a million records. What lock grain should you use? The benefit of using a finer-grained lock (think of a row-level lock, as an example, placed on the hundred records) is that you create less contention within the table (users can access the rest of the records in the table), but you consume more lock resources (a hundred locks). A coarser-grained lock (table-level lock) uses fewer resources (one lock) but creates greater contention (no one else could access the table while you modified your hundred records).

SQL Server 2005 uses a lock escalation strategy, which means that the lock manager will escalate a finer-grained lock to a coarser-grained lock automatically as required.
Once a transaction has started, the lock manager can decide to escalate the locks used to a coarser grain so as to improve performance and conserve memory resources. The following thresholds trigger this lock escalation:
Five thousand locks have been acquired on a single object for a single T-SQL statement.
Note If lock escalation cannot occur because of lock conflicts, SQL Server 2005 will attempt lock escalation again for every 1,250 new locks acquired.
Memory consumed by locking exceeds 40 percent of non-AWE memory when the locks configuration option is set to 0.
Memory consumed by locking exceeds 40 percent of allocated memory when the locks configuration option is set to a nonzero value.


Note These thresholds can change in a service pack.

SQL Server 2005 does not escalate row-level locks to page-level locks but directly to table-level locks. Likewise, page-level locks are always escalated to table-level locks.

Tip You can monitor lock escalation in SQL Server Profiler through the Lock:Escalation event.

Before you look at the different types of locks available in SQL Server 2005, we need to discuss transaction isolation levels.
Transaction Isolation Levels

This is stock-standard stuff for any concurrent environment such as any relational database, and SQL Server 2005 is no exception. Basically we're talking about the I in ACID, or how transactions are isolated from each other to ensure data consistency. Here's a bit of a backgrounder first, though.

Fundamentally, any concurrent environment that allows concurrent transactions potentially faces the following data anomalies:

Dirty reads A dirty read involves reading data that has been modified by another transaction that has not yet been committed.

Nonrepeatable reads A nonrepeatable read occurs when a transaction reads some data and then goes off and does some other processing, and when it comes back to the original data that was read, it has changed. In other words, the data has changed between the first and subsequent read operations. Simple!

Phantom values A phantom value occurs when, again, a transaction reads some data and goes off and does some other processing, and then when it rereads the original data, a new value has appeared: a phantom. In other words, the data has had a new record inserted between the first and subsequent read operations. Again, simple!

As an aside, ANSI has defined a number of transaction isolation levels (TILs) that basically control how transactions can be isolated from each other in a concurrent environment. The TILs are cumulative, which means that a higher level does everything that the previous level did plus more. Table 2.3 shows the equivalent SQL Server 2005 isolation level. You will look at what the isolation levels protect against when we cover the SQL Server 2005 isolation levels.

Table 2.3: ANSI Transaction Isolation Levels

ANSI TIL    SQL Server 2005 Equivalent
0           READ UNCOMMITTED
1           READ COMMITTED
2           REPEATABLE READ
3           SERIALIZABLE

So as you can see, SQL Server 2005 supports all four ANSI TILs. In fact, it has since SQL Server 7.0. What is new in SQL Server 2005 is that there are a few new variations of them. So, let's go through the isolation levels supported by SQL Server 2005:

Read uncommitted Read uncommitted isolation allows transactions to read uncommitted data. The transaction basically ignores any acquired locks and does not issue any shared locks for read operations.


Warning Be careful with using read uncommitted isolation mode. We see a lot of database solutions out there (especially web-based) that use read uncommitted isolation because they want to improve performance. This is not the correct reason to be going to read uncommitted isolation; you are reading dirty data, after all!

Read committed Read committed isolation prevents dirty reads from occurring. A transaction will have to wait until an incompatible lock is released. With read committed isolation, read operations acquire shared locks for the duration of the read operation. Read committed isolation does not prevent nonrepeatable reads or phantom values.

Note Read committed isolation is the default behavior for SQL Server 2005.

Read committed snapshot Read committed snapshot isolation (RCSI) is a completely new addition to SQL Server 2005. With RCSI, SQL Server 2005 uses row versioning based on the data's previously committed value to prevent dirty reads. No row or page locks are required. Obviously, the benefit is that transactions do not have to wait for existing transactions to complete.

Note Read committed snapshot isolation is invoked at the database level via the ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT ON statement.

Repeatable read Repeatable read isolation prevents dirty reads and nonrepeatable reads from occurring. Under repeatable read isolation, SQL Server 2005 holds shared locks for the duration of the transaction, thus guaranteeing that a read is repeatable. Repeatable read isolation does not prevent phantom values.

Serializable Serializable isolation prevents dirty reads, nonrepeatable reads, and phantom values. With serializable isolation, SQL Server 2005 places range locks on the data being read, thus preventing insertion of new data until the transaction completes.

Snapshot Snapshot isolation (SI) also prevents dirty reads, nonrepeatable reads, and phantom values. It does this by taking a snapshot of the data at the time the transaction was started.
Note Snapshot isolation needs to be invoked at both the database level via the ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON statement and the session level using the SET TRANSACTION ISOLATION LEVEL SNAPSHOT statement.

Note The difference between SI and RCSI (if you are curious) is that with RCSI you work with data that was committed at the beginning of the T-SQL statement, whereas with SI you work with data that was committed at the beginning of the T-SQL transaction.

Table 2.4 summarizes the data anomalies prevented by the various isolation modes supported by SQL Server 2005.

Table 2.4: Data Anomalies Prevented by Isolation Level

Isolation Level                              Dirty Read    Nonrepeatable Read    Phantom
Read Uncommitted                             No            No                    No
Read Committed and Read Committed Snapshot   Yes           No                    No
Repeatable Read                              Yes           Yes                   No
Serializable                                 Yes           Yes                   Yes
Snapshot                                     Yes           Yes                   Yes

The following syntax shows you how to change the isolation level at the session level:
SET TRANSACTION ISOLATION LEVEL
    { READ UNCOMMITTED
    | READ COMMITTED
    | REPEATABLE READ
    | SNAPSHOT
    | SERIALIZABLE
    }
[ ; ]

Alternatively, the following example shows you how to change the UnitedNations database to support RCSI:
ALTER DATABASE UnitedNations
SET READ_COMMITTED_SNAPSHOT ON
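And the following sketch shows the session-level form in use: a read is performed under REPEATABLE READ, after which the session is returned to the default level. The AdventureWorks Production.Product table is assumed purely for illustration:

```sql
USE AdventureWorks ;
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
BEGIN TRANSACTION ;
    SELECT Name FROM [Production].[Product] WHERE ProductID = 1 ;
    -- Shared locks are held until the transaction completes,
    -- so rereading this row here would return the same data.
COMMIT TRANSACTION ;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED ;  -- restore the default
```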

Warning Be careful with going to higher isolation levels because you are effectively creating greater contention. You are placing stronger locks and/or holding them for a greater period of time.

Warning Be especially careful with using the new variations of RCSI and SI. Make sure you understand the implications of reading snapshot data.
SQL Server 2005 Locks

Turn your attention to the SQL Server 2005 lock architecture, which falls under the auspices of the lock manager. You will find that with each release of SQL Server, Microsoft has, at the least, modified the way in which locking behaves. We'll start by talking about the various entities that can lock objects inside the SQL Server 2005 database engine. Table 2.5 shows the different types of entities that can request a lock from the lock manager.

Table 2.5: SQL Server 2005 Entities That Can Request Locks

Entity                            Description
CURSOR                            A cursor
EXCLUSIVE_TRANSACTION_WORKSPACE   Exclusive part of the transaction workspace
TRANSACTION                       A transaction
SESSION                           A user session
SHARED_TRANSACTION_WORKSPACE      Shared part of the transaction workspace

As you would expect, different types of objects can be locked inside the SQL Server 2005 database engine. If you are familiar with earlier versions of SQL Server, you will notice that some of the lockable objects, such as the HOBT, are new to SQL Server 2005. Table 2.6 shows the different types of locks supported by SQL Server 2005.

Table 2.6: SQL Server 2005 Resource Types

Resource Type    Description
_TOTAL           Information for all locks
ALLOCATION_UNIT  Allocation unit
APPLICATION      Application-specific resource
DATABASE         The entire database
EXTENT           Eight contiguous pages (64KB)
FILE             Database file
HOBT             Heap or B-tree; an allocation unit used to describe a heap or B-tree
KEY              Row lock within an index used to protect key ranges in serializable transactions
METADATA         Catalog information about an object
OBJECT           Database object
PAGE             Data or index page (8KB)
RID              Row identifier, which represents a single row within a table
TABLE            Entire table, including the indexes

The various SQL Server 2005 entities can request different types of locks depending on the type of operation they want to perform on an object within the SQL Server 2005 database engine. Table 2.7 shows the different lock request modes available in SQL Server 2005.

Table 2.7: SQL Server 2005 Lock Request Modes

Request Mode  Description
BU            Bulk-Update lock
I             Intent lock
IS            Intent-Shared lock
IU            Intent-Update lock
IX            Intent-Exclusive lock
RangeS_S      Shared Range-Shared resource lock
RangeS_U      Shared Range-Update resource lock
RangeI_N      Insert Range-Null resource lock
RangeI_S      Insert Range-Shared resource lock
RangeI_U      Insert Range-Update resource lock
RangeI_X      Insert Range-Exclusive resource lock
RangeX_S      Exclusive Range-Shared resource lock
RangeX_U      Exclusive Range-Update resource lock
RangeX_X      Exclusive Range-Exclusive resource lock
S             Shared lock
Sch-M         Schema-Modification lock
Sch-S         Schema-Stability lock
SIU           Shared Intent-Update lock
SIX           Shared Intent-Exclusive lock
U             Update lock
UIX           Update Intent-Exclusive lock
X             Exclusive lock

Generally speaking, read operations require a shared lock, whereas data modifications require exclusive locks. The names imply that you can have multiple shared locks on the same resource, whereas you can have only one exclusive lock on a resource. Obviously, you don't want people accessing data that you are currently modifying, not ordinarily anyway. Another factor is the duration that locks are held on a resource. Again, generally speaking, the lock manager holds locks for read operations only for the duration of the read operation itself, whereas for data modifications the locks are held for the entire transaction. This is an important distinction! To tie it all together, we need to show the compatibility of the locks within the SQL Server 2005 database engine. A lock is said to be compatible in SQL Server 2005 if the lock that a particular transaction is requesting can be acquired while the other lock is being held by another transaction. Table 2.8 shows SQL Server 2005 lock compatibility. A C indicates a conflict, an I indicates an illegal request, and an N indicates no conflict. Finally, Table 2.9 shows the different lock request statuses in SQL Server 2005.


Table 2.8: SQL Server 2005 Lock Compatibility

The columns, left to right, are in the same order as the rows: NL, SCH-S, SCH-M, S, U, X, IS, IU, IX, SIU, SIX, UIX, BU, RS-S, RS-U, RI-N, RI-S, RI-U, RI-X, RX-S, RX-U, RX-X.

NL     N N N N N N N N N N N N N N N N N N N N N N
SCH-S  N N C N N N N N N N N N N I I I I I I I I I
SCH-M  N C C C C C C C C C C C C I I I I I I I I I
S      N N C N N C N N C N C C C N N N N N C N N C
U      N N C N C C N C C C C C C N C N N C C N C C
X      N N C C C C C C C C C C C C C N C C C C C C
IS     N N C N N C N N N N N N C I I I I I I I I I
IU     N N C N C C N N N N N C C I I I I I I I I I
IX     N N C C C C N N N C C C C I I I I I I I I I
SIU    N N C N C C N N C N C C C I I I I I I I I I
SIX    N N C C C C N N C C C C C I I I I I I I I I
UIX    N N C C C C N C C C C C C I I I I I I I I I
BU     N N C C C C C C C C C C N I I I I I I I I I
RS-S   N I I N N C I I I I I I I N N C C C C C C C
RS-U   N I I N C C I I I I I I I N C C C C C C C C
RI-N   N I I N N N I I I I I I I C C N N N N C C C
RI-S   N I I N N C I I I I I I I C C N N N C C C C
RI-U   N I I N C C I I I I I I I C C N N C C C C C
RI-X   N I I C C C I I I I I I I C C N C C C C C C
RX-S   N I I N N C I I I I I I I C C C C C C C C C
RX-U   N I I N C C I I I I I I I C C C C C C C C C
RX-X   N I I C C C I I I I I I I C C C C C C C C C

Table 2.9: SQL Server 2005 Lock Request Status

Lock Request Status  Description
GRANT                The lock was granted to the process.
WAIT                 The process is being blocked by another process.
CNVT                 The lock is being converted to another type of lock.

Monitoring Locks

Phew, that was a stretch! But what did you expect? Locking is always going to be a complex, if not difficult, subject. Thankfully, it is not difficult in SQL Server 2005 to detect what kinds of locks are being acquired in the database engine, what transactions are being blocked, the blocking transactions, and whether any deadlocks have occurred. So, we'll now go through all the tools and techniques you can use to determine what is going on with your SQL Server 2005 instance as far as locks are concerned.

Activity Monitor

Perhaps the easiest way to look at what is going on with your SQL Server 2005 instance is to use the Activity Monitor utility within SSMS. It enables you to quickly and concisely see what processes are connected to your SQL Server 2005 instance, what locks they have acquired, or alternatively what objects they are waiting for locks to be released on. Figure 2.8 shows the Process Info page of the Activity Monitor.

Figure 2.8: Activity Monitor, Process Info page


Performance Object Counters

The SQLServer:Locks performance object has a number of counters that can be used for monitoring the lock manager:
Average Wait Time (ms)
Lock Requests/sec
Lock Timeouts (timeout > 0)/sec
Lock Timeouts/sec
Lock Wait Time (ms)
Lock Waits/sec
The resource types shown in Table 2.6 represent the instances of each of these performance object counters.
SQL Server Profiler Event Classes

You can capture a rich set of information through SQL traces or SQL Server Profiler. The lock event classes will enable you to monitor the locks that are being acquired, canceled, or released. They also allow you to monitor lock escalation and whether your queries/transactions are timing out. Table 2.10 shows the lock event classes.

Table 2.10: Lock Event Classes

Event Class                 Description
Lock:Acquired               This event class indicates that a lock has been acquired.
Lock:Cancel                 This event class indicates that a lock has been canceled.
Lock:Escalation             This event class indicates that a lock escalation has occurred.
Lock:Released               This event class indicates that a lock has been released.
Lock:Timeout                This event class indicates that a request for a lock on a resource has timed out because of an incompatible lock acquired by another transaction.
Lock:Timeout (timeout > 0)  This event class is the same as the Lock:Timeout event class except it does not include any event where the timeout value (@@LOCK_TIMEOUT) is 0.

Tip Watch out for excessive lock escalation using the Lock:Escalation event class.
System Stored Procedures

A number of system stored procedures have been with SQL Server since forever and will quickly show you what processes are running on your SQL Server 2005 instance, what locks they have acquired, or whether they are waiting for locks to be released. Table 2.11 shows these system stored procedures.

Table 2.11: SQL Server 2005 Locking System Stored Procedures

Stored Procedure  Description
sp_who            Reports basic information similar to what the Activity Monitor shows.
sp_who2           Reports richer information compared to the sp_who system stored procedure. This is undocumented.
sp_lock           Reports basic information about locks.

The sp_who system stored procedure represents a quick way of determining whether a particular process is being blocked by another process. If users are complaining about slow transactions or unresponsive queries, try executing the following command to determine whether their process is being blocked by other processes:
EXEC sp_who 'ACTIVE' ;
GO

The ACTIVE parameter excludes sessions that are waiting for the next command from a user connection. Look for a SPID value in the blk column, which would indicate that the process is being blocked by that SPID. Note The sp_lock system stored procedure is being deprecated in a future release of SQL Server. You should use the sys.dm_tran_locks DMV instead. Figure 2.9 shows a sample output of the sp_who system stored procedure. Figure 2.10 shows a sample output of the sp_who2 system stored procedure.

Figure 2.9: sp_who


Figure 2.10: sp_who2


DMVs

We have already discussed the sys.dm_os_wait_stats DMV and how it shows the various tasks that are waiting to acquire a particular lock on a resource before proceeding. The main DMV to query to see what the lock manager is up to is sys.dm_tran_locks. It returns all the currently active processes that have either had a lock granted to them or are waiting for a lock to be acquired. Table 2.12 shows the columns returned by the sys.dm_tran_locks DMV.

Table 2.12: sys.dm_tran_locks DMV Columns

Column                         Description
resource_type                  Lock resource type. (See Table 2.6.)
resource_subtype               Lock resource subtype. (Not all lock resource types have subtypes.)
resource_database_id           Resource database ID.
resource_description           Resource description.
resource_associated_entity_id  Entity ID associated with the lock resource.
resource_lock_partition        Lock partition ID for a partitioned lock resource.
request_mode                   Lock request mode. (See Table 2.7.)
request_type                   Lock request type.
request_status                 Lock request status.
request_reference_count        Approximate number of times the requesting entity has requested this resource.
request_lifetime               Reserved.
request_session_id             Session ID of the lock request.
request_exec_context_id        Execution context ID of the process that currently owns this request.
request_request_id             Lock request ID.
request_owner_type             Lock request owner type. (See Table 2.5.)
request_owner_id               Lock request owner ID.
request_owner_guid             Lock request owner GUID.
request_owner_lockspace_id     Reserved.
lock_owner_address             Internal data structure used to track the lock request. (This is related to the resource_address column in the sys.dm_os_waiting_tasks DMV.)

Figure 2.11 shows sample output of the sys.dm_tran_locks DMV.


Figure 2.11: Output of sys.dm_tran_locks

We hope this ties together all of the theory that we have discussed. As you can see from the output, the DMV returns the resource type, request type, and request status, as shown in the earlier tables.
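If you want to focus on blocked sessions specifically, a query along the following lines (a sketch, not the only possible formulation) filters sys.dm_tran_locks down to the lock requests that are currently waiting, which is usually what you care about when users report blocking:

```sql
SELECT request_session_id,
       resource_type,
       resource_database_id,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE request_status = 'WAIT' ;
```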
System Tables

For completeness' sake, you can still query these backward-compatible SQL Server 2000 compatibility views: sys.sysprocesses and sys.syslockinfo.

Resolving Locking Issues

Remember that it is in the nature of a concurrent system to have contention issues, given enough concurrent users. Nevertheless, there are good programming techniques that your database developers can use to minimize such contention. Generally speaking, SQL Server 2005 knows best; by default, it finds the right balance between protecting data and allowing concurrent access to data. Remember that it is a dynamic system, so it should adjust its behavior depending on the current activity and resources available. Lock escalation is really not a bad idea. Nevertheless, you can use several techniques to help you resolve any locking problems that you are experiencing. Which you employ will obviously depend on the locking issues you are experiencing, be it running out of memory, excessive contention, and so on. These techniques include the topics in the following sections.
Warning Be careful when changing the default locking behavior of the SQL Server 2005 database engine. Make sure you understand the implications and impact of your potential changes. Remember that you are always making a trade-off between the overhead on the SQL Server 2005 database engine and concurrency of the data (or accuracy in the case of dirty reads, or even RCSI and SI).
Setting Database Options

You can set a number of database options that will have the effect of disabling locking for the database. READ_ONLY Setting a database to read-only will ensure that the lock manager does not bother with maintaining locks within the database because there is no contention. Note There is a common myth on newsgroups and other sources that making a file group read-only will disable locking for that particular file group. This is wrong, as can be easily tested.


SINGLE_USER Likewise, setting a database to single-user will ensure that the lock manager does not bother with maintaining locks within the database because there is no contention.
READ_COMMITTED_SNAPSHOT As discussed, setting the database to RCSI will make the SQL Server 2005 database engine use row versioning instead.
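As a sketch, these options are all set with ALTER DATABASE (the UnitedNations database name is carried over from the earlier example; the WITH ROLLBACK IMMEDIATE clause is one way to deal with sessions that would otherwise block the change):

```sql
-- Make the database read-only:
ALTER DATABASE UnitedNations SET READ_ONLY WITH ROLLBACK IMMEDIATE ;
GO
-- Restrict the database to one user at a time:
ALTER DATABASE UnitedNations SET SINGLE_USER WITH ROLLBACK IMMEDIATE ;
GO
-- Switch read committed operations to row versioning (RCSI):
ALTER DATABASE UnitedNations SET READ_COMMITTED_SNAPSHOT ON ;
```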
Changing Isolation Levels

Another potential technique that you can use to resolve any locking issues is to change the isolation level being used. All of the following isolation levels would improve the concurrency of your database solution because shared locks will not be acquired for read operations:
Read Uncommitted
Read Committed Snapshot
Snapshot Isolation
But don't forget that there is a trade-off, as discussed earlier, so be particularly careful with these options.
Tip Always try to use the lowest possible transaction isolation level you can afford to minimize the amount of contention that is created.
Disabling Lock Escalation

You can disable lock escalation altogether if you want to do so. You do this via the following trace flags:
Trace flag 1211 disables lock escalation completely.
Trace flag 1224 disables lock escalation based on the number of locks acquired but will enable lock escalation when more than 40 percent of the memory allocated to locks is exceeded or 40 percent of the non-AWE memory is consumed by locks.
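The -T startup switch is the standard way to enable these globally, but as a sketch, global trace flags can also be toggled at runtime with DBCC TRACEON and DBCC TRACEOFF (the -1 argument applies the flag to all connections):

```sql
-- Disable lock escalation completely (use with care):
DBCC TRACEON (1211, -1) ;
GO
-- Check which trace flags are currently enabled:
DBCC TRACESTATUS (-1) ;
GO
-- Restore the default lock escalation behavior:
DBCC TRACEOFF (1211, -1) ;
```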
Disabling Row-Level/Page-Level Locks

A little-known feature in SQL Server 2005 (and SQL Server 2000) is the ability to disable row-level or page-level locking at the table or index level. You do this through the sp_indexoption system stored procedure. The following example shows page-level locks being disallowed for the [Production].[Product] table in the AdventureWorks database:
USE AdventureWorks ;
GO
EXEC sp_indexoption 'Production.Product', 'DisallowPageLocks', TRUE ;
GO

Using Optimizer Hints

As with the query hints we discussed earlier, you can direct a number of table hints at the query optimizer that control locking. As an example, you could use table hints so that the SQL Server 2005 database engine uses page-level or table-level locks instead of row-level locks. This would reduce the amount of memory used for locking but would potentially create greater contention. As you can see, it's always this trade-off between overhead on the SQL Server 2005 database engine and concurrency. Table 2.13 shows the more common table hints available in SQL Server 2005 that affect locking.


Tip Don't forget the READPAST table hint. It is one that is not well remembered.

Table 2.13: SQL Server 2005 Table Hints

Table Hint          Description
HOLDLOCK            Equivalent to SERIALIZABLE.
NOLOCK              Equivalent to the read uncommitted isolation level.
NOWAIT              Equivalent to SET LOCK_TIMEOUT 0.
PAGLOCK             Use page-level locks.
READCOMMITTED       Equivalent to the read committed isolation level, dependent on RCSI.
READCOMMITTEDLOCK   Equivalent to the read committed isolation level, irrespective of RCSI.
READPAST            Indicates that rows locked by other transactions should be ignored.
READUNCOMMITTED     Equivalent to the read uncommitted isolation level.
REPEATABLEREAD      Equivalent to the repeatable read isolation level.
ROWLOCK             Use row-level locks.
SERIALIZABLE        Equivalent to the serializable isolation level.
TABLOCK             Use table-level locks.
TABLOCKX            Use table-level exclusive locks.
UPDLOCK             Use update locks.
XLOCK               Use exclusive locks.

The following example shows a query where the [Production].[Product] table is queried. Any data that is locked will be ignored. This could return an incomplete result set in this case.
USE AdventureWorks ;
GO
SELECT * FROM [Production].[Product] WITH (READPAST) ;

Warning Overriding the optimizer via table or query hints is considered a last resort, so you should try other techniques first. Generally speaking, SQL Server 2005 does know best!
All of these strategies represent the more common ones that are implemented. They are just a starting point. Please appreciate that there are other tricks up the sleeve that the experienced DBA can utilize.

Real World Scenario: Caveat Emptor: Overriding the Optimizer

One consulting engagement I had was to tune an auctioning database solution for the wool industry in New Zealand. It was the usual story of poor performance: a legacy application that had been developed on an earlier version of SQL Server and had been upgraded as is, no in-house knowledge of SQL Server, and of course a history of hiring consultants. Of course, there was no documentation or any change management processes in place. The performance was generally poor, queries were slow for an auctioning system, and there was evidence of both internal and external memory pressures. In any case, I eventually found a lot of different problems, but a couple stuck in my mind. In particular, the contract developers (I think they came from a FoxPro background) had decided that they knew better than the SQL Server database engine and had put in optimizer hints for every single statement, especially ROWLOCK and a few others. Every single T-SQL statement! In summary, I recommended removing all the optimizer hints. Performance was restored. There are a number of morals here, including not always applying knowledge and experience from one database engine to another. But be particularly careful of overriding the optimizer, make sure you document your reasons, and reinvestigate the reason whenever you upgrade SQL Server through a release, edition, or even service pack.
-Victor Isakov

Troubleshooting Deadlocks
Deadlocks occur when two or more processes each have some resource locked in SQL Server 2005 and they cannot continue until they get access to the other process's resources (which happen to be locked). It's a catch-22 situation. These processes would potentially wait indefinitely, because SQL Server 2005's default lock timeout is infinite! Well, in the case of a deadlock occurring, SQL Server 2005 automatically chooses one process as the victim and kills it off automatically. A 1205 error will be generated. The victim's transaction will be rolled back, whereas the other process is allowed to complete.
Note So, what transaction will be killed off? Well, there's no simple answer, because there are always exceptions, but the transaction that has done the least amount of work will be killed off. Why? Think about it: there is less to roll back!
Figure 2.12 shows an example of a deadlock in SQL Server 2005 and how the SQL Server 2005 engine has chosen a particular process as a victim.

Figure 2.12: Resolving a deadlock

A lot of sites will not have too many problems with deadlocks; nevertheless, you should know how to monitor them if the need arises.

Monitoring Deadlocks

You do not have as rich a set of tools and options to monitor deadlocks as with other locking issues. But remember that you can still use the various techniques, such as the DBCC OPENTRAN command and the sys.dm_tran_locks DMV, that you have encountered elsewhere that relate to locking. Having said that, SQL Server 2005 has some sexy new ways of monitoring deadlocks.
SQL Server Profiler Event Classes

You have a number of event classes available for your SQL Server Profiler/SQL traces. Table 2.14 shows these specific event classes.

Table 2.14: Deadlock-Related Lock Event Classes

Event Name           Description
Deadlock Graph       Provides an Extensible Markup Language (XML) description of a deadlock.
Lock:Deadlock Chain  Is produced for each participant in a deadlock and captures additional information, such as the owner, lock mode, and resource type, to help troubleshoot deadlocks.
Lock:Deadlock        Indicates that a transaction has been rolled back as a deadlock victim because it tried to acquire a lock on a resource that caused a deadlock to occur.


The sexy new event class is the deadlock graph, which returns a picture. Wow! Well, maybe not, but Microsoft made a big deal of it at all the TechEd events in 2006. In any case, Figure 2.13 shows a deadlock graph being captured in SQL Server Profiler.

Figure 2.13: Deadlock graph in SQL Server Profiler Tip You can export the deadlock graph to an XML file (XDL) to share with your friends.
SQL Server Error Log

You can also optionally configure SQL Server 2005 to report additional deadlock information to the SQL Server error log. Two trace flags are relevant for configuring the SQL Server 2005 database engine in this way. Both these trace flags are global, which means they have to be enabled via the -T switch for the SQLSERVR.EXE service in SQL Server Configuration Manager. Trace flag 1204 returns the queries, their resources, and the lock type that got deadlocked. The following shows an example of the output of trace flag 1204 in the SQL Server error log:
Deadlock encountered .... Printing deadlock information Wait-for graph Node:1 KEY: 8:72057594045071360 (920111cc4128) CleanCnt:3 Mode:X Flags: 0x0 Grant List 0: Owner:0x03BC6900 Mode: X Flg:0x0 Ref:0 Life:02000000 SPID:53 ECID:0 XactLockInfo: 0x05F7A274 SPID: 53 ECID: 0 Statement Type: UPDATE Line #: 2 Input Buf: Language Event: UPDATE Production.Product SET ListPrice = ListPrice * 1.2 Requested By: ResType:LockOwner Stype:'OR'Xdes:0x051D7690 Mode: U SPID:52 BatchID:0 ECID:0 TaskProxy:(0x05A2A374) Value:0x3bbd820 Cost:(0/256180) Node:2 KEY: 8:72057594055622656 (010086470766) CleanCnt:2 Mode:X Flags: 0x0 Grant List 0: Owner:0x03BC6940 Mode: X Flg:0x0 Ref:0 Life:02000000 SPID:52 ECID:0 XactLockInfo: 0x051D76B4 SPID: 52 ECID: 0 Statement Type: UPDATE Line #: 2 Input Buf: Language Event: UPDATE Production.ProductListPriceHistory SET ListPrice = ListPrice * 1.1 Requested By: ResType:LockOwner Stype:'OR'Xdes:0x05F7A250 Mode: U SPID:53 BatchID:0 ECID:0 TaskProxy:(0x05CA6374) Value:0x3bbd3e0 Cost:(0/101236) Victim Resource Owner: ResType:LockOwner Stype:'OR'Xdes:0x05F7A250 Mode: U SPID:53 BatchID:0 ECID:0 TaskProxy:(0x05CA6374) Value:0x3bbd3e0 Cost:(0/101236)


Trace flag 1222 returns pretty much the same information but in a different order/format. We'll let you examine the output to see the difference. The following shows an example of the output of trace flag 1222 in the SQL Server error log:
deadlock-list deadlock victim=process8796a8 process-list process id=process8796a8 taskpriority=0 logused=101236 waitresource=KEY: 8:72057594055622656 (010086470766) waittime=2468 ownerId=1583 transactionname= user_transaction lasttranstarted=2007-03-20T22:46:30.843 XDES=0x64704d8 lockMode=U schedulerid=1 kpid=5596 status=suspended spid=54 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2007-03-20T22:59:48.187 lastbatchcompleted=2007-03-20T22:46:30.873 clientapp= Microsoft SQL Server Management Studio - Query hostname=VAIOTX650P hostpid=3788 loginname= VAIOTX650P\ace isolationlevel=read committed (2) xactid=1583 currentdb=8 lockTimeout=4294967295 clientoption1=671090784 clientoption2=390200 executionStack frame procname=adhoc line=2 stmtstart=4 sqlhandle= 0x02000000d5a5f2017e2ec8dc1a7e3a7156d48038a31f5417 1 numeric(2,1))UPDATE [Production].[Product] set [ListPrice] = [ListPrice]*@1 frame procname=adhoc line=2 stmtstart=4 sqlhandle= 0x020000007af3b50dd02114b4e23b2800ce631225b2675158 UPDATE Production.Product SET ListPrice = ListPrice * 1.2 inputbuf UPDATE Production.Product SET ListPrice = ListPrice * 1.2 process id=process879888 taskpriority=0 logused=256180 waitresource=KEY: 8:72057594045071360 (920111cc4128) waittime=793437 ownerId=1470 transactionname= user_transaction lasttranstarted=2007-03-20T22:46:13.983 XDES=0x52af378 lockMode=U schedulerid=1 kpid=5900 status=suspended spid=53 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2007-03-20T22:46:37.263 lastbatchcompleted=2007-03-20T22:46:14.047 clientapp= Microsoft SQL Server Management Studio - Query hostname= VAIOTX650P hostpid=3788 loginname=VAIOTX650P\ace isolationlevel=read committed (2) xactid=1470 currentdb=8 lockTimeout=4294967295 clientoption1=671090784 clientoption2=390200 executionStack frame procname=adhoc line=2 stmtstart=34 sqlhandle= 0x020000003f05e11a988aaacb1398c8ba136a53be22c8da7d UPDATE [Production].[ProductListPriceHistory] set [ListPrice] = [ListPrice]*@1 frame procname=adhoc 
line=2 stmtstart=4 sqlhandle= 0x020000000fae5d186499b953acc5dc4ffb76a9f1e8a96b61 UPDATE Production.ProductListPriceHistory SET ListPrice = ListPrice * 1.1 inputbuf UPDATE Production.ProductListPriceHistory SET ListPrice = ListPrice * 1.1 resource-list keylock hobtid=72057594055622656 dbid=8 objectname=AdventureWorks.Production.Product indexname=PK_Product_ProductID id=lock3b71a00 mode=X associatedObjectId=72057594055622656 owner-list owner id=process879888 mode=X waiter-list waiter id=process8796a8 mode=U requestType=wait keylock hobtid=72057594045071360 dbid=8 objectname= AdventureWorks.Production.ProductListPriceHistory


indexname=PK_ProductListPriceHistory_ProductID_StartDate id=lock3b75980 mode=X associatedObjectId=72057594045071360 owner-list owner id=process8796a8 mode=X waiter-list waiter id=process879888 mode=U requestType=wait

We hope you enjoyed examining it!

Resolving Deadlock Issues

Any concurrent environment will experience deadlocks. It's unavoidable. Such is the nature of the beast. However, you can minimize deadlocks using the following techniques:
Keep transactions as short as possible.
Use the lowest transaction isolation level you can afford.
Use optimizer hints.
Access database objects in the same order.
Avoid user interaction in transactions.
Consider using RCSI.
Consider using SI.
Consider using bound connections.
Consider using MARS.
Use appropriate indexing strategies.
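To illustrate "access database objects in the same order": the deadlock shown earlier in this section arises because one transaction updates [Production].[Product] and then [Production].[ProductListPriceHistory] while the other touches them in the reverse order. If both transactions follow the same sequence, as sketched here, one simply blocks until the other commits and no deadlock can occur between them:

```sql
-- Both transactions touch Product first, then ProductListPriceHistory,
-- so their lock acquisition order can never form a cycle.
BEGIN TRAN ;
UPDATE Production.Product
SET ListPrice = ListPrice * 1.1 ;
UPDATE Production.ProductListPriceHistory
SET ListPrice = ListPrice * 1.1 ;
COMMIT TRAN ;
```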
Deadlock Priority

SQL Server 2005 automatically picks which transaction will be killed off in a deadlock scenario. Students always ask whether you can influence its decision. Well, yes, you can, at the session level, through the SET DEADLOCK_PRIORITY command. Basically, SQL Server 2005 has 21 levels of deadlock priority, from -10 to 10. A lower-priority session will be killed in preference to a higher-priority one, all other things being equal. The SET DEADLOCK_PRIORITY command supports the following deadlock priorities:
LOW (-5)
NORMAL (0)
HIGH (5)
A numeric value
Tip You can see a session's deadlock priority through the deadlock_priority column of the sys.dm_exec_sessions DMV.
So, you can use the SET DEADLOCK_PRIORITY HIGH command in the more important stored procedures and batches that you would rather not be killed off if a deadlock occurs.
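As a sketch, a batch that should survive deadlocks where possible might start like this, with a check of the current session's setting via the DMV mentioned in the tip:

```sql
-- Make this session a less attractive deadlock victim:
SET DEADLOCK_PRIORITY HIGH ;  -- same effect as SET DEADLOCK_PRIORITY 5
GO
-- Confirm the current session's deadlock priority:
SELECT session_id, deadlock_priority
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID ;
```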


Note In earlier versions of SQL Server, you had only the LOW deadlock priority, so you could only write stored procedures and batches that were the preferred victim.
Let's finish up by troubleshooting deadlocks in Exercise 2.4.

Exercise 2.4: Troubleshooting Deadlocks

You'll examine how you can generate deadlock graphs in SQL Server Profiler for further analysis.
1. Open SQL Server Management Studio, and connect to your SQL Server 2005 instance using Windows authentication.
2. Start the SQL Server Profiler through the Tools menu.
3. Give the trace a name, as shown here.

4. Click the Events Selection tab.
5. Clear all the events, as shown here.

6. Click the Show All Events check box.
7. Expand the Locks events, and select the Deadlock Graph, Lock:Deadlock, and Lock:Deadlock Chain events, as shown here.


8. Click the Run button.
9. Switch to SQL Server Management Studio, click the New Query button, and connect to your SQL Server 2005 instance.
10. Type the following query into the query pane, and execute it:
USE AdventureWorks ;
GO
BEGIN TRAN
UPDATE Production.Product
SET ListPrice = ListPrice * 1.1

11. Click the New Query button, and connect to your SQL Server 2005 instance a second time.
12. Type the following query into the second query pane, and execute it:
USE AdventureWorks ;
GO
BEGIN TRAN
UPDATE Production.ProductListPriceHistory
SET ListPrice = ListPrice * 1.2

13. Switch to the first query pane, modify the query as shown here, and then execute it. You should notice that the query does not complete, because the [Production].[ProductListPriceHistory] table is locked by the second query pane.
USE AdventureWorks ;
GO
/*
BEGIN TRAN
UPDATE Production.Product
SET ListPrice = ListPrice * 1.1
*/
UPDATE Production.ProductListPriceHistory
SET ListPrice = ListPrice * 1.1

14. Switch to the second query pane, modify the query as shown here, and then execute it:
USE AdventureWorks ;
GO
/*
BEGIN TRAN
UPDATE Production.ProductListPriceHistory
SET ListPrice = ListPrice * 1.2
*/
UPDATE Production.Product
SET ListPrice = ListPrice * 1.2

15. This should cause a deadlock. You should see an error message similar to the one shown here. (If you do not, check the first query pane.)


Msg 1205, Level 13, State 51, Line 2
Transaction (Process ID 69) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

16. Switch to SQL Server Profiler, and stop the trace.
17. Click the Deadlock Graph event class to examine the deadlock graph.

18. When you have finished, exit SQL Server Profiler.
19. Switch to SQL Server Management Studio.
20. Close the second query pane.
21. In the first query pane, modify the query as shown here, and then execute it:
USE AdventureWorks ;
GO
/*
BEGIN TRAN
UPDATE Production.Product
SET ListPrice = ListPrice * 1.1
UPDATE Production.ProductListPriceHistory
SET ListPrice = ListPrice * 1.1
*/
ROLLBACK TRAN

22. Exit SQL Server Management Studio.
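The exercise captured deadlocks with SQL Server Profiler; the Note earlier in this section also mentions that SQL Server 2005 extends deadlock priority beyond LOW. As a rough sketch of the server-side alternatives (the exact transaction body is illustrative; try this on a test instance, not in production):

```sql
-- Write detailed deadlock-node information to the SQL Server error log
-- for all sessions (trace flag 1204; -1 applies it server-wide).
DBCC TRACEON (1204, -1) ;

-- Within a business-critical session, make this transaction less
-- likely to be chosen as the deadlock victim.
SET DEADLOCK_PRIORITY HIGH ;
BEGIN TRAN ;
    -- ...critical work goes here...
COMMIT TRAN ;
```

SET DEADLOCK_PRIORITY in SQL Server 2005 also accepts LOW, NORMAL, or a numeric value, so competing workloads can be ranked relative to one another rather than simply marked high or low.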

Summary
In this chapter, you looked at how to optimize the performance of queries in SQL Server 2005. You started by looking at the different categories of poor query performance and learned how you can use the Database Engine Tuning Advisor to quickly tune your SQL Server 2005 database solution. We then covered different techniques that you can use to help you identify potentially poorly performing queries. Next we discussed what execution plans are and how to generate and analyze them. We provided some tips on what to watch out for in execution plans and general tips for optimizing poorly performing queries. We then covered the importance of maintaining and optimizing indexes for query performance. You looked at what index fragmentation is and the difference between reorganizing and rebuilding indexes.

Locking and concurrency are other important topics, and you examined how the lock manager works in SQL Server 2005. Transaction isolation levels define the way transactions operate in SQL Server 2005. Again, we covered ways in which you can override the default behavior of SQL Server 2005. Finally, we finished up with an examination of what deadlocks are in SQL Server 2005, how to detect them, and recommendations for minimizing them.

Exam Essentials
Know about the important DMVs. Make sure you understand where to use the important DMVs and the output of each, such as sys.dm_exec_query_stats, sys.dm_db_index_physical_stats, sys.dm_os_wait_stats, and sys.dm_tran_locks.
Understand index fragmentation. Make sure you can interpret the fragmentation of an index and be able to decide whether corrective action is appropriate.
Be able to maintain indexes. It is important to understand the difference between rebuilding and defragmenting an index and when to use each technique.
Know how to use query hints. Make sure you understand the more common query hints such as INDEX ( index_val [ ,... n ] ) and others.
Understand read committed snapshot isolation and snapshot isolation. It is important to understand how RCSI and SI work and where it is appropriate to use each.
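As a quick refresher on the DMVs and index-maintenance commands listed above, the following sketch shows typical usage. The Production.Product target and the fragmentation thresholds are illustrative assumptions (common rules of thumb), not fixed requirements:

```sql
-- Top 5 cached plans by total CPU time consumed.
SELECT TOP 5
    qs.total_worker_time AS total_cpu,
    qs.execution_count,
    qs.sql_handle
FROM sys.dm_exec_query_stats AS qs
ORDER BY qs.total_worker_time DESC ;

-- Fragmentation of every index in the current database.
SELECT object_id, index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats
     (DB_ID(), NULL, NULL, NULL, 'LIMITED') ;

-- Rule of thumb: reorganize light fragmentation, rebuild heavy fragmentation.
ALTER INDEX ALL ON Production.Product REORGANIZE ; -- e.g., roughly 5-30% fragmented
ALTER INDEX ALL ON Production.Product REBUILD ;    -- e.g., above roughly 30%
```

In practice you would run only one of the two ALTER INDEX statements per index, chosen by comparing avg_fragmentation_in_percent against your own thresholds.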

Review Questions
1. You are a database administrator responsible for maintaining a SQL Server 2005 instance. You get a call from the corporate help desk saying that database users are complaining about poor query performance. You need to see what queries are currently consuming the most SQL Server 2005 resources. What do you do?
A. Use SQL Server Profiler using the Tuning template.
B. Use SQL Server Profiler using the TSQL_Duration template.
C. Query the sys.dm_exec_query_stats DMV.
D. Query the sys.dm_exec_query_plan DMV.
2. You are the database administrator for your company. Marion, a junior developer, has asked you to tune a stored procedure. The stored procedure reads a [Products] table four times to calculate four different reports. The problem is that because the stored procedure takes so long to run, new products can be inserted between the calculations of these four reports, which leads to an inconsistency between the reports. After reading this book, Marion has assumed that this can be solved by changing the stored procedure's transaction isolation level. What transaction isolation level should you use?
A. UNCOMMITTED READ

B. COMMITTED READ
C. REPEATABLE READ
D. SERIALIZABLE
3. You are a database administrator responsible for tuning a database on a SQL Server 2005 instance. What tool should you use to improve the performance of your database?
A. SQL Server Profiler
B. Database Engine Tuning Advisor
C. Index Tuning Wizard
D. DTSRUN.EXE
4. You are a database administrator for your company. Your company has purchased a third-party application that uses SQL Server 2005. The third-party application generates T-SQL code on the fly, which is submitted as ad hoc queries. Performance has been abysmal for a number of these queries. You have monitored the database using SQL Server Profiler and have captured a representative trace. Your early conclusion points to poor indexing and suboptimal queries. What should you do? (Each correct answer represents part of the solution. Choose two.)
A. Use optimizer hints.
B. Use indexed views.
C. Use plan guides.
D. Use partitioning.
E. Use the Database Engine Tuning Advisor.
F. Use the SQL Server 2005 Upgrade Advisor.
5. You are a database administrator for a SQL Server 2005 instance that is experiencing a lot of deadlocks. What trace flag do you turn on to get more information about all nodes involved in the deadlocks that are occurring?
A. 3604
B. 3605
C. 1204
D. 1205
6. You are a database administrator responsible for a SQL Server 2005 instance that is driving a website responsible for market trades. Performance is critical, and users connected to the system must get the latest data. Data is being modified continuously. You have noticed that readers are being blocked by writers, and this is becoming a serious problem. Readers typically read a few rows at a time, whereas writers only ever modify a single row at a time. Transactions are never committed. What should you use to solve the concurrency issue?
A. Use the UNCOMMITTED READ isolation level.
B. Use the REPEATABLE READ isolation level.
C. Use snapshot isolation.

D. Use read committed snapshot isolation.
7. You are a database administrator for your company. A number of range queries are experiencing poor performance on a particular SQL Server 2005 instance. You have determined that this is due to lock escalation, which is causing further concurrency issues. SQL Server 2005 seems to be acquiring row-level locks that are always escalated. Additionally, you are getting errors indicating you have run out of memory for locks. You are planning to add more memory to the SQL Server 2005 instance but need to reduce these lock escalation problems today. What query hint should you use for these range queries to see whether it solves the problem?
A. READPAST
B. ROWLOCK
C. PAGLOCK
D. TABLOCK
8. You are the database administrator for your company. A database, using the default options, has been implemented on a SQL Server 2005 Enterprise Edition instance. While monitoring the performance of this database, you have noticed that the updating of statistics for a particular table at certain periods degrades performance unacceptably. You want to turn off automatic updating of statistics for this table only. What do you use?
A. Use the sp_autostats stored procedure.
B. Use the AUTO_UPDATE_STATISTICS database option.
C. Use the sp_createstats stored procedure.
D. Use the sp_updatestats stored procedure.
9. You are the database administrator for your company. A sales database solution has been implemented on a SQL Server 2005 instance. You have identified poor performance for the following query:
SELECT ProductID, SUM(UnitPrice) AS TotalPrice
FROM Sales.SalesOrderDetail WITH (INDEX(0))
WHERE ProductID BETWEEN @low AND @high
GROUP BY ProductID ;

The [SalesOrderDetail] table contains in excess of 1 billion rows. There is a clustered index on the [SalesOrderID] column and no nonclustered indexes. What should you do to improve performance? (Each correct answer represents part of the solution. Choose two.)
A. Create a nonclustered index on the [ProductId] column.
B. Change the hint to INDEX(1).
C. Remove the INDEX(0) hint.
D. Create a nonclustered index on the [ProductId] column, and include the [UnitPrice] column.
E. Create a nonclustered index on the [ProductId] and [UnitPrice] columns.
F. Add the FAST hint.
10. You are the database administrator for your company managing a SQL Server 2005 solution.

The [Products] table contains more than a million records. There are no indexes on the table at present. You need to create an indexing strategy for the following queries:
-- Query 1
SELECT ProductCategory, COUNT(*)
FROM Products
GROUP BY ProductCategory

-- Query 2
SELECT *
FROM Products
ORDER BY ProductNumber

-- Query 3
SELECT ProductNumber, ProductName, ProductCategory, ProductStockLevel, Discounted
FROM Products
WHERE ProductCategory = 'Nuclear'

What indexes should you create? (Each correct answer represents part of the solution. Choose two.)
A. Create a clustered index on the [ProductName] column.
B. Create a nonclustered index on the [ProductName] column.
C. Create a clustered index on the [ProductNumber] column.
D. Create a nonclustered index on the [ProductNumber] column.
E. Create a nonclustered index on the [ProductCategory] column and include the [ProductStockLevel] and [ProductDiscounted] columns.
F. Create a nonclustered index on the [ProductNumber] column and include the [ProductStockLevel] and [ProductDiscounted] columns.
G. Create a nonclustered index on the [ProductNumber], [ProductName], [ProductCategory], [ProductStockLevel], and [ProductDiscounted] columns.
H. Create a nonclustered index on the [ProductNumber], [ProductName], [ProductCategory], and [ProductStockLevel] columns.
11. You are a database administrator for your company. Your company has purchased a third-party application that uses SQL Server 2005 Express Edition. The third party uses stored procedures and views for all data access. Performance has been abysmal for a number of queries. The vendor has since gone bankrupt. You have monitored the database using SQL Server Profiler and have captured a representative trace. Your early conclusion points to suboptimal query plans. Indexing seems to be optimal and statistics are up-to-date. What should you do?
A. Use optimizer hints.
B. Use indexed views.
C. Use plan guides.
D. Use partitioning.
12. You are the database administrator for KatsNuke, a nuclear power station in New Zealand.

KatsNuke has a SQL Server 2005-based database solution that is being used to store real-time feeds from a nuclear reactor. A particular table has a lot of inserts being performed on it from these feeds, is queried often, and has a number of indexes on it. You have determined that the indexes become fragmented very quickly. Data insertion and query performance degrade throughout the day. Your indexes are rebuilt daily. What should you do when rebuilding the indexes?
A. Use a FILLFACTOR setting of 0.
B. Use a FILLFACTOR setting of 20.
C. Use a FILLFACTOR setting of 80.
D. Use a FILLFACTOR setting of 100.
13. You are the database administrator for a call center that operates between 7 A.M. and 7 P.M. in Sydney, Australia. The call center's database is implemented on a SQL Server 2005 instance. You have determined that page splits are causing performance problems for the [Customers] table. You have decided to apply a fill factor setting to the indexes on the [Customers] table. The average size of the rows in this table is 116 bytes. The primary key is a clustered index on the UNIQUEIDENTIFIER data type. How should you implement your fill factor strategy? (Each correct answer represents part of the solution. Choose three.)
A. Use a FILLFACTOR setting of 100.
B. Use a FILLFACTOR setting of 90.
C. Use a FILLFACTOR setting of 10.
D. Set the PAD_INDEX option to ON.
E. Set the PAD_INDEX option to OFF.
F. Rebuild the indexes daily.
G. Reorganize the indexes daily.
H. Rebuild the indexes weekly.
I. Reorganize the indexes weekly.
14. You are a database administrator for your company. A SQL Server 2005 database solution has a complex stored procedure with hundreds of lines of code that is generating exceptions. The error could be generated at any of 69 lines of code within the stored procedure. You need to modify the stored procedure to appropriately trap and log any errors so that you can investigate the cause further.
What mechanism should you use within the stored procedure to generate appropriate logging?
A. Set XACT_ABORT ON at the end of the stored procedure.
B. Use a TRY/CATCH block within the stored procedure.
C. Use the RAISERROR statement after each line of code.
D. Set XACT_ABORT ON at the beginning of the stored procedure.
15. You are the database administrator for your company. You have been asked to look at the execution plan for a query running on SQL Server 2005. The query involves a complex join and a sort operation. You have noticed that the execution plan has a number of hash operations that are taking up the majority of the query's execution time. What should you do to improve performance?
A. Use the SQL Server Profiler to trace for potential deadlocks.
B. Investigate what additional indexes can be created on the tables that are being used by the query.
C. Partition the tables used by the query.
D. Investigate what indexes can be dropped from the tables that are being used by the query.
16. You are a database administrator for a SQL Server 2005 instance that has been experiencing a lot of deadlocks. A particular transaction that is critical to the business has sometimes been chosen as the victim, and this is not acceptable. What should you do to help prevent this from occurring?
A. Set DEADLOCK_PRIORITY for the transaction to LOW.
B. Run the DBCC OPENTRAN command in the transaction.
C. Set DEADLOCK_PRIORITY for the transaction to HIGH.
D. Use the FAST query hint in the transaction.
17. You are a database administrator for your company. You are in charge of maintaining a VLDB based on a SQL Server 2005 instance. Automatic creation and updating of statistics has been turned off. All indexes are rebuilt nightly after hours. One morning, after enjoying your breakfast, your manager, Julie, indicates that database users have been having performance problems with reports that they run daily throughout the day. These reports use three stored procedures as follows:
sp1 @CustomerID
sp2 @OrderID, [@CustomerID]
sp3 @OrderDate, [@CustomerID]

After investigating this further, you discover that a bulk load of data occurred shortly after work began as a result of important data arriving. This bulk load affected the tables these stored procedures access. What should you do?
A. Add the RECOMPILE option to the EXECUTE statement that calls the three stored procedures.
B. Turn on the AUTO_CREATE_STATISTICS database option.
C. Run the sp_updatestats stored procedure.
D. Add the RECOMPILE option to the three stored procedures.
18. You are a database administrator for a SQL Server 2005 instance running in your company. The SQL Server 2005 instance is running a mission-critical system where performance is critical and cannot be impacted during business hours. You are seeking advice about how to tune a particular query that is run after-hours as part of an end-of-day process. This query takes more than an hour to run and is extremely resource intensive. You intend to email the query plan and all additional resource metrics to an external performance-tuning guru. You do not want to impact the performance of the SQL Server instance. What statement should you use?
A. SET SHOWPLAN_ALL ON
B. SET SHOWPLAN_TEXT ON
C. SET STATISTICS PROFILE ON

D. SET STATISTICS XML ON
19. You are the database administrator for your company. Iris, a database user, complains about the performance of a query running on the SQL Server 2005 instance. Iris has been waiting for five minutes for the query to complete. You query sys.dm_tran_locks and get the following partial results:

resource_type  resource_description  request_mode  request_status  request_session_id
KEY            (c39d32a242)          X             GRANT           70
KEY            (22ab2242aa)          X             GRANT           67
DATABASE                             S             GRANT           69
KEY            (139111a288)          X             GRANT           67
DATABASE                             S             GRANT           66
KEY            (239432a242)          X             GRANT           66
KEY            (2394324232)          X             GRANT           66
KEY            (2394324242)          U             WAIT            69
PAGE           1:24831               IX            GRANT           67
KEY            (34BD8999A4)          X             GRANT           66
PAGE           1:24831               IX            GRANT           66
KEY            (2394324242)          X             GRANT           66
PAGE           1:24831               IU            GRANT           69
PAGE           1:34553               IX            GRANT           70
METADATA                             Sch-S         GRANT           66
METADATA                             Sch-S         GRANT           69

What command do you run to determine who is the blocking process?
A. EXEC sp_who 66
B. EXEC sp_who 69
C. SELECT USER_NAME(66)
D. SELECT USER_NAME(69)
20. You are a database administrator for a SQL Server 2005 instance that is experiencing a lot of deadlocks. You want to set up a SQL Server Profiler trace that will capture an XML description of the deadlocks. What event class should you capture?
A. Deadlock Graph
B. Lock:Deadlock Chain
C. Lock:Deadlock
D. Lock:Timeout (timeout > 0)

Answers
1. C. The sys.dm_exec_query_stats DMV returns aggregate performance metrics about the cached execution plans. The sys.dm_exec_query_plan DMV will show you only the query plans. Using SQL Server Profiler is too late because it will show you only future traced events.

2. D. The SERIALIZABLE transaction isolation level will prevent phantom reads. New products will not be able to be inserted between the first and last queries that access the [Products] table.
3. B. The Database Engine Tuning Advisor was designed to make recommendations about how performance can be improved on your SQL Server 2005 instance.
4. C, E. Using the Database Engine Tuning Advisor will enable you to potentially create an optimal indexing strategy. Using plan guides will help with the application. One of the uses for plan guides is to help database administrators tune third-party applications where they cannot change the code.
5. C. Trace flag 1204 returns deadlock information about each node involved in the deadlock.
6. A. The UNCOMMITTED READ isolation level will not honor locks or acquire locks for read operations. Dirty reads should be all right because transactions are never committed and only ever modify a single row. The REPEATABLE READ isolation level would cause greater contention. There is no need to go to SI or RCSI because readers might not be working with the latest data and there might be an overhead on the SQL Server 2005 instance.
7. C. Page-level locks could reduce lock escalation and should consume less memory than row-level locks. Table-level locks would create too much contention. The READPAST query hint tells the query to skip locked data.
8. A. The sp_autostats system stored procedure changes the automatic UPDATE STATISTICS setting for tables or indexes. The sp_updatestats system stored procedure updates the statistics for the entire database. The AUTO_UPDATE_STATISTICS database option turns off automatic statistics for the entire database. The sp_createstats system stored procedure creates statistics.
9. C, D. Removing the INDEX(0) hint, creating a nonclustered index on the [ProductId] column, and including the [UnitPrice] column will be optimal.
Creating a nonclustered index on the [ProductId] and [UnitPrice] columns will result in a bigger index. Changing the hint to INDEX(1), adding the FAST hint, or creating a nonclustered index on the [ProductId] column alone will not improve performance.
10. C, E. C will be optimal for query 2. E will be optimal for queries 1 and 3.
11. A. You can improve the performance of the queries through optimizer hints. Indexed views cannot be automatically used in SQL Server 2005 Express Edition. Plan guides and partitioning are not supported by SQL Server 2005 Express Edition.
12. C. Using a FILLFACTOR setting of 80 will create free space on the data pages, avoiding page splits throughout the day, which should improve performance. The other FILLFACTOR settings would degrade performance.
13. B, E, F. A fill factor setting of 90 percent will be optimal for this table. There is no need to pad the index. The index should be rebuilt daily. Reorganizing the indexes will not rebuild the B-tree with the fill factor setting.
14. B. The TRY/CATCH block is designed to provide structured exception handling in batches and stored procedures. The RAISERROR statement traps the error of only the last T-SQL statement. The XACT_ABORT option does not trap errors.
15. B. Hash operations indicate a lack of appropriate indexes.
16. C. Setting DEADLOCK_PRIORITY to HIGH for the transaction will help ensure that the transaction is not chosen as the victim when a deadlock occurs. The DBCC OPENTRAN command returns information about the oldest open transaction. The FAST query hint only helps return a certain number of rows more quickly.
17. C. Because of the bulk load of data and the AUTO_UPDATE_STATISTICS database option being turned off, the statistics are out-of-date. This is probably causing the performance problems because the optimizer is using out-of-date statistics to generate suboptimal query plans.
Running the sp_updatestats system stored procedure will update the statistics for the entire database, ensuring that the query optimizer will generate efficient query plans. The other options do not update the statistics.
18. A. The SET SHOWPLAN_ALL option returns the most information about the query plan and related metrics without executing the query. The SET STATISTICS options execute the query, which would impact the performance of the SQL Server instance, and thus are inappropriate. The SHOWPLAN_TEXT option returns only basic information.
19. A. The system process ID (SPID) of 69 represents the blocked process. The SPID of 66 represents the blocking process. The sp_who system stored procedure returns information about the SPIDs connected to the SQL Server 2005 instance. The USER_NAME() system function returns the name of a database user account.
20. A. The Deadlock Graph event class provides an XML description of a deadlock.
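As background for the structured exception handling referenced in question 14, here is a minimal TRY/CATCH logging sketch. The [dbo].[ErrorLog] table and its columns are hypothetical, invented for illustration; the ERROR_NUMBER(), ERROR_LINE(), and ERROR_MESSAGE() functions are the SQL Server 2005 error functions available inside a CATCH block:

```sql
CREATE PROCEDURE dbo.usp_ComplexWork
AS
BEGIN
    BEGIN TRY
        -- The stored procedure's hundreds of lines of business
        -- logic would go here; one statement stands in for them.
        UPDATE Production.Product
        SET ListPrice = ListPrice * 1.1 ;
    END TRY
    BEGIN CATCH
        -- Record which statement failed and why, for later investigation.
        -- [dbo].[ErrorLog] is a hypothetical logging table.
        INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorLine, ErrorMessage, LoggedAt)
        VALUES (ERROR_NUMBER(), ERROR_LINE(), ERROR_MESSAGE(), GETDATE()) ;
    END CATCH
END ;
```

Because ERROR_LINE() reports the failing line, a single TRY/CATCH block covers all 69 potential failure points without sprinkling RAISERROR after every statement.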
