Archive

Archive for the ‘SQL Server 2008’ Category

How to remove “Server name” entry from “Connect to Server” dialog in SSMS

20/11/2012

If you connect to a lot of different SQL Server instances with the SQL Server tools (SSMS, SQL Profiler etc.), the list of servers in the “Connect to Server” dialog becomes large. Before SQL Server 2012 there was no way to delete an individual server entry from the list, nor a supported way to clear the whole list.

Some time ago, somewhere on the web, I found a way to delete the list of servers in the “Connect to Server” dialog.
For SQL Server 2005, the method is to close SSMS and then delete the mru.dat file. On Windows Server 2003 the file is located at %UserProfile%\Application Data\Microsoft\Microsoft SQL Server\90\Tools\Shell.
For SQL Server 2008/2008 R2, the method is to close SSMS and then delete the SqlStudio.bin file. On Windows Server 2003 the file is located at %UserProfile%\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell. On Windows 7 and Windows Server 2008 the file is located at %UserProfile%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell.
A few days ago, I used the same method for SSMS 2012 by deleting the SqlStudio.bin file from %UserProfile%\AppData\Roaming\Microsoft\SQL Server Management Studio\11.0.

The problem with these methods is that they are unsupported by Microsoft and we lose user-specific settings in SSMS.

Today I found that SQL Server 2012 gives us a supported way to delete a server name from the list in the “Connect to Server” dialog: simply select the server name in the drop-down list and press the Delete key.
It is documented in this blog post.


Reads in Read Committed Isolation Level

23/07/2012

I’ll start this blog post with a question: “What kinds of locks does SQL Server acquire during a simple SELECT statement in the read committed isolation level (with the READ_COMMITTED_SNAPSHOT database option set to OFF)?”

A few days ago, I would have answered: “Intent-Shared locks at the table and page level and shared locks on a row-by-row basis.”

But I found out that this isn’t correct. In some situations shared locks are not issued at all. Let’s demonstrate that. First, let’s create a database and a table with 3 rows.

CREATE DATABASE TestLocks
GO
ALTER DATABASE TestLocks SET READ_COMMITTED_SNAPSHOT OFF
GO
USE TestLocks
GO
-- Create table
CREATE TABLE dbo.DemoTable ( id int, col1 varchar(4000))
GO
-- Insert 3 rows = 2 pages
INSERT INTO dbo.DemoTable
SELECT 1, REPLICATE('a',4000) UNION ALL
SELECT 2, REPLICATE('a',4000) UNION ALL
SELECT 3, REPLICATE('a',4000) 

If we run DBCC IND we can see that the table contains 2 data pages (in my case pages 78 and 80).

DBCC IND('TestLocks','DemoTable',1)

Let’s run a simple SELECT statement.

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO
SELECT * FROM dbo.DemoTable
GO

I used SQL Profiler to check the acquired locks.

We can see that only Intent-Shared (IS) locks at the table and page levels are issued. There are no row-level shared (S) locks.
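
If you prefer not to use Profiler, a similar trace can be captured with Extended Events. This is a minimal sketch, not part of the original test, and it assumes you substitute the actual database ID of TestLocks:

-- Minimal Extended Events sketch: capture lock_acquired events for the TestLocks database.
-- Replace 9 with the value returned by SELECT DB_ID('TestLocks').
CREATE EVENT SESSION TrackLocks ON SERVER
ADD EVENT sqlserver.lock_acquired (WHERE database_id = 9)
ADD TARGET package0.ring_buffer
GO
ALTER EVENT SESSION TrackLocks ON SERVER STATE = START
GO
-- Run the SELECT from above, then read the captured events from the ring buffer target.
SELECT CAST(t.target_data AS xml) AS captured_events
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
ON t.event_session_address = s.address
WHERE s.name = 'TrackLocks'
GO
ALTER EVENT SESSION TrackLocks ON SERVER STATE = STOP
DROP EVENT SESSION TrackLocks ON SERVER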

I was very surprised when I noticed that. I thought that shared locks were always acquired and that the optimizer only decided on the granularity of the locks (row- or page-level shared locks).

I searched for information about this locking behavior in SQL Server and found Paul White’s post “The Case of the Missing Shared Locks”, where he writes about this locking optimization.

It seems that SQL Server avoids acquiring row-level shared (S) locks when it knows that no data has changed on a page. I didn’t find more information about this shared lock optimization, so if you know more, please leave a comment.

SQL Server Page Restore

10/01/2012

We all know how important backups are. I usually say that restore is even more important than backup: we can create a backup, but if we don’t know how to restore it, or we can’t because it isn’t valid, we have nothing. So always check your backups and practice restoring them to different locations. When you need to react quickly, restore practice can save you a lot of time and nerves.

In this post I decided to show how to use page restore. Page restore isn’t something that has to be done on a regular basis, but every DBA should be familiar with the technique and its requirements in case a page restore is needed.

Page restore requirements and limitations

Page restore is a feature available since SQL Server 2005. Enterprise Edition allows online page restore (restoring metadata pages requires an offline restore), while Standard Edition supports only offline page restore.

Page restore is a nice feature, especially if you have a large database and only a few damaged pages. As with all nice features, page restore has some requirements and limitations. To be able to use page restore, the database must be in the full or bulk-logged recovery model. The page restore must start from a full, file or filegroup backup, and there must be an unbroken chain of log backups up to the current log file. All log backups must be applied so that the page is brought up to date with the current log file.

Be aware that page restore cannot be used to restore: the transaction log, full-text catalogs, allocation pages (GAM, SGAM, PFS, DIFF, ML), the file header page (PageId = 0), or the boot page (PageId = 9).

How to identify damaged pages?

Damaged pages can be identified in multiple ways.

1. Sometimes, executing queries against the database results in error 824 (invalid page checksum error).

Msg 824, Level 24, State 2, Line 1

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xbf649245; actual: 0xbf648ec5). It occurred during a read of page (1:80) in database ID 9 at offset 0x000000000a0000 in file ‘D:\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\CorruptDB.mdf’. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

Analyzing the error we can see that the database with database_id = 9 has a damaged page 80 in file 1 (page 1:80).

2. Damaged pages can be identified with the DBCC CHECKDB command. Example output:

Msg 8928, Level 16, State 1, Line 2
Object ID 2105058535, index ID 0, partition ID 72057594038779904, alloc unit ID 72057594039828480 (type In-row data): Page (1:80) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 2
Table error: Object ID 2105058535, index ID 0, partition ID 72057594038779904, alloc unit ID 72057594039828480 (type In-row data), page (1:80). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 12716041 and -4.
CHECKDB found 0 allocation errors and 2 consistency errors in table ‘CorruptTable’ (object ID 2105058535).
CHECKDB found 0 allocation errors and 2 consistency errors in database ‘CorruptDB’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (CorruptDB).

3. The dbo.suspect_pages table in the msdb database. It contains one row per page that failed with an 823 or 824 error (a sample query is shown after this list).


4. Check the SQL Server error log and look for errors related to corrupted pages.
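
Here is a small sketch of a query against suspect_pages (the column list follows the documented table definition):

-- List pages recorded in msdb.dbo.suspect_pages, most recent first.
SELECT DB_NAME(database_id) AS database_name,
file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC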

How to perform page restore?

To perform a page restore we need a full backup containing the undamaged pages, plus the differential and all transaction log backups up to the current time. In the RESTORE DATABASE statement, we use the PAGE clause to list the page IDs of all of the pages to be restored. For an offline page restore we would first need to take a log backup with the NORECOVERY option. In this example, I will use online page restore.

--Restore database
RESTORE DATABASE CorruptDB PAGE = '1:80'
FROM DISK = 'D:\Backup\CorruptDB_full.bak'
WITH NORECOVERY
-- Restore log
RESTORE LOG CorruptDB
FROM DISK = 'D:\Backup\CorruptDB_log1.bak'
WITH NORECOVERY

/*
Processed 1 pages for database 'CorruptDB', file 'CorruptDB' on file 1.
RESTORE DATABASE ... FILE= successfully processed 1 pages in 0.023 seconds (0.339 MB/sec).
Processed 0 pages for database 'CorruptDB', file 'CorruptDB' on file 1.
RESTORE LOG successfully processed 0 pages in 0.002 seconds (0.000 MB/sec).
*/

In the example above, page 1:80 from the full backup is restored with the NORECOVERY option. Then we restore the log, also with the NORECOVERY option. If we try to restore the log with the RECOVERY option without taking the last (tail) log backup, we get an error:

Processed 0 pages for database ‘CorruptDB’, file ‘CorruptDB’ on file 1.
The roll forward start point is now at log sequence number (LSN) 43000000026100001. Additional roll forward past LSN 43000000042700001 is required to complete the restore sequence.
RESTORE LOG successfully processed 0 pages in 0.002 seconds (0.000 MB/sec).

You can query the database and all data is available except the damaged page. If we try to query data from the table containing the damaged page after restoring it with the NORECOVERY option, we get an error:

ERROR:

Msg 829, Level 21, State 1, Line 1
Database ID 9, Page (1:80) is marked RestorePending, which may indicate disk corruption. To recover from this state, perform a restore.

To finalize the page restore we need to take a new log backup and restore it with the RECOVERY option.

-- Backup DB log
BACKUP LOG CorruptDB
TO DISK = 'D:\Backup\CorruptDB_log2.bak'
-- Restore DB log with RECOVERY option
RESTORE LOG CorruptDB
FROM DISK = 'D:\Backup\CorruptDB_log2.bak'
WITH RECOVERY

Now we can query our table without error.

The offline page restore procedure is slightly different. With offline restore we first take a log backup with the NORECOVERY option, which puts the database into the restoring state. Then we restore the page from the full backup using the PAGE clause and the NORECOVERY option, apply the differential and all transaction log backups with NORECOVERY, and finally restore the last (tail) log backup, the one taken with NORECOVERY, with the RECOVERY option.
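
A minimal sketch of that offline sequence, reusing the hypothetical file names from the example above:

-- 1. Tail-log backup with NORECOVERY puts the database into the restoring state.
BACKUP LOG CorruptDB
TO DISK = 'D:\Backup\CorruptDB_tail.bak'
WITH NORECOVERY
-- 2. Restore the damaged page from the full backup.
RESTORE DATABASE CorruptDB PAGE = '1:80'
FROM DISK = 'D:\Backup\CorruptDB_full.bak'
WITH NORECOVERY
-- 3. Apply the differential and every log backup with NORECOVERY (repeat as needed).
RESTORE LOG CorruptDB
FROM DISK = 'D:\Backup\CorruptDB_log1.bak'
WITH NORECOVERY
-- 4. Restore the tail-log backup with RECOVERY to bring the database back online.
RESTORE LOG CorruptDB
FROM DISK = 'D:\Backup\CorruptDB_tail.bak'
WITH RECOVERY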

SQL Server: Simple Parameterization

31/10/2011

Simple parameterization is a SQL Server feature which allows the optimizer to parameterize submitted queries. If a submitted query has no parameters and has constant values plugged in, the optimizer can choose to treat the constant values as parameters and automatically create a parameterized query and an execution plan for it. Simple parameterization is on by default and is designed to reduce the cost of generating execution plans. SQL Server uses simple parameterization only for a relatively small set of queries whose plan doesn’t depend on the particular constant values.
To see how simple parameterization works I will use the AdventureWorks database. Let’s run a simple query and see what is written to SQL Server’s plan cache.

DBCC FREEPROCCACHE
GO
SELECT *
FROM Production.TransactionHistory
WHERE TransactionID = 213429
GO
SELECT cp.objtype,
cp.usecounts,
cp.size_in_bytes,
cp.plan_handle,
st.text,
qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text (cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan (cp.plan_handle) qp
WHERE st.text LIKE '%TransactionHistory%'
AND st.text NOT LIKE '%sys.dm_exec_cached_plans%'
GO

Here are the results from the plan cache query:

We can see that SQL Server created two entries in the cache. The first entry is the submitted query with the constant value, cached with objtype Adhoc. It is a shell query and doesn’t contain an execution plan; a shell query contains only the exact ad hoc batch text and a pointer to the parameterized (prepared) plan. The second entry is the parameterized query and has objtype Prepared. The prepared query contains the compiled execution plan and the query text with a parameter in place of the constant value. We can also see that the shell query uses 16 KB of cache while the prepared plan uses 32 KB. If we open the query_plan of the shell query we can see that it contains only a pointer to the prepared plan. From (1) we can see which parameterized plan is used together with the parameterized query text, and in (2) we can see which statement was submitted.

Let’s run the same type of query but with a different constant value (213430). If we look at the SELECT operator in the actual execution plan of the query we can tell that the query is parameterized.

(1) In the “Parameter List” we can see the “Parameter Compiled Value” and the “Parameter Runtime Value”. (2) shows us the parameterized text and (3) gives us the hash value of the query plan. We can use the QueryPlanHash value to return the query plan and execution count of the parameterized query.

SELECT st.text, qs.execution_count, qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text (qs.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) qp
WHERE qs.query_plan_hash = 0xAFB37EEDFFA28FE3

If we look at the results of the plan cache query again, we can see that a second shell query has been created with the exact text of the second query, and it points to the parameterized plan.

Why is the shell query cached when it only points to the parameterized query? It’s cached to make it easier to find the parameterized query if the exact same query with the same constant is re-executed later.

Now I will show one interesting behavior of simple parameterization. Let’s run the same type of query but with a constant value of 5. Here are the results from the plan cache query:

Two new entries are created: (1) one shell query and (2) one prepared query, but with a parameter data type of tinyint. It’s strange that the optimizer created a new prepared query with a tinyint parameter when there is already a cached plan with an int parameter. It seems that during simple parameterization the optimizer uses the smallest possible data type for the given numeric constant value. If we execute the query with a varchar or nvarchar constant value, varchar(8000) or nvarchar(4000) is used as the parameter data type.
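
A quick way to see those parameter data types is to compare the prepared entries in the cache. This is a sketch, and the exact parameterized text may differ slightly on your build:

-- Same query shape with a small and a large constant.
SELECT * FROM Production.TransactionHistory WHERE TransactionID = 5        -- expect a tinyint parameter
GO
SELECT * FROM Production.TransactionHistory WHERE TransactionID = 213429   -- expect an int parameter
GO
-- The text column of the prepared entries shows the parameter data type,
-- e.g. (@1 tinyint)SELECT * FROM [Production].[TransactionHistory] WHERE [TransactionID]=@1
SELECT cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text (cp.plan_handle) st
WHERE cp.objtype = 'Prepared'
AND st.text LIKE '%TransactionHistory%'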

As I mentioned at the beginning of the post, only a relatively small set of queries will be simple-parameterized. Only simple queries whose plan doesn’t depend on the constant value are considered “safe” for simple parameterization and cached. Also, there are a lot of statement types which SQL Server will not even attempt to auto-parameterize. For the complete list of these statement types, check Appendix A in the article Plan Caching in SQL Server 2008.

Let’s take a look at these simple queries.

SELECT TransactionID, ProductID
FROM Production.TransactionHistory
WHERE ProductID = 725
GO
SELECT *
FROM Production.TransactionHistory
WHERE ProductID = 725
GO

If we execute these two queries and check the results from the plan cache query, we can see that the first query is parameterized and the second one isn’t. Why?

For the first query, the query optimizer considers a nonclustered index seek to be the best option even when the constant value changes, because the only columns we need are contained in the nonclustered index (the clustered index is created on the TransactionID column).
The second query is different: we want all columns from the table, and a nonclustered index seek plus key lookup might be a good choice when only a few rows are returned but a terrible choice when many rows are returned. If we look at the query_plan of the second query we can see that the query is parameterized. It seems the query went through the auto-parameterization process, but the query optimizer decided it was “unsafe”, so the parameterized query isn’t cached.

Conclusion:

SQL Server will simple-parameterize some simple queries whose plan doesn’t depend on the particular constant values. During simple parameterization, if the optimizer decides that the parameterized query is “safe”, two entries are written to the plan cache: a shell query and a prepared query. The shell query doesn’t contain the full plan; it only points to the parameterized plan, which contains the full execution plan for the query. Shell and parameterized queries are cached to reduce the cost of generating execution plans. If we run a lot of ad hoc queries with different constant values, multiple parameterized queries for the same query shape (with different parameter data types) and a large number of shell queries end up in the plan cache.

SQL Server: Find statistics information in database

13/10/2011

Recently I worked on performance tuning of a database and noticed a lot of user-created statistics. I checked a few of them and saw that many of the user-created statistics are multi-column statistics and some of them have a filter defined. There was no documentation of the created statistics and I wanted to find all statistics with all of their columns and filters. That is easy with sys.stats, but for better visibility I wanted to list all statistics columns and the defined filter in a single concatenated string.
The script below returns all statistics with all columns and the defined filter for all user tables and views in a database. The script works on SQL Server 2008 and Denali (SQL Server 2005 doesn’t have filtered statistics).

;WITH Stat(SchemaName, TableName, StatisticsName, StatisticsType, filter_definition, stats_column_id, name)
AS
(
SELECT sh.name AS SchemaName ,
o.name AS TableName,
s.name AS StatisticsName,
CASE
WHEN s.user_created = 1 THEN 'USER'
WHEN s.auto_created = 1 THEN 'AUTO'
WHEN s.auto_created = 0 AND s.user_created = 0 THEN 'INDEX'
ELSE 'UNKNOWN'
END AS StatisticsType,
s.filter_definition,
sc.stats_column_id,
c.name
FROM sys.stats s
INNER JOIN sys.objects o
ON o.object_id = s.object_id
INNER JOIN sys.schemas sh
ON sh.schema_id = o.schema_id
INNER JOIN sys.stats_columns sc
ON sc.stats_id = s.stats_id
AND sc.object_id = s.object_id
INNER JOIN sys.columns c
ON c.object_id = sc.object_id
AND c.column_id = sc.column_id
WHERE o.type in ('U','V') -- UserTable and View
)

SELECT s.SchemaName,
s.TableName,
s.StatisticsName,
STUFF (( SELECT ', ' + s1.name
FROM Stat s1
WHERE s1.SchemaName = s.SchemaName
AND s1.TableName = s.TableName
AND s1.StatisticsName = s.StatisticsName
ORDER BY s1.stats_column_id
FOR XML PATH('')
), 1, 2, '') + CASE
WHEN s.filter_definition IS NOT NULL THEN
' FILTER: ' + s.filter_definition
ELSE ''
END AS StatisticsColumns
FROM Stat s
/* Filter statistics: USER - user-created, AUTO - auto-created, INDEX - index statistics */
-- WHERE s.StatisticsType in ('USER', 'AUTO')
GROUP BY s.SchemaName, s.TableName, s.StatisticsType, s.filter_definition, s.StatisticsName
ORDER BY s.SchemaName, s.TableName, s.StatisticsType

In the query we can filter for only automatically created and user-created statistics if we want to exclude index statistics. Below is sample output for user-created and automatically created statistics in the AdventureWorks database. (1) shows the output for a multi-column statistic with a filter defined.

If you have suggestions for the script, feel free to leave a comment.

SQL Server 2008 Backup Compression Performance

18/07/2011

In the previous post I wrote about backup compression, a new feature of SQL Server. That post covered the basics of backup compression, how to enable backup compression at the instance level and for each individual backup, and some characteristics of the media set when compression is used. In this post I will show the comparative performance results I obtained by testing backup and restore operations with compressed and uncompressed backups.
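
For reference, the instance-wide default mentioned there is controlled with sp_configure (a short sketch):

-- Make compression the default for all backups on this instance.
EXEC sp_configure 'backup compression default', 1
RECONFIGURE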

Testing the backup operation on the AdventureWorks2008R2 database

I ran the first test using the AdventureWorks2008R2 database on a client machine. The results were as follows:

The uncompressed backup completed in 5.442 seconds, the backup size was 213 MB and CPU usage was around 16%.

The compressed backup completed in 2.849 seconds, the backup size was 49 MB and CPU usage was around 29%.

Clearly, the compressed backup performs far better. The compressed backup is 4.34 times smaller than the uncompressed one. It is also expected that the compressed backup takes less time (because of the much smaller amount of I/O), which the test confirmed, but CPU usage was higher.
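
The backup statements for such a test look roughly like this (a sketch; the file paths are hypothetical):

-- Uncompressed backup (explicit, in case compression is the instance default).
BACKUP DATABASE AdventureWorks2008R2
TO DISK = 'D:\Backup\AdventureWorks2008R2_nocomp.bak'
WITH NO_COMPRESSION, STATS = 10
-- Compressed backup.
BACKUP DATABASE AdventureWorks2008R2
TO DISK = 'D:\Backup\AdventureWorks2008R2_comp.bak'
WITH COMPRESSION, STATS = 10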

Testing the restore operation on the AdventureWorks2008R2 database

Before the test I expected that restore of the compressed backup would also perform better, and the test confirmed it. The results were as follows:

Backup verification:

Database                 Backup type       Time (seconds)
AdventureWorks2008R2     Uncompressed      1.703
AdventureWorks2008R2     Compressed        0.403

Database restore (the database already existed on the server):

Database                 Backup type       Time (seconds)    CPU %
AdventureWorks2008R2     Uncompressed      4.649             5.5
AdventureWorks2008R2     Compressed        2.663             9.5

Testing a 26 GB TestDB database on a server

The next test was run on a server against a database of about 26 GB. The results were as follows.

Database    Backup type      Backup size    Time (mm:ss)    Compression ratio
TestDB      Uncompressed     25.8 GB        03:36           1
TestDB      Compressed       5.97 GB        02:03           4.32

CPU usage while creating the compressed backup was about 5-6% higher.

Backup verification of the uncompressed backup finished in 1 minute and 15 seconds, while for the compressed backup it finished in 41 seconds.
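
Verification here means RESTORE VERIFYONLY, roughly like this (a sketch; the path is hypothetical):

-- Check that the backup set is complete and readable without actually restoring it.
RESTORE VERIFYONLY
FROM DISK = 'D:\Backup\TestDB_comp.bak'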

Data compression and backup compression

Since SQL Server 2008 supports data compression at the row or page level, I was interested in how backup compression works in combination with data compression (row or page compression).
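
Row or page compression for a table is enabled with a rebuild, roughly like this (a sketch; the table name is hypothetical):

-- Rebuild the table with page-level data compression (use ROW for row compression).
ALTER TABLE dbo.SomeTable REBUILD WITH (DATA_COMPRESSION = PAGE)
-- Rebuild the table's indexes with the same setting.
ALTER INDEX ALL ON dbo.SomeTable REBUILD WITH (DATA_COMPRESSION = PAGE)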

I ran the test on my client machine, using a database in which I had compressed all tables and indexes. The results were as follows:

Data compression    Database    Database size    Backup size    Compression ratio    Backup time (s)    Backup type
None                TestDb      168 MB           168 MB         1                    4.453              Uncompressed
None                TestDb      168 MB           48 MB          3.49                 2.787              Compressed
Row                 TestDb      115 MB           115 MB         1                    3.034              Uncompressed
Row                 TestDb      115 MB           44 MB          2.61                 2.237              Compressed
Page                TestDb      72 MB            72 MB          1                    2.195              Uncompressed
Page                TestDb      72 MB            41 MB          1.75                 1.903              Compressed

The results show that backup compression also compresses databases in which all tables and indexes are already compressed with row or page compression, although the compression ratio of the compressed vs. uncompressed backup is lower than for a database without data compression. The compressed backup still completes faster, and CPU usage is higher while creating the compressed backup.

TDE (Transparent Data Encryption) and backup compression

I was also interested in the backup compression performance for databases with TDE enabled. The results were as follows:

Database         Database size    Backup size    Compression ratio    Backup time (s)    Backup type
Test (TDE on)    222 MB           222 MB         1                    4.199              Uncompressed
Test (TDE on)    222 MB           221 MB         1                    4.218              Compressed

Compressing the backup of a database with TDE enabled results in high CPU usage, the compressed backup takes slightly longer to complete, and there is almost no difference between the sizes of the compressed and uncompressed backups.
Compressing encrypted data is not a good choice, so backup compression should be avoided if you have a database with TDE enabled.

Conclusion:
Creating compressed backups clearly saves both space and time, at the cost of somewhat higher CPU usage while the compressed backup is being created. If the CPU on the server is already heavily loaded, you run compressed backups during working hours, and you don’t want them to interfere with other operations on the server, CPU usage for backup compression can be limited with Resource Governor.
Backup compression is also an excellent choice for databases that use data compression (row or page), while it should be avoided for databases with TDE enabled.
Those who use log shipping can see a noticeable performance improvement if they enable backup compression, because log shipping works by creating a backup, copying it to a network share (usually on the secondary server) and then restoring it on the secondary server. Just keep an eye on the CPU, and if you notice a bottleneck, limit CPU usage for backups using Resource Governor, roughly as sketched below.
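
A minimal Resource Governor sketch for that scenario, assuming the backups run under a dedicated login (the pool, group, function and login names are hypothetical):

USE master
GO
-- Pool and workload group that cap CPU for backup sessions (the cap applies under CPU contention).
CREATE RESOURCE POOL BackupPool WITH (MAX_CPU_PERCENT = 20)
CREATE WORKLOAD GROUP BackupGroup USING BackupPool
GO
-- Classifier function: route the backup login into the restricted group.
CREATE FUNCTION dbo.fnBackupClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
IF SUSER_SNAME() = 'BackupUser'
RETURN 'BackupGroup'
RETURN 'default'
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnBackupClassifier)
ALTER RESOURCE GOVERNOR RECONFIGURE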

SQL Server 2008 R2 SP1 and SQL Server “Denali” CTP3

13/07/2011

Yesterday was a day full of news for everyone who uses SQL Server: Service Pack 1 for SQL Server 2008 R2 was released, along with SQL Server “Denali” CTP3.

SQL Server 2008 R2 SP1

SP1 contains all the fixes from CU1 (Cumulative Update) through CU6; CU7 and CU8 are not included in SP1. Information about the other fixes included in SP1 can be found at: http://support.microsoft.com/kb/2528583
More information about the new features and improvements SP1 brings can be found in the article:
http://social.technet.microsoft.com/wiki/contents/articles/microsoft-sql-server-2008-r2-sp1-release-notes.aspx

SQL Server 2008 R2 SP1 can be downloaded from the following link:
http://www.microsoft.com/download/en/details.aspx?id=26727

For SQL Server 2008 R2 SP1 Express:
http://www.microsoft.com/download/en/details.aspx?id=26729

If you use the SQL Server 2008 R2 SP1 Feature Pack, you can download it at:
http://www.microsoft.com/download/en/details.aspx?id=26728

SQL Server “Denali” CTP3

The long-awaited CTP3 of the new version of SQL Server, code-named “Denali”, has finally been released. Anyone who wants to see what “Denali” looks like and what new features it brings can download it here:
SQL Server “Denali” CTP3 download

The release notes for CTP3 can be found at the following address:
http://social.technet.microsoft.com/wiki/contents/articles/3711.aspx

Categories: Denali, SQL Server 2008