SQL Server: How to fix error 7903

16/04/2013 1 comment

Recently I had to deal with FILESTREAM corruption in a SQL Server 2008 database, so I decided to share my experience. The error from DBCC CHECKDB was:

Msg 7903, Level 16, State 2, Line 1
Table error: The orphaned file "00005c8a-00006362-0001" was found in the FILESTREAM directory ID 9ae6ae63-bcc4-46f7-81cf-db99b01549e8 for object ID 903062353, index ID 1, partition ID 72057594125680640, column ID 11.

The explanation of error 7903 (from "Cause and Resolution of Database Engine Errors") is:

“A FILESTREAM file was found in a FILESTREAM column directory; however, the corresponding column value in the partition is missing.”

This error cannot be repaired by DBCC CHECKDB, so I decided to test what would happen if I physically deleted the mentioned FILESTREAM file. (You should first take all user actions described in the documentation.)

I restored the database to a different server and executed DBCC CHECKDB again. As expected, it finished with the same error.

Then I searched for the file in the FILESTREAM directory of the database.

[Screenshot: the orphaned file in the FILESTREAM directory]

First I copied the file to another location (I wanted to keep a copy of it) and then deleted it. When I executed DBCC CHECKDB again, it finished without errors.
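The copy-then-delete step can be sketched in PowerShell. All paths below are assumptions: the FILESTREAM root depends on where your database's FILESTREAM data container lives, the directory GUID and file name are taken from the Msg 7903 error text, and the backup folder is hypothetical.

```
# Assumed paths -- adjust to your FILESTREAM data container location.
# The directory GUID and file name come from the DBCC CHECKDB error message.
$fsDir    = "D:\SQLData\MyDb_fs\9ae6ae63-bcc4-46f7-81cf-db99b01549e8"
$orphan   = "00005c8a-00006362-0001"
$backupTo = "D:\Backup\OrphanedFilestream"

# Keep a copy of the file before deleting it
New-Item -ItemType Directory -Path $backupTo -Force | Out-Null
Copy-Item -Path (Join-Path $fsDir $orphan) -Destination $backupTo

# Delete the orphaned file, then re-run DBCC CHECKDB to verify
Remove-Item -Path (Join-Path $fsDir $orphan)
```

This is only a sketch of the manual steps described above, on a restored copy of the database, not a supported repair procedure.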

Deleting the orphaned file resolved FILESTREAM consistency error 7903. (If you decide to delete the file, be sure you don't need it.)

If you decide to do that on a production SQL Server, you do so entirely at your own risk.

Compare tables between 2 databases with Tablediff and PowerShell

25/02/2013 Leave a comment

A few days ago, I needed to compare tables and find all differences in data between 2 databases on different SQL Server instances. For that task I decided to use the tablediff utility, which is included with SQL Server.

The tablediff utility can be used for a schema and row-count comparison or a row-by-row comparison of data between 2 tables. It can also generate a T-SQL script to fix the differences between the tables. More information about the tablediff utility can be found in Books Online.

By default, the tablediff utility can be found in the COM directory of the SQL Server installation folder.
For SQL Server 2012 the default location is C:\Program Files\Microsoft SQL Server\110\COM\.
For SQL Server 2008/2008 R2 the default location is C:\Program Files\Microsoft SQL Server\100\COM\.

Tablediff is designed to compare 2 tables at a time and generates one .sql script file; there is no built-in way to automatically compare all tables between 2 databases. To do that, I decided to create a PowerShell script that runs tablediff on every table in the database.

I use PowerShell version 3 with the SQL Server 2012 SQLPS module imported. To demonstrate the solution, let's create 2 databases, each with 2 tables (the source and destination databases must have the same table names).

CREATE DATABASE SourceDB;
GO
CREATE DATABASE DestinationDB;
GO
USE SourceDB;
GO
CREATE TABLE dbo.CompareTable (col1 int identity(1,1),col2 varchar(10))
GO
INSERT INTO dbo.CompareTable (col2)
SELECT 'test' UNION ALL
SELECT 'test1' UNION ALL
SELECT 'test2'
GO
CREATE TABLE dbo.CompareTable1 (col1 int identity(1,1),col2 varchar(10))
GO
INSERT INTO dbo.CompareTable1 (col2)
SELECT 'test' UNION ALL
SELECT 'test1' UNION ALL
SELECT 'test2'
GO
USE DestinationDB
GO
CREATE TABLE dbo.CompareTable(col1 int identity(1,1),col2 varchar(10))
GO
INSERT INTO dbo.CompareTable (col2)
SELECT 'test1' UNION ALL
SELECT 'test2'
GO
CREATE TABLE dbo.CompareTable1 (col1 int identity(1,1),col2 varchar(10))
GO
INSERT INTO dbo.CompareTable1 (col2)
SELECT 'test' UNION ALL
SELECT 'test1' UNION ALL
SELECT 'test2'
GO
SELECT * FROM SourceDB.dbo.CompareTable
SELECT * FROM SourceDB.dbo.CompareTable1
SELECT * FROM DestinationDB.dbo.CompareTable
SELECT * FROM DestinationDB.dbo.CompareTable1
GO
/*
SourceDB.dbo.CompareTable
col1        col2
----------- ----------
1           test
2           test1
3           test2
SourceDB.dbo.CompareTable1
col1        col2
----------- ----------
1           test
2           test1
3           test2
DestinationDB.dbo.CompareTable
col1        col2
----------- ----------
1           test1
2           test2
DestinationDB.dbo.CompareTable1
col1        col2
----------- ----------
1           test
2           test1
3           test2
*/

Now we have 2 databases with 2 tables and sample data. We can see that the data in CompareTable of SourceDB and DestinationDB is not the same.
To compare the data in these tables with PowerShell, we can run the following (before you run the command, change SourceServer and DestinationServer):

& "C:\Program Files\Microsoft SQL Server\110\COM\tablediff.exe" -SourceServer SERVER\SQL2012 -SourceDatabase SourceDB -sourcetable CompareTable -DestinationServer SERVER\SQL2012 -DestinationDatabase DestinationDB  -destinationtable CompareTable -c -f D:\1\Script

[Screenshot: tablediff output]

The result shows 3 differences, and a Script.sql file is created to fix them.
The content of Script.sql is:

[Screenshot: contents of Script.sql]

That is a simple example of how to run tablediff to compare data in 2 tables and generate a .sql script to fix the differences.

To compare data in all tables, we need to loop through all tables in SourceDB and run tablediff for each of them. For every table we also need to set a new file name for the T-SQL script file. (If the file name specified in the -f parameter already exists, tablediff fails with the error "The file %s already exists.")

Here is the script:

# Date:     25/02/2013
# Author:   Ana Mihalj
# Description:  PS script to execute tablediff to compare and find all differences in data between 2 databases.
# Version:  1.0
# Example Execution: (with default parameters) .\TableDiffWithParam.ps1
# or with non-default parameters
# .\TableDiffWithParam.ps1 -SourceServer SERVER\INSTANCE -SourceDatabase SourceDB -DestinationServer SERVER\INSTANCE -DestinationDatabase DestinationDB -OutputFolder D:\Folder

param(
    [string]$SourceServer = "SERVER\INSTANCE",
    [string]$SourceDatabase = "SourceDB",
    [string]$DestinationServer = "SERVER\INSTANCE",
    [string]$DestinationDatabase = "DestinationDB",
    [string]$OutputFolder = "D:\Folder"
)

#set path to tablediff utility
$tablediff = "C:\Program Files\Microsoft SQL Server\110\COM\tablediff.exe"

#get tables to compare (a default instance is reached through the Default node in the SQLPS path)
if ($SourceServer.Contains("\"))
{ $Tables = Get-ChildItem SQLSERVER:\SQL\$SourceServer\Databases\$SourceDatabase\Tables\ | Select-Object Name }
else
{ $Tables = Get-ChildItem SQLSERVER:\SQL\$SourceServer\Default\Databases\$SourceDatabase\Tables\ | Select-Object Name }

#create the output folder if it does not exist
if ((Test-Path $OutputFolder) -eq $false)
{
    md $OutputFolder
}

#output file
$OutputFile = $OutputFolder + "\Output.txt"

#for each table
foreach ($table in $Tables)
{
    #create a new file name for the Transact-SQL script file
    $DiffScript = $OutputFolder + "\" + $table.Name + ".sql"

    #if the file already exists, throw an error (tablediff would fail anyway)
    if ((Test-Path $DiffScript) -eq $true)
    {
        throw "The file $DiffScript already exists."
    }

    #execute tablediff
    & $tablediff -sourceserver $SourceServer -sourcedatabase $SourceDatabase -sourcetable $table.Name -destinationserver $DestinationServer -destinationdatabase $DestinationDatabase -destinationtable $table.Name -strict -c -o $OutputFile -f $DiffScript

    #check the return value and throw an exception if needed
    #tablediff return values: 0 - Success, 1 - Critical error, 2 - Table differences
    if ($LastExitCode -eq 1)
    {
        throw "Error on table $($table.Name) with exit code $LastExitCode"
    }
}

Before you run the script, you need to change the parameter values and the path to the tablediff location (if your installation isn't SQL Server 2012 in the default installation folder).
I hope you’ll find this script useful.

T-SQL Tuesday #39 – Set file permissions with PowerShell

12/02/2013 1 comment

[Image: T-SQL Tuesday logo]

Wayne Sheffield (blog|twitter) is hosting this month's T-SQL Tuesday and the subject is "Can you shell what the PoSH is Cooking". More specifically, he wants us to blog about anything PowerShell and SQL Server related.

As a DBA I use PowerShell for automation tasks such as reading error logs, scripting SQL Server objects, comparing data in tables, etc. Today I will write about a script I wrote for setting read permissions on SQL Server trace files.

From SQL Server 2005 onwards, when you run a server-side trace in SQL Server, a new trace file is created and permissions on it are granted only to the SQL Server service account; the directory's permissions are not inherited.

Because of that SQL Server security policy, I found myself in situations where I had hundreds of trace files on a server share but could not access them with my user account. So I decided to write a PowerShell script to give my user read access to all trace files in a specified folder.

$path = "D:\Folder\"
$user = "domain\username"
$SystemRights = "Read"
$AccessType = "Allow"
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($user, $SystemRights, $AccessType)
$files = Get-ChildItem $path
foreach ($file in $files)
{
    #read the current ACL, add the read rule, and write the ACL back
    $acl = (Get-Item $file.FullName).GetAccessControl("Access")
    $acl.AddAccessRule($AccessRule)
    Set-Acl $file.FullName $acl
}
Categories: T-SQL Tuesday

Tracking Transaction Log Records With SQL Server 2012 Extended Events

29/01/2013 Leave a comment

Last week I tested the sqlserver.transaction_log event in SQL Server 2012 SP1 CU2. I wanted to compare the results from fn_dblog() and the transaction_log event. Jonathan Kehayias (blog|twitter) wrote here about this event while SQL Server 2012 was in CTP1.
At that time, there were 70 Log Operations and 29 Contexts for those operations available in SQL Server Denali CTP1. (Information taken from Jonathan’s post)

In SQL Server 2012 SP1 CU2 there are 73 Log Operations and 32 Contexts for those operations.
The description of the transaction_log event is: "Occurs when a record is added to the SQL Server transaction log. This is a very high volume event that will affect the performance of the server. Therefore, you should use appropriate filtering to reduce the number of events, and only use this event for targeted troubleshooting during a short time period."

According to the description, the transaction_log event should capture all log records that are written to the transaction log file, so I expected the same number of log operations from the transaction_log event and the fn_dblog() function. Let's confirm that.

For testing, I will create a database named TruncateDb and set it to the simple recovery model so the log clears out on CHECKPOINT. Then I will create a table and insert 1 record into it. I will create an event session named TransactionLog to capture the sqlserver.transaction_log event for the database TruncateDb and store the events in the file C:\XELogs\TransactionLog.xel.

CREATE DATABASE TruncateDb;
GO
USE TruncateDb;
GO
ALTER DATABASE TruncateDb SET RECOVERY SIMPLE;
GO

-- Create table
CREATE TABLE dbo.t1 (c1 INT IDENTITY(1,1), c2 CHAR (8000));
GO
-- Insert 1 record
INSERT INTO dbo.t1(c2)
SELECT 'a';
GO

-- Create Xevent session
DECLARE @sqlcmd nvarchar(2000) = '
CREATE EVENT SESSION [TransactionLog] ON SERVER
ADD EVENT sqlserver.transaction_log(
    ACTION(package0.event_sequence)
    WHERE ([database_id]=('+ cast(DB_ID() as varchar(3))+')))
ADD TARGET package0.event_file(SET filename=N''C:\XELogs\TransactionLog.xel'')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,
STARTUP_STATE=OFF)'
EXEC (@sqlcmd)
GO

When the event session is created, I will execute CHECKPOINT on the database so that the transaction log clears out and the fn_dblog() function returns only 3 records.

-- clear the log
CHECKPOINT;
GO
-- returns only 3 records
SELECT * FROM fn_dblog (NULL, NULL);
GO

[Screenshot: fn_dblog() output after CHECKPOINT]

Then I'll start the event session, truncate table t1, and stop the event session TransactionLog.

--Start session
ALTER EVENT SESSION TransactionLog
ON SERVER
STATE=START;
GO
USE TruncateDb;
GO
-- Truncate table
TRUNCATE TABLE t1;
GO
--Stop session
ALTER EVENT SESSION TransactionLog
ON SERVER
STATE=STOP
GO

Now I can query the data from the event file and compare it with the records from fn_dblog().

-- records from fn_dblog()
SELECT [Current LSN], Operation, Context, [Xact ID], [Transaction ID], [Log Record Fixed Length],
[Log Record Length], [Log Reserve], AllocUnitId
FROM fn_dblog (NULL, NULL);

-- records from transaction_log event
SELECT
    XEvent.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    XEvent.value('(event/data[@name="operation"]/text)[1]', 'nvarchar(100)') AS log_op_name,
    XEvent.value('(event/data[@name="transaction_id"]/value)[1]', 'int') AS transaction_id,
    XEvent.value('(event/data[@name="log_record_size"]/value)[1]', 'nvarchar(100)') AS log_record_size,
    XEvent.value('(event/data[@name="context"]/text)[1]', 'nvarchar(20)') AS context,
    XEvent.value('(event/data[@name="replication_command"]/value)[1]', 'int') AS replication_command,
    XEvent.value('(event/data[@name="alloc_unit_id"]/value)[1]', 'nvarchar(100)') AS alloc_unit_id,
    XEvent.value('(event/data[@name="transaction_start_time"]/value)[1]', 'datetime2') AS transaction_start_time
FROM (
    SELECT CAST(event_data AS XML) AS XEvent
    FROM sys.fn_xe_file_target_read_file('C:\XELogs\TransactionLog*.xel', null, null, null)) AS src
ORDER BY XEvent.value('(event/action[@name="event_sequence"]/value)[1]', 'int')

[Screenshot: records from fn_dblog()]

[Screenshot: records from the transaction_log event file]

The first 3 records from fn_dblog() are generated by the CHECKPOINT and aren't included in the event session file because I started the event session after the checkpoint. The first record from the event file is LOP_BEGIN_XACT with transaction_id = 74018. It corresponds to the [Xact ID] of the LOP_BEGIN_XACT log record returned in row 4 of the fn_dblog() output.

The event session file returns only 12 records, while fn_dblog() returns 16 records after the checkpoint: the LOP_COUNT_DELTA operations are missing from the event session file.

If we select only the LOP_COUNT_DELTA operations from fn_dblog(), we can see that they are used for updating the system catalogs and that their Transaction ID is different (0000:00000000).

SELECT [Current LSN], Operation, Context, [Transaction ID], 
AllocUnitId, AllocUnitName
FROM fn_dblog (NULL, NULL)
WHERE Operation ='LOP_COUNT_DELTA'

[Screenshot: LOP_COUNT_DELTA records from fn_dblog()]

Although the description says that the transaction_log event captures all log records that will be written to the transaction log file, that isn't entirely true. I also tested the transaction_log event with INSERT statements and noticed that it didn't capture:

non-transactional updates

It seems that the transaction_log event doesn't capture log records with Transaction ID = 0000:00000000. These log records are not part of transactional updates (they are called non-transactional; thanks to Paul Randal for naming them here).

T-SQL Tuesday #38 – Standing Firm

08/01/2013 Leave a comment

[Image: T-SQL Tuesday logo]

Today's T-SQL Tuesday is hosted by Jason Brimhall (blog|twitter). The topic this month is "Standing Firm", which means I should write something related to one of these words: resolve, resolution, or resolute.

It's the time of year when many people make New Year's resolutions, and this is the first time I will make three related to SQL Server: to write more blog posts about SQL Server, to do more local community speaking about SQL Server, and to learn more about AlwaysOn, performance tuning, and PowerShell.

A year and a half ago I started writing a blog about SQL Server, and so far I have written 28 posts, but only 5 of them last year. I started a few blog posts last year that I never finished: the topics I chose were very complex, and I never had the time to complete them. Maybe it's time to try a different tactic, writing about simpler topics, or writing more posts about a complex topic (like a blog post series).

I spoke once at the local MS Community about SQL Server Policy-Based Management and had very positive feedback. It was an excellent experience, and I want to participate more actively in our local user group. I will try to speak at least twice this year.

I always try to learn something new, and this time I want to go further in the AlwaysOn and performance tuning areas of SQL Server. I have used PowerShell several times and can say it is a powerful tool in the hands of a DBA. I may even write a blog post series on these topics.

With this post I achieved several firsts: my first blog post this year, my first time participating in T-SQL Tuesday, and my first SQL Server resolutions. :)

Categories: T-SQL Tuesday

How to remove “Server name” entry from “Connect to Server” dialog in SSMS

20/11/2012 Leave a comment

If you connect to a lot of different SQL Servers with the SQL Server tools (SSMS, SQL Profiler, etc.), the list of servers in the "Connect to Server" dialog becomes large. Before SQL Server 2012 there was no way to delete an individual server entry from the list, nor a supported way to delete the whole list.

Some time ago I found, somewhere on the web, a way to delete the list of servers in the "Connect to Server" dialog.
For SQL Server 2005, the method is to close SSMS and then delete the mru.dat file. On Windows Server 2003 the file is located at %UserProfile%\Application Data\Microsoft\Microsoft SQL Server\90\Tools\Shell.
For SQL Server 2008/2008 R2, the method is to close SSMS and then delete the SqlStudio.bin file. On Windows Server 2003 the file is located at %UserProfile%\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell. On Windows 7 and Windows Server 2008 the file is located at %UserProfile%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell.
A few days ago, I used the same method for SSMS 2012 by deleting the SqlStudio.bin file from %UserProfile%\AppData\Roaming\Microsoft\SQL Server Management Studio\11.0.
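The SSMS 2012 variant of the delete-the-file method can be sketched in PowerShell (the path below matches the SSMS 11.0 location mentioned above; SSMS must be closed first, because it rewrites the file from memory on exit):

```
# Assumed SSMS 2012 location of the MRU file; close SSMS before running.
# Deleting this file is unsupported and also discards other user settings.
$bin = Join-Path $env:APPDATA "Microsoft\SQL Server Management Studio\11.0\SqlStudio.bin"
if (Test-Path $bin) { Remove-Item $bin }
```

This is just the manual step above scripted; the caveats in the next paragraph still apply.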

The problem with these methods is that they are unsupported by Microsoft and we lose user-specific settings in SSMS.

Today I found that in SQL Server 2012 there is a supported way to delete a server name from the list in the "Connect to Server" dialog: simply select the server name in the drop-down list and press the Delete key.
It is documented in this blog post.

Reads in Read Committed Isolation Level

23/07/2012 Leave a comment

I’ll start this blog post with a question: “What kinds of locks in SQL Server are acquired during simple select statement in read committed isolation level (READ_COMMITTED_SNAPSHOT database option is set to OFF)?”

A few days ago, I would have answered: "Intent-Shared locks at the table and page levels and shared locks on a row-by-row basis."

But I found out that this is not correct. In some situations, shared locks are not issued at all. Let's demonstrate that. First, let's create a database and a table with 3 rows.

CREATE DATABASE TestLocks
GO
ALTER DATABASE TestLocks SET READ_COMMITTED_SNAPSHOT OFF
GO
USE TestLocks
GO
-- Create table
CREATE TABLE dbo.DemoTable ( id int, col1 varchar(4000))
GO
-- Insert 3 rows = 2 pages
INSERT INTO dbo.DemoTable
SELECT 1, REPLICATE('a',4000) UNION ALL
SELECT 2, REPLICATE('a',4000) UNION ALL
SELECT 3, REPLICATE('a',4000) 

If we run DBCC IND, we can see that the table contains 2 data pages (in my case, pages 78 and 80).

DBCC IND('TestLocks','DemoTable',1)

Let’s run simple select statement.

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO
SELECT * FROM dbo.DemoTable
GO

I used SQL Profiler to check the acquired locks.

We can see that only Intent-Shared (IS) locks at the table and page levels are issued. There are no row-level shared (S) locks.
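As an alternative to Profiler, the same observation can be sketched with the lock_acquired Extended Event. This is only a sketch: the session name is hypothetical, and the database_id literal is an assumption (event session predicates take constants, so replace 5 with the result of SELECT DB_ID('TestLocks') on your server).

```
-- Sketch: capture lock_acquired events for TestLocks only.
-- Replace 5 with the actual DB_ID('TestLocks') value.
CREATE EVENT SESSION [LockTrace] ON SERVER
ADD EVENT sqlserver.lock_acquired(
    WHERE ([database_id] = 5))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [LockTrace] ON SERVER STATE = START;
GO
SELECT * FROM dbo.DemoTable;
GO
ALTER EVENT SESSION [LockTrace] ON SERVER STATE = STOP;
-- Inspect the ring_buffer target: only IS locks on the object and pages
-- should appear, with no row-level S locks.
```

Querying the ring buffer target (via sys.dm_xe_session_targets) shows the same picture as the Profiler trace above.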

I was very surprised when I noticed that. I thought that shared locks were always acquired and that the optimizer only decided on the granularity of the locks (row- or page-level shared locks).

I searched about this lock behavior in SQL Server and found Paul White’s post “The Case of the Missing Shared Locks” where he wrote about locking optimization.

It seems that SQL Server avoids acquiring row-level shared (S) locks when it knows that no data has changed on a page. I didn't find any more information about this shared lock optimization; if you know more, please leave a comment.
