Cleveland DBA

SQL Server administration in Cleveland, OH

A Fairy Tale – Epilogue

When the cheers subsided and the parade was over (oh yes, there was a parade, complete with floats and bagpipes), the DBA found herself dissatisfied.  There was a minor detail nagging at her:  what about those linked servers that used an alias?  With her original query, the remote server would be reported as the alias name, and that may or may not be helpful.  Surely she could do better.

So our heroine searched the Windows registry until she found where all of the alias information was stored, and she wrote a function that used the xp_regread extended stored procedure to retrieve the target server for each alias.

IF OBJECT_ID('dbo.fn_getaliastarget', 'FN') IS NOT NULL
    DROP FUNCTION dbo.fn_getaliastarget;
GO
CREATE FUNCTION dbo.fn_getaliastarget (@alias varchar(40))
RETURNS nvarchar(250)
AS
BEGIN
    DECLARE @regbase     nvarchar(30)
          , @regkey      nvarchar(100)
          , @dirbasepath nvarchar(250);

    SET @regbase = 'HKEY_LOCAL_MACHINE';
    SET @regkey  = 'Software\Microsoft\MSSQLServer\Client\ConnectTo\';

    EXECUTE master..xp_regread @regbase
        , @regkey
        , @alias
        , @dirbasepath OUTPUT;

    -- The registry value looks like 'DBMSSOCN,targetserver', so return everything after the first comma
    RETURN SUBSTRING(@dirbasepath, CHARINDEX(',', @dirbasepath) + 1, LEN(@dirbasepath));
END
GO

Then she revised her query to use the new function.

SELECT @@servername AS [SQLInstance]
     , s.name AS [LinkedServer]
     , ISNULL(dbo.fn_getaliastarget(s.data_source), s.data_source) AS [RemoteServer]
     , CASE l.uses_self_credential
           WHEN 1 THEN 'UNMAPPED LOGINS'
           ELSE ISNULL(p.name, 'UNMAPPED LOGINS')
       END AS [LocalLogin]
     , CASE l.uses_self_credential
           WHEN 1 THEN 'USE SELF'
           ELSE l.remote_name
       END AS [RemoteLogin]
FROM sys.linked_logins l
JOIN sys.servers s ON s.server_id = l.server_id AND s.is_data_access_enabled = 1
LEFT OUTER JOIN sys.server_principals p ON p.principal_id = l.local_principal_id
ORDER BY [LinkedServer], [LocalLogin]

And finally, the DBA wrote a version of the query for SQL Server 2000 instances, too, just for good measure.

SELECT @@servername AS [SQLInstance]
     , s.srvname AS [LinkedServer]
     , ISNULL(dbo.fn_getaliastarget(s.datasource), s.datasource) AS [RemoteServer]
     , ISNULL(p.name, 'UNMAPPED LOGINS') AS [LocalLogin]
     , ISNULL(l.rmtloginame, 'USE SELF') AS [RemoteLogin]
FROM master..sysoledbusers l
JOIN master..sysservers s ON s.srvid = l.rmtsrvid AND dataaccess = 1 AND isremote = 1
LEFT OUTER JOIN master..syslogins p ON p.sid = l.loginsid
ORDER BY [LinkedServer], [LocalLogin]

And she was happy. And awesome.


March 31, 2011 | General, T-SQL

A Fairy Tale

Once upon a time there was a developer who was concerned about the security of his application’s data, so he decided to change the password for his application’s SQL login.  He logged into the database, used sp_password to change his password, and he was happy.

On the other end of the hall lived another developer.  This developer was not happy.  His application had suddenly stopped working and he was getting angry phone calls from users.  “But I didn’t change anything!” he cried.  And that was true.  But his application was using a linked server that mapped to the login whose password was changed by the first developer (who lived in the house that Jack built).

The second developer was very angry with the first developer.

“You can’t just go around changing passwords willy-nilly!”

The first developer got defensive.

“It’s my application and my login, how was I supposed to know you were mapped to it?”

Suddenly they stopped, and the hall got very quiet.  They both turned and looked at the DBA.

“Surely there must be some way of knowing what linked servers are mapped to what logins on what servers,” they implored.

The DBA replied, equivocally, “Let me see what I can do.”

Now this DBA was a very smart DBA (and beautiful; like a princess, really).  She already had a program she had written that inventoried each SQL Server instance to track things like jobs, database growth, backup locations, etc.  She had even written SSRS reports against this data that listed applications on each instance, job statuses, license counts and more.  She knew it wouldn’t be difficult to gather this bit of data, too.  Once she knew where it was stored, that is.

So she started looking in the system views in the master database.  That was where sys.servers and sys.database_principals were located, after all.  It would make sense to store the linked server login mapping there, too, right?  Only, it wasn’t there!  Hmm.  Not to be discouraged, our (smart, beautiful, princess-like) heroine looked in the msdb database.  And voila!  There, in the system views:  sys.linked_logins.

And, after a bit of muttering about why the hell did they put it in msdb, she wrote the following code and included it in her program.

SELECT @@servername AS [SQL Instance]
     , s.name AS [Linked Server]
     , s.data_source AS [Remote Server]
     , CASE l.uses_self_credential
           WHEN 1 THEN 'PASS-THRU'
           ELSE ISNULL(p.name, 'UNMAPPED LOGINS')
       END AS [Local Login]
     , CASE l.uses_self_credential
           WHEN 1 THEN 'PASS-THRU'
           ELSE l.remote_name
       END AS [Remote Login]
FROM sys.linked_logins l
JOIN sys.servers s ON s.server_id = l.server_id AND s.is_data_access_enabled = 1
LEFT OUTER JOIN sys.server_principals p ON p.principal_id = l.local_principal_id
ORDER BY [Linked Server], [Local Login]

Now developers could know what linked servers might be impacted by a password change.  Peace and harmony reigned throughout the hall.  Hurrah!

And they lived happily ever after.

The End.

March 28, 2011 | General, T-SQL

Using Service Broker to replicate logins

Talking about mirroring and its shortcomings got me thinking:  there has to be a reliable way to replicate logins from the primary server to the mirror.  Sure, you could do it manually; that would work just fine for applications with a limited number of logins and little turnover.  But then again, if you’re not creating logins on a server very often, you’re probably even more likely to forget to create a corresponding login on the mirror server.  And during a failover you’re going to have a lot on your plate, so if you already know your logins are synced up, that’s one less thing to worry about.

So I decided to come up with an automated system for keeping logins in sync between my primary server and my mirror.  Fortunately, in our environment, the primary and mirror instances are dedicated to this single application, so I don’t have to worry about replicating unnecessary logins or having conflicting logins.  I’m also not going to worry about replicating logins from the mirror back to the primary (in the event of a failover).  What I did want to ensure was that the SIDs were the same on each server, to avoid having to sync up the users and logins after a failover.  I also definitely wanted the passwords to match.

I considered a few different options.  I discarded the idea of DDL triggers; I didn’t want a trigger failure preventing a login from being created.  I have a stored procedure that extracts CREATE LOGIN statements for every login, complete with SID, hashed password, and server-level role assignments.  I could use that in a job to export the script to a file.  Definitely a possibility.  Then I happened across an article about Service Broker.  I’d had a little bit of exposure to Service Broker in the past, dealing with DDL auditing, but it was very basic and local to a single server.  I’d never tried to configure a distributed Service Broker app, but I was pretty sure this was the direction I wanted to go in for this little project.  Thus began my crash course in SQL Server Service Broker.
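For context, the core of that extraction procedure is a query along these lines.  This is just a simplified sketch, not the actual procedure, and it omits the server-level role assignments:

-- Simplified sketch: generate CREATE LOGIN statements that preserve each
-- SQL login's original SID and password hash (role assignments omitted)
SELECT 'CREATE LOGIN ' + QUOTENAME(name)
     + ' WITH PASSWORD = ' + master.sys.fn_varbintohexstr(password_hash) + ' HASHED'
     + ', SID = ' + master.sys.fn_varbintohexstr(sid)
     + ', CHECK_POLICY = OFF;'
FROM sys.sql_logins
WHERE name NOT LIKE '##%';  -- skip the internal certificate-mapped logins

Running the generated statements on the mirror creates logins with matching SIDs and passwords, which is the whole point of the exercise.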

My first step was to employ the Great Gazoogle to find examples.  Fortunately I came across this fantastic article on SQL Server Central, and I was able to use much of the sample code to get a basic distributed Service Broker application working.  Perfect!  This was going to be a piece of cake.  Then my problems started.  First I tried using this service, as-is, with my event notification.  No dice.  The queue was not compatible with the PostEventNotification schema.  The sample code creates its own Message Types for the contract.  So I tried using the PostEventNotification in lieu of the sample contract, like this:

CREATE SERVICE SyncSenderService
ON QUEUE [SyncSenderQueue]
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])
GO

Fail.  Then I tried using the sample contract, but changing the message types to PostEventNotification.  That produced an error just trying to create the contract.  SQL Server would allow me to create the contract with EventNotification message types, but that resulted in an error when the Event Notification tried to write to the service.

Closed event notification conversation endpoint with handle '{B2ACA311-9E54-E011-9EDF-02215ECA965F}', due to the following error: '<?xml version="1.0"?><Error xmlns="http://schemas.microsoft.com/SQL/ServiceBroker/Error"><Code>-8408</Code><Description>Target service &apos;SyncSenderService&apos; does not support contract &apos;http://schemas.microsoft.com/SQL/Notifications/PostEventNotification&apos;.</Description></Error>'.

I tried a couple more configurations to no avail.  Gah!  Perhaps I just don’t know enough about XML and Service Broker, but I was beginning to think this just wasn’t possible.  Then I thought, ok, I’ve got a working distributed Service Broker app sending generic messages.  And I’ve got a working Event Notification tracking DDL locally.  Why not just keep both, have the Event Notification stored procedure record the logins that are created, and then use a scheduled job to send messages via the distributed broker?

So I started with my Event Notification procedure.  It reads from EventQueue and records the information, including the SQL statement in a table I’ve created called syncSQLLogins.  It also logs the XML data to a Sync_Log table, just for backup/safety purposes.

USE Mirror_sync
GO
CREATE TABLE Sync_Log
(   messagename   NVARCHAR(256),
    xmldata       XML
);
GO

CREATE TABLE SyncSQLLogins
(   loginname     nvarchar(128),
    logintype     char(1),
    loginsid      varbinary(85),
    passwordhash  varbinary(256),
    sqlcommand    varchar(max),
    propagated    char(1) DEFAULT 'N'
);
GO

USE Mirror_sync;
GO
ALTER PROCEDURE [dbo].[SyncQueueReceive_usp]
AS
BEGIN
    SET NOCOUNT ON;
    SET ARITHABORT ON;

    DECLARE @message     XML,
            @messageName NVARCHAR(256),
            @dialogue    UNIQUEIDENTIFIER;

    BEGIN TRY

        -- Continuous loop
        WHILE (1 = 1)
        BEGIN

            BEGIN TRANSACTION;

            -- Retrieve the next message from the queue
            SET @dialogue = NULL;

            WAITFOR (
                GET CONVERSATION GROUP @dialogue FROM dbo.EventQueue
            ), TIMEOUT 2000;

            IF @dialogue IS NULL
            BEGIN
                ROLLBACK;
                BREAK;
            END

            ;RECEIVE TOP(1)
                @messageName = message_type_name,
                @message     = message_body,
                @dialogue    = conversation_handle
            FROM dbo.EventQueue
            WHERE conversation_group_id = @dialogue;

            IF @message.value('(/EVENT_INSTANCE/LoginType)[1]', 'VARCHAR(100)') LIKE '%SQL Login%'
            BEGIN
                INSERT INTO SyncSQLLogins (loginname, logintype, loginsid, passwordhash, sqlcommand)
                SELECT name, type, sid, password_hash,
                       @message.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'VARCHAR(max)')
                FROM sys.sql_logins
                WHERE sid = @message.value('(/EVENT_INSTANCE/SID)[1]', 'VARBINARY(85)');

                INSERT INTO Sync_Log (messagename, xmldata)
                VALUES (@messageName, @message);
            END

            IF @message.value('(/EVENT_INSTANCE/LoginType)[1]', 'VARCHAR(100)') NOT LIKE '%SQL Login%' -- i.e. a Windows (NT) login
            BEGIN
                INSERT INTO SyncSQLLogins (loginname, logintype, loginsid, sqlcommand)
                VALUES (
                    @message.value('(/EVENT_INSTANCE/ObjectName)[1]', 'VARCHAR(100)'),
                    'U',
                    @message.value('(/EVENT_INSTANCE/SID)[1]', 'VARBINARY(85)'),
                    @message.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'VARCHAR(max)'));

                INSERT INTO Sync_Log (messagename, xmldata)
                VALUES (@messageName, @message);
            END

            COMMIT;

        END

    END TRY
    BEGIN CATCH
        DECLARE @errorNumber    INT,
                @errorMessage   NVARCHAR(MAX),
                @errorState     INT,
                @errorSeverity  INT,
                @errorLine      INT,
                @errorProcedure NVARCHAR(128);
        SET @errorNumber    = ERROR_NUMBER();
        SET @errorMessage   = ERROR_MESSAGE();
        SET @errorState     = ERROR_STATE();
        SET @errorSeverity  = ERROR_SEVERITY();
        SET @errorLine      = ERROR_LINE();
        SET @errorProcedure = ERROR_PROCEDURE();
        IF XACT_STATE() <> 0
            ROLLBACK;
        RAISERROR('%s:%d %s (%d)', @errorSeverity, @errorState, @errorProcedure, @errorLine, @errorMessage, @errorNumber) WITH LOG;
    END CATCH
END
GO

So now I have the login name, SID and password hash from sys.sql_logins, login type, and the actual sql command issued to create it, all recorded in a table.  There’s also a propagated column with a default value of ‘N’ to tell my next procedure what logins still need to be replicated to the mirror.  Now I need a script to take the logins from this table and send them to the mirror via the distributed Service Broker service.  This can be left as a script or put into a stored procedure, either way can be run ad-hoc or via a scheduled job.

USE Mirror_sync;
GO
DECLARE @ConversationHandle uniqueidentifier,
        @newmessage   XML,
        @sqlcommand   varchar(max),
        @passwordhash varbinary(256),
        @loginsid     varbinary(85),
        @logintype    char(1);

DECLARE lc1 CURSOR FOR
    SELECT loginsid, logintype, passwordhash, sqlcommand
    FROM syncSQLlogins
    WHERE propagated = 'N'
    FOR UPDATE OF propagated;

OPEN lc1;
FETCH lc1 INTO @loginsid, @logintype, @passwordhash, @sqlcommand;
WHILE @@FETCH_STATUS = 0
BEGIN

    IF @logintype = 'S'    -- SQL login: swap in the hashed password and original SID
        SET @newmessage = REPLACE(@sqlcommand, 'N''******''', master.sys.fn_varbintohexstr(@passwordhash) + ' HASHED')
                        + ', SID=' + master.sys.fn_varbintohexstr(@loginsid) + ';';

    IF @logintype = 'U'    -- Windows login: send the command as-is
        SET @newmessage = @sqlcommand + ';';

    BEGIN TRANSACTION;
    BEGIN DIALOG @ConversationHandle
        FROM SERVICE SyncSenderService
        TO SERVICE 'SyncTargetService'
        ON CONTRACT SampleContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @ConversationHandle
        MESSAGE TYPE SenderMessageType
        (@newmessage);
    UPDATE syncSQLlogins SET propagated = 'Y' WHERE CURRENT OF lc1;
    COMMIT;

    FETCH lc1 INTO @loginsid, @logintype, @passwordhash, @sqlcommand;

END

CLOSE lc1;
DEALLOCATE lc1;

In a nutshell, what I’m doing here is taking the original SQL command and replacing the obfuscated password with the hashed password I had pulled earlier from sys.sql_logins and adding the SID.  In the case of a Windows login, the sql command is left as-is.  The final command is then sent to the SyncTargetService on the mirror server.  Finally, the row in syncSQLLogins is updated to set propagated = ‘Y’.

The last piece was to create a stored procedure on the mirror/target server to handle the incoming queue there.  This is fairly straightforward:  it records the statement to a table for posterity, then executes it to create the login.  The only trouble I had with this one was that message_body is a varbinary field, and converting that directly to an nvarchar left me with some unwanted garbage at the beginning of the text.  I fixed that by first casting it as XML, then as an nvarchar.  This gave me nice, clean, syntactically acceptable statements.

USE Mirror_sync
GO

CREATE TABLE SyncSQLLogins
(   sqlcommand  varchar(max),
    createdate  datetime DEFAULT GETDATE()
);
GO

CREATE PROCEDURE usp_ProcessTargetQueue
AS
DECLARE @ConversationHandle uniqueidentifier;
DECLARE @MessageBody nvarchar(max);
DECLARE @MessageType sysname;

BEGIN TRANSACTION;
PRINT 'Started Receiving';

RECEIVE TOP (1)
    @MessageType        = message_type_name,
    @ConversationHandle = conversation_handle,
    -- Cast to XML first, then nvarchar, to avoid the garbage a direct varbinary conversion leaves behind
    @MessageBody        = CAST(CAST(message_body AS xml) AS nvarchar(max))
FROM syncTargetQueue;

IF @MessageType = 'SenderMessageType'
BEGIN
    INSERT INTO Mirror_sync.dbo.SyncSQLLogins (sqlcommand) VALUES (@MessageBody);
    EXEC sp_executesql @MessageBody;
    --PRINT @MessageBody;

    SEND ON CONVERSATION @ConversationHandle
        MESSAGE TYPE ReceiverMessageType
        ('Message is received');
    END CONVERSATION @ConversationHandle;
END

COMMIT;
GO

I then altered the queue to associate it with the new procedure.

ALTER QUEUE SyncTargetQueue WITH ACTIVATION (
STATUS = ON,
PROCEDURE_NAME = dbo.usp_ProcessTargetQueue ,
MAX_QUEUE_READERS = 2, EXECUTE AS SELF);

And that’s pretty much it.  I’ll have to play around with it to see if it will also process DROP LOGIN and ALTER LOGIN statements, and I could probably have the initial event stored procedure handle sending the messages, thus eliminating the need for a scheduled job, but for now I’m happy with how this turned out.  And fighting through the process has really helped me get more familiar with Service Broker.  Now I’m wondering where else we could be using it…

March 24, 2011 | Features, General

Mirror, Mirror…

So, while I was discussing database snapshots in SQL Server, I kind of glossed over the subject of database mirroring, assuming you already had it set up in your environment.  But maybe you don’t.  So today I’m going to go through the steps involved in setting up database mirroring in SQL Server 2008.  To keep things interesting, let’s say you want to mirror between servers in… untrusted domains (dun! dun! dun!).  And we’ll only use T-SQL (gasp!!!)  Ready?

I’ll set up the demo by creating your primary database on QAServer.

/*    Create the database and set the recovery model to full */
CREATE DATABASE [MirrorTestDB] ON  PRIMARY
( NAME = N'MirrorTestDB', FILENAME = N'F:\QAServer_data\MirrorTestDB.mdf' , SIZE = 3072KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'MirrorTestDB_log', FILENAME = N'F:\QAServer_log\MirrorTestDB_log.ldf' , SIZE = 10240KB , FILEGROWTH = 10%)
GO

USE MirrorTestDB;
GO

CREATE TABLE MyBooks (
Title        varchar(50),
Author        varchar(50),
Published    char(4)
);
GO

INSERT INTO MyBooks VALUES ('Gone With The Wind', 'Mitchell, Margaret', '1936');
INSERT INTO MyBooks VALUES ('Go Dog Go', 'Eastman, P. D.', '1966');
INSERT INTO MyBooks VALUES ('Through The Looking Glass', 'Carroll, Lewis', '1871');
GO

USE master;
GO
ALTER DATABASE MirrorTestDB
SET RECOVERY FULL;
GO

The first step in mirroring is to take a backup of the primary database and copy it over to DEVServer.

BACKUP DATABASE MirrorTestDB
TO DISK = 'F:\QAServer_Backup\MirrorTestDB.bak'
WITH FORMAT
GO

Once the backup file is copied to DEVServer, we need to restore it.

RESTORE DATABASE [MirrorTestDB]
FROM  DISK = N'D:\Program Files\Microsoft SQL Server\MSSQL10.DEVServer\MSSQL\Backup\MirrorTestDB.bak'
WITH    FILE = 1,
MOVE N'MirrorTestDB' TO N'D:\Program Files\Microsoft SQL Server\MSSQL10.DEVServer\MSSQL\DATA\MirrorTestDB.mdf',
MOVE N'MirrorTestDB_log' TO N'D:\Program Files\Microsoft SQL Server\MSSQL10.DEVServer\MSSQL\DATA\MirrorTestDB_1.ldf',
NORECOVERY,  NOUNLOAD,  STATS = 10
GO

Here’s where this demo deviates from your run-of-the-mill mirroring configuration.  Because the servers are in untrusted domains, you can’t use Windows authentication to communicate between them, so we need to create certificates that the servers will use to authenticate.  We start by creating a certificate and mirroring endpoint on QAServer, and copying the certificate to DevServer.

/*    Execute on Primary */

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'SuperStrongPassword!';
GO

USE master;
CREATE CERTIFICATE QAServer_cert
WITH SUBJECT = 'QA Server certificate for database mirroring';
GO

CREATE ENDPOINT MirroringEndpoint
STATE = STARTED
AS TCP (
LISTENER_PORT=7024
, LISTENER_IP = ALL
)
FOR DATABASE_MIRRORING (
AUTHENTICATION = CERTIFICATE QAServer_cert
, ENCRYPTION = REQUIRED ALGORITHM AES
, ROLE = ALL
);
GO

BACKUP CERTIFICATE QAServer_cert TO FILE = 'F:\QAServer_backup\QAServer_cert.cer';
GO

/*    Copy the .cer file to the Mirror server */

And do the same thing on DevServer.

/*    Execute on Mirror */

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'SuperStrongPassword!';
GO

USE master;
CREATE CERTIFICATE DevServer_cert
WITH SUBJECT = 'Dev Server certificate for database mirroring';
GO

CREATE ENDPOINT MirroringEndpoint
STATE = STARTED
AS TCP (
LISTENER_PORT=7024
, LISTENER_IP = ALL
)
FOR DATABASE_MIRRORING (
AUTHENTICATION = CERTIFICATE DevServer_cert
, ENCRYPTION = REQUIRED ALGORITHM AES
, ROLE = ALL
);
GO

BACKUP CERTIFICATE DevServer_cert TO FILE = 'D:\Program Files\Microsoft SQL Server\MSSQL10.DevServer\MSSQL\Backup\DevServer_cert.cer';
GO

/*    Copy the .cer file to the Primary server */

So now we have our certificates and we have our mirroring endpoints using the certs to authenticate.  But DevServer still doesn’t recognize QAServer’s certificate, and vice versa.  So now we need to create logins and  inbound certificates on each server using the .cer files we copied from the other server.  Starting on QAServer:

/*    Execute on Primary    */

USE master;
CREATE LOGIN DevServer_login
WITH PASSWORD = 'SuperStrongPassword!';
GO

USE master;
CREATE USER DevServer_user FOR LOGIN DevServer_login;
GO

USE master;
CREATE CERTIFICATE DevServer_cert
AUTHORIZATION DevServer_user
FROM FILE = 'F:\QAServer_data\DevServer_cert.cer'
GO

USE master;
GRANT CONNECT ON ENDPOINT::MirroringEndpoint TO [DevServer_login];
GO

Then on DevServer:

/*    Execute on Mirror    */

USE master;
CREATE LOGIN QAServer_login
WITH PASSWORD = 'SuperStrongPassword!';
GO

USE master;
CREATE USER QAServer_user FOR LOGIN QAServer_login;
GO

USE master;
CREATE CERTIFICATE QAServer_cert
AUTHORIZATION QAServer_user
FROM FILE = 'D:\Program Files\Microsoft SQL Server\MSSQL10.DEVServer\MSSQL\DATA\QAServer_cert.cer'
GO

USE master;
GRANT CONNECT ON ENDPOINT::MirroringEndpoint TO [QAServer_login];
GO

From this point the remaining setup is the same as any other mirroring configuration.  We tell each database to partner up with the other.  This time, we start on the mirror server, DevServer.

/*    Execute on Mirror    */

ALTER DATABASE MirrorTestDB
SET PARTNER = 'TCP://QAServer.firm.jonesday.qa:7024';
GO

And finally on the primary server, QAServer:

/*    Execute on the primary    */

ALTER DATABASE MirrorTestDB
SET PARTNER = 'TCP://DEVServer.firm.jonesday.dv:7024';
GO

/*    change to high-performance mode    */

ALTER DATABASE MirrorTestDB
SET PARTNER SAFETY OFF;
GO

Tada!  Mirroring between servers in untrusted domains.

To be honest, I love the idea of mirroring, but I hate setting it up.  It seems like such a pain, even without the added complexity of using certificates.  Backup here, copy there, restore there, configure the endpoints, set the partners, oops you need to back up the tail of the transaction log so go back to the primary and do that, copy it to the mirror, restore, etc. etc.  And then you have to worry about keeping logins in sync, and jobs, linked servers, etc.  Sure, you can automate a lot of it, but I really hope Microsoft comes up with a more comprehensive solution in future versions.
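Speaking of that log backup step, the shuffle usually looks like this.  The paths here are hypothetical, following the demo’s conventions:

/*  On the principal: back up the log after the full backup  */
BACKUP LOG MirrorTestDB
TO DISK = 'F:\QAServer_Backup\MirrorTestDB_log.trn';
GO

/*  Copy the file to the mirror, then restore it WITH NORECOVERY
    so the mirror database stays ready to accept the partnership  */
RESTORE LOG MirrorTestDB
FROM DISK = N'D:\Program Files\Microsoft SQL Server\MSSQL10.DEVServer\MSSQL\Backup\MirrorTestDB_log.trn'
WITH NORECOVERY;
GO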

March 21, 2011 | Features, General

Database Snapshots – Part 3 – Data Recovery

In this final part of our series on SQL Server database snapshots, I’m going to talk about what I think is the most forgotten feature/use of snapshots:  data recovery.  If you (or ‘a friend’) make a mistake and delete data you shouldn’t have, or perhaps accidentally drop a table, and you have a snapshot, you can restore that lost data.  Now, before I go any further, I am in no way, shape, or form suggesting that you use database snapshots as your backup/restore solution. Let me reiterate that:  DO NOT RELY ON SNAPSHOTS FOR YOUR BACKUP/RESTORE SOLUTION!  Got it?  Excellent!  Now then, where was I?

Restoring from snapshots comes in especially handy in testing scenarios.  You’ve got this mammoth database and a developer wants you to take a backup so he can test some data changes and then have you restore the database to its prior state afterwards.  Depending on the size of the database and the speed of your disk subsystem, this entire process could take hours.  Who’s got that kind of time?  A faster solution would be to create a snapshot of the database prior to the testing, and just restore from that afterwards.  Why is this faster?  Because a) you’re not writing out all of the data when you create the snapshot like you would with a backup and b) you’re not writing back all the data when you restore from a snapshot like you would if you restored from a backup. See a pattern here?  When you recover from a snapshot you only write back the data that was changed since the snapshot was taken.

Let’s look at an example.

/*    create the snapshot    */

USE master;
GO
CREATE DATABASE AW_Snap1
    ON
        ( NAME = 'AdventureWorks_Data',
          FILENAME = 'D:\Program Files\Microsoft SQL Server\MSSQL10.D1DB08MS\MSSQL\DATA\AW_Snap1.ss')
AS SNAPSHOT OF Adventureworks;
GO

/*    Query the database and snapshot    */

SELECT * from AW_Snap1.Sales.SalesOrderDetail where SalesOrderID = 57049;
SELECT * from AdventureWorks.Sales.SalesOrderDetail where SalesOrderID = 57049;
GO

Restoring - first query

For the sake of drama, let’s imagine “someone” dropped a table in the database.

/*    oops!  someone dropped the table!    */

DROP TABLE AdventureWorks.Sales.SalesOrderDetail;
GO
SELECT * from AW_Snap1.Sales.SalesOrderDetail where SalesOrderID = 57049;
SELECT * from AdventureWorks.Sales.SalesOrderDetail where SalesOrderID = 57049;
GO

Restoring - after drop

DON’T PANIC!  Just restore from the snapshot.

/*    no problem, restore from the snapshot    */

USE master;
GO
RESTORE DATABASE AdventureWorks FROM DATABASE_SNAPSHOT = 'AW_Snap1';
GO

/*    table is back    */

SELECT * from AW_Snap1.Sales.SalesOrderDetail where SalesOrderID = 57049;
SELECT * from AdventureWorks.Sales.SalesOrderDetail where SalesOrderID = 57049;
GO

Restoring - after restore

Pretty cool, huh?

As with most things in life, there are some restrictions and considerations to keep in mind when restoring from a snapshot.  For one thing, the snapshot you’re restoring from can be the only current snapshot on the database, so drop all the other ones first.  Also, you can’t do a point in time recovery using snapshots (well, unless that point in time is the moment that snapshot was created).  On the same point, all data changes made since the snapshot was created will be lost.  Snapshot restores don’t fix problems like media failure, and they can’t restore over databases with compressed or read-only filegroups.  Restoring from a snapshot breaks the log backup chain, so always take a full backup after restoring from a snapshot.
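Put together, a recovery that respects those caveats might look like this sketch.  Here AW_Snap0 is a hypothetical second snapshot, and the backup path is made up:

USE master;
GO
/*  Only the snapshot being restored from may exist, so drop any others first  */
DROP DATABASE AW_Snap0;
GO
RESTORE DATABASE AdventureWorks FROM DATABASE_SNAPSHOT = 'AW_Snap1';
GO
/*  The restore breaks the log backup chain, so start a fresh one  */
BACKUP DATABASE AdventureWorks
TO DISK = 'F:\Backup\AdventureWorks_post_restore.bak'
WITH INIT;
GO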

I hope this series has helped introduce you to the possibilities for using database snapshots in your own environment.  And if you have any questions, feel free to leave a comment.

March 17, 2011 | Features, General

Database Snapshots – Part 2 – Offload reporting

So now that we know a little bit about database snapshots in SQL Server, what else are they good for?  One thing you can do is use snapshots in conjunction with database mirroring to offload reporting onto another server.

Suppose you’re using mirroring as part of a high-availability solution.  Normally that mirrored database/server would sit idle, waiting until the day something happens to the primary server and it gets to take over.  Kinda seems like a waste of perfectly good cpu cycles, doesn’t it?  While you can’t query that mirror directly, you can create a snapshot of it and query that.  Then maybe you can offload your reporting to the mirror server, while keeping your primary server freed up for OLTP processing.  Let’s take a look at how to make that happen.

I’m going to assume you already have mirroring set up and not go into that here.  Right now, we have a primary Bookstore database on MyServer1 being mirrored to MyServer2.  Let’s assume that twice a day we want to refresh the data in the snapshot for reporting.  The first thing we do is create a snapshot on the mirror, and test it.

USE master;
GO
CREATE DATABASE Bookstore_Snap_AM
   ON
       ( NAME = 'Bookstore_data',
         FILENAME = 'D:\Program Files\Microsoft SQL Server\MSSQL10.MyServer2\MSSQL\DATA\Bookstore_Snap_AM.ss')
AS SNAPSHOT OF Bookstore;
GO
USE Bookstore_Snap_AM;
GO
SELECT * FROM MyBooks;
GO

Mirroring - First query of snap

So far so good.  We could just have our reports run against this snapshot.  But when we create the new snapshot for the second half of the day, we’ll have to change the connections in the reports to read from that new snapshot.  That’s a little kludgy.  So instead what we’ll do is create another database called Bookstore_reporting.  And all that database will contain is synonyms pointing to the snapshot.  This way reports can be run against one database, regardless of what the current snapshot is called.

/*    Create separate database with synonyms    */
USE [master]
GO
CREATE DATABASE [Bookstore_reporting] ON  PRIMARY
( NAME = N'Bookstore_data', FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL10.MyServer2\MSSQL\DATA\Bookstore_data.mdf' )
LOG ON
( NAME = N'Bookstore_log', FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL10.MyServer2\MSSQL\DATA\Bookstore_log.ldf' )
GO
USE Bookstore_reporting;
GO
CREATE SYNONYM [dbo].[MyBooks] FOR [MyServer2].[Bookstore_Snap_AM].[dbo].[MyBooks]
GO
/*    Test the synonym    */
USE Bookstore_reporting;
GO
SELECT * FROM Mybooks;
GO

Mirroring - First query of synonym

So now all of our reports are directed against Bookstore_reporting, which is really reading from our AM snapshot.  During the course of the morning, we have users making transactions in the primary Bookstore database, and in the afternoon we generate a new snapshot.

/*    update a record in the principal */
USE Bookstore; --this is the original database on the Primary server, MyServer1
GO
UPDATE Mybooks SET Published = '1936' WHERE Title = 'Gone With The Wind'
GO
/*    generate a new snapshot on the mirror server    */
USE master;
GO
CREATE DATABASE Bookstore_Snap_PM
   ON
       ( NAME = 'Bookstore_data',
         FILENAME = 'D:\Program Files\Microsoft SQL Server\MSSQL10.MyServer2\MSSQL\DATA\Bookstore_Snap_PM.ss')
AS SNAPSHOT OF Bookstore;
GO
/*    query the old and new snaps and the synonym        */
SELECT * FROM Bookstore_Snap_AM.dbo.MyBooks;
SELECT * FROM Bookstore_Snap_PM.dbo.MyBooks;
SELECT * FROM Bookstore_reporting.dbo.MyBooks;
GO

Mirroring - after update and new snap

Now we see that the AM snapshot has the original data, the PM snapshot contains the updated record, and Bookstore_reporting is still pointing to the AM snapshot.  So let’s redirect Bookstore_reporting to the PM snapshot.

/*    drop and recreate the synonym to point to the new snap and drop the old snap    */
USE Bookstore_reporting;
GO
DROP SYNONYM MyBooks;
GO
USE Bookstore_reporting;
GO
CREATE SYNONYM MyBooks FOR MyServer2.Bookstore_Snap_PM.dbo.MyBooks;
GO
USE master;
GO
DROP DATABASE Bookstore_Snap_AM;
GO
/*    Query the Synonym again    */
USE Bookstore_reporting;
GO
SELECT * FROM MyBooks;
GO

Mirroring - second query of synonym (snap2)

And even though our report connection hasn’t changed, we’re getting the updated data.  You can schedule the new snapshot creation and synonym switch into a job to automate everything, and it’s relatively seamless.  I would recommend creating a procedure to change all the synonyms in a single transaction.  You can find code examples of how to do this online.
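Such a procedure might look something like this minimal sketch.  The procedure name is made up, and you’d repeat the drop/create pair for every synonym in the reporting database:

USE Bookstore_reporting;
GO
CREATE PROCEDURE dbo.usp_SwapSnapshotSynonyms
    @NewSnapshot sysname    -- e.g. 'Bookstore_Snap_PM'
AS
BEGIN
    SET XACT_ABORT ON;      -- any failure rolls back the entire swap
    BEGIN TRANSACTION;

    DROP SYNONYM dbo.MyBooks;
    DECLARE @sql nvarchar(max) =
        N'CREATE SYNONYM dbo.MyBooks FOR '
        + QUOTENAME(@NewSnapshot) + N'.dbo.MyBooks;';
    EXEC sp_executesql @sql;
    /*  ...repeat for each remaining synonym...  */

    COMMIT;
END
GO

That way a report either sees the old snapshot or the new one, never a mix of the two.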

Bear in mind that you may experience some blocking if you try to drop/recreate the synonyms while a report is running, and this may cause your synonym update or the report to time out.  Don’t be tempted to use READ UNCOMMITTED in your queries to get around this; you could end up querying the old snapshot for one table and the new snapshot for another in the same query.  Dropping and recreating a synonym is a fast process, so timeouts should be rare.

Coming up in Part 3 we’ll look at using database snapshots to recover data.

March 14, 2011 | Features, General

Women In Technology

You’re probably aware that March is Women’s History Month, and even the SQL Server community is celebrating.  Here are a few of the events for this month:

SQL University – Women in Technology Week

If you haven’t already checked out SQL University, you really should.  For one thing, it’s free.  For another, it’s community driven.  Bloggers are spending their time to provide you with a quality SQL Server education.  And did I mention it’s free?  This week the focus is on Women in Technology (WIT).

24 Hours of PASS

Another free (!) resource, 24 Hours of PASS is a 24-hour blitz of webcasts on SQL Server topics, from database administration to business intelligence to development.  This time around all of the sessions are being presented by women.

SQL Saturdays

Depending on where you are, there may be a SQL Saturday going on in your area this month.  SQL Saturdays are generally free (there may be a small fee for lunch), and are great ways to learn and get involved in the SQL Server community.  And most of them have a WIT panel/discussion.

Bloggers

Take some time this month to check out some of the fine women (including yours truly) blogging about SQL Server and all things geek.  I’ve listed a few on my new blogroll to get you started.

Now get out there and celebrate that geeky woman that makes your life and work easier.  Maybe bring her some bourbon.  I hear she likes that.

March 10, 2011 Posted by | General, Professional Development | , , , | 1 Comment

Database Snapshots – Part 1 – An Introduction

Let’s say you support the database for a financial system, and periodically, whether it be monthly, quarterly, or yearly, they close out the period and run a bunch of reports before they let users back on the system to start making new transactions.  Depending on your system, the size of your database, and how those reports are written, that could take hours.

Or let’s say you have a datawarehouse environment that pulls from this financial database, and it has to be balanced against the source database.  It can be difficult and time consuming to balance against a moving target.

Or what if you wanted to offload both the reporting and the datawarehouse extracts onto another server, so they wouldn’t impact other users?

Wouldn’t it be awesome if you could quickly create a read-only copy of that database at a specific point in time that your financial users could report against while opening the main database back up for next-period transactions, without hours of downtime?  Wouldn’t it be nice if your datawarehouse team had a static copy of the data to run their extracts and balancing against?  Wouldn’t it be a little slice of heaven if you could even offload all this onto another server??  Where do I sign?

The thing is, with database snapshots you can do all this, and more (yes more!).  Database snapshots were introduced in SQL Server 2005 and have some feature enhancements in SQL Server 2008.

What is a database snapshot?

In a nutshell, a snapshot is a read-only “copy” of your database as of a point in time.  I put “copy” in quotes because it’s not really a full copy of the database.  Here’s how database snapshots work: initially, an empty file called a sparse file is created.  Only when data is changed in the original database does anything get written to the sparse file, and what gets written is the data as it was prior to the change.  So if you update a record in the customers table, SQL Server first copies the page in its original form to the sparse file, then completes the update.  A picture’s worth a thousand words, so here are some from Microsoft.

How NTFS sparse files work

So, how do you know what data to pull from the snapshot and what data to pull from the original database?  You don’t need to know.  To read the snapshot, you treat it just as you would any other database.  It knows what data it has and what’s still in the original database and handles that logic behind the scenes.  Make sense?  Here are a couple more pics from Microsoft to illustrate the read process.  The first is at the initial snapshot creation, before any data changes.  You can see that, even though the user is querying the snapshot, SQL Server is reading entirely from the original database.

Reading from a new snapshot

Now let’s say someone’s gone in and changed a record in the database.  Prior to writing that page, SQL Server copies it to the snapshot.  Now if you query the snapshot, SQL Server knows to grab that page from the snapshot and the rest of the unchanged data from the database.  This is how you’re able to get a view of the data at a specific point in time, even with transactions going on.


How do I create a snapshot?

Unfortunately, you can’t create a snapshot via Management Studio; don’t ask me why.  So you’re stuck with T-SQL.  No biggie, though, it’s just a variation of the CREATE DATABASE statement.

USE master;
GO
CREATE DATABASE AW_Snap1
   ON
       ( NAME = 'AdventureWorks_Data',
         FILENAME = 'D:\MSSQL\Data\AW_Snap1.ss')
AS SNAPSHOT OF Adventureworks;
GO

If you take a peek at the snapshot file at the filesystem level, it appears to be the same size as the AdventureWorks datafile.  But if you look at the file properties or query sys.dm_io_virtual_file_stats, you’ll see the actual size is very small, in this case 131072 bytes.

USE AW_Snap1;
GO
SELECT size_on_disk_bytes FROM sys.dm_io_virtual_file_stats(DB_ID(), 1);
GO

A quick query of the snapshot and original database returns the same data:

USE AW_Snap1;
GO
SELECT * FROM Person.Contact WHERE ContactID = 200;
SELECT * FROM AdventureWorks.Person.Contact WHERE ContactID = 200;
GO


Now let’s update some data and run the query again.

/*    Update the original database    */
USE AdventureWorks;
GO
UPDATE Person.Contact SET MiddleName = 'Eugene' WHERE ContactID = 200;
GO
/*    Query the snap again    */
USE AW_Snap1;
GO
SELECT * FROM Person.Contact WHERE ContactID = 200;
SELECT * FROM AdventureWorks.Person.Contact WHERE ContactID = 200;
GO


As you can see, querying the snapshot returns the original data whereas querying the database returns the updated data.  Also, if we check sys.dm_io_virtual_file_stats again, we can see that the physical size of the snapshot file has gone up to 196608 bytes.

You can’t refresh an existing snapshot.  If you want a more current snap of the database, you’ll need to create a new snapshot.  Be careful with this.  SQL Server will continue to maintain any old snapshots until they’re dropped, so the more snapshots you have lying around out there, the more overhead you’re going to incur.  As a rule, if you don’t need a snapshot anymore, drop it.

USE master;
GO
DROP DATABASE AW_Snap1;
GO
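If you’re not sure which snapshots are lying around on an instance, sys.databases can tell you: snapshots are the rows with a non-NULL source_database_id.  A quick sketch:

```sql
/* List all database snapshots on this instance,
   along with the database each was taken from. */
SELECT name                          AS [Snapshot],
       DB_NAME(source_database_id)   AS [SourceDatabase],
       create_date
FROM sys.databases
WHERE source_database_id IS NOT NULL;
```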

Stay tuned for future installments in this series focusing on the fun things you can do with snapshots.

March 7, 2011 Posted by | Features, General | , | 1 Comment

Deadlocks and trace flag 1204

I recently got an email from yet another developer about an application that would sporadically die for no apparent reason.  The application had logic to retry automatically, and would then run without error.  Given the intermittent nature of the problem, and the fact that the process was terminating, rather than hanging or timing out, to me it smacked of deadlocking.  So I decided to turn on trace flags 1204 and 3605 to confirm my suspicions.

DBCC TRACEON (1204, 3605, -1)

Flag 1204 records deadlock information.  Flag 3605 sends it to the error log.  And the -1 parameter turns the trace on for all sessions, not just the current session starting the trace.

I enabled the trace flags and sure enough, deadlocks.

Once a deadlock occurs and the trace information is written to the error log, actually making sense of it can be tricky.  I’ve found that, rather than trying to view it through Management Studio, it’s easier to make a copy of the actual ERRORLOG file and view it in Notepad or WordPad.  It makes the information much more readable.  Let’s take a look at a typical deadlock trace output.

2011-02-25 16:00:45.96 spid4s      Deadlock encountered …. Printing deadlock information
2011-02-25 16:00:45.96 spid4s      Wait-for graph
2011-02-25 16:00:45.96 spid4s
2011-02-25 16:00:45.96 spid4s      Node:1

2011-02-25 16:00:45.96 spid4s      PAGE: 75:1:5994649             CleanCnt:3 Mode:IX Flags: 0x2
2011-02-25 16:00:45.96 spid4s       Grant List 0:
2011-02-25 16:00:45.96 spid4s         Owner:0x48A1FCE0 Mode: IX       Flg:0x0 Ref:0 Life:02000000 SPID:981 ECID:0 XactLockInfo: 0x4173350C
2011-02-25 16:00:45.96 spid4s         SPID: 981 ECID: 0 Statement Type: UPDATE Line #: 1
2011-02-25 16:00:45.96 spid4s         Input Buf: Language Event: UPDATE MyTable SET MyPatApplication = NULL WHERE MyPatApplication = '' AND MySendNotice = -1
2011-02-25 16:00:45.96 spid4s       Requested By:
2011-02-25 16:00:45.96 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x1E202E00 Mode: S SPID:1122 BatchID:0 ECID:0 TaskProxy:(0x196BC378) Value:0x4a7f5d80 Cost:(0/4232)
2011-02-25 16:00:45.96 spid4s
2011-02-25 16:00:45.96 spid4s      Node:2

2011-02-25 16:00:45.96 spid4s      OBJECT: 75:1886017850:0        CleanCnt:2 Mode:X Flags: 0x0
2011-02-25 16:00:45.96 spid4s       Grant List 1:
2011-02-25 16:00:45.96 spid4s         Owner:0x1EC887A0 Mode: X        Flg:0x0 Ref:22 Life:02000000 SPID:1122 ECID:0 XactLockInfo: 0x1E202E24
2011-02-25 16:00:45.97 spid4s         SPID: 1122 ECID: 0 Statement Type: SELECT Line #: 1
2011-02-25 16:00:45.97 spid4s         Input Buf: Language Event: SELECT MyTable.MyFolderID, CLName1, MDesc, MNarrative,  MArrangement FROM MyTable
2011-02-25 16:00:45.97 spid4s       Requested By:
2011-02-25 16:00:45.97 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x417334E8 Mode: IX SPID:981 BatchID:0 ECID:0 TaskProxy:(0x4AD92378) Value:0x131f63c0 Cost:(0/28112)
2011-02-25 16:00:45.97 spid4s
2011-02-25 16:00:45.97 spid4s      Victim Resource Owner:
2011-02-25 16:00:45.97 spid4s       ResType:LockOwner Stype:'OR'Xdes:0x1E202E00 Mode: S SPID:1122 BatchID:0 ECID:0 TaskProxy:(0x196BC378) Value:0x4a7f5d80 Cost:(0/4232)

The first thing we see is that this deadlock occurred between 2 processes, Node 1 and Node 2.  Deadlocks can happen between more than 2 processes as well, and I’ll show an example of that a little later.  For now, we’ll focus on Node 1 of this deadlock.  I’m not going to go into every bit of information, but I’ll show you the juicy bits.


2011-02-25 16:00:45.96 spid4s      PAGE: 75:1:5994649             CleanCnt:3 Mode:IX Flags: 0x2
2011-02-25 16:00:45.96 spid4s       Grant List 0:
2011-02-25 16:00:45.96 spid4s         Owner:0x48A1FCE0 Mode: IX       Flg:0x0 Ref:0 Life:02000000 SPID:981 ECID:0 XactLockInfo: 0x4173350C
2011-02-25 16:00:45.96 spid4s         SPID: 981 ECID: 0 Statement Type: UPDATE Line #: 1
2011-02-25 16:00:45.96 spid4s         Input Buf: Language Event: UPDATE MyTable SET MyPatApplication = NULL WHERE MyPatApplication = '' AND MySendNotice = -1
2011-02-25 16:00:45.96 spid4s       Requested By:
2011-02-25 16:00:45.96 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x1E202E00 Mode: S SPID:1122 BatchID:0 ECID:0 TaskProxy:(0x196BC378) Value:0x4a7f5d80 Cost:(0/4232)

In the first line, “PAGE: 75:1:5994649” is the resource this process had locked.  This means the lock was held on the object located on page 1:5994649 in database 75.  We can figure out what object this equates to by querying sys.databases to find out which database is number 75, and using the dbcc page command, dbcc page (75, 1, 5994649, 3), to get the id of the object on that page.  Then we query sys.objects to find the name of the object.  Also in the first line we see that the lock Mode on this resource was “IX” or intent exclusive.
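The lookup steps above can be sketched like this.  Note that DBCC PAGE is an undocumented command, and trace flag 3604 is needed to send its output to the client instead of the error log; the object id appears in the page header output (as m_objId, or "Metadata: ObjectId" on 2005 and later).

```sql
/* Step 1: which database is number 75? */
SELECT name FROM sys.databases WHERE database_id = 75;

/* Step 2: dump the page header to find the object id on page 1:5994649 */
DBCC TRACEON (3604);                 -- route DBCC PAGE output to the client
DBCC PAGE (75, 1, 5994649, 3);       -- dbid, file id, page number, print option
DBCC TRACEOFF (3604);

/* Step 3: in that database, resolve the object id to a name.
   Substitute the id you found in the page header. */
-- SELECT name FROM sys.objects WHERE object_id = <id from page header>;
```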

In the fourth line, we can see that this Node was SPID 981, and it was performing an update statement.  On the next line we see the exact statement that SPID 981 was executing at the time.

Beginning with the 6th line, we see information about the process that was requesting a lock on the resource in question:


2011-02-25 16:00:45.96 spid4s       Requested By:
2011-02-25 16:00:45.96 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x1E202E00 Mode: S SPID:1122 BatchID:0 ECID:0 TaskProxy:(0x196BC378) Value:0x4a7f5d80 Cost:(0/4232)

Here we see that SPID 1122 was requesting a shared lock on the same resource owned by SPID 981.

The information on Node 2 follows the same pattern:

2011-02-25 16:00:45.96 spid4s      OBJECT: 75:1886017850:0        CleanCnt:2 Mode:X Flags: 0x0
2011-02-25 16:00:45.96 spid4s       Grant List 1:
2011-02-25 16:00:45.96 spid4s         Owner:0x1EC887A0 Mode: X        Flg:0x0 Ref:22 Life:02000000 SPID:1122 ECID:0 XactLockInfo: 0x1E202E24
2011-02-25 16:00:45.97 spid4s         SPID: 1122 ECID: 0 Statement Type: SELECT Line #: 1
2011-02-25 16:00:45.97 spid4s         Input Buf: Language Event: SELECT MyTable.MyFolderID, CLName1, MDesc, MNarrative,  MArrangement FROM MyTable
2011-02-25 16:00:45.97 spid4s       Requested By:
2011-02-25 16:00:45.97 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x417334E8 Mode: IX SPID:981 BatchID:0 ECID:0 TaskProxy:(0x4AD92378) Value:0x131f63c0 Cost:(0/28112)

SPID 1122 had an exclusive lock on object 1886017850 in database 75 (an OBJECT-level lock this time, rather than a PAGE lock) while executing a SELECT statement.  And SPID 981 was requesting an intent-exclusive lock on that same resource.

In the final section:


2011-02-25 16:00:45.97 spid4s      Victim Resource Owner:
2011-02-25 16:00:45.97 spid4s       ResType:LockOwner Stype:'OR'Xdes:0x1E202E00 Mode: S SPID:1122 BatchID:0 ECID:0 TaskProxy:(0x196BC378) Value:0x4a7f5d80 Cost:(0/4232)

We see that SQL Server chose SPID 1122 to be the victim of this deadlock and terminated the process.  SQL Server generally picks the victim that is cheapest to roll back (compare the Cost values in the trace), which in practice often means a SELECT statement loses out to an INSERT/UPDATE/DELETE.
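If you’d rather a particular session always be the one sacrificed, you can influence this choice with SET DEADLOCK_PRIORITY: a LOW-priority session is chosen as the victim ahead of normal-priority sessions, regardless of rollback cost.  A sketch, reusing the table name from the trace above:

```sql
/* Run in the session that should lose any deadlock it gets into,
   e.g. a nightly report.  LOW and NORMAL exist on SQL Server 2005;
   HIGH and the numeric range -10..10 were added in 2008. */
SET DEADLOCK_PRIORITY LOW;

SELECT MyFolderID, CLName1
FROM MyTable;
```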

Two-node deadlocks are pretty straightforward to read once you get the hang of it.  Deadlocks with more nodes can be confusing, but just follow the chain.  For example:


2011-02-25 16:00:45.95 spid4s      Deadlock encountered …. Printing deadlock information
2011-02-25 16:00:45.95 spid4s      Wait-for graph
2011-02-25 16:00:45.95 spid4s
2011-02-25 16:00:45.95 spid4s      Node:1

2011-02-25 16:00:45.95 spid4s      PAGE: 75:1:5994649             CleanCnt:3 Mode:IX Flags: 0x2
2011-02-25 16:00:45.95 spid4s       Grant List 0:
2011-02-25 16:00:45.95 spid4s         Owner:0x48A1FCE0 Mode: IX       Flg:0x0 Ref:0 Life:02000000 SPID:981 ECID:0 XactLockInfo: 0x4173350C
2011-02-25 16:00:45.95 spid4s         SPID: 981 ECID: 0 Statement Type: UPDATE Line #: 1
2011-02-25 16:00:45.95 spid4s         Input Buf: Language Event: UPDATE MyTable SET MyPatApplication = NULL WHERE MyPatApplication = '' AND MySendNotice = -1
2011-02-25 16:00:45.95 spid4s       Requested By:
2011-02-25 16:00:45.95 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x1E202E00 Mode: S SPID:1122 BatchID:0 ECID:0 TaskProxy:(0x196BC378) Value:0x4a7f5d80 Cost:(0/4232)
2011-02-25 16:00:45.95 spid4s
2011-02-25 16:00:45.95 spid4s      Node:2

2011-02-25 16:00:45.95 spid4s      OBJECT: 75:1886017850:0        CleanCnt:3 Mode:X Flags: 0x0
2011-02-25 16:00:45.95 spid4s       Wait List:
2011-02-25 16:00:45.95 spid4s         Owner:0x53909C00 Mode: IS       Flg:0x2 Ref:1 Life:00000000 SPID:977 ECID:0 XactLockInfo: 0x44EA8794
2011-02-25 16:00:45.95 spid4s         SPID: 977 ECID: 0 Statement Type: SELECT Line #: 1
2011-02-25 16:00:45.95 spid4s         Input Buf: Language Event: (@P1 nvarchar(31))SELECT * FROM MyTable WHERE MyFolderID = @P1
2011-02-25 16:00:45.95 spid4s       Requested By:
2011-02-25 16:00:45.95 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x417334E8 Mode: IX SPID:981 BatchID:0 ECID:0 TaskProxy:(0x4AD92378) Value:0x131f63c0 Cost:(0/28112)
2011-02-25 16:00:45.95 spid4s
2011-02-25 16:00:45.95 spid4s      Node:3

2011-02-25 16:00:45.95 spid4s      OBJECT: 75:1886017850:0        CleanCnt:3 Mode:X Flags: 0x0
2011-02-25 16:00:45.95 spid4s       Grant List 1:
2011-02-25 16:00:45.95 spid4s         Owner:0x1EC887A0 Mode: X        Flg:0x0 Ref:22 Life:02000000 SPID:1122 ECID:0 XactLockInfo: 0x1E202E24
2011-02-25 16:00:45.95 spid4s         SPID: 1122 ECID: 0 Statement Type: SELECT Line #: 1
2011-02-25 16:00:45.95 spid4s         Input Buf: Language Event: SELECT MyTable.MyFolderID, CLName1, MDesc, MNarrative,  MArrangement FROM MyTable
2011-02-25 16:00:45.95 spid4s       Requested By:
2011-02-25 16:00:45.95 spid4s         ResType:LockOwner Stype:'OR'Xdes:0x44EA8770 Mode: IS SPID:977 BatchID:0 ECID:0 TaskProxy:(0x294C2378) Value:0x53909c00 Cost:(0/0)
2011-02-25 16:00:45.95 spid4s
2011-02-25 16:00:45.95 spid4s      Victim Resource Owner:
2011-02-25 16:00:45.95 spid4s       ResType:LockOwner Stype:'OR'Xdes:0x44EA8770 Mode: IS SPID:977 BatchID:0 ECID:0 TaskProxy:(0x294C2378) Value:0x53909c00 Cost:(0/0)

Here SPID 981 had something that SPID 1122 wanted, SPID 1122 had something that SPID 977 wanted, and SPID 977 had something that SPID 981 wanted.  As a result, SPID 977 was killed.

In the end, the developer was able to eliminate a lot of deadlocks by adding NOLOCK to his select statements, and his end-users are much happier.

March 1, 2011 Posted by | Troubleshooting | , | Leave a comment