Cleveland DBA

SQL Server administration in Cleveland, OH

SQL Server A to Z – Audit

In previous versions of SQL Server, there were a number of methods for auditing activity within an instance or database.  You could use DDL triggers to track any DDL changes, login auditing to write login successes or failures to the error log, Profiler tracing, and even C2/Common Criteria Compliance Auditing.  But each of these methods had its shortcomings: too limited in scope or granularity, or just too hard to administer.

Introducing SQL Server Audit

With SQL Server 2008 we’re introduced to a more comprehensive solution called SQL Server Audit.  With SQL Server Audit you can track events at the instance or database level.  You can audit login failures at the server level, for example, track object changes within a database, or even audit T-SQL statements issued against a specific object.

Create the Audit object

To get started using SQL Server Audit, we need to create an Audit object.  This can be done through SSMS or T-SQL.  Using SSMS, right-click the Audit folder under Security and select New Audit.  Give the new audit a name and specify where you want the output to go.  You can choose to write to a file, the Application Log, or the Security Log.  Personally, I like to output to a separate file.  If you choose File, specify the path where you want the output file stored and, optionally, the maximum size for the file.  Click OK.
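
If you’d rather script it, the T-SQL equivalent looks something like this.  Treat it as a sketch: the audit name, file path, and option values here are placeholders I’ve picked for illustration, not required settings.

USE master;
GO
CREATE SERVER AUDIT DemoAudit
TO FILE (FILEPATH = 'C:\AuditLogs\', MAXSIZE = 100 MB)
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
GO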

Create the Audit Specification

Now that we’ve created the audit, we need to tell it what to track.  To do this, we create an audit specification.  Audit specifications can be created at the server level or at the database level, each having its own set of actions.  For the sake of this demonstration, let’s say I want to audit any DDL changes made to objects in the AdventureWorks database.  Within the AdventureWorks database in SSMS, right-click Database Audit Specifications under Security, and select New Database Audit Specification…  Give your spec a name and select the audit we just created.  Under Actions, for the Audit Action Type, choose SCHEMA_OBJECT_CHANGE_GROUP and click OK.
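
The same specification in T-SQL would look something like this (the specification name is mine, and it assumes the server audit above was named DemoAudit):

USE AdventureWorks;
GO
CREATE DATABASE AUDIT SPECIFICATION AuditDemo_DDL
FOR SERVER AUDIT DemoAudit
ADD (SCHEMA_OBJECT_CHANGE_GROUP);
GO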


Enable the Audit and the Audit Specifications

You’ll notice that both the Audit and the Audit Specification are created in a disabled state.  Right-click on each and enable them, or enable them with T-SQL as shown below.
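
A quick sketch of the T-SQL version, assuming the hypothetical object names used above:

USE master;
GO
ALTER SERVER AUDIT DemoAudit WITH (STATE = ON);
GO
USE AdventureWorks;
GO
ALTER DATABASE AUDIT SPECIFICATION AuditDemo_DDL WITH (STATE = ON);
GO

Now we’re ready to test it.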

USE AdventureWorks
GO
CREATE TABLE AuditDemo (
Column1 int);
GO

To view the audit log, right-click on the Audit object and select View Audit Logs.
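
If the audit writes to a file, you can also query the output directly with sys.fn_get_audit_file.  The path below is just a placeholder; point it at wherever your audit files actually live.

SELECT event_time, action_id, database_name, object_name, statement
FROM sys.fn_get_audit_file('C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT);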

And there you have it.  Easier than Profiler, less invasive than triggers, and more specific than C2 auditing.  I was impressed by the sheer number of events you can audit and how easy this was to implement.  What do you think?  Do you plan on using SQL Server Audit in your environment?

July 8, 2011 | Features

Stuff I learned today – Table Types

Since SQL Server 2000, you’ve been able to create variables of type TABLE, but you needed to define that table structure when (and each time) you declared the variable. You also couldn’t pass these table variables into a stored procedure.
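
For comparison, this is the sort of declaration you had to repeat in every batch that needed it (the column list is borrowed from the example further down):

DECLARE @Books TABLE (
	PatronID int NOT NULL,
	BookID   int NOT NULL,
	Title    varchar(250),
	Author   varchar(100));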

SQL Server 2008 introduced the user-defined TABLE datatype. Now you can create and store custom-made TABLE types for use with variables and as parameters into your stored procedures. (They cannot, however, be used as the data type for a column within a table.)

TABLE types are created and dropped using the CREATE TYPE and DROP TYPE commands. There’s no ALTER TYPE command, so if you need to change a type, drop it and recreate it.

CREATE TYPE dbo.BookDetails
AS TABLE (
PatronID int NOT NULL,
BookID	int NOT NULL,
Title	varchar(250),
Author 	varchar(100));
GO

So, why would you want to pass a table into a stored procedure? Imagine you work for a library and you’re writing an app to record books that people borrow, and you use a stored procedure to insert the book details into the BooksBorrowed table. Previously, if a patron borrowed multiple books, you’d have to call that procedure once for each record, right? Maybe like this:

CREATE PROCEDURE CheckOutBooks
@PatronID int, @BookID int, @Title nvarchar(250), @Author nvarchar(100)
AS
INSERT INTO BooksBorrowed (PatronID, BookID, Title, Author, DateBorrowed)
VALUES (@PatronID, @BookID, @Title, @Author, GETDATE());
GO

-- Now we have to run it multiple times to check out all the books Patron 1 wants

EXEC CheckOutBooks 1, 45, 'The Grapes Of Wrath', 'John Steinbeck';
EXEC CheckOutBooks 1, 532, 'Gone With The Wind', 'Margaret Mitchell';

Not very efficient, right? Now let’s try it with the BookDetails TABLE type we created earlier.

-- Drop the old version and recreate it to accept the table type as its only parameter
DROP PROCEDURE CheckOutBooks;
GO
CREATE PROCEDURE CheckOutBooks
@Books BookDetails READONLY
AS
INSERT INTO BooksBorrowed (PatronID, BookID, Title, Author, DateBorrowed)
SELECT PatronID, BookID, Title, Author, GETDATE() FROM @Books;
GO

-- Now we only have to run the procedure once

DECLARE @vBooks BookDetails;

INSERT INTO @vBooks VALUES (1, 45, 'The Grapes Of Wrath', 'John Steinbeck'),
			   (1, 532, 'Gone With The Wind', 'Margaret Mitchell');

EXEC CheckOutBooks @vBooks;

This reduces round-trip overhead and speeds up the entire transaction. Having a single, pre-defined definition of the TABLE type also means consistency across applications. Consistency=good. Fast transactions=gooder.

So, look it up. Play with it. Have fun, people.

June 8, 2011 | Features, T-SQL

Stuff I learned today – hierarchyid

I’m in training this week. Since I was restricted to a local class, I decided to take one focused on development, rather than dba-related topics, and get some exposure to facets of SQL Server I don’t normally deal with on a day-to-day basis. And I figured I’d share some of my newly garnered knowledge with you.

Today’s nugget of new knowledge: the hierarchyid data type. This is a new data type in SQL 2008 that addresses the challenges of storing parent/child hierarchical relationships inside a table. Previously, developers had to use CTEs and self-joins to handle this kind of scenario. A classic example is the Employee table, where Employees have Managers who are also Employees, seen below.

USE AdventureWorks;
GO

SELECT EmployeeID, LastName, FirstName, ManagerID
INTO EmployeeDemo
FROM HumanResources.Employee
JOIN Person.Contact ON Employee.ContactID = Contact.ContactID;
GO

SELECT
     Mgr.EmployeeID AS ManagerID, Mgr.FirstName + ' ' + Mgr.LastName AS ManagerName,
     Emp.EmployeeID AS EmployeeID, Emp.FirstName + ' ' + Emp.LastName AS EmployeeName
FROM EmployeeDemo AS Emp
LEFT JOIN EmployeeDemo AS Mgr
ON Emp.ManagerID = Mgr.EmployeeID
ORDER BY ManagerID, EmployeeID;

Another way to look at it is with a recursive CTE, which also lets you get each employee’s level in the hierarchy.

 WITH Organization(EmployeeId, LastName, Manager, HierarchyOrder)
 AS
 (
    SELECT emp.EmployeeId, emp.LastName, emp.LastName, 1 AS HierarchyOrder
    FROM EmployeeDemo AS emp
      WHERE emp.ManagerId is Null
    UNION ALL
    SELECT emp.EmployeeId, emp.LastName, Parent.LastName, HierarchyOrder + 1
    FROM EmployeeDemo AS emp
           INNER JOIN Organization AS Parent
                 ON emp.ManagerId = parent.EmployeeId
 )
 SELECT *
 From Organization
GO

That’s great and all, and pretty slick looking, but a recursive query like that can start to take a performance hit as the hierarchy grows.

So let’s say I want to take my existing EmployeeDemo table and use it to build a hierarchical table. How would I do that? I’d start by creating a new table that contains a hierarchyid column.

CREATE TABLE OrgChart
(
  OrgNode hierarchyid,
  EmployeeID int,
  LastName varchar(50),
  FirstName varchar(50),
  ManagerID int
);
GO

I can then use a slightly modified version of my previous CTE to populate it.

WITH Organization(path, EmployeeID)
AS (
-- This section provides the value for the root of the hierarchy
SELECT hierarchyid::GetRoot() AS OrgNode, EmployeeID
FROM EmployeeDemo AS C
WHERE ManagerID IS NULL
UNION ALL
-- This section provides values for all nodes except the root
SELECT
CAST(p.path.ToString() + CAST(ROW_NUMBER() OVER (PARTITION BY ManagerID ORDER BY ManagerID) AS varchar(30)) + '/' AS hierarchyid),
C.EmployeeID
FROM EmployeeDemo AS C
JOIN Organization AS p
   ON C.ManagerID = P.EmployeeID
)
INSERT INTO OrgChart
SELECT path, o.EmployeeID, LastName, FirstName, ManagerID
FROM Organization o JOIN EmployeeDemo e ON o.EmployeeID = e.EmployeeID

SELECT * FROM OrgChart;
GO

So now what do I do with this new column? Well, having this new column makes the CTE unnecessary. You can traverse the hierarchy branches using built-in functions. Suppose I want to see who works under Peter Krebs, employee 21.

DECLARE @Manager hierarchyid

SELECT @Manager = OrgNode
FROM OrgChart
WHERE EmployeeID = 21;

SELECT *
FROM OrgChart
WHERE OrgNode.IsDescendantOf(@Manager) = 1
ORDER BY OrgNode;

With that query we can see that Cynthia Randall reports to Peter, and Andy Ruth reports to Cynthia, etc.
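
There are more built-in methods worth exploring, too. As a rough sketch against the same OrgChart table, GetLevel() returns each node’s depth in the tree and GetAncestor(1) walks up to the immediate parent:

SELECT o.OrgNode.ToString() AS NodePath,
       o.OrgNode.GetLevel() AS Depth,
       o.LastName,
       m.LastName AS ManagerLastName
FROM OrgChart AS o
LEFT JOIN OrgChart AS m
   ON m.OrgNode = o.OrgNode.GetAncestor(1)
ORDER BY o.OrgNode;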

I’m definitely going to have to play around with this new datatype and all of the associated functions to get my brain wrapped around it better, but it looks pretty cool.

Some resources for further reading on the hierarchyid data type:
Populating a Table with Existing Hierarchical Data

Creating and Managing Data in a Hierarchical Table

Using hierarchyid Data Types (Database Engine)

June 6, 2011 | Features, T-SQL

Resource Governor causing memory problem??

So.  Last week I blogged about the experiment I was running with Resource Governor for one of our newly upgraded development databases.  I had enabled Resource Governor and created a classifier function to categorize sessions, but I hadn’t set any limits, and everyone was sharing the same default pool.  The plan, if you’ll recall, was to monitor the Resource Governor stats to get a feel for the resources different apps were using.
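
For anyone who hasn’t set one up, the classifier is just a schema-bound scalar function in master that returns a workload group name for each new session.  The names and routing logic below are purely illustrative, not my actual function, but the shape is the same:

USE master;
GO
CREATE FUNCTION dbo.fn_rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
	-- Hypothetical routing: report sessions land in the Reports group,
	-- everything else falls through to the default group
	IF APP_NAME() LIKE '%Report%'
		RETURN 'Reports';
	RETURN 'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO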

This week, users have been doing a lot of testing in this environment.  A developer called me on Wednesday morning and said that users were complaining of very slow performance.  So I took a look and saw several sessions waiting on RESOURCE_SEMAPHORE.  Now, I’ve been supporting this application’s database for years, and I’ve never seen any memory waits, so this got my attention quickly.  One of the sessions that was running was one of the reports that I’d enabled Resource Governor to address in the first place.  A look at sys.dm_exec_query_memory_grants told me this query was requesting almost 19GB of memory (or, put another way, all of the available memory).  What the…?
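
If you want to see who’s asking for what on your own system, a query along these lines against sys.dm_exec_query_memory_grants will show it (just a sketch):

SELECT session_id,
       requested_memory_kb / 1024 AS requested_mb,
       granted_memory_kb / 1024 AS granted_mb,
       ideal_memory_kb / 1024 AS ideal_mb,
       query_cost
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;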

I captured the query (and asked the user to kill his report) and ran the query in our production (SQL 2005) database.  In that environment it was only requesting 7GB of memory.  Still a lot, but do-able in a system with 40GB available.  The query plans were slightly different between the 2 environments, but updating stats didn’t change that at all.  I decided to use Resource Governor as it was intended, and I set a 40% memory grant limit for the Reports workload group.  I re-ran the query and this time it was only requesting 7GB of RAM.
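
For the record, setting that limit is just an ALTER WORKLOAD GROUP plus a reconfigure.  Assuming the workload group is literally named Reports, something like:

ALTER WORKLOAD GROUP Reports
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;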

The immediate problem was remediated, but it niggled at the back of my brain: why had the query requested that much memory in the first place?  Thursday morning I got another call.  This time, when I checked the system, a different application was requesting all of the available memory.  I could have set a restriction for that workload group, too, but this was just treating the symptoms.  There had to be a root cause.

I started searching the Great Gazoogle for known memory issues in SQL 2008, but found nothing that explained why queries were suddenly using a lot more memory than they did in SQL 2005.  Think.  Think.  Think.  What if Resource Governor is the problem?  So I ran a series of tests:  with Resource Governor disabled, with it enabled and the memory limit set to 40%, and enabled with the memory limit set to 100% (or unlimited).  I also ran it in our 2005 environment just for comparison purposes, and I monitored sys.dm_exec_query_memory_grants for each test.  The results were very interesting.

Again I say:  what the…?  With Resource Governor disabled the query only uses 4.8GB of memory.  Enabling Resource Governor actually appears to cause the query to request more memory.  It almost looks like, rather than being a limit, the memory grant % is telling the query how much to use (putting ideas into its head).  I know that can’t be the case, but that’s how it looks.  I thought maybe I’d misunderstood the settings, that maybe I’d mis-configured Resource Governor, but I don’t think I did.

Memory Grant %
Specify the maximum amount of memory that a single request can take from the pool. Range is 0 to 100.

And technically, that’s what SQL Server did.  I said “this group can use 100% of the memory,” and it gave that query 100% of the memory.  Or tried to, anyway.  But if it really needed all that memory, why didn’t it ask for it with Resource Governor disabled?

So far I haven’t been able to find any bugs to explain this behavior.  And a post to SQLServerCentral’s forums hasn’t yielded any responses.  I guess for now I’m going to leave it disabled and keep searching for an explanation.  If anyone out there has any insight, please post a comment.  I’d really appreciate it.

April 22, 2011 | Features, Troubleshooting