Cleveland DBA

SQL Server administration in Cleveland, OH

SQL Server A to Z – Backups

One of the most important, if not the most important, tasks of the DBA is backing up your databases.  The database backup is the most basic disaster recovery plan you can have in place.  It’s certainly the easiest to implement.  Today I’m going to go over the basics of backups, including the different types of backups and how they work.

Full Backup

A full backup of a database includes all data pages in the database and just enough of the transaction log to be able to recover up to the end time of the backup.  Curious as to all the internal steps involved in a full backup?  Turn on trace flags 3004 and 3605 using DBCC TRACEON (3004, 3605, -1) and perform a backup.  Then take a look at your error log.  Just don’t forget to turn off the trace flags when you’re done, since they can cause some performance degradation in your backups.
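Putting those steps together as a quick sketch (the database name and backup path here are just examples):

```sql
-- Trace the internal steps of a full backup
DBCC TRACEON (3004, 3605, -1);

BACKUP DATABASE [AdventureWorks]
   TO DISK = N'D:\SQL2008\Backup\AdventureWorks.bak';

-- Review the SQL Server error log for the step-by-step output,
-- then turn the trace flags back off:
DBCC TRACEOFF (3004, 3605, -1);
```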

Differential Backup

A differential backup contains all data changed since the last full backup.  The last full backup.  Not the last differential backup.  This is an important distinction to keep in mind, because a lot of new DBAs (especially former Oracle DBAs) think that, in the event of a restore, they need to restore the full backup and then every differential backup taken after it.  And that’s simply not the case.  You only need to restore the latest differential.
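To illustrate, a restore might look like this (the file names are illustrative): restore the full backup WITH NORECOVERY, then only the most recent differential:

```sql
-- Restore the last full backup, leaving the database ready for more restores
RESTORE DATABASE [AdventureWorks]
   FROM DISK = N'D:\SQL2008\Backup\AdventureWorks_Full.bak'
   WITH NORECOVERY;

-- Then restore only the LATEST differential, not every differential
RESTORE DATABASE [AdventureWorks]
   FROM DISK = N'D:\SQL2008\Backup\AdventureWorks_Diff.bak'
   WITH RECOVERY;
```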

So how do differentials work?  SQL Server uses something called a Differential Change Map (DCM) page to track changes to extents.  Page 6 of every data file is a DCM page, and each DCM page tracks about 4GB worth of data.  So, for every 4GB of data in your database, there’s a corresponding DCM page keeping watch.  Let’s take a quick look.

-- Start with a full backup
BACKUP DATABASE [AdventureWorks]
   TO DISK = N'D:\SQL2008\Backup\AdventureWorks.bak'
   WITH NOFORMAT, INIT, NAME = N'AdventureWorks-Full Database Backup',
   SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

-- Look at the DCM
DBCC TRACEON (3604)
DBCC PAGE('adventureworks',1,6,3)
GO

Scroll down through the results until you get to the DIFF MAP section. Your results should look something like this:

DIFF_MAP: Extent Alloc Status @0x000000000F51A0C2

(1:0) - (1:24) = CHANGED
(1:32) - (1:21752) = NOT CHANGED

Now let’s update some data and take another look.

-- update some data (no WHERE clause, so every row gets touched)
update Production.TransactionHistory
set Quantity = 100
GO

-- check out the differential map now
DBCC PAGE('adventureworks',1,6,3)
GO

Again, scroll through the results to the DIFF MAP section. Now it probably looks more like this:

DIFF_MAP: Extent Alloc Status @0x000000000F51A0C2

(1:0) - (1:24) = CHANGED
(1:32) - (1:152) = NOT CHANGED
(1:160) - = CHANGED
(1:168) - (1:224) = NOT CHANGED
(1:232) - = CHANGED
(1:240) - (1:20096) = NOT CHANGED
(1:20104) - (1:20160) = CHANGED
(1:20168) - (1:20224) = NOT CHANGED
(1:20232) - (1:20288) = CHANGED

So by reading 1 page for every 4GB of data, SQL Server knows exactly what extents have been changed and therefore knows what extents need to be backed up. And that’s what makes differential backups so fast. If we were to perform a differential backup now and check the DCM page, we’d see no difference. However, if we performed a full backup, the DCM page would revert back to its original state.

Note that you can take a special type of full backup, called a Copy-Only backup that will not reset the DCM pages.  Use this if you need to take a one-off backup, say to refresh a Dev environment, without interrupting your normal backup plan.
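A copy-only backup looks like a normal full backup with one extra option (the path here is illustrative):

```sql
-- Full backup that does NOT reset the DCM pages
BACKUP DATABASE [AdventureWorks]
   TO DISK = N'D:\SQL2008\Backup\AdventureWorks_CopyOnly.bak'
   WITH COPY_ONLY;
```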

File Backup

Individual database files can be backed up and restored using full or differential file backups.  This is helpful if only one data file is corrupted or lost and needs to be restored.  File backups can also make backing up very large databases more manageable.  But you really need to think through your backup/restore strategy if you’re using file backups.  Miss one file and you could render your database unrecoverable.  In my humble opinion, they’re generally not worth the administrative nightmare.
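For illustration, backing up a single file looks like this (the logical file name here is hypothetical):

```sql
-- Back up one data file by its logical file name
BACKUP DATABASE [AdventureWorks]
   FILE = N'AdventureWorks_Data2'
   TO DISK = N'D:\SQL2008\Backup\AdventureWorks_Data2.bak';
```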

Partial backup

SQL Server 2005 introduced partial backups.  Like file backups, partial backups are helpful in very large databases; however, they don’t entail quite so much administration.  A full or differential partial backup will back up all read/write filegroups in the database and, optionally, any read-only filegroups you specify.
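A minimal partial backup sketch (path illustrative):

```sql
-- Back up all read/write filegroups, skipping read-only filegroups
BACKUP DATABASE [AdventureWorks]
   READ_WRITE_FILEGROUPS
   TO DISK = N'D:\SQL2008\Backup\AdventureWorks_Partial.bak';
```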

Transaction Log Backup

If your database is using the Full or Bulk-Logged recovery model, you need to take transaction log backups.  A transaction log backup backs up the active portion of the transaction log, including all transactions since the last log backup.  If you’re using the Bulk-Logged model, the log backup also includes the actual data extents changed by any minimally logged transactions.  Once a log backup is performed, that portion of the transaction log is freed up for reuse.  This prevents those pesky transaction log full (9002) errors.
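A basic log backup (path illustrative):

```sql
-- Back up the active portion of the log and free it for reuse
BACKUP LOG [AdventureWorks]
   TO DISK = N'D:\SQL2008\Backup\AdventureWorks_Log.trn';
```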


July 11, 2011 | Features, General, Maintenance

Scripting out a database – revisited

Doh!  A while back I posted a PowerShell script that would script out a SQL Server database.  Well, a reader pointed out to me yesterday that it wasn’t quite working for him.  And wouldn’t you know, it wasn’t working for me either.  The problem was that when I created the script I had a specific database in mind.  And this database resided on a SQL 2005/PowerShell 1.0 server.  My script worked like a champ there.  On a SQL 2008/PowerShell 2.0 server, well, not so much.  So I spent some time yesterday making some fixes, and I think I’ve got it this time (for either system).

Many thanks to William for the heads up.

The new code:

# SQLExportDBSchema.ps1
# Usage -
# From PS:  .\SQLExportDBSchema.ps1 "[Drive letter]" "[Server]" "[Database]"
# From CMD: powershell -command "& 'D:\powershell\SQLExportDBSchema.ps1' "[Drive letter]" "[Server]" "[Database]" "

param(
	[string]$drive=$null,
	[string]$srv=$null,
	[string]$database=$null
	)

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

$filepath = $drive+":\"+$srv+"_backup\"+$database+"\"

$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srv
$db = $s.Databases[$database]

$scrp = new-object ('Microsoft.SqlServer.Management.Smo.Scripter') ($s)

$scrp.Options.AppendToFile = $False
$scrp.Options.ClusteredIndexes = $True
$scrp.Options.DriAll = $True
$scrp.Options.ScriptDrops = $False
$scrp.Options.IncludeHeaders = $True
$scrp.Options.IncludeIfNotExists = $True
$scrp.Options.ToFileOnly = $True
$scrp.Options.Indexes = $True
$scrp.Options.Triggers = $True
$scrp.Options.Permissions = $True

#CREATE DATABASE
$scrp.Options.AppendToFile = $False
$outfile =$filepath+$database+"_create.sql"
$scrp.Options.FileName = $outfile
$scrp.Script($db)

#SECURITY
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_security.sql"
$scrp.Options.FileName = $outfile
$scrp.Options.IncludeDatabaseRoleMemberships = $true
Foreach ($ro in $db.Roles)
{
   $scrp.Script($ro)
   $scrp.Options.AppendToFile = $True
}
$scrp.Options.AppendToFile = $True
Foreach ($appro in $db.ApplicationRoles)
{
   $scrp.Script($appro)
}
Foreach ($us in $db.Users)
{
   $scrp.Script($us)
}
Foreach ($sch in $db.Schemas)
{
   $scrp.Script($sch)
}

$scrp.Options.WithDependencies = $True

#PARTITIONING
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_partitioning.sql"
$scrp.Options.FileName = $outfile
Foreach ($part in $db.PartitionFunctions)
{
   $scrp.Script($part)
   $scrp.Options.AppendToFile = $True
}

$scrp.Options.AppendToFile = $True
Foreach ($psch in $db.PartitionSchemes)
{
   $scrp.Script($psch)
}

#TABLES
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_tables.sql"
$scrp.Options.FileName = $outfile
Foreach ($tab in $db.Tables)
{
   $scrp.Script($tab)
   $scrp.Options.AppendToFile = $True
}

#DDLTRIGGERS
$scrp.Options.WithDependencies = $False
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_triggers.sql"
$scrp.Options.FileName = $outfile
Foreach ($trig in $db.Triggers)
{
   $scrp.Script($trig)
   $scrp.Options.AppendToFile = $True
}

$scrp.Options.WithDependencies = $True

#VIEWS
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_views.sql"
$scrp.Options.FileName = $outfile
$views = $db.Views | where {$_.IsSystemObject -eq $false}
$scrp.Script($views)

#FUNCTIONS
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_functions.sql"
$scrp.Options.FileName = $outfile
$functions = $db.UserDefinedFunctions | where {$_.IsSystemObject -eq $false}
$scrp.Script($functions)

#STORED PROCS
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_storedProcs.sql"
$scrp.Options.FileName = $outfile
$storedprocs = $db.StoredProcedures | where {$_.IsSystemObject -eq $false}
$scrp.Script($storedprocs)

#SYNONYMS
$scrp.Options.AppendToFile = $False
$outfile = $filepath+$database+"_synonyms.sql"
$scrp.Options.FileName = $outfile
Foreach ($syn in $db.Synonyms)
{
   $scrp.Script($syn)
   $scrp.Options.AppendToFile = $True
}

July 6, 2011 | Features, General, Maintenance

Maintenance 3.0

I just couldn’t let it go.  Right after my last post I decided I just wasn’t satisfied with that maintenance script.  First of all, it only worked on one database at a time.  And secondly, there were just too many selects within the cursor loop.  And, personally, I’d rather work with a temp table than a physical table.

So I made a few adjustments, and here’s the new and improved (!)… Maintenance 3.0.

SET NOCOUNT ON;
DECLARE @dbid int;
DECLARE @dbname varchar(250);
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname sysname;
DECLARE @objectname sysname;
DECLARE @indexname sysname;
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @fillfactor int;
DECLARE @command varchar(8000);

CREATE TABLE [dbo].[#work_to_do](
[dbid] [int] NULL,
[dbname] [varchar](250),
[schemaname] [varchar](250),
[objectid] [int] NULL,
[objectname] [varchar](250),
[indexid] [int] NULL,
[indexname] [varchar](250),
[partitionnum] [int] NULL,
[frag] [float] NULL,
[fillfactor] [int]
)

exec sp_msforeachdb 'USE [?];
INSERT INTO #work_to_do
SELECT
db_id() as dbid,
''?'',
sch.name,
s.object_id AS objectid,
o.name,
s.index_id AS indexid,
i.name,
s.partition_number AS partitionnum,
s.avg_fragmentation_in_percent AS frag,
i.fill_factor
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, ''LIMITED'') s
join sys.objects o (NOLOCK) on o.object_id = s.object_id
join sys.indexes i (NOLOCK) on i.object_id = s.object_id and i.index_id = s.index_id
JOIN sys.schemas sch (NOLOCK) ON sch.schema_id = o.schema_id
WHERE avg_fragmentation_in_percent > 5.0 AND s.index_id > 0
and fragment_count > 1000;'

PRINT '************FINISHED LOADING TABLE, REBUILDING INDEXES**************';

DECLARE partitions CURSOR FOR
SELECT * FROM #work_to_do where [dbname] not in ('master', 'tempdb', 'model', 'msdb', 'distribution');

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
FETCH NEXT
FROM partitions
INTO @dbid, @dbname, @schemaname, @objectid, @objectname, @indexid, @indexname, @partitionnum, @frag,
@fillfactor;

WHILE @@FETCH_STATUS = 0
BEGIN;

SELECT @partitioncount = count (*)
FROM sys.partitions (NOLOCK)
WHERE object_id = @objectid AND index_id = @indexid;

IF @fillfactor = 0
SET @fillfactor = 80
ELSE
SET @fillfactor = @fillfactor - 5

-- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding
IF @frag < 30.0
BEGIN;

SELECT @command = 'USE ['+@dbname+']; ALTER INDEX ' + @indexname + ' ON ' + @schemaname + '.' + @objectname + ' REORGANIZE';
IF @partitioncount > 1
SELECT @command = @command + ' PARTITION=' + CONVERT (CHAR, @partitionnum);
EXEC (@command);
PRINT 'Executed ' + @command;

SELECT @command = 'USE ['+@dbname+']; UPDATE STATISTICS ' + @schemaname + '.' + @objectname + ' (' +@indexname + ') WITH SAMPLE 30 PERCENT'
EXEC (@command);
PRINT 'Executed ' + @command;

END;

IF @frag >= 30.0
BEGIN;

SELECT @command = 'USE ['+@dbname+']; ALTER INDEX ' + @indexname +' ON ' + @schemaname + '.' + @objectname + ' REBUILD WITH (PAD_INDEX = ON,  FILLFACTOR = '+ cast(@fillfactor as varchar) +') ';
IF @partitioncount > 1
SELECT @command = @command + ' PARTITION=' + CONVERT (CHAR, @partitionnum);
EXEC (@command);
PRINT 'Executed ' + @command;

END;

FETCH NEXT
FROM partitions
INTO @dbid, @dbname, @schemaname, @objectid, @objectname, @indexid, @indexname, @partitionnum, @frag,
@fillfactor;

END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

TRUNCATE TABLE #work_to_do

PRINT '************FINISHED REINDEXING, UPDATING STATS ON REMAINING TABLES**************';

exec sp_MSforeachdb 'USE [?];
INSERT INTO #work_to_do
select distinct db_id(), db_name(), sch.name, t.object_id, t.name, s.index_id, i.name, NULL, NULL, NULL
from
sys.tables t
join sys.dm_db_index_usage_stats s
on s.object_id = t.object_id
and s.database_id = DB_ID()
and user_updates > 0
and STATS_DATE(s.object_id, s.index_id) < s.last_user_update
join sys.indexes i
on t.object_id = i.object_id
and i.index_id = s.index_id
and s.index_id > 0
JOIN sys.schemas as sch ON sch.schema_id = t.schema_id;'
;

DECLARE stats CURSOR FOR
SELECT * FROM #work_to_do where [dbname] not in ('master', 'tempdb', 'model', 'msdb', 'distribution')

-- Open the cursor.
OPEN stats;

-- Loop through the partitions.
FETCH NEXT
FROM stats
INTO @dbid, @dbname, @schemaname, @objectid, @objectname, @indexid, @indexname, @partitionnum, @frag,
@fillfactor;

WHILE @@FETCH_STATUS = 0
BEGIN

SELECT @command = 'USE ['+@dbname+']; UPDATE STATISTICS ' + @schemaname + '.' + @objectname + ' (' +@indexname + ') WITH SAMPLE 30 PERCENT'
EXEC (@command);
PRINT 'Executed ' + @command;
FETCH NEXT
FROM stats
INTO @dbid, @dbname, @schemaname, @objectid, @objectname, @indexid, @indexname, @partitionnum, @frag,
@fillfactor;

END;
CLOSE stats;
DEALLOCATE stats;

drop table #work_to_do

June 6, 2011 | Maintenance

The ever-shrinking maintenance window

As I’ve stated before, databases aren’t getting any smaller. Business and legal requirements are mandating that we gather and retain more and more data that, in the past, we might have purged from time to time. It’s a simple fact that database indexes must be maintained, statistics updated, and databases backed up; and the bigger our databases get, the more important basic maintenance becomes. But in these days of 24/7 availability, what is getting smaller is our maintenance window. Some of us are lucky to get a few hours every week or month for maintenance. Others don’t even get that.

I have an OLTP database I support that’s around 300GB. Certainly not huge compared to your average data warehouse, but big enough that index maintenance was becoming cumbersome. When we first migrated this database to SQL Server 2000 several years ago, it was less than 100GB, and the out-of-the-box Maintenance Plan tasks were fine for index and statistics maintenance. Maintenance 1.0, if you will.

As the database grew and our maintenance window shrank, however, this solution started taking too much time, so I switched to a more fine-tuned, TSQL approach that you’re probably familiar with already. Basically, I would focus on indexes with fragmentation higher than 5%; of those, I would rebuild indexes more than 30% fragmented and reorganize those less than 30% fragmented. This was Maintenance 2.0.

Well, that was certainly better, but I seemed to be rebuilding the same indexes every time. Obviously the fill-factor on those indexes was set too high, causing page splits (and thus fragmentation) to occur during inserts/updates. So I added logic to drop the existing fillfactor by 5% when the index was rebuilt. I monitor the fillfactor on my indexes periodically to make sure nothing is getting too low. Maintenance 2.1.

The reindexing portion of Maintenance 2.0 and 2.1 takes almost 3 hours to run. I’d like to see that cut down, but I don’t know how I’d be able to on our current system. However, I still wanted to address updating statistics during the maintenance window, and running update stats for the entire database with a 30% sampling was taking over 5 hours. Add to this the fact that I was looking at switching from nightly full backups to a weekly full backup with nightly differentials. I wanted to be able to run the index maintenance, update statistics, and take the full backup all within our weekly maintenance window.

In prior Maintenance versions, I was updating statistics on all of the tables in the database. That was something I could do on a monthly basis, but on a weekly basis it felt like overkill. There were some tables, and some large tables at that, that simply weren’t being used. And, more importantly, weren’t being updated. That was the key: why update statistics on a table that hasn’t changed since the last time you updated stats? So, in my most recent version, I’ve added logic to update statistics only on those indexes that have been updated since the last STATS_DATE. Maintenance 2.2. Or maybe it should be 3.0? Whatever. Here it is. And it ran in just over 3 hours.

USE myDatabase;
SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname sysname;
DECLARE @objectname sysname;
DECLARE @indexname sysname;
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @fillfactor int;
DECLARE @command varchar(8000);
-- ensure the temporary table does not exist
IF EXISTS (SELECT name FROM sys.objects WHERE name = 'work_to_do')
    DROP TABLE work_to_do;
---- conditionally select from the function, converting object and index IDs to names.
SELECT
    object_id AS objectid,
    index_id AS indexid,
    partition_number AS partitionnum,
    avg_fragmentation_in_percent AS frag
INTO work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 5.0 AND index_id > 0
and fragment_count > 1000;
-- Declare the cursor for the list of partitions to be processed.

PRINT '************FINISHED LOADING TABLE, REBUILDING INDEXES**************';
DECLARE partitions CURSOR FOR SELECT * FROM work_to_do;

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
FETCH NEXT
   FROM partitions
   INTO @objectid, @indexid, @partitionnum, @frag;

WHILE @@FETCH_STATUS = 0
    BEGIN;
        SELECT @objectname = o.name, @schemaname = s.name
        FROM sys.objects AS o
        JOIN sys.schemas as s ON s.schema_id = o.schema_id
        WHERE o.object_id = @objectid;

        SELECT @indexname = name, @fillfactor = fill_factor
        FROM sys.indexes
        WHERE  object_id = @objectid AND index_id = @indexid;

        SELECT @partitioncount = count (*)
        FROM sys.partitions
        WHERE object_id = @objectid AND index_id = @indexid;

	IF @fillfactor = 0
		SET @fillfactor = 80
	ELSE
		SET @fillfactor = @fillfactor - 5

-- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding
IF @frag < 30.0
    BEGIN;
    SELECT @command = 'ALTER INDEX ' + @indexname + ' ON ' + @schemaname + '.' + @objectname + ' REORGANIZE';
    IF @partitioncount > 1
        SELECT @command = @command + ' PARTITION=' + CONVERT (CHAR, @partitionnum);
    EXEC (@command);
        SELECT @command = 'UPDATE STATISTICS ' + @schemaname + '.' + @objectname + ' (' +@indexname + ') WITH SAMPLE 30 PERCENT'
    EXEC (@command);
    END;

IF @frag >= 30.0
    BEGIN;
    SELECT @command = 'ALTER INDEX ' + @indexname +' ON ' + @schemaname + '.' + @objectname + ' REBUILD WITH (PAD_INDEX = ON,  FILLFACTOR = '+ cast(@fillfactor as varchar) +') ';
    IF @partitioncount > 1
        SELECT @command = @command + ' PARTITION=' + CONVERT (CHAR, @partitionnum);
    EXEC (@command);
    END;
PRINT 'Executed ' + @command;

FETCH NEXT FROM partitions INTO @objectid, @indexid, @partitionnum, @frag;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

PRINT '************FINISHED REINDEXING, UPDATING STATS ON REMAINING TABLES**************';

DECLARE stats cursor static for
select distinct t.object_id, t.name, s.index_id, i.name
from
	sys.tables t
	join sys.dm_db_index_usage_stats s
		on s.object_id = t.object_id
		and s.database_id = DB_ID()
		and user_updates > 0
		and STATS_DATE(s.object_id, s.index_id) < s.last_user_update
	join sys.indexes i
		on t.object_id = i.object_id
		and i.index_id = s.index_id
		and s.index_id > 0;
;

OPEN stats
FETCH NEXT FROM stats INTO @objectid, @objectname, @indexid, @indexname
WHILE @@FETCH_STATUS = 0
BEGIN

		SELECT @schemaname = s.name
        FROM sys.objects AS o
        JOIN sys.schemas as s ON s.schema_id = o.schema_id
        WHERE o.object_id = @objectid;

        SELECT @command = 'UPDATE STATISTICS ' + @schemaname + '.' + @objectname + ' (' +@indexname + ') WITH SAMPLE 30 PERCENT'
		EXEC (@command);
		PRINT 'Executed ' + @command;
		FETCH NEXT FROM stats INTO @objectid, @objectname, @indexid, @indexname

END;
CLOSE stats;
DEALLOCATE stats;

-- drop the temporary table
IF EXISTS (SELECT name FROM sys.objects WHERE name = 'work_to_do')
    DROP TABLE work_to_do;
GO

June 2, 2011 | Maintenance

Scripting out a database

Switching gears a little bit, much of my attention this week has revolved around backups. We have several large databases that have gotten to the point where full backups every night have become unwieldy. So I’ve been redesigning the maintenance plans for them around a full backup once a week and differentials nightly. So far it’s working out very well, saving us a lot of time, disk, and tape. Woot!

While I was thinking about backups in general, I also started thinking about a question that’s come up before: how to script out all the database objects in an automated fashion. Whether it’s so you can create a second “empty shell” copy of your database, or so you can restore a single stored procedure that you accidentally dropped, or just because it’s a development environment and you don’t care about the data so much as the object definitions. (Or because it’s just cool.)  Whatever the reason, I personally think this is something that could come in handy, and I hadn’t come across any way in SQL Server to automate the scripting of a database.

So I did some searching and found a few third party tools (nah), some options that revolved around VB or C# or something (yeah, have I mentioned I’m not a programmer?). Then I came across a few partial solutions using PowerShell. Now, again, me != programmer, but I can handle a little bit of PowerShell. They were pretty straightforward, but they weren’t quite… enough. So I took a little of each, threw in a few touches of my own, and voila, a solution that I’m pretty darn pleased with. Can it be improved upon? I’m fairly confident it can be. And probably will be. But in the meantime it gets the job done. And it can be scheduled either through a database job or via the Windows scheduler, which was the whole point.  And so, without further ado, I present… the script… (ooooooo!)

# SQLExportDBSchema.ps1
# Usage -
# From PS:  .\SQLExportDBSchema.ps1 "[output location]" "[Server]" "[Database]"
# From CMD: powershell -command "& 'D:\powershell\SQLExportDBSchema.ps1' "[output location]" "[Server]" "[Database]" "

param(
	[string]$filepath=$null,
	[string]$srv=$null,
	[string]$database=$null
	)

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

$filepath = $filepath+"\"

$s = new-object ('Microsoft.SqlServer.Management.Smo.Server') $srv

$db=$s.Databases[$database]

$scrp = new-object ('Microsoft.SqlServer.Management.Smo.Scripter') ($s)

$scrp.Options.AppendToFile = $False
$scrp.Options.ClusteredIndexes = $True
$scrp.Options.DriAll = $True
$scrp.Options.ScriptDrops = $False
$scrp.Options.IncludeHeaders = $True
$scrp.Options.IncludeIfNotExists = $True
$scrp.Options.ToFileOnly = $True
$scrp.Options.Indexes = $True
$scrp.Options.Triggers = $True
$scrp.Options.Permissions = $True

#CREATE DATABASE
$outfile =$filepath+$database+"_create.sql"
$scrp.Options.FileName = $outfile
$scrp.Script($db)

#SECURITY
$outfile = $filepath+$database+"_security.sql"
$scrp.Options.FileName = $outfile
$scrp.Options.IncludeDatabaseRoleMemberships = $true
$scrp.Script($db.Roles)
$scrp.Options.AppendToFile = $True
$scrp.Script($db.ApplicationRoles)
$scrp.Script($db.Users)
$scrp.Script($db.Schemas)

$scrp.Options.AppendToFile = $False
$scrp.Options.WithDependencies = $True

#PARTITIONING
$outfile = $filepath+$database+"_partitioning.sql"
$scrp.Options.FileName = $outfile
$scrp.Script($db.PartitionFunctions)
$scrp.Options.AppendToFile = $True
$scrp.Script($db.PartitionSchemes)

$scrp.Options.AppendToFile = $False

#TABLES
$outfile = $filepath+$database+"_tables.sql"
$scrp.Options.FileName = $outfile
$scrp.Script($db.Tables)

#DDLTRIGGERS
$outfile = $filepath+$database+"_triggers.sql"
$scrp.Options.FileName = $outfile
$scrp.Script($db.Triggers)

#VIEWS
$outfile = $filepath+$database+"_views.sql"
$scrp.Options.FileName = $outfile
$views = $db.Views | where {$_.IsSystemObject -eq $false}
$scrp.Script($views)

#FUNCTIONS
$outfile = $filepath+$database+"_functions.sql"
$scrp.Options.FileName = $outfile
$functions = $db.UserDefinedFunctions | where {$_.IsSystemObject -eq $false}
$scrp.Script($functions)

#STORED PROCS
$outfile = $filepath+$database+"_storedProcs.sql"
$scrp.Options.FileName = $outfile
$storedprocs = $db.StoredProcedures | where {$_.IsSystemObject -eq $false}
$scrp.Script($storedprocs)

#SYNONYMS
$outfile = $filepath+$database+"_synonyms.sql"
$scrp.Options.FileName = $outfile
$scrp.Script($db.Synonyms)

May 5, 2011 | Features, General, Maintenance