Sunday 16 May 2010

Defragmenting SQL Server 2008+ using ALTER INDEX

In SQL 2009 DBCC INDEXDEFRAG will no longer work, as Microsoft are removing the command. You need to use ALTER INDEX instead:
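
For example, against a hypothetical index IX_Orders_CustomerId on a dbo.Orders table (the names are purely illustrative):

-- REORGANIZE is the ALTER INDEX equivalent of DBCC INDEXDEFRAG: an online, low-impact defrag
ALTER INDEX [IX_Orders_CustomerId] ON [dbo].[Orders] REORGANIZE;

Wrapped up as a stored procedure that finds and reorganises every fragmented index in the current database: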

CREATE PROCEDURE sjh_defragment_indexes_2008
(
    @maxfrag FLOAT
)
AS
 
/* Simon Hughes 15 May 2010
 
For SQL 2008+
This stored procedure checks index fragmentation in a database and defragments
indexes whose fragmentation falls above a specified threshold: @maxfrag
 
Must be run in the database to be defragmented.
 
exec sjh_defragment_indexes_2008 5
*/
 
SET NOCOUNT ON
SET XACT_ABORT ON
 
-- Write start time for information purposes
PRINT 'Started defragmenting at: ' + CONVERT(VARCHAR, GETDATE())
 
-- Check fragmentation
DECLARE @execstr NVARCHAR(255)
DECLARE lcsr CURSOR LOCAL FAST_FORWARD FOR
    SELECT  'ALTER INDEX [' + i.name + '] ON [' + SCHEMA_NAME(so.schema_id) + '].[' + so.name + '] REORGANIZE;' AS execstr
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS phystat
            INNER JOIN sys.indexes AS i
                ON i.object_id = phystat.object_id
                   AND i.index_id = phystat.index_id
            INNER JOIN sys.objects AS so
                ON so.object_id = i.object_id
    WHERE   (i.name IS NOT NULL)
            AND (so.type = 'U')
            AND (phystat.avg_fragmentation_in_percent > @maxfrag)
 
OPEN lcsr
FETCH NEXT FROM lcsr INTO @execstr
 
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @execstr
    FETCH NEXT FROM lcsr INTO @execstr
END
 
CLOSE lcsr
DEALLOCATE lcsr
 
-- Report on finish time for information purposes
PRINT 'Finished defragmenting at: ' + CONVERT(VARCHAR, GETDATE())
GO

Download code here.

How to turn off auto-shrink on all your databases

Shrinking a database is bad. Why? Well, it's because it causes massive index fragmentation. (You can read about it here).


DECLARE @Base SYSNAME
 
DECLARE database_cursor CURSOR FOR
    SELECT  QUOTENAME(name) AS Base
    FROM    master..sysdatabases
    WHERE   DATABASEPROPERTYEX(name, 'Status') = 'ONLINE'
            AND DATABASEPROPERTYEX(name, 'IsAutoShrink') = 1
 
OPEN database_cursor
FETCH NEXT FROM database_cursor INTO @Base
 
WHILE @@FETCH_STATUS = 0 
BEGIN
    EXEC ('ALTER DATABASE ' + @Base + ' SET AUTO_SHRINK OFF')
    FETCH NEXT FROM database_cursor INTO @Base
END
CLOSE database_cursor
DEALLOCATE database_cursor
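
A quick check afterwards, using the same system views, shows whether any online databases still have auto-shrink enabled (it should return no rows):

SELECT  name
FROM    master..sysdatabases
WHERE   DATABASEPROPERTYEX(name, 'Status') = 'ONLINE'
        AND DATABASEPROPERTYEX(name, 'IsAutoShrink') = 1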

Defragmenting SQL Server 2005 and 2008 using dynamic management views

You can read all about SQL defragmentation here by Satya Jayanty. The bottom line: fragmentation is bad. It causes high CPU load and poor SQL performance. Shrinking your database causes massive index fragmentation (you can read about that here), so you should always turn off auto-shrink (I also have a script for that here).

For SQL 2000 you must use DBCC SHOWCONTIG. You can use the SQL given here.
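
If you are stuck on SQL 2000, a typical call looks something like this (the table name is just an example):

-- Report fragmentation for all indexes on a table, returned as a result set
DBCC SHOWCONTIG ('dbo.Orders') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS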

However, for SQL Server 2005 onwards, there is a better way using the dynamic management view sys.dm_db_index_physical_stats introduced in SQL 2005.

Note: This proc will not work in the next version of Microsoft SQL Server (read 2009) because Microsoft are removing DBCC INDEXDEFRAG; you need to use ALTER INDEX instead, as shown in my next blog entry.

SELECT  OBJECT_NAME(i.object_id) AS TableName,
        i.name AS TableIndexName,
        phystat.avg_fragmentation_in_percent,
        phystat.fragment_count
FROM    sys.dm_db_index_physical_stats(DB_ID('your_database_name'), NULL, NULL, NULL, 'LIMITED') phystat
        INNER JOIN sys.indexes i
            ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id
WHERE   phystat.avg_fragmentation_in_percent > 10
ORDER BY phystat.avg_fragmentation_in_percent DESC

Building that into a stored proc to execute every so often to defrag gives us:

CREATE PROCEDURE sjh_defragment_indexes
(
    @maxfrag FLOAT
)
AS
 
/* Simon Hughes 15 May 2010
 
For SQL 2005+ only
This stored procedure checks index fragmentation in a database and defragments
indexes whose fragmentation falls above a specified threshold: @maxfrag
 
Must be run in the database to be defragmented.
Please change 'your_database_name' to be the name of your database 
 
exec sjh_defragment_indexes 5.0
*/
 
SET NOCOUNT ON
SET XACT_ABORT ON
 
-- Check this is not being run in a system database
IF DB_NAME() IN ('master', 'msdb', 'model', 'tempdb') 
BEGIN
    PRINT 'This procedure should not be run in system databases.'
    RETURN
END
 
-- Check fragmentation
SELECT  OBJECT_NAME(i.object_id) AS TableName,
        i.name AS TableIndexName,
        phystat.avg_fragmentation_in_percent,
        phystat.fragment_count,
        'DBCC INDEXDEFRAG (0, [' + OBJECT_NAME(i.object_id) + '], [' + i.name + ']) WITH NO_INFOMSGS' AS execstr
INTO    [#frag]
FROM    sys.dm_db_index_physical_stats(DB_ID('your_database_name'), NULL, NULL, NULL, 'LIMITED') phystat
        INNER JOIN sys.indexes i
            ON i.object_id = phystat.object_id
               AND i.index_id = phystat.index_id
WHERE   i.name IS NOT NULL
        AND phystat.avg_fragmentation_in_percent > @maxfrag
ORDER BY phystat.avg_fragmentation_in_percent DESC
 
SELECT * FROM [#frag]
 
-- Write start time for information purposes
PRINT 'Started defragmenting at: ' + CONVERT(VARCHAR, GETDATE())
 
 
-- Begin defrag
DECLARE @execstr NVARCHAR(1024)
DECLARE cc CURSOR FAST_FORWARD FOR
    SELECT execstr FROM [#frag]
 
OPEN cc
FETCH NEXT FROM cc INTO @execstr
 
WHILE @@FETCH_STATUS = 0 
BEGIN
    EXEC sp_executesql @execstr
 
    FETCH NEXT FROM cc INTO @execstr
END
 
-- Close and deallocate the cursor
CLOSE cc
DEALLOCATE cc
 
-- Report on finish time for information purposes
PRINT 'Finished defragmenting at: ' + CONVERT(VARCHAR, GETDATE())
 
-- Delete the temporary table
DROP TABLE [#frag]
GO

Download code here.

Friday 14 May 2010

Generating Insert Statements

Quite a good article by Oleg Netchaev on how to automatically generate INSERT statements for a given table.
The SQL for this is available here.
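
The basic idea is easy to sketch; assuming a hypothetical dbo.Customer table with Id (INT) and Name (NVARCHAR) columns, the statements can be built by simple string concatenation:

-- Build one INSERT statement per row, doubling up embedded quotes in string values
SELECT  'INSERT INTO dbo.Customer (Id, Name) VALUES ('
        + CAST(Id AS NVARCHAR(10)) + ', N'''
        + REPLACE(Name, '''', '''''') + ''');'
FROM    dbo.Customer

Oleg's script generalises this to work against any table.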

If you only want a small subset generated, then have a look at the SSMS Tools Pack, which is free.

Friday 7 May 2010

Microsoft Hyper-V vs VMware ESXi

Full credit for this article goes to Jonathan Payne, who works as one of our esteemed DBAs. Unfortunately he has no blog, so here it is:

What has been going on?
Over the past week we have been testing and comparing VMware’s ESXi virtualisation product with Microsoft’s Hyper-V. This has come about because our existing virtualisation infrastructure, which is built on VMware Server, has not been performing as expected and has been having a real impact on our services. The servers V1, V2 and BuildServer are all virtual machines running in this environment on the physical server Virtual1, and we have had problems where VMware Server just couldn’t seem to allocate enough CPU resources to run the VMs correctly, even though at least 60% of CPU capacity was sitting unused.

What are we comparing?
We have investigated the use of ESXi or Hyper-V to provide the virtualisation environment. These are both type 1 hypervisors, which means that hardware resources are presented directly to the VMs. So a 1-core VM is presented with 1 physical CPU core with nothing in-between. Okay, it isn’t as simple as that, but you get the idea. For devices such as network adapters, there is more intervention involved, and this is where the biggest differences between ESXi and Hyper-V occur. Hyper-V uses the installed Windows drivers, which are easy to update. ESXi creates its own environment and can only talk to the physical devices that it has pre-installed drivers for. This is why the hardware compatibility list is very important for ESXi, whereas it isn’t so important for Hyper-V as the Windows drivers are used. In this case, our HP hardware is supported by ESXi, so there is no problem there.
We set up ESXi and Hyper-V environments to run on Virtual2 so that we could install, test and compare the solutions.

How will changing to a type 1 hypervisor help?
The VMs will get more resources, which should help the performance bottlenecks that we are experiencing.
The testing of both environments highlighted some of the pros and cons of using the technologies. Although the performance testing was limited, as it is difficult to create an environment containing multiple test VMs, we compared the test VM build server performances to the current production build server performance running under VMware Server.

So what were the test results?
Both ESXi and Hyper-V show performance improvements over VMware Server. The build server tests show that a software build will complete in half the time on the ESXi and Hyper-V versions of the VM when compared to the VMware Server VM. So we are expecting tangible performance improvements by moving to ESXi or Hyper-V.

Cut to the chase - Hyper-V or ESXi?
Hyper-V.

What benefits does Hyper-V provide?
Hyper-V provides a number of benefits. It is a familiar technology within the company, with a number of people having experience with older Microsoft virtualisation technologies such as Virtual PC and Virtual Server, as well as experience with Hyper-V itself. Hyper-V is currently used to provide the SQL Server Reporting Services solution on two nodes.
Using Hyper-V gives us the ability to use the Windows Server Datacenter license that we have. This license covers any virtual machine that is run on a server where the host OS is the Datacenter edition, and means that we can create any number of VMs with any Windows OS to run on the hosts without any licensing issues.
Combined with the clustering features of Windows Server 2008 R2, we can cluster the virtual machine hosts to provide a high availability environment that would protect us from a hardware failure. This would fail over any VMs that were running on a host that failed to another host.
Clustering the virtualisation hosts also lets us use a feature called Live Migration, which allows us to move virtual machines between hosts with no interruption of service. This would be very useful for moving VMs between hosts to balance the load, and also for moving VMs off a host that requires maintenance, all without any downtime.

Why not ESXi?
A major drawback of ESXi is that we are not very familiar with VMware software. It does things in an unfamiliar way, so there is a learning curve involved with maintaining the host server and managing VMs. To use ESXi would also introduce a licensing problem for our VM OSes. Currently the VM OS licenses are covered by the Windows Server Datacenter license that is used as the host OS on Virtual1. If we used ESXi, this license could no longer be used because the Datacenter OS would no longer be the host OS. We would have to purchase additional OS licenses to cover the VMs.

Is ESXi all bad?
No, not at all. It seems to perform quite well, it has better monitoring tools than Hyper-V, and it provides the ability to over-allocate resources, so you can have a higher density of VMs on a server. vSphere Client is the tool used to maintain ESXi, and it seems quite comprehensive, although it took a bit of getting used to.

So why have we changed our minds about going for VMware over Hyper-V?
We talked to Vohkus about virtualisation, and were seduced by the idea of a virtual cloud in which to run and manage our servers, with the ability to move VMs around to balance the load on the host servers. The Windows Server 2008 version of Hyper-V couldn’t provide that level of functionality, so it appeared that VMware was the way to achieve this dream. However, it turns out that the improvements offered by the R2 version of Hyper-V do provide this functionality and, more than that, they provide it without any further investment. To provide this with ESXi and vSphere would need a further investment in OS licenses plus the vSphere license costs.
ESXi is considered best of breed in terms of features, with VMware having many years of experience in this field, but Hyper-V is playing catch-up and appears to be very close behind now.
As Hyper-V suits our requirements with no further investment, it has been decided to go with Hyper-V.

Okay, so will Hyper-V really provide everything without any further investment?
Umm, actually no. Yes, we can provide a high-availability environment and everything with what we have got, but we don’t have a way to perform live backups of VMs and there is no automated way to manage them. Microsoft’s System Center will provide the management tools for backing up and restoring VMs, and the facility to automatically spread VMs across hosts to manage the workloads on the host servers. There is a new version of System Center coming out at the end of the month and we’ll be investing in a copy.

What now?
So, that’s it. Windows Server 2008 R2 Core will be installed on Virtual2 and it will be set up as a single node cluster with Hyper-V. There will be a phased migration of VMs from Virtual1 to Virtual2 and once all the VMs have been migrated, Virtual1 will be re-built as an R2 Core server and joined to the cluster. We can then distribute the existing VMs across the cluster and add the servers that we are missing at the moment.