Monday, 20 December 2010

SQL Server MAXDOP and parallelism

At our company, we have a 4 core blade running SQL Server 2008 Enterprise Edition.
This is a very, very busy SQL Server and is running on HP EVA hardware, so the I/O is not an issue.
We have over 50 warehouses all running SQL Server 2008 and replicating data to/from this primary database.
This database is also used by our website, webservices and import/export applications. The only thing this database does not serve is our SSRS reports, which are hosted on another blade.

What is an issue, however, is CPU load: the cores were very busy. To help with this we can play around with two settings on SQL Server:
  • MAXDOP (Max degree of parallelism)
  • Cost threshold for parallelism

Our DBA and I had a few discussions around this recently, and read plenty of articles on it, such as those by Adam Machanic.

On a normal system, leaving these settings alone is fine, but on a very, very busy system you need to change them.

  1. MAXDOP = 1. This forces every query to use a single core.
  2. MAXDOP = 0, cost threshold = 20. Simple queries use a single core; complex queries (cost > 20) are allowed to run in parallel.
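Both are server-wide options set via sp_configure; here is a minimal sketch of the two configurations (the threshold of 20 is just the example value from option 2):

```sql
-- These are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Option 1: force serial plans for every query
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Option 2: allow parallelism, but only for expensive plans (cost > 20)
-- EXEC sp_configure 'max degree of parallelism', 0;
-- EXEC sp_configure 'cost threshold for parallelism', 20;
-- RECONFIGURE;
```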
What we found is that setting MAXDOP = 1 is optimal.

This is because running complex queries faster made replication suffer. By using MAXDOP = 1 we save CPU time by not having to merge the parallel streams at the end of each query, and we stop the "thread thrashing" that can occur when many queries all run in parallel.

Because the SQL box is very busy fielding many queries at once, using MAXDOP = 1 still utilises all cores, since four or more queries are running at the same time.
Replication is also a lot happier now.

Monday, 11 October 2010

Index Analysis

Some nice articles by Jason Strate on index analysis, page splits, etc. Check them out at:

Tuesday, 7 September 2010


Lifeguard

I just found the very first commercial software I wrote, in Z80 assembler. It's called Lifeguard and can be found here.
It's basically an unlimited-lives finder for the ZX Spectrum and ran on the Multiface hardware. Romantic Robot made the Multiface and distributed it for me. They used to pay me a commission on a monthly basis.
I still remember the algorithm for finding lives, grenades, timers, etc.

Good times.

Tuesday, 15 June 2010

Pex and Moles

Pex and Moles are Visual Studio 2010 Power Tools extensions that help with unit testing .NET applications.

  • Pex automatically generates test suites with high code coverage. Right from the Visual Studio code editor, Pex finds interesting input-output values of your methods, which you can save as a small test suite with high code coverage. Microsoft Pex is a Visual Studio add-in for testing .NET Framework applications.
  • Moles allows you to replace any .NET method with a delegate. Moles supports unit testing by providing isolation by way of detours and stubs. The Moles framework is provided with Pex, or can be installed by itself as a Microsoft Visual Studio add-in.
Microsoft Moles helps with writing unit tests, but what remains is the tedious task of writing the specific unit tests that exhaustively exercise and validate the logic of the code-under-test. Microsoft Pex can help you in understanding the input/output behavior of your code, finding inputs that cause the code-under-test to crash, and exploring parameterized unit tests to check whether your code implements the desired functionality for all inputs.
When you run Microsoft Pex on your .NET code, Pex generates test cases by analyzing the code-under-test. For every statement in the code, Pex will eventually try to create a test input that will reach that statement. Pex will do a case analysis for every conditional branch in the code—for example, if statements, assertions, and all operations that can throw exceptions.
The result appears as a table that shows each test attempted by Pex. Each line in the table shows the input and the resulting output or exceptions generated by the method for the specific input.

This is great, automatic unit test generation and complete code coverage!
You can download it here

OK, I've been using it for a while now and my verdict is: Pex rocks!
I can't believe I've never known about it until now!
Pex generates MSTest or NUnit tests. MSTest is the default, but if your project already has NUnit tests, it will use those.
Here is a video you should watch.
Pex first records your code's responses and generates asserts, so in future these can be used as regression tests. It achieves 100% code coverage by analysing your code and cleverly generating tests. Very impressive.

Pex generates edge cases, and every combination of null and empty strings for each parameter, etc., which is not what you'd like to see polluting your nice, clean NUnit tests. Plus, those would take forever to write by hand. So keeping your hand-crafted NUnit tests and the Pex-generated MSTests separate gives you the best of both worlds.
ReSharper will also happily run both the NUnit and MSTest tests in your solution together and gives you the green/red lights as normal.
In a couple of hours, Pex has so far generated over 1100 lines of MSTest C# code for me. And I didn't have to write a line of code (apart from fixing my C# code when Pex found a fault with it).
I like it very much as it allows me to concentrate on building NUnit tests and not mind-numbing combinatorial parameter checking :-)

Automated builds
Using FinalBuilder or CruiseControl to automate your builds and run your tests? In order to have the generated tests execute on a machine that does not have Pex installed, you can check the following assemblies into your depot:
  • Microsoft.Pex.Framework.dll
  • Microsoft.ExtendedReflection.dll

Here is a picture of Pex in action:

Sunday, 16 May 2010

Defragmenting SQL Server 2008+ using ALTER INDEX

In the next version of SQL Server (read 2009), DBCC INDEXDEFRAG will no longer work, as Microsoft are removing the command. You need to use ALTER INDEX instead:
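For a single index the replacement is straightforward (the table and index names below are hypothetical):

```sql
-- REORGANIZE is the online equivalent of DBCC INDEXDEFRAG
ALTER INDEX IX_Customer_Name ON dbo.Customer REORGANIZE;

-- For heavily fragmented indexes, a full rebuild is usually better
ALTER INDEX IX_Customer_Name ON dbo.Customer REBUILD;
```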

CREATE PROCEDURE sjh_defragment_indexes_2008
    @maxfrag FLOAT
AS
/* Simon Hughes 15 May 2010
For SQL 2008+
This stored procedure checks index fragmentation in a database and defragments
indexes whose fragmentation falls above a specified threshold: @maxfrag
Must be run in the database to be defragmented.
exec sjh_defragment_indexes_2008 5
*/
SET NOCOUNT ON

-- Write start time for information purposes
PRINT 'Started defragmenting at: ' + CONVERT(VARCHAR, GETDATE())

-- Check fragmentation (change 'bybox' to the name of your database)
DECLARE @execstr NVARCHAR(255)
DECLARE lcsr CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
    SELECT  'ALTER INDEX [' + i.name + '] ON [' + so.name + '] REORGANIZE;' AS execstr
    FROM    sys.sysobjects AS so
            INNER JOIN sys.dm_db_index_physical_stats(DB_ID('bybox'), NULL, NULL, NULL, 'LIMITED') AS phystat
            INNER JOIN sys.indexes AS i
                ON i.object_id = phystat.object_id
                   AND i.index_id = phystat.index_id
                ON so.name = OBJECT_NAME(i.object_id)
    WHERE   so.type = 'U'
            AND phystat.avg_fragmentation_in_percent > @maxfrag

OPEN lcsr
FETCH NEXT FROM lcsr INTO @execstr
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @execstr
    FETCH NEXT FROM lcsr INTO @execstr
END
CLOSE lcsr
DEALLOCATE lcsr

-- Report on finish time for information purposes
PRINT 'Finished defragmenting at: ' + CONVERT(VARCHAR, GETDATE())

Download code here.

How to turn off auto-shrink on all your databases

Shrinking a database is bad. Why? Well, it's because it causes massive index fragmentation. (You can read about it here.)

DECLARE @Base sysname
DECLARE database_cursor CURSOR FOR
    SELECT  name
    FROM    master..sysdatabases
    WHERE   name NOT IN ('master', 'msdb', 'model', 'tempdb')
            AND DATABASEPROPERTYEX(name, 'IsAutoShrink') = 1
OPEN database_cursor
FETCH NEXT FROM database_cursor INTO @Base
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('ALTER DATABASE [' + @Base + '] SET AUTO_SHRINK OFF')
    FETCH NEXT FROM database_cursor INTO @Base
END
CLOSE database_cursor
DEALLOCATE database_cursor

Defragmenting SQL Server 2005 and 2008 using dynamic management views

You can read all about SQL defragmentation here by Satya Jayanty. The bottom line: fragmentation is bad. It causes high CPU load and poor SQL performance. Shrinking your database causes massive index fragmentation. You can read about that here; therefore you should always turn off auto-shrink (I also have a script for that here).

For SQL 2000 you must use DBCC SHOWCONTIG. You can use the SQL given here.

However, for SQL Server 2005 onwards, there is a better way using the dynamic management view sys.dm_db_index_physical_stats introduced in SQL 2005.

Note: this proc is not going to work in the next version of Microsoft SQL Server (read 2009), because Microsoft are removing DBCC INDEXDEFRAG; you need to use ALTER INDEX instead, as shown in my next blog entry.

SELECT  OBJECT_NAME(i.object_id) AS TableName,
        i.name AS TableIndexName,
        phystat.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID('your_database_name'), NULL, NULL, NULL, 'LIMITED') phystat
        INNER JOIN sys.indexes i
            ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id
WHERE   phystat.avg_fragmentation_in_percent > 10
ORDER BY phystat.avg_fragmentation_in_percent DESC

Building that into a stored proc to execute every so often to defrag gives us:

CREATE PROCEDURE sjh_defragment_indexes
    @maxfrag FLOAT
AS
/* Simon Hughes 15 May 2010
For SQL 2005+ only
This stored procedure checks index fragmentation in a database and defragments
indexes whose fragmentation falls above a specified threshold: @maxfrag
Must be run in the database to be defragmented.
Please change 'your_database_name' to be the name of your database
exec sjh_defragment_indexes 5.0
*/
SET NOCOUNT ON

-- Check this is not being run in a system database
IF DB_NAME() IN ('master', 'msdb', 'model', 'tempdb')
BEGIN
    PRINT 'This procedure should not be run in system databases.'
    RETURN
END

-- Check fragmentation
SELECT  OBJECT_NAME(i.object_id) AS TableName,
        i.name AS TableIndexName,
        'DBCC INDEXDEFRAG (0, [' + OBJECT_NAME(i.object_id) + '], ' + i.name + ') WITH NO_INFOMSGS' AS execstr
INTO    [#frag]
FROM    sys.dm_db_index_physical_stats(DB_ID('your_database_name'), NULL, NULL, NULL, 'LIMITED') phystat
        INNER JOIN sys.indexes i
            ON i.object_id = phystat.object_id
               AND i.index_id = phystat.index_id
WHERE   phystat.avg_fragmentation_in_percent > @maxfrag
ORDER BY phystat.avg_fragmentation_in_percent DESC

SELECT * FROM [#frag]

-- Write start time for information purposes
PRINT 'Started defragmenting at: ' + CONVERT(VARCHAR, GETDATE())

-- Begin defrag
DECLARE @execstr NVARCHAR(1024)
DECLARE cc CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
    SELECT execstr FROM [#frag]
OPEN cc
FETCH NEXT FROM cc INTO @execstr
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @execstr
    FETCH NEXT FROM cc INTO @execstr
END

-- Close and deallocate the cursor
CLOSE cc
DEALLOCATE cc

-- Report on finish time for information purposes
PRINT 'Finished defragmenting at: ' + CONVERT(VARCHAR, GETDATE())

-- Delete the temporary table
DROP TABLE [#frag]

Download code here.

Friday, 14 May 2010

Generating Insert Statements

Quite a good article by Oleg Netchaev on how to automatically generate INSERT statements for a given table.
The SQL for this is available here.
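The general idea is just string concatenation in T-SQL. A minimal hand-rolled sketch, for a hypothetical table dbo.Colours(Id INT, Name VARCHAR(50)):

```sql
-- Build one INSERT statement per row
-- (note the doubled quotes to escape embedded single quotes)
SELECT 'INSERT INTO dbo.Colours (Id, Name) VALUES ('
       + CONVERT(VARCHAR(10), Id) + ', '''
       + REPLACE(Name, '''', '''''') + ''');'
FROM   dbo.Colours;
```

The linked article handles the general case: any table, any column types, NULLs, etc.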

If you only want a small subset generated, then have a look at the SSMS Tools Pack which is free.

Friday, 7 May 2010

Microsoft Hyper-V vs VMware ESXi

Full credit for this article goes to Jonathan Payne, who works as one of our esteemed DBAs. Unfortunately he has no blog, so here it is:

What has been going on?
Over the past week we have been testing and comparing VMware’s ESXi virtualisation product with Microsoft’s Hyper-V. This has come about because our existing virtualisation infrastructure, which is built on VMware Server, has not been performing as expected and has been having a real impact on our services. The servers V1, V2 and BuildServer are all virtual machines running in this environment on the physical server Virtual1, and we have had problems where VMware Server just couldn’t seem to allocate enough CPU resources to run the VMs correctly, even when at least 60% of CPU capacity was unused.

What are we comparing?
We have investigated the use of ESXi or Hyper-V to provide the virtualisation environment. These are both type 1 hypervisors, which means that hardware resources are presented directly to the VMs. So a 1-core VM is presented with 1 physical CPU core with nothing in between. Okay, it isn’t as simple as that, but you get the idea. For devices such as network adapters, there is more intervention involved and this is where the biggest differences between ESXi and Hyper-V occur. Hyper-V uses the installed Windows drivers, which are easy to update. ESXi creates its own environment and can only talk to the physical devices that it has pre-installed drivers for. This is why the hardware compatibility list is very important for ESXi, whereas it isn’t so important for Hyper-V, as the Windows drivers are used. In this case, our HP hardware is supported by ESXi, so there is no problem there.
We set up ESXi and Hyper-V environments to run on Virtual2 so that we could install, test and compare the solutions.

How will changing to a type 1 hypervisor help?
The VMs will get more resources, which should help the performance bottlenecks that we are experiencing.
The testing of both environments highlighted some of the pros and cons of using the technologies. Although the performance testing was limited, as it is difficult to create an environment containing multiple test VMs, we compared the test VM build server performances to the current production build server performance running under VMware Server.

So what were the test results?
Both ESXi and Hyper-V show performance improvements over VMware Server. The build server tests show that a software build will complete in half the time on the ESXi and Hyper-V versions of the VM when compared to the VMware Server VM. So we are expecting tangible performance improvements by moving to ESXi or Hyper-V.

Cut to the chase - Hyper-V or ESXi?

What benefits does Hyper-V provide?
Hyper-V provides a number of benefits. It is a familiar technology within the company, with a number of people having experience with older Microsoft virtualisation technologies, such as Virtual PC and Virtual Server, as well as experience with Hyper-V itself. Hyper-V is currently used to provide the SQL Server Reporting Services solution on two nodes.
Using Hyper-V gives us the ability to use the Windows Server Datacenter licence that we have. This license covers any virtual machine that is run on a server where the host OS is the datacenter edition and means that we can create any number of VMs with any Windows OS to run on the hosts without any licensing issues.
Combined with the clustering features of Windows Server 2008 R2, we can cluster the virtual machine hosts to provide a high availability environment that would protect us from a hardware failure. This would fail over any VMs that were running on a host that failed to another host.
Another feature of clustering the virtualisation hosts is that we can use a feature called Live Migration. This feature allows us to move virtual machines between hosts with no interruption of service. This would be very useful for moving VMs between hosts to balance loads across hosts and also to be able to move VMs if maintenance is required on a host without any downtime.

Why not ESXi?
A major drawback of ESXi is that we are not very familiar with VMware software. It does things in an unfamiliar way, so there is a learning curve involved with maintaining the host server and managing VMs. Using ESXi would also introduce a licensing problem for our VM OSes. Currently the VM OS licences are covered by the Windows Server Datacenter licence that is used as the host OS on Virtual1. If we used ESXi, this licence could no longer be used, because the Datacenter OS would no longer be the host OS. We would have to purchase additional OS licences to cover the VMs.

Is ESXi all bad?
No, not at all. It seems to perform quite well, it has better monitoring tools than Hyper-V and it provides the ability to over allocate resources, so you can have a higher density of VMs on a server. vSphere Client is the tool used to maintain ESXi and this tool seems quite comprehensive, although it took a bit of getting used to.

So why have we changed our minds about going for VMware over Hyper-V?
We talked to Vohkus about virtualisation, and were seduced by the idea of a virtual cloud in which to run and manage our servers, with the ability to move VMs around to balance the load on the host servers. The Windows Server 2008 version of Hyper-V couldn’t provide that level of functionality and so it appeared that VMware was the way to achieve this dream. However, it turns out that the improvements in Hyper-V offered by the R2 version of the product do provide this functionality and more than that, it provides it without any further investment. To provide this with ESXi and vSphere would need a further investment in OS licenses and the vSphere license costs.
ESXi is considered best of breed in terms of features with VMware having many years of experience in this field, but Hyper-V is playing catch-up and it appears to be very close behind now.
As Hyper-V suits our requirements with no further investment, it has been decided to go with Hyper-V.

Okay, so will Hyper-V really provide everything without any further investment?
Umm, actually no. Yes, we can provide a high availability environment and everything with what we have got, but we don’t have a way to perform live backups of VMs and there is no automated way to manage the VMs. Microsoft’s System Center will provide the tooling to manage backing up and restoring VMs, and the facility to automatically spread VMs across hosts to manage workloads on the host servers. There is a new version of System Center coming out at the end of the month and we’ll be investing in a copy.

What now?
So, that’s it. Windows Server 2008 R2 Core will be installed on Virtual2 and it will be set up as a single node cluster with Hyper-V. There will be a phased migration of VMs from Virtual1 to Virtual2 and once all the VMs have been migrated, Virtual1 will be re-built as an R2 Core server and joined to the cluster. We can then distribute the existing VMs across the cluster and add the servers that we are missing at the moment.

Monday, 19 April 2010

Disabling script debugging in VS 2008 + IE8

If you experience terrible performance and constant refreshing of a Script Documents node when using the Visual Studio debugger, follow the instructions here.

Work around:

  1. This work-around requires the Silverlight 2 Tools for Visual Studio 2008 SP1 to be installed. If you don't have these tools installed, you can download them from Microsoft. If this is not an option for you, there are some alternative work-arounds at the bottom of this blog post.
  2. Right click on your project and open up project properties
  3. Go to the 'Start Options' sheet, and click the 'Silverlight' button at the bottom of the screen. Because the debugger cannot debug both Silverlight code and Script code at the same time, this will disable Script debugging and therefore avoid the problem.

Friday, 12 March 2010

VHD disk images being read while VirtualPC is not running?

I set up a Virtual PC recently and noticed my hard disks have been very busy lately. My Virtual PC is not on, but checking which files were being accessed (Task Manager --> Performance --> Resource Monitor), I noticed that the VHD virtual disk image was being read a lot, which is really odd as it's not in use.

The solution was to disable the "Superfetch" service. Hey presto, all is quiet again.

Superfetch was trying to keep my system running fast by pre-loading the VHD into memory. Madness.

I've had Superfetch turned off all day, and my hard disks have never been so quiet.
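If you want to script the change rather than use the Services console, something like the following should work from an elevated command prompt. Note the assumption here: the internal service name for Superfetch on Vista/7 is SysMain (the display name is "Superfetch") — verify with `sc query` on your own machine first.

```shell
rem Stop Superfetch now, and prevent it starting at next boot
net stop SysMain
sc config SysMain start= disabled
```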


Now, if only there was a way to exclude folders from superfetch...
Having a Google around, there's no exclusion list; however, there is a way to refine what's going on via the registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters

Modify the EnablePrefetcher key to one of the following settings:

  • Disable Caching: 0
  • Cache Applications Only: 1
  • Cache Boot Files Only: 2
  • Cache Everything (default): 3

Experiment with this key to see if caching applications or boot files rather than everything still provides enough of a performance boost without getting bogged down.
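For example, to switch to caching applications only (value 1), the key can be set from an elevated command prompt rather than regedit:

```shell
rem Set EnablePrefetcher to 1 (Cache Applications Only); reboot to take effect
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" ^
    /v EnablePrefetcher /t REG_DWORD /d 1 /f
```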

Thursday, 11 March 2010

HOWTO: Compress Virtual PC virtual hard disks (.VHDs)

If you use Virtual PC 2007 you know that the virtual hard drive files (.VHDs) can grow to extremely large sizes. The .VHD files dynamically expand when software is installed in the .VHD. After installation, the original setup files are deleted and no longer take up space in the .VHD. However the .VHD does not correspondingly dynamically shrink in size. Once a .VHD file expands - it doesn't shrink.

EXAMPLE: If the installation of a product uses 500MB to expand its setup files and only really consumes 200MB in the "C:\Program Files\" directory, the .VHD file expands accordingly. When the 500MB of setup files are subsequently deleted after installation has completed, the .VHD file doesn't shrink afterwards.

That is wasted host storage.

So the question is:
How do I shrink the size of a .VHD when I know it's inflated unnecessarily?

Here's how to do it:
  1. Clear out the machine of filler
    Install CCleaner, which deletes all temporary files from the Windows installation, Internet Explorer cache, Recycle Bin, and any number of other "garbage"/"temp" directories in the Virtual PC.
  2. Disable hibernation
    Go into Power Options in Control Panel and disable hibernation to get rid of the large hibernation file.
  3. Capture the Virtual Disk Precompactor .ISO
    In the menu of the Virtual PC window for your machine, capture the Virtual Disk Precompactor .ISO file, which is located at:
    x32: C:\Program Files\Microsoft Virtual PC\Virtual Machine Additions\Virtual Disk Precompactor.iso
    x64: C:\Program Files (x86)\Microsoft Virtual PC\Virtual Machine Additions\Virtual Disk Precompactor.iso
  4. Run the Precompactor
    Go to My Computer and double click on the mounted disk. It will begin to run the precompactor software which will essentially zero out all the free space on the disk, making it available for elimination from the file by the Virtual Disk Wizard, which we will use later. This process may take upwards of 15 minutes depending on the size of the VHD file.
  5. Shutdown the Virtual PC
  6. Run the Virtual PC - Virtual Disk Wizard
    From the Virtual PC console, click File --> Virtual Disk Wizard and choose "Edit an existing virtual disk". It will ask you to identify the .VHD file to reduce in size - select it.
  7. Compact the .VHD
    Select the "Compact" option then select "Replacing the original file". This again will take a while.
When this process finishes, you should have a .VHD file with a much smaller size.

The other final thing you can do is on your host PC, right-click and compress the folder:
C:\Users\Public\Downloads\My Virtual Machines

As an example:
  • Before compression: my 8 GB VHD file (Windows XP 32-bit) was reported as taking only 3 GB of physical space due to folder compression.
  • After compression: after following the steps above, the VHD file was 3.5 GB and was reported as taking 2.6 GB of physical space due to folder compression.
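That final folder-compression step can also be done from the command line with compact.exe instead of the right-click dialog (using the same folder path as above):

```shell
rem /c = compress, /s = recurse into subdirectories, /i = continue on errors
compact /c /s:"C:\Users\Public\Downloads\My Virtual Machines" /i
```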