New Blog — sqlPerspectives

I’m excited to introduce everyone who follows this blog to a new blog: sqlPerspectives. This blog will have three regular authors and one guest author, and will post on a weekly basis while discussing the content of the book SQL Server MVP Deep Dives, available here:

Add this new blog site to your favorites.


Move non clustered indexes to a new filegroup

So, over the years I have needed to move an index or a set of indexes to a new or different filegroup. I’ve written scripts for this in the past, lost them along the way, and couldn’t find anything out there on the Bing or Google interwebs.

The concept of multiple files and filegroups for a database is beginning to sink in with a lot of developers and DBAs. The rationale for using them is typically sound; in my experience it’s usually a desire to decrease the internal file fragmentation of the data and the indexes. Often there is a performance improvement as well, because this allows you to move those indexes to that RAID 1+0 array your SAN admin has been waiting for you to use. 🙂

Here’s a stab at it that makes sense to me, and it worked great for what I needed to accomplish today. What it doesn’t take into account are partitions and their related indexes, so be sure to read the code and test it rather than just taking my word that it works well.

I’m 100 percent open to ideas and suggestions, in the form of T-SQL. I certainly appreciate comments and ideas; however, I will not write the T-SQL for your idea. I’m giving this away, and it’s my hope that as you improve upon it, you will kindly re-send or re-post it with the improvements.

Lastly, this handles non-clustered indexes only. It will ignore system indexes and statistics. Both are called out in the script, so in the event that you want them, it’s an easy modification. I have comments in the code posted below and I STRONGLY advise you to read them before you run this on a live system.
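For reference, the core operation the script performs for each index can be sketched like this (table, index, and filegroup names here are hypothetical, not the ones in the attached script); the `DROP_EXISTING = ON` option rebuilds the index on the target filegroup in a single step instead of a separate DROP/CREATE:

```sql
-- One-time setup: the target filegroup must already exist.
ALTER DATABASE MyDatabase ADD FILEGROUP [IndexFileGroup];
ALTER DATABASE MyDatabase
    ADD FILE (NAME = N'MyDatabase_idx1',
              FILENAME = N'F:\SQLData\MyDatabase_idx1.ndf')
    TO FILEGROUP [IndexFileGroup];

-- Per-index move: rebuild the non-clustered index on the new
-- filegroup. DROP_EXISTING = ON keeps the index definition and
-- avoids the table being without the index mid-operation.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (DROP_EXISTING = ON)
    ON [IndexFileGroup];
```

The attached script essentially generates a statement like that last one for each qualifying non-clustered index it finds.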

Click on the word doc attached to this post.  WordPress really makes the formatting of the code look ugly and I haven’t yet figured out how to make it look decent.



On an Island with no Umbrella Drinks?

I was tagged by Chris Shaw to answer this question.

“So You’re On A Deserted Island With WiFi and you’re still on the clock at work. Okay, so not a very good situational exercise here, but let’s roll with it; we’ll call it a virtual deserted island. Perhaps what I should simply ask is if you had a month without any walk-up work, no projects due, no performance issues that require you to devote time from anything other than a wish list of items you’ve been wanting to get accomplished at work but keep getting pulled away from I ask this question: what would be the top items that would get your attention?”

heh… well, in the event that my golf clubs didn’t make it, I suppose that I would spend a great deal of time learning more about the guts or internals of SQL Server. It’s a subject area that I’m very interested in, yet I find that the time to truly dig in is often cut short by work.

I would also spend a fair bit of time coming up with ideas for articles, blog posts and presentations. These are all activities that I enjoy and it’s one way that I can say “thank-you” to those who have helped me along the way.

Given enough time and concentration, I would write a tool that could take any schema and the data in that schema and build a true load test. Over the years I have encountered many clients who face the challenge of truly perf testing their environment prior to going “live”. Often, those who have invested the time and money into this activity find that the performance they experienced in the lab is different than in production… The reasons vary but I believe that I’ve seen it enough to be able to hit this pretty closely.

Wow, *if* I could accomplish all of that in a month, I’d pack up my laptop and flip-flops and hit the beach. 🙂

Since I’m new to blogging, I’m not going to tag anyone just yet.

Update Statistics for a partition

Last week, I was working with a client and we wanted to update statistics on a partition. After trying it out, then consulting with BOL and finally twitter, I determined that it could not be done. Thanks to Lara Rubbelke for her help and pointing out the second issue that is discussed below.

In addition to that problem, when a partition’s index is defragmented, statistics are not updated. An easy workaround is to update statistics on the whole object, but the problem is that this updates statistics for all of the partitions and/or indexes of that object, not just the one that was defragmented.
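A minimal illustration of the gap (table and index names are hypothetical): you can reorganize a single partition of an index, but the follow-up statistics update has no partition clause, so it touches the whole index:

```sql
-- Defragment only partition 3 of one index -- this is supported...
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
    REORGANIZE PARTITION = 3;

-- ...but statistics can only be refreshed at the index (or table)
-- level; there is no PARTITION option here, so data from every
-- partition feeds the rebuild, not just the one we defragmented.
UPDATE STATISTICS dbo.Sales (IX_Sales_OrderDate);
```

That second statement is the workaround mentioned above, with the overhead being that it reads partitions that never changed.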

So, I created a suggestion on Microsoft Connect. I’ll post the entire subject/issue below, but please visit the site, vote, and validate this request.

This is an https:// link, so if you aren’t logged into Connect it will not work. That said, if you don’t have a login for this site, I’d recommend that you get one. Tons of great information can be found there. If you wish to search for it, go to the site and type in “Update Statistics at the partition level” OR search for feedback ID 468517.

Full description and proposed solution below:

I’d like to see Update Statistics modified to include partitionID in combination with the existing table or indexed view name.

With the improvements in 2008 and partitioning, one of my clients that utilizes partitioning is seeing very large result sets. Having to update statistics against partitions that haven’t changed in order to achieve the change on the partition that actually needs it creates additional overhead that should not be required. It also forces a change to plans that wouldn’t otherwise be altered, which causes memory churn in the proc cache.

One other related note is that you can defrag an individual partition index, but it will not update the statistics.

Proposed Solution
Two parts:
1.) Modify UPDATE STATISTICS to include an option for declaring either a partition name or partition ID for either a table or an indexed view:

UPDATE STATISTICS table_or_indexed_view_name
{ index_or_statistics_name }
| ( { partition_index_or_partition_statistics_name } [ ,…n ] )

2.) Upon completion of the defrag of an individual partition index, have it update the statistics for that partition.

Improved Reliability
Improved Administration
Improved Performance

Meta Data – Cardinality Explored

Last night I had the privilege of speaking at the Colorado Springs SQL Server Users Group. The subject was Meta Data – Cardinality Explored.

This topic is a very enjoyable one for me. Along with the DMVs that SQL Server now provides, the metadata about your database implementations is critical to their scalability, stability, accuracy, and performance.

Over the years, I have found that cardinality is not a topic that is well understood. I also believe it isn’t discussed all that often. It is an absolutely critical data-set for the developers, DBAs, testers, and performance testers out there.
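As a quick taste of what I mean by cardinality (the table and column names here are hypothetical, not from the attached demo script): it is simply how many distinct values a column holds, and its inverse, density, is what the optimizer leans on when estimating rows for an equality predicate.

```sql
-- Cardinality of a column: distinct value count, and density
-- (1 / distinct count), which drives row estimates for
-- predicates like WHERE CustomerID = @id.
SELECT
    COUNT(*)                         AS total_rows,
    COUNT(DISTINCT CustomerID)       AS distinct_customers,
    1.0 / COUNT(DISTINCT CustomerID) AS density
FROM dbo.Orders;

-- The density SQL Server has actually stored for an index
-- can be inspected with:
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID);
```

Comparing what you compute against what DBCC SHOW_STATISTICS reports is a good way to spot stale statistics.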

Attached to this post you will find my SQL script and PowerPoint slide deck. I’d recommend starting with the slide deck… walk through it, and when you see “demo”, open up the SQL file and learn how this result set can make an impact on the databases that you are responsible for or those that you are developing against.

Have fun and if you have questions about the presentation or the sql code I put together, feel free to comment!

Thanks again to the terrific folks down in Colorado Springs for the warm welcome and not falling asleep as I droned on and on. =)

MetaData – Cardinality Explored – SQL

SQL Server 2008 R2

Yesterday at TechEd, Microsoft announced the R2 version of SQL Server 2008.

It created a bit of confusion for me, as I thought the next releases, “Kilimanjaro” and “Madison”, were both going to be part of a new (major) SQL Server version. As I understand it, much of this functionality will simply be a “bolt-on” to SQL Server 2008.

When I first read about this, I was conflicted. However, after spending the evening contemplating many of the conversations I’ve had with clients around 2008, I think this move by Microsoft is brilliant! The main reason I’m excited is the MDM and self-service reporting that Kilimanjaro and Madison were all about. Having them in the market sooner will be substantial and, more importantly, it creates an additional incentive to help justify the move to 2008. Over the last couple of years, I have had many conversations with clients who “want” to go to 2008 but have had a hard time justifying the regression testing, time, etc… The additional features now included will really round out a significant 2008 release.

Details of the offering can be found below.

I just read this article and I liked it as well.

These are the specific items that I’m looking forward to seeing:

Master Data Services (MDS)

Master Data Hub
Improve the Quality of Your Data
• Centralized approach to defining, deploying, and managing master data
• Ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise
• Serves both BI and operational requirements and improve the quality and efficiency of data and processes for all systems
• Enables organizations to track versions of master data to answer questions about master data at a specific point in time
• Supports integration through a Services (WCF) API, business rules, workflow, notification, and hierarchy management
Stewardship Portal
Improve the Quality of Your Data
• Central portal to create, edit, and update master data, including members and hierarchies
• Enables direct stewardship and ensures data quality by enforcing business rules and managing supporting domains
• Enables model administrators to upload data, review validation status, manage data versions, develop models, copy versions of models and define the structure of hierarchies

Self Service Analysis
Add-in for Microsoft Office SharePoint 2010
Share & Collaborate with Confidence
• Enables users to publish reports and analyses to SharePoint to make them available to others in the organization
• Applications and reports can be refreshed automatically while also maintaining version control and tracking
• Central, secure location where users can easily control who has access to sensitive data while also maximizing the number of people who can view the reports
SharePoint 2010 Management Console
Manage User-Generated Analytical Applications
• Provides a single, comprehensive management tool through Microsoft Office SharePoint that gives IT administrators the ability to set server policies and monitor Self Service BI activities
• IT can manage and secure all BI assets, freeing the original authors to focus on business priorities
• Dashboard views to monitor access and utilization of analyses and reports as well as track patterns of hardware usage to help ensure the right security privileges are applied and user generated solutions are continuously available, up-to-date, and secured

Search… Are you Lijit?

I’ve recently added a new button on the blog. You can see it in the upper right hand corner. It’s labeled “Search my Network”. Click it!

This is pretty cool actually. What it does is search my blog for your criteria and it also searches other sites that I’ve added as “trusted”. Sites include Microsoft’s SQL CAT team, SSWUG, Ted Malone’s blog, Brent Ozar’s blog etc…

Think about this for your own blog or website! Wow, to be able to direct your readers to information that you deem as trustworthy and accurate. It is the opposite of a Google search in that the results are quality based (based on them being “trusted” by you), versus quantity based (Google).

This clever technology is provided free of charge, minus the ad pollution, but hey, it’s free! The company name is Lijit, based here in Colorado at the foothills of the majestic Rocky Mountains.

A shout out to a great colleague and even better friend, Zach Conger for clueing me in.
