New Home

For those of you who follow this blog, please change your bookmarks to http://blog.DataRealized.com 

Thanks for reading!

Kerberos, SQL Server 2008, Windows Server 2008

This blog post has been updated with the following information:

Updated Jan. 4th, 2011 to include:

Service account information for SQL Browser  |  Loopback registry setting  |  Additional SPNs for Analysis Services

The updated information and URL for this blog can be found here:  http://blog.datarealized.com/?p=117

So over the past couple of weeks I have been working on a multi-node Windows Server 2008 cluster (5 active, 1 passive).  Part of the prep work, before the databases were migrated from their old Windows Server 2003 and SQL Server 2005 homes, was to set up linked servers.  Simple task, right?

It always had been in the past.

Ever seen this error? 

Message
Error: 18456, Severity: 14, State: 11.

Login failed for user ‘NT AUTHORITY\ANONYMOUS LOGON’. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors.

Now, I’ve seen that error plenty of times and typically it’s for a handful of well-known reasons such as these:  Resource #1 | Resource #2 | Resource #3 | Resource #4 … each of these provided a great deal of insight, but none of them resolved the issue.

One of the most frustrating elements of this issue was getting it to occur on demand (either failure or success).  It appeared to be an intermittent issue.  Well, at the end of the day it wasn’t an intermittent issue at all… it was simply the manner in which I was testing it.

To begin, I ran kerblist tickets from a command prompt (kerblist.exe is part of the 2003 resource pack); this showed me which tickets had been generated.  … screech… back up.  Prior to all of this, it’s important to validate that your SPNs are already set and accurate.  If you are unsure, visit this site and spend some time understanding Kerberos and SPNs.

Once I had a list of the tickets, I could see that some were generated as I expected, yet others that I expected to see there were missing… So, I would run “kerblist purge” from the command prompt, successfully TEST a linked server … and not have a ticket generated.  c.o.n.f.u.s.i.n.g.

Well, it’s because my SQL Server session had already been established and the ticket was created when I originally connected to the server via Management Studio.  When I ran the “kerblist purge”, it did indeed remove the Kerberos tickets, but it did NOT sever my already established connection to SQL Server, so it continued to work even without a ticket being generated.
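For what it’s worth, a quick way to check how a given session actually authenticated is a standard DMV query (nothing specific to this case); run it from the fresh connection you are testing and look for KERBEROS vs. NTLM in the result.

-- How did the current connection authenticate?  KERBEROS, NTLM or SQL
SELECT session_id, auth_scheme, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;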

Now, the fix for the testing is pretty easy.  Don’t stay connected in Management Studio.  Close it and re-open it, or un-register and re-register the server.  Voilà, this solved the “intermittent” issues with linked servers… kind of.  Throughout the process I was personally convinced that it was a physical host issue on some level or another.  So, I was moving resources from one node to another, testing, moving, testing, with all sorts of differing results.  It got to the point where I created 4 linked servers on each server in an attempt to build out a matrix of what worked and what didn’t.  Even after all of that work it was still inconsistent.

Enter Microsoft.  Tier one and two spent a lot of time on this (two weeks) and we did a slew of items ranging from registry entries to new service accounts to trying differing ways of connecting to the cluster using kerberos etc… none of it was successful.  I do owe an apology to tier two support for how frustrated I became after a week of working on this with them.  They were trying their hardest and I did not request an escalation point as soon as I should have.  Once I finally did, tier 3 called me and we got down to business.  Tait Neville with the Enterprise Platforms Support – Directory Services team is one heck of a guy.  We spent about 8 hours on the phone the first day and ran traces, kerblist commands galore, setspn commands six ways from Sunday, read logs, re-read logs, failed over the environment, blew away stuff, re-built stuff… and on and on… That day ended with us realizing that the inconsistency we were seeing was perhaps a larger issue than the lack of connectivity; because sometimes, it did indeed work correctly.

Late in the call (7+ hours) Tait came across a newer KB article that showed promise, but didn’t fully explain what we were seeing.  Nonetheless, we installed it, tried it out: still inconsistent.  Went home, frustrated and tired.

The next day brought about a new attitude (namely mine).  It started with the realization that I wrote about above (Management Studio staying connected)… Once we could easily reproduce tickets being generated, progress was very swift.  At that point, we decided to clear out the huge swath of SPNs that had been created over the course of trying and testing everything.

Here’s what we ended up with:

MSSQLSvc/Servername.domainname:Port  (in all cases we are running named instances in SQL Server).  A total of five entries, with the server names and ports differing as applicable.

MSSQLSvc/Servername.domainname:SQLServerNamedInstanceName

MSSQLSvc/Servername:Port

Wait for AD to replicate… ho hum, ho hum…  Voilà.  Works like a charm.

It turned out that we had a couple of issues in our environment.

1.) We had too many SPNs (none of them malformed); not all of them were needed, and the extras were causing some odd issues.

2.) Every time we failed over the SQL Server resource to another node, we would see it work sometimes and not other times.  The hotfix resolved this.

To Paul (the SE here) and Tait (MSFT engineer), thank you.

Fun times… well, not really, but I learned a lot and hopefully this post will also help you if you find yourself in a similar scenario.

The main takeaway for me is twofold.

1.) It’s absolutely critical to have a reproducible set of steps that you can run at will.  A lot of time was spent on this case because of what appeared to be inconsistent behavior, when in reality it was the test itself that was causing a lot of the confusion.

2.) When you just can’t figure it out, a call to Microsoft, while long, time-consuming and at times frustrating, can yield tremendous results.

Plan Guides

In November 2008 I had the opportunity to speak at Dev Connections.  One of the subjects that I covered was Plan Guides.  The abstract read “In SQL Server 2008, there exists a way to change or create a new plan for a query without touching the query itself. In environments where the database was provided by a third-party or in a high availability environment, these will help to decrease your impact on production systems while improving performance at the same time. Over the course of the session, we will walk through the creation of a plan guide.” 

The reason for this blog post is that I was discussing the upcoming event that the Denver SQLUG is hosting with Kalen Delaney on June 9th, 2010.  Register here

I’ve already registered and I’m very much looking forward to hearing her talk on this subject.  I wanted to provide some background and familiarity with plan guides for those who follow my blog and will be attending the UG meeting on the 9th.

For me personally, understanding plan guides, why they matter, how to use them and what it’s all about stems from a solid understanding of execution plans and how SQL Server chooses to get to the data based on the query that you are passing it.  This is not a trivial subject, and it is best understood with a solid foundation in HOW the optimizer reports the methods it uses to retrieve data.
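If you want to follow along at home, the simplest way to start looking at what the optimizer chose is to capture the actual plan for a query.  A minimal example, assuming the AdventureWorks2008 sample database (the query itself is just an illustration):

USE AdventureWorks2008;
GO
SET STATISTICS XML ON;   -- return the actual execution plan (as XML) along with the results
GO
SELECT TOP (10) SalesOrderID, OrderDate, TotalDue
FROM Sales.SalesOrderHeader
ORDER BY TotalDue DESC;
GO
SET STATISTICS XML OFF;
GO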

In my slide deck these questions are answered:  

     Q:  Why Plan Guides?  

     A:  Due to increased complexity in database environments today.  These complexities include LINQ to SQL and its impact on the plan cache, C.O.T.S. (Commercial Off The Shelf) implementations, and the desire to be proactive vs. reactive in your environments. 

     Q:  What are they?  (Plan Guides) 

     A:  Plan Guides can be used to optimize the performance of queries when you cannot or do not want to change the text of the query directly.  Plan Guides can be useful when a subset of queries in a database application deployed from a third-party vendor are not performing as expected and the ability to change the query does not exist.  Plan Guides influence optimization of queries by attaching query hints or a fixed query plan to them.  (The last answer is from Microsoft). 

The slide deck and the demo also talk about the different memory pools in SQL Server and how to “see” the cache on the server.  I also touch on the life of cache and why it’s relevant. 
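As a rough illustration of what “seeing” the cache looks like (a generic query, not the exact one from the demo scripts):

-- What is sitting in the plan cache, and how often has each plan been reused?
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, cp.size_in_bytes, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;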

One of the biggest items in this presentation revolved around parameterization (forced or simple).  This is an incredibly powerful option that must be tested thoroughly in your environment.  The gist of it is that if you have a ton of compilations in your environment, changing parameterization to forced might be something to really look into, as it can increase your concurrency due to the decrease in compilations.
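For reference, forced parameterization is a database-level setting.  The sketch below shows the syntax against AdventureWorks2008; again, test it thoroughly before flipping it on a production database.

-- Enable forced parameterization for the database (and the statement to revert to the default)
ALTER DATABASE AdventureWorks2008 SET PARAMETERIZATION FORCED;
-- ALTER DATABASE AdventureWorks2008 SET PARAMETERIZATION SIMPLE;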

Up until now in the demonstration, all I had covered was background information that is helpful to truly grasp Plan Guides… the last half of the presentation is on plan guides, how to create them, disable them, view them etc…    

Without further ado, here is the PowerPoint and the set of t-sql scripts that I used in my demos.  I’ve also commented the code below, which should help you as you walk through the PowerPoint presentation.

We begin with an Object Plan Guide   

 

USE AdventureWorks2008;
GO
CREATE PROCEDURE Sales.GetSalesOrderByCountry (@Country_region nvarchar(60))
AS
BEGIN
    SELECT *
    FROM Sales.SalesOrderHeader AS h, Sales.Customer AS c,
        Sales.SalesTerritory AS t
    WHERE h.CustomerID = c.CustomerID
        AND c.TerritoryID = t.TerritoryID
        AND CountryRegionCode = @Country_region
END;
GO

  

-- Create an OBJECT plan guide

EXEC sp_create_plan_guide
@name = N'Guide1',
@stmt = N'SELECT *
    FROM Sales.SalesOrderHeader AS h, Sales.Customer AS c,
        Sales.SalesTerritory AS t
    WHERE h.CustomerID = c.CustomerID
        AND c.TerritoryID = t.TerritoryID
        AND CountryRegionCode = @Country_region',
@type = N'OBJECT',
@module_or_batch = N'Sales.GetSalesOrderByCountry',
@params = NULL,
@hints = N'OPTION (OPTIMIZE FOR (@Country_region = N''US''))';

-- @hints || base this on the selectivity of the data; "US" is good because it will be
--           representative of roughly 2/3rds of the depth in the index / heap
-- Where do the created plans exist?

 

SELECT * FROM sys.plan_guides  -- (notice the HINTS column)
GO

--Disable the plan guide.
EXEC sp_control_plan_guide N'DISABLE', N'Guide1';
GO
SELECT * FROM sys.plan_guides
-- Now shows as disabled

--Enable the plan guide.
EXEC sp_control_plan_guide N'ENABLE', N'Guide1';
GO
SELECT * FROM sys.plan_guides
-- Now shows as enabled

--Drop the plan guide.
EXEC sp_control_plan_guide N'DROP', N'Guide1';
GO
SELECT * FROM sys.plan_guides
-- Now it's gone

-- Disable ALL plan guides.
EXEC sp_control_plan_guide N'DISABLE ALL';
GO

   

-- SQL Plan Guide
-- Example:  Set MAXDOP (degree of parallelism) = 1

EXEC sp_create_plan_guide
    @name = N'Guide1',
    @stmt = N'SELECT TOP 1 *
              FROM Sales.SalesOrderHeader
              ORDER BY OrderDate DESC',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (MAXDOP 1)';
GO

   

-- NOTE ** any OPTION clause in a SELECT statement will work with the @hints parameter.

-- Template Plan Guide
--     Let's force the parameterization on this statement.

DECLARE @stmt nvarchar(max);
DECLARE @params nvarchar(max);
EXEC sp_get_query_template
    N'SELECT * FROM AdventureWorks2008.Sales.SalesOrderHeader AS h
      INNER JOIN AdventureWorks2008.Sales.SalesOrderDetail AS d
          ON h.SalesOrderID = d.SalesOrderID
      WHERE h.SalesOrderID = 45639;',
    @stmt OUTPUT,
    @params OUTPUT;
EXEC sp_create_plan_guide N'TemplateGuide1',
    @stmt,
    N'TEMPLATE',
    NULL,
    @params,
    N'OPTION(PARAMETERIZATION FORCED)';
GO

   

-- NOTE:  This forces the query to be parameterized and then a guide can be created against it.

-- Creating a plan using XML_showplan

USE AdventureWorks2008;
GO
SELECT City, StateProvinceID, PostalCode FROM Person.Address ORDER BY PostalCode DESC;
GO
DECLARE @xml_showplan nvarchar(max);
SET @xml_showplan = (SELECT query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, DEFAULT, DEFAULT) AS qp
    WHERE st.text LIKE N'SELECT City, StateProvinceID, PostalCode FROM Person.Address ORDER BY PostalCode DESC;%');
EXEC sp_create_plan_guide
    @name = N'Guide1_from_XML_showplan',
    @stmt = N'SELECT City, StateProvinceID, PostalCode FROM Person.Address ORDER BY PostalCode DESC;',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = @xml_showplan;
GO

   

-- Create a plan guide for the query by specifying the query plan in the plan cache via the plan handle.

USE AdventureWorks2008;
GO
SELECT WorkOrderID, p.Name, OrderQty, DueDate
FROM Production.WorkOrder AS w
JOIN Production.Product AS p ON w.ProductID = p.ProductID
WHERE p.ProductSubcategoryID > 4
ORDER BY p.Name, DueDate;
GO

   

-- Inspect the query plan by using dynamic management views.

SELECT * FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset) AS qp
WHERE text LIKE N'SELECT WorkOrderID, p.Name, OrderQty, DueDate%';
GO

   

-- Create a plan guide for the query by specifying the query plan in the plan cache via the plan handle.

DECLARE @plan_handle varbinary(64);
DECLARE @offset int;
SELECT @plan_handle = plan_handle, @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset) AS qp
WHERE text LIKE N'SELECT WorkOrderID, p.Name, OrderQty, DueDate%';
EXECUTE sp_create_plan_guide_from_handle
    @name = N'Guide8',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;
GO

   

Reference the previous example of ….. “Means that there are 4811 plans for three nearly identical statements” 

With this, a plan can be created against that string….   ‘SELECT WorkOrderID, p.Name, OrderQty, DueDate%’ 

This is HUGE!!!  Think about the Commercial-Off-The-Shelf (COTS) implementations in your environment where you are largely bound by licensing and cannot physically modify the schema, triggers, clustered indexes, etc…  Additionally, there are still many applications that use a lot of dynamically built or in-line t-sql that doesn’t always perform well or, as in the example that I cited, generates hundreds or thousands of execution plans for the same t-sql statement.
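One hedged way to spot this situation on your own systems is to group the cached query stats by query_hash (available in SQL Server 2008); a large plan count for a single hash suggests many nearly identical statements each compiling their own plan.

-- Find query shapes that have produced many cached plans
SELECT qs.query_hash,
       COUNT(*)                AS plan_count,
       SUM(qs.execution_count) AS total_executions
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
HAVING COUNT(*) > 1
ORDER BY plan_count DESC;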

This next piece of t-sql checks the validity of the plan guides in the system. 

             An empty result set means that all plan guides are valid. 

 

USE AdventureWorks2008;
GO
SELECT plan_guide_id, msgnum, severity, state, message
FROM sys.plan_guides
CROSS APPLY sys.fn_validate_plan_guide(plan_guide_id);
GO

   

Select statement taken from SQL Profiler 

Look at the execution plan. 

 

 SELECT COUNT(*) AS c
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
  ON h.SalesOrderID = d.SalesOrderID
WHERE h.OrderDate BETWEEN '20000101' and '20050101';
GO

   

Look at the execution plan and note the Hash Match Inner Join  

Now, let’s make it a Merge Join instead. 

 

EXEC sp_create_plan_guide
    @name = N'MyGuideProfiler1',
    @stmt = N' SELECT COUNT(*) AS c
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
  ON h.SalesOrderID = d.SalesOrderID
WHERE h.OrderDate BETWEEN ''20000101'' and ''20050101'';
',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (MERGE JOIN)';

   

Spacing is critical on this.

Check the statement to ensure that the plan guide will force a merge join.

 

 SELECT COUNT(*) AS c
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
  ON h.SalesOrderID = d.SalesOrderID
WHERE h.OrderDate BETWEEN '20000101' and '20050101';

--Drop the plan guide. (if needed)
EXEC sp_control_plan_guide N'DROP', N'MyGuideProfiler1';
EXEC sp_control_plan_guide N'DROP', N'TemplateGuide1';
EXEC sp_control_plan_guide N'DROP', N'Guide1';

-- Jeremy Lowell http://www.datarealized.com || https://datarealized.wordpress.com || http://sqlperspectives.wordpress.com

Feel free to ask questions as you step through this and I look forward to seeing everyone on the 9th!

Statistics for partitions

As data volumes continue to expand and partitioning is further adopted in SQL Server 2008, the lack of control over statistics at the partition level will become a larger issue for more companies.

I placed this as a Connect item on Microsoft’s site in June of 2009.  Per the final comment from Microsoft, the more votes it has, the higher its priority will become.

If you are reading this post, please vote, here:  https://connect.microsoft.com/SQLServer/feedback/details/468517/update-statistics-at-the-partition-level

I’m posting the connect details below. 

I’d like to see Update Statistics modified to include partitionID in combination with the existing table or indexed view name.

With the improvements in 2k8 and partitioning, very large result sets are being seen at one of my clients that utilizes partitioning. Having to update statistics against partitions that haven’t changed, in order to achieve the change on the partition that actually needs it, creates additional overhead that is not required. It also forces a change to plans that wouldn’t otherwise be altered, which causes memory churn in the proc cache.

One other related note is that you can defrag an individual partition index, but it will not update the statistics (see the sketch after the Connect details below).

Proposed Solution
Two parts:

1.)
Modify Update Statistics to include an option for declaring either a partition name or partitionID for either a table or an indexed view.
i.e…
UPDATE STATISTICS table_or_indexed_view_name
[
    {
        { index_or_statistics_name }
        | ( { partition_index_or_partition_statistics_name } [ ,...n ] )
    }
]

2.) Upon completion of the defrag of an individual partition index have it update the statistics for that partition.

 
Benefits
Improved Reliability
Improved Administration
Improved Performance
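To illustrate the current behavior referenced above: the only partition-level maintenance available today is rebuilding a single partition of an index, and the statistics still have to be updated afterward at the table/index level.  A minimal sketch, with hypothetical object names and partition number:

-- Rebuild only partition 3 of a (hypothetical) partitioned index
ALTER INDEX IX_MyPartitionedIndex ON dbo.MyPartitionedTable
REBUILD PARTITION = 3;

-- Statistics must still be updated for the whole index; there is no partition-level option
UPDATE STATISTICS dbo.MyPartitionedTable IX_MyPartitionedIndex;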

DMV’s and t-sql for the DBA

The Rocky Mountain Tri-Fecta was today and I had the opportunity to speak.  I mentioned to those brave souls who attended my session that I would post the slide deck and the demos.

Feel free to download them (links are at the end of this post) and provide your comments / thoughts.

I had a great time attending the event and I learned a great number of things.  A special thanks to everyone involved with putting this together.  I’m still amazed at the stacks of empty pizza boxes.  WOW!

Part of this post is also to ask the readers of this blog if they would be interested in checking out a new tool that I’ve been testing over the last couple of months.  It’s formally known as SmartMon and it provides a form of pattern analysis against the data looking for anomalies and patterns of similar behavior.  It has decreased the amount of time that I spend doing analysis on perfmon files by many factors.  It’s a great tool and the guys building it are stand up guys in the San Jose, CA area.  If you are interested in becoming one of the first handful of people to see the release of this product, e-mail feedback@perfmonanalysis.com.  Please mention that I referred you.

Without further ado, enjoy the SQL and thanks again for the opportunity to share what I’ve learned with all of you.

RockyMountainTriFecta (Link to the slide deck PPT)

Essentials-DMVs_tsql_dba (t-sql – Demo)
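If you can’t open the attachments right away, here is a representative (and deliberately simple) example of the kind of DMV query the session covers: the top wait types on the instance since the last restart or wait-stats clear.  The filtered wait types are just a small, illustrative set of benign waits.

-- Top waits on the instance (a few benign wait types filtered out)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
                        N'BROKER_TASK_STOP', N'WAITFOR')
ORDER BY wait_time_ms DESC;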

Technical Debt, Normalization Debt, Requirements Debt, Git-r-done Debt, OH MY!

An individual that I respect a great deal recently commented on a blog post that I wrote, in a way that I didn’t see coming.  It’s interesting how, when one takes a moment to get what’s in their brain out onto paper, the meaning it can take on for others.  His comment struck a chord with me and put into words something that I talk about all of the time, but he did it in a manner that was concise, relevant and easy to digest.

The basic premise revolves around “debt”, technical debt to be specific.  In my initial blog post I was attempting to make the case for the critical nature of Finding Functional Dependencies as they relate to a business and the nature of that business as it’s persisted, stored and understood in the form of data.  The original content that spurred this post can be found in chapter three of this book “SQL Server MVP Deep Dives” and was authored by Hugo Kornelis.

Cutting to the chase, the point of this post is partially to share my opinion but it’s also the hope that it will resonate with a larger audience and we can begin to clearly communicate the danger with technical debt. 

As defined in 1992 by Ward Cunningham, technical debt was described as:  “Shipping first time code is like going into debt.  A little debt speeds development so long as it is paid back promptly with a rewrite.  The danger occurs when the debt is not repaid.  Every minute spent on not-quite-right code counts as interest on that debt.  Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.”

Over the years I have had countless conversations about incurring the debt, as defined above, with a boss, a client, a co-worker, the business and even CxO level executives.  The outcome is rarely any different from one time to another and it typically boils down to “is it good enough?”.  I like to compare this question to that of a moped in the Rocky Mountains on a beautiful July afternoon…. Is it enough?  Heck yeah, it’s perfect!  I get the sunlight reflecting from the Ray-Bans, the wind through my hair, absolutely nothing inhibiting my view of the beautiful Rocky Mountains.  Then, as time goes on and it’s the middle of December and you find yourself in -32 degree weather with the wind blowing at 50 mph and your wife telling you that it’s time to head to the emergency room as you are an expecting father.  Suddenly that moped is just a bit short of being adequate.  What to do?  Well, there’s no time to head to the car dealership.  There’s no time to put studs on the moped and build a little canopy around it.  There’s no time for anything other than what you have immediately available.  In this case, not enough thought had been put into the original decision. 

Granted, that story is a bit beyond silly when it comes to the amount of thought that most organizations and individuals put into their technical, architectural and design considerations.  What I’ve seen isn’t the amount of respect that a decision is given but rather the context and understanding of the subject.  Another example that illustrates this well outside of the technical arena begins just like the first story above.  Young man moves from South America.  Let’s say he’s moving from Quito, Ecuador to Vail, Colorado.  This guy brings his lovely new bride to Vail, CO, they buy a little cabin and decide that transportation would be in need.  He thinks about the elevation and decides that since Vail, CO has an altitude of 8,200 feet and Quito Ecuador is roughly 9,895 feet in altitude then he shouldn’t have to worry about cold weather… after all, the higher in the atmosphere you go, the colder it gets and since he is decreasing in altitude, he should be fine.  A logical deduction but unfortunately he has a missing data element that he’s unaware of that completely changes the equation.  Latitude.  A simple little word that’s easy to understand for those folks who live in an area where it changes their sleeping patterns, eating patterns, etc… but for this guy, having come from Quito, Ecuador, which happens to be on the equator; latitude had no bearing or effect.

I hope that those illustrations helped to get the point across that technical debt is not something to be trifled with.  The moment you need the capital that should have been invested in your solution, product or service offering and you find yourself in debt, it will undoubtedly be at a very inopportune time, when the answers might not be acceptable.

Over the years I have had the pleasure of arguing this exact scenario before, but without the language or the concise understanding of what I was trying to communicate.  I would often default to my position on modular design and loosely coupled methodologies, especially in an iterative development process.  While I don’t have empirical proof of this, it appears to me that as technology continues to mature at an overwhelmingly rapid pace, technical debt is being incurred not only in design considerations but at an even more expensive level: the feeling of being “cool”.  I must admit, like every driven and successful IT person out there, I LOVE to be working on the newest, fastest, best stuff I can find.  I often find myself asking the question: is this sound in light of what this client needs?  Is this sound in light of where they are?  Would it make more sense to S.T.O.P. for a moment and address what isn’t fun in order to do the right thing?  Perhaps moving to the “cloud” isn’t as important as addressing the Normalization debt that has already been incurred.  Perhaps that new third-party application really won’t solve all of the problems the sales guy said it would… perhaps it will simply be one more thing added to the VISA.

tick.tock.  Every moment that goes by, every line of code that’s written against the initial short-cut is interest.  That’s a lot of interest.  Stop compounding your debt’s interest and be the unpopular one who stands up for doing it right the first time.  Part of that process, in my experience, is to first and foremost admit that you don’t know it all and, more importantly, admit that you might not have all of the facts necessary to make the correct decision.  Don’t let that be a reason to shy away from this.  Let that be the reason to embrace it!  Think of what can be learned, think of what you can contribute and, more importantly, think of how driven and excited you will be knowing that you are paying down your company’s technical debt!

 tick.tock.

What three events brought you here?

I was tagged by Chris Shaw for this…  It’s an open-ended question that I could have a lot of fun with.  For instance, I could simply say that what brought me here was a need for a larger house, the need for a dedicated home office and a killer deal.  That’s what brought me to where I’m currently sitting as I write this blog post…  I don’t suppose that’s the intent, so I’ll do my best to share a bit of my history.

1.)  I was in marketing.  I did a lot of work with pulling data from an AS400, “importing” it into Access so that I could then “import” it into Word and mail merge direct marketing campaigns.  A lot of fun.  After having all of that fun, there came the day that I was introduced to SQL Server.  Not knowing any better, I thought it to be a form of Access.  Thankfully it wasn’t and thus began my career as a ‘geek’.

2.)  I did some bouncing around in the early years at companies such as MCI (WorldCom), Norwest (Wells Fargo), Lucent (Avaya).  Do you see the trend there?  Every company that I went to work for was purchased by another company.  I thought to myself, there has to be a better way to live than going through these mergers / acquisitions etc… so I went independent.  Since that time, I have had a couple of stints as an employee at startups, but by and large, I’ve been independent and I love it.

3.)  Being independent has upsides and downsides.  For one, I don’t have the traditional “boss”, instead, I have many of them in the form of clients.  This is awesome in a lot of ways, yet significantly more demanding in others.  It has allowed me the freedom to look for the type of work that I want to do and not worry about being on a certain project team or growing stagnant in a particular organization fulfilling a particular role.

Going forward I hope to expand my little business as I believe that I’m able to deliver quality better than most of the big consulting shops out there for less money overall.

With that said, if you are in need of some SQL Server help, I’d love to hear from you.

New Blog — sqlPerspectives

I’m excited to introduce everyone who follows this blog to a new blog, http://sqlperspectives.wordpress.com/.  This blog will have three regular authors and one guest author, and will post on a weekly basis while working through the content of the book SQL Server MVP Deep Dives, available here: http://www.manning.com/nielsen/.

Add this new blog site to your favorites.

Move non clustered indexes to a new filegroup

So, over the years I have needed to move an index or a set of indexes to a new or different file group. I’ve written scripts for this in the past and lost them along the way and I couldn’t find anything out there on the bing or google interwebs.

The concept of multiple files and filegroups for a database is beginning to sink in with a lot of developers and DBAs. The rationale for using them is typically sound, and in my experience it’s one of wanting to decrease the internal file fragmentation of the data and the indexes. Often, one can find that there is a performance improvement as well, because this allows you to move those indexes to that RAID 1+0 array that your SAN admin has been waiting for you to use. 🙂

Here’s a stab at it that makes sense to me and it worked great for what I needed to accomplish today. What it doesn’t take into account are partitions and their related indexes, so be sure to read this code and test it prior to just taking my word for it working well.

I’m 100 percent open to ideas and suggestions, in the form of t-sql. I certainly appreciate comments and ideas, however, I will not write the t-sql for your idea. I’m giving this away and it’s my hope that as you improve upon it, you will kindly re-send or re-post it with the improvements.

Lastly, this handles non-clustered indexes only. It will ignore system indexes and statistics. Both of those are named out in the script so in the event that you want them; it’s an easy modification. I have comments in the code posted below and I STRONGLY advise you to read them before you run on a live system.

Click on the word doc attached to this post.  WordPress really makes the formatting of the code look ugly and I haven’t yet figured out how to make it look decent.

Enjoy!

 NonClusteredIndex_MOVE_newFG
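If you just want a flavor of the approach without opening the attachment, here is a simplified sketch (not the full attached script): it generates CREATE INDEX … WITH (DROP_EXISTING = ON) statements for plain non-clustered indexes.  [NewFG] is a placeholder filegroup name, and unique indexes, included columns and descending key columns are deliberately skipped to keep it short; the full script handles more cases.

-- Generate the move statements; review the output before running any of it
SELECT 'CREATE NONCLUSTERED INDEX [' + i.name + '] ON ['
     + SCHEMA_NAME(o.schema_id) + '].[' + o.name + '] ('
     + STUFF((SELECT ', [' + c.name + ']'
              FROM sys.index_columns AS ic
              JOIN sys.columns AS c
                ON c.object_id = ic.object_id
               AND c.column_id = ic.column_id
              WHERE ic.object_id = i.object_id
                AND ic.index_id  = i.index_id
                AND ic.is_included_column = 0
              ORDER BY ic.key_ordinal
              FOR XML PATH('')), 1, 2, '')
     + ') WITH (DROP_EXISTING = ON) ON [NewFG];' AS move_statement
FROM sys.indexes AS i
JOIN sys.objects AS o
  ON o.object_id = i.object_id
WHERE i.type = 2                  -- non-clustered indexes only
  AND i.is_unique = 0             -- keep the sketch simple; handle unique indexes separately
  AND o.is_ms_shipped = 0;        -- ignore system objects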

On an Island with no Umbrella Drinks?

I was tagged by Chris Shaw to answer this question.

“So You’re On A Deserted Island With WiFi and you’re still on the clock at work. Okay, so not a very good situational exercise here, but let’s roll with it; we’ll call it a virtual deserted island. Perhaps what I should simply ask is if you had a month without any walk-up work, no projects due, no performance issues that require you to devote time from anything other than a wish list of items you’ve been wanting to get accomplished at work but keep getting pulled away from I ask this question: what would be the top items that would get your attention?”

heh… well, in the event that my golf clubs didn’t make it, I suppose that I would spend a great deal of time learning more about the guts or internals of SQL Server. It’s a subject area that I’m very interested in, yet I find that the time to truly dig in is often cut short by work.

I would also spend a fair bit of time coming up with ideas for articles, blog posts and presentations. These are all activities that I enjoy and it’s one way that I can say “thank-you” to those who have helped me along the way.

Given enough time and concentration, I would write a tool that could take any schema and the data in that schema and build a true load test. Over the years I have encountered many clients who face the challenge of truly perf testing their environment prior to going “live”. Often, those who have invested the time and money into this activity find that the performance they experienced in the lab is different than in production… The reasons vary but I believe that I’ve seen it enough to be able to hit this pretty closely.

Wow, *if* I could accomplish all of that in a month, I’d pack up my laptop, flipflops and hit the beach. 🙂

Since I’m new to blogging, I’m not going to tag anyone just yet.