Archive for the ‘ SQL Server ’ Category

DMVs and t-sql for the DBA

The Rocky Mountain Tri-Fecta was today and I had the opportunity to speak.  I mentioned to those brave souls who attended my session that I would post the slide deck and the demos.

Feel free to download them (links are at the end of this post) and provide your comments / thoughts.

I had a great time attending the event and I learned a great number of things.  A special thanks to everyone involved with putting this together.  I'm still amazed at the stacks of empty pizza boxes.  WOW!

Part of this post is also to ask the readers of this blog whether they would be interested in checking out a new tool that I've been testing over the last couple of months.  It's called SmartMon, and it runs pattern analysis against perfmon data, looking for anomalies and patterns of similar behavior.  It has cut the time I spend analyzing perfmon files by a large factor.  It's a great tool and the guys building it are stand-up guys in the San Jose, CA area.  If you are interested in becoming one of the first handful of people to see the release of this product, e-mail feedback@perfmonanalysis.com.  Please mention that I referred you.

Without further ado, enjoy the SQL and thanks again for the opportunity to share what I’ve learned with all of you.

RockyMountainTriFecta (Link to the slide deck PPT)

Essentials-DMVs_tsql_dba (t-sql – Demo)
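If you want a quick taste of the kind of query the demo digs into, here is a small example. It is not taken from the attached demo; it's a generic sketch using the sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs to find the statements that have consumed the most CPU since their plans were cached.

-- Top 10 cached statements by total CPU (total_worker_time is in microseconds).
SELECT TOP (10)
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;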

Technical Debt, Normalization Debt, Requirements Debt, Git-r-done Debt, OH MY!

An individual I respect a great deal recently commented on a blog post of mine in a way I didn't see coming.  It's interesting how, once you take a moment to get what's in your brain down on paper, it can carry a meaning for others that you never anticipated.  His comment struck a chord with me and put into words something that I talk about all of the time, but he did it in a manner that was concise, relevant and easy to digest.

The basic premise revolves around "debt", technical debt to be specific.  In my initial blog post I was attempting to make the case for the critical nature of Finding Functional Dependencies as they relate to a business and the nature of that business as it is persisted, stored and understood in the form of data.  The original content that spurred this post can be found in chapter three of the book "SQL Server MVP Deep Dives", authored by Hugo Kornelis.

Cutting to the chase, the point of this post is partly to share my opinion, but I also hope it will resonate with a larger audience so that we can begin to clearly communicate the danger of technical debt.

As defined in 1992 by Ward Cunningham, technical debt was described as: "Shipping first time code is like going into debt.  A little debt speeds development so long as it is paid back promptly with a rewrite.  The danger occurs when the debt is not repaid.  Every minute spent on not-quite-right code counts as interest on that debt.  Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise."

Over the years I have had countless conversations about incurring this debt, as defined above, with a boss, a client, a co-worker, the business and even CxO-level executives.  The outcome is rarely any different from one time to another, and it typically boils down to "is it good enough?".  I like to compare this question to riding a moped in the Rocky Mountains on a beautiful July afternoon.  Is it enough?  Heck yeah, it's perfect!  Sunlight reflecting off the Ray-Bans, the wind through my hair, absolutely nothing inhibiting my view of the beautiful Rocky Mountains.  Then time goes on, it's the middle of December, you find yourself in -32 degree weather with the wind blowing at 50 mph, and your wife is telling you that it's time to head to the emergency room because you are an expecting father.  Suddenly that moped is just a bit short of being adequate.  What to do?  Well, there's no time to head to the car dealership.  There's no time to put studs on the moped and build a little canopy around it.  There's no time for anything other than what you have immediately available.  In this case, not enough thought had been put into the original decision.

Granted, that story is a bit beyond silly when it comes to the amount of thought that most organizations and individuals put into their technical, architectural and design decisions.  What I've seen isn't a lack of respect for the decision, but rather a lack of context and understanding of the subject.  Another example, outside of the technical arena, illustrates this well and begins just like the first story above.  A young man moves from South America; let's say he's moving from Quito, Ecuador to Vail, Colorado.  He brings his lovely new bride to Vail, they buy a little cabin and decide they will need transportation.  He thinks about the elevation and reasons that since Vail sits at about 8,200 feet and Quito at roughly 9,895 feet, he shouldn't have to worry about cold weather; after all, the higher in the atmosphere you go, the colder it gets, and since he is decreasing in altitude, he should be fine.  A logical deduction, but unfortunately there is a missing data element that he's unaware of which completely changes the equation: latitude.  A simple little word that's easy to understand for folks who live where it changes their sleeping patterns, eating patterns and so on, but for this guy, having come from Quito, which happens to sit on the equator, latitude had never had any bearing or effect.

I hope those illustrations helped get the point across that technical debt is not something to be trifled with.  The moment you need the capital that should have been invested in your solution, product or service offering and you find yourself in debt, it will undoubtedly be at a very inopportune time, when the answers might not be acceptable.

Over the years I have had the pleasure of arguing this exact scenario before, but without the language or the concise understanding of what I was trying to communicate.  I would often default to my position on modular design and loosely coupled methodologies, especially in an iterative development process.  While I don't have empirical proof of this, it appears to me that as technology continues to mature at an overwhelmingly rapid pace, technical debt is being incurred not only in design considerations but at an even more expensive level: the feeling of being "cool".  I must admit, like every driven and successful IT person out there, I LOVE to be working on the newest, fastest, best stuff I can find.  I often find myself asking: is this sound in light of what this client needs?  Is this sound in light of where they are?  Would it make more sense to S.T.O.P. for a moment and address what isn't fun in order to do the right thing?  Perhaps moving to the "cloud" isn't as important as addressing the normalization debt that has already been incurred.  Perhaps that new third-party application really won't solve all of the problems the sales guy said it would… perhaps it will simply be one more thing added to the VISA.

tick.tock.  Every moment that goes by, every line of code that's written against the initial short-cut, is interest.  That's a lot of interest.  Stop compounding your debt's interest and be the unpopular one who stands up for doing it right the first time.  Part of that process, in my experience, is first and foremost to admit that you don't know it all and, more importantly, that you might not have all of the facts necessary to make the correct decision.  Don't let that be a reason to shy away from this.  Let that be the reason to embrace it!  Think of what can be learned, think of what you can contribute and, more importantly, think of how driven and excited you will be knowing that you are paying down your company's technical debt!

 tick.tock.

New Blog — sqlPerspectives

I'm excited to introduce everyone who follows this blog to a new blog: http://sqlperspectives.wordpress.com/.  It will have three regular authors and one guest author, and it will post on a weekly basis as it discusses the content of the book SQL Server MVP Deep Dives, available here: http://www.manning.com/nielsen/.

Add this new blog site to your favorites.

Move non-clustered indexes to a new filegroup

So, over the years I have needed to move an index or a set of indexes to a new or different filegroup. I've written scripts for this in the past and lost them along the way, and I couldn't find anything out there on the Bing or Google interwebs.

The concept of multiple files and filegroups for a database is beginning to sink in with a lot of developers and DBAs. The rationale for using them is typically sound; in my experience it's usually a desire to decrease the internal file fragmentation of the data and the indexes. Often there is a performance improvement as well, because this allows you to move those indexes to that RAID 1+0 array that your SAN admin has been waiting for you to use. 🙂

Here's a stab at it that makes sense to me, and it worked great for what I needed to accomplish today. What it doesn't take into account are partitions and their related indexes, so be sure to read the code and test it before taking my word that it works well.

I'm 100 percent open to ideas and suggestions, in the form of t-sql. I certainly appreciate comments and ideas; however, I will not write the t-sql for your idea. I'm giving this away, and it's my hope that as you improve upon it, you will kindly re-send or re-post it with the improvements.

Lastly, this handles non-clustered indexes only. It will ignore system indexes and statistics. Both of those are called out in the script, so in the event that you want them, it's an easy modification. I have comments in the attached code and I STRONGLY advise you to read them before you run it on a live system.
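If you just want the core idea without downloading the attachment, here is a minimal sketch of the two pieces involved: finding which filegroup each non-clustered index currently lives on, and rebuilding one onto a different filegroup. This is not the attached script, and the table, index and filegroup names ([INDEX_FG], dbo.Orders, IX_Orders_CustomerID) are placeholders.

-- List user non-clustered indexes and the filegroup (data space) they live on.
SELECT  OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
        OBJECT_NAME(i.object_id)        AS table_name,
        i.name                          AS index_name,
        ds.name                         AS filegroup_name
FROM    sys.indexes AS i
JOIN    sys.data_spaces AS ds ON ds.data_space_id = i.data_space_id
JOIN    sys.objects AS o ON o.object_id = i.object_id
WHERE   i.type = 2               -- non-clustered only
  AND   o.is_ms_shipped = 0      -- skip system objects
ORDER BY schema_name, table_name, index_name;

-- Rebuild a single index onto the new filegroup (placeholder names).
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (DROP_EXISTING = ON)
    ON [INDEX_FG];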

Click on the word doc attached to this post.  WordPress really makes the formatting of the code look ugly and I haven’t yet figured out how to make it look decent.

Enjoy!

 NonClusteredIndex_MOVE_newFG

Meta Data – Cardinality Explored

Last night I had the privilege of speaking at the Colorado Springs SQL Server Users Group. The subject was Meta Data – Cardinality Explored.

This topic is a very enjoyable one for me. Along with the DMVs that SQL Server now provides, the metadata about your database implementations is critical to their scalability, stability, accuracy and performance.

Over the years, I have found that cardinality is not a topic that is often understood, and I don't believe it's discussed all that often. Yet it is an absolutely critical data set for the developers, DBAs, testers and performance testers out there.

Attached to this post you will find my sql script and PowerPoint slide deck. I'd recommend starting with the slide deck: walk through it, and when you see "demo", open up the sql file and learn how this result set can make an impact on the databases that you are responsible for or are developing against.
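As a quick illustration of what cardinality means in practice (this is not part of the session demo, and dbo.Orders / CustomerID / IX_Orders_CustomerID are made-up names), compare a table's total row count with the number of distinct values in a column. The closer those two numbers are, the more selective an index on that column will be.

-- Rough cardinality check for one column: total rows vs. distinct values.
SELECT  COUNT(*)                   AS total_rows,
        COUNT(DISTINCT CustomerID) AS distinct_values,
        COUNT(*) * 1.0
            / NULLIF(COUNT(DISTINCT CustomerID), 0) AS avg_rows_per_value
FROM    dbo.Orders;

-- The density vector the optimizer actually uses can be inspected with:
-- DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID) WITH DENSITY_VECTOR;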

Have fun and if you have questions about the presentation or the sql code I put together, feel free to comment!

Thanks again to the terrific folks down in Colorado Springs for the warm welcome and not falling asleep as I droned on and on. =)

CSSQLUG_June09
MetaData – Cardinality Explored – SQL

Schema Management (DDL Triggers)

Over the years I have used a variety of tools for schema management.  The best one that I relied on for quite a while was from RedGate.  They offer schema compare and data compare tools that make quick work of determining the delta between two environments.

Recently I've been working with DB Pro to manage this, and thus far I've been very, very impressed.  However, it has some downsides, primarily the cost.  Some of my clients just can't justify or stomach the dollars involved.  So, to fill this gap I have been using DDL triggers: an automated log, stored in a table, that captures DDL changes (CREATE, DROP, ALTER).  This concept is different from a traditional trigger in that it fires only for database events, not data events, meaning that the scope of this trigger is at the schema or structure level, not the data or DML level.

The value it has provided some of my clients is simply the certainty that they know what happened in a specific environment, and it saves a great deal of time when it comes to debugging, troubleshooting or otherwise trying to figure out why something that worked perfectly in the database yesterday does not work correctly today.

There are a few steps to make this work.  The first is to decide whether you want to store this data in its own database or in a table in an existing database; it can be done either way.  In the code below you will see that I've created a standalone database with one table in it.

CREATE DATABASE [DDL_ChangeLog];
GO

USE DDL_ChangeLog;
GO

CREATE TABLE DDLChangeLog (
    DDLChangeLogID int IDENTITY(1,1) NOT NULL,
    eventtype      nvarchar(100),
    objectname     nvarchar(100),
    objecttype     nvarchar(100),
    databasename   nvarchar(100),
    schemaname     nvarchar(100),
    SystemUser     nvarchar(255),
    AppName        nvarchar(255),
    HostName       nvarchar(255),
    tsql           nvarchar(MAX),
    createddate    datetime,
    CONSTRAINT [PK_DDL_Changelog] PRIMARY KEY CLUSTERED (DDLChangeLogID ASC)
);
The code above will simply create a database and a table.

Once you have that in place, choose the database on which you would like to track DDL changes and create this database trigger.

 

CREATE TRIGGER [trgLogDDLEvent] ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @data XML;
SET @data = EVENTDATA();

IF @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)') <> 'CREATE_STATISTICS'
    INSERT INTO ddl_changelog..DDLChangeLog
        (EventType, ObjectName, ObjectType, DatabaseName, SchemaName,
         SystemUser, AppName, HostName, tsql, createddate)
    VALUES (
        @data.value('(/EVENT_INSTANCE/EventType)[1]',    'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectName)[1]',   'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectType)[1]',   'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/SchemaName)[1]',   'nvarchar(100)'),
        system_user,
        app_name(),
        host_name(),
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]',  'nvarchar(max)'),
        getdate()
    );
GO

ENABLE TRIGGER [trgLogDDLEvent] ON DATABASE
GO
So, let’s test this out.

With the database that you just created the trigger on, let’s do something simple like this:

SELECT 1 AS test INTO TestTable;

SELECT * FROM TestTable;

DROP TABLE TestTable;

SELECT * FROM DDL_ChangeLog..DDLChangeLog

Wow!  There is data there, and it should have returned exactly two rows, even though there are three statements in our test above.  That's because the trigger fired only for the CREATE TABLE (the SELECT ... INTO) and the DROP TABLE statements; it does not fire for the plain SELECT.
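Once the log starts filling up, a query along these lines is usually where I start when troubleshooting.  This is just a sketch against the table created above, not part of the original script; adjust the time window as needed.

-- What changed in the last 24 hours, and who changed it?
SELECT  createddate, databasename, schemaname, objectname,
        eventtype, SystemUser, AppName, HostName, tsql
FROM    DDL_ChangeLog..DDLChangeLog
WHERE   createddate >= DATEADD(HOUR, -24, GETDATE())
ORDER BY createddate DESC;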

I initially started using these for my own purposes; however, as time has gone on, I've found that there are other audiences that LOVE knowing this can be tracked.  Auditors love the functionality.  QA loves being able to verify that the black box they are testing has some controls and audit processes around it.  Management loves it because they have hard data to take back to the business in the event of an outage, a mistake or whatever.  I've found that it's typically best to be as transparent as possible about changes and the state of the databases that I'm responsible for.  In this manner, I'm able to better serve my clients, and in turn they are more likely to trust my opinion when it counts.

vConference – This week!

This is the week that the vConference will air.  The SQL Server WorldWide Users Group (SSWUG) puts on the event, and this year I was chosen to speak.

If you have not already registered, you can do so here : https://www.vconferenceonline.com/upcoming.asp 

I have a discount code for you to use as well : SVPJLOSP09

Recording the event was very different from an in-person conference.  It took me a few minutes to get used to the cameras and to not having feedback from an audience.  That said, I'm happy with my presentations, even though I moved a bit too fast and felt that I didn't focus on some of the points and bring them back around the way I was hoping to.

At any rate, enjoy the conference, there is no better value for the money! 

Later this week I'll have a couple of posts up about my sessions at Dev Connections.  They will get a lot more technical and will, I hope, be of use to you.
