Performance: Alter a table’s clustered index in SQL Server while keeping the table online

Back in the old SQL Server 2000 days, creating an index was a major
operation that would prevent queries and data modifications until it had
completed. Unless the index could be created in a few seconds, any new
index had to wait for a maintenance window on the server.
Starting in SQL Server 2005, indexes can be created in an online mode,
allowing the table to be queried and updated while the operation completes.
It’s a truly wonderful feature, which Microsoft knows and thus limits to
Enterprise edition. In other words, you have to pay extra for it.
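
For example, creating a nonclustered index online just means adding one option. Here's a quick sketch with made-up table and column names:

--The table stays available for reads and writes while the index builds
--(Enterprise edition only).
CREATE NONCLUSTERED INDEX IDX_MyTable_MyColumn1 ON MyTable (MyColumn1)
WITH (ONLINE = ON);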

During some recent performance tuning efforts, we determined that one of
the tables would benefit from having a different set of columns for its
clustered index. Since the clustered index represents the actual data
inside the table itself, I assumed that this was not a modification that
could be done while the table was online and would need to wait for a
maintenance period.

One of my colleagues informed me that this is not correct. It is in
fact possible to alter a clustered index in online mode. In this
situation, SQL Server is essentially creating a new copy of the
underlying table. It keeps the old table around to process queries, and
it makes data modifications on both. Once the new version of the table
is complete, it drops the old version.

--Get the name of the current clustered index
DECLARE @IndexName nvarchar(300);

SELECT @IndexName = N'MyTable.' + [name]
FROM sys.indexes
WHERE object_id = OBJECT_ID('MyTable')
  AND index_id = 1;

--If the name of the current clustered index does not match the name that we expect,
--then rename it and rebuild it.
IF @IndexName <> N'MyTable.IDX_MyTable_MyIndexColumns'
            BEGIN
            SELECT 'rename and rebuild index';

            --rename the clustered index
            EXEC sp_rename @IndexName, N'IDX_MyTable_MyIndexColumns', N'INDEX';

            --recreate the clustered index online, replacing the existing one
            EXEC('CREATE CLUSTERED INDEX IDX_MyTable_MyIndexColumns ON MyTable (MyColumn1, MyColumn2, MyColumn3) WITH (ONLINE = ON, DROP_EXISTING = ON)');
            END
ELSE IF @IndexName IS NULL --No clustered index exists, so just create it.
            BEGIN
            SELECT 'index not found so create it';

            EXEC('CREATE CLUSTERED INDEX IDX_MyTable_MyIndexColumns ON MyTable (MyColumn1, MyColumn2, MyColumn3) WITH (ONLINE = ON)');
            END
GO

You’ll notice that the script takes the extra step of renaming the
existing clustered index, if it exists. Since we name our indexes based
on the columns being indexed for readability, and we don’t want to
actually drop the clustered index itself, we rename it first to keep
the naming consistent.
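
As a sanity check after the script runs, the catalog views can confirm the new clustered index key (a quick query using the same placeholder names as above):

--List the key columns of MyTable's clustered index (index_id = 1)
SELECT i.[name] AS index_name, c.[name] AS column_name, ic.key_ordinal
FROM sys.indexes i
JOIN sys.index_columns ic ON ic.[object_id] = i.[object_id] AND ic.index_id = i.index_id
JOIN sys.columns c ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE i.[object_id] = OBJECT_ID('MyTable')
  AND i.index_id = 1
ORDER BY ic.key_ordinal;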


Join.me – free desktop sharing so easy, even my mom can do it

Okay, that’s a little unfair to my mom, since she is a relatively tech savvy person who sports an iPhone and uses a gmail account. But the point is, the people at Join.me (the same people behind LogMeIn) have done a remarkably good job at making web conferencing easy, simple, and free. There are no accounts, no registrations, and no software to install. Anyone can do it.

As the computer person in the family, I’m frequently called on for technical support. Many times I can rattle off some instructions that are helpful, but nothing is more frustrating than trying to interpret someone’s vague explanations about what they see on their screen or trying to talk them through a series of program launches, menu commands, and option changes. If I was there in front of the computer, I could solve it in five seconds.

For people in my “tier-one” technical support crew, like my wife or my sister-in-law, I had taken to installing LogMeIn on their machines so that when they had a problem, I could remote in and just fix it for them. However, this is a pain since I need to set it up in person, and it always needs to be running on the off-chance that I want to try to help them with something. And it wasn’t worth the bother for people who needed help only rarely or for auxiliary computers.

Now that I have discovered Join.me, I can forgo these LogMeIn installations. It’s really as easy as pressing a button.

Here’s how it works:

1. The person who wants to share their desktop goes to http://join.me

2. They press the Orange share button:

[Screenshot: the orange Share button]

3. They open the join.me file that it downloads (most browsers will give a security warning about opening it, which can sometimes trip up users).

4. They will now see the Join.me dashboard at the top of the screen.

[Screenshot: the Join.me dashboard]

5. They email or send the link URL with the identification number to the person (or multiple people) they want to share with.

6. The recipient can now see the user’s desktop.

The person sharing can give the viewers control, making remote troubleshooting a snap.

The beauty of this is that there are no logins, no email verifications, no software installations, no interruptions with pitches for a more advanced paid version. It’s just free and simple.

It lacks some of the more advanced features that paid products like WebEx offer. For example, you can’t pass around presenter rights to multiple parties; only the person who initiates the session can share their desktop.

But this has certainly made my tech support life much easier. I’ve used it to help fix a problem with my mom’s gmail calendar, install dropbox on my wife’s computer at work, and more.

p.s. For all of the progress that social media advertising has made, I discovered join.me from a billboard on the Mass Pike. What caught my eye is that I couldn’t figure out what the actual URL to visit was, and it was confusing me. Finally, I realized that “join.me” was the URL. Apparently, .me is the country domain for Montenegro.


Mourning the loss of my Italian coffee maker, but not for its coffee

It’s been four months since my Italian coffee maker died, and today I have finally come to the hard decision that it is time to move on and buy a new one.

I’m not very particular about the coffee itself.  While there are many coffee connoisseurs out there, I’m not one of them.  I’ve never been able to tell the difference between freshly ground and out-of-a-bag, and both Starbucks and Dunkin Donuts are acceptable.  Just put in some half-and-half, and I’ll happily drink it.  I just need my one travel mug full a day to get myself into gear in the morning.

Nonetheless, I was very attached to my Italian coffee maker.  I can’t claim that the coffee it made was any better than what I would get with the cheapest $25 Mr. Coffee, but it had something else that I really appreciated.  A fun, innovative design:

[Photo: the Alessi Geo drip coffee maker]

It was an Alessi Geo drip coffee maker, designed by Alessandro Messini, and it is only available in Europe.  I first discovered it on a trip to Rome in the summer of 2006, and I instantly fell in love with it.  I use a coffee maker every day, and it is prominently visible in the kitchen.  Coffee makers all look the same – a boxy, bulky object with a drip brewer on the top and a carafe on the bottom.  But here, Alessi had turned a very mundane object into a fun, whimsical gadget.

I didn’t buy it, naively assuming that I would be able to find one in the US that would work with our electrical system.  Alas, upon returning, I discovered that apparently we Americans are very happy with our boxy, plain-looking coffee makers, and Alessi never saw the need to build one for the North American market.  Each morning, I would make my coffee in my plain old coffee maker and think back to the Alessi model I saw in Europe.  Eventually, I decided that if I cared about it that much, I should just figure out how to get a European appliance to work in the US.  Mind you, this was back in the days before kids, and I had disposable income for a pointless project like this one.  I did a little research and found that it was possible to get transformers for European appliances to run here in the US.

I tried to order the coffee maker from an online site in England, and they helpfully canceled my order for me, sending me a note to explain that it wouldn’t work in the US.  I emailed back and forth to explain that I was aware of the limitations, and would they please send it anyway.  When it finally arrived, I read through the specifications in the manual and discovered that I needed one heck of a transformer to handle the big power spike when the coffee maker boils the water – over 1000 watts!  This ended up costing about as much as the coffee maker.

But it made me happy for many years.  It wasn’t just about having a gadget that made coffee.  It was about having a gadget that was delightful to use.  Someone had put thought into every piece of it and made it something more than just a coffee maker.

It’s the same reason that I continue to be such a devotee of my iPhone.  I don’t care how many features competing platforms like Android come out with.  Apple understood that the phone was more than just a pile of features – it should be delightful.  Sure, there are things about my iPhone that annoy me (why don’t my on-the-go playlists show up at the top? Why can’t I copy and paste an entire contact? Why do I have to tap so many times to delete a bad photo after I have taken it?), but I know that someone put thought into every single aspect of it.  Sure, they may not have made the decision that I would have made, but they were focused on the total experience.

Sadly, a couple of months ago, my Alessi coffee maker died a sad death.  I noticed that the coffee filter holder was getting hard to pull out, and when I looked more closely, I saw that the heating element on the top was starting to melt through the plastic.  I’m not sure if this was a product flaw or some side effect of running through a transformer for all of those years, but it didn’t seem safe.  After a week of mourning it, I threw it away and brought back up the old coffee maker that had been sitting in the basement for the last several years.  It would serve until I could find something better.

I could order a new one from Europe, but I think the time has passed.  With two kids in the house, there are better things to spend the money on, and more importantly, I don’t actually know if the melting plastic problem would happen again.  Every time I find myself in a store that sells coffee makers, I find myself taking a look, but they are all so…boring.

My hand was just forced when the backup coffee maker broke as well (it was ten years old), so I had to buy a new one on short notice this afternoon.  I ended up with a $50 Cuisinart model from Costco.

It just ain’t the same.

Egypt proves Malcolm Gladwell was wrong about social media… and right

In October 2010, Malcolm Gladwell published an article in the New Yorker entitled “Small Change: Why the revolution will not be tweeted”.  In the article, he rejects the idea that social networking tools like Twitter and Facebook will be agents of change that will inspire repressed people around the world to rise up in protest.  He talks at great length about how active defiance – the kind that puts individuals at risk – requires deep commitment and strong ties to other passionate people.  Despite all of the attention Twitter received during the Iranian protests, he argues that social media tools like Facebook and Twitter are built on networks of loose ties.  People may be casually interested in what others are saying and click a “like” button, but this is not the type of engagement that will lead people to go put themselves at risk for the sake of an idea:

The evangelists of social media don’t understand this distinction; they seem to believe that a Facebook friend is the same as a real friend and that signing up for a donor registry in Silicon Valley today is activism in the same sense as sitting at a segregated lunch counter in Greensboro in 1960.

Naturally, devotees of social media tools like Twitter and Facebook were quick to rise to the defense of these networks.  They pointed to all of the connections that these networks allow, and how these loose ties build a foundation from which strong ties are created.  Angus Johnston writes:

Gladwell is right that strong-tie relationships were a crucial part of the Civil Rights Movement, and is a crucial part of any organizing effort. But he misses the fact that all strong ties start as weak ties, and that even weak-tie relationships can spur action within and between strong-tie communities.

At first glance, the ongoing protests in Egypt would seem to prove that Malcolm Gladwell was wrong.  Facebook and Twitter were key instruments used by protesters to organize themselves and coordinate their activities.  They also helped to draw international focus on the event by bringing people from around the world right into the center of the action.  This in turn has kept strong international attention on the issue, and while Mubarak has yet to step down, it has become more a question of when, not if.  

Social networking sites have played such a central role that the Egyptian government took the unprecedented step of shutting down Internet access in the country.  The goal, presumably, was to cut off access to Facebook and Twitter and throw the protesters into disarray.  Pundits everywhere are pointing to the dramatic role that social networking sites are playing in the movement.  Clearly Gladwell must have been wrong.

However, a closer examination of what is going on in Egypt suggests that Gladwell may in fact be correct.  In this weekend’s broadcast of the NPR show On The Media, a weekly radio show about how the process of creating media coverage shapes the world we live in, they led off with an interview with Mona Seif.  She is one of the protesters in Egypt, and she has been particularly active in tweeting about what has been happening on the ground.  She talked at  great length about how important Facebook and Twitter have been to their efforts.

At one point in the interview, On The Media host Bob Garfield asked her what impact the shutdown of the Internet has had on their organizing efforts.  Seif said that rather than hurting the movement, the loss of access to Facebook and Twitter had the opposite effect, driving more people to leave their homes and join the protest:

Lots of the people who would have normally been satisfied with just following the updates on Facebook and Twitter suddenly did not have this connection, and so they found no way of being part of this movement except by going out on the streets.  So I actually think that cutting off the Internet helped the movement on the real ground and on the street become bigger.  

In effect, she is saying that Facebook and Twitter were actually limiting the number of people on the streets to some extent.  Once they were cut off from the social media, the illusion that they were involved because they were following along was dispelled.  The only way to be truly involved was to actually go out and participate.

Ultimately, the answer is somewhere in the middle.  Social media sites clearly play a powerful role in revolutionary activities by allowing broadly dispersed and disorganized protesters to coordinate their efforts.  They also allow protesters to broadcast their activities to an audience around the world and draw international support.  But in the end, Gladwell’s point still stands.  People need to get up out of their seats and take part.

I’ve written over 1000 tweets in the last 15 months. How much time have I wasted?

I started experimenting with Twitter (and social networking in general) 15 months ago, and just this week, I sent my 1000th tweet.  It seems like a good time to take a step back and ask whether this is a good use of my time or not.

Writing a 140 character tweet takes longer than you would think.  Since virtually all of my tweets include a link that eats up 20 of those characters, I really am writing 120 character messages.  Anyone who has tried to write with space limitations knows that it is much harder to be brief than it is to be verbose.  I’d say that about half the time of writing a tweet is involved in making it short enough to fit in the space without butchering English too much and without losing the humor or emphasis I am trying to achieve.

Of course, the actual time spent writing those tweets is inconsequential compared to the amount of time I spend reading other people’s tweets on twitter.  I check up on twitter in the morning after I am caught up on email, and then I will check in on it a couple of times a day.  I will also catch up while sitting outside of Ayelet’s room during “waiting time” while she settles down and goes to sleep.  Scanning the tweets doesn’t take that long, but a large portion contains links to more information.  5% to 10% will intrigue me enough to click through and look at the article or site they are linking to.

Many people dismiss twitter as a waste of time.  They hear about people sending mindless updates, like “I’m waiting in line for my driver’s license” or “I just had a roast beef sandwich for lunch”.  For the record, I agree 100% that these types of tweets are absolutely useless, and I have very little patience for them.  When they are lightly sprinkled in with more thoughtful content, they can add some interesting color to a person, but I would never follow anyone who posted this type of information regularly.

What keeps me so interested in twitter is that so many people are using it to share information.  The posts will be just a line or two about why a piece of information is interesting, followed by a link that takes you to all of the details.  For example, Jad Abumrad, the co-host of NPR’s RadioLab, recently tweeted “Why are the letters "z" and "x" so popular in drug names? http://bit.ly/fUIpe0”.  I might be interested, or I might not, but it’s a jumping off point for more information.

To put it another way, Twitter is my crowd sourced newspaper.  When I read the New York Times, I tend to focus in on the Front Page section and the Business section.  I never read the Sports section.  I generally skip the Travel or the Arts & Leisure section, but occasionally my wife will point out an article in those sections she thinks I would like.  That’s perfect for me; I don’t have to read the sections, but she points out what might be worth it.  With Twitter, I have a hand-picked staff of subject-matter experts, and they are feeding me the links that they think are interesting.  I don’t read them all, but some percentage of them will interest me enough to click through.

The net benefit is that I am much more aware of developing trends in my industry with twitter.  I don’t have the patience to follow all of the nuances of developments in cyber security attacks, browser technology, database servers, and a dozen other topics.  But, by following people who do care about every detail, I can keep abreast of major new developments quickly. 

This, I would argue, is a good use of time.  For example, without Twitter I would not have known what OAuth is (it’s a way for sites to share credentials without actually sharing usernames and passwords – think of it as a way to not have to create yet another username and password for yet another site).  But many people I follow kept talking about it, and so I paid attention.  When it came up as a topic for possible integration at work, I knew what it was and could talk intelligently about it.

I write my own tweets with this in mind.  By and large, I keep them focused on technical topics that I think are relevant to people in my field.  I try to put back in what I get out of it.

I used to stick to this rule 100%, but over the last few months I have realized that sprinkling in a few personal tidbits or bits of humor keeps the overall tweet stream more interesting.  This might be something cute, like my baby son holding a keyboard for the first time.  Or something hopefully funny, like my tweet about cancelling my snowstorm subscription (I can’t tell you how excited I was when Scott Kirsner, a Boston Globe columnist, retweeted it, and it was then retweeted over 50 times from there).  But the emphasis is that twitter is a way to share information relevant to my work.

So, 1000 tweets in 15 months?  I can’t say that it was always the 100% best way to use my time, but I certainly wouldn’t say that it was wasted.

p.s. My wife just accused me of an extensive bit of navel-gazing when writing this post.  And she is right.  But you didn’t have to click through and read it.  And that is kind of the essence of twitter posts.  Read them if you care, and ignore them if you don’t.


Skip the line at the rental car desk and use the self-service kiosk

We recently took a trip out to California for a family event, which meant a cross-country flight with a six-month-old baby and an almost three-year-old.  After surviving the flight with only one meltdown, we made it off the plane, collected our luggage at baggage claim, and boarded the bus to the rental car facility.

With two kids and a lot of baggage in tow, we were the last people off of the bus, and this translated to being last in line at the Hertz rental car desk.  It wasn’t too bad, just five or six people ahead of us, but with all of us exhausted from the trip and two kids up way past their bedtime, it felt like a line a mile long.

As I was trying to keep Ayelet amused, another Hertz employee came up and pointed to the self-service kiosk, saying that anyone who already had a reservation could use it:

[Photo: the Hertz self-service kiosk]

I looked at him skeptically.  Sure, I use kiosks all the time without fear, like ATMs, flight check-ins, and self-checkout at the grocery store.  Heck, I get annoyed when they are unavailable, and I have to wait to talk to a real person.  But for a rental car?  It never occurred to me that you could use a kiosk for something like that.

I walked over and gave it a try.  After entering my reservation number, it guided me through a list of questions, most of which were trying to get me to pay extra money for a nicer car, insurance, a GPS, etc.  I then had to fumble through how to scan my driver’s license, but once I did that, it informed me that my car was waiting in space #272, and the keys were inside.  It printed out some documentation, and we were on our way.

Leaving was a little confusing, since I didn’t understand that I had to inspect the vehicle and sign a form that I needed to give to the gate attendant, but that only took an extra minute.

Now that I know how easy it is to use the kiosks, I will skip the line in the future.


Kick off a Cygwin script from a Windows bat file with a different working directory

I recently created a script that would pull the Tomcat log files from a group of web servers and then run my 95th percentile awk script to generate a summary of response times for the 100 most popular pages.  I wanted to run this as a nightly scheduled job so that I could have a report each day, but the job needed to run from a Windows machine.

Cygwin allows my Linux script to run on a Windows box, but I needed a way to kick off my Cygwin shell script.  After googling around, I found some instructions on how to invoke a bash shell script through Cygwin from the command line.  The bat file that starts up Cygwin is pretty simple:

@echo off

C:
chdir C:\cygwin\bin

bash --login -i

By creating your own version, you can use the “-c” switch for bash to have it execute your own shell script and then exit:

@echo off

C:
chdir C:\cygwin\bin

bash --login -i -c "/cygdrive/d/MyDirectory/MySubDirectory/my_script.sh"

This was a good start, but it left me with a new problem.  My shell script assumed that the working directory was the same directory it was in, and it expected various awk scripts and other support files to be locally available.  When my shell script was invoked by my bat file, the working directory was the Cygwin startup folder, and my script was getting “file not found” errors.

I needed to modify my shell script so that it would change the working directory to be the directory the script was located in.  A bash shell script can figure out its own name using a special parameter, $0.  If the script is being executed from a different directory, then it will contain the path that was used.  For example, if I executed “MyDirectory/MySubDirectory/my_script.sh arg1 arg2”, then $0 would be “MyDirectory/MySubDirectory/my_script.sh”.

I can now use the “dirname” command, which will take a string with a path and just return the directory name.  So, if I call “dirname MyDirectory/MySubDirectory/my_script.sh”, it will output “MyDirectory/MySubDirectory”.

I can now put these pieces together to automatically change to the script’s local directory like this:

#!/bin/bash

# switch the working directory to the script's local directory
targetdir=$(dirname "$0")
echo "$targetdir"
cd "$targetdir"

UPDATE: Duncan Smart has made some nice enhancements to the Windows bat file portion of the script, creating a much more versatile version that passes along command-line arguments and automatically handles path mapping.
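
As a rough illustration of that idea (just a sketch, not Duncan's actual script, and it assumes Cygwin is installed at C:\cygwin), a wrapper bat file can use cygpath to convert a Windows path into a Cygwin one and forward extra arguments:

@echo off
REM Sketch only: run the shell script given as the first argument,
REM converting its Windows path to a Cygwin path with cygpath.
setlocal
for /f "delims=" %%p in ('C:\cygwin\bin\cygpath -u "%~f1"') do set SCRIPT=%%p
REM Forward up to four additional arguments to the script.
C:\cygwin\bin\bash --login "%SCRIPT%" %2 %3 %4 %5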


Performance: Hyper-threading actually does boost server capacity

We recently noticed some odd behavior with one of our servers during a deployment.  Several machines were removed from the traffic pool to receive updates, and during this time, the other machines were shouldering the extra load.  We have significant extra capacity, and all the servers were responding fine, except one.  This one server was seeing increased latency, despite the fact that it was the same hardware as all the others and receiving the same traffic levels.

As we started to investigate why it was performing differently, the one thing that emerged was that it had hyper-threading disabled. 

Since hyper-threading was first announced almost 10 years ago, I have been very skeptical of it.  The idea that tricking the operating system into thinking it has twice as many CPUs will lead to a performance boost just feels…wrong.  You don’t have two CPUs, so don’t fool the users into thinking they have twice as much horsepower as is really available.

Tests of hyper-threading performance were equivocal, to say the least.  They always said that only certain kinds of applications would benefit, and that other applications might see a decrease.  To top it off, the success stories talked about performance boosts in the 20% range.  While that is good, tricking the OS into thinking there are twice as many CPUs kind of implies a bigger boost.

What is also bothersome about this type of data is that it is not clear how relevant it is to a web application.  When talking about how fast an application can perform CPU-intensive tasks, the numbers are meaningful.  However, on a web server where each request is assigned an individual thread, there is no boost.  The requests themselves are not executing faster.  At best, it should allow a server to have increased capacity, since it would be able to process more concurrent requests without degradation.  But this isn’t what the benchmark data was trying to measure.

The official data aside, what has always made me really skeptical was strange behavior I would see on hyper-threaded systems.  If you looked at their CPU graphs in Task Manager, you would see that only half the CPUs were being utilized.  While the OS wasn’t supposed to know there was a difference, clearly there was one.

[Screenshot: Task Manager CPU graphs showing only half of the logical CPUs utilized]

We decided to run some tests.  We slowly increased traffic to two servers – the one with hyper-threading disabled, and a normal one.  Just as during our deployment, we saw that the server without hyper-threading started to increase latency when we doubled the load, while the other server handled the extra load without any issues.

We then re-enabled hyper-threading on our bad server and repeated the experiment.  Our server started behaving just like all of its peers.

Clearly, hyper-threading can boost the server capacity for a web-based application.

Nothing like data to break down stereotypes.


Silly trick to avoid accidentally sending an email before it’s ready

One of the plot lines in last night’s Modern Family revolved around the repercussions of an email that was sent before it was finished.  This reminded me of a trick I use to avoid the problem, so I thought I would share it.

Computers are full of obscure keyboard shortcuts that will execute commands, including sending an email.  While they are big time savers and can help cut down on repetitive strain injuries, they can also have horrible repercussions if you hit them accidentally.  For example, in Outlook, if you hit Alt-S, your email will be sent immediately.  Since Alt is also an accelerator for many other keyboard commands, as well as for manipulating the menus without a mouse, I’ve accidentally sent emails more than a few times.  Other email programs, as well as Gmail and Yahoo! Mail, have their own auto-send accelerators.

Usually, an email sent too early is just humorous, and people can see that the email was cut off in mid-composition.  However, there are times when you are addressing a delicate subject or communicating with a client, and you find yourself reviewing it over and over to get it just right.  Premature sending could be a major embarrassment.

The solution I use (and Aviva now uses as well) is to put a fake recipient in the address list while composing the message.  For example, if you put in some gibberish like “blah” on the CC list, any attempt to send the email will fail.  “Blah” is not a valid email address, and the email client will prompt you to fix the address before continuing.  When you feel the email is perfect, you can remove the fake address and send the message.

[Screenshot: a fake “blah” address in the CC field]

The only hitch I have had with this silly little trick is that recently my company hired someone with the last name “Blaha”, and when I put in “blah” in the address bar, Outlook helpfully auto-completes to fill in this person’s name.  Now I use “blooga” instead.


Performance: Query the sqlserver plan cache to find queries that use a particular index

Part of keeping a SQL Server database running smoothly is to keep
indexes well tuned. As database schemas or usage change, indexes that
were once important may fall out of use. Eliminating unnecessary
indexes helps speed up inserts/updates/deletes, and it saves disk space.

Starting in SQL Server 2005, Microsoft introduced some very helpful
dynamic management views (DMVs), including one that tracks index usage:
sys.dm_db_index_usage_stats. It records the number of seeks and scans each
index has received since the database came online. By looking at
indexes with very low seek/scan rates, you can quickly find indexes that
are candidates for removal.
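
For example, a quick look at read versus write counts might go something like this (a sketch; note the counters reset when the server restarts, so let a representative workload accumulate first):

--Indexes with the fewest reads bubble to the top
SELECT OBJECT_NAME(us.[object_id]) AS table_name,
     i.[name] AS index_name,
     us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats us
JOIN sys.indexes i ON i.[object_id] = us.[object_id] AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_seeks + us.user_scans + us.user_lookups;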

The situation is murkier when you find indexes that you believe are bad,
but the DMVs show they are being accessed. For example, you might have
an index that includes a column that is not regularly queried with, and
you want to remove it or replace it with a more helpful column. Most
likely, the index is being used because its other columns still make it
a good choice for certain queries, but there is always the possibility
that some obscure query needs that data and will suffer a major
performance penalty if the column is removed.

The only way to assess the risk is to go and find the queries that are
making use of the index and see if they actually need the column you
think you can eliminate. The question is, how do you find them?

Back in the SQL Server 2000 days, the profiler had an option that would
let you query against the text inside of query execution plans. You
could set it up with a filter that matched the name of your index, and
then start capturing. This worked pretty well, but you had to be
confident that you were capturing enough data to really find all of the
examples of usage.

With SQL Server 2008, this profiling trick doesn’t work anymore, but
there is another way. SQL Server maintains a plan cache for queries it
has executed. Using a couple of DMVs, you can search for query plans
that use your index and then look at them to see if they would be
affected by your proposed change.

Below is how to do it. There are a couple of flavors of this query out
there on the internet. The one below is adapted from a couple of
sources that I can no longer pinpoint, but if you want more examples of
these types of queries, check out Jason Strate’s blog.

DECLARE @IndexName sysname = 'Idx_SomeTableName_SomeIndex';

WITH XMLNAMESPACES ('http://schemas.microsoft.com/sqlserver/2004/07/showplan' AS p)
SELECT
     DB_NAME(qp.dbid) + '.' + OBJECT_SCHEMA_NAME(qp.objectid, qp.dbid) + '.' + OBJECT_NAME(qp.objectid, qp.dbid) AS database_object
    ,iobj.value('@Index', 'nvarchar(128)') AS IndexName
    ,iobj.query('.') AS IndexUsage
    ,qp.query_plan
    ,cp.plan_handle
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY qp.query_plan.nodes('//p:RelOp') AS IndexScan(scan)
CROSS APPLY IndexScan.scan.nodes('//p:Object') AS IndexObject(iobj)
WHERE iobj.value('@Index', 'nvarchar(128)') = QUOTENAME(@IndexName, '[');

This query will scan the plan cache and return the query plans that
contain the index in question.

Clicking on one of the plans will pull up the plan view, along with the
SQL statements. Now you just need to find the index and see how it is
being used.
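
If you would rather see the statement text right in the result set, one more CROSS APPLY (a sketch of the same pattern, which you can scope with the WHERE clause above) pulls the batch text alongside each plan:

--Pair each cached plan with the text of the batch that produced it
SELECT cp.plan_handle, st.[text] AS batch_text, qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st;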

If none of the plans would be affected by your proposed changes,
then you can be more confident that your index modification will not
adversely impact performance.

Of course, nothing is guaranteed. These are just the plans currently in
the cache, and it is possible that a rare query will come along that
needs your data. So, as always, proceed cautiously.
