15 November 2010

Nearly Competent

I thought this was a pretty good insight from Jeff Atwood over at CodingHorror:
You can tell a competent software developer from an incompetent one with a single interview question:
What's the worst code you've seen recently?
If their answer isn't immediately and without any hesitation these two words:
My own.
Then you should end the interview immediately. Sorry, pal.

I love the brutal efficiency of the question, and find it revealing in the same way as the "on a scale of 1 to 10, how would you rate yourself" question.  That one isn't quite as efficient -- there are more than two acceptable answers -- but any answer that isn't actually between 1 and 10 should end the interview.  If the applicant doesn't realize that 15 is not less than 10, what can you possibly hope to salvage by continuing?

Yes. I've heard, with my own ears, somebody seriously rate themselves higher than 10 out of 10.  I'll never forget the claim: "On a scale of 1 to 10, I'm a 13.  My SQL Server skills are that good."  Of course the candidate was hired (it wouldn't be much of a story otherwise), and the consequences weren't pretty.  Years later, the team that made the hire is still trying to live down the reputation for minimal competence, unresponsiveness, and outright contempt that this individual bequeathed to them.

On a lighter note, a little tidbit about the most overlooked risk in software engineering (again, from CodingHorror):
Q: What is the most often-overlooked risk in software engineering?
A: Incompetent programmers.

My name is “Bob” and I’m a developer. Now let’s begin planning around my incompetence.

11 November 2010

Restore Me, Please

Our shop has a stored procedure to restore databases that takes care of all the ugly details of file moves and the like.  You supply a target_database_name and a backup_file and it does the rest.  No worrying about which paths to move your data and log files to on each server.  It's even supposed to distinguish between native and Red-Gate backups and do the appropriate magic.
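
For what it's worth, a call to it looks something like this (the procedure name dbo.restore_database is a placeholder of mine; the two parameter names are the real ones):

-- Hypothetical wrapper call; the procedure name is a placeholder
EXEC dbo.restore_database
    @target_database_name = N'your_database',
    @backup_file = N'path_to_your_backup_file.sqb'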


A couple of years ago, we had a rock-star DBA "optimize" the restore procedure, and it has worked only intermittently ever since.  We've all taken to using the native backup syntax so we don't depend on "the procedure."  However, we occasionally get a compressed backup (.sqb) we need to restore, and I recently ran through a painful one that I thought I'd share as a quick overview of restoring backups compressed with Red-Gate SQL Backup.

After making a good-faith effort to restore with "the procedure," I grabbed version 6.4 of the Red-Gate SQB Converter so I could extract the .SQB file into its constituent .BAK files.  Not entirely related, but we tend to have 15 .BAK files wrapped up in each .SQB file.  It helps backup performance, but can be a killer on MemToLeave Virtual Address Space (VAS) on 32-bit servers.
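
For anyone who hasn't used it, the converter is a command-line tool, and if memory serves the usage runs along these lines (the paths are placeholders, and the password argument is only needed for encrypted backups):

SQBConverter "path_to_your_sqb_file.sqb" "path_to_your_bak_file.bak" "your_password"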

In this particular case, it wouldn't convert the file because it wanted a password.  This is often an indication that the converter version is older than the version of SQL Backup that created the .sqb file, but I'm sure it could have other meanings, too (like needing a password?).

Since I couldn't get my .bak files out of the .sqb file, I had to crack open "the procedure" and scarf its Red-Gate syntax to do the restore.  Other than the Red-Gate wrapper around each command, the process is the same one we would use with native tools…

1. Restore filelistonly to get your logical filenames
2. Determine where to place your .mdf and .ldf files on the server
3. Restore your database with move

Here is the relevant syntax…

1. Get your logical filenames with this:

-- Lists the logical file names contained in the Red-Gate backup
EXEC master..sqlbackup N'-SQL "restore filelistonly from disk = ''path_to_your_sqb_file.sqb''"'

2. Get the default file locations for your server (if you can create a database, you should have permissions to write restored database files there, too):

-- Create a throwaway database (it lands in the server's default locations),
-- read its file paths back with sp_helpfile, then drop it
CREATE DATABASE bob
EXEC bob..sp_helpfile
DROP DATABASE bob

3. Once you have your logical filenames and the paths where you want to move the files, you can run the restore…

-- Restores the database, moving each logical file to its new physical location
EXEC master..sqlbackup N'-SQL "restore database your_database from disk = ''path_to_your_sqb_file.sqb'' with move ''data_file'' to ''path_to_mdf_file.mdf'', move ''log_file'' to ''path_to_your_log_file.ldf''"'
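
For comparison, here is the native equivalent you would run against a plain .bak file (same placeholder names as above):

-- Native syntax: list the logical file names, then restore with move
RESTORE FILELISTONLY FROM DISK = 'path_to_your_bak_file.bak'

RESTORE DATABASE your_database
FROM DISK = 'path_to_your_bak_file.bak'
WITH MOVE 'data_file' TO 'path_to_mdf_file.mdf',
     MOVE 'log_file' TO 'path_to_your_log_file.ldf'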

I'm still a big fan of Red-Gate SQL Backup; I just don't listen to rock any more.

08 November 2010

The Patching Treadmill

Well, it's almost Patch Tuesday again.  Are you ready to apply and test the patches?  It's one of the most important things you can do to maintain the security and availability of your SQL Servers.  Remember Slammer?  The patch for that was months old, but hadn't been widely applied.

Keeping up with patches used to be a time-consuming, hit-and-miss affair.  By scheduling a regular monthly patch release, Microsoft has provided the predictability that many organizations need to keep up with the patching process.  There isn't really an excuse any more for not having some sort of regularly scheduled patch deployment and testing process.

We happen to use Windows Server Update Services (WSUS) to authorize and deploy patches in our environment.  We move the patches through QA and test environments before pushing them to pilot groups and then to the rest of the enterprise.  It's a pretty decent system and has served us well.

Testing the patches is critical, because they do occasionally cause production outages of their own.  We had to back one out of production a couple of months ago because it affected the communication between one of our Windows applications and its WCF service.  Of course, since we're not perfect, the same patch made it back into test last month, but we were able to get it backed out before it made it through to production again (it had been mistakenly re-authorized in WSUS).  We'll have to really watch out this month: the only thing more embarrassing than getting caught by the same mistake twice is getting caught by it three times.  Three strikes, and you're out, right?

Just to help get things rolling, this is the schedule we use for applying Microsoft's monthly patches: place the Microsoft release on the second Tuesday, and the testing and production rollouts fall right into place.
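
In outline (the one-week spacing here is illustrative; the order is the same QA-then-pilot-then-enterprise flow described above):

Week 1 (second Tuesday): Microsoft releases; patches go to QA and test
Week 2: patches go to the pilot groups
Week 3: patches go to the rest of the enterprise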

[Image: monthly patch schedule calendar]

This is based on an Excel template by Vertex42.

03 November 2010

My Descendants Thank You

I have 3,932,160 grandchildren.

At least I will in 600 years, if my children and theirs each produce two reproducing children per generation.  And that’s exactly when today’s performance optimization will pay for its own development time in CPU cycles.

I have written previously about intentionally leaving query results unordered to save some academic CPU cycles while burning money in end-user time – a poor “passive” optimization.  Now we’ll discuss actively making inane optimizations that will never pay for themselves – a poor “aggressive” optimization.

A developer modified a query with the academic goal of saving several milliseconds per execution (perhaps as few as 6 ms).  If this were a query that ran thousands of times per hour during user time, we might have been able to quantify some savings to the business.  Unfortunately, the optimized queries were in procedures that run once per day, during a nightly batch cycle that had no time pressure.

I ran the numbers on the CPU savings, and it will take more than 600 years for the saved CPU cycles to add up to the time spent "optimizing" the query.  And because CPU time in an unpressured batch process doesn't correspond to real money the way user time does, this optimization will never truly be paid for… even after the 600-year break-even period for the CPU time.
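
If you want to see the shape of the math, here is a back-of-the-envelope version.  The 6 ms figure comes from above; the hour of development time is an assumption for illustration:

-- Break-even math; the development time is an assumed figure
DECLARE @dev_time_ms bigint = 60 * 60 * 1000    -- one hour of developer time, in milliseconds (assumed)
DECLARE @saved_ms_per_day numeric(10,2) = 6     -- milliseconds saved per execution, once per nightly batch
SELECT (@dev_time_ms / @saved_ms_per_day) / 365.25 AS break_even_years   -- roughly 1,600 years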

These two types of performance optimization illustrate the danger of optimizing for purity of execution without giving any consideration to the practical implications of where our technical decisions force the business to spend its money.

If we keep wasting it like this, they certainly won’t be spending it on us.