My Resume

  • My Resume (MS Word)
  • My Resume (PDF)


Affiliations

  • Microsoft Most Valuable Professional
  • INETA Community Champion
  • Leader, NJDOTNET: Central New Jersey .NET User Group

Thursday, December 20, 2007

Community Leader Appreciation Outing

I just got back a little while ago from another one of Microsoft's Community Leader appreciation outings, put on by our local Microsoft Developer Evangelists, Peter Laudati and Bill Zack.  Man, these guys really do know how to treat people!  I just wanted to show my appreciation for your appreciation - keep up the good work, guys... and thanks!

Ignore the fact that they just gave me free food for a minute while I continue on with more praise...

I've been meaning to blog a little something about how impressed I have been with our regional Microsoft Developer Evangelist team as a whole.  As I grow more and more active in the community, I get to see more and more - first hand - how encouraging, knowledgeable, and just plain helpful they all are.  Take our recent InstallFest, during which they graciously gave away 35 copies of the recently released (as Peter said, "hot off the presses") Visual Studio 2008.  That was the most recent public-facing thing they've done for us, but what most attendees of the user groups don't know is how helpful they are behind the scenes, helping us coordinate with other leaders in the area and just generally helping our community continue to grow and prosper.  I'm sure I'll be blogging more about their wonderful contributions as I become more deeply involved, but at this point - even though I've basically just been introduced - I'm already impressed.  Again - thanks, guys, and keep up the great work!

Monday, December 17, 2007

WPF = Wow!

I feel like I'm fashionably late to some extravagant party.  A bunch of guests have already arrived and I walk in the door, impressed with the people I see, and quite glad that I have finally arrived too...  I'm also thinking, "What the heck was I waiting for?"

The "party" I'm talking about is the Windows Presentation Framework.  Now, I'm really a "Web guy," so I'll use that as my excuse for taking so long to join the party, but having finally gotten a good look at what I was missing, it's really no excuse at all.  I went to a 3-day training course last week, led by trainer extraordinare, Walt Ritscher, from the WinTellect training group,  and man was it impressive!  Having worked at Infragistics, I have been well-exposed to all of the marketing buzz around WPF.  Unfortunately, being the Webinary I am, I was not really exposed to the real meat and potatoes and ended up waiting until this offer of training came my way to finally get a taste of it.  Walt came in, took an exciting topic, and really brought it to life.  His informative, direct, knowledgeable, and (above all) enthusiastic approach to the course material really sparked my interest in WPF.  I just sat through three days of intense training, watching slide after slide and demo after demo and saying "Wow..." every few minutes. 

My problem with a lot of technologies (including WPF and even Silverlight) is that I see the marketing and say, "Yeah...  that's cool - now I just need to hire myself a graphical designer and I'm good to go."  It's hard to get a feel for how I, the developer, fit into this new media-rich world.  Generally, I just end up learning the syntax and only realize the full potential later on, once I've got the technical stuff down.  But this time with WPF, Walt was able - through a fine blend of instruction and demo - to show me, the graphically-challenged developer, how quickly and easily I can take this new platform and leverage it to build very impressive solutions.  In fact, these impressions extended all the way into my thoughts about Silverlight, since the two are so closely related.

Is code becoming more and more "obsolete"?

After all this exposure, I'm definitely ready to write some code!  Er... wait... XAML? 

Outside of all the cool (and, more importantly, easily-created) "rich interface" bells and whistles, WPF has a lot of concepts at the more technical levels that are just plain revolutionary.  It's quite obvious that XAML - and the paradigm in which it causes you to think - is one of the most important concepts on that list.  Being a web developer, I'm more than familiar with - and at home in - a declarative markup language.  In fact, it's more or less what drove me to working with web technologies in the first place ('cause it sure as heck wasn't ViewState!); I wouldn't say I was lured toward web development so much as I was running away from WinForms development!  To me, declarative markup - while helping you write less code - not only makes everything a bit clearer, but also helps you focus on the functional details of both the user interface and the user experience.  You can forget about those right-brained concepts such as color and graphics, because the right-brainers can modify the looks of your application all they want while it remains much harder for them to actually break any of your functionality.  For us left-brainers, it makes us more "at home" more of the time, and it greatly eases the burden of collaborating with the right-brainers.

In addition to the awesome increase in separation of function and form, I'd argue that using declarative markup in this way also provides less room for bugs - or, at the very least, lowers their impact.  Obviously, you're still going to have to write the business logic that drives everything, but there is so much "plumbing" that is taken care of for you in declarative markup languages, and with WPF in particular!  What really got me is how many demos Walt wrote and/or showed us that used NOTHING but XAML, and many more that included only a handful of lines of business logic.  It makes sense that this is the direction we're heading, and if you follow that idea through, it raises the question: how much code are we going to be writing in a few years?  By "code" I mean languages such as C#, VB, Python, etc. as opposed to the declarative markup I've been discussing.  Obviously, there will always be room for typos and poor syntax in markup, but things like XAML and the IntelliSense/background compiling of Visual Studio help us reduce these types of mistakes and spend more of our day being productive instead of troubleshooting.  What's more, we may end up with just about the same amount of actual code (C#, VB, etc.) that we had before, only the big difference now is that a tool has generated it for us.
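
To make that concrete, here's a minimal sketch of my own (an after-the-fact illustration, not one of Walt's demos) of the imperative C# it takes to build a trivial bit of UI that XAML can declare in a line or two:

// A sketch of building in code what XAML declares in one line:
// <StackPanel><Button Content="Click me" Width="100"/></StackPanel>
using System;
using System.Windows;
using System.Windows.Controls;

public class ImperativeUi
{
    [STAThread]
    public static void Main()
    {
        // Every element has to be constructed and wired up by hand...
        Button button = new Button();
        button.Content = "Click me";
        button.Width = 100;

        StackPanel panel = new StackPanel();
        panel.Children.Add(button);

        Window window = new Window();
        window.Title = "Imperative UI";
        window.Content = panel;

        // ...whereas the XAML version is purely declarative.
        new Application().Run(window);
    }
}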

And, since Silverlight and WPF share the common foundation of XAML, when we're all done with our WPF project a lot of it (the UI, at least) can be "easily" ported to Silverlight, allowing us to run essentially the same thing on the web...?  Oh yeah!  Granted, the glaring differences between the two make this more easily said than done, but even the concept is enticing... and I can only imagine it will get easier as the two technologies evolve.

Diving Deeper...

There are plenty of other things about WPF that I've fallen in love with, but the ideas I've talked about above are the ones that have really intrigued me.  As I hope I've made abundantly clear - I just took the training, and I've done nothing but some sample applications, so I may be looking at this new world with stars in my eyes because my exposure is limited...  but I don't intend for that to remain true for very long.  And, from what I've seen and heard from those already at the party, I'll go through the same learning curve as any new language, but in the end it'll be worth it and I won't be disappointed.

Thursday, December 13, 2007

Taking Over NJDOTNET!

We had a great time tonight during our InstallFest at the Central Jersey NJDOTNET User Group meeting.  In celebration of the long-awaited release of Visual Studio 2008, the community guys at Microsoft were able to get us a bunch of free copies of this awesome new IDE to give away to the loyal (read "first-come-first-serve") members of our group!  Needless to say, the room was filled with a bunch of very happy people...  and that's before the talk got started!  Walt Ritscher was in town this week, giving some of us Infragistics guys some schoolin' in the ways of WPF (Edit: See my detailed blog post about what I think about WPF here).  He was also nice enough to stay for an extra few hours to share his WPF goodness with our users at tonight's meeting, which ended up being an absolutely captivating and well-received presentation.  My thanks to Walt for sharing his knowledge and experience with us!

Oh yeah, and one more thing happened, too...  Ed Blankenship, who has been running the group for almost a year now, announced that he had to move back to Texas and that he would be passing over his leadership responsibilities to...  me!?  Cool!  So, here I am, now a Leader of the Central Jersey NJDOTNET user group...  Between Jason Beres and Ed, I've got pretty big shoes to fill, but I shall do my best!

Thursday, December 6, 2007

InstallFest at NJDOTNET Next Week!!

Everyone, come join us at NJDOTNET's upcoming user group meeting where our local Microsoft community representative will be helping us host a Visual Studio 2008 Install Fest!  The licenses will be given out on a first-come-first-serve basis, so be sure to get there early...  Then stick around to hear our great speaker, Walt Ritscher from Wintellect, give a talk about WPF Databinding (oh yeah!).

See everyone there!!

Tuesday, December 4, 2007

Review: Red Gate SQL Toolbelt (SQL Prompt, SQL Data Compare, and SQL Compare)

I'm gonna start out this review with: "Wow, that's a LOT of stuff!" Red Gate's SQL Toolbelt package has a whole slew of tools that'll keep you happy for some time. If it has anything to do with working with databases, you can be sure that Red Gate has something that'll help you do it... if it doesn't just do it for you.

I'm gonna follow that intro up with an admission of guilt: I haven't fully explored everything in the Toolbelt.  Seriously, there is a LOT of stuff in there, and I just don't have that kind of time.  For that reason, this review will focus primarily on SQL Prompt, which is the one thing I use on a daily (hourly!) basis, and which has saved me untold amounts of time since I first installed it.  Another tool that I use on a daily basis - so much so that it's actually become a part of my standard release process - is SQL Data Compare, which has been a wonderful help in keeping all of my databases synchronized.  Along with its companion, SQL Compare, I can be sure that all my databases are exactly the same at any given time, from schema to data, all with a real nice user experience.

SQL Prompt

A few weeks ago, I tried reviewing SQL Prompt and never finished.  The reason I didn't finish was not because I lost interest or became bored, but simply because there was so much I wanted to write about.  It's no exaggeration to say that this tool has changed the way I do database development.  In this post, I'm going to give more of a high-level review/overview of all my favorite features without all the screenshots and captions, and (hopefully) follow it up with a more in-depth post drilling down into the specifics at some point in the near future.

So, in case you hadn't noticed, I think SQL Prompt is awesome.  For years I've been doing SQL development as part of my larger coding projects, and every time I'd write a piece of SQL I'd think to myself, "ugh - this is so inefficient".  The structure of SQL statements is such that it seems like you keep writing the same things over and over again, substituting a few words here and there.  Just simple tab-completion of the typical commands (SELECT, UPDATE, DELETE, FROM, WHERE) would've been a godsend, but when I finally found SQL Prompt, I got a lot more than just that.  The first thing you'll notice when you kick up Management Studio after installing is a little, unobtrusive dialog box that pops up at the bottom right-hand corner of your screen, letting you know that it's parsing through your databases and determining their schema so that it can provide you with the coolest feature: IntelliSense (though they don't call it that... 'cause that would be infringement... right?).  What's more, we're not just talking about a single drop-down with available choices, but a full-blown contextual dialog that intuitively shows you what you're most likely looking for... and then some.  In the initial view, tables, views, stored procs, etc. are all listed in the same list, each with a little icon indicating what type it is.  Then, if you know you're looking for a specific type of object, you can use the intuitive Ctrl-left & right keystrokes to change which tab you're on, allowing you to filter out all of the objects except for a particular type.  I know I keep using the word "intuitive", but that's because that's the word that kept coming to mind the first few times I used it; there's simply no better word to describe it (plus I lost my thesaurus :)!  Oh, and if all of that intuitiveness isn't enough for ya, guess what happens when you highlight, say, a table or a view while in this dialog?  It shows you that object's schema!  Yeah... that's right.  This is especially helpful when you're not exactly sure what table, column, etc. you're looking for; instead of doing a bunch of typing and deleting, you can just look at the preview pane and see if the column, key, whatever that you're looking for is there.  If not, you just keep on scrolling to the next object.  It's almost like database object window shopping!

There isn't a whole lot of bad stuff I can say about SQL Prompt.  It shows up when you need it, it does what you'd expect, and it goes away...  I would expect that the most likely areas for complaints in a product like this are speed, efficiency, and the accuracy/knowledge of the auto-completion.  On that point, I must admit that there have been a few times when I've expected something to be there (such as a simple/common keyword) that just didn't show up; to be fair, though, there have been many more instances in which I expected something would not show up as a suggestion (such as a variable I had just created a second ago), and it did.  I also feel as though I need to mention that I talked to our DBA today and he mentioned that he was not happy with the speed of SQL Prompt - so much so that he ended up completely uninstalling it.  (To be fair, he did mention that he's never been happy with any SQL efficiency tool, so maybe his standards are a bit high.)


SQL Data Compare

I'm not sure what I can say about this tool except that I have based a content release process around it.  That is to say, I rely on it to make sure that all of my production databases have the correct, pertinent, and up-to-date content at all times.  If that's not just the most succinct endorsement of a tool, I don't know what is!

Using SQL Data Compare is a snap - you just create a new "project", identify your database connections, and tell it to go to town.  By default, it'll synchronize any table with a primary key (or so it seems), but you can also decide to pick and choose which objects you'd like to synchronize so that you're not wasting your time waiting on a comparison of a table that you don't really care about.  Then, once you're ready, you just tell it to compare, review the results and publish!  It'll walk you through a wizard, and ask if you're really sure you want to completely overwrite/delete/update the precious data on your production server (of which you have made a backup first, right??).  Tell it that yeah, you're really sure, and BAM....  log off and go home, cause your day is done.


SQL Compare

Last on my list (for this post, at least) is SQL Compare. Much like its sibling, SQL Data Compare, this tool remembers changes you make to your database so that you don't have to.  Though I don't use it as frequently as the other two, this tool practically changed my life after I discovered it.  In order to accurately express the impact it's had on me, I have to make (another) confession: for my first few releases containing database updates, I always seemed to miss something when moving to production - it was really a problem!  Well, not after I found SQL Compare!  Now I can go into a release knowing exactly the differences between the two versions... Now, choosing to ignore a difference is a conscious decision.  :)


All-in-all, these tools are incredibly cool, and they end up playing an integral role in my daily development and production support life.  I really (honestly, quite literally) don't know what I would do without them.

Oh yeah - the rest of the Toolbelt seems really cool, too.  :)  Hopefully one day I'll be able to work with them all enough to post some more reviews.  Until then, enjoy these tools, at least, and give the rest a try, too!

Friday, November 30, 2007

Upgrading from Visual Studio 2008 Beta 2 (and/or RC) to RTM

Just wanted to write up a blog entry about my experience upgrading my Beta 2 and RC installations of Visual Studio 2008 (which I will forever call "Orcas") to the RTM version.  Hopefully this may save some of you from wasting a few hours of your life as I did...

I had the Orcas RC installed on both my laptop and home workstation, having already upgraded both from the Beta 2 release (with roughly the same upgrade experience on both machines as I'm going to describe).  I decided to upgrade my laptop to the RTM first, following the recommended procedure of completely uninstalling all previously installed components related to .NET 3.5 and anything labeled "2008" - and that step alone ended up taking me five and a half hours!  Then, when I first ran the installer, it yelled at me because I had IE7 windows open - fair enough.  So, I closed them, continued with the installer, and inevitably began opening more instances of IE as I continued to use my PC normally, not really thinking about the on-going installation in the background.  At some point late in the game, about 45 minutes in or so, I received another popup yelling at me to close the IE instances I had re-opened!  After I confirmed that I had closed the applications it asked me to, the installer proceeded to basically restart the installation from the beginning!  I never actually clocked it, but I think the installation step itself ended up taking over three hours, which is just plain crazy!

But, all of those "please close IE" pop-ups I received along the way got me to thinking - this whole thing would probably go a lot smoother if I just kicked up the installer and left it alone to do its thing for a while.  I tested this theory on my home workstation...  I started the uninstall process and walked away, not touching anything or leaving anything running.  I came back about 45 minutes later with a happy "Successfully Uninstalled!" message waiting for me.  Super!  Then, on to the installation of Orcas RTM;  same deal - kick off the install and walk away.  Another 45 minutes went by and I decided to go back and check on the install, expecting to find something like 60% complete or so...  Once again, I was pleasantly surprised to see a wonderful "Installation Complete!" message waiting for me, and my newly-renovated and upgraded development playground was ready to roll again!  This time, the whole process took less than an hour and a half - from uninstall to completed install - and that's only because I chose to leave it for 45 minute increments; it was probably finished much quicker than that!  And, for what it's worth ('cause I know you're wondering), these two machines are very comparable in terms of memory and processor speed.

So, the moral of this story is:

START THE UNINSTALL AND INSTALL PROCESSES AND WALK AWAY. 

Stay out of Orcas's way while it's setting up its goodness for you and you'll both be better off.  Just another Public Service Announcement from your friendly neighborhood Webinary...

Monday, November 19, 2007

Microsoft Certification and Mobile Web Development in Visual Studio 2008

I have been looking to get myself Microsoft certified, since for some reason I've gone quite a few years without doing it.  Sure, I don't put too much merit in certifications achieved via written testing, but I figure it'd be nice to be able to list those acronyms on my resume...  So, I got myself a book or two and looked around on Microsoft's certification site, which looks really nice - and has a whole lot of things you can buy - but doesn't really give me straight answers about what I'm looking for.  Despite that, I figure out that what I'm looking to get (at least for starters) is the MCTS, or Microsoft Certified Technology Specialist.  Specifically: Technology Specialist: .NET Framework 2.0 Web Applications.

Eventually I figure out how to get the site to give me a free "Skills Assessment" (which is a whole lot different than a "practice test", unfortunately).  After 15 minutes or so, it tells me I need to work on my Mobile Apps skills.  I've got one thing to say to that: WTF?  I haven't yet met a developer who has ever done development using Mobile Apps (at least not that I'm aware of), nor have I encountered a company that listed mobile accessibility as one of its top priorities, so I'm left wondering why practically one third of the questions on the skills assessment concern Mobile Web Applications.  It seems incredibly asinine to me, but I accept the fact that I've got to go through some B.S. in order to get through this process, so I kick up my newly-(re)installed Visual Studio 2008 and create a new Mobile Web Application..... or not.  Wait... what??  There is no Mobile Web Application project template anymore?  Ok....  So, I'll just create a regular ol' Web Application project - done that plenty of times.  Ok, great - project created; now I just add a new Mobile Web For---.....  W. T. F.

It seems that Mobile Web Development is so important to Microsoft that they completely removed any of the templates to help create it. 

Alright, I've tried doing everything on my own - now it's time to get some help.  So, I head on over to http://www.asp.net for some answers.  First observation: nothing to be found on the home page concerning mobile stuff...  So, I switch to the "Get Started" tab, where I see a whole bunch of (awesome) videos addressing every topic on the certification exam except mobile development!  Ok....  so I try the next tab, "Learn" - it's gotta be here... NOPE!  Not one mention of it...  Finally, I do a site-wide search for "mobile" and come up with http://asp.net/mobile/...  Here are a few interesting tidbits I noticed surfing around the site:

  • I noticed phrases such as "The ASP.NET mobile controls [...] extend the power of the .NET Framework and Visual Studio"; that's right - extend the .NET Framework!
  • If you read the roadmap (as of the writing of this entry) it talks about how "ASP.NET 2.0 will" and "0-3 months after the release of ASP.NET 2.0...".  WHAT!?  Considering we're on the verge of releasing .NET 3.5, I'm sure you're aware how long ASP.NET 2.0 has been released...
  • The Web Matrix General Discussions forum - Web Matrix being a tool only for .NET 1.x which I'd never even heard of, apparently the predecessor to VS2k5 Express - currently has (as of the time of this writing) more threads than the Mobile and Handheld Devices forum: 2,238 and 2,235, respectively.

I'm left wondering what the heck is going on here - why are one third of the questions on the skills assessment for the certification mobile-centric when it doesn't seem like Microsoft really even cares about the technology, nor is it high on the priority list of most companies?  Obviously, I understand, accept, and respect what the growth of the Internet and usage of mobile devices will bring, and I plan to model my on-going learning accordingly... but that's not my point.  My point is that Microsoft Certification is supposed to (or at least should) indicate that I know what I say I know.  In Microsoft's words: "it's how they know you know".  It should stick to certifying what most companies are looking for... not what Microsoft wants me to know.

This post - although somewhat cathartic - was not meant to be a rant (honestly!), but more so a cry for change (and/or help, however you want to view it).  Who comes up with this crap?  How are these certification exams actually created?  Why do I have to waste my time learning stuff I will probably never use?  Whose agenda is dictating what we are expected to know in order to become "certified professionals"?  Because it certainly isn't in line with reality; at least not from what I can tell.  What do you think?

Tuesday, November 6, 2007

Trials and Tribulations: My Slow Windows Guest on a Gentoo Linux VMWare Host

From the very first time I used it, I loved VMWare.  BUT, I didn't exactly love the price tag, and so I went without it at home for a long time, trying to use substitutes like bochs, qemu, and even CoLinux with varying degrees of success to satisfy my virtualization needs...  But nothing ever hit that sweet spot that VMWare did.  And so it stood for quite a few years, until VMWare released the free VMWare Player.  Yes!  I had to have the VMs created for me, but now at least I had access to some of that sweet, sweet VMWare goodness.  And then my wishes really were granted when I read that they had released a free Server product!  YEAH!!!  It was a matter of minutes before I had it downloaded, installed, and running my Windows Server 2003 image on my Gentoo GNU/Linux host.  Everything seemed to be going great, except after a while I noticed my Windows guests seemed to be a bit slooower than usual.  I had given them a respectable amount of memory, and there wasn't much else going on on this machine, so I started to wonder what was up.  I had never had these problems with my VMs - even these same exact VMs - under Windows, so I started troubleshooting.  I read all of the manuals, how-to's, and whitepapers, then re-installed everything, including recompiling the modules...  still the same problem.  Then I finally Googled it.  It took me a while, but after I sifted through the garbage hits I found this gem, which I subsequently ran on my Linux host:

# Show the current offload settings for eth0
$ ethtool -k eth0

# Turn off TCP segmentation offload (TSO) for eth0
$ ethtool -K eth0 tso off

Now, I don't really know much about what those two lines do (something about TCP segmentation offload)... and I don't really care.  All I know is after I muttered those magical phrases, my VMs all SPRANG to life!  It looks like the entirety of my perceived slowness really came down to the symptoms of a bad network connection.  In fact, in retrospect it should've been easy to diagnose: my biggest frustration was copying files over the network from my host to my guest.  At first I chalked it up to SMB, since surfing the Internet and copying files from my Windows machines seemed to be fine.  But, when I tried to copy from my host, the entire VM seemed to slow down, and a 3 MB file could take upwards of 20 minutes.  Once I fixed that problem, the performance went back to the VMWare I know and am used to - that 3 MB file now took seconds, and the entire OS as a whole seemed to respond much better.

I have to confess, I was starting to lose faith in VMWare, and even virtualization in general...  I really was.  But now I'm in love again.  Thank you, VMWare!

Footnote:

I mentioned CoLinux before...  It was (and I assume still is) pretty damn awesome for hosting a Linux environment on my Windows machines, but I often had difficulty justifying the time and processor I was giving to it, and I subsequently replaced it with Cygwin, which is the BEST LINUX CROSSOVER environment ever.  I'm now able to drop directly to a Bash shell on my XP workstation and sed, awk, vi, grep, etc. my way to bliss.  This is really awesome due to the fact that my professional life is dominated by Microsoft - being a .NET developer and all - and using a Linux machine at work will never happen (ok, I won't say "never", but not any time soon...).  Cygwin (and Cygwin/X) has been by far the best way for me to get my Linux fix at work, in public... anywhere!  Even at home, curled up on the couch with my laptop, sipping some hot tea....  ah, Linux...  --Oh, are you still here?

Friday, November 2, 2007

Safe Handling of Uniqueidentifier Using T-SQL

I'm not a DBA, so I'll be the first to admit I am no expert in SQL. But, I do like to think I know my fair share and can write relatively efficient and effective SQL pretty easily.

That being said, I ran into some trouble today. I am doing web hit tracking, and to avoid losing any data, I'm storing environment variables (customer ids, download ids, and other assorted custom data) of varying types. As such, I'm storing these values as the nvarchar(MAX) type so I can convert them to whatever I want later while avoiding a lot of the type-casting errors at run-time.

Problem is, when I went to insert some of this data into a temp table to work with, I kept getting this error when trying to convert a column that should have been filled with good uniqueidentifier values (albeit stored as an nvarchar type):

Conversion failed when converting from a character string to uniqueidentifier.

OH NO! Well, first I had to figure out what value was giving me trouble. After a bit of searching, I found out that it was one like this: BE92BFCE-A425-4FXA-85X2-AAX4C7C92AAE. Can you see it? Yeah - 'X' is not a valid character in a UUID! Looks like someone was submitting wacky values into my tracking system... Well, I had already accepted the fact that I would get invalid values, but I certainly didn't want them to get in the way of processing my valid ones. So, I scoured the 'net for a "ToGUID" function that would safely fail when passed in a bad value like the one above and, surprisingly enough, I came up with NOTHING! I couldn't even really find any articles or forum posts on the most effective way to parse and validate a uniqueidentifier! I decided I had to write my own methods, and what I came up with were two UDFs: one to validate whether or not the string is a valid uniqueidentifier and the other to actually convert it... gracefully. That means you can pass it anything, and it'll either give you back a uniqueidentifier or NULL! No errors! Here's the code:


IsValidGuid(nvarchar(MAX))

CREATE FUNCTION dbo.[IsValidGuid](@input NVARCHAR(MAX))
RETURNS BIT
AS
BEGIN
    DECLARE @isValidGuid BIT;
    SET @isValidGuid = 0;

    -- Normalize: upper-case, trim, and strip the dashes
    SET @input = UPPER(LTRIM(RTRIM(REPLACE(@input, '-', ''))));

    IF (@input IS NOT NULL AND LEN(@input) = 32)
    BEGIN
        DECLARE @indexChar NCHAR(1);
        DECLARE @index INT;
        SET @index = 1;

        -- Every one of the 32 remaining characters must be a hex digit.
        -- (LIKE '[0-9A-F]' is used instead of ISNUMERIC, which wrongly
        -- accepts characters like '$', '+', and ',')
        WHILE (@index <= 32)
        BEGIN
            SET @indexChar = SUBSTRING(@input, @index, 1);
            IF (@indexChar LIKE '[0-9A-F]')
                SET @index = @index + 1;
            ELSE
                BREAK;
        END

        IF (@index = 33)
            SET @isValidGuid = 1;
    END

    RETURN @isValidGuid;
END

ToGuid(nvarchar(MAX))

CREATE FUNCTION dbo.[ToGuid](@input NVARCHAR(MAX))
RETURNS UNIQUEIDENTIFIER
AS
BEGIN
    DECLARE @guid UNIQUEIDENTIFIER;
    SET @guid = NULL;

    -- If this is a valid GUID, try to convert it
    IF (dbo.[IsValidGuid](@input) = 1)
    BEGIN
        DECLARE @guidString AS NVARCHAR(MAX);
        SET @guidString = UPPER(LTRIM(RTRIM(REPLACE(@input, '-', ''))));

        -- Re-insert the dashes at the standard 8-4-4-4-12 positions
        SET @guidString = STUFF(@guidString, 9, 0, '-');
        SET @guidString = STUFF(@guidString, 14, 0, '-');
        SET @guidString = STUFF(@guidString, 19, 0, '-');
        SET @guidString = STUFF(@guidString, 24, 0, '-');

        IF (@guidString IS NOT NULL)
            SET @guid = CONVERT(UNIQUEIDENTIFIER, @guidString);
    END

    RETURN @guid;
END

EDIT: As Sloan mentions in his comment below, all of the STUFF lines above can be combined into a single statement, so you end up with something like this: SELECT @guidString = STUFF(STUFF(STUFF(STUFF(UPPER(LTRIM(RTRIM(REPLACE(@guidString, '-', '')))), 9, 0, '-'), 14, 0, '-'), 19, 0, '-'), 24, 0, '-')

So you can just call "dbo.ToGuid(foo)" and it'll convert foo to a GUID for you or return a NULL in its place. As you can see, ToGuid() calls IsValidGuid() to validate the input, but you can still use IsValidGuid() where appropriate as well.
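
And if you're consuming this from .NET code, here's a hedged little sketch of what that call might look like through ADO.NET (the connection string and the table/column names here are made up purely for illustration):

using System;
using System.Data.SqlClient;

class ToGuidExample
{
    static void Main()
    {
        // Hypothetical connection string and table/column names
        using (SqlConnection conn = new SqlConnection(
            "Server=(local);Database=Tracking;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT dbo.ToGuid(CustomerId) FROM dbo.WebHits WHERE HitId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();

            // dbo.ToGuid hands back NULL (DBNull on this side) instead of
            // throwing a conversion error when the stored value is garbage
            object result = cmd.ExecuteScalar();
            Guid? customerGuid =
                (result == null || result == DBNull.Value) ? (Guid?)null : (Guid)result;

            Console.WriteLine(customerGuid.HasValue
                ? customerGuid.Value.ToString()
                : "(not a valid uniqueidentifier)");
        }
    }
}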

This seems to be working for me. Hope it helps you as well!

Wednesday, October 24, 2007

Custom Authentication with Community Server

You may have scrolled down to the bottom of one of our blog posts over at Infragistics at some point and noticed the little Community Server logo in the page footer.  We've been using Community Server to host our blogs for some time now, and it's such a great platform that we're looking to leverage it a bit more.  To do what we're thinking of, you guys will need to be able to log in...  and what kind of user experience is it for you to have two IG.com accounts?  Enter Single Sign On (SSO), and lucky for us, Telligent sells a custom module that does just that.  Setting it up was a cinch: just drop the assembly in, tell Community Server which cookie to use, and make sure that your main site (the authenticating site) is setting the cookie correctly.  And, it really was that simple!  Well, almost...

The out-of-the-box solution did work great for almost all scenarios.  There were, however, a few problems we had:

Cookie Timeouts

Unfortunately, it doesn't seem like the custom cookie authentication module can be configured to provide any type of sliding expiration behavior for the cookie.  That means that after a user logs in on the main site and the cookie is created, they only have as long as the cookie lives to be "singly signed on", and once it expires, they'd be redirected back to the main site to re-authenticate themselves.  The most obvious and easiest fix would be to just set no expiration date on the cookie.  Sure - that'd work, but I prefer the security of having users automatically signed out after a period of inactivity, at least by default.

So, I decided to write a module to check for the cookie on each request, and update the expiration date accordingly.  BAM - sliding expiration.  Below is the code I used to do it:

Cookie Manager

public class CookieManager : IHttpModule
{
    // Lifetime of sliding expiration (in minutes)
    private const int COOKIE_LIFETIME = 20;

    #region CookieDomain
    private static string _CookieDomain =
        ConfigurationManager.AppSettings["CookieDomain"];

    /// <summary>
    /// Gets or sets the cookie domain.
    /// </summary>
    /// <value>The cookie domain.</value>
    public static string CookieDomain
    {
        get
        {
            if (String.IsNullOrEmpty(_CookieDomain))
            {
                try
                {
                    string cookieDomain = null;
                    // If no domain name was specified, try to guess one
                    string hostname = HttpContext.Current.Request.Url.Host;
                    // Try to get the domain name
                    string[] hostnameParts = hostname.Split('.');
                    if (hostnameParts.Length >= 2)
                    {
                        cookieDomain = String.Format(
                            "{0}.{1}",
                            hostnameParts[hostnameParts.Length - 2],
                            hostnameParts[hostnameParts.Length - 1]);
                    }
                    _CookieDomain = cookieDomain ?? hostname;
                }
                catch { /* We don't really care if this doesn't work... */ }
            }

            // Don't allow an empty domain
            if (String.IsNullOrEmpty(_CookieDomain)) _CookieDomain = "localhost";

            return _CookieDomain;
        }

        set { _CookieDomain = value; }
    }
    #endregion

    #region SSOCookieName
    private static string _SSOCookieName;
    /// <summary>
    /// Gets the name of the SSO cookie that the 
    /// Custom Authentication module is using.
    /// </summary>
    /// <value>The name of the SSO cookie.</value>
    public static string SSOCookieName
    {
        get
        {
            if (_SSOCookieName == null)
            {
                Telligent.Components.Provider authProvider =
                    CSConfiguration.GetConfig()
                    .Extensions["CustomAuthentication"] as Telligent.Components.Provider;

                if (authProvider != null)
                    _SSOCookieName = authProvider.Attributes["authenticatedUserCookieName"];
                else
                    throw new ApplicationException(
                        "Could not find Community Server provider 'CustomAuthentication'.");
            }
            return _SSOCookieName;
        }
    }
    #endregion
    private void KeepSSOCookieAlive(HttpContext context)
    {
        // See if we've got an SSO cookie
        HttpCookie cookie = context.Request.Cookies[SSOCookieName];
        if (cookie != null)
        {
            // Make sure the domain name is set
            cookie.Domain = CookieDomain;
            // Update the cookie's expiration (use sliding expiration)
            cookie.Expires = DateTime.Now.AddMinutes(COOKIE_LIFETIME);
            // Send the cookie back
            context.Response.Cookies.Set(cookie);
        }
    }

    #region IHttpModule Members
    /// <summary>
    /// Disposes of the resources (other than memory) used 
    /// by the module that implements <see cref="T:System.Web.IHttpModule"></see>.
    /// </summary>
    public void Dispose() { }

    /// <summary>
    /// Inits the specified application.
    /// </summary>
    /// <param name="application">The application.</param>
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(application_BeginRequest);
    }
    void application_BeginRequest(object sender, EventArgs e)
    {
        KeepSSOCookieAlive((sender as HttpApplication).Context);
    }
    #endregion
}

There are a few things in this class I'd like to point out:

  • The most important method, obviously, is the KeepSSOCookieAlive() method.  It's the heart of the module, but pretty straightforward.
  • I decided to try to get the CookieDomain programmatically, but didn't get into too complex of an operation to find the correct domain.  If for whatever reason this didn't work in production, you could easily override the guessing by setting it in the AppSettings.
  • I'm getting the actual cookie name used by the provider - by actually retrieving the provider itself and asking it which cookie it's looking for - so that there won't be any confusion.

News Gateway Authentication

Ok, great - we've got the SSO for the main Community Server instance working like a charm, so I move on to getting Telligent's News Gateway service into place.  After reading through the docs for a bit, I see a very ominous line saying something to the effect of "Custom (cookie-based) Authentication won't work with the News Gateway."  Ugh!  I mean, it makes sense, but... Ugh! 

Of course, the first thing I do is try it and hope it works.  Unsurprisingly, it doesn't.  The next thing I do is take a step back, look through what's available, and spot Telligent's Form-based membership provider, which (I think) is the default provider in the Gateway.  The Cookie provider works by creating a Community Server account for a user that has logged in to the main site.  The problem with Forms (username/password combination) Authentication is that the Community Server account that is set up knows nothing about your password from the originating site, nor can it ask the originating site - that it knows nothing about - to authenticate you...  unless you help it.

The solution was obvious: override the ValidateUser(string username, string password) method of the Community Server Forms Membership Provider.  I wasn't going to paste the code because it's just a simple override, but here's an example snippet:

Custom Membership Provider

public class CustomMembershipProvider
    : CommunityServer.ASPNet20MemberRole.CSMembershipProvider
{
    public override bool ValidateUser(string username, string password)
    {
        // TODO: Insert your custom validation logic here
        return (username == "superdude" && password == "wickedcool");
    }
}

That's right - just a regular ol' override of the membership provider.  Then you can follow it up by copying the Membership provider section in the configuration file (CommunityServer.NntpServer.Service.exe.config or CommunityServer.NntpServer.Console.exe.config, depending on which one you're using) and replacing the CSMembershipProvider with your own.  Then, BAM - you've got username/password authentication against whatever data source you like.


I hope this post can help you out if you were coming across some of the same problems we were!

What You Want vs. What I Want

You know what really grinds my gears? Bad UX!

I came across this login prompt today on a local county college library's website, and it was just one more example of the horrible UX that I see all too often and that makes me physically sick:

Horrible Login

What's wrong with this? Oh geeze... where do I start!?

  • What's with the horizontal alignment?? Stack these fields vertically, please! Users are more than willing to scroll up/down almost indefinitely, but if you make them scroll horizontally, they may never come back again. Granted, this particular example isn't too wide, but why deviate from the de facto standard?
  • "College ID number (ex G12345678)" This has always bothered me. Why can't I choose my own login name? Sure, I understand that it has to be linked to my asinine college ID number at some point - and that's fine - but let me choose a username I can remember. Some people defend this practice, citing "security concerns." What: security through obscurity? Give me a break - if I can log into my PayPal account with just a username and password, a county library system shouldn't need any more than that.
  • Status!?! What the heck!? Why do you need to know my status? And, what exactly is my status? This may sound a little philosophical, but what makes a Student or "Faculty/Staff", anyway? What if I am a student worker, employed by the college?
  • The only action on the page is "Access Databases." What databases? This is the prompt to log in to an online library card catalogue. I don't want to access "databases", I want to search the card catalogue! Actually, my true goal is to find and request the book I'm looking for... but I've already accepted that I have to use the online catalogue to do that.

It's the second and third items that particularly bother me, and the issues that I'm going to attempt to tackle in this post.

The Problem

You may be saying, "calm down, it's just a library login..." But that's the point - it's not just one login. You run into these things on a daily basis, and they need to stop! UI developers need to make their interfaces much more polite; they need to cater to their customers instead of insisting their customers cater to them. These sites ask you for things they already know (or could know) just so they don't have to figure them out themselves. They can (or at least should be able to) take your ID number, cross-reference it against some database somewhere, and determine what your status is, but - for whatever reason - they don't. Of course, these atrocities aren't limited to login pages; they show up any time you're soliciting something from the user - asking them to spend some of their priceless time and energy to give you something you want.

It becomes an epic struggle between what you (the customer) and I (the developer) actually want out of this interaction. You usually want a few specific things - in the above example, it happens that you'd like to locate and possibly reserve a book without leaving the comfort of your home. I am here to write applications that give you what you need, but I also have concerns. They may be security-, performance-, or business-related, or any of a myriad of other things... but whose needs should take precedence? I'll give you a hint - if your needs didn't exist, I wouldn't have a job.

And the most annoying thing about it all is that the solution is very easy...

Put the Customer First

A lot of lip service is given to this concept, but somehow the sentiment often gets lost somewhere between the requirements phase and the UI design (if it was ever there to begin with)... But, there's a solution: design with the customer in mind. Obviously you're going to need some things - such as a username and password pair in order to authenticate a user, or billing information to complete an order - but before you ask them to spend their valuable time giving you something, be sure you really need what you're asking for! Not only that, ask for it without wasting their time. Here are a few tips:

  • For every field you're requiring the user to enter, ask yourself if there is any way you can figure it out yourself using some other piece of information they've already provided. If this is the case, don't waste their time asking them in the first place. For instance: in the above example, I can probably determine whether the user is Staff or Student once they've given me their ID.
  • Ask them for something they know (or can remember easily)! Don't require them to give you their randomly-assigned personal ID (e.g. college ID, employee ID, etc.) every time! It may be very important to you, but you can help them out by asking for it once (the first time) and then letting them tie it to something that they can remember, such as a username that they've picked. Let the system do the memorizing of obscure data!
  • It often helps to explain to customers - on a per-item basis - why it is you think this information is important. Not only is this likely to decrease their annoyance at having to answer your questions, it will also help them understand what, exactly, you're asking for and may even increase their chances of being honest!
    • I know, I know - you're thinking "My users don't lie!" Well, I guarantee that if you force them to give you information they don't want to give, they will! Tell me - honestly - how many times have you filled out a form and entered your email address as "effeafdsfefe@fdsafdsfe.com"? I thought so... (Effeafdsfefe must be getting a LOT of junk mail, whoever he or she is!)
  • Ask yourself if you're going to - realistically - do something with this data, or if it's just going to hang out in your database/data warehouse not being put to any good use. Wasting your users' time asking them for information you don't really care about is the biggest insult you can give them. If you determine this is true - that you don't really care about a particular item - but you decide not to remove it anyway (e.g. some business group somewhere thinks it's really important), here are a few ideas that you might implement to ease the added burden you are now placing on your users:
    • Make it optional, please! If you're only going to use it at the rare times you're interested in it, let me decide whether or not I want to enter it. I'll enter it during the rare times that I feel like wasting my time entering this extraneous data...
    • Try grouping all of these optional items away from the required ones so they don't detract from the importance of the required fields. Then, you may even want to go so far as hiding these fields from the user, allowing the user to choose whether or not they even want to see them.
    • Pre-populate it with sane default data that the user can simply accept as opposed to entering themselves
    • If none of these works - and you're dealing with a known set of values - use the technique that's all the rage lately: auto-complete. This is where the user begins entering data and you try to guess what they're trying to say before they're finished saying it... saving them a bunch of energy. If you're going to make them work, make them work as little as possible.

The common idea shared between those last two suggestions bears repeating: pre-populate whenever you can! If you know something, by all means be polite and tell the user you know it already. This can make all the difference between a smart, friendly interface and a waste of time. After all, filling out a form doesn't really have to be a tedious, laborious task. Not if it was created right!

Take Action!

So, now I've made you aware of some of the worst things you can do to your users (along with some helpful tips to mitigate them). Well... we're the developers who propagate these atrocities! And, that means that we hold the power to end them. So, I want you to try something: every morning when you wake up, look in the mirror and say "Put the Customer First" three times. One of two things will happen: either you'll start changing the way you think about designing interfaces, or... you'll feel incredibly silly and stop doing it very quickly. It will probably be the latter (that's what happened to me), but at least you gave it a shot, and maybe just by trying you'll start thinking about it just a bit more; and that's a step in the right direction. Believe me: your users will thank you. I know I will.

Tuesday, October 9, 2007

Awesome Article on Usability

A buddy of mine, Alex (AKA Usability Suspect) sent me a link to the most concise article on usability I've seen in a while. There may not be a slew of brand new ideas in here, but it is a great checklist of standard and common practices and ideas. It starts with age-old, simplistic axioms such as the "7±2 rule", the "2-second rule", and the "3-click rule", then delves into deeper topics, even finishing with a glossary!

Click here to check out the article. Do it now!

I had to laugh when I came across the "Baby-Duck-Syndrome," since I was employing this theory just the other day during a discussion with Ed Blankenship. I was arguing that his tendency toward excluding parentheses when they're optional (e.g. preferring Fig. B over Fig. A, shown below) was due almost solely to the fact that he was raised as a Visual Basic programmer and has only recently seen the light at the end of the tunnel, switching over to C#.

[Images: Fig. A and Fig. B]

Whoops, did I just alienate all of my VB readers? Please don't let my C# elitism get in the way of our relationship... :) Remember, we're all running the CLR underneath...

Thursday, October 4, 2007

Be Careful of What You Cache... and How You Cache It!

This post is about a wonderfully embarrassing experience I had recently from which you can (hopefully) learn something. Or - at the very least - this can serve as a reminder of what you already know.

A few weeks ago, a couple of guys from the Infragistics IS department and I decided to get to the bottom of some of the performance issues we had been living with for a while on our main website. So, we locked ourselves in a room for a few days and scoured through the code for the entire website. The biggest thing we found ended up being a few stupid lines of code I had written...

Introducing the Bug

[Image: RSS Feed Summary control]

It all started with "hey, we've gotta add RSS feeds to our site."  The idea was to be able to add a whole bunch of feeds to a page, and then have a summary control (shown on the left) to organize them.  Since our site makes heavy use of master pages and user controls, I wanted the ability for any of the items in the page lifecycle to be able to add an RSS feed to the page and have it show up in this summary control.  And, oh yeah - I wanted to cache them so future requests to the page didn't have to load them again.  So, I created a static RssFeedManager class that pages and controls could register their feeds with and, on PreRender, my Summary control could get the list of feeds that had been registered and display them.

Finding the Bug

The weirdest/most difficult thing about this bug was the fact that it seemed to rarely ever occur. That is, on most requests throughout the site, everything went fine and nothing seemed out of the ordinary. And, on top of that, even though the culprit ended up involving a (supposedly) managed object, it never really showed up in our memory profiling. We made the final breakthrough when one of the guys from IS, Martin, noticed that we could reproduce the problem every time we hit the login page... Since this is one of the few pages on our site that is not cached via the Output Cache we immediately started hitting some of the other pages that are not cached and got the same result: the server memory grew and grew until it reached the maximum allocated memory and the application pool recycled. Cool - we were able to reliably reproduce it!

For the next step - actually figuring out the specific bug - we kicked up our trusty profiler. It didn't take long for Martin to locate the problem: the controls created and added to support our RSS feeds. Eventually, I ended up narrowing the problem down to a function in the RssFeedManager, listed below. Can you spot the problem?

private static Dictionary<Page, List<RssChannel>> RegisteredFeeds =
    new Dictionary<Page, List<RssChannel>>();

public static void RegisterPageFeed(Page page, RssChannel channel)
{
    // If we don't have a page or a channel, bail out now
    if (page == null || channel == null)
        return;

    // Get the page feeds
    List<RssChannel> pageFeeds = null;
    if (RegisteredFeeds.ContainsKey(page))
    {
        // Get the existing feeds
        pageFeeds = RegisteredFeeds[page];

        // Add the feed to the existing feeds if it isn't there already
        if (!pageFeeds.Contains(channel))
            pageFeeds.Add(channel);
    }
    else
        pageFeeds = new List<RssChannel>(new RssChannel[] { channel });

    // Update the registered feeds list
    RegisteredFeeds[page] = pageFeeds;
}

Yeah - that's right... I'm caching the list of registered feeds using the Page object as the key.  At first glance, that seems like it might work - associating the RssChannel with the current Page that it belongs on.  It might work, that is, until you realize that the Page object is unique to each request.  That is to say, the Page object created for the first request to /default.aspx will not be the same as (or even Equals()) the one created for processing the second request to the same page, /default.aspx.  The result of the preceding code is that the current Page object (referencing and referenced by literally thousands of objects) is placed into the static dictionary, never to be properly picked up by the garbage collector.  The result of that is pure, asinine memory leakage: every request adds another Page object to the dictionary until the Application Pool finally reaches its limits and explodes... er, recycles.

The Fix

As usual, the fix for this one was pretty simple. I updated the above function to store the registered feeds in the HttpContext.Current.Items collection. That way, I could track things on a per-request basis, and everything was nicely cleaned up for me after the HttpContext was out of scope (after the request had been fully processed). The final, fixed method is shown below:

public static void RegisterRssFeed(HttpContext context, RssChannel channel)
{
    // If we don't have a context or a channel, bail out now
    if (context == null || channel == null)
        return;

    // Get the existing feeds (RSS_FEEDS_KEY is a string constant
    // defined elsewhere in the RssFeedManager class)
    List<RssChannel> currentFeeds =
        (context.Items[RSS_FEEDS_KEY] as List<RssChannel>) ??
        new List<RssChannel>();

    // Add the feed to the existing feeds if it isn't there already
    if (!currentFeeds.Contains(channel))
        currentFeeds.Add(channel);

    // Update the registered feeds list
    context.Items[RSS_FEEDS_KEY] = currentFeeds;
}

I've been using the HttpContext.Items collection more and more lately.  It's readily available to practically anything (modules, handlers, user controls, pages, etc.) and it comes in real handy for situations like this.

Exactly How I Screwed Up

Here's a checklist of things that I knew to watch out for, but didn't:

  1. The most obvious point - and the main point of this post - is to be incredibly careful about what you use as your Key in any collection. Also, be especially wary of the context in which you're doing it, which brings us to...
  2. Think hard about exactly what you're putting in static variables, since it keeps them away from the Trash Man (garbage collector), which could be a very bad thing.
  3. Reinforcing and expanding on #2: when you've got something like a static Dictionary<[foo],[bar]>, your goal may be to keep the Values alive, but keep in mind that you're also keeping the Key objects alive as well, which may not be exactly what you're shooting for. For example, instead of using an entire User object as a key, consider using the User's Username value instead (assuming it's unique, of course) - see the sketch after this list.

    • At the risk of being redundant, I'll just go ahead and get real specific here, since it's the catalyst for this post: in the average page request, the Page object is, like, the worst thing to cache and/or keep in a static collection... Think about all of the objects that are latched on to it and are subsequently kept alive due to their association with this monster of an object. The thought of everything I was keeping alive with EVERY unique page request still haunts my dreams today...

  4. Overall, take a minute and think about why you're choosing to cache this particular item in the first place: whether you're actually duplicating data that's already been cached somewhere else, and whether you're even increasing performance at all.
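
To put #3 in code, here's a minimal sketch - User and UserSettings are hypothetical types, made up purely for illustration:

// BAD: keying on the whole User object keeps every User - and
// everything each User references - alive as long as the dictionary does
private static Dictionary<User, UserSettings> _settingsByUser =
    new Dictionary<User, UserSettings>();

// BETTER: keying on a small, unique value means only the string key
// and the cached settings stick around
private static Dictionary<string, UserSettings> _settingsByUsername =
    new Dictionary<string, UserSettings>();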


As it turned out in my case, not only was I keeping large objects from being destroyed, I was also caching data that was already cached elsewhere (the individual RSS feed links themselves), and I wouldn't have been helping performance even if it was working like I'd planned! Unless the entire page was being cached in the Output Cache (in which case this entire discussion is moot), the page still had to go through its lifecycle, and the RSS feeds were still being registered with the RssFeedManager - the only thing that would've changed is that the RssFeedManager would check to see if they'd already been added and decide not to add them. Where's the performance gain in that?




ASIDE: Sorry if I lost you in that last paragraph... My point is that I was attempting to cache something that was already cached. Had I thought through the entire process, I (hopefully!) would've noticed that.




Remember how I said that the Page object was a "(supposedly) managed object"? Well, for the most part, it didn't show up in the memory profiler reports; at least not all of it. That is, when we started up the profiler and watched the server memory alongside the profiler report while we hit the page repeatedly, the profiler report showed an approximately 400 KB difference for each page hit while the server memory would shoot up between 5 and 20 megs! It was very interesting, and it certainly didn't help us get to the bottom of the problem any faster. I never did figure out that discrepancy, but it's definitely something to know about going forward.



So, there you have it - my great blunder. Hopefully my mistake can be to your benefit.

Wednesday, October 3, 2007

Halo 3: Nag, Nag, Nag...

A couple of buddies and I finally set aside some time to beat Halo 3 last night (on Legendary mode, of course). It was definitely satisfying, though there were a few things that were pretty damn annoying. Scratch that - one major thing: why - in the middle of an intense firefight - does Cortana have to pop into my head, give me motion sickness, slow down time, and damn near give me an epileptic seizure?? And what is she saying, anyway? Does it help me complete the mission? No. It's almost as if I've reverted back in time and my mom is yelling up to my room: "do your homework!", "take out the trash!", "save the universe!" NAG, NAG, NAG! (Sorry, mom, if you ever read this...) I know what I gotta do, and you flashing in my face and making me walk in slow motion isn't gonna get me any closer to it.

The looming possibility of seizures aside, the gameplay was quite good... but not incredibly unique. I've got to admit, I'd grown tired of the Halo franchise. The multi-player, however, is a completely different story. I've always been pretty ashamed to admit this, but I spend - on average - about 4 hours a night online, playing Halo multi-player. I've always loved it, and Halo 3's multi-player is the best yet. I absolutely love all of the new features! One of the coolest things is the mere fact that it was designed for the Xbox 360 platform, so you don't have to run in the confines of an emulator, desperately longing for all of the great Xbox 360 features you paid for. I do have one question, though: what's the deal with the lag?? I don't know if it's my router/network or not, but I've noticed - even in the few days leading up to the game - a lot more lag lately... and it sucks. Bungie: FIX THE LAG!

Folks, the game has arrived.

And it's awesome.

Bravo, Bungie.

*clap* *clap* *clap*

Thursday, September 20, 2007

Virtual Bartender

Hey, anyone else see this in the September Popular Science magazine?

Online beverage management, password-protected (stay out of the alcohol, kids!), and fully automated. My kind of drinking.

The Virtual Bartender

Sunday, September 2, 2007

I Love Google!

I don't say this very often because - quite frankly - I'm hardly ever this impressed, but: I love Google! Everything they do seems to be cooler and cooler.

So of course they started out with the search engine, then GMail (with links to Calendar, Docs, etc.), then whatever other various cool things they've done that I haven't found out about yet... I even host my personal domain email (jesschadwick.com) with their "Google Apps" service - beautiful! So, when I recently went searching for a new place to call home to my personal blog, I came across Blogger/BlogSpot. I'm no historian - and to be honest I haven't done any research on the fact so I can't be sure - but from things I've read it seems like Google just bought the Blogspot service to add to its repertoire of awesomeness. Does that make it any less cool? No, because the fact is that it still has that Google feel to it. You know - simple, slick, easy-to-use, and quick. Oh, and did I mention customizable?

I'll 'fess up right now: once I found out that Blogger.com was a Google property, I stopped my search then and there (of course I took a walk through it first, but it pleased me just as I thought it would), so I'm not entirely privy to all of the cool free blog sites out there. But the main thing that got me about Blogger.com (after the ad-free interface) was the depth of customization it offered. And right in the admin UI, too. It's just plain awesome.

I've got nothing but respect for Google. If Google was a woman, she'd be the hottest supermodel ever.

Wednesday, August 1, 2007

Dummy Page Handler

My current project at Infragistics is revamping our entire redirect and tracking backend. Since we are dealing with redirects and rewrites, my tests naturally ventured into the realm of doing something to/with the current HttpContext. As anyone who's dealt with this can attest, I had my fair share of difficulties along this road, but I decided to blog about one problem and solution I thought might be particularly interesting and/or useful for anyone testing redirects.

My situation

I have a list of request URLs along with a corresponding redirect path for each of them. The project I'm working on is a library - it's meant to be used in a web site, but is not actually run as a standalone site. You can unit test with your favorite framework all you want, but at some point when you're dealing with redirection and the HttpContext, you're eventually going to want to try it out in a hosted environment. Using blog posts and code from Scott Hanselman and Phil Haack, I set myself up a nice little WebDev process and ran my integration tests within that. Everything worked great, except... RED BAR?!?
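
(For the curious, the "WebDev process" part boils down to launching Visual Studio's built-in web server against a folder of test content. A minimal sketch - the executable path, port, and site folder below are all made up for illustration:)

// Spin up WebDev.WebServer.exe for integration tests
// (the path, port, and site folder here are placeholders)
using System.Diagnostics;

Process webServer = Process.Start(
    @"C:\Path\To\WebDev.WebServer.exe",
    @"/port:8080 /path:""C:\Temp\TestSite"" /vpath:/");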

My redirection tests had failed. "404 Error" messages filled my Output box. Oh... yeah - using the hosted approach actually creates a website using physical files, just like any normal website. The fact that my integration test class library was the thing starting it up was irrelevant - it still expected the physical pages to exist and it wouldn't allow me to just redirect to any hypothetical path that I wanted to (even though my tests didn't really care whether the destination path actually existed or not - just that our response was redirected properly).

(Note: If you're not partial to my storytelling, you can just jump down to the Solution section below for what I eventually came up with.)

My first attempt was a knee-jerk reaction: WebDev wanted a physical file? I'll give it one. So, I added "dummy.html" to my project and called TestWebServer.ExtractResource("dummy.html", destinationPath) before every request. Echk. This approach gave me a few reasons to feel uneasy: I now have an extra (rather meaningless) static file around to clutter up my project; I'm writing a file before EVERY request; and, well, what if sometimes I actually wanted the 404 error to occur?

On to attempt #2. Well, let's call it Attempt 1.1, because I didn't actually solve either of the last two issues, but that stupid extra file in my project was bugging me, so I replaced the call to ExtractResource("dummy.html", destinationPath) with CreateDummyPage(destinationPath). The CreateDummyPage() method did away with reading the dummy HTML page from the resources and just used a StringBuilder to populate the content... and then wrote it to a file. But, like I said - this didn't really fix my two other issues.
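
(The original helper isn't shown in this post, so here's just a sketch of what CreateDummyPage() could have looked like - only the method name comes from the story above:)

// Build a bare-bones HTML page in memory and write it to disk,
// instead of extracting an embedded resource file
private static void CreateDummyPage(string destinationPath)
{
    StringBuilder sb = new StringBuilder("<html>");
    sb.Append("<head><title>Dummy Web Page</title></head>");
    sb.Append("<body>Dummy page content</body>");
    sb.Append("</html>");
    System.IO.File.WriteAllText(destinationPath, sb.ToString());
}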

Solution

The solution ended up being ridiculously easy: write an HttpHandler to serve up dummy pages. The code ended up looking like what you see below. Of course, you can boil this code down even more and simply use a string constant for the page content, but as you can see, I wanted some kind of content on the page (to make sure the Handler itself was actually working, at the very least!).

using System;
using System.Text;
using System.Web;

namespace Infragistics.Guidance.Web
{
    // Serves a simple HTML page for any request it handles,
    // echoing the requested path in the page body
    public class DummyPageServer : IHttpHandler
    {
        private string _PageTitle = "Dummy Web Page";

        public bool IsReusable
        {
            get { return true; }
        }

        public void ProcessRequest(HttpContext context)
        {
            string body = String.Format("Requested path: {0}", context.Request.Path);
            StringBuilder sb = new StringBuilder("<html>");
            sb.AppendFormat("<head><title>{0}</title></head>", _PageTitle);
            sb.AppendFormat("<body>{0}</body>", body);
            sb.Append("</html>");
            context.Response.ClearContent();
            context.Response.Write(sb.ToString());
            context.Response.End();
        }
    }
}

Then, you register the handler in your web.config like so (this is the classic <httpHandlers> registration; the assembly name below is just a placeholder for your own):
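
<!-- In web.config: route requests to the dummy handler.
     The assembly name here is a placeholder - use your own. -->
<system.web>
  <httpHandlers>
    <add verb="*" path="*.aspx"
         type="Infragistics.Guidance.Web.DummyPageServer, Infragistics.Guidance.Web" />
  </httpHandlers>
</system.web>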

Oh, but wait - what about my third concern from before? What if I don't want every .aspx request to be mocked up?? Just be more specific in the handler definitions, such as the following (the path here is just an example):
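
<!-- Only serve the dummy page for one specific path, so any other
     .aspx request (and its 404) behaves normally -->
<httpHandlers>
  <add verb="*" path="dummy.aspx"
       type="Infragistics.Guidance.Web.DummyPageServer, Infragistics.Guidance.Web" />
</httpHandlers>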

This approach seems to be working great for all of my tests so far. No more 404's - yay! I hope it can help with yours.