
Thursday, November 12, 2009

What’s a “Unit Test”?

Photo courtesy of those show cancelling bastards at CBS.

No, I’m not talking about these guys...

Generally speaking, writing any kind of code that exercises the code you've written is a good thing, but the term “unit test” carries a very focused and specific meaning. Listed below are what I consider the most important qualities of a “unit test”:

  • Atomic


    A unit test should focus on validating one small piece (“unit”) of functionality. Generally, this will be a single behavior or business case that a class exhibits. Quite often, this focus may be as narrow as a single method in a class (sometimes even a specific condition in a single method!). In practice, this equates to short tests with only a few (preferably just one) deliberate and meaningful assertions (Assert.That([…])).

    Common Pitfalls & Code Smells
    • Dozens of lines of code in one test
    • More than 2-3 assertions, especially when they’re against multiple objects
  • Repeatable


    A unit test should produce exactly the same result at any time and in any environment, provided that environment fulfills a known set of dependencies, e.g. the .NET Framework. Tests cannot rely on anything in the external environment that isn’t under your direct control. For instance, you should never have to worry about having network/Internet connectivity, access to a database, file system permissions, or even the time of day (think DateTime.Now – the sketch following this list shows one way to factor that dependency out). Failed unit tests should indicate a bug in the code and nothing else.

    Common Pitfalls & Code Smells
    • Tests pass on the first execution, yet some or all fail on subsequent executions (or vice-versa)
    • “NOTE: The XYZTest must be run prior to this or it will fail!”
  • Isolated / Independent

    Building on the first two qualities, a unit test should be completely isolated from any other system or test. That is to say, a unit test should not assume or depend upon any other test having been run, or upon an external system (e.g. a database) having a specific state or producing some specific result. Additionally, a unit test should not create or leave behind any artifacts that may trip up other tests. This is certainly not to say that unit tests cannot share methods or even whole classes between each other – in fact, that is encouraged. What it does mean is that a unit test should not assume some other test has run previously or will run subsequently; these dependencies should instead be represented as explicit function calls or contained in your test fixture’s SetUp and TearDown methods that run prior to and immediately following every single test.

    Common Pitfalls & Code Smells
    • Database access
    • Tests fail when your network or VPN connection is disabled
    • Tests fail when you have not run some kind of external script (other than perhaps an NAnt script to compile, of course)
    • Tests fail when configuration settings change or are not correct
    • Tests must be executed under specific permissions
  • Fast

    Assuming all of the above conditions are met, all tests should naturally be “fast” (i.e. fractions of a second). Even so, it is worth stating explicitly that a unit test should execute practically instantaneously. After all, one of the main benefits of an automated test suite is the ability to get near-instant feedback about the current quality of your code. As the time to run the test suite increases, the frequency with which you execute it decreases, and that translates directly into a longer gap between the introduction and the discovery of bugs.

    Common Pitfalls & Code Smells
    • Individual tests take longer than a fraction of a second to run
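
To make those qualities a bit more concrete, here is a minimal sketch of a test that tries to honor all four of them. This is NUnit syntax, and the Invoice/IClock types are made up purely for illustration – the point is the shape of the test, not the domain:

    using System;
    using NUnit.Framework;

    // A hypothetical clock abstraction so the code under test never touches DateTime.Now directly.
    public interface IClock
    {
        DateTime Now { get; }
    }

    public class Invoice
    {
        private readonly IClock clock;
        public Invoice(IClock clock) { this.clock = clock; }
        public DateTime DueDate { get; set; }
        public bool IsOverdue() { return clock.Now > DueDate; }
    }

    [TestFixture]
    public class InvoiceTests
    {
        private class FakeClock : IClock
        {
            public DateTime Now { get; set; }
        }

        [Test]
        public void IsOverdue_ReturnsTrue_WhenDueDateHasPassed()
        {
            // Repeatable & isolated: no database, no network, no system clock -
            // the "current" time is injected, so the result never varies.
            var clock = new FakeClock { Now = new DateTime(2009, 11, 12) };
            var invoice = new Invoice(clock) { DueDate = new DateTime(2009, 11, 1) };

            // Atomic (and fast): one deliberate, meaningful assertion.
            Assert.That(invoice.IsOverdue(), Is.True);
        }
    }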

If one were really clever, they might arrange the above into a cute little acronym like “FAIR”, but the order in which they appear above is very deliberate; it is the rough order of importance that I place on each quality.

Unit Tests vs. Integration Tests

Odds are that if you have written any automated tests recently, you probably violated one of the above guidelines… and probably for very good reason! What you have produced, my friend, is another very valuable form of automated test called an integration test. As opposed to a unit test - whose sole purpose is to validate the logic and/or functionality of a specific class or method – an integration test exists to validate the interaction (or “integration”, as it were) between two or more components. In other words, integration tests give the system a good work-out to make sure that all of the individual parts work together to achieve the desired result – a working application.

As such, integration tests are just as valuable – if not more so – in a business sense as unit tests. Their major drawbacks, however, are their slow speed and fragility. Not only does this mean that they will get executed less frequently than a unit test suite, but the rate of false-positives (or negatives… however you want to look at it) is much higher. When a unit test fails, it is a sure indication of a bug in the code. In contrast, when an integration test fails it may mean a bug in the code, but it could also very well have been caused by other issues in the testing environment, such as a lost database connection or corrupt/unexpected test data. These false positives – though a useful indicator that something is wrong in the developer’s environment – usually just serve to slow down the development process by taking the developer’s focus away from writing working code. Assuming you want to avoid these distractions whenever possible, the conclusion I come to is that you should rely on extensive coverage from a solid unit test suite and supplement that coverage with an integration test suite – not vice-versa.

References

A great deal of the reason I even took it upon myself to write this blog post was because I couldn’t really find many good online articles or posts concerning “what makes a unit test”!  Below are a few of the great ones I did find.  It may seem like I stole from some of them, but the ideas above really are my opinions…  they just happened to be widely shared. :)

However, it seems at this point if you are very interested in learning more about this topic, books are your best bet.  Anything by the “usual suspects” (Fowler, Hunt, Thomas, Newkirk…) is a great bet, but here are a few I have read and loved:

Friday, October 16, 2009

TFS Ain’t So Expensive Anymore

Those of you who scoffed at the enormous price tag on previous releases of Team Foundation Server will be happy to hear about “TFS Basic” - the new offering of TFS 2010 (or as Brian calls it, “TFS for SourceSafe users”  **shudder**).  Presumably, this new offering includes all of the functionality that small development shops will need to thrive on TFS, while still offering a sane upgrade path.

I haven’t been able to find exactly how much this new SKU is going to run you, but from what I’ve seen it will not be $0.00 (AKA: free).  As my last post shows, I am a huge fan of the free & open source offerings out there, but making those disparate projects integrate with one another can quite often mean a whole lot of time and energy.  Even with the astronomical price tag of previous versions, the out-of-the-box integration of Source Control, Continuous Integration, and Change Tracking has always been incredibly alluring to me.  And, for those who really desired it, it was probably worth the cost.  The exciting part of this announcement is that you can now get this powerful integration – sans advanced features – for a fraction of what the full TFS system used to cost…  and that is pretty damn cool if you ask me.

Will I make the switch from Subversion+CruiseControl+[whatever change tracking and planning tool I’m using]?  Will I solicit my employer to switch?  No.  It’s nice to know that if one of the components isn’t working out for us or we find something better, we can replace just that one component and leave the others in place.  Additionally - while the initial pain in getting these open source solutions wired together can be substantial - once the initial price of time and effort is paid, it rarely gets in the way again.  However, those are existing installations I’m referring to;  for new projects, I will most definitely be evaluating TFS Basic along with the others and I expect that the savings we’d realize in integration alone will be enough to make it a leading contender.

For those of you who have never had the pleasure of using Team Foundation Server, I strongly suggest you go grab these bits and try it out.  Sure, it’s got its downsides (as anyone who follows me on Twitter knows), but it is also one hell of a nice product and certainly worth checking out.  What’s more – with TFS Basic, you no longer need to be in a server environment – you can feel free to install it on your local development environment!  Go download and install the bits and come back here and let me know what you think!

Friday, September 4, 2009

Issue Tracking Integration with Subversion & TortoiseSVN

Many development shops have the requirement to associate any code changes with a backlog item or defect to help track the time and energy spent working against a particular feature set.  This need is so prevalent that the awesome TortoiseSVN Windows Explorer Subversion extension actually has some special settings you can use to help make your life a little easier.

Generally when demoing something like this, I like to show the finished result and then jump back to the beginning and follow the whole process step-by-step, but I’m going to break stride with this post.  I’m just gonna do it.  Here it goes:

How do you associate a backlog item/bug/issue ID with a check-in?

Add the “bugtraq:message” property to the root folder of your repository, and set it to something like “Backlog ID: %BUGID%”.

Yep, that’s it.  I know - crazy, right?  Next time you go to check in, you’ll see something like this:

[Screenshot: the TortoiseSVN commit dialog, now sporting a new issue ID input box]

NOTE

For those of you who have never used Subversion properties, the easiest way to add one is to right-click on your Subversion folder, then select “TortoiseSVN > Properties” which will bring you to the dialog where you can add and edit your Subversion properties.  All of the features I discuss in this post are unlocked using these properties.
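
If you prefer the command line to the TortoiseSVN dialog, the stock Subversion client can set the same property – a quick sketch, assuming you are sitting in a working copy of the repository root:

    svn propset bugtraq:message "Backlog ID: %BUGID%" .
    svn commit -m "Add bugtraq:message property to the repository root" .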

Then, after filling in a value for the Backlog ID and hitting OK to complete the check-in, you can visit the history logs and see that TortoiseSVN has oh-so-nicely inserted the Backlog ID into your checkin comment for you, like so:

[Screenshot: the revision log, with the Backlog ID automatically inserted into the check-in comment]

Some of you might be thinking, “But hey – I could have just typed that myself!”  Yeah, sure – if you wanna waste a bunch of time typing the same thing in every comment, this probably won’t be of much help to you… but wait – there’s more!

Make that integration experience a little bit nicer

For those of you who were underwhelmed with the first section, let’s dive a little deeper and check out some of the other bug tracking properties that TortoiseSVN has made available to us that allow us to customize this behavior:

  • bugtraq:url – now this is the stuff you’ve been waiting for!  If you set this property with a URL pattern containing the %BUGID% placeholder, TortoiseSVN will be nice enough to turn those nifty little messages into direct links to your issue tracking system!  Naturally, this link will be different for every issue tracking system, but assuming your system allows you to link directly to an issue via its ID, this is a pretty sweet option.  Here’s an example:
    By setting the bugtraq:url property to http://myversioncontrol/issues/%BUGID%, my previous history message (shown below) now contains a link directly into my issue tracking system!
[Screenshot: the same log message, with the Backlog ID now rendered as a link into the issue tracking system]

  • bugtraq:warnifnoissue – this awesome setting tells Tortoise to yell at the dev if they haven’t provided an Issue #.  It’s purely a client-side setting and provides no server-side validation, so it’s not going to force users to associate an issue # – but it sure is a nice reminder.  [Screenshot: the warning dialog shown when no issue number is provided]
    Note:  If you really want to perform server-side validation, stay tuned – a post on that topic should be coming up soon!

 

  • bugtraq:append – if you’re like me and want the backlog/issue ID right at the beginning of the message instead of the end, you can set the “bugtraq:append” property to “false” so it prepends the ID snippet to the beginning of the log message instead of appending it to the end (as is the default behavior shown earlier).
    [Screenshot: the log message with the Backlog ID prepended rather than appended]

 

  • bugtraq:label – if you go waaay back to the first screenshot in this post, you’ll see that the label for the Backlog ID input box had the awkward default value of “Bug-ID / Issue-Nr:”.  This bugged the heck out of me, and I wanted to change it to match my message of “Backlog ID:”.  Luckily, the bugtraq:label property lets you do just that!  I just set the value of bugtraq:label to “Backlog ID:” and I was good to go!

Well, I hope you found these useful and that they help you in your quest to write better software.  As always, if you know of a better way to do this or have any comments, suggestions, or questions, please feel free to comment below.

Good luck, and happy coding!

Saturday, July 25, 2009

On to the Next Set of Challenges!

You won’t see many of my blog entries get too personal, largely due to the fact that I am a relatively private person to start with, but also because this is supposed to be a technical blog.  That said, I wanted to break stride for one post and speak to the fact that I am leaving Infragistics and have now moved on to new challenges by deciding to start consulting.

It has been a great 3.5 years at Infragistics for me and I cannot speak highly enough of my time there.  Never before have I worked in an environment so rich with knowledge, intensity, and passion.  It is an environment of productivity; it’s where teams of great minds and talented professionals join forces to produce amazing results, wasting no time in shipping amazing stuff.  It’s also fast-paced: a few of us had at one point discussed the concept of “Infragistics Time”, joking that one or two days at IG would be the equivalent of up to a week anywhere else… and I’m not just talking deliverables.  To put it another way, I joined Infragistics as a Senior Web Developer, but after only one year in, I’d learned more and gained more experience than in the entire rest of my career combined.

I didn’t do all this learning in a silo.  Reporting to the guy I would end up calling my mentor - Ambrose Little - was a crucial aspect of my development.  Until you’ve actually met him (and if you haven’t had the pleasure, the least you can do is follow him on Twitter!), it’s hard to describe just how awesome this guy is.  Crazy smart, level-headed, patient, and open-minded are just a few words that come to mind.  He guided both me and our group to continuously increasing levels of success… and he was really only doing it “part-time”, having another whole set of responsibilities above and beyond managing me and the website(s)!  It was also through this team that Ambrose led that I was able to foster deep personal and professional relationships with Todd Snyder and Ed Blankenship – two guys that I guarantee will continue to be two of my most valuable friends and colleagues for the rest of my professional (and personal!) life.

As an active supporter of the .NET community, Infragistics also introduced me to the amazing rewards of community involvement.  In a matter of months I had gone from never having attended a local user group meeting to becoming a presenter, eventually assuming leadership of our local group, NJDOTNET, and later earning Microsoft’s MVP award!  This was all great fun, but I only recently realized just how deeply this involvement had affected me when my recent job search had me writing out my professional priorities and “community involvement” emerged as #1!  And I owe all of this to Infragistics’ support, as well as to trying to follow in the footsteps of both Ambrose and Jason Beres… which is not an easy thing to do!

I didn’t mean for this post to be a biography of my tenure at Infragistics, and as such I am focusing on those with whom I worked the longest and who had the deepest impact on my life. Unfortunately, that means leaving out the myriad other great folks that I was lucky enough to meet and work with.  So, I’m sorry that I am leaving so many of you out, but you know who you are and – even if I wasn’t able to mention you specifically – thank you for making my time at Infragistics a great one!  Farewell everyone – I’m sure I’ll see you all again sooner or later!

Shameless Plug:

So…  as you may have noticed, I opened up this post by mentioning that I decided to start consulting.  That means that if you’re looking for some help to knock out that next awesome project of yours, please feel free to contact me! 

If you’re interested, here’s a link to my resume in Word 2007 format.

Sunday, July 5, 2009

Book Review: NHibernate in Action

Over a year ago I wrote about my NHibernate Lazy Loading Snafu and in that blog post it was pretty clear I was mostly clueless when it came to NHibernate.  Unfortunately, that hasn’t changed much in the past year, so I was incredibly eager to get my hands on the new Manning book, NHibernate in Action.  Believe me, it did not disappoint.

I’d argue that this book might more appropriately be named something along the lines of “ORM in Action (with a focus on NHibernate)” because it is not only a bible for understanding and using NHibernate, but for ORM concepts in general!  The authors skillfully intertwine detailed and insightful discussion of general database, ORM, and enterprise development concepts with the nitty-gritty implementation details of NHibernate, all in an easy-to-read manner.  Beginning with a tour of many of the various ORM (and ORM-ish) solutions available to .NET developers and ending with a few chapters dedicated to discussing best practices of enterprise application development, this is a very well-rounded book that is easily digested by developers of pretty much any skill level.  I knew only high-level details about NHibernate and had made a few misguided attempts at implementing it by myself prior to reading this book, but now I feel incredibly confident that I will be able to create plenty of NHibernate-driven applications with ease.  Another great benefit is the comfort I get from knowing that when I hit any more snafus in the future, this book will be there as a solid reference to help get me through.

The cons?  It’d be nice if the book discussed NHibernate 2 & .NET 3.x functionality (like LINQ-to-NHibernate), but I think those expectations are somewhat unrealistic.  Because of its open source nature, NHibernate is a living organism, in stark contrast to a published book.  Given that contrast, I am more interested in a text that can explain the fundamental concepts than in an incredibly in-depth (and quickly obsolete!) explanation of the technical implementation of those concepts.

When it comes down to it, this is a great book that delivers on its promises and provides a comprehensive look at NHibernate in Action and how you can get it working for you.  I’m just gonna come right out and say it – this is the NHibernate Bible.

Friday, June 12, 2009

Using WebForms Controls in ASP.NET MVC: The Unholy (and Cost-Effective!) Union

My buddy and fellow Infragisticsian, Craig Shoemaker, posted a blog post and a video on our Community site showing how you can use the current Infragistics Web controls in ASP.NET MVC.  Craig’s posts are invaluable because he shows you how you can leverage your current investment in the WebForms controls you’ve already purchased by using them in your ASP.NET MVC applications.

I worked with Craig on some parts of the sample he’s discussing (which is to say that I wrote about a dozen lines of code and then sat back while he did the rest…) and I can say that we’re not trying to play any tricks here – we’re not trying to sell you snake oil.  In fact, in his post, he admits almost immediately that mixing WebForms server controls and MVC is an “unholy union” – something I (and I’m sure most other MVC-ers) whole-heartedly agree with.

We all know that WebForms controls are not "MVC controls" (a concept which has yet to be clearly defined) and vice-versa.  However, that’s not to say that the product offerings available today can’t offer you a good deal of value if applied deliberately and judiciously.  That subjective phrase, “deliberately and judiciously”, is exactly what Craig does a great job of addressing with these posts by offering guidance on when, where, and how you might use these existing controls.  Hopefully, this guidance can help get you through until there are true “MVC controls” available for you to use.  After all, you may need to make some compromises and sacrifices along the way, but it still beats writing this stuff from scratch!

But hey - don’t let me jam my opinions down your throat.  What do you think?  Is this “unholy union” so unholy that it’s actually blasphemous?  Do you like this approach?  Are there any ways it could be better?  The only way the situation can improve is if we developers all constructively contribute to the larger discussion about what we want to see happen in this space… so let’s get it started!

Wednesday, May 20, 2009

Helping Silverlight and ASP.NET MVC Work Together

If you’ve worked with Silverlight you’ve probably used the WebForms control that comes with the Silverlight SDK.  Technically, you can continue to use this control with ASP.NET MVC; you’ll just need to add a ScriptManager with EnablePartialRendering=”false”, like so:

    <form id="form" runat="server">
<asp:ScriptManager runat="server" EnablePartialRendering="false" />
<asp:Silverlight ID="MySLApp" runat="server"
MinimumVersion="2.0.31005.0"
Source="~/ClientBin/MySLApp.xap"
OnPluginLoaded="pluginLoaded"
InitParameters="myParam=true"
Width="415" Height="280" />
</form>

Sure, this technically still works, but it's not very MVC-like, is it? The new ASP.NET MVC parlance is filled with code snippets and Extension Methods, not Server Controls! We'll instead want something that looks like this:



<%= Html.Silverlight("~/ClientBin/MySLApp.xap", new Size(415, 280),
new {
MinimumVersion="2.0.31005.0",
OnPluginLoaded="pluginLoaded",
InitParameters="myParam=true"
}) %>




Personally, I think the Extension Method way looks a lot cleaner and feels a lot more natural in MVC Land. However, if you don't really see a difference between those two, or see the difference and don't really care one way or another, feel free to continue using the WebForms example and don't bother reading any further. Just be sure to include that ScriptManager, make sure you set EnablePartialRendering="false" and you'll be ready to go.



Creating the Extension Methods



I'm assuming if you're still reading that you not only dig the Html.Silverlight Extension Method above, but you're more interested to see how it works! Well, it's pretty simple, really...



Before I show you the code, let's take a step back and reevaluate what I'm really looking to do here. Sure, I said before that I wanted to replace the Silverlight WebForms control, but what I really want to do is duplicate the HTML it renders (since that's what it's all about, right?). So, here's the markup I'm shooting for:



    <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" height="280px" width="415px">
<param name='minRuntimeVersion' value='2.0.31005.0' />
<param name='autoUpgrade' value='true' />
<param name='source' value='/ClientBin/LogUploader.xap' />
<param name='OnPluginLoaded' value='pluginLoaded' />
<param name='InitParameters' value='customParam=true' />
<!-- [ Silverlight not installed message here ] -->
</object>

Pretty straightforward, right? Basically, you've got the <object> tag with some pretty standard attributes, then a bunch of <param> tags inside, filled with name/value pairs. Should be pretty simple to reproduce - let's take a shot at it. The way I went about it was actually just copying and pasting the above snippet into my C# class and replacing each line with the appropriate C# calls to generate it. Here's what it looks like:



    // Assumed helper: the format string used for each <param> tag below,
    // inferred from the target markup shown above.
    private const string ParamHtmlFormatString = "<param name='{0}' value='{1}' />\n";

    public static string Silverlight(this HtmlHelper html, string relativeControlPath,
        Size size, object parameters)
    {
        var controlPath = VirtualPathUtility.ToAbsolute(relativeControlPath);

        var objectTag = new TagBuilder("object")
        {
            Attributes = {
                {"data", "data:application/x-silverlight-2,"},
                {"type", "application/x-silverlight-2"},
                {"width", size.Width.ToString()},
                {"height", size.Height.ToString()},
            }
        };

        var innerHtml = new StringBuilder();
        innerHtml.AppendFormat(ParamHtmlFormatString, "minRuntimeVersion", "2.0.31005.0");
        innerHtml.AppendFormat(ParamHtmlFormatString, "autoUpgrade", "true");
        innerHtml.AppendFormat(ParamHtmlFormatString, "source", controlPath);

        foreach (var param in new RouteValueDictionary(parameters))
            innerHtml.AppendFormat(ParamHtmlFormatString, param.Key, param.Value);

        innerHtml.AppendLine("\n<!-- [ Silverlight not installed message here ] -->");

        objectTag.InnerHtml = innerHtml.ToString();

        return objectTag.ToString();
    }

There are a couple interesting things going on in this snippet. First off, I start by resolving the absolute path to the Silverlight XAP; this needs to be resolved because this URL will be sent down to the client, and an application-relative path (starting with "~/") does us no good in a browser. Next, I use the new System.Web.Mvc.TagBuilder class which (as Reflector shows us) is what the framework uses to construct HTML in its Extension Methods (such as Html.ActionLink, Html.Form, etc.). I also supply it with a few standard attributes.

Note that I've hard-coded the Silverlight 2 version info and MIME type... I'm not recommending that you actually do this - it will most certainly attract rabid hamsters to come and eat your code - but for simplicity's sake I'm doing it in this example anyway.




By this point, you've probably got a pretty good idea about what's going on, but I want to point out one last thing - the usage of System.Web.Routing.RouteValueDictionary. Again taking a cue from the MVC framework itself, I’m using this incredibly helpful (albeit poorly named) class from the new System.Web.Routing namespace to convert anonymous types into a set of key-value pairs that we can then use in our Silverlight method to dynamically add parameters (which are, conveniently enough, simply name/value pairs!).
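
In case you haven't run into RouteValueDictionary before, here's a quick sketch (the console program is purely for illustration) of what it does with an anonymous type - it reflects over the object's properties and exposes them as key/value pairs:

    using System;
    using System.Web.Routing;

    class Program
    {
        static void Main()
        {
            // The anonymous object's property names become the dictionary keys.
            var parameters = new RouteValueDictionary(new
            {
                MinimumVersion = "2.0.31005.0",
                InitParameters = "myParam=true"
            });

            foreach (var pair in parameters)
                Console.WriteLine("{0} = {1}", pair.Key, pair.Value);

            // Output:
            //   MinimumVersion = 2.0.31005.0
            //   InitParameters = myParam=true
        }
    }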



After it's all done setting everything up, the Silverlight method asks the TagBuilder to render out the markup for our new object tag and its children, and with that, we're pretty much done!

Saturday, May 16, 2009

Windows 7 Training and Informational Resources

Microsoft Learning has just launched three free eLearning Clinics that you or your friends and co-workers may be interested in checking out. These Clinics are geared towards three different audiences, and focus on introducing new features and functionality to those interested in simply learning more about the OS or those that are already considering deploying in the near future.

Also, in case you are interested in more Windows 7 training and skills development information, the new Windows 7 Learning Portal is now live as well! This site is currently showcasing great readiness content, including 7 Silverlight Learning Snacks, free sample chapters from upcoming MS Press Books, Learning Plans, links to clinics/HOLs and more. If you care to check it out, click on any of those links or visit the homepage: http://www.microsoft.com/learning/windows-7/default.mspx.

Enjoy, and let me know if you find anything helpful to you!

Thursday, May 14, 2009

Real Software Artisans Ship

In one of his amazing screenplays, Glengarry Glen Ross, David Mamet sends in a rock star salesman (played by Alec Baldwin) to antagonize an office of poorly performing salesmen.  He reminds them of a core tenet of sales – “A-B-C: Always Be Closing” – or else: first prize is a Cadillac El Dorado; second prize is a set of steak knives; third prize is you’re fired.

Glengarry Glen Ross (warning: NSFW - language)

Steve Jobs says, “real artists ship.”  Now, I’m no artist, but code undoubtedly contains structure and style. When we developers care enough about our craft to consider this structure and style during the course of development, I don’t think it’d be too far off to consider ourselves an artist of sorts.  Or, if you want to sound more original (or pretentious) you might call us “Artisans”.

Of course, Steve Jobs built a booming hardware and software empire on the motto of “real artists ship,” so I don’t think it’s too far of a stretch to embrace and extend—er, I mean paraphrase Steve’s great line into, “real software artisans ship.”  Agile methodologies preach similarly: “A-B-S:  Always Be Shipping.”  If you’re practicing Agile properly, you are constantly shipping; you are shipping something at the end of every iteration.  Even if your customers/clients aren’t actually getting their hands on it and using it, you should still be “shipping” it.  You should strive to constantly and consistently have something that works. Test-Driven Development helps a great deal with this because, as Uncle Bob says, if you’re practicing it zealously you never go more than a few minutes without everything working.

Ok, so what about the real world?  I know, I know – there are plenty of Agile shops and TDD zealots working in the “real world”, but even the Agilists will (regretfully) admit that a majority of the software development industry is simply not following these practices (and some aren’t following any practices at all!).  But, does not being an active Agile practitioner preclude you from constantly and consistently shipping? 

There is obviously a vast difference between the quick iterations preached by Agile methodologies and a Death March, but I’d like to think that even if you or your team are following a Waterfall or SCRUM-fall or even a Free-fall approach, there is still some room for “constantly shipping”.  Sure, you might have to loosen the definition of “constantly” to fit your reality…  but it’s doable!  Following – or better yet, adapting – even some of the Agile methodologies is a great first step (for more on that, check out my follow-up post Some Tips on How to Ship Better Code).  More importantly, just keeping the goal of a shipping product – instead of that next big feature – in mind will probably help more than anything.

What do you think?  Have you been able to effectively employ any techniques in a Waterfall-ish environment to help improve your ability to ship regularly? Is this entire post just full of hot air?

Note:  This post was heavily influenced by Giles Bowkett’s incredibly awesome presentation at RubyFringe.  You must, must, must watch it!!

Wednesday, May 13, 2009

Some Tips on How to Ship Better Code

In my last post, I pontificated about the notion that Real Software Artists Ship.  But, I’ve got to take a step back and admit something – in that last post, I was full of crap.  I don’t really consider myself an “Agilist”, nor do I come anywhere close to zealousness when it comes to TDD, but I have studied (and I use the term loosely!) these movements for some time now and have been able to adopt many of them into my daily grind with varying degrees of success. 

Here are a few that I have found to be the most helpful in shipping better code as fast as possible:

  1. Use Source Control:  I originally didn’t have this listed until I just had to come back and put it as #1.  I’m sure you’re already doing this, but I just had to say it anyway.  If you’re not using source control, rabid hamsters will eat your code and there is nothing you will be able to do about it.
  2. Unit Testing:  What always seems to put everyone off about TDD is the seemingly massive amount of additional work it adds and the recommended zealousness with which you should adhere to it. To those complaints I say: obviously it’s more work; nobody’s debating that.  But the ROI of having a suite of regression tests alone is so incredibly high that it’s foolish not to do it. And, if you’re not keen on religiously adhering to a rigid development process of never writing a line of production code that’s not backed by a test, then don’t… but do seriously consider writing at least a few tests to cover the core functionality of your code.  Writing tests after the fact still offers significant value, even if you aren’t enjoying the full suite of benefits that true TDD has to offer.
  3. Continuous Integration (CI) Builds:  At my last job, a co-worker of mine had a sticker affixed to his monitor proudly proclaiming, “It works on my machine.”  Even if you are a one-developer shop, the benefits of ensuring that you’ve successfully checked in everything needed to build your application are pretty spectacular.  This is so very relevant because the fact is – one-developer shop or not – your production environment is not your machine (or if it is, well, I don’t know what to say… stop doing that? Pretty please?).  Also, if you’ve already got unit tests from the previous recommendation, you’ll find that they go very well with CI Builds.  They go beyond a simple compile to actually running your full suite of unit tests to exercise your code every time, which is a huge win!
    Your company doesn't have a CI server? Start one on your machine!
    This may sound contrary to avoiding the "Works on my machine" syndrome, but having a continuous integration server - even on your own machine - is better than nothing at all. You may not be testing your code on another machine, but you are at least testing it outside of your working codebase and are still being forced to run your unit tests at regular intervals, which are pretty big wins regardless of which machine they're occurring on.
  4. Use a Refactoring Tool (Liberally):  There are some bugs that just never should have happened.  I’m talking about things like existing code that worked until you wanted to move it into its own method – now it’s throwing null reference exceptions because you forgot to initialize that one variable.  Now, I’m not saying that these tools will eliminate this scenario, but they will make it much more difficult to achieve.  Interestingly enough, for those tools like ReSharper that provide suggestions on improving your code, I found that I was actually learning some things while using these tools!  At the very least, those suggestions really help encourage you to clean up your code by acting like a nagging parent - “are you really going to leave this like this? This is embarrassing!”  Course, unlike the nagging parent, if you disagree with the suggestion, you can just turn it off!

Those are my main tips.  What are some of yours?

Sunday, May 3, 2009

Leverage ASP.NET Control Adapters for a (slightly) Better UX

If you’re anything like me, you’ve heard of ASP.NET Control Adapters, but had just dismissed them as a tool that CSS enthusiasts and control freaks could use to make the Web Forms controls render out exactly the way they wanted.  Wanted a <div> instead of a <table> layout? Use a Control Adapter!  Want to… oh, I can’t even come up with a second one.  Point is, until recently I’d basically been dismissing Control Adapters as one of those extension points that the ASP.NET Framework offers, but nobody really has to use to get their usual work done.  Actually, I still pretty much feel that way, but I did recently come up with what I think is a pretty good application for a Control Adapter.  I’ll explain it below – you let me know what you think!

Pet Peeve:  Drop-Downs with a Single Selection

Select or Drop-down lists (or “combo boxes” as everyone else calls them) are a pretty useful UI element, so it makes sense that they’re used pretty liberally across the web.  But, have you ever gotten halfway through filling out that form and come across this?

[Screenshot: a drop-down list expanded to reveal only a single option]

Yeah, me too. And it's pretty annoying, especially since they're not usually as evident as this one is and you actually waste time expanding it just to find out that you never had an option to begin with.  The first approach most developers take is to just disable the control, graying it out so it is "clear" to the user that they have no other options to select.  I'm talking about something like this:

[Screenshot: the same drop-down, disabled and grayed out]

Meh. It certainly doesn't suck as much as the first example, but it's far from an ideal interaction. Users are still left wondering, "Well, what other options do I have that they won't let me see?" (and - depending upon their level of self-esteem - maybe something like, "What, am I not good enough for those other options? Man, this always happens to me - people are always leaving me out and [...]"). While there's not a whole lot you can do to raise your users' self-esteem (or if there is, that's a whole separate blog post altogether), you can eliminate this whole situation altogether in a very simple and straight-forward way: just tell them what the value will be. Just do this:

[Screenshot: the single value rendered as plain text instead of a drop-down]

Looks simple enough, right? I'll bet for the developers in the crowd, your wheels are already churning, trying to figure out the best way to do this. Just like me, your knee-jerk reaction is probably going to involve extending or wrapping DropDownList, but the problem with that is that you now have this new control, and in order to use it you have to scour your entire site and replace any instances of DropDownList with MySuperAwesomeDropDownList. But, since that really wasn't an option for me, my response was to create a Control Adapter.

Implementing a Custom Control Adapter

ASP.NET Control Adapters are a neat way of controlling exactly how controls get rendered down to your clients… even the ASP.NET framework controls!  To take advantage of them, there are two steps: first, create your adapter; then, register it in your .browsers file so that the framework will pick it up.

To achieve the behavior I showed earlier, what we’re going to want to do is override the way our DropDownList controls get rendered out and insert some logic.  Namely, if we’ve got any more than one item, let the control do its thing… but, if we’ve got only one item, take over and instead just render the text of the item out instead of the combo box.  Here’s the code:

public class SmartListControlAdapter
    : System.Web.UI.Adapters.ControlAdapter
{
    protected ListControl WrappedControl
    {
        get { return this.Control as ListControl; }
    }

    protected bool ShouldDisplaySmartText
    {
        get
        {
            return WrappedControl.Items.Count < 2
                && WrappedControl.SelectedItem != null;
        }
    }

    protected override void Render(System.Web.UI.HtmlTextWriter writer)
    {
        if (ShouldDisplaySmartText)
            writer.Write(smartText());
        else
            base.Render(writer);
    }

    private string smartText()
    {
        return string.Format("<span class='smartListValue'>{0}</span>",
            WrappedControl.SelectedItem.Text);
    }
}



You’ll see I added the WrappedControl property to cast the base Control property to a ListControl so I don’t have to do that every time I access it.  Wait – why a ListControl when I said earlier that we were targeting a DropDownList control instead?  Well, after I was done writing all the code you see above, ReSharper let me know that based on the way I was using my reference, I was only using those properties and methods defined in the ListControl base class.  Even though I probably won’t ever use this for anything other than the DropDownList, I figured why limit myself? :)



You’ll also notice that – outside of the cast to a ListControl – nowhere in this adapter code does it say which control it’s targeting.  In order to actually apply this adapter to the controls on my pages, I’ll need to tell the framework in a separate location which controls I’d like to apply it to.  This is where the .browsers file(s) come in.  If your project doesn’t have an App_Browsers folder, you can right-click on your project and click Add > Add ASP.NET Folder > App_Browsers.  Once this is complete, you can again right-click on this new folder and add a new item using the Browser File template (the name, other than the .browser extension, doesn’t matter).  You can then paste the following inside this file:



<browsers>
    <browser refID="Default">
        <controlAdapters>
            <adapter
                controlType="System.Web.UI.WebControls.DropDownList"
                adapterType="ControlAdapters.SmartListControlAdapter"
            />
        </controlAdapters>
    </browser>
</browsers>


Simple enough, right?  Here in the controlAdapters section for the Default (every) browser, we’re telling the framework to wrap all of our DropDownList instances with our new SmartListControlAdapter.  It really doesn’t get much simpler than that!



Now we can create a quick test page:



<p>
    Regular Drop-Down:
    <asp:DropDownList ID="RegularDropDown" runat="server" AutoPostBack="true">
        <asp:ListItem Text="First" Value="1" />
        <asp:ListItem Text="Second" Value="2" />
        <asp:ListItem Text="Third" Value="3" Selected="True" />
        <asp:ListItem Text="Fourth" Value="4" />
    </asp:DropDownList>
    <br />
    <em>Selected Value: <%= RegularDropDown.SelectedValue %></em>
</p>

<p>
    Smart Drop-Down:
    <asp:DropDownList ID="SmartDropDown" runat="server">
        <asp:ListItem Text="One Value" Value="1" />
    </asp:DropDownList>
    <br />
    <em>Selected Value: <%= SmartDropDown.SelectedValue %></em>
</p>


You can see I added a few test lines that write out the SelectedValue after each of the controls to prove that the underlying DropDownList control is not modified, just displayed differently.  This means that the SelectedValue (along with everything else) can still be used as normal.



Finally, the moment we’ve all been waiting for; the results of the previous snippet:



[Screenshot: the regular drop-down rendered as a normal select list, and the smart drop-down rendered as plain text]



Not incredibly styled, but beautiful nonetheless!

Your Thoughts



So, what do you think about this approach?  Has this problem bothered you before?  What ways have you solved it?  I’d love to hear about them!

Sunday, April 19, 2009

Of HTTP Headers and Firefox Add-Ons…

Until just last week we had hosted our main website on a single web server; however, an impressive team effort of just a few days brought that situation to a screeching halt when we finally moved everything to a brand new, long-awaited web farm. The benefits of web farms are both well-known and mostly obvious, but it’s not all roses and puppy dogs - web farms certainly bring their share of headaches as well.

One of the pains I immediately felt as soon as we went live was the fact that I had no idea which server in the farm I was hitting with any given request. Yes, of course to you the site user this really shouldn’t (and doesn’t) matter, but for me and my team when something isn’t going quite right, knowing which machine this uninvited behavior is originating from is a crucial piece of information. My knee-jerk reaction was to place some kind of identifier in the page content. To this end, I opened up the master pages on each server, scrolled to the end and plopped in an HTML comment “<!-- A -->”, “<!-- B -->”, and so on. This certainly fit my immediate requirement, but had a few annoying downsides: first off, this certainly wasn’t a maintainable solution, since it would have to be re-applied (probably manually, on every server) with every release and – if that weren’t enough by itself (which it was!) – it was just plain annoying having to right-click, View Source, scroll to the bottom, and search for this identifier. My immediate need was filled, but the geek in me was raging – there had to be a better way!

Take 2: After a bit of consideration, I thought taking advantage of the Footers feature in IIS might be a good idea. I’d never used them before, and didn’t know how well it would work for our goals, but decided to try it out anyway. “Open IIS Management console, right-click on my site, Properties… wait – the HTTP Headers tab!” Most any web developer who’s done a decent bit of AJAX has probably used – or at least seen usage of – custom HTTP Headers as a way of communicating meta-data about web responses (e.g. the “X-MicrosoftAjax” header used in ASP.NET AJAX to identify an AJAX request). Frustrated with myself that I hadn’t thought about this initially, I realized this is exactly where I’d wanted to put this info all along. My initial solution of putting the HTML comment in the text left a bad taste in my mouth for so many reasons, but at this point I realized that the worst part about it was that I was putting response metadata in the content, thus committing a blatant violation of Separating Concerns. After slapping my wrist with a ruler and vowing never to do that again (yeah, right!), I set forth to correct it, quickly adding a new custom HTTP Header called “X-ResponseOrigin” on each of my servers, and each with its own unique value. After going back and removing those silly HTML comments, I sat back, made a few requests and happily viewed the responses coming back in Firebug, knowing exactly which server had produced each one.
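
As an aside: I set the header up through the IIS management console, but if you happen to be running IIS 7 with the integrated pipeline, the same custom header can be expressed in web.config - a sketch, with the value obviously differing on each server in the farm:

    <configuration>
      <system.webServer>
        <httpProtocol>
          <customHeaders>
            <!-- Give each server in the farm its own value: A, B, C, ... -->
            <add name="X-ResponseOrigin" value="A" />
          </customHeaders>
        </httpProtocol>
      </system.webServer>
    </configuration>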

So at this point I’m pretty pleased with the way things are going. With Firebug open, I could see everything I needed to in order to troubleshoot server-specific issues. But, after a short time I started getting annoyed again. With every new request, I’d have to scroll down to the bottom of the request history in Firebug, find my request, expand it out, and scroll down, all the while scanning for my new special header. Trying to find this new piece of metadata I added – while arguably better than the previous “View Source, Scroll ‘n Scan” method – was still pretty darn annoying… oh, and what about the guys on my team who didn’t have Firebug?? (NOTE: this is really a hypothetical situation, since it is a well-known team rule that not having Firebug on your machine - even if you’re a non-developer – is punishable by 30 push-ups and having to buy everyone donuts the following morning.) It’s at this point that I remembered what Firebug was to begin with – a Firefox Addon – and realized what I must do next…

Enter the Firefox Addon

I needed to make a custom Firefox Addon to show me what server I was on, right there in my Firefox statusbar. No, it wasn’t a requirement for this project – it was utterly imperative to fulfilling my role of “Lead Code Monkey”; it was my destiny.

A quick Googling brought me to the Mozilla Developer portal, with its wealth of knowledge of all things Mozilla. Come to find out, writing a Firefox Addon is as simple as some XHTML/XML-ish markup and accompanying JavaScript – what could be easier!? The markup was quick – I just had to declare that I wanted a custom statusbarpanel that I could put my content in. The resulting code looked like this:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="chrome://whereami/skin/overlay.css" type="text/css"?>
<!DOCTYPE overlay SYSTEM "chrome://whereami/locale/whereami.dtd">
<overlay id="whereami-overlay"
xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
<script src="overlay.js"/>
<stringbundleset id="stringbundleset">
<stringbundle id="whereami-strings" src="chrome://whereami/locale/whereami.properties"/>
</stringbundleset>
<statusbar id="status-bar">
<statusbarpanel id="whereami.location" label="" />
</statusbar>
</overlay>




As you can see, it’s pretty straightforward. We’ve got our standard XML cruft, followed by what looks like a typical JavaScript reference tag, pointing to our script file (which we’ll get to in just a minute). This is followed by our <statusbar> and <statusbarpanel> elements, letting Firefox know that all we’re going to need for our UI is a little piece of the existing status bar. We also gave that statusbarpanel an ID so we can easily reference it from our scripts later on. Actually, forget “later on” – let’s go see what those scripts look like!



Before we see the code behind all this, let’s revisit the requirements. Basically, we’re looking for this add-on to do two things: grab our custom HTTP header from each response that comes back, and (if we find one) display it in the browser’s status bar. Initially, I had expected the latter to be the difficult part, but much to my surprise it turned out to be as simple as this one line:



document.getElementById('whereami.location').label = responseOrigin;



The cool part here is the usage of the “document.getElementById()” DOM searching that you’re already used to from your normal JavaScript forays. And, there’s that element ID “whereami.location” that we set earlier in our overlay.xul file. Now that we’ve got the statusbar update figured out, let’s populate the value of responseOrigin



Quick Intro to the XPCOM API


In order to get a chance to look at what the user is browsing, you’ve got to create a class that you can register with Firefox as a listener. From that point, you’ll be notified whenever a browsing event happens – say, when a new response has been received. Before we look at the actual implementation, let’s look at a basic implementation of a Firefox listener class (as taken from the Mozilla Developer Center):



function myObserver()
{
    this.register();
}
myObserver.prototype = {
    observe: function(subject, topic, data) {
        // Do your stuff here.
    },
    register: function() {
        var observerService = Components.classes["@mozilla.org/observer-service;1"]
                                        .getService(Components.interfaces.nsIObserverService);
        observerService.addObserver(this, "myTopicID", false);
    },
    unregister: function() {
        var observerService = Components.classes["@mozilla.org/observer-service;1"]
                                        .getService(Components.interfaces.nsIObserverService);
        observerService.removeObserver(this, "myTopicID");
    }
}




You’ll see, all the magic happens in the observe() method, which gets fired whenever a browser event happens. For the purposes of this app, we’re looking out for any time an http-on-examine-response event is fired, indicating a new response has been received. That’s pretty easy – we’ll just check the value of the topic parameter:



if (topic == "http-on-examine-response") { /* Grab the Response Origin */ }



Now let’s take a look at my implementation:



var whereami = {
    requestObserver:
    {
        isRegistered: false,

        observe: function(subject, topic, data)
        {
            if (topic == "http-on-examine-response") {
                var statusBar = document.getElementById('whereami.location');
                statusBar.label = "";
                var httpChannel = subject.QueryInterface(Components.interfaces.nsIHttpChannel);
                var origin = httpChannel.getResponseHeader("X-ResponseOrigin");
                if (origin && origin.length > 0)
                    origin = "Server: " + origin;
                statusBar.label = origin;
            }
        },

        get observerService() {
            return Components.classes["@mozilla.org/observer-service;1"]
                             .getService(Components.interfaces.nsIObserverService);
        },

        register: function()
        {
            if (this.isRegistered) return;

            this.observerService.addObserver(this, "http-on-examine-response", false);
            this.isRegistered = true;
        },

        unregister: function()
        {
            if (!this.isRegistered) return;

            this.observerService.removeObserver(this, "http-on-examine-response");
            this.isRegistered = false;
        }
    }
};

window.addEventListener("load", function(e) { whereami.requestObserver.register(); }, false);

You can see this is basically the same as the snippet from Mozilla, but I’ve added my logic right into the observe() method (lines 8-16).  Let’s see what we’re doing here:




  1. First, we have to get a reference to our little section of the statusbar that we reserved via the ID that I gave it earlier in the overlay.xul markup, clearing any existing (stale) data (lines 9 & 10)


  2. Then, we examine the HTTP headers that got sent back in our response, looking for the value of the “X-ResponseOrigin” that was sent from the server.  (lines 11-13)


  3. Finally, we’ll update the statusbar label with the value we got from Step 2  (line 15)

Cleaner Validation with ASP.NET MVC Model Binders & the Enterprise Library Validation Application Block

I accidentally stumbled across an awesome combination the other day:  using the Enterprise Library Validation Block with ASP.NET MVC.  Though I’ve played around with them a few times in the past, this is the first time I’ve really started to apply the Validation block in a serious application, and it just so happened to have an ASP.NET MVC website as its client.  My jaw dropped more and more as I started to realize the awesomeness that was unfolding before me…  hopefully this blog post will do the same (or as close as possible) to you!

Using the Enterprise Library Validation Block

It all started with an innocent enough Model requiring a wee bit of validation that I didn’t feel like hand-writing, so (as usual) I turned to the EntLib library to do it for me.  Applying the Enterprise Library Validation Block was surprisingly simple. 

It all started with a simple enough class (the names have been changed to protect the innocent):

public class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public double Price { get; set; }
    public int Quantity { get; set; }
}



This is basically just a DTO (data transfer object), but this ain’t the Wild West – there are rules, and they need to be followed!  After a few minutes, I’d come up with something like this:

using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

public class Product
{
    [RangeValidator(
        1, RangeBoundaryType.Inclusive,             /* Lower Bound */
        int.MaxValue, RangeBoundaryType.Inclusive   /* Upper Bound */
    )]
    public int ID { get; set; }

    // Let's assume that we've got a field length limitation in
    // our database of 500 characters, which we'll check for here
    [StringLengthValidator(
        1, RangeBoundaryType.Inclusive,             /* Lower Bound */
        500, RangeBoundaryType.Inclusive            /* Upper Bound */
    )]
    public string Name { get; set; }

    // No rules for the description - anything goes!
    public string Description { get; set; }

    // The Price can be whatever we want, as long as it's positive
    [RangeValidator(0, RangeBoundaryType.Inclusive, double.MaxValue, RangeBoundaryType.Inclusive)]
    public double Price { get; set; }

    // Same deal with the Quantity - we can never have a negative quantity
    [RangeValidator(0, RangeBoundaryType.Inclusive, int.MaxValue, RangeBoundaryType.Inclusive)]
    public int Quantity { get; set; }


    public bool IsValid()
    {
        return Validate().IsValid;
    }

    public ValidationResults Validate()
    {
        return Validation.Validate<Product>(this);
    }
}



There are a couple of cool things I like about this setup:



  1. Declarative validation rules:  These rules are a very explicit expression of business logic - there is no “if-else-then” mumbo-jumbo.  In other words, there isn’t any code to worry about… and no code means no bugs (well, fewer bugs at least :).  Moreover, if any of these business rules change, it’s very easy to update these attributes without hunting around for that stray line of “if-else” code somewhere.  Lastly, I’ve heard talk of these mystical “business people” who are also able to read and understand simple code; if you run into one of these guys/gals, they’ll easily be able to verify that you have the rules set properly as well.
  2. All of the validation logic is in one place:  all that consumers of this class need to do is set its properties and ask the object whether or not it is valid.  There are no stray “if(string.IsNullOrEmpty(product.Name))” checks scattered through your code, just “if(product.IsValid())”.  I feel like this approach has a decent amount of cohesion.  Granted, it could be a bit more cohesive if we had, say, a separate “ProductValidator”, but this seems like overkill.  Regardless, it was bugging me enough that I actually created a super-class to encapsulate this logic further up the chain of inheritance, and that made me feel a bit more comfortable:

     public class SelfValidatingBase
     {
         public bool IsValid()
         {
             return Validate().IsValid;
         }

         public ValidationResults Validate()
         {
             return ValidationFactory.CreateValidator(this.GetType())
                                     .Validate(this);
         }
     }

     public class Product : SelfValidatingBase
     {
         // ...
     }


As with pretty much anything, there is at least one glaring drawback to this approach:  there is no “real-time” checking.  That is, this approach allows consumers to set invalid values on these validated properties at any time – possibly overwriting valid values without any checks prior to the update.  I think that as long as your application (i.e. developers) know about this limitation, it’s not so much of an issue, at least not for the scenarios I’ve used it in, so this drawback doesn’t really bother me.
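
To illustrate that drawback, here’s a contrived little sketch using the Product class from above - the invalid assignment itself sails right through, and the violation only shows up when (and if) somebody bothers to ask:

    // Nothing stops an invalid value from being assigned at the moment it happens...
    var product = new Product { ID = 1, Name = "Widget", Price = -1, Quantity = 5 };

    // ...the violation only surfaces when you check:
    Console.WriteLine(product.IsValid());   // false

    foreach (var result in product.Validate())
        Console.WriteLine("{0}: {1}", result.Key, result.Message);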



Now, let’s see how this applies to ASP.NET MVC…



The Awesomeness that is ASP.NET MVC’s Model Binders



When it comes to me and ASP.NET MVC’s Model Binders, it was love at first sight – and I haven’t stopped using them since.  In case you’re not sure what I’m talking about, here’s an example.  Instead of an Action with individual parameters, populating a new instance ourselves like this:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(string username, string message, string userUrl)
{
    var comment = new Comment
    {
        Message = message,
        Username = username,
        UserUrl = userUrl,
        PostedDate = DateTime.Now
    };
    commentsRepository.Add(comment);
    return RedirectToAction("Index");
}



…we let the MVC framework populate a new instance for us, like this:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Comment comment)
{
    commentsRepository.Add(comment);
    return RedirectToAction("Index");
}



I just think that’s beautiful, and so I’ve come to (over?)use Model Binders on my Controller Actions almost exclusively. 



ASP.NET MVC Model Binders + Enterprise Library Validation Block = BFF



The magic that I refer to at the beginning of this post first exposed itself when I inadvertently used one of my Model objects – like the one I showed earlier – as an Action parameter via MVC’s Model Binding (which was really only a matter of time, given how much I’d taken to using them!), and then created some validation logic for it.  (If you’re not sure what I mean by “creating validation logic”, you’ll want to check out this article on MSDN before continuing.)  I started writing my validation logic in my Action and populating the ModelState with my validation errors like so:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Product product)
{
    if (!product.IsValid())
    {
        if (string.IsNullOrEmpty(product.Name))
            this.ModelState.AddModelError("name", "Please enter a product name");
        if (product.Price < 0)
            this.ModelState.AddModelError("price", "Price must be greater than 0");
        if (product.Quantity < 0)
            this.ModelState.AddModelError("quantity", "Quantity must be greater than 0");

        return View(product);
    }

    productRepository.Add(product);
    return View("Index");
}



Now, even if I moved this code outside of my Action, I’d still be pretty embarrassed by it…  but after looking at it for a while I realized that I don’t have to do this after all – the EntLib ValidationResult (usually) maps perfectly to MVC’s Model Binding… and to ModelState errors!  Check out the same code, taking advantage of the EntLib validation results:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Product product)
{
    var validationResult = product.Validate();
    if (!validationResult.IsValid)
    {
        foreach (var result in validationResult)
            this.ModelState.AddModelError(result.Key, result.Message);

        return View(product);
    }

    productRepository.Add(product);
    return View("Index");
}



I added this and awesomeness ensued.  The magic comes from the fact that the Key field of the EntLib ValidationResult is the name of the property that caused the validation error.  That’s what makes the foreach loop above possible: it simply iterates through all of the validation errors and adds each Message to the ModelState under its Key, which corresponds to the form field IDs we’re using to populate the model.  Just so you don’t think I’m lying, here’s what the form would look like:

<%= Html.ValidationSummary("Create was unsuccessful. Please correct the errors and try again.") %>
<% using (Html.BeginForm()) { %>
    <fieldset>
        <legend>Add New Product</legend>
        <p>
            <label for="Name">Name:</label>
            <%= Html.TextBox("Name") %>
            <%= Html.ValidationMessage("Name", "*") %>
        </p>
        <p>
            <label for="Description">Description:</label>
            <%= Html.TextBox("Description") %>
            <%= Html.ValidationMessage("Description", "*") %>
        </p>
        <p>
            <label for="Price">Price:</label>
            <%= Html.TextBox("Price") %>
            <%= Html.ValidationMessage("Price", "*") %>
        </p>
        <p>
            <label for="Quantity">Quantity:</label>
            <%= Html.TextBox("Quantity") %>
            <%= Html.ValidationMessage("Quantity", "*") %>
        </p>
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
<% } %>
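One small note before moving on:  the Message that flows into the ModelState (and on into the ValidationSummary) is whatever EntLib generates by default, which isn’t always the friendliest thing to show an end user.  If memory serves, the EntLib validator attributes also expose a MessageTemplate property, so you can spell out the wording right on the model – the messages below are just examples, not anything the framework hands you:

// Same properties as before, just with friendlier messages
// (MessageTemplate is set via the attributes' named-property syntax)
[StringLengthValidator(
    1, RangeBoundaryType.Inclusive,
    500, RangeBoundaryType.Inclusive,
    MessageTemplate = "Please enter a product name (500 characters max)")]
public string Name { get; set; }

[RangeValidator(0, RangeBoundaryType.Inclusive, double.MaxValue, RangeBoundaryType.Inclusive,
    MessageTemplate = "Price cannot be negative")]
public double Price { get; set; }

With that in place, the text the user sees reads like something a human wrote, and the Action still doesn’t contain a single line of validation code.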



I Think We Can Do Just a Bit Better…



So, there you have it – easy validation using ASP.NET MVC Model Binders, MVC’s validation components, and Enterprise Library’s Validation block.  The preceding works like a charm, but being the perpetual perfectionist and idealist, I saw one more piece of duplication I wanted to remove: namely, the foreach loop used to map the ValidationResults to the ModelState.  Using an extension method on the ValidationResults class, this duplication can easily be removed, like so:

using System.Web.Mvc;
using Microsoft.Practices.EnterpriseLibrary.Validation;

public static class EntLibValidationExtensions
{
    public static void CopyToModelState(this ValidationResults results, ModelStateDictionary modelState)
    {
        foreach (var result in results)
            modelState.AddModelError(result.Key ?? "_FORM", result.Message);
    }
}



Now the previous Action looks just a bit cleaner:

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Product product)
{
    var validationResult = product.Validate();
    if (!validationResult.IsValid)
    {
        validationResult.CopyToModelState(this.ModelState);
        return View(product);
    }

    productRepository.Add(product);
    return View("Index");
}



And with that, I’m happy…  What do you think??

Tuesday, March 10, 2009

Come Learn Silverlight 2 From a Master!

Hey all - if you're in the Princeton, NJ area this coming Thursday (March 12th, 2009), be sure to stop by our monthly NJDOTNET User Group meeting because this month we will be hosting Jason Beres - international conference speaker, Microsoft MVP, and an author of the two Silverlight Programmers References from WROX Press!  Jason is an incredible speaker and I strongly encourage you to do everything you can to attend this meeting!  Here are the details:

What:  NJDOTNET March Meeting – Understanding RIA’s with Silverlight 2

When: THIS Thursday, March 12, 2009  6:15 PM – 8:30 PM

Where:  Infragistics Corporate HQ  (Click here for directions)

Who:
Jason Beres is the Director of Product Management for Infragistics, the world’s leading publisher of presentation layer tools.  Jason is one of the founders of Florida .NET User Groups, he is the founder of the Central New Jersey .NET User Group, he is a Visual Basic .NET MVP, and he is on the INETA Speakers Bureau.  Jason is the author of several books on .NET development, including the recently published Silverlight 2 Programmers Reference from Wrox Press.  Jason is a national and international conference speaker; he is a frequent columnist for several .NET publications, and keeps very active in the .NET community.

Abstract:
Understanding RIA’s with Silverlight 2
In this code-focused talk, we’ll look at the features in Silverlight 2 and how they can help you build better RIA (Rich Internet Application) experiences.  We’ll look at the Silverlight development experience, how to build a Silverlight application with the new Silverlight 3 features, and how this will help you build rich line-of-business experiences using data binding, animations and media.

Tuesday, February 17, 2009

Tag Mappings to the Rescue!

Our big project at work lately (and the main reason for my previous two months of blog silence) has been upgrading and re-theming our installation of Community Server.  I’ve written a few posts in the past on the modules and customizations I’ve done for our current site, and this upgrade is no different.  In fact, I’ve had to do even more!  The most recent one, which I did just a few minutes ago, happens not to be Community Server-specific at all – it’s a regular ol’ ASP.NET trick – so I wanted to write about it first.  You’ll see more about the Community Server-specific customizations I’ve had to do following this post.

Community Server sites are really just the “Community Server Platform” (which encompasses a whole lotta stuff!) plus a customizable theme on top of that platform.  Like any well-made site, CS themes are plain ol’ ASPX pages with a mixture of user and server controls.  This leaves us with themes that have somewhere between a dozen and a million pages with the following mark-up:

<CSForum:BreadCrumb runat="server" Tag="Div" />

… but I don’t like what it’s outputting, and I want to override some of its behavior.  Overriding the control’s behavior is easy enough, of course – you just extend the class, throw in an override here and there, and you’re all good.  Now we’ve got our new control – Infragistics.CommunityServer.Controls.ForumsBreadCrumb – and the trick is getting this nice shiny control in place.  Your first thought might be that we’re in for a massive global search n’ replace, right?  Wrong!
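For the curious, the override itself is nothing fancy – something along these lines.  (This is a rough sketch only; which members you actually override depends on what the stock BreadCrumb control exposes, so don’t hold me to the Render example.)

using System.Web.UI;

namespace Infragistics.CommunityServer.Controls
{
    // Derives from the stock Community Server control so we can tweak its output
    public class ForumsBreadCrumb : CommunityServer.Discussions.Controls.BreadCrumb
    {
        protected override void Render(HtmlTextWriter writer)
        {
            // Wrap the original output in our own markup (or replace it entirely)
            writer.Write("<div class=\"forums-breadcrumb\">");
            base.Render(writer);
            writer.Write("</div>");
        }
    }
}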


I know I kinda ruined the surprise in the title of this post, but for those of you who skipped over that part, forget the global replace – it’s tag mappings to the rescue!  Tag mappings allow you to substitute (or map if you want to get technical) one control for another using a simple web.config change!  In our case, we’ll do this:


<pages>
  <tagMapping>
    <add
      tagType="CommunityServer.Discussions.Controls.BreadCrumb"
      mappedTagType="Infragistics.CommunityServer.Controls.ForumsBreadCrumb" />
  </tagMapping>
</pages>

It’s pretty straightforward – you’re telling ASP.NET that everywhere you’ve used one tag in your markup (tagType), you want another type used instead (mappedTagType).  This makes it really easy to override and/or extend the functionality of a control and use your custom version in place of the original, without changing a single line of markup.


This tactic can really help reduce the risk of such a major change.  I don’t know about you, but my history with global replacements in markup pages has more often than not caused me more problems (and more time spent fixing those problems) than if I had just made all the replacements manually to begin with.  Next time you’re tempted to do a global replace on a control, take a couple of seconds to think about whether this tactic will work for you.  It might end up saving you quite a bit of time!


And, as always, happy coding!