

Saturday, October 1, 2011

Approval Testing–better ROI for UI testing?

A recent episode of Herding Code featured Llewellyn Falco, the creator of the Approval Tests project.  Initially, I was vehemently against the idea: I consider automated UI testing (building scripts that execute the UI and inspect what happens) incredibly time-consuming and flaky. In other words, you spend a lot of time producing something that has limited value - the ROI is just not there.

But, by the end of the podcast, I was sold.  The concept of Approval Tests seems to drastically reduce the time it takes to create – and, more importantly, maintain – automated UI tests.  The value stays the same but the effort is reduced, which means that the ROI numbers start to become much more tolerable.  Frankly, I think the ROI of backend (non-UI) tests still far overshadows that of UI testing…  but at least it’s now palatable.
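To make the concept concrete, here is a minimal, hand-rolled sketch of the approval-testing idea.  The real ApprovalTests library automates the received/approved file handling and diff reporting for you; everything below (including the RenderCustomerSummary method) is hypothetical and purely for illustration.

using System.IO;
using NUnit.Framework;

[TestFixture]
public class CustomerSummaryApprovalTest {
    [Test]
    public void RenderedSummaryShouldMatchApprovedOutput() {
        // The output under test - in real life this might be rendered HTML
        string received = RenderCustomerSummary();
        string approvedFile = "CustomerSummary.approved.txt";

        // First run: no approved file exists yet, so write out what we received
        // for a human to inspect, rename, and commit as the "approved" version
        if (!File.Exists(approvedFile)) {
            File.WriteAllText("CustomerSummary.received.txt", received);
            Assert.Inconclusive("No approved output yet - review the received file.");
        }

        // Every subsequent run: the test only fails when the output changes
        Assert.AreEqual(File.ReadAllText(approvedFile), received);
    }

    private static string RenderCustomerSummary() {
        return "<h1>Customer Summary</h1>";  // stand-in for real UI rendering
    }
}

The maintenance win is that when the UI legitimately changes, you approve the new output instead of rewriting a pile of assertions.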

Friday, September 30, 2011

Where do Model Binding values come from?

ASP.NET MVC Model Binding is a very powerful feature – arguably one of the most valuable features in the entire framework.  As with many “very powerful” features, it is also pretty complex and this means that it works great… most of the time… until it doesn’t.

One of the biggest questions is “where are these values coming from?”  The simple answer to this question is:  the Request object.  The Request object is a core ASP.NET object - a dictionary of values aggregated from various sources such as the querystring (URL), form post values, and server variables.  The Request object is nothing new – it’s been around in one form or another since the days of classic ASP!

Ok, so you know how I just said that the model binding values come from the Request object?  Uh…  that was kind of a lie. The truth is that they come from ValueProviders (created by ValueProviderFactories). These value providers try to retrieve values from the same places - in the same order - as the Request object. Don’t believe me?  Have a look at the source:

public static class ValueProviderFactories {

    private static readonly ValueProviderFactoryCollection _factories = new ValueProviderFactoryCollection() {
        new ChildActionValueProviderFactory(),
        new FormValueProviderFactory(),
        new JsonValueProviderFactory(),
        new RouteDataValueProviderFactory(),
        new QueryStringValueProviderFactory(),
        new HttpFileCollectionValueProviderFactory(),
    };

    public static ValueProviderFactoryCollection Factories {
        get {
            return _factories;
        }
    }

}




In this way, the order of the default collection of Value Providers essentially mimics the Request object… which is why you are usually pretty safe in considering them “the same” even when they’re not. 
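Because the Factories collection is public, you can also extend this pipeline yourself.  As a rough sketch (the HeaderValueProviderFactory and HeaderValueProvider names below are hypothetical, invented just for this example), here is what a value provider that feeds HTTP header values into model binding might look like:

using System.Collections.Specialized;
using System.Globalization;
using System.Web.Mvc;

// Hypothetical example: expose HTTP request headers as a model binding source
public class HeaderValueProviderFactory : ValueProviderFactory {
    public override IValueProvider GetValueProvider(ControllerContext controllerContext) {
        return new HeaderValueProvider(controllerContext.HttpContext.Request.Headers);
    }
}

public class HeaderValueProvider : IValueProvider {
    private readonly NameValueCollection _headers;

    public HeaderValueProvider(NameValueCollection headers) {
        _headers = headers;
    }

    public bool ContainsPrefix(string prefix) {
        return _headers[prefix] != null;
    }

    public ValueProviderResult GetValue(string key) {
        string value = _headers[key];
        return value == null
            ? null
            : new ValueProviderResult(value, value, CultureInfo.InvariantCulture);
    }
}

Registering it is a one-liner in Application_Start – ValueProviderFactories.Factories.Add(new HeaderValueProviderFactory()); – and since the collection is ordered, where you add your factory determines whether it wins or loses against the built-in sources.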

Friday, August 19, 2011

Advice for new developers


After a recent user group presentation I was asked what advice I had for developers looking to break into the business. The result was the brain dump below.  For some reason I'm a huge fan of bulleted lists, so below is my advice in bulleted form.  The first two are really all you need. The rest are just icing.
  • Find a mentor – What we do is a craft, and the best (quickest, at least) way to go from layman to craftsman is to go through an apprenticeship.
  • Focus on the "why", not the "how" – There are many ways to skin a cat, but the biggest question is: what are you hoping to accomplish by skinning the cat? Our craft is filled with subtlety, nuance, and often strong opinions. The goal is to create working, useful software and the tools we use to do that are often different with each scenario. Thus, the best answer to "how should I do this?" is "It depends."
    • You might want to be an "ASP.NET developer", but ASP.NET and even .NET as a whole are just a piece of the larger development world. Languages and syntax are not universal, but fundamental concepts and techniques are. When you know the fundamentals you can apply them to quickly grasp any new technology or concept.
  • Soak in as many blogs and podcasts as you can – Working physically alongside a mentor cannot be beat. But, if you are trying to "break in to the business" it is often not a viable option.  In lieu of face-to-face collaboration, go to the web! Online tutorials and API documentation tell you the how, but blogs and podcasts usually offer far more insight into the why. Find as many blogs and podcasts as you can and immerse yourself in them. Here are a few that I love:
  • Code, code, code! Focus on the why all you want - it's useless without the how!  You need to learn languages and frameworks and - like spoken languages - the best way to learn them is to use them... over and over.  Until your fingers hurt.
    • Find all the tutorials you can and run through them (considering the why along the way)
    • Make up your own projects and complete them.  Try to come up with things that resemble "real world scenarios"
    • Browse the source of open source projects and see how they do things (this is a tactical twist on the "mentor" concept).
    • Commit to an open source project!  Not only will this force you to figure out how to write the code, most open source project coordinators will give you a "free" code review to boot.  (Hey, that sounds almost like in-person mentoring!)
Software development can be fun, exciting, and very rewarding (in many ways), but in order to get the most out of it, you have to put the time in to learn the craft.

Good luck, and happy coding!

Tuesday, June 7, 2011

What is Test-Driven Development (TDD)?

Test-Driven Development is a development approach that relies on unit tests to drive the development and - more importantly - the design of applications. In order for software to be considered "testable" it must be adequately decomposable, allowing tests to target specific units of logic (e.g. classes, methods, or even specific portions of a method). The requirement for decomposition drives loosely-coupled, "SOLID" architecture which embraces OO principles.

Benefits of TDD

"True", dogmatic TDD – also called “Test-First Development” - dictates that code may only be written to satisfy a failing test, and only the bare minimum code is written to make that test pass. TDD provides several benefits:

  • Loosely Coupled Architecture
    The need for tests to completely control a component’s environment drives loosely-coupled components, which – when extrapolated to the system as a whole - leads to a loosely-coupled architecture. 
  • Focused Development
    The scope of the code being written is limited to the needs of the immediate business requirement. If more code is needed to support future requirements, that work is delayed until future tests drive that development. This keeps developers focused solely on the task/requirement at hand.
  • Regression Test Suite
    Unit tests act as a regression test suite for the remainder of the application's lifetime. And, since dogmatic TDD states that no code can be written without a test to back it, this implies that an application developed using TDD will never have less than 100% code coverage (the percentage of production code exercised by unit tests). That said, true 100% code coverage is very impractical for a number of reasons.
  • Documentation
    Unit tests are merely code that executes other code, and act as extensive “real-world” examples of how components are used, thus providing a form of documentation.
  • More Productive Debugging
    Since “units under test” are adequately isolated and have at least one unit test focused specifically on them, it is often incredibly easy to locate a failing component by looking for the failing test(s). What’s more, since unit tests are executable, debug-able code, developers can easily attach their debugger to a specific test and execute it.

Detriments of TDD

  • More Code
    By definition, the test-first methodology produces a test suite which – at a minimum – doubles the size of your solution’s codebase. This leads to:
    • Increased Lines of Code
      Assuming it takes at least the same amount of time and effort to write test code as it does to write production code, TDD literally doubles the amount of code produced (and the corresponding time it takes to write it).
      Perspective: In terms of the SDLC, the time spent actually writing code is only a fraction of the Implementation phase – much more time is spent on developer testing/verification, debugging, and bug fixing. Taking this into consideration, the increased coding time introduced by TDD is easily offset by more targeted and productive debugging, not to mention lowering the number of bugs to begin with (both in the long term and the short term!). 
    • Increased Cost of Change
      Since unit test code is so closely tied to production code, changes to business requirements mean that both production code and its corresponding tests will need to change. The implications of this change are the same as the preceding bullet: writing and changing code is only a fraction of the SDLC Implementation phase.
  • Even More Code!
    Developers can easily become carried away with writing an abundance of unit tests in an effort to achieve the highest level of code coverage they can. The ROI of additional unit tests against an already-tested component can drop quickly as the number of tests goes up.
  • False Sense of Security
    A high level of code coverage can provide a false sense of security if developers are convinced that the level of code coverage equates to the nonexistence of bugs. However, code coverage only measures whether or not a line of code was executed, not how it was executed (i.e. under what conditions). Consider a highway system: just because you drove your car over every foot of road doesn’t mean those same roads will react the same when traversed by a bus.
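To make that last point concrete, here is a minimal, hypothetical sketch: the single test below executes every line of Divide – 100% line coverage – yet it never exercises the divide-by-zero case, so the bug survives.

using NUnit.Framework;

public class SafeMath {
    public static int Divide(int numerator, int denominator) {
        return numerator / denominator;    // throws DivideByZeroException when denominator == 0
    }
}

[TestFixture]
public class SafeMathTests {
    [Test]
    public void ShouldDivideTwoNumbers() {
        Assert.AreEqual(2, SafeMath.Divide(10, 5));    // covers the line...
        // ...but says nothing about Divide(10, 0), which still blows up
    }
}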

An Example of TDD in Action

Business Requirement

The application must produce the sum of two numbers

Step 1: Write a failing test

public void ShouldProduceSumOfTwoNumbers() {
    Assert.AreEqual(4, new Calculator().Sum(1, 3));                // FAIL!
}

Step 2: Write just enough code to make the failing test pass

public class Calculator {
    public int Sum(int number1, int number2) {
        return 4;                                                  // PASS!
    }
}

And we’re done! Except that what we’ve produced is a method which returns a hard-coded value! This situation is easy to rectify: write another failing test against the same component.

Step 3: Write another test which specifies a different set of parameters

public void ShouldProduceSumOfTwoOtherNumbers() {
    Assert.AreEqual(5, new Calculator().Sum(2, 3));                // FAIL!
}

Since the new test asserts a different result based on different inputs, it fails: the initial implementation of the Sum method returns the hard-coded value 4, which is not what this new test expects.

Step 4: Revisit and refactor the production code to pass the new test

public class Calculator {
    public int Sum(int number1, int number2) {
        return number1 + number2;                                  // PASS!
    }
}

Though simple and contrived, this example effectively demonstrates the process – and more importantly, the mindset – behind Test-Driven development.

TDD and UI Development

As you move further away from the statically-typed, compiled “backend” code and closer to the UI, the unit tests associated with those parts of the system tend to rely on less resilient and reliable techniques such as string comparison. As a result, the cost of creating and maintaining those tests grows exponentially.

A word of warning: because of this exponential cost and loss of reliability, the ROI of the TDD approach often becomes negative when applied to the UI layers. It is often better to leave the testing of UI layers to professional (QA) testers, as they will likely be applying these approaches anyway.

TDD vs BDD (Behavior-Driven Development)

Test-Driven Development – as its name implies – relies on unit tests to drive production code. Ideally, these unit tests derive from business requirements; however, strict adherence to the Test First approach often means that developers end up writing unit tests simply to allow them to write code and verify that that code works… not that it meets any kind of business requirement.

Behavior-Driven Development (BDD) is a philosophy grown from TDD which focuses on the software requirements of – and human interaction with – “the business” to deliver software that provides value to the business. Though the two approaches are variations on the same theme and the differences are subtle, BDD aims to please customers by satisfying their (ever-changing) requirements, as opposed to simply focusing on “working code”. This usually means less stringent code coverage requirements.
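In practice, the difference often shows up in how tests are named and phrased. Here is a rough sketch of a BDD-flavored spec written with plain NUnit (the ShoppingCart class and all of the names are hypothetical); dedicated BDD frameworks layer richer Given/When/Then tooling on top of the same idea:

using NUnit.Framework;

public class ShoppingCart {
    public decimal Total { get; private set; }
    public void Add(decimal price) { Total += price; }
}

[TestFixture]
public class When_adding_an_item_to_an_empty_cart {
    private ShoppingCart _cart;

    [SetUp]
    public void Given_an_empty_cart() {
        _cart = new ShoppingCart();
    }

    [Test]
    public void Then_the_total_should_equal_the_item_price() {
        // When the customer adds a $9.99 item...
        _cart.Add(9.99m);

        // ...then it is the business-facing behavior that gets asserted
        Assert.AreEqual(9.99m, _cart.Total);
    }
}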

Resources

General internet searches for the concepts in this document, such as “test driven development” and “behavior-driven development”, rarely leave much to be desired; I have not come across many bad resources on Test-Driven Development. Unfortunately, because these are heavily philosophical concepts that go far beyond simply learning a language or syntax, the only way to truly understand them is to find a mentor and do it (and learn from your mistakes).

Regardless, here is a short list of some of the better resources I’ve found recently:

  • Test-Driven Development – Wikipedia (yes, it’s a great resource!)
  • Test Driven Development – Ward Bell, et al – the grandfather(s) of XP
  • Guidelines for Test-Driven Development – Jeffrey Palermo
  • Introduction to Behavior-Driven Development – BddWiki
  • Introducing BDD – Dan North
  • The Art of Agile Development: Test-Driven Development – James Shore
  • What is a Unit Test? – Jess Chadwick
  • Test-Driven Development: By Example – Kent Beck
  • Working Effectively With Legacy Code – Michael Feathers (applying TDD to existing codebases)

Tuesday, May 17, 2011

“Being Agile” Means No Documentation, Right?

Ask most software professionals what Agile is and they’ll probably start talking about flexibility and delivering what the customer wants.  Some may even mention the word “iterations”.  But inevitably, they’ll say at some point that it means less or even no documentation.  After all, doesn’t creating, updating, and circulating painstakingly comprehensive documentation that everyone and their mother have officially signed off on go against the very core of Agile?  Of course it does!  But really, they’re missing the point!

Read The Agile Manifesto. (No, seriously - read it now. It’s short. I’ll wait.)  It’s essentially a list of values.  More specifically, it’s a right-side/left-side weighted list of values:  “Value this over that”. Many people seem to get the impression that this is really a “good vs. bad” list and that those values on the right side are evil and should essentially be tossed on the floor.  This leads to the conclusion that in order to be Agile we must throw away our fancy expensive tools, document as little as possible, and scoff at the idea of a project plan.  This conclusion is quite convenient because it essentially means “less work, more productivity!” (particularly in regards to the documentation and project planning).  I couldn’t disagree with this conclusion more.

My interpretation of the Manifesto targets “over” as the operative word.  It’s not just a list of right vs. wrong or good vs. bad.  It’s a list of priorities.  In other words, none of the concepts on the list should be removed from your development lifecycle – they are all important… just not equally important.  This is not a unique interpretation; in fact, it says so right at the end of the Manifesto!

So, the next time your team sits down to tackle that big new project, don’t make the first order of business to outlaw all meetings, documentation, and project plans.  Instead, collaborate with both your team and the business members involved (you do have business members sitting in the room, directly involved in the project planning, right?) and determine the bare minimum that will allow all of you to work and communicate in the best way possible.  This often means that you can pick and choose which parts of the Agile methodologies and process work for your particular project and end up with an amalgamation of Waterfall, Agile, XP, SCRUM and whatever other methodologies the members of your team have been exposed to (my favorite is “SCRUMerfall”).

The biggest implication of this is that there is no one way to implement Agile.  There is no checklist with which you can tick off boxes and confidently conclude that, “Yep, we’re Agile™!”  In fact, depending on your business and the members of your team, moving to Agile full-bore may actually be ill-advised.  Such a drastic change just ends up taking everyone out of their comfort zone which they inevitably fall back into by the end of the project.  This often results in frustration to the point that Agile is abandoned altogether because “we just need to ship something!”  Needless to say, this is far more devastating to a project.

Instead, I offer this approach: keep it simple and take it slow.  If your business members or customers are only involved at the beginning phases and nowhere to be seen until the project is delivered, invite them to your daily meetings; encourage them to keep up to speed on what’s going on on a daily basis and provide feedback.  If your current process is heavy on the documentation, try to reduce it as opposed to eliminating it outright.  If you need a “TPS Change Request” signed in triplicate with a 5-day “cooling off period” before a change is implemented, try a simple bug tracking system!  Tighten the feedback loop!

Finally, at the end of every “iteration” (whatever that means to you, as long as it’s relatively frequent), take as much time as you can spare (even if it’s an hour or so) and perform some kind of retrospective.  Learn from your mistakes.  Figure out what’s working for you and what’s not, then fix it.  Before you know it you’ve got a handful of iterations and/or projects under your belt and you sit down with your team to realize that, “Hey, this is working - we’re pretty Agile!” 

After all, Agile is a Zen state.  It’s a destination that you aim for, not force, and even if you never reach true “enlightenment” that doesn’t mean your team can’t be exponentially better off from merely taking the journey.

Friday, March 11, 2011

Presentation: Razor and the Art of Templating

Had a blast tonight giving a presentation on Razor to my hometown user group, NJDOTNET.  While I kind of regret how much focus we gave to MVC instead of Razor itself, I’m glad people are so eager to talk about and learn more about MVC.

If you’re looking for the RazorPad application that I showed and discussed, you can find it here:   http://razorpad.codeplex.com
Please feel free to comment publicly and/or privately – any and all feedback is welcome!  Or, if you’d like to help me code it, that’d be awesome, too – just let me know!

Razor and the Art of Templating from Jess Chadwick on Vimeo.

Friday, February 25, 2011

Presentation: Automated Unit Testing for Mere Mortals

Last weekend I had the immense pleasure of having my unit testing presentation selected for the great Code Camp NYC lineup.  It was a great crowd, and this is the first time I tried to record one of my talks.  I think it turned out alright!  I’ve embedded the low-quality version below.  If you prefer the high-def version, here it is:  Unit Testing for Mere Mortals (720p).

Enjoy, and please feel free to let me know what you think!

Thursday, April 8, 2010

Presentation: Leveraging Continuous Integration for Fun and Profit!

This evening I had another chance to speak in front of a great group of folks:  the members of my “hometown” NJDOTNET!  Everyone had a lot of great questions and overall I thought it was a lot of fun, and (as always) I look forward to the opportunity to speak to this group again.  I just hope everyone got a lot out of it – I look forward to hearing about how everyone goes back to work tomorrow morning and asks their team to start doing continuous integration! :)

For those of you who were interested in my source code, config files or slides, I’ve uploaded them to my secret Internet file lair.  Feel free to download them and check them out, as well as hit me up with any questions – I’d be glad to try to answer them!

If you don’t care to download the files, you can peruse the slide deck online:

Saturday, March 6, 2010

NYC Code Camp 2010

I had a great time presenting on the ASP.NET MVC Framework at the really awesome NYC Code Camp 2010 event today.  For those who wanted to look through the code, here is a link to the code I showed (with “start” and “finish” versions), and you can check out the slide deck inline below:

   As always, feel free to contact me if you have any questions or are interested in learning more!

Thursday, November 12, 2009

What’s a “Unit Test”?

Photo courtesy of those show cancelling bastards at CBS.

No, I’m not talking about these guys...

Generally speaking, writing any kind of code that exercises the code you've written is a good thing, but the term “unit test” carries with it a very focused and specific meaning. Listed below are what I consider the most important qualities of a “unit test”:

  • Atomic


    A unit test should focus on validating one small piece (“unit”) of functionality. Generally, this will be a single behavior or business case that a class exhibits. Quite often, this focus may be as narrow as a single method in a class (sometimes even a specific condition in a single method!). In practice, this equates to short tests with only a few (preferably just one) deliberate and meaningful assertions (Assert.That([…])).

    Common Pitfalls & Code Smells
    • Dozens of lines of code in one test
    • More than 2-3 assertions, especially when they’re against multiple objects
  • Repeatable


    A unit test should produce exactly the same result at any time on any environment, given that environment fulfills a known set of dependencies, e.g. the .NET Framework. Tests cannot rely on anything in the external environment that isn’t under your direct control. For instance, you should never have to worry about having network/Internet connectivity, access to a database, file system permissions, or even the time of day (think DateTime.Now). Failed unit tests should indicate a bug in the code and nothing else.

    Common Pitfalls & Code Smells
    • Tests pass on the first execution, yet some or all fail on subsequent executions (or vice-versa)
    • “NOTE: The XYZTest must be run prior to this or it will fail!”
  • Isolated / Independent

    In a culmination of the first two qualities, a unit test should be completely isolated from any other system or test. That is to say, a unit test should not assume or depend upon any other test having been run or external system (e.g. database) having a specific state or producing some specific result. Additionally, a unit test should also not create or leave behind any artifacts that may trip up other tests. This is certainly not to say that unit tests cannot share methods or even whole classes between each other – in fact, that is encouraged. What this means is that a unit test should not assume some other test has run previously or will run subsequently; these dependencies should instead be represented as explicit function calls or contained in your test fixture’s SetUp and TearDown methods that run prior to and immediately following every single test.

    Common Pitfalls & Code Smells
    • Database access
    • Tests fail when your network or VPN connection is disabled
    • Tests fail when you have not run some kind of external script (other than perhaps an NAnt script to compile, of course)
    • Tests fail when configuration settings change or are not correct
    • Tests must be executed under specific permissions
  • Fast

    Assuming all of the above conditions are met, all tests should be “fast” (i.e. fractions of a second). Regardless, it is still beneficial to explicitly state that all unit tests should execute practically instantaneously. After all, one of the main benefits of an automated test suite is the ability to get near-instant feedback about the current quality of your code. As the time to run the test suite increases, the frequency with which you execute it decreases, which directly translates into a greater amount of time between the introduction and discovery of bugs.

    Common Pitfalls & Code Smells
    • Individual tests take longer than a fraction of a second to run

If one were really clever, they might arrange the above into a cute little acronym like “FAIR”, but the order in which they appear above is very deliberate; it is the rough order of importance that I place on each quality.
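Here is a minimal sketch of what these qualities look like in practice (using NUnit; the IClock, InvoiceService, and FakeClock types are hypothetical, invented purely for illustration). Note how the test supplies “now” itself instead of reaching for DateTime.Now, which keeps it repeatable and isolated:

using System;
using NUnit.Framework;

public interface IClock { DateTime Now { get; } }

public class InvoiceService {
    private readonly IClock _clock;
    public InvoiceService(IClock clock) { _clock = clock; }

    public bool IsOverdue(DateTime dueDate) {
        return _clock.Now > dueDate;
    }
}

[TestFixture]
public class InvoiceServiceTests {
    private class FakeClock : IClock {
        public DateTime Now { get; set; }
    }

    [Test]
    public void ShouldConsiderInvoiceOverdueWhenDueDateHasPassed() {
        // Repeatable: the test controls "now" rather than relying on the system clock
        var service = new InvoiceService(new FakeClock { Now = new DateTime(2011, 1, 2) });

        // Atomic (and fast): one behavior, one deliberate assertion
        Assert.IsTrue(service.IsOverdue(new DateTime(2011, 1, 1)));
    }
}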

Unit Tests vs. Integration Tests

Odds are that if you have written any automated tests recently, you probably violated one of the above guidelines… and probably for very good reason! What you have produced, my friend, is another very valuable form of automated test called an integration test. As opposed to a unit test - whose sole purpose is to validate the logic and/or functionality of a specific class or method – an integration test exists to validate the interaction (or “integration”, as it were) between two or more components. In other words, integration tests give the system a good work-out to make sure that all of the individual parts work together to achieve the desired result – a working application.

As such, integration tests are just as valuable – if not more so – in a business sense as unit tests. Their major drawbacks, however, are their slow speed and fragility. Not only does this mean that they will get executed less frequently than a unit test suite, but the rate of false positives (or negatives… however you want to look at it) is much higher. When a unit test fails, it is a sure indication of a bug in the code. In contrast, when an integration test fails it may mean a bug in the code, but it could also very well have been caused by other issues in the testing environment, such as a lost database connection or corrupt/unexpected test data. These false positives – though a useful indicator that something is wrong in the developer’s environment – usually just serve to slow down the development process by taking the developer’s focus away from writing working code. Assuming you strive to avoid these distractions whenever possible, the conclusion I come to is that you should rely on extensive coverage from a solid unit test suite and supplement that coverage with an integration test suite – not vice-versa.
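For contrast with the unit test sketch above, here is a rough sketch of an integration test (the connection string, table, and scenario are all hypothetical). It deliberately crosses a component boundary – a real database – which is exactly why it runs slower and can fail for environmental reasons that have nothing to do with the code:

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryIntegrationTests {
    private const string ConnectionString =
        @"Server=.\SQLEXPRESS;Database=NorthwindTest;Integrated Security=true";

    [Test]
    public void ShouldFindTheSeededCustomerInTheDatabase() {
        using (var connection = new SqlConnection(ConnectionString)) {
            connection.Open();   // fails if the database or network is unavailable

            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM Customers WHERE CustomerId = 1", connection)) {
                Assert.AreEqual(1, (int)command.ExecuteScalar());
            }
        }
    }
}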

References

A great deal of the reason I even took it upon myself to write this blog post was because I couldn’t really find any good online articles or posts concerning “what makes a unit test”!   Below are a few of the great ones I found.  It may seem like I stole from some of them, but the ideas above really are my opinions…  they just happened to be widely shared. :)

However, it seems at this point if you are very interested in learning more about this topic, books are your best bet.  Anything by the “usual suspects” (Fowler, Hunt, Thomas, Newkirk…) is a great bet, but here are a few I have read and loved: