Tuesday, 20 October 2009

You Forgot to Say the Magic Word…

In Microsoft SQL Server, this query won’t work:

SELECT * FROM ( SELECT * FROM Customer UNION SELECT * FROM Supplier) ORDER BY CompanyName

But – if you ask nicely, it does exactly what you’d expect:

SELECT * FROM ( SELECT * FROM Customer UNION SELECT * FROM Supplier) PLEASE ORDER BY CompanyName

You won’t believe the look on your colleagues’ faces when you solve their problem using simple good manners.

(Of course, it actually works because PLEASE in that context just acts as a table-name alias for the result of the UNION sub-select, and sub-selects in SQL Server need to have a name – any identifier would do just as well. But don’t let that stop you using it for fun and profit.)

Is doctype.com a License Too Far for Stack Overflow?

Short answer:

No, because doctype.com doesn’t use technology licensed from Stack Overflow. Sorry. I got this one completely, completely wrong. D’oh.

Long answer:

This post was originally inspired by doctype.com. I now understand, thanks to an extremely informative comment from one of the doctype.com developers, that doctype.com doesn’t actually run on Stack Exchange. It looks and feels very similar, but is in fact a completely separate codebase built by the guys at doctype.com using Ruby on Rails.

This post is therefore based on completely incorrect assumptions. I’ve struck out the bits that are actually factually incorrect, although my concerns about fragmenting the user base remain valid – even more so since I discovered that ask.sqlteam.com and ask.sqlservercentral.com are both Stack Exchange sites – but clearly doctype.com has nothing to do with it, and in fact their platform offers a lot of design-centric tools that Stack Overflow doesn’t.

There’s also this discussion at meta.stackoverflow.com that addresses a lot of the same concerns.

 

The derelict Parachute Drop ride at Coney Island.

(Note: In this post, where I say Stack Overflow I’m referring to the website, and where I say StackOverflow LLC, I’m talking about the company behind it.)

I’ve been using stackoverflow.com since it was in beta, and I love it. I ask questions. I answer questions. I hang out and read and comment and vote and generally find the whole thing a hugely rewarding experience. I think it works for two reasons.

First, the technology platform (now available as Stack Exchange – more on this in a moment) is innovative, usable and packed with great ideas.

Second, by actively engaging with people who followed Jeff Atwood and Joel Spolsky’s blogs, they gathered exactly the right audience to breathe life into their product. Stack Overflow launched with a committed, dedicated community of experts already in place. They created a forum where people like Jon Skeet will donate endless hours of their time for nothing more than kudos and badges. (I bet Jon’s employers are wishing they’d thought of that…)

Here are a few choice quotes from the Joel Spolsky Inc.com column I’m referring to (my emphasis):

“Between our two blogs, we felt we could generate the critical mass it would take to make the site work.”

“I started a business with the objective of building a big audience, which we would figure out how to monetize later.”

“we promised the audience that the site would always be free and open to the public, and that we would never add flashing punch-the-monkey ads or pop-up windows.”

Now, this is the web, where “monetize” usually means “slap advertising all over everything” – but when Stack Overflow introduced advertising, they were sympathetic and responsive to users’ feedback, and quickly evolved an advertising model that’s elegant, unobtrusive and complements the ethos of the site. The Woot! badge was clever. The tiny Adobe logo on tags like flex and actionscript was really clever – possibly the best use of targeted advertising I’ve seen.

Before long, non-programmers were asking how they could get a slice of the Stack Overflow goodness, and so serverfault.com – for systems admin questions – and superuser.com – for general IT enthusiasts – were born. That clearly worked, so they set up Stack Exchange, to license the platform to third parties, and soon there was moms4mom.com (questions about parenthood), Epic Advice (questions about World of Warcraft), Ask Recipe Labs (cooking and food), Math Overflow (for mathematicians), and various other Stack Exchange sites covering video, photography, car maintenance – all sorts.

A few days ago, I stumbled across doctype.com – a Stack Exchange site for HTML/CSS questions, web design and e-mail design – and some unsettling questions popped into my head.

1. Where am I supposed to ask my jQuery questions now?

I work on everything from T-SQL to a very occasional bit of Photoshop. There is a huge amount of crossover between HTML, CSS, Javascript, AJAX, and web server platforms and their various view/markup engines. Here are the all-time most popular 20 tags on Stack Overflow, as of 20th October 2009:

Rank  Tag             Questions
1     c#              43,860
2     .net            24,590
3     java            22,924
4     asp.net         20,678
5     php             16,797
6     javascript *    16,363
7     c++             15,462
8     python          11,639
9     jquery *        11,287
10    sql             10,910
11    iphone           9,686
12    sql-server       9,165
13    html *           7,932
14    mysql            7,794
15    asp.net-mvc      6,532
16    windows          6,425
17    wpf              6,370
18    ruby-on-rails    6,095
19    c                6,071
20    css *            5,849

The rows marked with an asterisk are all Web client technologies – and that ignores all the questions that get tagged as PHP, ASP.NET or Ruby on Rails but actually turn out to involve HTML, CSS or jQuery once the experts have had a look at them. There’s clearly a thriving community of web designers and developers already signed up to Stack Overflow. Should we now all be asking CSS questions on doctype.com instead of using Stack Overflow? I have no idea!

I realize there are HTML / CSS gurus out there who aren’t currently using Stack Overflow because they think it’s just for programmers – but wouldn’t it be better if Stack Overflow were looking at ways to attract that expertise, rather than renting them a walled garden of their own? Getting designers and coders to communicate is hard enough at the best of times, and giving them their own “definitive” knowledge-exchange resources isn’t going to help.

2. What Does This Mean For The Stack Overflow Community?

Shortly after discovering doctype.com, I tweeted something daft about “stackoverflow failed as a business”, which elicited this response from one of the guys at Fog Creek… he’s absolutely right, of course. StackOverflow LLC is clearly doing just fine – their product is being enthusiastically received, and I’m thoroughly looking forward to their DevDays event later this month.

However, I think the success of StackOverflow LLC is potentially coming at a cost to stackoverflow.com – the site and the community that surrounds it – and in that respect, I believe that the single, definitive, free site that they originally launched last year has failed to fulfil its potential as a revenue stream.

The bridge in Central Park.

The decision to license Stack Exchange to sites who are directly competing for mindshare with Stack Overflow’s “critical mass” worries me, because it suggests that StackOverflow LLC is now calling the shots instead of stackoverflow.com, and making decisions that are financially astute but potentially deleterious to the existing user base.

They are entitled to do this, of course. It’s their site, and I’m extremely grateful that I get to use it for free.

What’s ironic is that the worst case scenario here - for me, for stackoverflow.com, and for the developer community at large - is that doctype.com is wildly successful, becomes the de facto resource for HTML/CSS questions on the internet, generates a healthy revenue stream of its own, and StackOverflow LLC does quite nicely out of the deal. The format is copied by other technology sites, and soon there’s a site for SQL, a site for Java, a site for WinForms, a site for PHP… stackoverflow.com is no longer the definitive resource for programming questions, and we, the users, are back to using Google to trawl a dozen different forum sites looking for answers, and cross-posting our questions to half-a-dozen different sites in the hope that one of them might elicit a response. It’ll be just like 2006 all over again.

OK, So What Would I Have Done Instead?

Fortitude, the stone lion outside the New York Public Library.

doctype.com is trying to compete with an established market leader, by licensing that leader’s technology, in a market where the leader has a year’s head start and controls the technology platform. That’s like trying to open a BMW dealership in a town where there’s already a BMW factory outlet, run by two guys everyone knows and loves, whose reputation for service and maintenance is second to none. It has to fail… right? [1]

But – I can appreciate what they’re trying to do. I appreciate that StackOverflow LLC is not a charity, and I appreciate why the folks behind doctype.com think there’s a niche for an SO-style site focusing on designers.

The key to Stack Overflow’s success isn’t the catchy domain name, or that fetching orange branding. The key is the information and the people - I see no technical reason why something like doctype.com couldn’t be licensed as a front-end product that’s integrated with the same database and the same user community as Stack Overflow. Modify the back-end code so that users who sign up at doctype.com get certain filters applied. Use a different web address, a different design, maybe just include questions tagged with html, css, jquery and javascript to start with, so new users see content that’s more immediately relevant to their interests - but when they search or ask a question, they’re getting the full benefit of Stack Overflow’s huge community of loyal experts – not to mention the tens of thousands of accepted answers already in the Stack Overflow database.

How about it? doctype.stackoverflow.com, javaguru.stackoverflow.com, aspnetmvc.stackoverflow.com… each a finely-tuned filtered view onto a single, authoritative information resource for programming questions, from assembler up to CSS sprites. That has to be better than the gradual ghettoization and eventual fragmentation of a thriving community, yes?

Stop Press: Someone just pointed me at ask.sqlservercentral.com. That’s – yep, you guessed it – a Stack Exchange site for SQL questions. As if having to choose between stackoverflow.com and serverfault.com wasn’t bad enough. Does anyone else think this is getting a bit silly?

[1] Of course, it’s entirely possible that Joel & the gang know this, and are quite happy to take $129 a month off the folks at doctype.com whilst they work this out for themselves...

Friday, 16 October 2009

Is There Such A Thing As Test-Driven Maintenance?

One of the great strengths of test-driven development is that systems that are built one tiny test at a time tend to be… well, better. Fewer bugs. Cleaner architecture. Better separation of concerns. The characteristics that make code hard to modify are the same characteristics that make it hard to test, so by incorporating testing into your development cycle, you find – and fix – these pain points early, whilst development is cheap, instead of discovering them three months after you’ve shipped and spending the next two years death-marching your way to version 1.1.

However, there’s a flip-side to this. Not a disadvantage of TDD per se, but something that I think is an unavoidable side-effect of placing so much emphasis on TDD as applied to green-field projects. The “test-driven” part and the “development” part are so tightly coupled that it’s easy to assume that automated testing is only applicable to new systems.

I can’t be the only one using Moq and NUnit on new projects, whilst the rest of the time grimly hacking away on legacy code, dancing the Alt-Tab-F5 fandango and spending hours manually testing new features before they go live. And I can’t be the only one who thinks this is just not right – after all, the big legacy apps are the ones with the thousands of paying customers; surely if we’re running automated tests on anything, it should be those?

I love TeamCity so much, I want to go to the park and carve "DB 4 TC 4 EVER" into a tree.

Last week, two things happened. One was a happy afternoon spent setting up TeamCity to build and manage most of our web projects. The other was a botched deployment of an update to a legacy site – the new code worked fine, but a config change deployed as part of the update actually broke half-a-dozen other web apps running on the same host. Broke, as in they disappeared and were replaced by a Yellow Screen of Death, because the root web.config was loading an HttpModule from a new assembly, and the other web apps were picking up the root’s config settings but didn’t have the necessary DLL. Easily fixed, but rather embarrassing.

If It Runs, You Can Test It

This was a stupid mistake on my part, easily avoided, and – it suddenly occurred to me – screamingly easy to detect. We may not have any controller methods or IoC containers to enable granular unit tests, but we can certainly make sure that the site is actually up and responding to HTTP requests.

One of the team put together a very quick NUnit project, which just sent an HTTP GET to the default page of each web app, and asserted that it returned a 200 OK and some valid-looking HTML. Suddenly, after five years of tedious and error-prone manual testing, we had green lights that meant our websites were OK. It took another ten minutes or so to add the new tests to TeamCity, and voila – suddenly we’ve got legacy code being automatically pushed to the test server, and then a way of firing HTTP requests at the server and making sure something comes back.
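Here’s a minimal sketch of that kind of smoke test – the URLs are placeholders and this isn’t our exact code, but the shape is the same: one HTTP GET per site, assert on the status code, and sniff the body for something HTML-shaped.

using System.IO;
using System.Net;
using NUnit.Framework;

[TestFixture]
public class WebsiteSmokeTests {

  // Placeholder URLs - point these at your own sites.
  private static readonly string[] urls = {
    "http://www.website.com.build/",
    "http://www.website2.co.uk.build/"
  };

  [Test]
  public void Every_Site_Returns_200_OK_And_Some_Html() {
    foreach (var url in urls) {
      var request = (HttpWebRequest)WebRequest.Create(url);
      // GetResponse() throws a WebException on a 4xx/5xx response,
      // which fails the test - exactly what we want.
      using (var response = (HttpWebResponse)request.GetResponse()) {
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode, url);
        using (var reader = new StreamReader(response.GetResponseStream())) {
          var body = reader.ReadToEnd();
          StringAssert.Contains("<html", body.ToLowerInvariant(), url);
        }
      }
    }
  }
}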

You can do this. You can do this right now. TeamCity is free, Subversion is free, NUnit is free, and it doesn’t matter what platform your web apps are running on. Because the ‘API’ we’re testing against is plain simple HTTP request/response, you can test ASP, ASP.NET, PHP, ColdFusion, Java – even static HTML.

What’s beautiful is that, once the test project’s part of your continuous-integration setup, it becomes really easy to add new tests… and that’s where things start getting interesting. Retro-fitting unit tests to a legacy app is hard, but when you need to modify a piece of the legacy app anyway, to fix a bug or add a feature, it’s not that hard to put together a couple of tests for your new code at the same time. Test-first, or code-first – doesn’t matter; just make sure they make it into the test suite. If you’re coupled to legacy data models and payment services and ASP session variables, you’re probably going to struggle to set up the required preconditions. But, most of the time, you’ll find something you can test automatically, which means it’s one less feature you need to worry about every time you make a change or deploy a build.

We now have 19 tests covering over 50,000 lines of code. Yeah, I know - that’s not a lot. But it’s a start, and the lines that are getting hit are absolutely critical. They’re the lines that initialize the application, verify the database is accessible, make sure the server’s configuration is sane, and make sure our homepage is returning something that starts with <html> and has the word “Welcome” in it – because I figure if we’re going to start somewhere, it might as well be at the beginning.

Thursday, 15 October 2009

There Can Be Only One. Or Two. Or Three, but Never Four.

A quick and very simple technique to limit the number of instances of a .NET app that will execute at once:

using System;
using System.Linq;
using System.Diagnostics;

namespace ConsoleApplication1 {
  public class Program {
    static void Main(string[] args) {

      const int MAX_PERMITTED_INSTANCES = 3;

      // Count every running process with the same name as this one.
      // Note that the count includes the current process itself.
      var myProcessName = Process.GetCurrentProcess().ProcessName;
      Process[] processes = Process.GetProcesses();
      var howManyOfMe = processes.Count(p => p.ProcessName == myProcessName);

      if (howManyOfMe > MAX_PERMITTED_INSTANCES) {
        Console.WriteLine("Too many instances - exiting!");
      } else {
        Console.WriteLine("I'm process #{0} and I'm good to go!", howManyOfMe);
        /* do heavy lifting here! */
      }
      Console.ReadKey(false);
    }
  }
}

Very handy if – like us – you’re firing off potentially expensive encoding jobs every few minutes via a scheduled task, and you’re happy for three or four of them to be running at any one time – hey, that’s what multicore CPUs are for, right? – but you’d rather not end up with 37 instances of encoder.exe all fighting for your CPU cycles like cats fighting for the last bit of bacon.

I’m still sometimes really, really impressed at how easy stuff like this is in .NET… I thought this would end up being hours of horrible extern/Win32 API calls, but no. It’s that easy. Very nice.
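One caveat worth knowing about: counting processes isn’t atomic, so two instances launched at exactly the same moment could each see a count under the limit. If that ever matters, a named system-wide semaphore closes the gap – here’s a rough sketch (the semaphore name is made up; pick your own):

using System;
using System.Threading;

namespace ConsoleApplication1 {
  public class Program {
    static void Main(string[] args) {
      // Three slots, shared machine-wide via the kernel object name.
      // The OS hands the slots out atomically, so there's no race.
      using (var slots = new Semaphore(3, 3, @"Global\EncoderInstanceSlots")) {
        if (!slots.WaitOne(0)) {
          Console.WriteLine("Too many instances - exiting!");
          return;
        }
        try {
          /* do heavy lifting here! */
        } finally {
          // NB: unlike a mutex, a semaphore slot is NOT automatically
          // released if the process dies - hence the try/finally.
          slots.Release();
        }
      }
    }
  }
}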

Tuesday, 13 October 2009

Hey… My World Wide Web Went Weird!

About a week ago, my world wide web went weird. There’s no other way to describe it. Well, OK, there’s probably lots of ways to describe it, but I like the alliteration. Anyway – what happened was, lots of websites just suddenly started looking horrible, for no readily apparent reason. Like the “spot the difference” screenshot below.

[Screenshot: the same page rendered in the two fonts, side by side.]

See how the snapshot on the left looks really rather unpleasant, while the one on the right is nice and crisp and readable?

First time I saw it, I assumed it was some ill-inspired redesign of a single site. Second time, I thought it must be some new design trend. Then I noticed it happening on some of our own sites - including our wiki and our FogBugz install – and since I definitely hadn’t messed around with them, that ruled out the possibility of it being something happening server-side. Some sites were still working and looking just fine, so it probably wasn’t a browser issue… but, thanks to a bit of lucky exploration and the awesome power of Firebug, I just worked out what’s going on.

All the affected sites use the same CSS font specification:

body { font-family: Helvetica, Arial, sans-serif; }

Of course, Helvetica isn’t a standard Windows typeface, so on most Windows PCs, the browser will skip over Helvetica and render the document using Arial instead. Last week, whilst working on something for our editorial team, I installed some additional fonts on my workstation, which includes – you guessed it – the Helvetica typeface shown above.

Arial, and most other Windows fonts like Calibri, Verdana and Trebuchet, use a neat trick called font hinting, which ensures that when they’re rendered at small sizes, the shape of the individual glyphs lines up nicely with the pixels of your display – so you get nice, crispy fonts. The particular Helvetica flavour I’d installed obviously doesn’t do this – hence the spidery nastiness in the left-hand screenshot.

I’m guessing either the designers who built most of these sites had a hinted version of Helvetica (possibly ‘cos they’re Mac-based?), or that they just never tested that particular CSS rule on a Windows system with a print-optimised Helvetica typeface installed.

I guess the moral of the story is that if you want to annoy somebody in a really subtle way, install the nastiest Helvetica font you can find on their Windows PC. I’m pretty sure that if I hadn’t stumbled across the solution, sooner or later I’d actually have reinstalled Windows in despair, just to get things looking crispy again.

RESTful Routing in ASP.NET MVC 2 Preview 2

Microsoft recently released Preview 2 of the next version of their ASP.NET MVC framework. There are a couple of things in this release that are designed to allow your controllers to expose RESTful APIs, and – more interestingly, I think – to let you build your own Web pages and applications on top of the same controllers and routing set-up that provides this RESTful API. In other words, you can build one RESTful API exposing your business logic and domain methods, and then your own UI layer – your views and pages – can be implemented on top of this same API that you’re exposing for developers and third parties.

Thing is… I think the way they’ve implemented it in Preview 2 doesn’t really work. Don’t get me wrong; there are some good ideas in there – among them an HTML helper method, Html.HttpMethodOverride, that works with the MVC routing and controller back-end to “simulate” unsupported HTTP verbs on browsers that don’t support them (which was all of them, last time I looked). You write your form code like this:

<form action="/products/1234" method="post">
  <%= Html.HttpMethodOverride(HttpVerbs.Delete) %>
  <input type="submit" name="whatever" value="Delete This Product" />
</form>

and then in your controller code, you implement a method something like:

[AcceptVerbs(HttpVerbs.Delete)]
public ActionResult Delete(int id) {
  /* delete product here */
  return (Index());
}

The London Eye and Houses of Parliament by night. Very restful.

The HTML helper injects a hidden form element called X-HTTP-Method-Override into your POST submission, and then the framework examines that hidden field when deciding whether your request should pass the AcceptVerbs attribute filter on a particular method.

Now, most MVC routing examples – and the default behaviour you get from the Visual Studio MVC file helpers – will give you a bunch of URLs mapped to different controller methods using a {controller}/{action}/{id} convention – so your application will expose URLs that look like this:

  • /products/view/1234
  • /products/edit/1234
  • /products/delete/1234

Since web browsers only support GET and POST, we end up having to express our intentions through the URI like this, and so the URI doesn’t really identify a resource, it identifies the act of doing something to a resource. That’s all very well if you subscribe to the Nathan Lee Chasing His Horse school of nomenclature, but one of the key tenets of REST is that you can apply a different verb to the same resource identifier – i.e. the same URI – in order to perform different operations. Assuming we’re using the product ID as part of our resource identification system, then:

  • PUT /products/1234 – will create a new product with ID 1234
  • POST /products/1234 – will update product #1234
  • GET /products/1234 – will retrieve a representation of product #1234
  • DELETE /products/1234 – will remove product #1234

One approach would be to map all these URIs to the same controller method – say ProductController.DoTheRightThing(int id) – and then inspect the Request.HttpMethod inside this method to see whether we’re PUTing, POSTing, or what.

This won’t work, though, because Request.HttpMethod hasn’t been through the ‘unsupported verb translator’ that’s included with MVC 2; the Request.HttpMethod will still be “POST” even if the request is a pseudo-DELETE created via the HttpMethodOverride technique shown above.

Now, MVC v1 supports something called route constraints. Stephen Walther has a great post about these; basically they’ll let you say that a certain route only applies to GET requests or POST requests.

routes.MapRoute(
    "Product",
    "Product/Insert",
    new { controller = "Product", action = "Insert" },
    new { httpMethod = new HttpMethodConstraint("POST") }
);

That last line there? That’s the key – you can map a request for /Product/1234 to your controller’s Details() method if the request is a GET request, and map the same URL - /Product/1234 – to your controller’s Update() method if the request is a POST request. Very nice, and very RESTful.
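To make that concrete, here’s a sketch of what I mean – the route and action names here are mine, not Stephen’s – with two routes sharing one URL pattern, split by verb:

routes.MapRoute(
    "ProductDetails",
    "Product/{id}",
    new { controller = "Product", action = "Details" },
    new { httpMethod = new HttpMethodConstraint("GET") }
);

routes.MapRoute(
    "ProductUpdate",
    "Product/{id}",
    new { controller = "Product", action = "Update" },
    new { httpMethod = new HttpMethodConstraint("POST") }
);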

But – yes, you guessed it; it doesn’t work with PUT and DELETE, because it’s still inspecting the untranslated Request.HttpMethod, which will always be GET or POST with today’s browsers.

However, thanks to ASP.NET MVC’s rich extensibility, it’s actually very simple to add the support we need alongside the features built in to Preview 2. (So simple that this started out as a post complaining that MVC 2 couldn’t do it, until I realized I could probably implement what was missing in less time than it would take to describe the problem.)

You’ll need to brew yourself up one of these:

/// <summary>
/// Allows you to define which HTTP verbs are permitted when determining
/// whether an HTTP request matches a route. This implementation supports both
/// native HTTP verbs and the X-HTTP-Method-Override hidden element
/// submitted as part of an HTTP POST.
/// </summary>
public class HttpVerbConstraint : IRouteConstraint {

  private HttpVerbs verbs;

  public HttpVerbConstraint(HttpVerbs routeVerbs) {
    this.verbs = routeVerbs;
  }

  public bool Match(HttpContextBase httpContext, Route route,
      string parameterName, RouteValueDictionary values,
      RouteDirection routeDirection) {
    switch (httpContext.Request.HttpMethod) {
      case "DELETE":
        return ((verbs & HttpVerbs.Delete) == HttpVerbs.Delete);
      case "PUT":
        return ((verbs & HttpVerbs.Put) == HttpVerbs.Put);
      case "GET":
        return ((verbs & HttpVerbs.Get) == HttpVerbs.Get);
      case "HEAD":
        return ((verbs & HttpVerbs.Head) == HttpVerbs.Head);
      case "POST":
        // First, check whether it's a real POST.
        if ((verbs & HttpVerbs.Post) == HttpVerbs.Post) return (true);
        // If not, check for the special magic HttpMethodOverride hidden field.
        switch (httpContext.Request.Form["X-HTTP-Method-Override"]) {
          case "DELETE":
            return ((verbs & HttpVerbs.Delete) == HttpVerbs.Delete);
          case "PUT":
            return ((verbs & HttpVerbs.Put) == HttpVerbs.Put);
        }
        break;
    }
    return (false);
  }
}

This just implements the IRouteConstraint interface (from System.Web.Routing) with a Match() method that will check for the hidden form field when deciding whether to treat a POST request as a pseudo-DELETE or pseudo-PUT. Once you’ve added this to your project, you can set up your MVC routes like so:

routes.MapRoute(
  // Route name - anything you like but must be unique.
  "DeleteProduct",

  // The URL pattern to match
  "Products/{id}",

  // The controller and method that should handle requests matching this route
  new { controller = "Products", action = "Delete", id = "" },

  // The HTTP verbs required for a request to match this route.
  new { httpVerbs = new HttpVerbConstraint(HttpVerbs.Delete) }
);

routes.MapRoute(
  "CreateProduct",
  "Products/{id}",
  new { controller = "Products", action = "Create", id = "" },
  new { httpVerbs = new HttpVerbConstraint(HttpVerbs.Put) }
);

routes.MapRoute(
  "DisplayProduct",
  "Products/{id}",
  new { controller = "Products", action = "Details", id = "" },
  new { httpVerbs = new HttpVerbConstraint(HttpVerbs.Get) }
);

and finally, just implement your controller methods something along these lines:

public class ProductsController : Controller {
  public ViewResult Details(int id) { /* implementation */ }
  public ViewResult Create(int id) { /* implementation */ }
  public ViewResult Delete(int id) { /* implementation */ }
}

You don’t need the AcceptVerbs attribute at all. I think you’re better off mapping each resource/verb combination to a sensibly-named method on your controller, and leaving it at that. Let proper REST clients send requests using whichever verb they like; let normal browsers submit POSTs with hidden X-HTTP-Method-Override fields; trust the routing engine and route constraints to sort that lot out before it hits your controller code, and you’ll find that you can completely decouple your resource identification strategy from your controller/action naming conventions.

BLATANT PLUG: If you’re into this kind of thing, you should come along to Skills Matter in London on November 2nd, where I’ll be talking about the future of web development – HTML 5, MVC 2, REST, jQuery, semantic markup, web standards, and… well, you’ll have to come along and find out. If you’re interested, register here, and see you on the 2nd.

Sunday, 11 October 2009

Coordinating Web Development with IIS7, DNS and the Web Deployment Tool

DISCLAIMER: There’s some stuff in here which could cause all sorts of chaos if it doesn’t sit happily alongside your individual setup – in particular, hacking your internal DNS records is a really bad idea unless you know what’s already in there, and you understand how DNS resolution works within your organisation. Be careful, and if you’re not responsible for your DNS setup, probably best to discuss this with whoever is responsible for it first.

I’ve been setting up a continuous integration system for our main software products. We host 20+ web sites and applications across four different domain names, ranging from ancient legacy ASP applications based on VBScript and recordsets to ASP.NET MVC apps built with TDD, Windsor, NHibernate and the rest of the “alt-net” stack.

The City of London skyline, from the South Bank at low tide.

Here are a couple of things we’ve come up with that make the whole process run a little more smoothly. Let’s imagine our developers are Alice, Bob and myself, and we’re running a three-stage deployment process. Here’s how it works.

  1. Alice implements a new feature, working on code on her local workstation. She has a full local copy of every site, under Subversion control, which she can view and test at www.website.com.local
  2. Once the feature’s done, Alice commits her code. TeamCity – the continuous integration server – will pull the change from Subversion, build it, and deploy the results to www.website.com.build
  3. We run tests – both automated and manual – against this build site. If everything looks OK, we send this .build URL to the stakeholders and testers to get their feedback on the new feature.
  4. Once the tests are green and the stakeholders are happy, the feature is ready for launch. We’ll now use msdeploy to push the entire modified website onto the test server - www.website.com.test
  5. We run integration tests that hit www.website.com.test – and also www.website.com.test/some_app, www.website2.co.uk.test, www.another-site.com.test – basically, they verify that not only do the individual apps and sites all work, but that they’re co-existing happily on the same server.
  6. Finally, we have a couple of msdeploy tasks set up in TeamCity, that will deploy the entire server configuration from the test server to the public-facing servers.

Setting up Developer Workstations

Most of our developer machines are running Windows 7, which includes IIS7, which supports multiple websites (this used to be a huge limitation of XP Professional, which would only run a single local website). We have a standard setup across workstations, build, test and live servers – they all have a 100GB+ D: drive dedicated to web hosting, which means we can use msdeploy.exe to clone the test server onto new workstations (or just to reset your own configuration if things get messed up), and to handle the deployment from build to test to live.
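For illustration, the clone boils down to a single msdeploy sync run on the workstation – treat the provider name as an assumption and check the Web Deployment Tool documentation for your version before relying on it:

msdeploy -verb:sync -source:webServer,computerName=test-server -dest:webServer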

Note that this doesn’t mean we’re hard-coding paths to D:\ drives – the apps and sites will happily run from any location on the server, since they use virtual directories and Server.MapPath() to handle any filesystem access. However, it does make life much easier to set up configuration once, and then clone this config across the various systems.

Finally, note that our workstations are 64-bit and the servers are 32-bit, which works fine with one caveat – you can sync and deploy configuration from the servers to the workstations, but not vice versa. In practice, this is actually quite useful – anything being pushed onto the servers should be getting there via Subversion and TeamCity anyway.

Using DNS to manage .local, .build and .test zones

Unless you want to maintain a lot of /etc/hosts files, you’ll need your own local DNS servers for this part – but if your organisation is using Active Directory, you’re sorted, because your domain controllers are all local DNS servers anyway. The trick here is to create “fake” locally-accessible DNS zones containing wildcard records. We have a zone called local, which contains a single wildcard A record that points * at 127.0.0.1. This means that anything.you.like.local will resolve to 127.0.0.1 – so developers can access their local copies of every site by using something like www.sitename.com.local.

There’s a DNS zone called build, which contains a wildcard CNAME (alias) record pointing * at build-server.mydomain.com, and another one called test, which has a wildcard CNAME record pointing * at test-server.mydomain.com. We’ve also set up *.dylan as an alias for my workstation, *.alice as an alias for Alice’s PC, *.bob as an alias for Bob’s PC, and so on.
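If you’d rather script the zones than click through the DNS console, dnscmd on a domain controller can do it. A sketch – dc01 is a stand-in for your DNS server, and I’d double-check this syntax against the dnscmd documentation before running it:

dnscmd dc01 /ZoneAdd local /DsPrimary
dnscmd dc01 /RecordAdd local * A 127.0.0.1
dnscmd dc01 /ZoneAdd build /DsPrimary
dnscmd dc01 /RecordAdd build * CNAME build-server.mydomain.com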

This seems simple, but it actually gives you some very neat capabilities: anyone on the network can hit www.website.com.build to see the latest build of any of our sites, or www.website.com.test to see the current release candidate – and if Alice wants a second opinion on a work-in-progress feature, Bob can just browse to www.website.com.alice and see it running on her workstation.

Of course, this doesn’t work unless there’s a web server on the other end that’s listening for those requests, so our common IIS configuration has the following bindings set up for every site:

[Screenshot: the IIS host name bindings for one of our sites.]

This looks like a lot of work to maintain, but because developer workstations are set up by using msdeploy to clone the test server’s configuration, these mappings only need to be created once, manually, on the test server, and they’ll be transferred across along with everything else.

I’d be interested to hear from anyone who’s using a similar setup – or who’s found an alternative approach to the same problem. Leave a comment here or drop me a line – or better still, blog your own set-up and send me a link, and I’ll add it here.

A Neat Trick using Switch in JavaScript

You ever see code blocks that look like this?

if (someCondition) {
    doSomeThing();
} else if (someOtherCondition) {
    doSomeOtherThing();
} else if (someThirdCondition) {
    doSomeThirdThing();
} else {
    doUsualThing();
}

Turns out in Javascript - and, I suspect, in other dynamically-typed languages that support switch statements - that you can do this:

switch(true) {
    case someCondition:
        doSomeThing();
        break;
    case someOtherCondition:
        doSomeOtherThing();
        break;
    case someThirdCondition:
        doSomeThirdThing();
        break;
    default:
        doUsualThing();
        break;
}

Of course, by omitting the break keyword you could wreak all sorts of havoc – but I can think of a couple of scenarios where you want to apply one or more cumulative enhancements (e.g. adding one or more CSS classes to a piece of mark-up), and this would fit very nicely.
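Here’s the sort of thing I mean – item and classes are hypothetical – where deliberately omitting break means each matching level also picks up all the classes below it:

switch (true) {
    case item.isPremium:
        classes.push('premium');
        // no break - deliberately fall through
    case item.isHighlighted:
        classes.push('highlighted');
        // no break - deliberately fall through
    default:
        classes.push('standard');
}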