Tuesday, 29 September 2015

Membership and Dynamics CRM - How's It All Going, Then?

A few months back, I blogged about how Spotlight has chosen Microsoft Dynamics CRM 2015 as the platform for delivering our new membership system, and our high-level plan for incorporating Dynamics into our existing infrastructure. Six months on, I thought it'd be worth revisiting to talk about how the project is progressing and the lessons we've learned so far. Particularly since people on Twitter keep asking me if I'm regretting it yet. :)

Now, I wear many hats. I'm a systems architect. I sit on the strategic board at Spotlight. I've been here long enough that I understand most of our business processes in excruciating detail. I'm interested in UX and interaction design. And, sometimes, when nothing else is going on, I write code and build stuff.

With my business hat on, Dynamics CRM is clearly the right solution for us. It works. It supports a business-to-consumer sales model, out of the box, which works really well for us (it surprised me how many of the other CRM systems we looked at assume that every customer is a company and every sale starts with a quote.) Dynamics CRM offers features like case management, service calendars, Outlook integration, multiple price lists and currencies, mobile device support, Active Directory integration - things that would probably never make it to the top of the backlog if we were building our own system. And that's fine - we are not in the CRM business. We build software to help people cast actors and make movies; everything else is overhead.

Strategically, it makes sense. Our initial customization and integration phase will end up being less than six months, after which membership and CRM will be very much a "solved problem" as far as our development team is concerned, freeing us up to focus on other things. Using an off-the-shelf product as the backbone of our membership and business activity gives us lots more options for making improvements in future - certainly more than if we'd built our own system.

So I still believe Dynamics CRM was, and remains, the best solution to our business requirements.

But... (you knew that was coming, didn't you?) as a software developer, I find it has frustrations, which can be broadly categorized as good, bad and ugly.

The Good...

It's a hugely complex product, and it has a learning curve. There are frequently several ways to achieve the same thing - multiple email integration patterns, several different ways to implement single sign-on for user authentication. Almost anything is possible once you know how. This is frustrating when you don't know how, but once you get the hang of it, it works pretty well.

You can also write custom code in C# that runs directly within Dynamics, which - again - has a hell of a learning curve but is actually a really powerful technique. With a little ingenuity, you can even build C# components that implement your business logic and rules, and then decide later where that logic should be deployed. I like that sort of flexibility.
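As a sketch of what I mean - the class and rule below are invented for illustration, not our actual membership logic - you keep the rule itself in plain C# with no CRM dependencies, and host it wherever suits:

using System;

// A business rule as plain C#: no Dynamics dependencies, so the same class
// can be unit-tested in isolation and hosted anywhere - inside a CRM plugin,
// behind a web API, or in a nightly batch job.
public class RenewalEligibilityRule
{
    // Hypothetical rule: members may renew up to 90 days before expiry,
    // provided their account is in good standing.
    public bool CanRenew(DateTime membershipExpires, bool inGoodStanding)
    {
        return inGoodStanding
            && membershipExpires <= DateTime.UtcNow.AddDays(90);
    }
}

Inside Dynamics, a thin IPlugin adapter unpacks the target entity from the execution context and delegates to the rule; outside Dynamics, anything that can call a C# method can reuse it unchanged.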

...the Bad...

Dynamics CRM is the heart of a software ecosystem that's still a long, long way from the software-as-craft ethos which is becoming fairly common elsewhere. Partly, this is a question of tooling and process. Things like revision control, continuous deployment and unit testing become really quite hard when your "project" is a ZIP file full of proprietary XML - no branching, no merging, only the most rudimentary support for rollbacks. As a developer, this is frustrating - but it's important to remember that systems like Dynamics blur the boundary between "developers" and "users" that's prevalent in most development projects. It's a platform. The people who use it every day for the next decade WILL be making changes to it - because the whole reason we're doing it is to empower The Business™ to manage their own membership system - and those people aren't software developers.

More worryingly, Dynamics CRM appears to be one of those products where you can make a very good living as an "expert" without actually knowing anything. The amount of misinformation and bad advice surrounding this product is astonishing. I've read blog posts promoting the most dreadful solutions. I've spoken to consultants - and interviewed contractors on quite generous day-rates - who are completely unaware of whole swathes of the core product's capabilities. That makes it really difficult to know who you can trust - and how much time to invest in validating your assumptions before committing to any kind of delivery.

This isn't a problem with Microsoft Dynamics per se - I think it's probably characteristic of any market sector where we see the dreaded "point and click - no development required!" shibboleth - but it's still a lot less fun than working on, say, Nancy or ServiceStack, where the code is open source and most of the community seem to know what they're doing.

...and the Ugly

And then there's the stuff that's just plain stupid. For example, there's an entity in CRM called a Contract. Once a contract has been "invoiced" - which you can do by clicking a single button in the UI - it is immutable. There is literally NOTHING you can do - even with all the Administrator permissions in the world - to change any detail on that contract. Ever. There are various workarounds for this. Microsoft's recommendation is that you "clone" the contract as a draft, cancel the original and replace it with the clone. Our own solution was to write a custom plugin in C# that throws an exception if anyone ever tries to invoice a contract, so our contracts can never reach that immutable state - this works, but it's not pleasant having to code around such an arbitrary restriction.
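The guard plugin itself is only a few lines. This is a sketch - which message and entity you register it against depends on your setup - but InvalidPluginExecutionException is the SDK's standard way of aborting an operation and surfacing a message to the user:

using System;
using Microsoft.Xrm.Sdk;

// Registered (pre-validation) on the operation that invoices a contract,
// this aborts the state change before the contract can become read-only.
public class BlockContractInvoicingPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // Throwing InvalidPluginExecutionException rolls back the operation
        // and displays this message in the CRM user interface.
        throw new InvalidPluginExecutionException(
            "Invoicing contracts is disabled: invoiced contracts become permanently read-only.");
    }
}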

They say there are only two hard problems in software development - cache invalidation and naming things. Well, Dynamics CRM will remind you of that on a daily basis. Almost everything in CRM needs to have a name, otherwise you can't save the record. For people, companies and products, this makes sense... but, really, when you're creating an order to allow somebody to renew their membership, does it need a name? Really? So we're having to build our own code to automatically generate these names all over the place, and it's a waste of time.
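The code itself is trivial - the waste is having to write and register it at all. Ours boils down to a pre-operation plugin on create, something like this sketch (the name format is invented, and it assumes the entity's mandatory display-name attribute is called "name"):

using System;
using Microsoft.Xrm.Sdk;

// Pre-operation plugin on Create: if no name was supplied, generate one
// so that the platform's mandatory-name validation is satisfied.
public class AutoNamePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];
        if (!target.Attributes.Contains("name"))
            target["name"] = string.Format("Membership renewal {0:yyyy-MM-dd}", DateTime.UtcNow);
    }
}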

As for cache invalidation - there's a thing called the "portal toolkit", aka the Developer Extensions for Microsoft Dynamics CRM 2015. You can read all about it here. Pay particular attention to the sentence that says:

The Customer Portal and Partner Relationship Management (PRM) Portal solutions for Microsoft Dynamics CRM 2015 will be available from the Microsoft Dynamics Marketplace soon.

I don't know when "soon" is, but at the moment, if you start using the portal toolkit, you end up with a cache that you can't invalidate - the only supported invalidation mechanism involves deploying an (unreleased) solution to your CRM server, which calls an (undocumented) DLL that you're expected to deploy to your web server. Oh, and if you're running a server farm, you'll need to work out how to configure your load balancer to forward these cache invalidation calls to all your backend servers simultaneously.


Like democracy, it's probably fair to say that Dynamics CRM is the worst option apart from all of the alternatives. It certainly isn't perfect, but it is pretty damn good, and I think the long-term advantages will significantly outweigh the short-term headaches we're having. And if they don't? Well, that's why we have loosely-coupled architecture, isn't it?

Tuesday, 11 August 2015

Dynamics CRM Online: Any SMTP Server You Like! (as long as it's Gmail or Yahoo)

UPDATE: I take it all back. Following a really useful call from Dynamics CRM support, it turns out there are three different mail integration scenarios, only there's no reference to the third one in the CRM Online configuration screens.

  1. Server-side sync using Exchange
  2. Server-side sync using POP3/SMTP - this is the one that's restricted to Yahoo/Gmail, presumably because it IS doing something very clever.
  3. Dynamics CRM Email Relay - a completely separate tool that you download and install, and then use to configure your email routing.

I'm going through the mail relay config as we speak - will let you know how it goes. Sorry, Microsoft. :)

We are doing a lot of work at the moment with Microsoft Dynamics CRM Online. It's generally very nice - as a business tool, it's excellent; the UI is great, it's a good fit for our requirements, and despite a couple of headaches caused by the restrictions of the CRM Online hosting environment, we've now got a pretty elegant two-way data sync system up and running using EasyNetQ and webhooks.

Now, one of our key requirements for CRM is being able to send email to our customers. Yeah, I know - audacious, right? CRM Online is our first foray into the cloudy world of Microsoft Office 365, and all the rest of our infrastructure (Exchange, Active Directory, etc.) is still hosted on-premise. For testing and staging systems, we use mailtrap.io - a really slick service that will accept incoming mail and store it so you can test things are relaying properly without actually sending any real emails. For production, we use Mandrill, which is a commercial mail relay service - high availability, reputation management, excellent delivery statistics. We send about a million emails a month through Mandrill, and it works beautifully.

So... this morning I log into CRM Online, go into Settings => Email Configuration => New POP3/SMTP Server Profile. Looks pretty straightforward, right? I enter our relay's details, click "save", and get an "unsupported email server" error.

Weird. I don't want to set up server-side synchronization - I just want to send email. So I start poking around, Googling, that kind of thing, and find this article, which says:

You may have read in the documentation that GMail and Yahoo are listed as supported pop3/smtp providers for Microsoft Dynamics CRM Online and

"Although other POP3/SMTP systems may work with Microsoft Dynamics CRM, those systems were not been tested by Microsoft and are not supported."

Let’s be clear about “not supported“. In this context it means precisely “you will not be able to go past server profile screen as we will reject any pop3/smtp provider that is not GMail or Yahoo.”

And that is exactly what the email relay screen does. If you enter any value other than "smtp.gmail.com" or "smtp.mail.yahoo.com" in the Outgoing Server Location field, you get the "unsupported email server" message. I've even tried modifying the configuration using the SDK instead of the UI, but get the same response - "the email server is unsupported".
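For reference, the SDK attempt amounted to something like this - a sketch from memory, so treat the attribute logical name as my best recollection of the EmailServerProfile entity rather than gospel; smtp.mandrillapp.com is simply our relay:

using Microsoft.Xrm.Sdk;

// Point an existing POP3/SMTP server profile at our own relay.
// CRM Online rejects the save with the same "unsupported email server" error.
var profile = new Entity("emailserverprofile") { Id = existingProfileId }; // Guid of the profile created in the UI
profile["outgoingserverlocation"] = "smtp.mandrillapp.com"; // attribute name from memory
service.Update(profile); // service is your IOrganizationService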

There are two possibilities here:

  1. Microsoft have worked closely with Yahoo and GMail to provide first-class business email support, with all sorts of clever features and proprietary extensions that aren't supported by any other SMTP mail servers. (UPDATE - yes, it appears this is more-or-less what they've done. See above.)
  2. Somebody has arbitrarily decided to support GMail and Yahoo and exclude all other SMTP servers. Including, by the way, Hotmail (owned by Microsoft), Outlook.com (owned by Microsoft), and our on-premise SMTP relay (powered by Microsoft Exchange).

If I were a betting man, I know which one I'd be putting money on. I'm really disappointed about this. CRM Online isn't a toy. It's a seriously powerful platform, with a hefty price tag attached... and just when everything is going really nicely, something like this comes along and our beautiful product vision is completely hamstrung by the fact that if we want to email our customers, we need to do it via GMail or Yahoo - and there's absolutely no rational justification for this.

Anyone else encountered this particular restriction? Anyone have any bright ideas as to how we can work around it?

Tuesday, 4 August 2015

As a Developer, I Want to Abolish User Stories, So That I Can Ship Great Products Faster.

Once upon a time, when you programmed by the seat of your pants and the handlebars of your moustache, lots of people wrote specifications. And there was this lovely idea that you could get a Business Analyst to write you a specification, which laid out in very precise detail exactly how The Software was going to function, and then you would give the specification to a development team and they would build it for you, and Everything Would Be Lovely. Except, of course, it wasn't. Only one team ever shipped a massively successful project using this sort of specification-driven waterfall approach, and trust me - they were smarter than you, and they had a much bigger budget. So some bright folks started wondering why software always went wrong, and suggested a whole load of things that would probably help, and came up with things like scrum, and unit testing, and continuous integration.

One of the great ideas that came out of this movement was the user story. See, specifications tended to be incredibly dry and formal, and often did a really bad job of communicating exactly why you were doing something. Joel Spolsky wrote a great article years ago on writing painless functional specifications, but user stories take this idea even further. And, like a lot of the good ideas that came out of agile, there's a descriptive element and a prescriptive element. The descriptive part says "write short, simple stories, not detailed specifications" - and the prescriptive part suggests some 'templates' to help you get the hang of this. Two formats became popular for working with user stories - given-when-then and as-a-I-want-so-that.

As-a-I-want-so-that is pretty good for describing, at a very high level, what you are trying to do:

As a marketing coordinator, I want to send email to everyone on our mailing list, so that I can tell them about our big summer sale.

And then you'll add a couple of acceptance criteria to the story:

Given I am not subscribed to the mailing list, when the marketing coordinator sends an email about the summer sale, then I should not receive the email.

Given I have received a newsletter email, when I click the unsubscribe link, then I should be removed from the mailing list.

This sort of clarity makes it easy to build the feature, easy to test the feature, and easy to understand exactly what you're doing and why. See? Simple.

Right. Imagine we have a spec from the Olden Days, and Volume 6, Section 8, Subsection 14, Paragraph 9 says:

The handset will feature a Maplin N27QQ component installed on the side of the unit. The N27QQ will be positioned exactly 45mm from the top of the unit. When activated, the N27QQ component will cause the internal speaker to emit a square wave signal at 16Hz, at a volume of not less than 90dBA as measured from a distance of 1 metre.

Now, let's take an old-school project manager and turn them into an agile product owner. Probably by sending them on a three-day course with a certificate and everything.  When they get back, they'll dig out the old spec, and they'll laugh at how dry and opaque it all is. And they'll sit down, all excited, and start writing stories that look like this:

As a handset user, I want a Maplin N27QQ component installed on the side of the unit, so that when the component is activated the device will emit a square wave signal at 16Hz at a volume of not less than 90dBA measured from a distance of 1m

And then they'll add acceptance criteria:

Given I am a handset user, when I activate the N27QQ component, then the frequency of the square wave signal will be 16Hz

Given I am a handset user, when I activate the N27QQ component and measure the signal volume at a distance of 1m, then the volume of the square wave will be not less than 90dBA

Given I am a handset user, when I examine the handset, then the distance of the N27QQ component shall be 45mm from the top of the unit

and everything is AWESOME because we're being AGILE! The story will sit there in the icebox for a couple of weeks, until it gets bumped up the list and included in a backlog refinement meeting. At which point this conversation will happen:

Dev: "Er... why are we doing this? I don't understand the justification for including this feature."
PM: "Sorry, what?"
Dev: "Well... do you know what a Maplin N27QQ component is?"
PM: "It's the component specified in the specification"
Dev: "Yes... it's also a big round red plastic button the size of a softball"
PM: "Oh. Well, it's in the specification now."
Dev: "Right. Explain to me what happens when you press it"
PM: "Oh, easy. The unit emits a square wave signal at 16Hz, at a volume of..."
Dev: "Yeah, yeah. Do you know what that sounds like?"
PM: "Er... it's probably some sort of ring tone"
Dev: "No, it's a fart noise as loud as a motorcycle"
PM: "Oh. Well, can we estimate the story, please?"

At which point the developer will start flicking through LinkedIn on their phone and wondering how long it is until they can go to the pub.

You know what the story should have said? It should have said something like:

As Bongo the Clown, I want my new phone handset to feature a massive red fart button, so that I can make children laugh at parties when I answer it.

First of all, unless you happen to be in the circus supply business, someone's going to say "hang on - why are we making phone handsets for clowns? Is that a thing now?"

Second, anybody who reads that story can immediately picture what they're building, who wants it, and why. It leads to creative discussion and suggestions. Someone might have a brilliant idea about replacing the fart noise with a trombone, or making the button light up when you press it. One of your team might remember how Dad used to get mad every time Tony Blair came on the radio, but the volume knob on Dad's stereo fell off when he tried to turn the radio off, and how hilarious it was watching him chase it across the kitchen while cursing and muttering under his breath, and maybe we should make the fart button actually fall off when you press it so Bongo has to chase it around the room? Clear stories let you cut through the waffle and get straight to the important bit - what is this? How do we build it? How might we make it better?

Now, compare these two sentences:

  1. As a handset user, I want a Maplin N27QQ component installed on the side of the unit, so that when the component is activated the device will emit a square wave signal at 16Hz at a volume of not less than 90dBA measured from a distance of 1m
  2. We'll put a giant red fart button on the side of the phone, so that Bongo the Clown can use it to make kids laugh when he's doing children's parties.

Which one makes more sense to you, as a reader? Which one is going to lead to better decisions, better estimation and less time wasted in meetings?

As-a-I-want-so-that is not some sort of witchcraft. It doesn't magically translate your dry, meaningless specification into readable English. Like writing unit tests, it can help to keep you honest when you're breaking bad habits, and it's one more tool to keep handy when you're doing this stuff for real, but it is not why we're here. It's the story that matters, not the syntax. And if you can't tell me a short, simple story about what you want and why, then maybe you don't actually understand the thing you're trying to build, and no amount of syntactic convention is going to fix that.

Wednesday, 29 July 2015

The Mysterious Case of the Missing Milliseconds

Strings are the lingua franca of many distributed systems, and Spotlight is no different. Earlier today, we hit a weird head-scratching bug in one of our services, and - surprise, surprise - it turns out it's all to do with strings. To work around limitations of an old line-of-business application, we have a database trigger (no, really) that captures changes made to a particular table, serializes them into an XML message, and pushes this into a SQL Service Broker queue; we then have a Windows service that pulls messages off the queue, parses the XML, breaks it into nicely manageable chunks and publishes them all to RabbitMQ using EasyNetQ. Simple. Except, once in a while, it blows up and starts complaining about FormatExceptions.
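For context, the publishing end of that Windows service is about as simple as messaging gets. With EasyNetQ it's little more than this (a sketch - the connection string and message type are placeholders, not our production code):

using System;
using EasyNetQ;

// A typed message representing one change captured by the trigger.
public class RecordChangedMessage
{
    public string TableName { get; set; }
    public int RecordId { get; set; }
    public DateTime ModifiedOn { get; set; }
}

// Connect to RabbitMQ and publish each parsed change; subscribers
// elsewhere in our infrastructure pick it up and sync.
var bus = RabbitHutch.CreateBus("host=rabbitmq.example.com");
bus.Publish(new RecordChangedMessage
{
    TableName = "Membership",
    RecordId = 12345,
    ModifiedOn = DateTime.UtcNow
});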

Now... within the database trigger, we're converting the modified date to a string using CONVERT with ISO 8601 style 126 - something along these lines (variable names illustrative):
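
SELECT @ChangeDate = CONVERT(VARCHAR(128), @ModifiedOn, 126) -- style 126 = ISO 8601
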
which returns 2015-07-29T20:55:21.130 as you'd expect.

There's then a line of code in the Windows service that says:

// d is the date string extracted from the XML message
var format = "yyyy-MM-ddTHH:mm:ss.fff";
var modified = DateTime.ParseExact(d, format, CultureInfo.InvariantCulture,
    DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal);

Now, this is the code of somebody who knows that turning datetimes into strings and back again can get a bit tricky, and so has left absolutely nothing to chance - they've supplied an exact date format, they've specified a culture, they've even gone so far as to specify the DateTimeStyles. There are unit tests and integration tests, and everything looks absolutely lovely. And then it blows up. Very occasionally.

Except... SQL Server does something weird.

DECLARE @DateTime DATETIME

SELECT @DateTime = '2015-07-29 21:59:15:123'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:15.123 (fine!)

SELECT @DateTime = '2015-07-29 21:59:15:000'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:15

SELECT @DateTime = '2015-07-29 21:59:15:999'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:16

SELECT @DateTime = '2015-07-29 21:59:15:001'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:15

First, SQL Server's datetime type doesn't have true millisecond precision - values get rounded to increments of .000, .003 or .007 seconds. Second - if the (rounded) milliseconds part is zero, it'll be omitted from the string representation entirely. Which means our incredibly specific and detailed date parsing routine will choke, because suddenly it has a date that doesn't match the format we've specified, and DateTime.ParseExact will throw a FormatException. Unit tests don't pick it up, because why would you mock such completely bizarre (and undocumented) behaviour when you don't even know it exists?

What this means is that any change whose timestamp has a milliseconds part that rounds to zero - roughly 3 milliseconds in every second, or about 0.3% of all our transactions - will fail with a FormatException rather than getting synced to the rest of our systems. Which means fishing them out of the error queue and sorting them out manually - ah, the joy of distributed systems. This formatting weirdness happens on every version of SQL Server back as far as 2003, but there's no reference to it in the documentation until SQL Server 2012. It's been raised as a bug and closed as 'by design', because "the ISO 8601 spec leaves the conversion semantics for fractional seconds up to the implementation" - which I'm pretty sure didn't mean "go ahead and be internally inconsistent!" - but, as with so many other issues like this, fixing the bug would change behaviour that's been in place for years, and could break things. I've no idea how - or why - anyone would build a system that genuinely relies on this bizarre idiosyncrasy, but I'll bet good money somebody out there has done it.
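Once you know the failure mode, the fix is simple: DateTime.ParseExact has an overload that takes an array of candidate formats, so you can accept the timestamp with or without its fractional seconds. A sketch - not necessarily exactly what we shipped:

using System;
using System.Globalization;

// Accept ISO 8601 timestamps from SQL Server whether or not the milliseconds
// survived the round-trip through CONVERT(..., 126).
static DateTime ParseSqlTimestamp(string d)
{
    var formats = new[] { "yyyy-MM-ddTHH:mm:ss.fff", "yyyy-MM-ddTHH:mm:ss" };
    return DateTime.ParseExact(d, formats, CultureInfo.InvariantCulture,
        DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal);
}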

The beautiful irony, of course, is that if we'd used DateTime.Parse instead of ParseExact, we'd never have had a problem. :)

Friday, 26 June 2015

REST Workshop at Progressive.NET 2015 next week

I'll be delivering a hands-on workshop at Progressive.NET 2015 at SkillsMatter here in London next week, where I'll be talking about advanced REST architectural patterns, and actually implementing some of those patterns in .NET using several of the frameworks available for building HTTP/REST APIs on ASP.NET.

I've tried quite hard to avoid any esoteric requirements, so attendees should only need:

  • A laptop running Visual Studio 2013, Powershell and IIS
  • A reasonable working knowledge of C# and HTTP
  • A test runner capable of running NUnit tests - personally I love NCrunch deeply, but ReSharper or plain old NUnit will do just fine.
  • Some familiarity with Git and GitHub - if you know how to fork a repo and clone it to your workstation, you should be fine.

The repo we'll be working from is https://github.com/dylanbeattie/Restival - there shouldn't be a great deal of setup required, but if you want to clone the repository, check it compiles, and set up your local IIS endpoints by running deploy.ps1 ahead of time, it'll save a little time on the day.

During the workshop, we'll be discussing advanced HTTP/REST API patterns - hypermedia, pagination, resource expansion, HTTP PATCH, OAuth2 - and showing off some tools that can help us design and monitor our HTTP APIs. Alongside the discussion, we'll be implementing some of the techniques covered using your preferred choice of WebAPI, ServiceStack, OpenRasta or NancyFX - or even all four, if you're feeling productive - and then discussing the relative pros and cons of these frameworks for each of the patterns we're implementing.

See you there!

Friday, 19 June 2015

Slides and code from NDC Oslo 2015

I’m here at the Oslo Spektrum in Norway at NDC 2015, where I’ve been talking about the machine code of the web, SASS, TypeScript, CoffeeScript, bundle transformations, web optimisation in ASP.NET, ReST, hypermedia, resource expansion, API versioning, OAuth2, Apiary, NGrok, RunScope – and that’s just the stuff I actually managed to cover in my two talks. It’s been a really great few days, and huge thanks to the organisers for going to such great lengths to make sure everything has gone so smoothly.

A couple of non-software highlights that I really liked:

  • The catering has been excellent, and having food available throughout the day is a great way to avoid the lunchtime rush. (And the free coffee at the Twilio stand is excellent!)
  • The overflow area – where you can tune into any of the 9 talks currently in progress via a wireless headset, or just sit and channel-surf – is a great idea. (But remember it’s there if you’re doing live demos with audience participation – I’m pretty sure the “winner” of my NGrok demo was one of the people in the overflow area!)
  • If you ever get the chance to see the Norwegian band LoveShack, do it. They played the conference after-party last night, and closed their set with a note-perfect 20-minute medley which went through (I think!) Jump, Celebrate, Girls! Girls! Girls!, Welcome to the Jungle, Paradise City, the theme from Baywatch, Livin’ on a Prayer, Radio Gaga and a half-dozen more before dropping back into Jump mid-guitar-solo without skipping a beat. They’re playing the John Dee bar in Oslo this evening, and I’m almost tempted to change my flight just to stick around and see them again…

Slides, Links and Code Samples

The slides and code samples for the talks I’ve given are up on GitHub: the repo is at https://github.com/dylanbeattie/ndc-oslo-2015 or if you want to download the slide decks directly, links are:

Front-End Fun with Sass and Coffee

The Rest of ReST

I also want to follow up on one specific question somebody asked after my ReST talk this morning, which can be paraphrased as “are you comfortable recommending that people use HAL, seeing as it’s basically a dead specification?” An excellent question, and one that probably deserves a slightly more detailed answer than the one I gave on the spot. To put this in context, the HAL specification was submitted to the IETF as draft-kelly-json-hal-06 in October 2013; that draft expired in 2014 and hasn’t been updated or ratified since, so I can see how you could argue that HAL is “dead”.

First – I’d disagree with that. Although the specification itself hasn’t changed in a while, the mailing list and community are still relatively active, and I’ve no doubt they would still welcome engagement and contributions from anybody who wished to participate. Second – the spec still provides a perfectly valid approach. It’s a specification, not a tool or a framework, and in terms of delivering working software, if HAL helps you solve your problem then I say go for it. Third – and I should have made this more obvious in this morning’s talk – HAL is just one of several approaches for delivering hypermedia over JSON. I used HAL in my examples because I think it’s the most readable, but that doesn’t mean it’s the best choice for your application. (Remember, one of my requirements for a hypermedia language in this context was “looks good on Powerpoint slides”.) If you’re interested, I would recommend also looking at JSON API, JSON-LD, Collection+JSON and SIREN. There’s a great post by Kevin Sookocheff which succinctly summarises the differences between four of them – it doesn’t cover JSON API – and concludes “there is no clear winner. It depends on the constraints in place on your API”.

Right. I’m going to watch Troy Hunt making hacking child’s play for an hour, and then head to the airport. Thank you, Oslo. It’s been a blast.


Friday, 12 June 2015

Restival Part 4: Deployment and Fun with Case Sensitivity

Before we dive into the next phase of API development, I wanted to make it a little easier to install and run the Restival app on your own machine, so I've added a Powershell Deploy.ps1 script to the project which will:

  • Create a new local IIS website called Restival, bound to http://restival.local/
  • Create applications for each of our API implementations
  • Configure the whole thing to run under the ASP.NET v4.0 application pool.

One interesting quirk I discovered whilst doing this is that OpenRasta appears to be case-sensitive when it comes to request URLs. I'd initially created the IIS applications with mixed-case names like Api.Nancy and Api.OpenRasta.

The test project uses lowercase URLs - so http://restival.local/api.nancy/ - and for some strange reason, the OpenRasta implementation just doesn't work if the IIS application name differs in case from the URL in the unit test. I'll dig into this a little further, but for now I've just modified the deploy script to do a .ToLower() on the application name, and everything's working. Code for this instalment is at https://github.com/dylanbeattie/Restival/tree/v0.0.4