Monday, 4 May 2015

Restival Part 2: All Aboard The Routemaster

Hello, and welcome to the second instalment of Restival: The Great .NET ReST Showdown (part 1 is here if you missed it). So far, our API defines a single, very simple method - "hello". Making a GET request to /hello will return { "Message" : "Hello, World!" }, and making a GET request to /hello/chris will return { "Message" : "Hello, Chris!" }.

The code we're discussing here is on GitHub as release v0.0.1. This release supports /hello/{name}, which demonstrates routing and parameter binding. I've deliberately not implemented "Hello, World" at /hello yet, because I want to do that using each framework's conventions for specifying default parameter values, and that logically can't happen until you've defined your routes. Even at this incredibly early stage, there are some interesting design decisions going on.
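All four implementations share the same back-end code, so the greeting itself is just a simple class - something along these lines (a sketch: the real class in the repo may differ, and the title-casing is my assumption based on /hello/chris coming back as "Hello, Chris!"):

public class Greeting {
    public Greeting(string name) {
        // Capitalise the name (needs System.Globalization), so "chris" becomes "Chris"
        var capitalised = CultureInfo.InvariantCulture.TextInfo.ToTitleCase(name);
        Message = string.Format("Hello, {0}!", capitalised);
    }
    public string Message { get; private set; }
}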

Routing and Route Binding

Routing controls how your app maps incoming HTTP requests to methods - it's the set of rules that say "when you get a request that looks like X, run method Y on class Z".

Nancy has a really lightweight routing syntax inspired by Sinatra. By inheriting from NancyModule, you get access to a RouteBuilder - a sort of dictionary that maps route patterns to anonymous methods - for each supported HTTP verb (DELETE, GET, HEAD, OPTIONS, POST, PUT and PATCH). To add a route, you supply the pattern to match and the method implementation:

public class HelloModule : NancyModule {
    public HelloModule() {
        Get["/hello/{name}"] = parameters => new Greeting(parameters.name);
    }
}

Note the Nancy convention whereby handlers that don't actually use their parameter dictionary take a single underscore as the parameter name, to indicate "we're not using this variable for anything". It's also worth noting that Nancy's lightweight syntax won't stop you defining multiple handlers for the same route - but this can lead to non-deterministic behaviour, so don't do it :)
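For example, a handler that ignores its parameters entirely might look like this (a hypothetical route, not part of Restival):

Get["/ping"] = _ => "pong";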

WebAPI uses an explicit routing table that's configured during application startup: there's a call to WebApiConfig.Register(GlobalConfiguration.Configuration) in Application_Start, and routes are mapped by specifying the name, the URL pattern and the defaults to use for each route. (If you're familiar with routing in ASP.NET MVC, WebAPI uses a very similar routing configuration, but with the 'action' mapped to the HTTP verb instead of to a path segment.)

config.Routes.MapHttpRoute(
    name: "Hello",
    routeTemplate: "hello/{name}",
    defaults: new { controller = "Hello" }
);
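The route template on its own doesn't do anything until there's a controller for it to hit - the "Hello" default tells WebAPI to look for a class called HelloController, and the HTTP verb is matched to the method name. A minimal sketch (the actual Restival controller may differ):

public class HelloController : ApiController {
    // GET hello/{name} - WebAPI picks this method because the verb is GET,
    // and binds the name parameter from the route template.
    public Greeting Get(string name) {
        return new Greeting(name);
    }
}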

OpenRasta and ServiceStack are both far more explicit about the relationship between resources, routes and handlers. OpenRasta uses a fluent configuration interface to declare your resources (i.e. the bits of data we're interested in), your URIs (routes), your handlers (the bits of code that actually do the work), and your codecs (which handle things like serialization and content types):

public class Configuration : IConfigurationSource {
    public void Configure() {
        using (OpenRastaConfiguration.Manual) {
            ResourceSpace.Has.ResourcesOfType<Greeting>()
                .AtUri("/hello/{name}")
                .HandledBy<HelloHandler>()
                .AsJsonDataContract();
        }
    }
}
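The HelloHandler referenced in that configuration is just a plain class - OpenRasta selects the operation whose method name matches the HTTP verb and binds {name} from the URI template. Something like this (a sketch; the real handler may differ):

public class HelloHandler {
    // Handles GET /hello/{name}; name is bound from the URI template.
    public Greeting Get(string name) {
        return new Greeting(name);
    }
}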

Finally, ServiceStack requires you to explicitly define requests (DTOs representing the incoming request data), services (analogous to handlers in our other frameworks) and responses. This is far more verbose than the other frameworks, but providing these abstraction layers between every aspect of your ReST API and your underlying codebase gives you more flexibility to evolve your API independently of the underlying business logic. You map your routes to your request DTOs using the Route attribute, and inherit from ServiceStack.Service when implementing your handlers. ServiceStack maps HTTP verbs onto service method names - HelloService.Get(Hello dto), HelloService.Post(Hello dto), etc. - but also supports a catch-all Any() method which will map incoming requests regardless of the request verb.

[Route("/hello")]
[Route("/hello/{name}")]
public class Hello {
    public string Name { get; set; }
}

public class HelloResponse {
    public string Message { get; set; }
}

public class HelloService : Service {
    public HelloResponse Any(Hello dto) {
        var greeting = new Greeting(dto.Name);
        var response = new HelloResponse() { Message = greeting.Message };
        return (response);
    }
}

So there you go. /hello/{name} takes one line in NancyFX, a couple of lines in OpenRasta and WebAPI, and three entire classes in ServiceStack. Before you draw any conclusions, though, try pointing a browser at the root URL of each API implementation.

Nancy gives you this rather splendid 404 page - complete with Tumbeast:

[Image: Nancy's 404 page, complete with Tumbeast]

Running under IIS, WebAPI and OpenRasta both interpret GET / as a directory browse request, and give you the all-too-familiar IIS 7.5 HTTP error screen:

[Image: the IIS 7.5 HTTP error screen]

But the pay-off for the extra boilerplate required by ServiceStack is this rather nice API documentation page, describing the services and encoding formats supported by the API and providing WSDL files for adding our API as a service endpoint. Now, we're not actually using any of that yet... but as our API grows, it's going to be interesting to see how much extra work the other frameworks require to do things that ServiceStack provides for free. (Or for $800 per developer, depending on what you're doing with it.)

[Image: the ServiceStack API documentation page]

Now, it's important to remember that we're trying to reflect the conventions and idioms of our chosen frameworks here. You could, without too much difficulty, implement the request/service/response pattern favoured by ServiceStack on any of the other frameworks, or get your ServiceStack services to return raw entities instead of mapping them into Response objects - but if you're trying to make framework A behave like framework B, you might as well just switch to framework B and be done with it.

In the next episode, we're going to make GET /hello return "Hello, World!", and in the process look at how to define default values for our route parameters in each of our frameworks. Until then, happy hacking!

Tuesday, 28 April 2015

One API, Four Frameworks: The Great .NET ReST Showdown

There’s only two hard problems in software: cache invalidation, naming things, off-by-one errors, null-terminated lists, and choice overload. Back when I started building data-driven websites, classic ASP gave you Request, Response, Application, Session and Server, and you built the rest yourself - and it was uphill both ways! These days, we’re spoiled for choice. Whatever you’re trying to achieve, you can bet good money that somebody’s been there before you. If you’re lucky, they’ve shared some of their code – and if you’re really lucky, they’ve built a framework or a library you can use.

Since the heady days of classic ASP, I’ve built quite a few systems that have to expose data or services to other systems – from standalone HTTP endpoints that just return true or false, to full-featured ReST APIs. I’ve come across several frameworks that are designed to help developers create ReSTful APIs using Microsoft .NET, each of which doubtless has its own strengths and weaknesses – and now I find myself facing the aforementioned choice overload, because if I was starting a new API project right now, I honestly don’t know which framework I’d use. Whilst I’ve got hands-on experience with most of them, I’ve never had the opportunity to try implementing the same API spec in multiple frameworks to compare and contrast their capabilities on a like-for-like basis.

So here goes. Over the next few months, I’m going to be developing the same API in four different frameworks, side-by-side. The APIs have to use the same back-end code – same data access, same business logic – and have to pass the same integration test suite. I’m going to start out really simple – “hello, world!” simple – and gradually introduce features like resource expansion, pagination, OAuth2 and content negotiation. The idea is that some of these features will actually break the interface, so I’ll also be looking at how to handle API versioning in each of my chosen frameworks. I’m also going to try and respect the idioms and conventions of each of the frameworks I’m working with – good frameworks tend to be opinionated, and if you don’t agree with their opinions you’re probably not going to find them particularly helpful.

The Frameworks

Microsoft ASP.NET WebAPI (http://www.asp.net/web-api)

Microsoft’s out-of-the-box solution for building HTTP-driven APIs. Superficially similar to ASP.NET MVC but I suspect there’s much more to it than that. I’ve built a couple of small standalone APIs in WebAPI but not used it for anything too substantial.

ServiceStack (https://servicestack.net/)

For a long while I was completely smitten with ServiceStack. Then it hit v4.0 and went from free-as-in-beer to expensive-as-in-$999-per-developer for any reasonably-sized project – there is a free usage tier, but it’s pretty restrictive. That said, it’s still a really powerful, flexible framework. Version 3 is still on NuGet, is still available under a BSD license, and there’s at least one active project based on a fork of the ServiceStack v3 codebase. I like ServiceStack’s conventions and idioms; working with it taught me a lot about ReST; it has great support for things like SwaggerUI, and I suspect that as I start implementing various API features, ServiceStack is going to deliver additional value and capabilities which the other frameworks can’t match. Will it add enough value to justify $999 per developer? We’ll see :)

OpenRasta (http://openrasta.org/)

I’ve played with OpenRasta on and off over the years, though I’ve never used it on anything substantial. I know the folks over at Huddle are using it in a big way and having great results with it, so I’m really curious to see how it stacks up against the others. (I should probably disclose a slight bias here in that Sebastien Lambla, the creator of OpenRasta, is a friend of mine; it was Seb who first got me thinking about ReST via the London .NET User Group and prolonged conversations in the pub afterwards.)

NancyFX (http://nancyfx.org/)

This one is completely new to me – until last week, I’d never even looked at it. But so far, it looks really nice – minimalist, elegant and expressive, and I’m looking forward to seeing what it can do.

Other Candidates

It would be interesting to get some F# into the mix – mainly because I’ve never used F# and I’m curious. I’ve heard interesting things about WebSharper and Freya – and, of course, if anyone wants to add an F# implementation and send me a pull request, go for it!

The API

I’m using Apiary.IO to design the API itself – you can check it out at http://docs.restival.apiary.io/

The Code

The code is on GitHub - https://github.com/dylanbeattie/Restival.

If you want to run it, you’ll need to set up IIS applications for each of the four framework implementations – I use a hosts file hack to point restival.local at 127.0.0.1, and I’ve set up http://restival.local/api.webapi, http://restival.local/api.nancy, http://restival.local/api.servicestack and http://restival.local/api.openrasta
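That hosts file hack, for anyone who hasn’t done it before, is just one extra line in %SystemRoot%\System32\drivers\etc\hosts:

127.0.0.1    restival.local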

The test suite is NUnit, and uses RestSharp to make real live HTTP requests; all the actual test logic is in an abstract base class. There are four concrete implementations, and the only difference between them is the URL of the API endpoint under test.
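The shape of it looks something like this – the names here are illustrative rather than the actual Restival test code:

public abstract class HelloApiTests {
    // Each concrete fixture supplies the endpoint of the implementation under test.
    protected abstract string BaseUrl { get; }

    [Test]
    public void Get_hello_name_returns_personalised_greeting() {
        var client = new RestClient(BaseUrl);
        var request = new RestRequest("hello/chris", Method.GET);
        var response = client.Execute(request);
        Assert.That(response.Content, Is.StringContaining("Hello, Chris!"));
    }
}

[TestFixture]
public class WebApiHelloTests : HelloApiTests {
    protected override string BaseUrl {
        get { return "http://restival.local/api.webapi"; }
    }
}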

The Backlog

Features I want to implement include, but are probably not restricted to…

  • Pagination. What’s a good way to handle huge result sets without causing timeouts and performance problems?
  • Resource expansion. How do you request a customer, and all their orders, and the invoices linked to those orders, in a single API call?
  • API versioning – using all three different ways of doing it wrong
    • Version numbers in the URL (api.foo.com/v1/)
    • Version numbers in a custom HTTP header (X-Restival-Version: 1.0)
    • Content negotiation based on MIME types (Accept: application/vnd.restival.v1.0+json)
  • OAuth2 and bearer token authentication. (You’ll like this one because I’m not using DotNetOpenAuth)
  • API documentation – probably by seeing how easily I can add Swagger support to each implementation

In every case, this is stuff I’ve already implemented in at least one project over the last couple of years, so it’s going to be interesting seeing how readily those implementations translate across to the other frameworks I’m using.

Sound like fun? You bet it does. Tune in and see how it unfolds. Or come to NDC Oslo in June or Progressive.NET in London in July, where you not only get to listen to me talk about ReST, you get several days of talks and workshops from some of the best speakers in the industry.

Friday, 24 April 2015

Spotlight, Dynamics CRM, and the age-old question of “build vs buy”

We’re in the early stages of creating a new membership system built on Microsoft Dynamics CRM 2015. This isn’t a decision we’ve taken lightly – buy vs. build is always a complex question where software is concerned, as Mike Hadlow explains in this excellent blog post. In our case, though, there are two good reasons why we’ve decided to integrate an off-the-shelf solution.

First, we are willing to change our own business process to suit the software. We’ve been printing books since 1927, and our business model is tightly coupled to our publishing process. As part of this project, we’re going to decouple those two areas of activity. Publishing is a differentiator for us, and always will be, but membership is not – whilst it’s important that it works, it’s not hugely important how it works. If CRM can offer us a “pit of success”, we’ll happily do what’s necessary to fall in it.

Second – we really understand what it costs to write our own software. We’ve got a solid, mature agile process, which links in to our time-tracking system. One of the high-level metrics that we track is “cost per point” – how much, in pounds, does it cost us to get a single story point into production? It’s easy for businesses to think of off-the-shelf software as “expensive”, because it has a price tag, and bespoke software as ‘free’ because you’re already paying developers’ salaries, but that’s a pretty naive way to look at it. When you factor in the opportunity cost of all the things we could have been doing instead of reinventing the CRM wheel, the off-the-shelf option starts to look a lot more attractive even when it carries a hefty up-front price tag.

That’s two good reasons why we’re going with CRM2015. Now for two good reasons why CRM projects fail, and what we’re doing to mitigate them.

First – scope creep. CRM vendors will happily sell CRM as the solution to all your problems, and then they’ll start showing off marketing campaigns and case management and Outlook integration and web portals, and everybody’s eyes light up like this is the most amazing thing they’ve ever seen… and the next thing you know, you’ve got a 300-page “requirements” document and everyone’s got so carried away by what’s possible that they’ve forgotten what they were trying to fix in the first place. I’ve seen this happen first-hand, and it doesn’t work – and the reason it doesn’t work is that the project isn’t being driven by prioritised requirements, it’s being driven by wish-lists.

So… start with a problem. Any successful business probably has dozens of things that could be done better – so list them, analyze them, identify dependencies. Work out which one to solve first, and focus. CRM isn’t fundamentally different to any other software project. Identify your milestones. Be absolutely brutal about the MVP – what is the simplest possible thing that’s better than what we’ve got right now? Build that. Ship that. Get people using it. In our case, CRM’s first outing is going to be as a replacement for GroupMail, our email-merge tool, and that’s it. We’ll integrate just enough data that CRM can send personalised email to a specific group of customers, we’ll ship it, and we’ll use it – and then we’ll iterate based on feedback and lessons learned. We already have a pretty good idea what we’ll do after that, but we’re not going to worry about it until we’ve delivered that first MVP release.

I think the second reason CRM projects fail is over-extension. Dynamics CRM is a really powerful, flexible platform, and with enough consultants’ effort you can probably get it to do just about anything. But that doesn’t mean it’s the best solution. Sure, there are going to be cases where it makes sense to customise CRM by adding a new field or some validation rules. Spotlight holds the same core “business” data as any other company – What’s your name? Where do you live? What’s your email address? Is your account up-to-date? Off-the-shelf CRM is very, very good at managing this sort of information – and once you’ve got this core information in CRM, there are dozens of off-the-shelf marketing tools available to help you use it more effectively.

But Spotlight stores a lot more than that. We also store all the information that appears on your professional acting CV – height, weight, eye colour, hairstyle, skills, credits. We store details of almost all the productions being cast in the UK, we track tens of thousands of CV submissions every day, and millions of job notification emails each week. We manage terabytes of photography and video clips. You probably could get CRM to manage all this information. But hey, you can open a beer bottle with a dollar bill – doesn’t mean it’s a good idea, though.

[Image: Venn diagram of CRM’s capabilities versus Spotlight’s systems and data]

The overlap – the green bit – is where CRM solves one of our problems. The blue bit is things CRM does that aren’t really relevant to us – not at the moment, anyway. The red bit is the stuff that we’re going to keep out of CRM. And that dark area on the border… that’s representation, which is a tremendously complicated tangle of operational data, business data and publishing data that we’re going to have to work out as part of our service roadmap. Fun.

Now, different people – and systems! – have different expectations about what “CRM” and “membership” mean.

  • To our customers, good CRM means it’s easy to join Spotlight, it’s easy to manage your account, it’s easy to talk to us and get answers to your questions. You know how you hate phoning the electric company because every time you get through you talk to a different person who has no idea what’s going on, and your “my account” page on their website says one thing and your bill says something else and the person on the phone doesn’t agree with either of them? Yeah. Imagine the complete opposite of that.
  • To our marketing team, good CRM is about accurate data, effective marketing campaigns, happy customers – and, yes, revenue. It’s about helping us work out what we’re doing right and what we’re doing wrong, giving us the intelligence we need to make decisions about new products and initiatives.
  • To our software team, good CRM means easy access to the data and processes you’ll need to build great products. Responsive systems, logical data structures, simple integration patterns and intuitive API endpoints. In other words, if you want to build an awesome online tool that’s only available to customers with a current, paid-up Spotlight Actors membership, it should be trivial to work out whether the current user can see it or not.

Same system, same data, three radically different use cases. So here’s how we’re proposing to make it work:

[Image: proposed architecture, with Dynamics CRM behind an abstraction layer]

Astute readers will notice a slim black box marked “abstraction layer”. That’s how we’re going to fool the rest of our stack into thinking that Dynamics CRM 2015 is a ReSTful microservice, so tune in over the next couple of weeks to find out how it works, what sort of patterns and techniques we’re using, and how we’re going to test and monitor it.

(Astute readers may also notice a resemblance between Spotlight’s customer base and the cast of Game of Thrones… well, that’s because Spotlight’s customers are the cast of Game of Thrones. I told you working here was awesome.)

Thursday, 22 May 2014

NSBCon 2014 – All About NServiceBus

I'm excited to say I'll be speaking at NSBCon here in London in June. NServiceBus is the most popular open-source service bus for .NET platforms, and I'll be talking about how we use NServiceBus at Spotlight to deliver online photo and multimedia publishing systems aimed at professional actors, casting directors and production professionals.

I'll be covering two distinct systems. The first is our proprietary system for managing online photo publication. Spotlight has been publishing photographs in one form or another since 1927, and our current system is the latest in a long, long series of incremental developments - from acid-etched brass photograph plates, to lithographic bromide machines, to industrial film scanners and ImageMagick. Photography is an absolutely fundamental part of the creative casting process, and we reached a point a few years ago where our web servers were spending 60%-70% of their CPU time rendering photographs. Working within the limitations of our legacy systems, we developed a distributed thumbnailing and caching system that delivers the same results with a fraction of the processing overhead. I'll discuss how we used NServiceBus to work around those limitations, the architectural and operational challenges we faced building and running this system, and some of the lessons we learned when, a year after deploying it into production, we had to migrate it from onsite hosting to a private cloud.

The second is an online audio/video upload and publishing system. Working with online video offers a unique set of challenges - dozens of incompatible file formats, massive file uploads, unpredictable analysis and encoding jobs. We built and deployed a system that satisfied these requirements whilst offering an intuitive, responsive user experience, and in this talk I'll cover the high-level architectural approach that enabled this. We'll look at how we used NServiceBus to decouple the 'heavy lifting' components of the application from the customer-facing user interface components, and some of the lessons we learned deploying a distributed NServiceBus application in an Amazon EC2 hosting environment.

NSBCon is at SkillsMatter here in London, on the 26th and 27th of June 2014. I'll be speaking alongside a great panel of .NET and distributed systems experts, including Udi Dahan, Ayende, Greg Young, Andreas Ohlund and many others. Follow @NSBCon on Twitter for more updates, and sign up at SkillsMatter if you're interested.

Friday, 31 January 2014

How to read FormsAuthentication.Timeout in .NET 3.5

Forms authentication in .NET has a timeout property, controlled via the <authentication> section of web.config, like so:

<system.web>
    <authentication>
        <forms name="foo" loginUrl="~/Account/Login" timeout="30" />
    </authentication>
</system.web>

This setting has existed since .NET 1.0, I believe, but only in .NET 4 did we get a corresponding property on the FormsAuthentication object. To access the timeout in .NET 4+, you just read:

FormsAuthentication.Timeout

To do the same in .NET 3.5, you need to do this:

// Needs System, System.IO, System.Web and System.Xml.
public static TimeSpan GetFormsAuthenticationTimeout() {
    var defaultTimeout = TimeSpan.FromMinutes(30);
    var xml = new XmlDocument();
    var webConfigFilePath = Path.Combine(HttpRuntime.AppDomainAppPath, "web.config");
    xml.Load(webConfigFilePath);
    // Find the <forms> element and read its timeout attribute, if there is one.
    var node = xml.SelectSingleNode("/configuration/system.web/authentication/forms");
    if (node == null || node.Attributes == null) return (defaultTimeout);
    var attribute = node.Attributes["timeout"];
    if (attribute == null) return (defaultTimeout);
    int minutes;
    if (Int32.TryParse(attribute.Value, out minutes)) return (TimeSpan.FromMinutes(minutes));
    return (defaultTimeout);
}
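Then, wherever you'd have read FormsAuthentication.Timeout in .NET 4, you call the helper instead:

var timeout = GetFormsAuthenticationTimeout();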

I'll file this under "Things that are wrong with Forms Authentication, #217".

Friday, 17 January 2014

Friday Puzzle Time! (aka EU Roaming Data Charges)

I’m going to France. I want to be able to use my phone for data when I’m out there, so I’m investigating the cheapest way to do this. My phone is on Orange, who are now owned by EE, so I wander over to http://explore.ee.co.uk/roaming/orange/france and have a look.

(I should mention here that I’m generally really happy with Orange. The reception’s good. Their network doesn’t randomly drop calls or send me text messages 12 hours late, and Orange Wednesdays are awesome.)

Anyway. Here’s their data options:

[Image: EE’s EU data roaming bundle options]

Before we even get into the chaos of daily limits and terms and conditions, notice how the EU 100MB daily bundle gives you 100MB for £3, whilst the EU Data 500MB bundle gives you 500MB for a hundred and fifty pounds? Same data, but the bundle claims a “93% saving on standard rates” – which is such a huge discount as to be really quite unsettling. If someone tried to sell you a MacBook for £50, you’d be a bit suspicious, right?

So I tried to work out what the catch might be – using EE’s website, Orange’s EU Mobile Internet Daily Bundles T&Cs, and a phone call to Orange customer services. Here’s what I found.

For each question, here’s what EE’s website said, what the T&Cs said, and what customer services told me (referring to an internal T&C document dated Dec 15th 2013).

What happens if I go over 100MB in a single day?
  • Website: “This bundle reoccurs up to 10 times until midnight of that same day (local time), each time the bundle reoccurs you will be charged an additional bundle fee and recieve additional bundle allowance. If you exceed the reoccuring bundle limit you’ll be charged 45.9p/MB.” (Does this mean I get ten bundles per day? Or one bundle per day, for up to ten days?)
  • T&Cs: “Any data used above the amount in your bundle, will be charged at our standard data roaming rate of 45.9p per MB. So, for example, if you have the £1 per day bundle and use more than your 20MB you will be charged an additional £1 for every additional 20MB you use up to 200MB (as the bundle reoccurs ten times). Any data usage in excess of 200MB will then be charged at our standard data roaming rate of 45.9p per MB.” (That sounds to me like you get ten bundles per day – otherwise it would just say “if you use more than 20MB you’ll pay standard rates for the rest of the day” – no?)
  • Customer services: Bundles only work once per day – so when you reach 100MB, you’ll pay normal roaming charges until midnight, and you’ll get another 100MB bundle the next day.

How many times can I use the bundle in a single day?
  • Website: See above.
  • T&Cs: “A daily bundle will last until midnight (local time of the country you purchased the bundle in) on the day you made the purchase, or until you have used up your data allowance, whichever comes first in time. Once you’ve purchased a bundle, you will be opted in for the bundle to reoccur up to ten times” – which doesn’t say whether that’s ten times per day, or ten times per trip, or per billing period, or what.
  • Customer services: Just once – see above. You can’t use two daily bundles in the same day.

How many days does it last for?
  • Website: Really unclear whether it’s ten bundles per day – for as many days as you like – or one bundle per day, for up to ten days. And no indication of whether that ten-day limit is per trip, per billing period, or what.
  • T&Cs: See above.
  • Customer services: No limit – if I went to France for fifty days, I’d get fifty bundles. One per day, for the duration of my trip.

How much does it cost once I exceed my bundle limit?
  • Website: “If you exceed the reoccuring bundle limit you’ll be charged 45.9p/MB.”
  • T&Cs: “Any data used above the amount in your bundle, will be charged at our standard data roaming rate of 45.9p per MB.”
  • Customer services: £3.70 per MB – though I was advised that information might be out of date.

What does the “ten bundle” limit actually mean?
  • Website: Unclear.
  • T&Cs: Unclear.
  • Customer services: No idea. Neither of the advisors I spoke to could tell me what the “up to 10 times” limit actually meant.

So, let’s spin those various answers into some possible phone bills, bearing in mind nobody can actually give me a definitive answer. Imagine we’re going to France from Feb 1st to Feb 16th, and we’re going to use 25MB/day most days on e-mail and Facebook and the like, and 250MB on Fridays, ‘cos we’re watching snowboarding videos on YouTube.

  • With 1 EU100 bundle per day, unlimited days – that’d cost us £185.70
  • At 1 EU100 bundle per day, up to 10 days – would cost us £270.98
  • The EU500 plan? Given our 500MB quota runs out halfway through our trip, we’d pay £322.13
  • If the chap on the phone was right about £3.70/MB, we’d be looking at £1,110 in excess data charges for our two nights of YouTube, and a total bill of £1,158.

And on top of all that, whoever wrote their (legally binding?) terms & conditions cannot spell ‘receive’ or ‘recurring’.

No wonder I have a headache.

UPDATE: So when I texted EU100 to 2088 to activate the bundle, it failed… I rang them (again), and apparently there’s already a bundle on there. It’s been on there since February 2013. Will it run out? “No, sir”. Is there any kind of limit on how many days I get? “No, sir.”

So that’s an hour of research and phone calls to work out that nobody knows for sure what the bundle is, but it’s probably safe to assume it *doesn’t* run out because I’ve already got one and it’s been active for nearly a year.

O_o