Wednesday, 27 January 2016

Let’s Talk About Feedback

Photo of the enormous guitar amplifier from the start of "Back to the Future"

There have been some really interesting blog posts about speaking at conferences recently – from Todd Motto’s “So you want to talk at conferences?” piece, to Basia Fusinska’s “Conference talks, where did we go wrong?”, Pieter Hintjens’ “Ten Steps to Better Public Speaking”, and Seb’s recent “What’s good feedback from a talk?” post. There’s a lot of very useful general advice in those posts, but I absolutely believe the best way to improve as a speaker is to ask your audience what you could be doing better, and that’s hard.

After most talks you’ll have a couple of people come up to you at the coffee machine or in the bar and say “hey, great talk!” – and that sort of positive feedback is great for your confidence, but it doesn’t actually give you much scope to improve unless you take the initiative. Don’t turn it into an interrogation, but it’s easy to spin this into a conversation – “hey, really glad you enjoyed it. Anything you think I could have done differently? Could you read the slides?” If there’s something in your talk which you’re not sure about, this is a great chance to get an anecdotal second opinion from somebody who actually saw it.

Twitter is another great way to get anecdotal feedback. Include your Twitter handle and hashtags in your slides – at the beginning AND at the end – and invite people to tweet at you if they have any comments or questions. In my experience, Twitter feedback tends to be positive (or neutral – I’ve been mentioned in lots of “@dylanbeattie talking about yada yada yada at #conference”-type tweets) – and again, whilst it’s good for giving your confidence a boost, it can also be a great opportunity to engage with your audience and try to get some more detailed feedback.

And then there are the official feedback loops – the coloured cards, the voting systems and the feedback apps. I really like how BuildStuff does this. They gather feedback on each talk through a combination of coloured-card voting and an online feedback app. Attendees who give feedback online go into a prize draw, which is a nice incentive to do so – and it makes it an easy sell for the speakers: “Thank you – and please remember to leave me feedback; you might win an iPad!” The other great thing BuildStuff does is to send you your feedback as a JPEG, which looks good and makes it really easy for speakers to share feedback and swap notes afterwards. Here’s mine from Vilnius and Kyiv last year:

image: Build Stuff Lithuania ratings / Build Stuff Ukraine ratings

Now, I’m pretty sure there were more than five people in my talk in Kyiv – so I think something might have got lost in the transcription – but the real value here is in the anecdotal comments.

Some conferences also run a live “leaderboard”, showing which speakers are getting the highest ratings. I’m generally not a big fan of this – I think it perpetuates a culture of celebrity that runs contrary to the approachability and openness of most conference speakers – but if you are going to do it, then make sure it works. Don’t run a live leaderboard that excludes all the speakers from Room 7 because the voting machine in Room 7 was broken.

Finally, two pieces of feedback I had from my talk about ReST at NDC London this year. First, the official talk rating – which I’m quite happy with, but which doesn’t really offer any scope for improvement:

image

And then there’s this, thanks to Seb, who not only came along to my talk but sat in the front row scribbling away on his iPad Pro and then sent me his notes afterwards. There’s some real substance here, some good points to follow up on and things I know I could improve:

image: Seb’s notes from my NDC London talk

This also highlights one of the pitfalls of doing in-depth technical talks – your audience probably aren’t in a position to judge whether there are flaws in your content or not, so your feedback is more likely to reflect the quality of your slides and your presentation style than the substance of your content. In other words – just because you got 45 green tickets doesn’t mean you can’t improve. Find a subject matter expert and ask if they can watch your talk and give you notes on it. Share your slides and talks online and ask the wider community for their feedback. And don’t get lazy. Even if you’ve given a talk a dozen times before, we’re in a constantly-changing industry and every audience is different.

Sunday, 24 January 2016

“The Face of Things to Come” from PubConf

A version of my talk from PubConf London, “The Face of Things to Come”, is now online. This isn’t a recording of the actual talk – the audio has been recorded specially, one slide has been replaced for copyright reasons, and a couple of things have been fixed in the edit – but it’s close enough.

The Face of Things to Come from Dylan Beattie on Vimeo.

As always, the only way to improve as a speaker is to listen to your audience, so I would love to hear your comments or feedback – leave a comment, email me or ping me on Twitter.

Tuesday, 19 January 2016

Would you like to speak at London .NET User Group in 2016?

The London .NET User Group, aka LDNUG – founded and run by Ian Cooper, with help from Liam Westley, Toby Henderson and me – is now accepting speaker submissions for 2016.

We aim to run at least one meetup a month during 2016, with at least two speakers at each meetup. Meetups are always on weekday evenings in central London and are free to attend. We’re particularly keen to welcome some new faces and new ideas to the London .NET community, so if you’ve ever been at a talk or a conference and thought “hey – maybe I could do that!” – this is your chance.

We’re going to try and introduce some variation on the format this year, so we’re inviting submissions for 45-minute talks, 15-minute short talks and 5-minute lightning talks, on any topic that’s associated with .NET, software development and the developer community. Come along and tell us about your cool new open source library, or that really big project your team’s just shipped. Tell us something we didn’t know about asynchronous programming, or distributed systems architecture. We welcome submissions from subject matter experts but we’re also keen to hear your first-hand stories and experience. Never mind what the documentation said – what worked for you and your team? Why did it work? What would you do differently?

If you’re a new speaker and you’d like some help and support, let us know. We’d be happy to discuss your ideas for potential talks, help you write your summary, rehearse your talk, improve your slide deck. Drop me an email, ping me on Twitter or come and find me after the next meetup (I’m the one in the hat!) and I’ll be happy to help.

So what are you waiting for? Send us your ideas, come along to our meetups, and let’s make 2016 a great year for London.NET.


Yes! I want to speak at London.NET in 2016!

Conway’s Law and the Mythical 17:00 Split

I was at PubConf on Saturday. It was an absolutely terrific event – fast-paced, irreverent, thought-provoking and hugely enjoyable. Huge thanks to Todd Gardner for making it happen, to Liam Westley for advanced detail-wrangling, and to NDC London, Track:js, Red Gate and Zopa for their generous sponsorship. 

Seb delivered a great talk about the Mythical 17:00 Split, which he has now written up on his blog. His talk resonated with me, because I also find a lot of things about workplace culture very strange. I’m lucky enough to work somewhere where I seldom encounter these issues directly, but I know people whose managers genuinely believe that the best way to write software is to do it wearing a suit and tie, at eight o’clock in the morning, whilst sitting in a crowded office full of hedge fund traders.

But Seb’s write-up said something that really struck a chord:

“Take working in teams. The best teams are made of people that like working together, and the worst teams I’ve had was when a developer had specific issues with me, to the point of causing a lot of tension”

Now, I’m a big fan of Conway’s Law – over the course of my career, I’ve seen (and built) dozens of software systems that turned out to reflect the communication structures of the organizations that created them. I’ve even given a conference talk about Conway’s Law, at BuildStuff 2015, with Mel Conway in the audience – which was great fun, if a little nerve-wracking.

In a nutshell, applied to Seb’s observation about teams, Conway’s Law says that if you take a bunch of people who are fundamentally incompatible and force them to work together, you’ll end up with a system made of incompatible components that are being forced to work together. If you want to know whether – and how – your systems are going to fail in production, look at the team dynamic of the people who are building them. If watching those people leaves you feeling calm, reassured and relaxed, then you’re gonna end up with something good. If one person is absolutely dominating the conversations, one component is going to end up dominating the architecture. If there are two people on the team who won’t speak to each other and end up mediating all their communication through somebody else – guess what? You’re going to end up with a broker in your system that has to manage communication between two components that won’t communicate directly.

If your team hate each other, your product will suck – and in the entire history of humankind, there are only two documented exceptions to this rule. One is Guns N’ Roses’ “Appetite for Destruction” and the other is “Rumours” by Fleetwood Mac.

\m/

Thursday, 14 January 2016

The Rest of ReST at NDC London

A huge thank you to everyone who came along to my ReST talk here at NDC London. Links to a couple of resources you might find useful:

Thanks again for coming – and any comments, questions or feedback, you’ll find me on Twitter as @dylanbeattie.

Thursday, 7 January 2016

Confession Time. I Implemented the EU Cookie Banner

Troy Hunt kicked off 2016 with a great post about poor user experiences online – a catalogue of common UX antipatterns that “make online life much more painful than it needs to be”.

One of the things he picks up on is EU Cookie Warnings – “this is just plain stupid.” And yeah, it is. Absolutely everybody I know who added an EU cookie warning to their website agrees – this is just plain stupid. But for folks outside the European Union, it might be insightful to learn just why these things started appearing all over the place.

First, a VERY brief primer on how the European Union works. There are currently 28 countries in the EU. The United Kingdom, where I live and work, is one of them. One of the aims of the EU is to create a consistent legal framework that covers all citizens of all its member states. Overseeing all this is the European Parliament. They make laws. It’s then up to the governments of the individual member states to interpret and enforce those laws within their own countries.

So, in 2009, the European Parliament issued a directive called 2009/136/EC – OpenRightsGroup has some good coverage of this. The kicker here is Article 5(3), which says:

“The storing of information or the gaining of access to information already stored in the user’s equipment is only allowed on the condition that the subscriber or user concerned has given their consent, having been provided with clear and comprehensive information in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service.”

In a nutshell, this means you can’t store anything (such as a cookie) on a user’s device, unless

  1. You’ve told them what you’re doing and they’ve given their explicit consent, OR
  2. It’s absolutely necessary to provide the service they’ve asked for.
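
In code terms, that boils down to a gate in front of anything non-essential. Here’s a hypothetical server-side sketch of that kind of check – the cookie name and helper class are made up for illustration, not the code we actually shipped:

using System.Web;

public static class CookieConsent {
  // Hypothetical "cookie-consent" cookie. Essential cookies (session, auth) don't need
  // this check, because they're strictly necessary to provide the service the user asked for.
  public static bool HasOptedIn(HttpRequestBase request) {
    var consent = request.Cookies["cookie-consent"];
    return (consent != null && consent.Value == "accepted");
  }
}

// Then, wherever a page emits scripts or headers that set non-essential cookies:
// if (CookieConsent.HasOptedIn(Request)) { /* emit the analytics and tracking scripts */ }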

Directive 2009/136 goes on to state (my emphasis):

“Under the added Article 15a, Member States are obliged to lay down rules on penalties, including criminal sanctions where applicable to infringements of the national provisions, which have been adopted to implement this Directive. The Member States shall also take “all measures necessary” to ensure that these are implemented. The new article further states that “the penalties provided for must be effective, proportionate and dissuasive and may be applied to cover the period of any breach, even where the breach has subsequently been rectified”.”

Golly! Criminal sanctions? Retrospectively applied, even for something that we already fixed? That sounds pretty ominous.

Anyway. Here’s what happens next. Directive 2009/136 means that it is now THE LAW that you don’t store cookies without consent, and the various member states swing into action and try to work out what this means and how to enforce it. In the UK, Parliament interpreted this via something called the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011, which would come into effect in 2012.

My team and I found out in late 2011 that, when the new regulations came into force on 26 May 2012, we would be breaking the law if we put cookies on our users’ machines without their explicit consent. And nobody had the faintest idea what that actually meant, because nobody had ever broken this law yet, so nobody knew what the penalties for non-compliance would be. The arm of the UK government that deals with this kind of thing is the Information Commissioner’s Office (ICO), who have a reputation for taking data protection very seriously, and the power to exact fines of up to £500,000 for non-compliance. The ICO also usually publish quite clear and reasonable guidelines on how to comply with various elements of the law – but that takes time, so in late 2011 we found ourselves with a tangle of bureaucracy, a hard deadline, the possibility of severe penalties, and absolutely no guidance to work from.

So… we implemented it. Despite it being a pointless, stupid, ridiculous endeavour that would waste our time and piss off our users, we did it - because we didn’t want to end up in court and nobody could assure us that we wouldn’t.

We built a nice self-contained JavaScript library to handle displaying the banner across our various sites and pages.

image

Instead of just plastering something on every page saying “We use cookies. Deal with it” – the approach taken by most sites – we actually split our cookies into the essential ones required to make our site work and the non-essential ones used by Boomerang, Google Analytics and other stats and analytics tools, and we allowed users to opt out of the non-essential ones. We went live with this on 10th May 2012. Around 30% of our users chose to opt out of non-essential cookies – meaning they became invisible to Google Analytics and our other tracking software. Here’s our web traffic graph for April – June 2012 – see how the peaks after May 10th are suddenly a lot lower?

image

On 25th May 2012, ONE DAY before the new regulations became law, the ICO issued some new guidance, which significantly relaxed the requirements around ‘consent’. “Implied consent” was suddenly OK – i.e. if your users hadn’t disabled cookies in their browser, you could interpret that as meaning they had consented to receive cookies from your site.

They also announced that any enforcement would be in response to user complaints about a specific site:

“The end of the safe period "doesn't mean the ICO is going to launch a torrent of enforcement action" said the deputy commissioner and it would take serious breaches of data protection that caused "significant distress" to attract the maximum £0.5m non-compliance fine.” (via The Register)

So there you have it. Go to http://www.spotlight.com/ and, just once, you’ll see a nice friendly banner asking if you mind us tracking your session using cookies. And if you opt out, that’s absolutely fine – our site still works and you won’t show up in any of our analytics. Couple of weeks of effort, a nice, clean, technically sound implementation… did it make the slightest bit of difference? Nah. Except now we multiply all our Analytics numbers by 1.5. And yes, we periodically review the latest guidance to see whether the EU has finally admitted the whole thing was a bit silly and maybe isn’t actually helping, but so far nada – and in the absence of any hard evidence to the contrary, it’s hard to make a business case for doing work that would make us technically non-compliant, even if the odds of any enforcement action are minimal.

Now, if the European Parliament really wanted to make the internet a better place, how about they read Troy’s post and ban popover adverts, unnecessary pagination, linkbait headlines and restrictions on passwords? Now that’s the kind of legislation I could really get behind.

Restival Part 6: Who Am I, Revisited

Note: code for this instalment is in https://github.com/dylanbeattie/Restival/tree/v0.0.6

In the last instalment, we looked at adding HTTP Basic authentication to a simple HTTP endpoint - GET /whoami - that returns information about the authenticated user.

Well... I didn't like it. Both the OpenRasta and the WebAPI implementations felt really over-engineered, so I kept digging and discovered a few things that made everything much, much cleaner.

Basic auth in OpenRasta - Take 2

There's an HTTP Basic authentication feature baked into OpenRasta 2.5.x, but all of the classes are marked as deprecated, so in my first implementation I avoided using it. After talking with Seb, the creator of OpenRasta, I understand a lot more about the rationale behind deprecating these classes - they're earmarked for migration into a standalone module, not for outright deletion, and they'll definitely remain part of the OpenRasta 2.x codebase for the foreseeable future.

Armed with that knowledge, and the magical compiler directive #pragma warning disable 618 that stops Visual Studio complaining about you using deprecated code, I switched Restival back to running on the OpenRasta NuGet package instead of my forked build, and reimplemented the authentication feature - and yes, it's much, much nicer.
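
If you haven't come across it before, warning 618 is the one the compiler raises when you use obsolete types or members, and the pragma just switches it off for the code in between - something like this, rather than the exact lines from the repo:

#pragma warning disable 618
// ...configuration and wiring that references the deprecated OpenRasta authentication classes...
#pragma warning restore 618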

There's a RestivalAuthenticator which implements OpenRasta's IBasicAuthenticator interface - as with the other frameworks, this ends up being a really simple wrapper around the IDataStore:

public class RestivalAuthenticator : IBasicAuthenticator {
  private readonly IDataStore db;

  public RestivalAuthenticator(IDataStore db) {
    this.db = db;
  }

  public AuthenticationResult Authenticate(BasicAuthRequestHeader header) {
    var user = db.FindUserByUsername(header.Username);
    if (user != null && user.Password == header.Password) return (new AuthenticationResult.Success(user.Username, new string[] { }));
    return (new AuthenticationResult.Failed());
  }

  public string Realm { get { return ("Restival.OpenRasta"); } }
}

and then there's the configuration code to initialise the authentication provider.

ResourceSpace.Uses.PipelineContributor<AuthenticationContributor>();
ResourceSpace.Uses.PipelineContributor<AuthenticationChallengerContributor>();
ResourceSpace.Uses.CustomDependency<IAuthenticationScheme, BasicAuthenticationScheme>(DependencyLifetime.Singleton);
ResourceSpace.Uses.CustomDependency<IBasicAuthenticator, RestivalAuthenticator>(DependencyLifetime.Transient);

This one stumped me for a while, until I realised that - unlike, say, Nancy, which just does everything by magic - with OpenRasta you need to explicitly register both the AuthenticationContributor and the AuthenticationChallengerContributor. These are the OpenRasta components that handle the HTTP header parsing, decoding and the WWW-Authenticate challenge response, and if you don't explicitly wire them into your pipeline, your custom auth classes will never get called.

Basic auth in WebAPI - Take 2

As part of the last instalment, I wired up the LightInject IoC container to my WebAPI implementation. I love LightInject, but something I had never previously realised is that LightInject can do property injection on your custom attributes. This is a game-changer, because previously I'd been following a pattern of using purely decorative attributes - i.e. with no behaviour - and then implementing a separate ActionFilter that would check for the presence of the corresponding attribute before running some custom code - and all this just so I could inject dependencies into my filter instances.

Well, with LightInject up and running, you don't need to do any of that - you can just put a public property of type IService onto your MyCustomAttribute class, and LightInject will resolve IService at runtime and inject an instance of MyAwesomeService : IService into your attribute code. Which means the half-a-dozen classes' worth of custom filters and responses from the last implementation can be ripped out in favour of a single RequireHttpBasicAuthorizationAttribute - which overrides WebAPI's built-in AuthorizeAttribute class to handle the Authorization header parsing and the WWW-Authenticate challenge response, and hooks the authentication up to our IDataStore interface.
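
To give you an idea of the shape of that, here's a rough sketch - not the exact code from the repo (the property name and the realm are mine, and it assumes LightInject's property injection is enabled for Web API filter attributes) - but it shows the three pieces: an injected IDataStore property, an IsAuthorized override that parses the Basic credentials, and a HandleUnauthorizedRequest override that sends the challenge:

using System;
using System.Text;
using System.Web.Http;
using System.Web.Http.Controllers;

public class RequireHttpBasicAuthorizationAttribute : AuthorizeAttribute {
  // Populated by LightInject's property injection at runtime.
  public IDataStore DataStore { get; set; }

  protected override bool IsAuthorized(HttpActionContext actionContext) {
    var auth = actionContext.Request.Headers.Authorization;
    if (auth == null || auth.Scheme != "Basic" || String.IsNullOrEmpty(auth.Parameter)) return false;
    // Basic auth sends "username:password" as a single Base64-encoded string.
    var credentials = Encoding.UTF8.GetString(Convert.FromBase64String(auth.Parameter)).Split(new[] { ':' }, 2);
    if (credentials.Length != 2) return false;
    var user = DataStore.FindUserByUsername(credentials[0]);
    return (user != null && user.Password == credentials[1]);
  }

  protected override void HandleUnauthorizedRequest(HttpActionContext actionContext) {
    // Let the base class build the 401 response, then add the WWW-Authenticate challenge.
    base.HandleUnauthorizedRequest(actionContext);
    actionContext.Response.Headers.Add("WWW-Authenticate", "Basic realm=\"Restival\"");
  }
}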

I'm much happier now with all four implementations - which raises the interesting question of how much development time is really worth as a measure. Based on the code provided here, I suspect a good developer could implement HTTP Basic auth on any of these frameworks in about fifteen minutes - but something that takes fifteen minutes to implement doesn't really count if it takes you two days to work out how to do that fifteen-minute implementation.

In forthcoming instalments, we're going to be adding hypermedia, HAL+JSON and resource expansion - as we move further away from basic HTTP capabilities and into more advanced REST/serialization/content negotiation, it'll be interesting to see how our four frameworks measure up.