https://gds.blog.gov.uk/2014/02/10/striking-a-balance-between-security-and-usability/

Striking a balance between security and usability

Mike Bracken at Code for America

Last year our boss Mike Bracken talked at the Code for America Summit in San Francisco, and spoke about the need to strike a balance between usability and security. There is a point beyond which over-zealous security gets in the way, and puts people off using the technology that's being protected.

We wanted to explore this in a bit more detail, so we asked our director of technical architecture, James Stewart, to talk us through the issues. We wanted to know: how much security is too much? How do you find the right balance?

In this short interview, James talks about taking a "measured approach" to security that looks at likely risks in context. New services must be secure and fit for purpose. It's no use making something secure if that results in a service that's unusable. In practical terms, that means making sure that everyone on a project is thinking about security and possible "misuse cases" - it's not a job that should be hived off to a separate team.

To listen to the full interview (about 10 minutes), click the play button in the embedded SoundCloud widget below. Alternatively, you can download the audio directly from the Internet Archive.

Follow James on Twitter: @jystewart

Follow Giles on Twitter: @gilest

Transcript

Interviewer: Hello. Can you tell me who you are and what you do?

James Stewart: I’m James Stewart. By GDS standards I’m a long-time member of the GDS team, and I’m currently director of technical architecture.

Interviewer: What is a director of technical architecture?

James: It’s a new role that we’ve only introduced quite recently. It’s about establishing a new way of doing technical architecture in an agile, digital delivery kind of world, making sure that the way that we build systems is flexible; that we’re approaching things in a coherent way across government, and also taking responsibility for the products that we build at GDS and the platforms that we build; that they’re fit for purpose; that they can evolve as our user needs evolve. All that kind of thing.

Interviewer: For the benefit of the uninitiated, what is technical architecture? What does that mean?

James: Well, that really depends who you talk to, and the organisation that you’re working within. But generally, it is about taking a step back from the way that bits of code are being written, to think about the system as a whole, thinking about products as a whole, or platforms, and making sure that they’re fitting together in a coherent way; that they’re exhibiting whatever characteristics your organisation needs, and that you’re looking a little bit ahead. In agile, you’re always trying to focus on which value, which bit of value are we trying to deliver immediately? What are we trying to get to next? But it’s always important to be looking a little way ahead. Is this going to be flexible to the directions that we want to take it in? Is this coming together to meet the general requirements to meet user needs? Rather than just, does this piece of code work in a micro-perspective?

Interviewer: Fantastic. Let’s talk a little bit about security. When Mike Bracken went to the Code for America conference, one of the things he said was this:

The third one is security. I don’t mean Snowden security; I mean this pernicious view that security must come ahead of usability at all times. When I started in government, I was given a laptop that required 22 discrete pieces of information so that I could work it. I couldn’t send an email to all my staff, and why not? Because of security. We’ve just got to get usability ahead of security.

Two things arise out of that. What is the "pernicious view" that he’s talking about, and why do we need to get usability ahead of security?

James: It’s very common for people to take a very, very risk-averse approach to anything that they’re building. Whenever you’re providing a service or building a system that people have to use, there’s a set of risks around that, and that’s the case whether you’re doing it in government or doing it for something that you might use in your home. But you have to be thoughtful and careful about balancing risks.

Rather than listing out everything that could possibly go wrong with something that you build, and then protecting against every single one of those, think about what the outcome would be if that thing went wrong, and how likely it is to happen. Then take appropriate steps to prepare for those situations.

People in general aren’t great at thinking through risks in that kind of measured way. We can get scared quite easily, or we worry about whatever happens to be on our mind, rather than taking a balanced, really-think-it-through approach. Often in computing, in service design in general, in all of those areas, people have taken this approach of: “We need to make this secure because people are going to want to attack it, and therefore we need to lock it down in every way that we possibly can.” But that stops people using it, and it means that any efforts that you’re making to meet user needs are frustrated; they’re blocked by the fact that people can’t use this thing.

If we want people to work with government online, to adopt Digital by Default, we need to make services that people want to use, and that are simple enough for them to use, and we need to start with that. We need to start with: how are we going to make these services really great? How are we going to make them attractive to people? Then think about the risks around that in that context, always driving back to: how are we making this fit for purpose and something that people will want to use?

That can often mean thinking outside the box around security: thinking about how you can not just lock things down, but perhaps seek verification through a different channel. Rather than making somebody enter lots and lots of pieces of information to sign into a system, you use different channels for checking that information.

You know that somebody’s got a mobile phone. You could send a message to that; gives you a little bit of extra verification of their identity. Rather than asking them half a dozen extra questions, can we use these different tools at our disposal to balance the security and the usability of a service? It’s really vital that we do that, if we actually want people to use these services. People just aren’t going to use them unless they can.
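The two-channel idea James describes – sending a short code to a phone the user is known to have, instead of asking half a dozen extra questions – can be sketched roughly as below. This is a minimal illustration only: the function names and the in-memory store are invented, and a real service would deliver the code through an SMS gateway and use persistent, rate-limited storage.

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # codes expire after five minutes


def issue_code(store: dict, user_id: str) -> str:
    """Generate a short one-time code and record when it was issued.

    In a real service the code would be sent to the user's registered
    mobile number via an SMS gateway rather than returned directly.
    """
    code = f"{secrets.randbelow(1_000_000):06d}"  # six-digit numeric code
    store[user_id] = (code, time.time())
    return code


def verify_code(store: dict, user_id: str, submitted: str) -> bool:
    """Check a submitted code against the stored one, enforcing expiry."""
    entry = store.pop(user_id, None)  # single use: removed on any attempt
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    return secrets.compare_digest(code, submitted)
```

The point of the sketch is the trade-off James is making: a six-digit code sent out of band gives "a little bit of extra verification" without burdening the user with more secrets to remember.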

Interviewer: How do you strike the right balance between those two opposing criteria; between the security and the usability? At what point does something become insecure because it’s too usable, or the other way round?

James: I think we have to not really set those things up quite in opposition. They’re often in tension, but they’re not necessarily opposed. You might find that you think that the best way to give people a great experience of a service is to put on one page on the web everything that you know about them. There’s a lot of risk associated with doing that. It means that somebody who’s malicious who’s just looking over their shoulder suddenly knows everything about them, and can use that to commit identity theft, so you probably don’t want to be doing that.

But actually when you look at it, putting everything that you know on one page is probably information overload for somebody as well. One of our design principles is: “Do less”, and it’s a really good idea to think about that throughout the whole thing. “What’s the real core of what we’re trying to do and what we’re trying to offer people? Let’s not decorate that with lots of extra information.”

That’s a really good starting point: “Do less. What’s the real core of what we’re trying to do?” Then start doing real user research, and exploring: “What are the particular risks that we’re trying to protect against here, and how do other people think about those? How do the users of our service think about it?”

A very common one is if you’re sending an email to somebody as part of your service and putting a link in it. That sets up a context where it might be quite easy for an attacker to forge that email and send people a link that looks like it’s to your service, but actually goes to a honeypot that will capture their credit card details and steal all their money. It’s called phishing. We have to really think about how we’re signalling to people whether this is a genuine email or not, and you can only really do that if you start sitting down with them, understanding: how do they read those emails? How else do they receive information? How do you give them confidence in the right things? A lot of it comes back to working with the users as closely as you can.

Most people who use the internet much have received a lot of phishing emails, whether they call them that or not: the things that pretend to be from HSBC, or pretend to be from HMRC offering you a tax refund, if you just give them every piece of information they could possibly want. People know there’s an issue there, and then they just want to make sure that they’re not really inconvenienced by what you do.

But generally there are all sorts of techniques, and you have to think about what could go wrong if this thing got exploited, how likely it is to be exploited, and then what you do about it. Do you actually make sure that you always include a phone number, so that people can call up to verify that this email was real? Or do you just offer them part of the information that they need, so that they can match that with what they already know about the way that they’ve used your service, and see that it’s genuine?
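One way to offer "part of the information", as James suggests, is to include only the tail end of a reference number in an email, so the recipient can match it against the full value they already hold while the email itself discloses very little. A minimal, hypothetical sketch (the function name and masking scheme are invented for illustration):

```python
def partial_reference(reference: str, visible: int = 4) -> str:
    """Mask a reference number, keeping only the last few characters.

    The recipient can match the visible tail against the full reference
    they already know; anyone intercepting the email learns almost
    nothing, and a phisher cannot easily reproduce a matching tail.
    """
    if len(reference) <= visible:
        return reference  # too short to usefully mask
    return "*" * (len(reference) - visible) + reference[-visible:]
```

So an email might say "regarding your application ending 456C" rather than quoting the whole reference, which is exactly the match-what-you-already-know check described above.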

It’s one of the many things that we do that really lends itself to the cross-disciplinary way of doing things, that we want everybody to be building into their services. There are content issues, there are design issues, there’s technology, there’s just general service design; it all comes together.

All too often, it’s been the case that people have approached security as something that either people who deal with compliance and writing documents deal with, or that the techies deal with. It’s a fundamental part of the service; it’s not this separate thing that one team thinks about, and that email thing is a really good example of why that’s the case.

Interviewer: The whole team needs to be thinking about it from day one?

James: Yes. But making sure that they do it proportionately; that they don’t get paralysed by fears about security; that they’re just sort of conscious. There are some nice techniques, like one of our colleagues elsewhere in government talked about "misuse cases". When you’re writing some user stories and thinking about your use cases for some feature that you’re building, also think about the misuse cases; think about how somebody might play with what you’ve built, or abuse it. That’s quite a nice technique.
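A misuse case can be captured the same way a user story often is: as an automated check the whole team can see. The sketch below pairs the use case "a user can sign in" with the misuse case "an attacker can't guess passwords indefinitely". The `LoginThrottle` class and its limits are invented purely for illustration, not drawn from any GDS system.

```python
class LoginThrottle:
    """Tiny sketch: lock an account after too many failed attempts."""

    def __init__(self, max_failures: int = 5):
        self.max_failures = max_failures
        self.failures: dict[str, int] = {}

    def attempt(self, user: str, credentials_ok: bool) -> bool:
        """Return True if the sign-in attempt is allowed to proceed."""
        if self.failures.get(user, 0) >= self.max_failures:
            return False  # misuse case: repeated guessing is blocked
        if credentials_ok:
            self.failures[user] = 0  # use case: a genuine user gets in
            return True
        self.failures[user] = self.failures.get(user, 0) + 1
        return True
```

Writing the misuse case as a test alongside the feature keeps security proportionate and visible to the whole team, rather than hived off to a compliance document.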

5 comments

  1. simonfj

    Thanks Giles, (you've got the best job)

    "technical architecture? What does that mean?"
    "Well, that really depends who you talk to, and the organisation that you’re working within".

    Good reply. Sounds like Stewart is a pollie in the making.
    It is an impossible one at the moment though; to define this role.

    There's so much to try and span across. The two main variations in perspective is that the web (the application layer) is an interface to be designed, intuitively, while all the other internet layers - the engineer's perspective - has little interest in aesthetics. "We delivered you a service that can't be compromised. It's not our fault users are too stupid to use it'. (and this from guys who run R&E networks).

    Look rather than just go on, let me disagree with this one point. " I think we have to not really set those things up quite in opposition. They’re often in tension, but they’re not necessarily opposed." No, they (design philosophies) are opposed. The aim of governance is to find utility in opposites.

    So I'll point you at this blog entry https://jonrouse.blog.gov.uk/2013/12/20/the-better-care-fund-a-spur-to-integration-and-innovation/ which I was able to use as an illustration of the opposing design philosophies. (I think you might have to chase the blog owner up though Giles. My comment is still awaiting moderation.)

    The other stuff, so far as technical architecture is concerned (as James hasn't said he has a networking background), is really just a matter of agreeing upon an (inter- net)working specification, and standard operating procedures & policies. James might be interested in this proof of concept. http://www.geant.net/service/eduGAIN/Pages/home.aspx (if he can't sleep 🙂

    • Giles Turnbull

      Thanks for the comments and links. I shall see what I can do about chasing up the comment you've posted on Jon Rouse's blog. (And yes, you're right about the job.)

  2. Andy W

    Your own Government Gateway is a prime example of too much security - needing to register through the post, 16 digit alphanumeric username, password reset is a 5 page form...

    Especially as it's only used for completing mundane tasks like renewing driving licences every few years - occasional users have no hope!

  3. simonfj

    Just a note, cause the "useability vs. security" comparison still doesn't quite work for me.

    I think progress here is more a matter of accessibility vs. security. Certainly a service has to be fit for purpose. But so much of what is considered 'intuitive' will depend on one understanding the (designers) lingo and (online) culture. And getting oriented in the online world (so one can work around a problem if necessary) simply takes time and practice in order to become a native.

    Much of the problem (of course) revolves around user education. i.e. "... making sure that everyone on a project is thinking about security and possible “misuse cases”, is impossible unless users have the means to know what to look for, so they can retain a level of security.

    Lastly, as we begin to approach the big IDA implementation(s), we still have to address citizen's user expectations. I.e. If a user wants personalized services then they must give away some privacy (to either a private company or a government). In the US, it seems they will trust a private company before their government, and the UK seems to be (tentatively) going down that path.

    I just don't believe - knowing the British culture - that the average citizen will want to go down this path once they understand what they are giving away. We shall see.
