
https://gds.blog.gov.uk/2014/10/08/when-is-it-ok-not-to-open-all-source-code/

When is it ok not to open all source code?

Categories: GOV.UK, GOV.UK One Login, Technology

This post is now outdated. For a more up-to-date post, see 'Don't be afraid to code in the open: here's how to do it securely'.

To meet the Digital By Default Service Standard, digital services have to ‘make all new source code open and reusable, and publish it under appropriate licences (or give a convincing explanation as to why this can’t be done for specific subsets of the source code).’ This reflects our design principle of making things open because it makes them better.

This post is about the types of circumstances where we think it’s appropriate not to publish all source code, and how we in GDS approach decisions about what not to publish.

There are three types of code that we think it's appropriate not to publish.

1. Information about how our software is configured

In industry there is an accepted separation between configuration and code. We don’t publish information about how our software is configured. For example, this includes information about:

  • what software we’re using to enforce certain security measures (‘security enforcing function’) eg antivirus software, intrusion detection software
  • specific versions of the software that we use
  • detailed configuration of firewalls and other security-related software

This is because, for these categories of things, making things open does not make them more secure, and the public interest is better served by not publishing them. We also think the impact of not publishing them is low: the detail is specific to our own systems, so it is unlikely to be useful to anyone else.

For example, there will inevitably be times when there is a gap between us finding out about a vulnerability in some software that we’re using, and us being able to fix it. Not publishing specific information about which version we’re using makes it harder for an attacker to use publicly known vulnerabilities to attack while we are still fixing them.
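As a purely illustrative sketch (the function and variable names below are invented for this example, not taken from any GDS system), the separation might look like open code that reads its security-sensitive settings from privately deployed configuration, so publishing the code reveals nothing about products, versions or firewall rules:

    # Hypothetical sketch: security-sensitive values live outside the
    # open repository, in a privately deployed configuration store.
    import os

    def load_security_config() -> dict:
        """Read security-sensitive settings from the environment.

        Publishing this code reveals nothing about which antivirus,
        intrusion detection or firewall products are in use, or which
        versions are deployed; the private deployment tooling supplies
        the real values.
        """
        return {
            "ids_endpoint": os.environ["IDS_ENDPOINT"],
            "firewall_ruleset_path": os.environ["FIREWALL_RULESET_PATH"],
        }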

2. Implementation of code that performs a security enforcing function

Where our code performs a security enforcing function, we will make the design public but not the implementation.

We build on top of open standards: for example, we use SAML for GOV.UK Verify, and the profile we use is published. We don’t publish the implementation of that design, because doing so would allow people to create a duplicate and practise hacking it without our being able to detect that activity.
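As a minimal sketch of this split (illustrative only; these class and method names are not from GOV.UK Verify), the published design might take the form of a public interface, with the concrete implementation kept in a private repository:

    # Hypothetical 'open design, closed implementation' sketch.
    from abc import ABC, abstractmethod

    class AssertionValidator(ABC):
        """Published design: the checks every assertion must pass."""

        @abstractmethod
        def verify_signature(self, assertion: bytes) -> bool:
            """Check the assertion is signed by a trusted party."""

        @abstractmethod
        def verify_conditions(self, assertion: bytes) -> bool:
            """Check time bounds, audience and other conditions."""

    # The concrete subclass lives in a private repository, so an
    # attacker cannot stand up an exact duplicate to practise against.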

In instances where we don’t publish our code because it fulfils a security enforcing function, we make it available for peer review and subject the code to penetration testing. This involves commissioning security experts to attempt to break into the system to help us identify any areas of weakness and potential improvement.

3. Code that it’s not appropriate to publish at that time, but may be later

We may also occasionally judge that the public interest is not served by publishing our own software, for example when the policy it implements hasn’t yet been announced, so revealing the code without that context would be premature or unhelpful. In these cases we may publish the code later. We expect the teams responsible to still develop the code as if it were open, to make sure there are as few barriers as possible to opening it up when the time is right. We did this in some cases whilst building GOV.UK.

16 comments

  1. Comment by Ross

    Decided to go look at the register-to-vote source code today, but couldn't find it. Then I decided to see if I could find any of the source code for any of the services at https://www.gov.uk/transformation and I couldn't.

    My search-fu appears to have abandoned me.

    This blog post starts ...

    "To meet the Digital By Default Service Standard, digital services have to ‘make all new source code open and reusable, and publish it under appropriate licences (or give a convincing explanation as to why this can’t be done for specific subsets of the source code).’ "

    Is it my search-fu that has failed, or is the source code for these projects not available?

    • Replies to Ross

      Comment by martyninglis

      In the case of the register to vote code, we began with a closed source repository. The primary reason for this was that the project implements a policy that had not yet received Royal Assent, so the project remained private. The key lesson we learned from starting closed is that opening up again is hard. Comments, pull requests, commits and so on may all contain content that is not appropriate for a government open source repository, and all of it needs to be checked. The repository also needs to be checked for keys, passwords and so on. Some features, for example anti-bot measures, are by necessity private and would need to be re-implemented in shared code that itself stays private. Finally, there is the issue of deploying code to secure environments from open code repositories: a build pipeline set up for private code may need adaptation for open ones.

      I think this illustrates some of the issues we have: if projects don’t start open, then opening them is difficult, and there are many reasons why a department would want to start their project closed, register to vote being one. The advice now is along these lines: if it doesn’t start open, it can be very difficult to go back.
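      As a rough sketch of the 'check the repo for keys and passwords' step (illustrative only, not the process GDS actually uses; dedicated scanning tools exist for this), one could grep every object in the repository's history for obvious credential patterns:

          # Hypothetical pre-opening check: scan all git history for
          # strings that look like keys or passwords.
          import re
          import subprocess

          SECRET_PATTERNS = [
              re.compile(rb"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
              re.compile(rb"(?i)password\s*[:=]\s*\S+"),
              re.compile(rb"(?i)secret[_-]?key"),
          ]

          def scan_history() -> list:
              """Return SHAs of objects matching a credential pattern."""
              # Every object reachable from any ref: commits, trees, blobs.
              objects = subprocess.run(
                  ["git", "rev-list", "--objects", "--all"],
                  capture_output=True, check=True,
              ).stdout.splitlines()
              hits = []
              for line in objects:
                  sha = line.split()[0].decode()
                  content = subprocess.run(
                      ["git", "cat-file", "-p", sha],
                      capture_output=True,
                  ).stdout
                  if any(p.search(content) for p in SECRET_PATTERNS):
                      hits.append(sha)
              return hits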

      The Register to Vote code for the citizen facing application is now open and can be found at: https://github.com/alphagov/ier-frontend

  2. Comment by Dia

    Is this considered official GDS guidance then? If so, why is it only in a blog post and not the Service Design Manual?

    • Replies to Dia

      Comment by James Stewart

      This post is our next step in unpacking the way we'll consider this aspect of the Service Standard, and we'll be using it to update the more formal guidance we publish. Before we make those updates we're aiming to gather a few more case studies so that we can offer a wider range of concrete examples.

  3. Comment by peter

    'there will inevitably be times when there is a gap between us finding out about a vulnerability in some software that we’re using, and us being able to fix it'.

    Do you find that you detect problems (in general) more quickly in code that you open source? It's interesting to compare the priority put on fixing vs detection.

    • Replies to peter

      Comment by James Stewart

      Hi peter,

      We don't have any clear numbers on that as yet. We've found and fixed problems in code that's open, code that's closed and in other people's code that we depend on, but none of those cases were directly affected by the openness of our code. As time goes on it will definitely be interesting to see whether any differences emerge.

  4. Comment by Bob

    So you aren't sure whether obscurity is helpful to security, but you're presuming it is while you think it through some more?

    Meanwhile, door locks are widely available for practice, as are chip'n'pin readers, and Edward Snowden chats on open-source OTR.

  5. Comment by Carrie Barclay

    Rob, Sam and Ross

    Thank you all for your comments.

    We don't rely on security through obscurity and we explicitly don't advocate that. It's just that sometimes there are things you will need to keep secret as part of a wider approach to protecting your service.

    The points in the post are about setting some parameters for the conversation - there are lots of details to work through for each specific case. The presumption (reflected in the service standard) remains that making things open makes them better.

    • Replies to Carrie Barclay

      Comment by Ross

      " It's just that sometimes there are things you will need to keep secret as part of a wider approach to protecting your service."

      Carrie, that sounds *exactly* like security through secrecy to me.

  6. Comment by Ross

    Would it be accurate for me to paraphrase point 2 as:

    We rely on secrecy for our security, and so we won't release the source code?

  7. Comment by Samuel Sabater

    If your code that performs a security function needs to be kept secret in order for it to be secure, then it's not secure. That's called security through obscurity, and it's probably one of the most amateurish things a programmer could ever think of.

    https://www.schneier.com/blog/archives/2008/06/security_throug_1.html

  8. Comment by RobIII

    I'm terribly sorry but both points 1 and 2 *SCREAM* Security Through Obscurity (http://en.wikipedia.org/wiki/Security_through_obscurity). Sure, it's normal and even required that you don't publish actual encryption keys, private keys etc. But the *code* can, and SHOULD, be open sourced. You're not fooling anyone by using the "but... security!!1" argument.

  9. Comment by Harry

    Hmm. 1 and 3 are completely reasonable, but I'm not so sure about 2.

    What's the line between code that performs a "security enforcing function" and code which doesn't?

    Although I appreciate that there's a difference between a firewall and a webserver in that respect, surely there's an extent to which all code performs security enforcing functions, if it's any good?

    Is the implication of this post that IDA Hub is performing a security enforcing function and so won't be published, or is it more that it's just not ready to publish yet?

  10. Comment by Indy

    "There is, in industry, an accepted separation of configuration from code. " - How about some proper English please? For example - In industry there is an accepted separation between configuration and code