This post is now outdated. For a more up-to-date post, see 'Don't be afraid to code in the open: here's how to do it securely'.
To meet the Digital By Default Service Standard, digital services have to ‘make all new source code open and reusable, and publish it under appropriate licences (or give a convincing explanation as to why this can’t be done for specific subsets of the source code).’ This reflects our design principle of making things open because it makes them better.
This post is about the types of circumstances where we think it’s appropriate not to publish all source code, and how we in GDS approach decisions about what not to publish.
There are three types of code that we think it’s appropriate not to publish.
1. Information about how our software is configured
In industry there is an accepted separation between configuration and code. We don’t publish information about how our software is configured. This includes, for example, information about:
- what software we’re using to enforce certain security measures (‘security enforcing function’) eg antivirus software, intrusion detection software
- specific versions of the software that we use
- detailed configuration of firewalls and other security-related software
For these categories of information, making things open does not make them more secure, and the public interest is better served by not publishing them. We also think the impact of not publishing them is low: the details are specific to our deployment and at a level that others wouldn’t find useful.
For example, there will inevitably be times when there is a gap between us finding out about a vulnerability in some software that we’re using and us being able to fix it. Not publishing which specific version we’re using makes it harder for an attacker to exploit publicly known vulnerabilities while we are still fixing them.
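The separation between code and configuration described above can be sketched in a few lines. This is a minimal illustration, not how GDS systems work: the function, environment variable names and values are all hypothetical. The point is that the published code describes behaviour only, while the security-sensitive specifics (which scanner, which version, which flags) live in deployment configuration that is never committed to the open repository.

```python
import os

# Published, open-source code: behaviour only, no deployment specifics.
def build_scanner_command(path):
    """Build the command line for whatever scanner is configured.

    The scanner binary and its options are deployment configuration,
    supplied via environment variables that stay out of the open
    repository. (The variable names here are hypothetical.)
    """
    scanner = os.environ["SCANNER_BIN"]           # eg path to the scanner binary
    options = os.environ.get("SCANNER_OPTS", "")  # site- and version-specific flags
    return [scanner, *options.split(), path]
```

Anyone reading the open code can see what it does and reuse it, but learns nothing about which product or version is actually deployed.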
2. Implementation of code that performs a security enforcing function
Where our code performs a security enforcing function, we will make the design public but not the implementation.
We build on top of open standards, for example we use SAML for GOV.UK Verify and the profile is published. We don’t publish information about the implementation of the design because it would allow people to create a duplicate and practise hacking it without our being able to detect that activity.
In instances where we don’t publish our code because it fulfils a security enforcing function, we make it available for peer review and subject the code to penetration testing. This involves commissioning security experts to attempt to break into the system to help us identify any areas of weakness and potential improvement.
3. Code that it’s not appropriate to publish at that time, but may be later
We may also occasionally judge that the public interest is not served by publishing our own software before the related policy has been announced, because without that context it’s not possible or not helpful to reveal the code. In these cases we may publish the code later. We expect the teams responsible to still develop the code as if it were open, to make sure there are as few barriers as possible to opening it up when the time is right. We did this in some cases while building GOV.UK.