Quantitative testing of Betagov content and layout

Posted by: , Posted on: - Categories: Content design, Data, GOV.UK

I am Nick Breeze, one of the Senior Insight Managers at GDS, and I work in the team responsible for testing content being developed for GOV.UK.

Recently we completed quantitative usability testing to compare the performance of content and layout from Directgov with that of the forthcoming beta version of GOV.UK.

This follows up on conventional face-to-face usability testing conducted in December 2011. The quantitative study used GDS's summative test methodology, which lets us reach a large, representative sample of 1,800 online users cheaply and quickly.

Conventional usability testing puts users in an unfamiliar environment, on someone else's computer, being quizzed by a moderator. Quantitative usability testing has the added benefit that participants complete tasks in their own homes, on their own computers, with no interference from a moderator.

The research involved testing how well we meet user needs in two areas:

  • Tax
  • Going to Court

Each area featured one version using the existing Directgov content, layout and style, and another using the Betagov version. These were loaded onto a test website, and participants from an online research panel were asked to take part in our study.

Screening questions, such as age and gender, were asked to ensure that we obtained a representative sample, and then each participant was tasked with finding four separate pieces of information. For example, one task asked people to imagine that they were the victim of a crime and that the case was being taken to court; they were then asked to find information on who would contact them to offer advice and support. Afterwards they reported on how easy the information was to find, whether or not they had read all the content, what they thought of the writing style, and finally their opinion on the amount of information provided.

Overall the results showed that participants who had been asked to complete the tasks using Betagov were quicker and more successful in finding the correct information.

Some key findings from the research (attached in a slide deck below) are as follows:

  • Participants found information faster using Betagov – an average of 80 seconds, compared with 123 seconds for the Directgov versions. On some individual tasks they were over a minute quicker.
  • We see a step change in successful task completion rates for Betagov compared with Directgov, up to nearly 70% from around 60%. This would equate to well over 1 million more user needs being successfully met each month.
  • An average of 90% of participants said it was 'quite/very easy' to find information on the Betagov versions.
  • The majority of participants described the Betagov style as 'straightforward', 'to the point' and/or 'reassuring'. Moreover, the majority also agreed that Betagov content contained the 'right amount of information'.
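A step change from around 60% to nearly 70% completion across a 1,800-person panel can be sanity-checked with a standard two-proportion z-test. The sketch below is illustrative only: it assumes an even split of 900 participants per version and the rounded rates quoted above, none of which are stated exactly in the post.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the gap between two completion
    rates larger than chance alone would plausibly produce?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Assumed: 1,800 panellists split evenly, with completion rates
# of ~70% (Betagov) and ~60% (Directgov).
z, p = two_proportion_z(0.70, 900, 0.60, 900)
```

Under these assumed figures z comes out around 4.4, well beyond the 1.96 threshold for significance at the 5% level, which is consistent with the significance testing mentioned in the comments below.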

At GDS we’re always very wary of making *too* much of user testing - there’s no substitute for putting a real product in front of real users with real needs. However, we can’t help but be cautiously excited about these results.

By way of context, if (and it is a big if) the same level of improvement were mirrored across the live service, a Betagov-style product could lead to:

  • Over 1 million more user needs being successfully met each month (aka a million frustrating & expensive failures avoided)
  • Users saving over 215,000 hours of their time each month
  • Quantifiably lower levels of user frustration, and higher levels of user reassurance
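The time-saving projection follows from the task timings above: the average saving per task is 123 − 80 = 43 seconds. The post does not state the monthly task volume, so the figure below (roughly 18 million relevant tasks a month) is an assumption chosen to show how a 43-second saving scales up to about 215,000 hours.

```python
# Assumed monthly task volume -- NOT stated in the post; chosen to
# illustrate how the headline hours figure scales with volume.
tasks_per_month = 18_000_000

seconds_saved_per_task = 123 - 80  # 43 s, from the timing result above
hours_saved = tasks_per_month * seconds_saved_per_task / 3600
```

At that assumed volume, `hours_saved` works out to 215,000 hours a month; a smaller or larger real volume would scale the figure proportionally.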

But to reiterate, we are only cautiously optimistic: even a large, 1,800-strong panel is no substitute for real users facing real needs. When the beta of GOV.UK goes live, it will doubtless reveal many further opportunities to improve.

View the slideshare:

Sharing and comments

  1. Comment by Researching Inside Government | Government Digital Service posted on

    [...] also conducted some remote usability testing, or what we call ‘summative testing‘. This allows us to gauge the performance of content, layout and perception with a large [...]

  2. Comment by Tech weekly roundup | Cogapp posted on

    [...] recent study by the GDS shows that the beta site is already performing better than Directgov, with users [...]

  3. Comment by A simple guide to applying for a job at the GDS | Government Digital Service posted on

    [...] we want to carry on doing amazing things here at GDS then we need to carry on building up the world-class digital talent in [...]

  4. Comment by Introducing the beta of GOV.UK | Government Digital Service posted on

    [...] have of Government (broadly, those currently catered for by Directgov) - making them as findable, understandable and actionable as we can. We’ve built a scalable, modular open source technology platform to [...]

  5. Comment by Nick Breeze posted on


    Yes, we did conduct significance testing, and all the tasks where Betagov outperformed the Directgov content/layout were statistically significant. I'm sorry, but I don't understand your first question?

    Kind regards,

  6. Comment by yorksranter posted on

    It's interesting how much of this focused on the textual content of the site (as opposed to gfx and interaction) - bring back the job description "technical writer"? Also, did you do any tests of statistical significance?

  7. Comment by Fats Brannigan posted on

    Isn't this a bit like A/B testing finding a needle in a haystack, vs a needle hidden amongst a few straws?