Good digital services

How do we design good services? One way to build better digital services is to put the right processes and practices in place: processes that drive results and accountability and enable people and organizations to do their best work. Good approaches to service delivery make us more flexible, more iterative, and, most importantly, more focused on the needs of the people using the services. In short, it is about an iterative, user-centric development process. Too many digital services do not work well, are delivered late, or run over their initial budget; a new approach is needed to raise the success rate of these projects. The following 13 points are the most important for building effective digital services.

1. understand needs

We need to start digital projects by researching and concretizing the needs of the people who will use the service and how the service fits into their lives. Whether users are citizens or government employees, policymakers need to involve real people in their design process from the beginning. People’s needs – not the constraints of organizational structures – should influence technical and design decisions. We have to constantly test the products we build with real people to understand what is really important.

— Checklist

Engage with current and potential service users at the beginning of the project.

Use a range of qualitative and quantitative research methods to determine people’s goals, needs, and behaviors.

Test prototype solutions with real people, on-site if possible.

Document findings about users’ goals, needs, behaviors, and preferences.

Share results with the team and agency leadership.

Create a prioritized list of the tasks that users are trying to complete, also known as “user stories” (see the sketch after this checklist).

As you build the digital service, test it regularly with potential users to make sure it meets people’s needs.
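To make this concrete, here is a minimal sketch of what a prioritized user-story backlog could look like in code; the stories, priorities, and field names are invented for illustration.

    # A minimal, hypothetical user-story backlog: each entry names the user,
    # the task they are trying to complete, and a priority based on research.
    backlog = [
        {"as_a": "first-time applicant", "i_want": "to check my eligibility online",
         "so_that": "I do not have to visit an office", "priority": 1},
        {"as_a": "caseworker", "i_want": "to see the status of an application",
         "so_that": "I can answer questions on the phone", "priority": 2},
    ]

    # Work is pulled from the top of the list, highest priority first.
    for story in sorted(backlog, key=lambda s: s["priority"]):
        print(f'As a {story["as_a"]}, I want {story["i_want"]} so that {story["so_that"]}.')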

— Important questions

    • Who are your main users?
    • Which user needs does this service address?
    • Why does the user want or need this service?
    • Which individuals will have the most difficulty with the service?
    • What research methods were used?
    • What were the most important results?
    • How were the results documented?
    • Where can future team members access the documentation?
    • How often do you test with real people?

2. focus on the complete user experience

We need to understand how people interact with our services, including the actions they take online, via a mobile application, on the phone or in person. Every encounter – whether online or offline – should bring users closer to their goal.

— Checklist

Understand the different points at which people interact with the service, both online and in person.

Identify pain points in the current way users interact with the service and prioritize them according to users’ needs.

Design the digital parts of the service so that they are integrated with the offline touchpoints people use to interact with the service.

Develop metrics that measure how well the service meets users’ needs at each step.

— Important questions

    • What are the different ways (both online and offline) that people are currently completing the task that the digital service is supposed to help with?
    • Where are users’ pain points in the current way people perform this task?
    • Where does this specific project fit into the larger experience of how people currently receive this service?
    • What metrics best show how well the service is working for its users?

3. keep it simple!

Using a service should not be stressful, confusing or daunting. Our task is to develop services that are so simple and intuitive that users are successful the first time without outside help.

— Checklist

Use a simple and flexible design style guide for the service, building on established web design standards.

Give users clear information about where they are in each step of the process.

Follow established accessibility practices to ensure that everyone can use the service.

Give users a way to exit the process and return to complete it later.

Use language that is familiar to users and easy to understand.

Use language and design consistently throughout the service, including at online and offline touchpoints.

— Important questions

    • What are the main tasks users are trying to accomplish?
    • Is the language as simple and universal as possible?
    • In which languages is your service offered?
    • If a user needs help using the service, how can they get it?
    • How does the design of the service relate visually to other government services?

4. development with agile and iterative methods

We should use an incremental, rapid style of software development to reduce the risk of failure. We want to get the software into the hands of users as early as possible to allow the design and development team to adapt based on user feedback about the service. An important feature is the ability to automatically test and deploy the service so that new features can be added frequently and easily put into production.
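As a minimal sketch of that “test, then release” gate: the test command and deploy step below are placeholders, assuming a Python project that uses pytest; each team substitutes its own tooling.

    # Minimal sketch of an automated "test, then deploy" gate. The test command and
    # the deploy step are placeholders for whatever tooling the project actually uses.
    import subprocess
    import sys

    def run_tests() -> bool:
        # Run the project's automated test suite; a non-zero exit code means failure.
        return subprocess.run(["pytest", "--quiet"]).returncode == 0

    def deploy_to_production() -> None:
        # Placeholder: in practice this calls the team's deployment script or
        # continuous delivery service.
        print("Deploying the latest build to production...")

    if __name__ == "__main__":
        if run_tests():
            deploy_to_production()
        else:
            sys.exit("Tests failed - nothing was deployed.")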

— Checklist

Ship a working “minimum viable product” (MVP) that solves the needs of key users as quickly as possible, no later than three months after the start of the project, using a “beta” or “test” period if needed.

Conduct regular usability testing to determine how well the service is working and what improvements should be made.

Ensure that the people building the service communicate closely with each other using daily stand-ups and team chat tools.

Keep the development team small and focused; limit the organizational layers that separate these teams from the decision makers.

Release features and enhancements several times a month.

Create a prioritized list of features and bugs, also known as a “feature backlog” and “bug backlog.”

Use a source code version control system.

Give the entire project team access to the issue tracker and version control system.

Use code reviews for quality assurance.

— Important questions

    • How long did it take to ship the MVP? If it hasn’t been shipped yet, when will it ship?
    • How long does a production deployment take?
    • How many days or weeks are in each iteration/sprint?
    • Which version control system is used?
    • How are bugs tracked and tickets issued? Which tool is used?
    • How is the feature backlog managed? Which tool is used?
    • How often do you review and prioritize the feature and bug backlog?
    • How do you collect user feedback during development?
    • How is this feedback used to improve the service?
    • What gaps were identified in each phase of usability testing?

5. budgets and contracts

To improve the chances of success when awarding development contracts, we need to work with experienced budgeting and contract managers. In cases where we use third parties to build a service, a clearly defined contract can facilitate good development practices such as conducting a research and prototyping phase, refining product requirements as the service is built, evaluating open source alternatives, setting frequent delivery milestones, and allowing flexibility in purchasing cloud computing resources.

— Checklist

The budget includes research, discovery and prototyping activities.

The contract is structured to require frequent deliverables rather than multi-month milestones.

The contract is structured to hold suppliers accountable for their performance.

The contract gives the government team enough flexibility to adjust the prioritization of features and delivery schedule as the project evolves.

The contract ensures that open source solutions are evaluated during technology selection.

The contract stipulates that software and data created by third parties remain under our control and can be reused and made available to the public in accordance with legal requirements.

The contract allows us to use tools, services, and hosting from vendors with a variety of pricing models, including fixed fees and variable models such as “pay-for-what-you-use” services.

The contract establishes a warranty period during which defects discovered by the public are addressed by the vendor at no additional cost to the state.

The contract includes a transition period and a transition plan.

— Important questions

    • What is the scope of the project? What are the key deliverables?
    • What are the milestones? How frequent are they?
    • What performance metrics are defined in the contract (e.g., response time, system availability, time to resolve priority issues)?

6. put responsibility in one person’s hands

There must be a single product owner who has the authority and responsibility to assign tasks and work items, make business, product, and technical decisions, and be accountable for the success or failure of the overall service. This product owner is ultimately responsible for how well the service meets the needs of its users, which is how a service should be evaluated. The product owner ensures that features get built and manages the feature and bug backlogs.

— Checklist

One product owner has been identified.

All stakeholders agree that the product owner has the authority to assign tasks and make decisions about features and technical implementation details.

The product owner has a product management background and enough technical experience to evaluate alternatives and weigh tradeoffs.

The product owner has a work plan that includes budget estimates and identifies funding sources.

The product owner has a strong relationship with the client.

— Important questions

    • Who is the product owner?
    • What organizational changes have been made to ensure that the product owner has sufficient authority and support for the project?
    • What does the product owner need to do to add or remove a feature from the service?

7. experienced teams

We need talented people who have experience in developing modern digital services. This includes experienced product managers, engineers and designers. When outside help is needed, our teams should work with contracting officers who understand how to assess the technical expertise of third-party vendors so that our teams can be paired with contractors who are good at both building and delivering effective digital services. The requirements for the composition and experience of the team vary depending on the scope of the project.

— Checklist

Members of the team have experience building popular, high-traffic digital services.

One or more members of the team have experience in mobile and web application development.

Members of the team have experience working with automated testing frameworks.

Members of the team have experience with modern development and operational techniques (DevOps) such as continuous integration and continuous deployment.

One or more members of the team have experience in securing digital services.

The appropriate privacy, civil liberties, and/or legal counsel for the department or agency is a partner.

8. use of modern technology stacks

The technology decisions we make must enable development teams to work efficiently and scale services easily and cost-effectively. Our choice of hosting infrastructure, databases, software frameworks, programming languages, and the rest of the technology stack should aim to avoid vendor lock-in and match what successful modern consumer and enterprise software companies would choose today. In particular, digital service teams should consider open source, cloud-based, and commodity solutions across the technology stack, as these are being adopted and supported by successful private sector technology companies.

— Checklist

Choose software frameworks that are also commonly used by private companies with similar needs.

Whenever possible, ensure that the software can be used on a variety of common hardware types.

Ensure that each project has clear, understandable instructions for setting up a local development environment and that team members can be quickly added to or removed from projects (see the sketch after this checklist).

Consider open source software solutions at every level of the stack.
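One low-cost way to make onboarding fast is a script that checks a new team member’s machine against the project’s prerequisites. The sketch below is hypothetical; the tool list is only an example, and each project would define its own.

    # Hypothetical environment check for new team members. The required tools are
    # examples only; each project lists its own prerequisites.
    import shutil
    import sys

    REQUIRED_TOOLS = ["git", "docker", "python3"]   # example prerequisites

    missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
    if missing:
        sys.exit("Missing required tools: " + ", ".join(missing))
    print("Local development environment looks ready.")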

— Important questions

    • What is your development stack and why did you choose it?
    • Which databases do you use and why did you choose them?
    • How long does it take for a new team member to start developing?

9. deployment in a flexible hosting environment

The services should be deployed on a flexible infrastructure where resources can be provisioned in real time to cope with peak traffic and user demand. Digital services are hamstrung when they are hosted in data centers that market themselves as “cloud hosting” but still require teams to directly manage and maintain the hardware. This outdated practice wastes time and leads to significantly higher costs.
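The point of flexible hosting is that capacity follows demand rather than a fixed hardware purchase. The sketch below shows the kind of scaling rule a hosting provider’s auto-scaling feature applies; the thresholds and instance limits are invented for illustration.

    # Minimal sketch of demand-based scaling logic. Real services rely on the hosting
    # provider's auto-scaling; thresholds and limits here are invented examples.
    def desired_instance_count(current_instances: int,
                               avg_cpu_percent: float,
                               scale_up_at: float = 70.0,
                               scale_down_at: float = 30.0,
                               min_instances: int = 2,
                               max_instances: int = 20) -> int:
        if avg_cpu_percent > scale_up_at:
            return min(current_instances + 1, max_instances)
        if avg_cpu_percent < scale_down_at:
            return max(current_instances - 1, min_instances)
        return current_instances

    # Example: a traffic spike pushes average CPU to 85% across 4 instances.
    print(desired_instance_count(4, 85.0))   # -> 5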

— Checklist

Resources are provisioned as needed.

Resources scale based on real-time user demand.

Resources are provided via an API.

Resources are available in several regions.

We only pay for the resources we consume.

Static assets are delivered via a content delivery network.

The application is hosted on commodity hardware.

— Important questions

    • Where is your service hosted?
    • What hardware does your service run on?
    • What is the demand or usage pattern of your service?
    • What happens to your service if it experiences an increase in traffic or load?
    • How much capacity is available in your hosting environment?
    • How long does it take you to deploy a new resource like an application server?
    • How have you scaled your service as needed?
    • How do you pay for your hosting infrastructure (e.g. by the minute, hourly, daily, monthly, fixed)?
    • Is your service hosted in multiple regions, availability zones, or data centers?
    • In the event of a data center disaster, how long will it take for the service to become operational?
    • What would be the impact of a longer downtime?
    • What data redundancy have you built into the system and what would be the impact of a catastrophic data loss?
    • How often do you need to contact a person from your hosting provider to fix a problem?

10. automated testing and deployments

Today, developers write automated scripts that can verify thousands of scenarios within minutes and then deploy updated code to production environments several times a day. While manual testing and quality assurance are still necessary, automated testing provides consistent and reliable protection against unintended errors and allows developers to safely release frequent updates to the service.
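As a minimal sketch of such an automated test, assuming pytest as the test framework: the function under test is invented for illustration, and each service would test its own features.

    # A hypothetical function and two automated tests for it, written for pytest.
    def calculate_fee(amount_cents: int) -> int:
        """Charge a flat 2% processing fee, rounded down to whole cents."""
        return amount_cents * 2 // 100

    def test_fee_for_typical_payment():
        assert calculate_fee(10_000) == 200   # 2% of 100.00

    def test_fee_is_zero_for_tiny_payment():
        assert calculate_fee(49) == 0         # rounds down below one cent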

— Checklist

Create automated tests that verify all user-facing functions.

Create integration tests to verify modules and components.

Run automated tests as part of the build process.

Perform automated deployments using deployment scripts, continuous delivery services, or similar techniques.

Perform load and performance tests at regular intervals, including prior to public launch.

— Important questions

    • What percentage of the code base is covered by automated tests?
    • How long does it take to build, test, and deploy a typical bug fix?
    • How long does it take to develop a new feature, test it, and deploy it to production?
    • How often are builds created?
    • Which test tools are used?
    • What deployment automation or continuous integration tools are used?
    • What is the estimated maximum number of simultaneous users who will want to use the system?
    • How many concurrent users could the system handle after the last capacity test?
    • How does the service work if you exceed the expected target volume?
    • What is your scaling strategy if demand suddenly increases?

11. security and data protection through reusable processes

Digital services must protect sensitive information and keep systems secure. This is typically a process of continuous review and improvement that should be integrated into the development and maintenance of the service. At the beginning of the development of a new service or function, the team leader should engage the appropriate privacy, security, and legal officer(s) to discuss the type of information collected, how it will be secured, how long it will be retained, and how it may be used and shared. The sustained commitment of a data protection officer helps to ensure that personal data is properly managed. Furthermore, building a secure service requires comprehensively testing and certifying the components in each layer of the technology stack for security vulnerabilities and then reusing these pre-certified components for multiple services. The following checklist provides a starting point. Teams should work closely with their data protection officer and security engineer to meet the requirements of the service in question.
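One lightweight way to support this conversation is to keep a machine-readable data inventory for each piece of personal data the service collects. The sketch below is hypothetical; the field names, retention period, and storage description are examples to be agreed with the data protection officer.

    # Hypothetical data inventory entry: what is collected, why, how it is stored,
    # how long it is kept, and with whom it is shared. Values are examples only.
    from dataclasses import dataclass, field

    @dataclass
    class DataInventoryEntry:
        field_name: str
        purpose: str
        storage: str
        retention_days: int
        shared_with: list = field(default_factory=list)

    email_address = DataInventoryEntry(
        field_name="email_address",
        purpose="sending status notifications about an application",
        storage="encrypted at rest in the application database",
        retention_days=365,
    )   # shared_with stays empty: not shared with other services or partners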

— Checklist

Determine, in consultation with the data protection officer, what data is collected and why, how it will be used or shared, how it will be stored and secured, and how long it will be retained.

Determine, in consultation with a privacy officer, whether and how users will be informed about how personal information is collected and used, including whether a privacy policy is required and where it should appear, and how users will be notified in the event of a security breach.

Consider whether and how users’ data can be accessed, deleted, or removed from the service.

Use deployment scripts to ensure that the production environment configuration remains consistent and controllable.

— Important questions

    • Does the service collect users’ personal data? How are users informed about this collection?
    • Does the service collect more information than necessary? Could the data be used in ways that the average user would not expect?
    • How can users access, correct, delete or remove personal data?
    • Will the personal data stored in the system be shared with other services, individuals or partners?
    • How and how often is the service tested for security vulnerabilities?
    • How can a member of the public report a security issue?

12. making data-based decisions

At every stage of a project, you should measure how well your service is working for users. This includes measuring how well a system works and how people interact with it in real time. Teams and agency leadership should carefully monitor these metrics to find problems and determine which fixes and improvements should be prioritized. In addition to the monitoring tools, there should be a feedback mechanism that enables everyone to report problems directly.
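As a minimal sketch of the kind of metric and alert rule meant here: the share of requests slower than 1, 2, 4, and 8 seconds, with a simple alert on top. The sample response times and the 5% threshold are invented; real services compute this from their monitoring system.

    # Share of requests slower than 1s, 2s, 4s, and 8s, plus a simple alert rule.
    # The sample data and alert threshold are invented examples.
    response_times = [0.3, 0.8, 1.2, 0.4, 5.1, 2.3, 0.9, 0.2]   # seconds

    for threshold in (1, 2, 4, 8):
        slow = sum(1 for t in response_times if t > threshold)
        print(f"requests slower than {threshold}s: {100 * slow / len(response_times):.1f}%")

    share_over_2s = 100 * sum(1 for t in response_times if t > 2) / len(response_times)
    if share_over_2s > 5:
        print("ALERT: more than 5% of requests are slower than 2 seconds")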

— Checklist

Monitor resource utilization at the system level in real time.

Monitor system performance in real time (e.g., response time, latency, throughput, and error rates).

Create automated alerts based on this monitoring.

Track concurrent users in real time and monitor user behavior in the aggregate to determine how well the service is meeting users’ needs.

Publish metrics internally.

Publish metrics externally.

— Important questions

    • What are the most important metrics for the service?
    • How have these metrics evolved over the lifetime of the service?
    • Which monitoring tools are used?
    • What is the target average response time for your service? What percentage of requests take more than 1 second, 2 seconds, 4 seconds, and 8 seconds?
    • What is the average response time and percentage breakdown (percent of requests taking more than 1s, 2s, 4s, and 8s) for the top 10 transactions?
    • What is the volume of each of your service’s top 10 transactions? What is the percentage of transactions started and completed?
    • What is the monthly uptime of your service?
    • What is the monthly uptime of your service, including scheduled maintenance? Without scheduled maintenance?
    • How does your team receive automatic incident notifications?
    • How does your team respond to incidents? What is your post-mortem process?
    • What tools are available to measure user behavior?
    • What tools or technologies are used for A/B testing?
    • How do you measure customer satisfaction?

13. default to open

Open data can improve transparency and collaboration. By making services more open and publishing open data, we simplify public access to government services and information, allow the public to easily contribute and enable reuse by entrepreneurs, non-profit organizations and the public.
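As a minimal sketch of what “bulk download plus API” could look like, assuming Flask as an example web framework; the dataset and routes are invented for illustration.

    # Hypothetical API exposing a public dataset as a bulk download and per record.
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    DATASET = [
        {"id": 1, "office": "Central", "visits_2023": 10432},
        {"id": 2, "office": "North", "visits_2023": 5210},
    ]

    @app.route("/api/visits")
    def bulk_download():
        # The full dataset in one response, for reuse by third parties.
        return jsonify(DATASET)

    @app.route("/api/visits/<int:record_id>")
    def single_record(record_id):
        for record in DATASET:
            if record["id"] == record_id:
                return jsonify(record)
        abort(404)

    if __name__ == "__main__":
        app.run()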

— Checklist

Provide a mechanism for users to report bugs and issues, and respond to those reports.

Make datasets available to the public in their entirety through bulk downloads and APIs (application programming interfaces).

Ensure that the service’s data is explicitly in the public domain, and that rights are waived globally through an international public domain dedication such as Creative Commons Zero (CC0).

Catalog the data in the agency’s enterprise data inventory and add all public datasets to the agency’s public data listing.

Ensure that contractual rights to all custom software developed by third parties are retained, so that the software can be published and reused free of charge.

If necessary, create an API for third parties and internal users to interact directly with the service.

Publish the source code of projects or components online, if applicable.

Share your development process and progress publicly, if applicable.

— Important questions

    • How do you collect user feedback for bugs and problems?
    • If there is an API, what capabilities does it provide? Who uses it? How is it documented?
    • If the code base has not been released under an open source license, explain why.
    • Which components are made available to the public as open source?
    • What data sets will be made available to the public?

more tips, tricks and kits