How We Visualized the 43 Indicators of OECD’s Going Digital Toolkit

Luc Guillemot
Interactive Things
9 min read · Mar 21, 2019


The home page of OECD’s Going Digital Toolkit with the flagship visualization.

At Interactive Things, we create data-driven applications ranging from highly curated data stories (about Roger Federer’s career, for instance) to more general data exploration tools (like the Public Values Atlas or UNESCO’s World Inequalities Database on Education). These represent different challenges: a data story allows us to use custom, unique and original chart types with styles tailored to the story, whereas an exploration tool calls for more generic charts, a system that can be applied to any dataset without requiring deep knowledge of each one.

When OECD asked us to create the Going Digital Toolkit, we faced a different challenge: the team at OECD selected 43 indicators and asked us to visualize them in an online toolkit that provides access by country, by policy dimension (groups of indicators) and by theme (cross-dimensional groups of indicators). On the one hand, 43 indicators are not enough to justify a sophisticated universal chart system; on the other hand, 43 is too many to customize and curate each visualization as thoroughly as if there were only one.

The OECD gathers a vast range of indicators and has already made them accessible through an exploration tool. Most of the indicators were selected out of this data pool by the OECD team responsible for the Going Digital Toolkit. Other indicators were created ad hoc to match the toolkit’s purpose: publishing an overview of the digital achievements of OECD countries and facilitating policy making for their governments.

1. An overview that became a data-driven signature.

The first interface with the indicators is a highly interactive radial plot. Users can already dive into a single indicator or country rankings, while still seeing the overview of all datasets available to explore.
An explanation banner can be displayed to help onboard visitors.

The first interface to the indicators is a radial layout. It provides an overview of the datasets available (the user can already see that there are quite a few) and of the indicators themselves. They are colored by policy dimension to allow visual grouping (the user can see at a glance whether a country performs well in a given dimension) and to ease the user into the color scheme used across the toolkit to identify the policy dimensions. Carefully implemented interaction behaviors (hovering with a mouse, spinning on a touch screen) allow users to scrutinize a dataset without leaving the global picture that the radial layout offers.
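
To make the layout logic concrete, here is a minimal sketch of how indicators could be positioned around such a wheel, assuming a normalized value per indicator and a hypothetical color-per-dimension mapping; it is an illustration, not the toolkit’s actual implementation.

```typescript
// A minimal sketch of a radial layout of indicators, grouped and colored by
// policy dimension. Types, field names and the palette are assumptions.
interface Indicator {
  id: string;
  dimension: string; // policy dimension, used for grouping and color
  value: number;     // normalized 0–1 value for the highlighted country
}

interface RadialMark {
  id: string;
  angle: number;       // position around the wheel, in radians
  innerRadius: number;
  outerRadius: number;
  color: string;
}

const DIMENSION_COLORS: Record<string, string> = {
  // illustrative palette; the toolkit uses one color per policy dimension
  innovation: "#0077b6",
  jobs: "#e07a5f",
  growth: "#81b29a",
};

function radialLayout(
  indicators: Indicator[],
  innerRadius = 80,
  maxRadius = 220
): RadialMark[] {
  // Sort by dimension so each dimension occupies a contiguous slice of the wheel.
  const sorted = [...indicators].sort((a, b) =>
    a.dimension.localeCompare(b.dimension)
  );
  const step = (2 * Math.PI) / sorted.length;

  return sorted.map((d, i) => ({
    id: d.id,
    angle: i * step,
    innerRadius,
    outerRadius: innerRadius + d.value * (maxRadius - innerRadius),
    color: DIMENSION_COLORS[d.dimension] ?? "#999",
  }));
}
```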

In addition, the visualization can be customized to highlight individual countries or aggregates of countries (OECD or EU28).

This is a highly curated view and a very playful way to access the datasets (especially on a touch screen, where users can spin the “wheel” to select a random indicator), so distinctive and iconic that it became the data-driven signature OECD used in its communication strategy to advertise the OECD Summit on Going Digital that took place in early March 2019 in Paris. During the summit, visitors could interact with the toolkit on a big touch screen. The wheel was also printed in a manipulable form and handed out to visitors.

On a touch screen, users can “spin the wheel” to select an indicator at random.
At the OECD Summit on Going Digital in Paris.

2. Focus on policy dimensions and themes.

The second level gives an overview of how countries perform in a given policy dimension or a theme.

The second level of reading is about policy dimensions and themes. Policy dimensions are eight areas of opportunity and risk that countries face with regard to digital transformation, such as innovation, jobs, or growth and well-being. Themes are cross-dimensional entry points to the indicators. They focus on specific societal issues like development or gender.

At this level, all indicators are displayed as horizontal dot plots. Countries are positioned along one dimension, creating a ranking and a comparison. The range of OECD countries is subtly indicated with a white background. The indicator rows are linked, so hovering over a country on one indicator line also highlights it on the other indicators. The purpose of this second level of reading is to give the big picture of how countries perform in a given policy dimension, not to deep-dive into every indicator.
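
As an illustration of this linked hovering, here is a minimal sketch of how rows could share a hover state, assuming plain SVG circles tagged with a data-country attribute and a small hypothetical store; the toolkit’s actual implementation may differ.

```typescript
// A minimal sketch of linked hovering across dot plot rows.
// The HoverStore and the data-country attribute are illustrative assumptions.
type Listener = (countryId: string | null) => void;

class HoverStore {
  private listeners: Listener[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  setHovered(countryId: string | null): void {
    this.listeners.forEach((l) => l(countryId));
  }
}

// Each indicator row subscribes once and re-styles its dots when the hovered
// country changes, so hovering one row also highlights the others.
function bindRow(row: SVGGElement, store: HoverStore): void {
  const dots = row.querySelectorAll<SVGCircleElement>("circle[data-country]");

  dots.forEach((dot) => {
    dot.addEventListener("mouseenter", () =>
      store.setHovered(dot.dataset.country ?? null)
    );
    dot.addEventListener("mouseleave", () => store.setHovered(null));
  });

  store.subscribe((countryId) => {
    dots.forEach((dot) => {
      dot.classList.toggle("highlighted", dot.dataset.country === countryId);
    });
  });
}
```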

3. Deep-dive into the 43 indicators.

Visualizing the indicators themselves was a challenge, as we needed to translate the message that the team at OECD wanted to convey with each indicator while still giving users the possibility to explore other facets of the same indicator. For every indicator, the comparison between countries and the trends over time were also of interest. In other words, we were trying to create a tool that encompasses both ends of the line we drew earlier between a curated data story and a universal data exploration tool.

The level of curation we implemented: the share of young women who can code, depicted as circles, is used to sort countries, as it is the emphasis of this indicator. It can be benchmarked against the share of all individuals who can code (the bars in the background). It can also be compared with the share of young men who can code (the diamonds). The magnitude of the gender disparity is conveyed with a high-low line.

The indicator titled “Women as a share of all 16–24 year-olds who can program” depicted above is a good example of the level of curation that we and OECD wanted to achieve. It’s an indicator about gender disparities in software education among young adults. To establish a comparison, we visualized the benchmark for each country (the ability to code among all young adults) as a bar, and the values that convey the message (the gender disparity) as dots. We added a high-low line that depicts, for each country, the difference between men and women; the longer the line, the larger the disparity.

We decided to order the countries by the breakdown dedicated to women instead of the expected benchmark value for all individuals (the bars). This slight disruption acts as a visual nudge that points the visitor’s eyes to the fact we want to emphasize: the share of young women who can code, which is always lower than the share of young men who can code.
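
To illustrate, here is a minimal sketch of how the layered marks for this indicator could be derived, assuming hypothetical field names for the three breakdowns; the real chart system is more generic than this.

```typescript
// A minimal sketch of the layered marks for the coding-skills indicator:
// a benchmark bar, the emphasized breakdown as a circle, a comparison breakdown
// as a diamond, and a high-low range line. Field names are assumptions.
interface CodingSkills {
  country: string;
  all: number;    // share of all 16–24 year-olds who can program
  female: number; // share of young women
  male: number;   // share of young men
}

interface LayeredMark {
  country: string;
  bar: number;       // background benchmark (all individuals)
  circle: number;    // emphasized breakdown (women)
  diamond: number;   // comparison breakdown (men)
  rangeLow: number;  // high-low line endpoints
  rangeHigh: number;
}

function buildMarks(rows: CodingSkills[]): LayeredMark[] {
  return [...rows]
    // Sort by the emphasized breakdown (women), not the benchmark,
    // to nudge the reader toward the indicator's message.
    .sort((a, b) => b.female - a.female)
    .map((r) => ({
      country: r.country,
      bar: r.all,
      circle: r.female,
      diamond: r.male,
      rangeLow: Math.min(r.female, r.male),
      rangeHigh: Math.max(r.female, r.male),
    }));
}
```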

To add more depth to this indicator, users can select among a subset of other breakdowns, other “dots”, to add to the chart. In most countries, for instance, the share of young women who can code is higher than the share of all adults aged 25–54: gender plays a role in programming skills, but so does age.

Building a scalable chart system

Given the challenge of bridging the gap between a data story and an exploration tool, we followed an incremental strategy to fulfill prioritized goals that also fit the tight timeframe we had:

  1. The primary goal was to have a basic chart for all 43 indicators and breakdowns.
  2. The secondary goal was to customize the charts individually, in close collaboration with OECD.
  3. The tertiary goal was to introduce additional chart types to bring more diversity and relevance to the visualizations.

We endeavored to visualize as much information as possible within the given timeframe by building a chart system that has the potential to scale in the future, and that can be enriched with more indicators on demand, without extra work.

Every small decision

When designing and developing an application, every improvement is the result of small decisions made along the way. This is how we gradually enriched the visualizations and how the Going Digital Toolkit’s indicators came to life:

  • As is often the case, the first step consisted of visualizing everything with bar charts. The first iteration connected to every dataset through the OECD API and displayed everything available in the data cubes as simple bar charts (a minimal sketch of such a data pull appears after this list). This allowed us, working closely with OECD’s team, to select the breakdowns that were relevant to the topics tackled in the toolkit, and it gave us a better overview of the data available and of the chart types we would need.
  • Latest year available, not latest year. To avoid missing data, we decided to show the latest data available for every country rather than only the latest year. Had we filtered out data prior to 2018, we would have dropped countries from the comparison that still have recent data. We clearly indicate the relevant year in the tooltips displayed when hovering over a data point (see the sketch after this list).
  • Trend over time means line chart. It quickly became obvious that we would also need line charts to show time series. We therefore added a time checkbox to switch between the latest data available and the time series view.
  • Details as nudges. Small details were gradually added at the level of a single indicator to highlight the facts that we and OECD deemed most relevant. For every indicator, we wanted to provide meaningful visualizations that match the curated purpose while still giving users the ability to explore all facets of a dataset.
  • Default breakdowns were curated, for instance. In the indicator about Internet users who buy online, an interesting fact is the disparity between income groups, so we selected these breakdowns to be displayed by default; users can then explore the many other breakdowns available with the controls below the chart.
  • Default view. The telling story is sometimes in the time trend, not in the latest numbers available, as in the indicator about the most cited documents in computer science.
  • Sorting order. Most often, high values are better and therefore appear on the left, which creates an implicit ranking. For some indicators, however, low values are better, as in the indicator about restrictiveness, where low values mean fewer restrictions. For these we inverted the sorting order so that the best values still appear on the left, as users have come to expect from the other charts (see the sketch after this list).
  • Stacking order. Stacked bars are useful for showing summed-up values; the colored segments themselves are more indications than values to scrutinize, since they are not easy to compare across breakdowns and countries. The breakdown at the bottom of the stack, however, can easily be compared horizontally, so we strove to always make the bottom breakdown meaningful. In the indicator about public spending on active labour market policies, for instance, “training” is an important facet of public policy, and having this breakdown at the bottom makes it possible to see how differently countries approach incentive programs.
  • High-low line. As soon as two breakdowns are selected, a range line is displayed. We assume that users are interested in measuring the dissimilarity between the breakdowns that they select for comparison. A high-low line visually quantifies this difference, like the gender disparity in the indicator about young women who can code.
  • Let users decide. Ultimately, we want users to be able to explore whatever they are interested in without getting lost in a forest of data. That’s why we define default indicators and default breakdowns to be displayed when users first land on an indicator page, while still leaving them the agency to tailor (and download) the charts.
  • There are many features that we see as improvements to the data visualizations already implemented and for the next iteration of the toolkit: animating stacked bar charts into single bars, an expanded view on mobile with horizontal scrolling, an annotation layer to further curate the data within the charts, and so on.
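
As referenced in the first item above, here is a minimal sketch of pulling a data cube; the endpoint pattern, dataset code and lack of response parsing are all assumptions made for illustration, not the toolkit’s actual integration with the OECD API.

```typescript
// A minimal sketch of fetching a data cube, assuming an SDMX-JSON-style
// endpoint on stats.oecd.org. URL pattern and dataset codes are assumptions.
async function fetchDataCube(datasetCode: string): Promise<unknown> {
  const url = `https://stats.oecd.org/SDMX-JSON/data/${datasetCode}/all/all`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`OECD API request failed: ${response.status}`);
  }
  // The SDMX-JSON payload would still need to be reshaped into flat
  // observations before it can feed a chart.
  return response.json();
}
```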
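
For the “latest year available, not latest year” rule, here is a minimal sketch assuming observations keyed by country and year; field names are illustrative.

```typescript
// A minimal sketch of the "latest year available" rule: for each country, keep
// the observation with the most recent year instead of filtering every country
// down to one common latest year. Field names are assumptions.
interface Observation {
  country: string;
  year: number;
  value: number;
}

function latestAvailable(observations: Observation[]): Observation[] {
  const latest = new Map<string, Observation>();
  for (const obs of observations) {
    const current = latest.get(obs.country);
    if (!current || obs.year > current.year) {
      latest.set(obs.country, obs);
    }
  }
  return [...latest.values()];
}
```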
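
And for the sorting order, here is a minimal sketch assuming a per-indicator flag that marks whether lower values are better; again an illustration rather than the production code.

```typescript
// A minimal sketch of the sorting rule: best values always on the left,
// whether "best" means high (the usual case) or low (e.g. restrictiveness).
// The lowerIsBetter flag is an assumption.
interface IndicatorConfig {
  id: string;
  lowerIsBetter: boolean;
}

function sortCountries<T extends { value: number }>(
  rows: T[],
  config: IndicatorConfig
): T[] {
  return [...rows].sort((a, b) =>
    config.lowerIsBetter ? a.value - b.value : b.value - a.value
  );
}
```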

The Toolkit came to life in close collaboration with the Going Digital team at OECD. We want to thank everybody who worked on the backend, the curation of the indicators, and the Toolkit in general for the fruitful collaboration!

The Going Digital Toolkit was designed and created at Interactive Things by Christoph Schmid, Solange Vogt, Tomas Carnecky, Flore de Crombrugghe, Jeremy Stucki, Benjamin Wiederkehr, Jan Wächter & Luc Guillemot.
