Project 3: Measuring visualisation engagement

This project focuses on capturing and understanding users’ engagement with data visualisations, through the iterative design of a mechanism that enables users to rate online visualisations. Like Project 2, it builds on Seeing Data research, in which a technique called Talking Mats was used to capture participants’ responses to visualisations. The Talking Mat is a 2 x 2 grid on which users locate thumbnail images in order to express feelings or preferences (see appended image). It was first used on Seeing Data as a means of visual elicitation, then as a tool for analysing responses to visualisations and for communicating about Seeing Data research. It is currently used on the Seeing Data website, where visitors can record their responses to visualisations and compare these with the responses of other visitors. It has been positively received by visualisation professionals (e.g. Tableau), who value its potential to capture, accumulate and evaluate responses to data visualisations on a large scale.

This project will develop an open source widget that works across visualisations, platforms and devices and is freely available to all designers, in order to enable more effective evaluation of visualisations. The project will develop understanding of: user engagement with visualisations; whether effectiveness can be measured; and how it can be defined. No such research has previously been undertaken, and in this sense the project is timely and novel. The project will use iterative design methods, refining the widget in cycles which engage target users through beta testing of visualisations produced by network partners and other data visualisers. It will also engage with critical debates about data visualisations, how they are seen to be engaging and the meanings that are attached to them.
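To illustrate the kind of mechanism described above, the following is a minimal sketch of how a Talking-Mat-style 2 x 2 grid might record and aggregate visitor responses. All names here (`Quadrant`, `RatingResponse`, `toQuadrant`, `tally`) and the choice of axes (liking and understanding) are illustrative assumptions, not the project's actual design or API.

```typescript
// Hypothetical model of a Talking-Mat-style 2 x 2 rating grid.
// Assumption: the horizontal axis encodes liking and the vertical axis
// encodes understanding; the real widget may define its axes differently.

type Quadrant =
  | "like-understand"
  | "like-dont-understand"
  | "dislike-understand"
  | "dislike-dont-understand";

interface RatingResponse {
  visualisationId: string;
  quadrant: Quadrant;
}

// Map a drop position (0..1 on each axis) to one of the four quadrants.
function toQuadrant(x: number, y: number): Quadrant {
  const like = x >= 0.5;
  const understand = y >= 0.5;
  if (like) return understand ? "like-understand" : "like-dont-understand";
  return understand ? "dislike-understand" : "dislike-dont-understand";
}

// Aggregate responses so that one visitor's placement can be compared
// with the accumulated placements of other visitors.
function tally(responses: RatingResponse[]): Record<Quadrant, number> {
  const counts: Record<Quadrant, number> = {
    "like-understand": 0,
    "like-dont-understand": 0,
    "dislike-understand": 0,
    "dislike-dont-understand": 0,
  };
  for (const r of responses) counts[r.quadrant] += 1;
  return counts;
}
```

A design along these lines keeps the widget's data model trivially small (one quadrant per response), which is what would make large-scale capture and cross-visualisation comparison practical.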
In producing measurements of visualisation effectiveness, this project explores questions about the role of metrics in contemporary culture (Beer 2016, Day 2014, Kennedy 2016, Porter 1996), reflecting on debates about the politics and consequences of metrics and data extraction.

The main objective is to explore whether it is possible to capture and understand people’s engagement with data visualisations on a large scale and through automated means, and what this reveals about metrics and data extraction. Subsidiary objectives are:

  • To develop new knowledge about rating and measuring the effectiveness of visualisations;
  • To develop a freely available open source tool for capturing data about user responses to visualisations;
  • To contribute to visualisation usage evaluation;
  • To contextualise findings in relation to debates about metric power;
  • To make a contribution to the field of critical data studies.