Research Results Part 3: Measuring the Impact of Data Literacy Efforts

January 21, 2016 in Research

Just as there is a wide range of methodologies for achieving data literacy in social change efforts, there is also a range of approaches to determining effectiveness. How thoroughly a data literacy practice can measure its effectiveness depends largely on that practice’s maturity. Participants who work for older, established organizations reported devoting considerable resources to monitoring and evaluation (M&E), whereas most individuals and participants from smaller organizations recognized that their evaluation options were very limited. Methodologies for evaluating the efficacy of data literacy efforts are not standardised. To measure the impact of data literacy work in resource-constrained environments, participants in our research drew on the following sources of information:

  • Analysis of data outputs: some participants mentioned that the impact of data work is relatively easy to measure (compared to other ICT-related initiatives) because it produces outputs that can be analyzed qualitatively.
  • Sentiment analysis: data literacy trainers frequently mentioned the importance of measuring outcomes by gauging participants’ reactions before a workshop ends, particularly in processes where follow-up is unlikely.
  • Watching for organizational (rather than only individual) change: for some trainers, the true impact of data literacy work shows in how data work becomes internalised in an organization’s programmes and staffing.
  • Direct skills assessment: since skills are difficult to evaluate without exams, some participants rely on self-reporting from their beneficiaries, comparing pre- and post-training surveys to see the impact of particular training processes.

Diversity in approach

More recently established data literacy efforts use basic evaluation forms distributed at the end of their trainings. Data literacy trainers frequently mentioned the importance of measuring outcomes by getting a feel for how people in their workshops react. This involves including questions about the setup and the fulfilment of expectations in post-workshop surveys, but also looking for signs of independent work and thoughtful questions. A few participants consider these signs of engagement crucial in processes where follow-up is unlikely.

Code for Africa has developed a robust set of success indicators that allow them to chart a path towards success, such as data responsibilities being included in organisational job descriptions. For some practitioners, the true impact of data literacy work can be seen in organizational change: will someone be hired to do data work in the organization? Do senior executives value data work more, and are they willing to allocate more funds for this type of work?

Some participants mentioned that the impact of data work is relatively easy to measure (compared to other ICT-related initiatives) because it produces outputs that can be analyzed qualitatively. A couple of organizations described detailed analysis frameworks for measuring the quality of stories in data journalism, for example – employing local data journalists to evaluate stories written before and after the training process in order to compare the performance of beneficiaries.

Some participants rely on self-reporting from their beneficiaries (through surveys that ask about their level of comfort with, or knowledge of, specific skills) and compare pre- and post-training responses to see the impact of particular training processes.
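To make the pre/post comparison concrete, here is a minimal sketch of how such survey responses could be tallied. The file names, the `participant_id` column and the 1–5 scoring scale are assumptions for illustration, not details taken from the research.

```python
# Minimal sketch of a pre/post self-assessment comparison.
# Assumed layout: one CSV row per participant, a participant_id column,
# and one column per skill scored 1-5 (hypothetical, for illustration only).
import csv


def load_scores(path):
    """Read a survey CSV into {participant_id: {skill: score}}."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {
        row["participant_id"]: {
            skill: int(value)
            for skill, value in row.items()
            if skill != "participant_id"
        }
        for row in rows
    }


def average_change(pre, post):
    """Mean post-minus-pre score per skill, over participants present in both surveys."""
    shared = pre.keys() & post.keys()
    skills = next(iter(pre.values())).keys()
    return {
        skill: sum(post[p][skill] - pre[p][skill] for p in shared) / len(shared)
        for skill in skills
    }


if __name__ == "__main__":
    pre = load_scores("pre_training_survey.csv")
    post = load_scores("post_training_survey.csv")
    for skill, change in sorted(average_change(pre, post).items()):
        print(f"{skill}: average change of {change:+.1f} points")
```

Even a simple per-skill average like this gives trainers a rough signal of where a workshop moved the needle, without requiring the resource-intensive evaluation machinery discussed below.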

Even though most participants had given thought to monitoring and evaluation in one way or another, few had developed frameworks to use before, during and after the implementation of a project or program. Many of these efforts still need to articulate what success would look like for their project, and then work backwards to understand what steps they need to accomplish to attain it.

Determining effective ways to measure impact

During a workshop on impact assessment provided to data literacy practitioners connected to the School of Data network in March 2015, participants concluded that ‘impact assessment’ may not be the appropriate term, as it implies the kind of robust, resource-intensive effort typically applied when evaluating public policy. There was a strong desire for lightweight methodologies that would help practitioners learn how to improve their offerings and deliver greater impact in the long term. They determined that such a methodology should contain some basic elements: establishing baselines, working with beneficiaries to define indicators, building in feedback loops, articulating clear and transparent goals, maintaining consistency across their programs, and taking the time to document.

While some exchange has begun between data literacy practitioners around evaluation methodologies that lead to learning and improved projects, continued dialogue in the School of Data network is needed to determine effective ways of measuring impact.

In our next post, ‘Sustainable Business Models for Data Literacy Efforts’, we will explore viable models and opportunities for data literacy practitioners to fund and support their work.
