2017 Summer Camp dispatch #1: Thank You

Cedric Lombion - October 3, 2017 in Event report


School of Data's 2017 Summer Camp has come to an end and was, by most metrics, a resounding success! This is especially true when one considers the leap of faith we took on several aspects: the first three days included a new mix of sessions, which we tried to broadcast live; the last two days featured an Open Training section with 70 (!) participants, which required its own dedicated event planning to make it work; and the full camp was documented on a live agenda, allowing for remote following and contribution.

The secret to making it work was to rely on both the skills of our staff and the power of our network. A round of thanks is consequently in order:

Joachim Mangilima, who has been School of Data’s conductor on the ground throughout the Summer Camp, and was able to produce the work of a full event team on his own;


Photo by Juan Casanueva, SocialTIC

Our own Meg Foulkes, who was the second magician working behind the scenes and who made sure, among other key contributions, that School of Data network members reached and left Tanzania safely;

SocialTIC, longtime School of Data network member and partner, who brought the Latin American team to the Summer Camp;

the IREX team, who has been involved from the very beginning and helped make the Open Training with YALI Fellows a reality;


The Data Collaboratives for Local Impact teams, who took responsibility for a huge part of the logistics involved in the Open Training, from the setting up of the training space to the amazing barbecue!

And finally, of course, the dozens of participants from the combined networks of School of Data, DCLI, dLab and Data Zetu who all contributed to make this event a success.


A thousand times thank you!


Ten Cool Things I Learned at DataJConf

Yan Naung Oak - August 18, 2017 in Events, Fellowship

This article was cross-posted from its original location at the Open and Shut blog

I had a fantastic time at the European Computational and Data Journalism Conference in Dublin on 6-7 July in the company of many like-minded data journalists, academics, and open data practitioners. There were a lot of stimulating ideas shared during the presentations on the first day, the unconference on the second day, and the many casual conversations in between!

In this post I’d like to share the ten ideas that stuck with me the most (it was tough to whittle it down to just ten!). Hopefully you’ll find these thoughts interesting, and hopefully they’ll spark some worthwhile discussions about data journalism and storytelling.

I’d really love to hear what you have to say about all of this, so please do share any thoughts or observations that you might have below the line!


The European Data and Computational Journalism Conference, Dublin, 6-7 July 2017

  1. ‘Deeper’ data journalism is making a real impact

Marianne Bouchart – manager of the Data Journalism Awards – gave a presentation introducing some of the most exciting award winners of 2017, and talked about some of the most important new trends in data journalism today. Perhaps unsurprisingly, given the electoral rollercoasters of the past year, a lot of great data journalism has been centred around electioneering and other political dramas.

Marianne said that “impact” was the theme that ran through the best pieces produced last year, and she really stressed the central role that investigative journalism needs to play in producing strong data-driven stories. She said that impactful investigative journalism is increasingly merging with data journalism, as we saw in projects shedding light on shady anti-transparency moves by Brazilian politicians, investigating the asset-hoarding of Serbian politicians, and exposing irresponsible police handling of sexual assault cases in Canada.

  2. Machine learning could bring a revolution in data journalism

Two academics presented on the latest approaches to computational journalism – journalism that applies machine learning techniques to dig into a story.

Marcel Broersma from the University of Groningen presented on an automated analysis of politicians’ use of social media. The algorithm analysed 80,000 tweets from Dutch, British and Belgian politicians to identify patterns of what he called the ‘triangle of political communication’ between politicians, journalists, and citizens.

The project wasn't without its difficulties, though: algorithmically detecting sarcasm still remained a challenge, and the limited demographics of Twitter users meant that this kind of research could only look at how certain narrow segments of society communicated.

Jennifer Stark from the University of Maryland looked at the possibilities for algorithms to be biased, specifically examining Google Image Search's representations of presidential candidates Hillary Clinton and Donald Trump during their campaigns. Using an image recognition API that detects emotions, she found that Clinton's pictures were biased towards showing her appearing happier, whereas for Trump both happiness and anger were overrepresented.

Although it’s still early days for computational journalism, talks like these hinted at exciting new data journalism methods to come!

  3. There are loads of ways to learn new skills!

The conference was held at the beautiful University College Dublin, where a brand new master’s program in data journalism is being launched this year. We also heard from one of the conference organisers, Martin Chorley, about Cardiff University’s Master’s in Computational and Data Journalism, which has been going strong for three years, and has had a great track record of placing students into employment.

But formal education isn't the only way to get those cutting-edge data journo skills! One of the conference organisers also presented the results of a worldwide survey of data journalists, taking in responses from 180 data journalists across 44 countries. One of the study's most notable findings was that only half of respondents had formal training in data journalism – the rest picked up the necessary skills all by themselves. Also, when asked how they wanted to further their skills, more respondents said they wanted to brush up on their skills in short courses rather than going back to school full-time.

  4. Want good government data? Be smart (and be charming)!

One of the most fascinating parts of the conference for me was learning about the different ways data journalists obtained data for their projects.

Kathryn Tourney from The Detail in Northern Ireland found Freedom of Information requests useful, but with the caveat that you really needed to know the precise structure of the data you were requesting in order to get the best results. Kathryn would conduct prior research on the exact schemas of government databases and work to get hold of the forms that the government used to collect the data she wanted before making the actual FOI requests. This ensured that there was no ambiguity about what she'd receive on the other side!

Conor Ryan from Ireland’s RTÉ found that he didn’t need to make FOI requests to do deep investigative work, because there was already a lot of government data “available” to the public. The catch was that this data was often buried behind paywalls and multiple layers of bureaucracy.

Conor stressed the importance of ensuring that any data sources RTÉ managed to wrangle were also made available in a more accessible way for future users. One example related to accessing building registry data in Ireland, where originally a €5 fee existed for every request made. Conor and his team pointed out this obstacle to the authorities and persuaded them to change the rules so that the data would be available in bulk in the future.

Lastly, during the unconference one story from Bulgaria really resonated with my own experiences trying to get a hold of data from governments in closed societies. A group of techies offered the Bulgarian government help with an array of technical issues, and by building relationships with staff on the ground – as well as getting the buy-in of political decision makers – they were able to get their hands on a great deal of data that would have forever remained inaccessible if they’d gone through the ‘standard’ channels for accessing public information.

  5. The ethics of data sharing are tricky

The best moments at these conferences are the ones that make you go: “Hmm… I never thought about it that way before!”. During Conor Ryan’s presentation, he really emphasized the need for data journalists to consider the ethics of sharing the data that they have gathered or analysed.

He pointed out that there's a big difference between analysing data internally and reporting on a selected set of verifiable results, and publishing the entire dataset from your analysis publicly. In the latter case, every single row of data becomes a potential defamation suit waiting to happen. This is especially true when the dataset involved is disaggregated down to the level of individuals!

  6. Collaboration is everything

Being an open data practitioner means that my dream scenarios are collaborations on data-driven projects between techies, journalists and civil society groups. So it was really inspiring to hear Megan Lucero talk about how The Bureau Local (at the Bureau of Investigative Journalism) has built up a community of civic techies, local journalists, and civil society groups across the UK.

Even though The Bureau Local was only set up a few months ago, they quickly galvanized this community around the 2017 UK general elections, and launched four different collaborative investigative data journalism projects. One example is their piece on targeted ads during the election campaign, where they collaborated with the civic tech group Who Targets Me to collect and analyse data about the kinds of political ads targeting social media users.

I’d love to see more experiments like The Bureau Local emerging in other countries as well! In fact, one of the main purposes of Open and Shut is precisely to build this kind of community for folks in closed societies who want to collaborate on data-driven investigations. So please get involved!


Who Targets Me? is an initiative working to collect and analyse data about the kinds of political ads targeting social media users.

  7. Data journalism needs cash – so where can we find it?

It goes without saying that journalism is having a bad time of it at the moment. Advertising and subscription revenues don't pull in nearly as much cash as they used to. Given that pioneering data-driven investigative journalism takes a lot of time and effort, the question that naturally arises is: "where do we get the money for all this?". Perhaps unsurprisingly, no-one at DataJConf had any straightforward answers to this question.

A lot of casual conversations in between sessions drifted onto the topic of funding for data journalism, and lots of people seemed worried that innovative work in the field is currently too dependent on funding from foundations. That being said, attendees also shared stories about interesting funding experiments being undertaken around the world, with the Korean Center for Investigative Journalism’s crowdfunding approach gaining some interest.

  8. Has data journalism been failing us?

In the era of "fake news" and "alternative facts", a recurring topic in many conversations was whether data journalism actually has any serious positive impact. During the unconference discussions, some of us ended up being sucked into the black-hole question of "What constitutes proper journalism anyway?". It wasn't all despair and navel-gazing, however, and we definitely identified a few concrete things that could be improved.

One related to the need to better represent uncertainty in data journalism. This ties into questions of improving the public’s data literacy, but also of traditional journalism’s tendency to present attention-grabbing leads and conclusions without doing enough to convey complexity and nuance. People kept referencing FiveThirtyEight’s election prediction page, which contained a sophisticated representation of the uncertainty in their modelling, but hid it all below the fold – an editorial decision, it was argued, that lulled readers into thinking that the big number that they saw at the top of the page was the only thing that mattered.


FiveThirtyEight’s forecast of the 2016 US elections showed a lot of details below the fold about their forecasting model’s uncertainty, but most readers just looked at the big percentages at the top.

Another challenge identified by attendees was that an enormous amount of resources were being deployed to preach to the choir instead of reaching out to a broader base of readers. The unconference participants pointed out that a lot of the sophisticated data journalism stories written in the run-up to the 2016 US elections were geared towards partisan audiences. We agreed that we needed to see more accessible, impactful data stories that were not so mired in party politics, such as ProPublica’s insightful piece on rising US maternal mortality rates.

  9. Data journalism can be incredibly powerful in the Global South

Many of the talks were about data journalism as it is practised in Western countries – with one notable exception. Eva Constantaras, who trains investigative data journalism teams in the Global South, gave a wonderful presentation about the impact of data journalism in the developing world. She gave the examples of IndiaSpend in India and The Nation in Kenya, and spoke about how their data-driven stories worked to identify problems that resonated with the public, and explain them in an accessible and impactful way.

Election coverage in these two examples shared by Eva focused on investigating the consequences of the policy proposals of politicians, engaging in fact-checking, and identifying the kinds of problems that were faced by voters in reality.

Without the burden of partisan echo-chambers, and because data journalism is still novel in many parts of the world, it could end up having a huge impact on public debate and storytelling in the Global South. Watch this space!


Kenya's The Nation has been producing data-driven stories more and more frequently, such as this piece on Kenya's Eleventh Elections in August 2017.

  10. Storytelling has to connect on a human level

If there was one recurring theme that I heard throughout the conference about what makes data journalism impactful, it was that the data-driven story has to connect on a human level. Eva had a slide in her talk with a quote from John Steinbeck about what makes a good story:

“If a story is not about the hearer he [or she] will not listen… A great lasting story is about everyone, or it will not last. The strange and foreign is not interesting – only the deeply personal and familiar.”

“I want loads of money” — Councillor Hugh McElvaney caught on hidden camera video from RTÉ

Conor from RTÉ also drove the same point home. After his team’s extensive data-driven investigative work revealed corruption in Irish politics, the actual story that they broke involved a hidden-camera video of an undercover interview with one of these politicians. This video highlighted just one datapoint in a very visceral way, which ultimately resonated more with the audience than any kind of data visualisation could.


I could go on for longer, but that’s probably quite enough for one blog post! Thanks for reading this far, and I hope you managed to gain some nice insights from my experiences at DataJConf. It was a fascinating couple of days, and I’m looking forward to building upon all of these exciting new ideas in the months ahead! If any of these thoughts have got you excited, curious (or maybe even furious) we’d love to hear from you below the line.

Open & Shut is a project from the Small Media team. Small Media are an organisation working to support freedom of information in closed societies, and are behind the portal Iran Open Data.


Data is a Team Sport: Government Priorities and Incentives

Dirk Slater - August 13, 2017 in Data Blog, Event report, Research

Data is a Team Sport is our open-research project exploring the data literacy eco-system and how it is evolving in the wake of post-fact, fake news and data-driven confusion.  We are producing a series of videos, blog posts and podcasts based on a series of online conversations we are having with data literacy practitioners.

To subscribe to the podcast series, cut and paste the following link into your podcast manager: http://feeds.soundcloud.com/users/soundcloud:users:311573348/sounds.rss or find us in the iTunes Store and Stitcher.

The conversation in this episode focuses on the challenges of getting governments to prioritise data literacy, both externally and internally, and on the incentives to produce open data. It features:

  • Ania Calderon, Executive Director at the Open Data Charter, a collaboration between governments and organisations working to open up data based on a shared set of principles. For the past three years, she led the National Open Data Policy in Mexico, delivering a key presidential mandate. She established capacity building programs across more than 200 public institutions.
  • Tamara Puhovski, a sociologist, innovator, public policy junkie and open government consultant. She describes herself as a time traveler journeying back to 19th and 20th century public policy centers and trying to bring them back to the future.

Notes from the conversation:

Access to government-produced open data is critical for healthy, functioning democracies. It takes an eco-system that includes a critical-thinking citizenry, knowledgeable civil servants, incentivised elected officials, and smart open-data advocates. Everyone in the eco-system needs to be focused on long-term goals.

  • Elected officials need incentivising beyond monetary arguments, as budgetary gains can take a long time to come to fruition.
  • Governments' capacity to produce open data is an issue that needs greater attention.
  • We need to get past simply making arguments for open data and instead provide solid stories and examples of its benefits.

Resources mentioned in the conversation:

Also, not mentioned, but be sure to check out Tamara’s work on Open Youth

View the full online conversation:


Data is a Team Sport: Mentors, Mediators and Mad Skills

Dirk Slater - August 7, 2017 in Community, Data Blog, Event report

Data is a Team Sport is our open-research project exploring the data literacy eco-system and how it is evolving in the wake of post-fact, fake news and data-driven confusion.  We are producing a series of videos, blog posts and podcasts based on a series of online conversations we are having with data literacy practitioners.

To subscribe to the podcast series, cut and paste the following link into your podcast manager: http://feeds.soundcloud.com/users/soundcloud:users:311573348/sounds.rss or find us in the iTunes Store and Stitcher.

This episode features:

  • Emma Prest oversees the running of DataKind UK, leading the community of volunteers and building understanding about what data science can do in the charitable sector. Emma sits on the Editorial Advisory Committee at the Bureau of Investigative Journalism. She was previously a programme coordinator at Tactical Tech, providing hands-on help for activists using data in campaigns. 
  • Tin Geber has been working on the intersection of technology, art and activism for most of the last decade. In his previous role as Design and Tech Lead for The Engine Room, he developed role-playing games for human rights activists; collaborated on augmented reality transmedia projects; and helped NGOs around the world to develop creative ways to combine technology and human rights.

In this episode we take a deep dive into how to get organisations beyond 'data literacy' and towards 'data maturity', where organisations understand what good practice looks like when running a data project. Some main points:

  • A red flag that a data project will end in failure is when the goal is the implementation of a tool rather than a mission-critical objective.
  • Training in itself can be helpful for hard skills, such as how to do analysis, but when it comes to running data projects, a lot of hand-holding is needed and mentorship is more effective.
  • A critical role in an organisation is played by people who can champion tech and data work, and they need better support in that role.
  • Fake news and data-driven confusion have made understanding good data practice even more important.

DataKind UK’s resources:

Tin’s resources:

Resources that are inspiring Emma’s Work:

Resources that are inspiring Tin’s work:

  • DataBasic.io – a suite of easy-to-use web tools for beginners that introduce concepts of working with data
  • Media Manipulation and Disinformation Online – Report from Data and Society on how false or misleading information is having real and negative effects on the public consumption of news.
  • Raw Graphs – The missing link between spreadsheets and data visualization

View the full online conversation:


Data is a Team Sport: Government Incentives for Data Literacy

Dirk Slater - August 2, 2017 in Uncategorized

Data is a Team Sport is a series of online conversations examining the data literacy ecosystem. We seek to capture learnings about the ever-changing field of data literacy and how it is evolving in response to concepts like 'big data', 'post-fact' and 'data confusion'. This open research project by School of Data, in collaboration with FabRiders, will produce a series of podcasts and blog posts as we engage data literacy practitioners with particular expertise within the ecosystem (e.g. investigative journalism, advocacy and activism, academia, government) in conversation.

You can view previous online conversations and access the podcast series.

You can join the conversation (see RSVP below) and provide inputs into the research we are conducting. During each online conversation we will give participants an opportunity to ask questions and share their own insights on the topic.

Our next online conversation will take a look at government incentives to prioritise data literacy and will take place on Tuesday, August 8th at 9:00 PDT, 12:00 EDT, 17:00 BST, 18:00 CEST, 19:00 EAT/Istanbul, 21:30 India & 23:00 Bangkok with:

  • Ania Calderon has recently taken on the role of Executive Director at the Open Data Charter, a collaboration between governments and organisations working to open up data based on a shared set of principles. For the past three years, she led the National Open Data Policy in Mexico, delivering a key presidential mandate. She established capacity building programs across more than 200 public institutions, developed tools and platforms to enable the release of standardised data, built channels to increase the ability of citizens to inform data release and started a national open data network of over 40 cities working to improve service delivery.
  • Tamara Puhovski is a sociologist, innovator, public policy junkie and an open government consultant. With experience in the civil society sector, academia and international organisations, as well as the civil service and diplomacy, she is constantly finding new ways to bring various sectors together and to identify and pursue innovative, public-good-oriented solutions. She describes herself as a time traveler journeying to 19th and 20th century public policy centers and trying to bring them back to the future. She holds bachelor's degrees in both Sociology and Political Science and a master's in European Union studies, and is about to finish a second master's in Diplomacy.

Your hosts:

You can view the conversation live here:

 


Data is a Team Sport: One on One with Friedhelm Weinberg

Dirk Slater - July 29, 2017 in Data Blog, Event report, Research

Data is a Team Sport is our open-research project exploring the data literacy eco-system and how it is evolving in the wake of post-fact, fake news and data-driven confusion.  We are producing a series of videos, blog posts and podcasts based on a series of online conversations we are having with data literacy practitioners.

To subscribe to the podcast series, cut and paste the following link into your podcast manager: http://feeds.soundcloud.com/users/soundcloud:users:311573348/sounds.rss or find us in the iTunes Store and Stitcher.

Friedhelm Weinberg is the Executive Director of Human Rights Information and Documentation Systems (HURIDOCS), an NGO that supports organisations and individuals to gather, analyse and harness information to promote and protect human rights.  In this conversation we take a look at what it takes to be both a tool developer and a capacity builder, and how the two disciplines can inform and build upon each other.  Some of the main points:

  • The capacity building work needs to come first and inform the tool development.
  • It’s critical that human rights defenders have a clear understanding of what they want to do with the data before they start collecting it.
  • It's critical for human rights defenders to have their facts straight, as this is what counts most in international courts of law and cuts through 'fake news'.
  • Machine learning has enormous potential for documenting human rights abuses, as it can process large amounts of casework.
  • They have been successful in bringing developers in-house by making efforts to help them better understand how the capacity builders work, and vice versa.

Specific projects within HURIDOCS that he talked about:

  • Uwazi is an open-source solution for building and sharing document collections
  • The Collaboratory is their knowledge sharing network for practitioners focusing on information management and human rights documentation.

Readings/Resources that are inspiring his work:

View the full online conversation:


The Genesis of The School of Data Fellowship

Katelyn Rogers - July 20, 2017 in Fellowship

In 2013, data literacy was, and in many ways remains, a nascent field. Unsurprisingly, finding reliable trainers to carry out School of Data missions around the world was a struggle. We started our Fellowship programme as a way to address the lack of data literacy trainers throughout the world. Even in 2013, it was clear that while short-term data trainings were effective at raising awareness of potential uses of data for storytelling and advocacy, more long-term interventions were required to actually build data skills in civil society and the media. We designed the School of Data Fellowship to address these two primary challenges that we had identified and were regularly confronting during the course of our work:

  1. there is a severe shortage of data trainers able to work with local communities and adapt training to local needs and/or languages.
  2. organisations and individuals need to engage with data over a long period of time for data activities to become embedded within their work.

Building the foundations

Our Fellowships are nine-month placements with School of Data for existing data-literacy practitioners. We identify high-potential individuals with topical expertise and help them mature as data literacy leaders by working alongside School of Data and our global network. At the start of the Fellowship, we create an individualised programme with each Fellow, designed to equip them with the skills they need to more effectively further data literacy in their community. This programme is built around the core competencies required for furthering data literacy: community building, content creation and knowledge transfer (see the Data Literacy Activity Matrix for more details on these competencies).

From the outset, we were successful at recruiting high-potential individuals to participate in the programme, and over the years the applicant pool has only grown. We have worked with the Fellows to adapt and translate materials, develop original learning content and provide training to local civil society. Each year, we make tweaks to the programme to reflect learnings both from where we have achieved our goals and from where we have fallen short.

An evolving process

Over the years, we have fine-tuned the goals of the programme to reflect what we have found the Fellowship programme to be most effective at achieving as well as what is needed to advance data literacy. These goals are as follows:

  1. identify, train and support individuals who have the potential to become data leaders and sources of expertise in their country and/or region;

  2. kickstart, or strengthen, data literacy communities in the countries where current and former Fellows are active.

Prior to 2016, we had not clearly articulated that kickstarting data literacy communities was one of the goals of the Fellowship programme but it had become obvious that this was a critical component to the sustainability of our work. Given that data literacy is such a nascent field, it was always important, in each new city/country, for the Fellows to do substantial awareness raising work. The Fellows who were most successful would provide trainings and organise meet-ups not necessarily to build individual skills but to start sensitising local communities to the idea that data is a powerful tool for civil society.

A successful approach

In late 2016, we conducted interviews with two dozen School of Data Fellows to better understand whether we were achieving our goals as a programme. These interviews formed the basis of our first Fellowship Outcomes Mapping. Some of the highlights of these interviews can be found below.

The Fellows:

We found that the Fellowship has been successful in achieving its initial goal, creating a community of qualified local trainers knowledgeable in School of Data methodologies and actively spreading data literacy in their respective countries:

  1. Better Understanding of the Data Needs and Challenges of Civil Society: Over the years, we have recruited a number of developers, data analysts and entrepreneurs, who, prior to the Fellowship, had little understanding of the specific challenges faced by civil society in using data. Through working with local NGOs, governments and newsrooms, these Fellows gained an understanding of how they could use their skills to serve civil society more effectively.

  2. New Methodologies & Approaches for Training: Through the Fellowship programme, Fellows were able to tap into a network of data literacy practitioners and learn from the best about how to build an effective training programme for any audience.

  3. International Visibility & Connections: Finally, through the School of Data programme, Fellows were introduced to an international community, increasing both the visibility of their work and providing them with a number of new and exciting opportunities to train and to be recruited for consultancies and jobs. Fellows have gone on to work for large newsrooms, international organisations, development agencies and governments.

The local communities

In addition to supporting Fellows to achieve their own goals and personal development, the Fellowship programme also seeks to strengthen data literacy within local civil society. The potential of the Fellowship to have a meaningful impact on local civil society groups was formally acknowledged in 2016, with the inclusion of a specific programmatic goal relating to community-building. As seen in School of Data’s research on the value of different formats of data literacy activities, the Fellowship format is most successful in achieving outcomes related to awareness-building (understanding of data uses, awareness of data skill gaps, knowledge of the data pipeline) as well as the kickstarting of data-related activities locally.

This awareness-raising work is required in every sector. The existence of an emerging data community focused on transparency and accountability in public finance or extractives does not necessarily mean that local health or water CSOs will be sold on the idea of integrating more data into their work. To reflect these learnings, in 2016 we started recruiting Fellows with a particular topical interest or expertise who would work on data literacy in that specific sector.

Next Steps

We are continuously working to improve the Fellowship process and are overjoyed that most of our past Fellows go on to become active members of the School of Data network. Over the next few months, we will be posting a series of articles about the Fellowship programme, including:

  • Steps we have taken to ensure diversity in each Fellowship class as well as the challenges we still face in terms of inclusivity
  • Funding the low-visibility infrastructure-building work that is a critical part of the Fellowship process
  • How and where we have struggled to make the Fellowship model work, and the plans we have for changing that

We welcome any thoughts and feedback that you have. Get in touch on Twitter @schoolofdata or via our contact page.


Data is a Team Sport: One on One with Heather Leson

Dirk Slater - July 19, 2017 in Community, Data Blog, Event report, Research

Data is a Team Sport is our open-research project exploring the data literacy eco-system and how it is evolving in the wake of post-fact, fake news and data-driven confusion.  We are producing a series of videos, blog posts and podcasts based on a series of online conversations we are having with data literacy practitioners.

To subscribe to the podcast series, cut and paste the following link into your podcast manager: http://feeds.soundcloud.com/users/soundcloud:users:311573348/sounds.rss or find us in the iTunes Store and Stitcher.

This episode features a one on one conversation with Heather Leson, the Data Literacy Lead at the International Federation of Red Cross and Red Crescent Societies. As a technologist, she strengthens community collaboration via humanitarian technologies and social entrepreneurship. She builds partnerships, curates digital spaces, fosters volunteer engagement and delivers training while inspiring systems for co-creation with maps, code and data. At the International Federation of Red Cross and Red Crescent Societies, her mandate includes global data advocacy, data literacy and data training programs in partnership with the 190 national societies and the 13 million volunteers. She is a past Board Member at the Humanitarian OpenStreetMap Team (4 years) and Peace Geeks (1 year), and an Advisor for MapSwipe, which uses gamification to crowdsource disaster-based satellite imagery. Previously, she worked as Social Innovation Program Manager at the Qatar Computing Research Institute (Qatar Foundation), as Director of Community Engagement at Ushahidi, and as Community Director at Open Knowledge (School of Data).

Main Points from the Conversation:

  • Data protection is the default setting for humanitarian organisations collecting data.
  • She's found it's critical to focus on people and what they are trying to accomplish, as opposed to focusing on tools.
  • She’s added ‘socialisation’ as the beginning step to the data pipeline.

Heather’s Resources

Blogs/websites

Heather’s work

The full online conversation:


Rethinking data literacy: how useful is your 2-day training?

Cedric Lombion - July 14, 2017 in Research

As of July 2017, School of Data's network includes 14 organisations around the world, which collectively organise hundreds of data literacy events every year. The success of this network-based strategy did not come naturally: we had to rethink and move away from our MOOC-like strategy in 2013 in order to be more relevant to the journalists and civil society organisations we intend to reach.

In 2016 we did the same for our actual events.

The downside of short-term events

Prominent members of the civic tech community have long complained about the ineffectiveness of hackathons at building long-lasting solutions for the problems they intended to tackle. Yet various reasons have kept the hackathon popular: it's short-term, can produce decent-looking prototypes, and is well-known even beyond civic tech circles.

The same holds true for the data literacy movement and its most common short-term events: meetups, data and drinks, one-day trainings, two-day workshops… they're easy to run, fund and promote: what's not to love?

Well, we've never really been satisfied with the outcomes we saw from these events, especially for our flagship programme, the Fellowship, which we monitor very closely and aim to improve every year. Following several rounds of surveys and interviews with members of the School of Data network, we were able to pinpoint the issue: our expectations and the actual value of these events are mismatched, leading us not to take the critical actions that would multiply their value.

The Data Literacy Activity Matrix

To clarify our findings, we put the most common interventions (not all of them are events, strictly speaking) in a matrix, highlighting our key finding that duration is a crucial variable. And this makes sense for several reasons:

  • Fewer people can participate in a longer event, but those who can are generally more committed to the event’s goals

  • Longer events have much more time to develop their content and explore the nuances of it

  • Especially in the field of data literacy, which is focused on capacity building, time and repetition are key to positive outcomes

Data Literacy Activity Matrix

(the categories used to group event formats are based on our current thinking of what makes a data literacy leader: it underpins the design of our Fellowship programme.)

Useful for what?

The matrix allowed us to think critically about the added value of each subcategory of intervention. What is the effective impact of an organisation doing mostly short-term training events compared to another one focusing on long-term content creation? Drawing again from the interviews we’ve done and some analysis of the rare post-intervention surveys and reports we could access (another weakness of the field), we came to the following conclusions:

  • very short-term and short-term activities are mostly valuable for awareness-raising and community-building.

  • real skill-building happens through medium to long-term interventions

  • content creation is best focused on supporting skill-building interventions and data-driven projects (rather than hoping that people come to your content and learn by themselves)

  • data-driven projects (run in collaboration with your beneficiaries) are the ones creating the clearest impact (but not necessarily the longest lasting).

Data Literacy Matrix - Value Added

It is important, though, not to set short-term and long-term interventions in opposition. Not only can the difference be fuzzy (a long-term intervention can be a series of regular, linked, short-term events, for example), but both play roles of critical importance: who is going to apply to a data training if people are not aware of the importance of data? Conversely, recognising the specific added value of each intervention also requires acting accordingly: we advise against organising short-term events without establishing a community engagement strategy to sustain the event's momentum.

In hindsight, all of the above may sound obvious. But it is mostly relevant from the perspective of the beneficiary. From the point of view of the organisation running a data literacy programme, the benefit/cost calculation is defined differently.

For example, short-term interventions are a great way to find one’s audience, get new trainers to find their voice, and generate press cheaply. Meanwhile, long-term interventions are costly and their outcomes are harder to measure: is it really worth it to focus on training only 10 people for several months, when the same financial investment can bring hundreds of people to one-day workshops? Even when the organisation can see the benefits, their funders may not. In a field where sustainability is still a complicated issue many organisations face, long-term actions are not a priority.

Next steps

School of Data has taken steps to apply these learnings to its programmes.

  • The Curriculum programme, which initially focused on the production and maintenance of online content available on our website, has been expanded to include offline trainings during our annual event, the Summer Camp, and online skillshares throughout the year;

  • Our recommendations to members regarding their interventions systematically refer to the data literacy matrix in order for them to understand the added value of their work;

  • Our Data Expert programme has been designed to include both data-driven project work and medium-term training of beneficiaries, differentiating it further from straightforward consultancy work.

We have also identified three directions in which we can research this topic further:

  • Mapping existing interventions: the number, variety and quality of data literacy interventions is increasing every year, but so far no effort has been made to map them, in order to identify the strengths and gaps of the field.

  • Investigating individual subgroups: the matrix is a good starting point for interrogating best practices and concrete outcomes in each of the subgroups, in order to provide more granular recommendations to actors in the field and to inform the design of new intervention models.

  • Exploring thematic relevance: the audience, goals and constraints of, say, data journalism interventions differ substantially from those of interventions undertaken within the extractives data community. Further research would be useful to see how they differ, in order to develop topic-relevant recommendations.


Data is a Team Sport: Advocacy Organisations

Dirk Slater - July 12, 2017 in Data Blog, Event report, Research

Data is a Team Sport is our open-research project exploring the data literacy eco-system and how it is evolving in the wake of post-fact, fake news and data-driven confusion.  We are producing a series of videos, blog posts and podcasts based on a series of online conversations we are having with data literacy practitioners.

To subscribe to the podcast series, cut and paste the following link into your podcast manager: http://feeds.soundcloud.com/users/soundcloud:users:311573348/sounds.rss or find us in the iTunes Store and Stitcher.

In this episode we discussed data driven advocacy organisations with:

  • Milena Marin is Senior Innovation Campaigner at Amnesty International. She currently leads Amnesty Decoders, an innovative project aiming to engage digital volunteers in documenting human rights violations using new technologies. Previously she worked as programme manager of School of Data. She also worked for over 4 years with Transparency International, where she supported TI's global network to use technology in the fight against corruption.
  • Sam Leon is Data Lead at Global Witness, focusing on the use of data to fight corruption and how to turn this information into change-making stories. He is currently working with a coalition of data scientists, academics and investigative journalists to build analytical models and tools that enable anti-corruption campaigners to understand and identify corporate networks used for nefarious and corrupt practices.

Notes from the Conversation

In order to get their organisations to see the value and benefit of using data, they both have had to demonstrate results and have looked for opportunities where they could show effective impact. What data does for advocacy is show the extent of the problem and provide depth to qualitative and individual stories. Milena credits the work of School of Data for the fact that journalists now expect there to be accessible data from Amnesty to back up its findings.

  • They see a gap between the way advocates can view data and new technologies as easy answers to their challenges, and the realities of implementing complex projects that utilise them.
  • In today's post-fact world, they find that the term is used as a tactic to discredit their work more quickly, and as a result they need to work harder at presenting verifiable data.
  • Amnesty's Decoders project has involved 45,000 volunteers and, along with enabling the review of a huge amount of video, has had the side benefit of providing those volunteers with a deeper understanding of what Amnesty does.
  • Global Witness has released a limited number of datasets to the public, but a lot more needs to be learned about the implications of releasing open datasets before that can become the default.
  • Intermediaries and external experts are the only way for advocacy organisations to cover the gaps in their own expertise around data.

More about their work

Milena

Sam

Resources and Readings

From FabRiders

View the Full Conversation:

 
