Research Results Part 5: Improving Data Literacy Efforts

Dirk Slater - February 5, 2016 in Research

As technologies advance and the accessibility of data becomes ubiquitous, data literacy skills will likely gain increasing importance. The School of Data training resources have already laid an important foundation for social change efforts to harness data and improve their impact. Going forward, School of Data local communities will have to take into account their role as stewards of the curriculum, and continue to develop and incorporate new learnings as access to data continues to increase.

From what we (Mariel Garcia and I) have learned by conducting this research, we make the following recommendations:

  • Training the trainers: The School of Data curriculum is the foundation for much of the data literacy training happening both inside and outside the School of Data network, as reported by interviewees. It would make sense to focus efforts on preparing materials not just for learner consumption, but also in a curriculum format for trainers.
  • More research on pedagogical methods: Additional research into, and establishment of, effective pedagogical methods for data literacy training would be beneficial – many interviewees mentioned the importance of this topic, yet had no resources to share about it. In this regard, Peer to Peer University is the participant that has invested the most resources into this understanding, and is a great ally going forward in this area.
  • More knowledge-sharing within the network: The School of Data network also functions as a ‘community of practice’ for trainers who share advice and tips on providing data literacy training, but this could be strengthened by actively promoting conversations around the topics covered in this research.
  • Measuring the impact: As with many initiatives, impact evaluation is an area in which data literacy work can still grow. Both the School of Data local communities and data literacy-related organisations need much stronger articulations of their long-term goals and intended short-term impact. School of Data events might be a good space to have the necessary conversations to find evaluation frameworks that work for different work formats and budgets. Some organisations outside of the School of Data network (IREX and Internews) have worked extensively on this, and could be good references going forward.
  • Promoting long-term engagements: The research showed that only older, established organisations had started long-term projects and engagements related to data literacy. Consequently, it might make sense for School of Data to help smaller and newer organisations within its community to start and sustain long-term engagements, by helping them find the necessary resources. This could provide an important focal point for collaborations within the network, as it will likely yield important learnings.
  • Data literacy at the organisation level: Articulate how individual data literacy training can complement and support the long-term engagements that lead to organisational data literacy. Building local fellowship programs that engage social change organisations over the long term and build their capacity to utilise data in their campaigns will likely lead to deeper alliances and joint funding opportunities.
  • Better collaboration with outside partners: The project would stand to benefit from more linkages and collaborations with academia and open data-related civil society efforts. Additionally, more efforts can be made to improve the accessibility of the School of Data curriculum, methodologies and trainings. This will likely lead to more diverse and sustainable funding.

The goal of this research was to empower the School of Data Steering Committee to make strategic decisions about the programme going forward, and to help the School of Data network members build on the successes to date. We hope that by providing this research and these recommendations in an accessible format, both School of Data and the wider network of data literacy practitioners will benefit. Hopefully, these research results will complement and contribute to the School of Data’s goal of improving the impact of social change efforts through data literacy.

In our next and final blog post, we will present a list of resources and references we used during our research.


Research Results Part 4: Which Business Models for Data Literacy Efforts?

Dirk Slater - January 29, 2016 in Research

After researching the definitions of data literacy, along with the methodologies and impact of data literacy efforts, we looked into the question of business models: are there sustainable business models that can support data literacy efforts in the long term? Along with looking at how data literacy efforts can support themselves financially, we also looked for opportunities for linkages with other efforts.

No clear business model for sustainability

Many of the School of Data local communities and external organisations that provide data literacy training use a mix of foundation funding and fee-for-service work to sustain themselves. The organisations using this model are opportunistic in getting organisations and individuals to pay for trainings when they can, but a lack of understanding about data processes among clients is often a problem. At this point, there is no clear ‘sustainable business model’ that would direct data literacy organisations towards longevity. To better understand the business models of established organisations working at the intersection of technology and social change, we looked at two of them: Aspiration and Tactical Technology Collective.

Aspiration

Aspiration is a US-based NGO that operates globally, providing a range of services to build capacity and community around technology for social change. Over the last decade, under the leadership of Allen Gunn (‘Gunner’), Aspiration has gained a strong reputation for delivering trainings and events that focus on strategic and tangible outcomes while strengthening communities of practice. Aspiration has championed an approach known as participatory events, developing knowledge-sharing and leadership development methodologies that prioritize active, dialog-based learning. The philosophy and design focus on maximizing interaction and peer learning while making spare use of one-to-many and several-to-many session formats such as presentations and panels. Aspiration has been able to scale their model across a range of meeting sizes and purposes, from smaller team strategy sessions and retreats to large-scale events such as the Mozilla Festival, which brings together over 1,500 participants.

Aspiration’s services are in high demand, with clients ranging from small civil society organisations to larger international NGOs and foundations. They have seen a gradual reduction in reliance on grants, and now generate the majority of their funds through earned income from strategic consulting services, events, and trainings. In order to scale service delivery, program staff have all been trained in the unique skill sets required for delivering participatory events and providing strategic services within the Aspiration frame of analysis. The organization now has five full-time staff able to deliver both live events and strategic services.

Tactical Technology Collective

Fee-for-service work on utilising data in social change has found a growing market in the civil society sector over the last five to seven years. However, many social change organisations have been unaware of the amount of resources and effort it takes to analyse data and produce outputs such as data visualisations and infographics. A more mature organisation with experience in utilising information and technologies in activism, Tactical Technology Collective, set up a social enterprise, Tactical Studios, to undertake data-driven projects for large-scale NGOs. They attempted to better educate clients by engaging them in the creation of design briefs and a more intentional process. Tactical Studios was only marginally successful, as most advocacy- and activist-connected organisations look for quick, low-cost solutions for their data visualisations.

Using collaborations and linkages to improve the understanding of data needs

A hopeful example of developing the capacity of organisations to understand the amount of resources needed to utilise data is the School of Data’s Embedded Fellowship with Global Witness. Through a six-month engagement, the fellow, Sam Leon, was able to provide data trainings at all levels of the organisation – from senior management to front-line staff. This has helped the organisation, rather than just individuals, to improve its data literacy. What this points to is a need to differentiate between individual data literacy and organisational data literacy. While the School of Data curriculum addresses individual data literacy, efforts like the fellowship programme, with their long-term engagements, succeed in building organisational capacities. Being more intentional in articulating both the difference and how they complement each other will likely lead to a greater ability to raise funds and develop deeper relationships with allies.

Other potential areas for linkages and collaboration on the School of Data curriculum that could lead to greater sustainability for data literacy organisations:

    • Schools and universities that are interested in expanding their course offerings to better address data literacy amongst students. An opportunity for chapters is to work with local academia in adapting the School of Data curriculum to address the needs of students who will potentially be using open data in their careers. Teachers also need training on the pedagogy of understanding data and its contexts, as opposed to understanding how to use tools. Academic grants and funding could support this adaptation.
    • Civil society efforts that are working towards the release of data, particularly by governments, for use in the public domain. One area with a strong need for greater data literacy is the open government, transparency and accountability movements, whose expertise lies in pressuring governments through advocacy campaigns to release data. Many do not have the capacity to provide training to those who might actually use the data. In this regard, a conclusion of the International Open Data Conference in 2015 (as stated in its final report) poses the need for work to identify and embed core competencies for working with open data within existing organizational training, formal education, and informal learning programs.
    • Development initiatives, particularly those focused on supporting an emerging private sector that will be inspired to use data in innovation. The accessibility and usability of open data could be exploited by the private sector in ways that could expand data literacy in emerging economies. Current development initiatives could greatly benefit from engaging with School of Data chapters and with the curriculum.

In order to sustain a long-term data literacy initiative, funding will likely need to come from a mix of foundation funding and fee-for-service work, achieved by expanding the diversity of clients, collaborations and linkages. As open data usability and access continue to improve, it will be critical that data literacy organisations stay on top of future trends and continue to shape their curriculum to meet the needs of the communities they aim to serve. Hopefully, funders and social change organisations will also continue to evolve in their understanding of the importance of data and the resources involved in making it useful to stakeholders. As a network, the School of Data local communities will need to share information about how they grow and evolve sustainable business models.

In our next blog post ‘Recommendations for Improving Data Literacy Efforts’ we will discuss the conclusions that we have made as a result of undertaking this research effort.


Improve Your Data Literacy: 16 Blogs to Follow in 2016

Cedric Lombion - January 22, 2016 in Tips

Learning data literacy is a never-ending process. Going to workshops and hands-on practice are important, but to really become acquainted with the “culture” of data literacy, you’ll have to do a lot of reading. Don’t worry, we’ve got your back: below is a curated list of 16 blogs to follow in 2016 if you want to: improve your data-visualisation skills; see the best examples of data journalism; discover the methodology behind the best data-driven projects; and pick-up some essential tips for working with data.


Using Feedly as your RSS Reader? Check out our shared collection which includes the blogs mentioned below plus other blogs!
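If you prefer scripting to a hosted reader, nearly all of the blogs below publish a standard RSS feed. A minimal sketch, using only the Python standard library and an invented sample feed (the titles and links are placeholders, not real posts):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, of the kind most of the blogs below serve.
# The channel and item contents here are invented for illustration.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Dataviz Blog</title>
    <item><title>Chart of the week</title><link>http://example.com/1</link></item>
    <item><title>Mapping tutorial</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def latest_titles(rss_xml):
    """Return the post titles from an RSS 2.0 feed, in document order."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(latest_titles(SAMPLE_RSS))  # ['Chart of the week', 'Mapping tutorial']
```

In practice you would fetch each blog's feed URL and run the same parse over the response body.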

Datavisualisation


Data Viz Done Right

This website, by Andy Kriebel, curates good examples of dataviz around the web, highlighting what was great, and also what could have been done better. Each post is quick and easy to read, and they add up to form a set of good practices to keep in mind when doing a data-visualisation.

Website link: http://www.datavizdoneright.com/

Frequency: 1 article/week


Flowing Data

Flowing Data is Nathan Yau’s full-time job, and it shows. Regularly updated with great original or curated content about data-visualisation, this blog is a good way to keep track of the major trends and events in the field. Other sections of the website feature tutorials for purchase and guides.

Website link: http://flowingdata.com

Twitter: @flowingdata

Frequency: 9 articles/week


Google Maps Mania

Do you like maps? Everybody likes maps. Managed by map addict Keir Clarke for more than 10 years, this blog is the go-to resource for following the development of digital cartography. Don’t be fooled by the name: all digital maps are featured, not only Google ones.

Website link: http://googlemapsmania.blogspot.co.za/

Twitter: @gmapsmania

Frequency: 24 articles/week


Junk Charts

Prominent data-visualisation expert Kaiser Fung set out to become the web’s first data-visualisation critic. The result is a website which regularly deconstructs dataviz work, even from top publications, often proposing an alternative visualisation. The articles on Junk Charts regularly make ripples through the web, attracting praise, criticism, but most importantly, prompting discussion.

Website link: http://junkcharts.typepad.com/

Twitter: @junkcharts

Frequency: 2 articles/week


Visual Loop

Visual Loop is the ultimate data visualisation web repository. Founded as a simple blog in 2010 by Tiago Veloso, it grew to become the most active and up-to-date curation space for data visualisation, in all formats. Featuring interviews with designers along with event announcements, this is the blog to follow for inspiration.

Website link: http://visualoop.com/

Twitter: @visualoop

Frequency: 3 articles/week


Data In the News


FiveThirtyEight

Rather than simply having data journalists, FiveThirtyEight is data journalism. Founded by Nate Silver, a renowned statistician who reached stardom after correctly predicting the 2008 and 2012 US presidential elections, FiveThirtyEight represents the boldest attempt to do pure data journalism. It works remarkably well, and is an inspiration for all data journalists, seasoned and aspiring alike.

Website link: https://fivethirtyeight.com/

Twitter: @FiveThirtyEight

Frequency: 40 articles/week


NYT – The Upshot

After the departure of Nate Silver, the New York Times decided to aim even higher by starting The Upshot, a data journalism corner dedicated to politics, policy and economic analysis. It’s an ambitious and high-quality take on data journalism, with approachable articles on social issues (politics, nutrition…) mixed with innovative interactive data-visualisations.

Website link: http://www.nytimes.com/upshot/

Twitter: @UpshotNYT

Frequency: 21 articles/week


Washington Post Information Graphics

The Washington Post Information Graphics blog is an unadulterated look at the data journalism articles produced by the “WaPo”. It is not only a great source of inspiration for anyone interested in dataviz, but also a great source of quality articles, without all the fluff of the main website.

Website link: http://postgraphics.tumblr.com/

Twitter: @PostGraphics

Frequency: 4 articles/week


Understanding Uncertainty

David Spiegelhalter is the maestro behind this ever-useful website, which regularly takes on articles (news and otherwise) that do a bad job of reporting on the risk, probability or chance of something happening. It is a great read for cutting through sensationalist claims, as well as a source of examples of how to deal with uncertainty in reporting.

Website link: http://understandinguncertainty.org/

Frequency: Less than 1 article/week


Global Investigative Journalism Network

The GIJN, as a whole, is an extensive resource for journalists, but their series of curated top 10 data journalism links of the week is a great way of tracking the “#ddj” articles or news that made the rounds on Twitter in any particular week.

Website link: http://gijn.org/series/top-10-data-journalism-links/

Twitter: @gijn

Frequency: 1 article/week


Behind The Scenes


NPR Visuals Team Blog

A nerdier pick than the rest of the selection, the NPR Visuals Team blog is still an amazing place to see the methodology behind outstanding data journalism projects. Additionally, the NPR team maintains several open source tools for data journalism, which are described on the blog.

Website link: http://blog.apps.npr.org/

Twitter: @nprviz

Frequency: Less than 1 article/week


Source

No less nerdy than the NPR blog, the Source blog (a Mozilla/Open News project) is more varied in its content, thanks to regular blog posts by top data journalists from a wide variety of newsrooms. Alternating behind-the-scenes articles, guides, tutorials and event round-ups, this blog is a must-have in the RSS reader of every data journalist.

Website link: https://source.opennews.org

Twitter: @source

Frequency: 2 articles/week


Storybench

Storybench is a collaboration between the Media Innovation track at Northeastern University’s School of Journalism and Esquire magazine. A relative newcomer in the sphere of data journalism blogs, it features high-quality articles providing an “under the hood” look at examples of digital journalism, accompanied by interviews with the journalists who make them.

Website link: http://www.storybench.org/

Twitter: @storybench

Frequency: 2 articles/week


Learning to work with Data


Chandoo

Data journalists love spreadsheets. And why wouldn’t they? They’re so flexible! Chandoo.org is the place to go if you want to maximise this flexibility, or just pick up some nice tricks that will make your work faster. Chandoo focuses on Excel, but thankfully most of the tricks of use to data journalists are also available in other, similar software.

Website link: http://chandoo.org/wp/

Twitter: @r1c1

Frequency: 2 articles/week


HelpMeViz

HelpMeViz’s tagline is “helping people with everyday data visualization”. Whilst submitting your dataviz issue to the community can be really helpful, the real value of the website is in the aggregation of all the posts, each representing a small dataviz challenge. If you have ever wondered how many ways there are to tackle a data visualisation problem, HelpMeViz is there for inspiration.

Website link: http://helpmeviz.com/

Twitter: @HelpMeViz

Frequency: Less than 1 article/week


Journalist’s Resource

The Journalist’s Resource tackles a niche aspect of data literacy: understanding research papers. Mixing regular round-ups of research on specific topics with quality guides to understanding research terms and working with numbers (check out their amazing tip sheets), this blog from the Shorenstein Center at Harvard Kennedy School is a resource all journalists (and especially North American ones) should follow.

Website link: http://journalistsresource.org/

Frequency: 6 articles/week


Think some obvious blogs are missing? Tweet them to us at @Schoolofdata or on Facebook. And check out our Feedly shared collection, which includes more than the blogs mentioned above!


Research Results Part 3: Measuring the Impact of Data Literacy Efforts

Dirk Slater - January 21, 2016 in Research

Just as there is a wide range of methodologies for achieving data literacy in social change efforts, there is also a range of approaches to determining effectiveness. The degree to which a data literacy practice can measure effectiveness is largely based on that practice’s maturity level. Participants who work for older, established organizations reported devoting considerable resources to monitoring and evaluation (M&E), whereas most individuals and participants from smaller organizations recognized that their evaluation possibilities were very limited. Methodologies for evaluating the efficacy of data literacy efforts are not standardised. To measure the impact of data literacy work in environments with limited resources, participants in our research focus on the following sources of information:

  • Analysis of data outputs: Some participants mentioned the relative ease of measuring the impact of data work (as compared to other ICT-related initiatives), because it produces outputs that can be analysed qualitatively.
  • Sentiment analysis: Data literacy trainers frequently mentioned the importance of measuring outcomes by trying to get a feel for the reactions of people before ending a workshop, particularly in processes where follow-up is unlikely.
  • Having an eye for the manifestation of organizational (vs individual) change: For some trainers, the true impact of data literacy work can be seen in how data work becomes internalised in an organization’s programmes and staffing.
  • Direct skills assessment: Perhaps difficult to evaluate without exams; some participants rely on self-reporting from their beneficiaries and compare pre- and post-training surveys to see the impact of particular training processes.
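The pre/post comparison in the last point can be sketched in a few lines. This is an illustration only: the participants, the 1-5 self-assessment scale and the scores below are invented, not data from the research.

```python
# Hypothetical 1-5 self-assessment scores collected before and after
# a training; participant names and numbers are invented for illustration.
pre_scores  = {"participant_a": 2, "participant_b": 3, "participant_c": 1}
post_scores = {"participant_a": 4, "participant_b": 4, "participant_c": 3}

def mean_improvement(pre, post):
    """Average per-participant change between pre- and post-training surveys."""
    deltas = [post[p] - pre[p] for p in pre]
    return sum(deltas) / len(deltas)

print(f"mean improvement: {mean_improvement(pre_scores, post_scores):+.2f}")
# mean improvement: +1.67
```

A real assessment would also look at the spread of the changes, not just the average, since a few large gains can mask participants who did not improve at all.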

Diversity in approach

More recently established data literacy efforts use basic evaluation forms distributed at the end of their trainings. Data literacy trainers frequently mentioned the importance of measuring outcomes by trying to get a feel for the reactions of people in their workshops. This involves including questions about the setup and fulfilment of expectations in post-workshop surveys, but also looking for signs of independent work and thoughtful questions. A few of the participants consider these signs of engagement crucial in processes where follow-up isn’t likely.

Code for Africa has developed a robust set of success indicators that allow them to chart a path towards success, such as data responsibilities being included in organisational job descriptions. For some practitioners, the true impact of data literacy work can be seen in organizational change. Will someone be hired to do data work in the organization? Do senior executives value data work more, and are they willing to allocate more funds for this type of work?

Some of the participants mentioned the relative ease of measuring the impact of data work (as compared to other ICT-related initiatives) because it produces outputs that can be analysed qualitatively. A couple of organizations mentioned detailed analysis frameworks to measure the quality of stories in data journalism, for example – employing local data journalists who could evaluate stories from before and after the processes took place to compare the performance of beneficiaries.

Some participants rely on self-reporting from their beneficiaries (through surveys that ask questions on their level of comfort/knowledge on specific skills) – and try to compare pre and post surveys to see the impact of particular training processes.

Even though most participants had given thought to monitoring and evaluation in one way or another, few of them had developed frameworks to use before, during and after the implementation of a project or program. Many of these efforts need more opportunity to articulate what success would look like for their project, and then work backwards to understand what steps they need to accomplish to attain that success.

Determining effective ways to measure impact

During a workshop on impact assessment provided to data literacy practitioners connected to the School of Data network in March 2015, it was determined that ‘impact assessment’ may not be an appropriate term, as it implies the kind of robust and resource-intensive endeavour often applied when evaluating public policy. There was a strong desire for lightweight methodologies that would help practitioners learn how to improve offerings and deliver greater impact in the long term. They determined that the methodology should contain some basic elements, such as baselines, working with beneficiaries to establish indicators, having feedback loops, articulating clear and transparent goals, maintaining consistency throughout their programs, and taking the time to document.

While some exchange between data literacy practitioners has begun around methodologies for evaluation that lead to learning and improved projects, there needs to be continued dialogue in the School of Data network to determine effective ways of measuring impact.

In our next post, ‘Sustainable Business Models for Data Literacy Efforts’, we will explore viable models and opportunities for data literacy practitioners to fund and support their work.

 


The 2nd Nigerian Open Data Party, a Great Success

Nkechi Okwuone - January 20, 2016 in Event report, Fellowship

The open data scene is rising in Nigeria, and it has seen the birth of a vibrant community: to the North, Connected Development; to the West, BudgIT, Orodata, Code for Nigeria; to the South, SabiHub and NODA, to mention a few. We all came together on the 11th and 12th of December 2015 to hold the second edition of the Open Data Party, the biggest open data event in Nigeria, with support from School of Data, Code for Nigeria and the Heinrich Böll Foundation.

The first edition was hosted by Sabi Hub in Benin City, Edo State, Nigeria, in collaboration with the Benson Idahosa University. The event, described as the highlight of Nigerian Open Data Conferences in 2015, brought together data enthusiasts among social workers, journalists, government officials, academics, and activists from all over Nigeria. They learned and shared skills around using data to enhance their activities.

Participants at the event.


The 2015 event was focused on waste management and saw a wealth of notable speakers and facilitators present, including: Katelyn Rogers (Open Knowledge International Project Manager), Adam Talsma (Senior Program Designer and Nigeria Country Manager at Reboot), Stanley Achonu (Operations Lead at BudgIT), Temi Adeoye (Lead Technologist at Code for Nigeria), Nonso Jideiofor (Reboot), Joshua Olufemi (Premium Times Nigeria), Ayodele Adeyemo (Nigeria Open Data Access), Tina Armstrong Ogbonna (Reporter with Radio Nigeria and Freelance Journalist), Oludotun Babayemi and Hamzat Lawal (Co-Creator of Follow The Money in Nigeria), and the hostess Nkechi Okwuone (School of Data Fellow, manager of the Edo State Open Data Portal, and Sabi Hub).

Facilitators at Open Data Party Benin



Skill Share Session

Day 1 of the event featured sessions on Data Pipelines (Finding Data, Getting Data, Scraping Data, Analyzing Data and Publishing Data) and on Ground Truthing Data using Mobile Phones. Other sessions that ran concurrently dealt with Data Scraping Tools and Digital Security and Privacy. The day ended with participants encouraged to document what they wanted to learn or teach during the unconference session of Day 2.


Participants documented their areas of interest (either learning or teaching) for the unconference session

Day 2 kicked off with a panel session on waste management challenges in Edo State and how they could be tackled from advocacy, entrepreneurial and technology perspectives.

Immediately following was a two-hour unconference session focusing on the learning interests participants had written on sticky notes. Topics included a Follow the Money session, securing funding for your ideas/projects, and maximizing web analytics.

Rounding off Day 2 was the Ideation session, which began with Temi Adeoye speaking to participants about how to better understand data problems, generate divergent and convergent ideas, and think outside the box to get good results.

Participants formed groups and brainstormed on developing a tool or platform to solve challenges in waste management, with emphasis on recycling, collection and dumping. The session lasted two hours, and each of its 16 participants was given 3 minutes to present their ideas to a panel.


Winners of the Ideation Session

The winning idea came from Abdul Mohammed from Kano and Emmanuel Odianosen from Edo State, who will be developing a reporting tool to help waste managers (collectors) efficiently collect waste in communities. They were rewarded with a thousand British pounds (£1,000) provided by School of Data, along with an incubation and mentorship package provided by Sabi Hub, Code for Nigeria and Connected Development.

And of course we went partying properly at the popular Subway Lounge in Benin City, Nigeria! The attendees expressed delight at the efforts of the organizers, who ensured that the event was world class, and they all look forward to a bigger event in 2016. A big thank you to School of Data, OD4D, Code for Nigeria, Sabi Hub, Connected Development, Heinrich Böll Foundation and the Benson Idahosa University for making the event a success!

View details about the event here

See pictures and videos here

 


Data visualisation or Data narration? Data in Radio Stories

Nkechi Okwuone - January 18, 2016 in Uncategorized

For an outsider looking at Nigeria’s news media lately, it would seem that the only things on the minds of Nigerians are politics and security. Breaking news stories are plentiful, while more involved stories, either investigative or reporting on community issues, are scarce.

This is a problem, but what can we do about it? Development Watch, an initiative by Journalist for Social Development Initiative, hopes to solve it. They have plans for a different kind of journalism, providing objective analysis of social development issues and promoting inclusive growth across Africa. To live up to their goal of creating quality journalism, they decided to facilitate a data journalism session on November 30, 2015, on the occasion of the launch of the main part of their web platform.

Data Journalism Abuja

More than 20 journalists were present: 15 from broadcast, 5 from print and the others from new media. Beyond Google Alerts, most of them had little knowledge of useful tools for digital journalism, and even less about where to find available data in Nigeria. This was expected: we hear this from 80% of the participants in data journalism trainings. Luckily, the point of those trainings is to familiarize them with the available tools and sources.

“To find data for my reports, I only depend on references from other works, or request a meeting with the organizations concerned, as I do not know where to go. I find this difficult for my work,” said Sam Adeko of Punch Newspapers.

According to a recent poll by NOI Polls, a polling organisation in Nigeria, most people in the country access daily news via the radio (67%), followed by television, social media and newsprint. With this information in mind, we try to tailor our data journalism trainings to take into account stories for radio and television, in addition to the use of tools like Infogr.am, which are essentially useful for print and social media.

But before talking about visualising data, we had to cover some basic techniques. In this training, as in many others, 90% of the participants used Google search to look up information, but few of them knew how to search effectively. For example, you can search for content on a specific website by adding ‘site:example.com’ to your search phrase, which prompts Google to return results only from the site you specified. You can narrow it down further with ‘site:example.com/pages/’, which only returns results matching that pattern.
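The same operators work when assembling a search URL programmatically, which can be handy for sharing or reusing queries. Below is a minimal Python sketch; the query and site name are purely illustrative:

```python
from urllib.parse import urlencode

def google_search_url(query, site=None):
    """Build a Google search URL, optionally restricted to one site
    via the 'site:' operator described above."""
    if site:
        query = f"{query} site:{site}"
    return "https://www.google.com/search?" + urlencode({"q": query})

# Restrict results to a single (illustrative) site
url = google_search_url("budget data", site="example.com")
print(url)  # https://www.google.com/search?q=budget+data+site%3Aexample.com
```

Note that `urlencode` takes care of escaping spaces and the colon, so the link can be pasted anywhere.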

Another useful tool we introduced was Google Trends, which lets you find out which search terms are trending on Google. “I really want to know how much people are interested in President Muhammadu Buhari of Nigeria compared to the President of Rwanda, Paul Kagame. Especially in recent times, this can give me an insight on how important Nigeria is over Rwanda,” explained Roluke Ogundele of the Africa Independent Television. All you need to do is enter a couple of common search phrases and you will see how they have been trending over time. We also talked about Twitter, a micro-blogging service that is becoming more widely used in Nigeria. To discover public conversations about a link, you just paste the URL you’re interested in into the search box, and then possibly hit ‘more tweets’ to see the full set of results.

When the data visualisation session eventually came, we asked the question of whether to visualize or not, and how. Tools like Google Fusion Tables, Tableau, Dipity and others make it easier than ever to create maps, charts and graphs useful for newsprint, social media and television. But what happens when you broadcast on the radio? Because people only listen, the need to get a story out of the data, rather than just a visualisation, is more obvious. Stories can be told in a captivating way on radio, and they can come from data. “So if you are a broadcast journalist in radio, you have no excuse: dive in by first looking at the problem you want to solve (this also works for other media), then find and get the data, and tell your story to the world,” said Gloria Ogbaki of Ray Power FM.

In Nigeria, data journalism is nascent and opportunities abound. As more new journalists enter the field and think about which sector to dive into, there is a need for newsrooms to innovate by, for example, embedding data analysts and information technology experts with producers of news.

“Most of us never knew what data journalism was, but at the end of this training we were all excited, and can now go back and incorporate it into our work. We hope this is not a one-time training; we need more of it in our newsrooms,” said Okoye Ginka of the News Agency of Nigeria.

 


Making budget audit reports more accessible to citizens

Sheena Carmel Opulencia-Calub - January 16, 2016 in Event report, Fellowship

An audit of a government budget, if it happens and can be trusted, is a very important public document. It contains information that is vital to achieve the goals of the government and to ensure transparency and accountability. It is not, however, an easily accessible document for most of the public because of the huge wall of numbers and text that it is often made of.

Happy Feraren, 2014 School of Data Fellow, facilitating an activity on how you cluster data.

In the Philippines, the agency tasked with government audits is the Commission on Audit (COA). It reviews the budget of every government entity, from the different government offices to public projects. One of these projects is the Farm-to-Market Road (FMR) project, which aims to build concrete roads from farms to town markets.

As part of a partnership with the World Bank, the COA called on the expertise of Open Knowledge and School of Data to design and conduct a Data Analytics and Visualisation Training workshop which would address the needs of the team responsible for auditing the Farm to Market Road project. It took place on November 11.

The workshop was attended by a mix of COA Directors and Administrative Officers from Manila and Capiz: 24 participants on the first day and 23 on the second. The main goal of the event was to help the participants understand how data analytics and visualization can make an audit report more accessible to the general public, in the form of the “People’s Citizen Participatory Audit (CPA) Report”. There were sessions on open data and its relation to the work of COA, on how and why to analyze and visualize data, on the tools that can be used for data visualization, and on what data the public wants to know.

Happy Feraren and Sheena Opulencia-Calub, respectively 2014 and 2015 School of Data Fellows, co-facilitated the training, and later led online mentoring sessions with the participants to deliver a People’s CPA Report. The drafted People’s CPA report is currently being reviewed by COA for public dissemination.

It is necessary to develop the data analysis skills of auditors, especially since they are rarely given technical trainings like this one. The workshop itself was somewhat challenging to conduct because of the participants’ varying levels of data knowledge and appreciation. However, because the training design was patterned after previous data skills trainings for government agencies, it was easy to set training objectives that would meet the interests and needs of the participants. As always, asking participants to complete pre-training and post-training surveys was essential to get a sense of what and how much they had learned.

Training participants, organizers and resource persons.


Research Results Part 2: The Methodologies of Data Literacy

Dirk Slater - January 14, 2016 in Research

After exploring how to define data literacy, we wondered about the day-to-day reality of the work of data literacy advocates. Which methodologies do they use, and what do they look like in practice? Unsurprisingly, there is a wide range of methodologies in use across different groups, each fitting the available opportunities, time and resources.

Short term efforts

Workshops

A large part of the training done by the data literacy advocates we surveyed takes the form of workshops. Participants agreed that workshops (rather than talks) are a good way to promote practical learning, and that the short timeframe allows individuals and organizations with resource constraints to take part. Some workshops are delivered inside conferences and events (ranging from two hours to a half-day), and their learning goals are largely limited to introducing basic topics to participants. Others are multi-day, ranging from two to five days: they allow for longer exposure to processes and provide enough of a foundation to start concrete data projects. Multi-day workshops may also be augmented with follow-up sessions designed to provide guidance and support during the life of a data project.

The following short term workshops were mentioned:

  • 2-hour workshops: these take place within conferences and only provide space for an introduction, but require few resources to organise
  • Half-day workshops: these are seen as good for introducing topics, as well as spaces to do practical work. For example: Data Therapy’s workshops.
  • 2 to 3-day workshops: these provide enough practice time to make it feasible to start specific projects
  • 5-day workshops: two of the organizations surveyed mentioned them as great opportunities to go through entire processes (like the data pipeline) with workshop participants
  • 10-day workshops: the longest workshop format mentioned in interviews; they provide enough space to go through entire processes as well as to work on specific projects from scratch.

The Data Pipeline

In regard to the content of these workshops, they often start with what participants described as “data basics” (what is data, what is data journalism, etc.). After this introduction, it is common for trainers to explain the process of working with data. Here, a recurring concept is the School of Data pipeline, as shown in the illustration. It is a pedagogical device created to show that “data wrangling takes place in several stages; in order to process data, it must be moved through a ‘pipeline’. […] Raw data must usually travel through every stage of the pipeline – sometimes going through one stage more than once” (Newman, 2012). While the data pipeline is heavily promoted and used by the School of Data network, participants outside of the network were also found to use the pipeline model to describe the type of content they cover in workshops.
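As a toy illustration of the idea, the pipeline’s stages (define, find, get, verify, clean, analyse, present) can be modelled as an ordered sequence of steps that data passes through; the code below is purely a sketch of the concept, not a real implementation:

```python
# The stage names follow the School of Data pipeline; the "transformation"
# is a placeholder standing in for each stage's real work.
PIPELINE = ["define", "find", "get", "verify", "clean", "analyse", "present"]

def run_pipeline(data, stages=PIPELINE):
    """Pass data through each stage in order, recording the path taken."""
    log = []
    for stage in stages:
        log.append(stage)          # in practice: call the stage's real logic
        data = f"{data}->{stage}"  # placeholder transformation
    return data, log

result, log = run_pipeline("raw")
print(log)  # ['define', 'find', 'get', 'verify', 'clean', 'analyse', 'present']
```

As the quote above notes, real projects sometimes revisit a stage; in this sketch that simply means passing a stage name more than once in `stages`.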

Besides the data pipeline-like methodologies, another specific type of exercise that came up during the research was the “reverse engineering” of data exercises: deconstructing existing examples to explain how they came to be, which makes them more relatable for trainees.

Community events

Along with workshops, community events have developed as a way to add a more social component, while providing informal but practical help and advice.

  • Data clinics: these events provide a space where people can develop their data skills, ask questions related to their own data, and get help with the challenges they face in their data projects
  • Data meetups: many organizations that run trainings also devote resources to hosting informal meetups where people can share learnings from their own data projects, and get insights into others’.

Datathons

Another intensive format is the datathon, which builds on the concept of the hackathon. Often named “data expeditions”, “data dives” or “data quests”, they are popular with data literacy practitioners, along with individuals who have more established data literacy skills. Quoting a participant, “acquiring data skills requires short, intensive bursts of focus from a group, rather than the type of attention you would have during 6-8 weeks with sessions that last a couple of hours per week”.

Medium term efforts

Some formats do not require individuals to leave their workplace: the training is brought to them, either physically at their workplace or online. The online format is generally conducted over several weeks or months, a few hours a week.

  • 5-week newslab model: in contexts where journalists cannot leave their newsroom, trainings can be brought to the newsroom.
  • 4-week training: in opposition to the hypothesis that led to the birth of data expeditions, this model makes a relatively modest demand on participants’ time each week, relying instead on the accumulation of practice over four weeks.
  • 1-week workshop with follow up: when there is interest in supporting a long term process, but offline training can’t be sustained over a long continuous period, follow up sessions can extend the process.

Some interviewees mentioned paying special attention to the need of developing communities of practice with alumni, in order to provide spaces where they can continue to develop expertise and learn from each other’s experiences.

Long term efforts

On the longer end of the scale, long term efforts correspond to immersive endeavours where an individual is placed within a data project lasting anywhere from five months to a year. This takes the form of either a fellowship program, allowing an individual to gain expertise by being placed within a data project, or a mentorship program, where an individual with data expertise is placed within a data project to help build the skills of staff while working side by side.

  • Fellowship programs: tend to last from 5 months to a year. Some participants favor fellowship programs for data journalists in environments where such intensive involvement is not disruptive to the media industry. School of Data has experience in this regard, too, with its own group of School of Data fellows.
  • Research processes: some participants sustained long term capacity building through a research process that they documented. For communities that must collect and analyse their own data in the face of other challenges, such as marginalisation and illiteracy, an approach of a multi-year engagement towards empowerment can be successful. An example of this can be found in the FabRiders’ blog post series What we learned from Sex Workers.
  • Six-month projects: rather than doing it through workshops, data literacy training can take place in the form of involvement in specific projects with the guidance of a mentor.  

Online vs offline

Most of the aforementioned formats (except for the follow-up community) take place primarily offline regardless of duration, but some online formats were also brought up, primarily by participants whose native language isn’t English and whose communities lack data literacy resources in their language. Some mentioned MOOCs (one participant ran a five-week MOOC on data journalism, the first ever in Portuguese, the language of her country, Brazil); others mentioned dedicated websites (one participant wanted to introduce trends she admired in other contexts, while translating the resources that could aid their adoption); webinars, as an attempt to replicate brief offline trainings; and social media content as a source of learning.

Choosing an effective format

Despite the wide range of formats that are used to help build data literacy, the selected format for an event often comes down to two factors: the availability of funding and the amount of time participants are willing to invest. In some contexts, journalists and/or activists can’t give up more than two days at a time; in others, they can give up to half a year. The least disruptive formats are chosen for each community. There is a noticeable difference in the types of actors and the engagements they will favor. Larger and older organizations favor intensive, long term processes with relatively few beneficiaries; smaller and younger organizations or individuals favor short-term trainings to reach larger audiences.

Interviewees focused on developing data literacy capacity in both individuals and organisations favor experiential, project-driven work. Often it’s about providing people with a dataset and getting them to develop a story from it; other times, it’s hands-on training on different parts of the data pipeline. Most interviewees emphasised the importance of providing opportunities for hands-on experience with data. They also strive for concrete outcomes from their trainings, where participants can see the immediate impact of their work with data.

Curriculum Sources

The research participants mentioned several sources of inspiration around data literacy curricula.

In our next post ‘Measuring the Impact of Data Literacy Efforts’ we will look at how data literacy practitioners measure and evaluate their methodologies and efforts.


Data in December: Sharing Data Journalism Love in Tunisia

Ali Rebaie - January 11, 2016 in Data Blog, Data Expeditions, Data for CSOs, Data Journalism

NRGI hosted the #DataMuseTunisia event in collaboration with Data Aurora and School of Data senior fellow Ali Rebaie on the 11th of December 2015 in beautiful Tunis, where a group of civil society representatives from different NGOs met at the Burge Du Lac Hotel to learn how to craft their datasets and share their stories through creative visuals.

Bahia Halawi, one of the leading women data journalism practitioners in the MENA region and co-founder of Data Aurora, led this three-day workshop, which featured a group of professionals from different CSOs. NRGI has been working closely with School of Data to drive economic development and transparency through data in the extractive industry. Earlier this year NRGI ran similar events in Washington, Istanbul, the United Kingdom, Ghana, Tanzania, Uganda and elsewhere. The experience was unique, and the participants were excited to use the open source tools and follow the data pipeline to end up with interactive stories.

The first day started with an introduction to the world of data-driven journalism and storytelling. Later on, participants checked out some of the most interesting stories worldwide before working with the different layers of the data pipeline. The technical part challenged the participants to search for data related to their work and then scrape it using Google Spreadsheets, web extensions and scrapers to automate the data extraction phase. After that, each participant used Google Refine to filter and clean the datasets and remove redundancies, ending up with usable data formats. The datasets were varied: some were placed on interactive maps with CartoDB, while some participants used Datawrapper to visualize them in interactive charts. The workshop also exposed participants to Tabula, which let them extract tables from PDFs into Excel.
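The extraction phase described above can also be scripted. The following is a minimal, self-contained Python sketch of pulling rows out of an HTML table using only the standard library; the inline table is an invented example, not one of the workshop's actual sources:

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the text of each <td>/<th> cell, grouped by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self._in_cell = True
    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)  # close out the finished row
            self._row = []
    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

# Invented example table; in practice this string would come from a web page
html = """<table>
  <tr><th>Region</th><th>Budget</th></tr>
  <tr><td>Tunis</td><td>1200</td></tr>
</table>"""

parser = TableExtractor()
parser.feed(html)
print(parser.rows)  # [['Region', 'Budget'], ['Tunis', '1200']]
```

Tools like the ones used in the workshop wrap this kind of logic behind a point-and-click interface, which is why no coding was required of the delegates.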

Delegates also discussed some of the challenges each of them faces in different parts of Tunisia. It was very interesting to see participants share their ideas on how to approach different datasets and how to feed them into an official open data portal that could bring all these datasets together. One of the participants, Aymen Latrach, discussed the problems his team faces when it comes to data transparency about extractives in Tataouine. Other participants, like Manel Ben Achour, a Project Coordinator at the I WATCH Organization, came from technical backgrounds and were very happy to pick up new tools and techniques while working with their data.

Most of the delegates didn’t come from technical backgrounds, however, and this was the real challenge. Some of the tools, even when they do not require any coding, demand familiarity with technical terms and ideas. Thus, each phase in the data pipeline started with a theoretical explanatory session to familiarize delegates with the technical concepts to be covered. After that, Bahia demonstrated the steps and went around to the delegates facing problems, helping them keep up with the rest of the group.

It was a little bit messy at the beginning, but soon the participants got used to it and started trying out the tools on their own. In reality, trial and error is crucial to developing data journalism skills: they can never be attained without practice.

Another important finding, according to Bahia, who discussed the relevance of the learnt skills to the delegates’ communities and workplaces, is that each of them had their own vision of how to use them. The participants’ solid work experience allowed them to have unique visions of how to deploy what they had learnt at their workplaces. This, along with a strong belief in the change open data portals can drive in their country, is what triggers learning more tools and skills and producing better visualizations and stories that have an impact on people.

Three years ago the data journalism community was still at a very embryonic stage, with few practitioners and data initiatives taking place in Africa and Asia. Today, with enthusiastic practitioners and a community like School of Data spreading the love of data and the spirit of change it can bring, the field has a very promising future. The need for more initiatives and meetups to develop the skills of CSOs in the extractive industries, as well as other fields, remains a priority for reaching true transparency in every domain.


You can connect with Bahia on Twitter @HalawiBahia.


Research Results Part 1: Defining Data Literacy

Dirk Slater - January 8, 2016 in Impact

Thanks to the efforts of governments, organizations and agencies to make their information more transparent, the amount of open data has increased dramatically in recent years. Consequently, interest has grown in the practitioners who develop data literacy, which they often do through international, collaborative networks of like-minded actors.

The work of School of Data has emerged in a context where different fields (from Information and Communications Technology (ICT) for change activism to data journalism curriculum creation in universities) have seen resources devoted to the transmission of skills related to data use in journalism and advocacy contexts. ‘Data literacy’ has emerged as an umbrella term for these initiatives, though not without challenges (Data-Pop Alliance, 2015). But what exactly does the concept mean?

‘Data literacy’ can be defined in terms of skills (‘the ability to use and analyse data’), and this can inspire different analyses of each component skill. However, attempts to define the term can also allude to the social transformations that can be sought through it, especially when seen through the lens of the history of literacy (Data-Pop Alliance, 2015).

Furthermore, once we accept a definition of ‘data literacy’, how does it coexist with discussions such as the difference between ‘statistical literacy’ and ‘statistical competence’ (“what every college graduate should know” versus “what we hope a business statistics student will retain five years later”, as Moore distinguishes, cited in Schield, 2014), or with the general concept of data awareness (as discussed by Rumsey, 2002)?

‘Data literacy’ as a concept stems from older visions of numeracy and information literacy; however, researchers who have examined current work in this field have characterized data literacy as the ability to read, work with, analyze and argue with data (Bhargava and D’Ignazio, 2014), as well as “the desire and ability to constructively engage in society through or about data” (Data-Pop Alliance, 2015). We consider that both dimensions, skills and social engagement, are a good foundation for discussing the aims and practices of the School of Data community.

The underlying concept of data literacy that each actor holds will determine aspects of their methodology at the individual and collective levels. Inspired by the categorization done by Bhargava and D’Ignazio, in our interviews we asked participants questions to get insight on their visions of data literacy and the aims of their work. The following abilities were mentioned by two or more participants:

  • Knowing how to find information in different ways. This includes being able to track down sources of existing data, but also knowing how to collect it if it doesn’t exist yet.
  • Being able to apply critical thinking skills to data. This ranges from the ability to do data quality assessment or contextualizing specific information to other aspects of processes related to data-related work, such as the ethics of handling data.
  • Being able to ask questions of data (and then find an answer). Related to the previous ability, different participants mentioned the ability to ask questions of data as one of the goals of data literacy trainings, even if trainees don’t go as far as finding the answers (though ideally they would).
  • Being able to produce specific outputs (such as stories or visualizations) from data. Apart from the ability to ask and answer questions, a topic that recurred among participants from the field of data journalism was the importance of finding stories and other journalistic outputs.
  • Being able to use it to advance one’s own goals. Whether it is specifically more in-depth research, or generally better and more data-driven storytelling or campaigning, the link between data and action was evident in many of the interactions we had with the participants.
  • Feeling comfortable around data and working with it. At times as an intermediate aim to lead to the other abilities mentioned in this section, and at other times as an end in itself, many participants mentioned the importance of promoting comfort around data (and bringing down the psychological barriers that exist between people and data).
  • Being able to do basic statistical analysis with data. Even though more technical aspects of data literacy came up at different points (for example, the need to know how to clean data), the only one that was recurring was the ability to work with basic statistics.

Other general considerations

  • It’s a non-linear process. Two participants pointed out that it was important in their work not to view data literacy as a linear process, or a binary (“you are data literate or you aren’t”); they view data literacy as a process that involves very different actions depending on the context and needs of each individual (or group).
  • Data literacy can be promoted and assessed at the individual level, but also in groups (such as organizations or communities). When asked what data literacy looked like at the organization level, participants mentioned buy-in and engagement from different parts of the organization (including the senior staff). The proper allocation of resources to this type of work depends on an understanding of data work and its genuine possibilities.
  • An aim of data literacy work is to expand existing markets. In the case of data journalism, different participants mentioned data literacy work as a way to help journalists produce content that bridges the gap between audiences and information they can act upon (a hypothesis drawn from solutions journalism, which covers solutions to social problems). It was also mentioned as a way to increase the demand for open data.

It’s important to understand how the various actors in the field use the term ‘data literacy’ and in particular, how that impacts training and knowledge sharing goals. As the use of data becomes more ubiquitous in social change efforts, it is likely that the definition will continue to narrow and be as recognisable as terms like ‘computer literacy’.  

In our next post ‘Data Literacy Methodologies’ we will look at how data literacy practitioners reinforce their own definitions through their training and knowledge sharing practices.
