School of Data is hiring!

Cedric Lombion - November 14, 2015 in Announcement

Data-lovers and trainers around the world, hear this: School of Data is hiring! Our coordination team is looking for a passionate and outstanding data-trainer to be our all-things-content-and-training person moving forward.

Over the past four years, School of Data has succeeded in developing and sustaining a thriving and active network of data literacy practitioners in partnership with our implementing partners across Europe, Latin America, Asia and Africa.

Together we have published online dozens of lessons and hands-on tutorials on how to work with data, benefiting thousands of people around the world. Over 4500 people have attended our tailored training events, and our network has mentored dozens of organisations to become tech savvy and data driven. Our methodologies and approach for delivering hands-on data training and data literacy skills – such as the data expedition – have now been replicated in various formats by organisations around the world.

One of our flagship initiatives, the School of Data Fellowship Programme, has now successfully supported 26 fellows in 25 countries to provide long-term data support to civil society organisations in their communities. School of Data coordination team members are also consistently invited to give support locally to fellows in their projects and organisations that want to become more data-savvy.

In order to give fellows a solid point of reference in terms of content development and training resources, and also to have a point person to give capacity building support for our members and partners around the world, School of Data is now hiring an outstanding trainer/consultant who’s familiar with all the steps of the Data Pipeline and School of Data’s innovative training methodology to be the all-things-content-and-training person for the School of Data network.


The hired professional will have three main objectives:

  • Technical Trainer & Data Wrangler: represent School of Data in training activities around the world, either supporting local members through our Training Dispatch or delivering the training themselves;
  • Data Pipeline & Training Consultant: give support to members and fellows regarding training (planning, agenda, content) and curriculum development using School of Data’s Data Pipeline;
  • Curriculum Development: work closely with the Programme Manager & Coordination team to steer School of Data’s curriculum development, updating and refreshing our resources as novel techniques and tools arise.

Terms of reference

  • Attend regular (weekly) planning calls with School of Data Coordination Team;
  • Work with current and future School of Data funders and partners in data-literacy related activities in an assortment of areas: Extractive Industries, Natural Disaster, Health, Transportation, Elections, etc;
  • Be available to organise and run in-person data-literacy training events around the world, sometimes at short notice (agenda, content planning, identifying data sources, etc);
  • Provide reports of training events and support given to members and partners of School of Data Network;
  • Work closely with all School of Data Fellows around the world to aid them in their content development and training events planning & delivery;
  • Write for the School of Data blog about curriculum and training events;
  • Take ownership of the development of curriculum for School of Data and support training events of the School of Data network;
  • Work with Fellows and other School of Data Members to design and develop their skillshare curriculum;
  • Coordinate support for the Fellows when they do their trainings;
  • Mentor Fellows including monthly point person calls, providing feedback on blog posts and curriculum & general troubleshooting;
  • The position reports to School of Data’s Programme Manager and will work closely with other members of the project delivery team;
  • This part-time role is paid by the hour. You will be compensated with a market salary, in line with the parameters of a non-profit-organisation;
  • We offer employment contracts for residents of the UK with valid permits, and service contracts to overseas residents.


Deliverables

  • A lightweight monthly report of performed activities with Fellows and members of the network;
  • A final narrative report at the end of the first period (6 months) summarising performed activities;
  • Map the current School of Data curriculum to diagnose potential areas for improvement and updating;
  • Plan and suggest a curriculum development & training delivery toolkit for Fellows and members of the network.


Requirements

  • Be self-motivated and autonomous;
  • Fluency in written and spoken English (Spanish & French are a plus);
  • Reliable internet connection;
  • Outstanding presentation and communication skills;
  • Proven experience running and planning training events;
  • Proven experience developing curriculum around data-related topics;
  • Experience working remotely with workmates in multiple timezones is a plus;
  • Experience in project management;
  • A degree in Journalism, Computer Science, or a related field is a plus.

We strive for diversity in our team and encourage applicants from the Global South and from minorities.


Six months to one year: from November 2015 (as soon as possible) to April 2016, with the possibility to extend until October 2016 and beyond, at 10-12 days per month (8 hours/day).

Application process

Interested? Then send us a motivational letter and a one page CV via

Please indicate your current country of residence, as well as your salary expectations (in GBP) and your earliest availability.

Early application is encouraged, as we are looking to fill the position as soon as possible. The vacancy will close when we find a suitable candidate.

Interviews will be conducted on a rolling basis and may be requested on short notice.

If you have any questions, please direct them to jobs [at]


Course outline: mobile data collection with ODK

Sheena Carmel Opulencia-Calub - November 13, 2015 in HowTo


Decades ago, collecting data with pen and paper was a painstaking and very expensive effort. Most of us have experienced paper forms getting wet or damaged, or receiving paper forms that were barely answered. But with the arrival of smartphones and tablets, mobile-based data collection technologies have gained a huge following. The use of mobile data collection tools has also improved the conduct of surveys and assessments. Some of the advantages of using mobile data collection are:

  • Most people use smartphones for SMS messaging and have access to a mobile data connection. According to Wikipedia, the Philippines currently ranks 12th in the world by number of cellphones, with more than 100 million cellphones, which is more than our population!
  • In the absence of laptops and desktop computers, smartphones are cheaper and easier to use.
  • Compared with paper forms, mobile-based data collection lessens the risk of losing data when forms are damaged or lost.

One of the tools frequently used by information managers is the Open Data Kit (ODK). It was first introduced in the Philippines during the Typhoon Pablo response in 2012 for project monitoring. In subsequent emergency responses, ODK has been used to conduct one-off surveys and rapid needs assessments during and directly following disasters. While there is a huge variety of online and offline data collection tools, ODK has gained a lot of users because it is free, open source, easy to use and works both offline and online. Since ODK is a free and open source set of tools that helps organizations author, field, and manage mobile data collection solutions, it has evolved into several platforms and formats such as Kobo Toolbox (which I prefer to use), GeoODK, KLL Collect, Formhub and Enketo, each one seeking to customize the use of ODK according to its own needs.

ODK provides an out-of-the-box solution for users to:

  1. Build a data collection form or survey (XLSForm is recommended for larger forms);
  2. Collect the data on a mobile device and send it to a server; and
  3. Aggregate the collected data on a server and extract it in useful formats.
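
To make step 1 concrete, here is a minimal sketch of what an XLSForm can look like, generated with Python and pandas. The questions, the choice list and the file name are illustrative assumptions rather than part of the course; the same two-sheet structure can just as easily be typed directly into Excel or LibreOffice.

```python
# Minimal XLSForm sketch: a "survey" sheet with the questions and a "choices"
# sheet with the answer options for select_one questions. The column names
# (type, name, label, list_name) follow the XLSForm convention used by ODK
# and Kobo Toolbox; the questions and file name are illustrative examples only.
import pandas as pd

survey = pd.DataFrame([
    {"type": "text",                 "name": "respondent_name", "label": "What is your name?"},
    {"type": "integer",              "name": "household_size",  "label": "How many people live in your household?"},
    {"type": "select_one water_src", "name": "water_source",    "label": "What is your main source of drinking water?"},
    {"type": "geopoint",             "name": "location",        "label": "Record the household location"},
])

choices = pd.DataFrame([
    {"list_name": "water_src", "name": "piped", "label": "Piped water"},
    {"list_name": "water_src", "name": "well",  "label": "Well or borehole"},
    {"list_name": "water_src", "name": "other", "label": "Other"},
])

# Write both sheets into one .xlsx workbook. The file can then be uploaded to
# Kobo Toolbox (or converted with the standard ODK tools) to produce the form
# that is loaded onto the mobile device.
with pd.ExcelWriter("household_survey.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    choices.to_excel(writer, sheet_name="choices", index=False)
```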

This will be a basic course using Kobo Toolbox, one of the many platforms in which ODK forms are built, collected and aggregated for better data collection and management. According to the Kobo Toolbox team, acknowledging that many agencies already use ODK, a de facto open source standard for mobile data collection, KoBo Toolbox is fully compatible and interchangeable with ODK but delivers more functionality, such as an easy-to-use form builder, question libraries and integrated data management. It also integrates other open-source ODK-based developments such as Formhub and Enketo. Kobo Toolbox can be used online and offline. You can share data and monitor submissions together with other users, and it offers UNLIMITED use for humanitarian actors.
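
On the aggregation side (a preview of Module 4 below), submissions that reach the Kobo Toolbox server can also be pulled down programmatically. The sketch below is a rough illustration using Python's requests library; the API base URL, form id and token are placeholders and assumptions, so check the API documentation of the Kobo server you actually use for the correct endpoint.

```python
# Rough sketch: download form submissions from a Kobo Toolbox server as JSON.
# ASSUMPTIONS: the base URL, endpoint path and form id below are placeholders;
# check your own Kobo server's API documentation for the exact values. An API
# token (from your account settings page) is needed for authentication.
import requests

KOBO_API = "https://kc.kobotoolbox.org/api/v1/data"  # assumed/legacy endpoint
FORM_ID = "12345"                                     # hypothetical form id
TOKEN = "your-api-token-here"                         # personal API token

response = requests.get(
    f"{KOBO_API}/{FORM_ID}",
    headers={"Authorization": f"Token {TOKEN}"},
    params={"format": "json"},
)
response.raise_for_status()

# The endpoint is expected to return a list of dicts, one per submitted record.
submissions = response.json()
print(f"Downloaded {len(submissions)} submissions")
for record in submissions[:5]:
    print(record.get("household_size"), record.get("water_source"))
```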

Course requirements

  • basic understanding of Excel
  • a good smartphone running Android 4.0 with at least 1 GB of free storage
  • a good understanding of how to design a survey questionnaire
  • a Kobo Toolbox account (you can create one here)

Course Outline

Module 1: Creating your Data Collection Form (Excel)

Module 2: Uploading and Testing your forms using Kobo Toolbox

Module 3: Setting up and using your forms on your Android device

Module 4: Managing your data using Kobo Toolbox


Happy Birthday, Data Expeditions! Some reflections.

Lucy Chambers - November 10, 2015 in Data Expeditions

10th November marks the 3 year anniversary of the very first data expedition. What have we learned in the last 3 years?

Anyone who followed School of Data closely in the early years knows that originally the focus of the project was online. This is the story of how and why the project moved away from prioritising its online offering to rely heavily on a network of humans to do the work. There is a diversity of views on this subject within the School of Data network; this is my take.

Musings about materials

Let’s talk for a second about why writing materials for data skills training is particularly tricky.

1. Tool volatility

You may be merrily using a tool one week and the next, it has been killed off. People were still grumbling about the loss of Needlebase several years later. Companies also changed their offerings substantially (e.g. ScraperWiki), and materials quickly went out of date. We couldn’t keep up.

I felt strongly that one of School of Data’s tasks was to make the world of data tools less overwhelming: to show that you could do a lot with only a few key tools. We picked some staples — easy tools you could do a lot with.

New tools and services are appearing every day. Many are old wine in new bottles — but some are very impressive. Evaluating when it makes sense to move from an old favourite to something new is time intensive in and of itself, let alone writing training materials for them.

2. Software discrepancies

Through early user tests we discovered that the diversity of software used for even basic tasks such as spreadsheets was very large. Even if we wrote a tutorial for one piece of software, e.g. LibreOffice, the differences between similar programmes, e.g. Excel or Google Docs, were just great enough to leave learners entirely stuck if they were using anything but the one we had written it for.

3. No two organisations ever want to do exactly the same thing

The direction of teaching materials falls somewhere on a spectrum between closely tailored to an individual use case and open ended general principles.

At one end: the handholding, instructive walkthrough. Pros: Very easy to follow. Excellent for beginners. Cons: Interesting for a very narrow audience. Doesn’t encourage the learner to think creatively about what they could do with those skills. Breaks very easily as soon as anything about the service you are using changes.

At the other end: general principles, e.g. “mapping” (vs “using X tool to create maps”), and open-ended challenges. Pros: Don’t need updating as often. Encourage learners to think more broadly about how the skills they use could be applied. Cons: There needs to be some way for the user to make the leap from general principle to concrete implementation.

When you are supporting organisations to find stories in data or use data to support their advocacy, no two organisations will ever have exactly the same questions. This makes it very hard to find a common set of materials for them.

4. The resource question $$$

Creating teaching materials for any topic is a lot of work. In the early days of School of Data, we were 2-3 people.

Don’t get me started on how much work it is to produce a MOOC. We dabbled in these for a while. I’ve personally taken part in some good ones and partners have had some success with them, but the problem for School of Data was that with our resourcing level, it would have been putting all of our eggs in one basket very early on in the project.

We needed more time and flexibility to experiment with different formats, to see what would work for our specific target audience.

5. The feedback problem

There was a feedback problem with online materials: we had no idea whether the people we were reaching online were the ones we were targeting. In the early days, we really only ran workshops to get feedback for a more online approach. We got the best feedback from participants at the test workshops we did in person; feedback which came through the website was sparse.

Then something happened…

Enter the dragon: the beginning of Data Expeditions


It’s 10th November 2012 and I’m surrounded by nerds in sparkly capes. This is Mozilla Festival (MozFest) — a playground for new ideas that have something to do with making use of the web in creative and fun ways.


A few months earlier (on the day of the MozFest submission deadline) my colleague, Friedrich (in the green hoodie and silver cape above) had lamented that it was really hard to teach investigative skills in an interesting way. Michael Bauer (star cape, far left), from the School of Data team, happened to be in town visiting.

We agree that we should try and find a way of including investigations in the session. A far cry from the carefully planned tutorials with perfectly aligned practice data, participants would get a taste of reality… In the wild, there is no-one to clean your datasets for you. What we need now is a way to get other people to help each other through the mires and holes that participants will inevitably find themselves in.

Friedrich and Michael start nerding out about how cool it would be to model a session on Dungeons and Dragons. Confession: to this day, I have never played D&D. Nevertheless, I catch enough of their gist to gather that it is some kind of role-playing game, and there are dragons — how wrong can it go?


We decide that if this idea is going to work anywhere, it’s going to be at MozFest, whose open minded guinea pigs — sorry, participants — are usually up for a laugh. We have a name, “Data Expeditions”, now we just have to work out how to facilitate a session with an unknown number of people, with unknown skillsets, and a mostly-hypothetical internet connection.

Bring it on! Worst case scenario: I’ll dress them all in something ridiculous and we’ll clown around to camouflage the parts of the session that don’t work.

Crunch time…

Head count: approx. 60 - far more than expected

Skillset balance: good to excellent

Internet connection status: quaint

I won’t elaborate too much on the process of how a data expedition works as that is covered by the (now ancient) Guide for Guides.

But the principle is simple: all teams start with a question, e.g.

  • “The life expectancy in Botswana all of a sudden dropped sharply at a particular point in time. What was the reason?” or
  • “Who really owns these mines in the Democratic Republic of Congo?”

The facilitators then guide them as far along the data pipeline as they can get in the allotted time.

The data pipeline (source: Spending Data Handbook)

At the end, people present whatever they can. Any output is valid: a clean dataset, a full data visualisation, a paper sketch of what they would have done had they had the time/resources/skills, or even a list of problems they experienced.

Back in the room

I’m astounded by the number of people who have come to the session, the room is packed and … it somehow appears to be working…?!

…People are asking each other when they don’t know how to do something and actually producing results. It’s absolute bedlam and incredibly noisy but it’s working!


Learnings from data expeditions

our inkling was that the only way to really teach data skills was to confront people with a mountain. By forging [their] own path […] data explorers can pinpoint the extra skills they need to develop in order to scale new obstacles, map their own journey and ultimately to tell their own story. The answer may be at the top, but there are multiple routes to the summit – and each will offer a fresh view over the landscape.

Followup blogpost to the first data expeditions

After MozFest, we went on to lead many data expeditions around the world. We had to adapt to many different things: knowledge levels, time constraints, participants who really wanted to get a specific thing from the expedition.

Here is my rundown of the major discoveries from that period:

Number 1: It is very hard to predict what someone will learn from a data expedition – but they will learn something

Everything depends on the course the group takes. It’s hard to know how far the group will even get.

If you are trying to teach a specific skill in a workshop, you either need to stage parts of the expedition very carefully (possible, but lots of work) or, you should probably pick another format.

Number 2: The right people are important, but they’re not the ones you might think.

Most important skillset: topic expertise — you can do a huge amount with basic tools, even if there are no advanced engineers or analysts in the room. All you need is one or two people who have a deep understanding of the topic area. If you are low on data-chops in the room, you’ll need to be more hands-on as a facilitator and probably spend more time helping people to google things. Don’t let it become too much about you showing them things; try to encourage the same self-sufficiency as if they were genuinely on their own.

Number 3: Online expeditions can be hairy, but you can make them work.

Online expeditions are particularly facilitator intensive, because people don’t keep the same level of focus as they do in person. Even if they are engaged at the beginning, their attention wanes… they end up in Buzzfeed listicle rabbitholes. For longer expeditions, it’s hard to gauge availability and whether people are stuck. The poor stuck people are left hanging as the only person in their group who can help goes to have a bath or pick up their kid from kindergarten.

The most successful expeditions we ran online were short, a couple of hours to a day max. Both online and offline, a short timeline helps to focus people on their desired outcomes.

Unexpected side effects of data expeditions

Both at MozFest and in the online version, people were forced to spend time with people they wouldn’t normally mix with. I remember one girl coming up to me and saying, entirely out of the blue:

“I’ve never spoken to a coder before!”

Also online, while a lot of the groups entirely disintegrated, some people used the group structure we had set up to stay in touch or ask for help on data or tech issues long beyond the date that the data expedition was scheduled to finish.

Data expeditions were more than just a teaching tool, they brought people together in a way that working alone on a problem or exercise never could.

The final balance

The success of the Data Expeditions and other in-person formats like Data Clinics or targeted workshops meant that School of Data moved away from being a solely online learning mechanism to one which favoured human interaction.

School of Data did still produce materials, and as community members attest, they are a core part of the identity, but the English resources were usually produced “on demand” when an event was coming up which needed them.

The focus on in-person training also changed the nature of what we produced: more lesson plans and materials suited for in-person training.

As the reputation of School of Data grew, the demand for in-person training did too. This is the reason the fellowship was born and that a lot of what School of Data currently does is skillshare. It is much better for people to learn in their own language, taught by people who understand local contexts than for a small group of Europeans to fly around the world pretending to know everything.


  • Get yourself some foundational resources so that you can react quickly to common requests for training.
  • Instead of developing material for every topic on the planet, tailor existing resources to specific audiences you are going to work with. If you are working with a budgeting group from Nepal, use budget data from Nepal if you can get it. If you can’t get it, at least use something locally relevant.
  • Find yourself some trainers with big ears, who listen more than they talk. A teacher’s job is to understand the problems people are having and provide solutions which are appropriate for them — not to deliver pre-packaged solutions.

Materials are important for sustainability. They can quickly be picked up, translated and shared all across the world. But nothing compares to the reality check that comes from being with the users of those materials in person to make sure you are keeping the project on the right lines.


Some of you will have noticed that I promised to write a 5 part series nearly 6 months ago now and have so far produced only 2/5 posts.

The rest of these posts have been sitting on my hard drive, festering, and I have been too deliberative to finish them.

On 13 September 2015, my friend and School of Data colleague Michael Bauer, procrastination exterminator and cattle-prod extraordinaire, tragically and unexpectedly passed away.

Michael could not stand procrastination, and never allowed anyone around him to engage in it.

I couldn’t think of a more fitting tribute to you than actually finishing this, Michael. I hope you realised how much things moved forward because of you.

A version of this post appears on Tech to Human as part of the 5 years worth of learnings series.


Making open data accessible to data science beginners

Nkechi Okwuone - November 6, 2015 in Data Blog, Fellowship

If you’re reading this, I suspect you’re already familiar with open data, data science and what it entails. But if that’s not the case, fret not, here are a few beginner courses from School of Data to get you started.

As new data scientists, we need easy access to substantial, meaningful data without the restrictions of cost or licenses. It’s the best way to hone our new skillset, get objective answers to questions we have and provide solutions to problems. This is a fact that has been acknowledged by leading data scientists. So, how can new data scientists get easy and timely access to this type of data?

Open Data Companion (ODC) is a free mobile tool created to provide quick, easy and timely access to open data. ODC acts as a unified access point to over 120 open data portals and thousands of datasets from around the world, right from your mobile device, all crafted with mobile-optimised features and design.

ODC was created by Utopia Software, a developer company being mentored by the Nigerian School of Data fellow in the open data community of SabiHub in Benin City, Nigeria.

We believe ODC successfully addresses some key problems facing open data adoption; particularly on the mobile platform.

  • With the growth of open data around the world, an ever-increasing number of individuals (open data techies, concerned citizens, software developers and enthusiasts), organisations (educational institutions, civic duty and civil society groups) and many more continually clamour for machine-readable data to be made available in the public domain. However, many of these interested individuals and organisations are unaware of the existence of relevant portals where these datasets can be accessed and only stumble across these portals after many hours of laborious searching. ODC solves this problem by providing an open repository of available open data portals through which portal datasets can be accessed in a reliable yet flexible manner.

  • The fact that mobile platforms and mobile apps are now a dominant force in the computing world is beyond dispute. The number of mobile apps used on a daily basis and their usage rates continue to grow rapidly. This means that mobile devices are now one of the easiest and fastest means of accessing data and information; if more people are to be made aware of the vast array of available open data producers, the open data at their disposal and how to use them, then open data needs a significant mobile presence with the mobile features users have come to expect. ODC tackles this problem effectively by providing a fast mobile channel with a myriad of mobile-optimised features and an easy design.

What can ODC offer data scientists? Here’s a quick run-through of its features:

  • access datasets and their meta-data from over 120 data portals around the world. Receive push notification messages when new datasets are available from chosen data portals. This feature not only ensures users get easy access to the data they need, but it also provides timely announcements about the existence of such data.


  • preview data content, create data visualisations in-app and download data content to your mobile device. The app goes beyond a simple “data browser” by incorporating productivity features which allow users to preview, search and filter datasets. Data scientists can also start working on data visualisations like maps and charts from within the app.


  • translate dataset details from various languages to your preferred language. This feature comes in really handy when users have to inspect datasets not provided in their native language. For instance, when investigating the state of agriculture and hunger across Africa, available datasets (and meta-data) would be in different languages (such as English, French, Swahili etc). ODC helps to overcome this language barrier.

  • bookmark/save datasets for later viewing and share links to datasets on collaborative networks, social media, email, sms etc., right from the app.

Armed with this tool, novice data scientists, and our more experienced colleagues, can start wrangling data with greater ease and accessibility. Do you have ideas or suggestions on how ODC can work better? Please do leave a reply!


School of Data is part of the 19 million project!

Camila Salazar - November 2, 2015 in Data Journalism, Fellowship, Uncategorized

How can a diverse team of people, with different backgrounds from around the world, work together to find new ways to tell the story of hundreds of thousands of refugees who are migrating to Europe? How can they build new narratives that can help these people make a safer, better journey? How can they articulate possible solutions using technology?

Those are the questions that the 19 million project, an initiative of Chicas Poderosas and La Coalizione Italiana Libertà e Diritti (CILD), will try to address over the next two weeks in Rome. The project will bring together journalists, programmers, designers and human rights activists from different countries to work in teams from November 2 to 13.


School of Data fellow from Costa Rica, Camila Salazar, is present in Rome representing School of Data to help with the discussion, bring new ideas and work on specific data projects related to the refugee crisis. We invite you to follow the project and get involved in this initiative! You can follow the activities and contribute fresh ideas on Twitter (@19mmproject) or Facebook.


Data literacy research: update and OGP sessions

Mariel García - October 27, 2015 in Impact, Workshop Methods

Announcement: We will be presenting the preliminary findings of our data literacy research at the OGP convening in Mexico City. We are leading a knowledge café session on this topic on CSO day (Tuesday the 27th at 2, classroom C9) and participating on mySociety’s panel on research and digital democracy during the Summit (Wednesday the 28th at 4, also at classroom C9). We’ll be happy to see you there! 

As we shared a few months back, School of Data is working on a research project to understand data literacy efforts around the world. We are using a framework informed by the principles of action research. We have conducted a series of semi-structured interviews with relevant stakeholders, and have collected literature, existing research and resources that help illuminate effective methodologies that are in use. This is currently being analysed and written up with the goal of improving data literacy practice in the short term and informing efforts to provide data literacy in the long run.

While we are still in the process of putting the final touches on our research paper, we want to share a few facts from our preliminary findings…

  • Context: much data literacy work is independent from tools, and has to do with the ability to understand the context of data. How it came to be, where it is to be found, how it can be validated, what lines of analysis are worth exploring.
  • Data pipeline: The School of Data data pipeline has been the most recurring concept in interviews, even among actors outside the School of Data network. This finding has prompted us to start digging deeper into how this concept came to be and why the data literacy community finds it useful.
  • The role of soft skills: The level of comfort and confidence of beneficiaries when working with data is mentioned often, which could be an indication of the importance of looking beyond data literacy and into pedagogical resources, to ensure data literacy work is designed around tactics that promote such environments (or “academic mindsets”, as described in one of the interviews).
  • Beneficiaries: The people we interviewed are either focusing their efforts on getting journalists to make better use of data in their reporting, or organisations and individuals to make better use of data in advocacy that will lead to social change.
  • Experiential methodology: Often it’s about providing people with a dataset and getting them to develop a story from it; other times, it’s hands-on training addressing different parts of the data pipeline. Most interviewees so far have placed an emphasis on the importance of actually identifying and working with data sets.
  • The length of each data literacy process varies. Larger and older organizations favor intensive, long term processes with relatively few beneficiaries; smaller and younger organizations or individuals favor short-term trainings to reach larger audiences.

We will keep you all posted as this process evolves. That said – if you want to add some input, it’s still a good time to take the survey. If you’d like to get in touch with the people behind the research, you can reach us at dataliteracy [at] fabriders [dot] net.


The Latin America open data community speaks loud

Camila Salazar - October 22, 2015 in Community, Data Stories, Events, Fellowship

Last September the open data community in Latin America gathered in Santiago de Chile for the two most important events in the region to talk and discuss open data. Since 2013, Abrelatam and ConDatos have been a space to share experiences and lessons learned and to discuss issues regarding open data in Latin America.

In this third edition, hundreds of people from the region came to Chile, showing that the open data community has a lot of potential and is continuously growing and getting more involved in the open data movement.

As a fellow of School of Data, this was my first time at Abrelatam and ConDatos, and it was a great experience to meet the community, exchange ideas and learn from all the different projects in the region. I had the opportunity to connect with journalists, civil society and technology groups that were working on amazing open data initiatives.

Since there was a lot of interest in learning new tools and working specifically with data, there was also a training track at the conference, with several workshops about data analysis, data cleaning, data visualization and access to public information, among others. School of Data had three workshops with ex-fellows Antonio Cucho (Perú) and Ruben Moya (México), and myself as a current fellow from Costa Rica. The attendees were excited and interested in learning more.


In the past years I’ve been working mainly as a data journalist in Costa Rica, but I had never had the chance to meet the community that shared my interests and concerns. This is what makes Abrelatam and ConDatos most valuable. It helped me learn how things and data projects are done in other countries and see how I can improve the work I’m doing in my own country.

We all have similar issues and concerns in the region, so there’s no point in trying to fix things by yourself if you have a huge community willing to help you and share their lessons and mistakes. On the other hand, as a School of Data fellow I was given the opportunity to share my knowledge with others in data workshops, and it was a great way to show people from other countries the work we are doing in School of Data, helping build data literacy in civil society.

The most important lesson learned from these four days in Chile is that there’s an eager movement and a growing need to work together as a region to make data available and push the open data agenda with governments. There’s no doubt the region speaks loud and is creating a lot of noise worldwide, so it’s in our hands to keep up and innovate as a community!

If you are interested in learning more about the projects, here’s a list of the projects that participated in AbreLatam in 2014 (the 2015 list will be ready soon!).



The State of Open Data in Ghana: Policy

David Selassie Opoku - October 20, 2015 in Fellowship, Policy


2014 choropleth of Open Data Barometer Readiness and Impact


I joined the School of Data in April as one of the fellows for 2015. As a data scientist and software developer who had moved back to Accra in August 2014, after 8 years of being away from school, I wanted to understand the key stakeholders of the open data community and what role I could play in strengthening their work. I wanted to know what the State of Open Data in Ghana was.

Taking the pulse of any community, especially at a national level, is never simple and will always be filled with degrees of subjectivity. This, coupled with a young global Open Data movement, introduces challenges in identifying the right stakeholders, who themselves are still trying to understand whether and where they fit into this nascent ecosystem.

In trying to assess the state of the Ghana open data community, I looked at three main areas: Policy, Research and Innovation, and Capacity-Building.

I will be sharing my thoughts around these 3 areas over a series of blog posts. With these, I hope to start a conversation around the Open Data movement in Ghana which leads to more collaboration and innovation. So for this first post, I will talk about the State of Open Data in Ghana from a policy perspective.

Open Data Policy in Ghana


Ghana Open Data Initiative portal

Ghana Open Data Initiative

Search for the term “Open Data Ghana” on any search platform and you will be presented with a list of links on initiatives and events — portals, conferences, hackathons, grants etc — dating back to 2010 and 2012. First among these is one for the Ghana Open Data Initiative (GODI), a platform created to release public data sets for easy access and use by ordinary citizens.

The origins of the Open Data movement in Ghana can be traced back to a Web Foundation project in August 2010. This established an initial partnership with the government of Ghana through the National Information Technology Agency (NITA), which eventually served as the agency responsible for implementing GODI. GODI was created in 2012 as a platform and framework to promote the release of government data for public re-use. It was

“to promote efficiency, transparency and accountability in governance as well as to facilitate economic growth by means of the creation of Mobile and Web applications for the Ghanaian and world markets.”

The vision was to start off with a repository of government data from which journalists, developers, advocacy groups and citizens could access for numerous civic, social and economic benefits. With this came several hackathons and workshop by organisations to unleash the power of these data sets through capacity-building, research and innovation.


Right to Information Law

GODI is a major endeavour, and in its infancy it will lack many data sets that ideally should be readily available to the public. In such cases, interested parties should have the ability to request the release of specific data from public institutions. This is where the Right to Information (RTI) Law comes into play. Other names for this are the Freedom of Information (FOI) law and the Access to Information law.

Efforts to pass an RTI law in Ghana have been ongoing for about 13 years. However, there is growing work by advocacy and media groups, parliament and ordinary citizens to ensure the passing of a law. After many years of consultation, the Select Committee on Constitutional, Legal and Parliamentary Affairs “advanced an amended right to information bill for consideration by the full Parliament.” This means that as of October 10 2015, Ghana still has no RTI law! In order to strengthen the Ghana open data movement, it is important to have the RTI law in place as a tool for open data enthusiasts to request access to relevant data.

The effort to pass the RTI law in Ghana has been long, and it is worth highlighting the continued work by the many advocacy organisations and individuals invested in making this a law:

There are many more advocacy groups and individuals, not listed above, who have contributed to advancing the RTI bill to this point. Their work continues to be essential and is worth supporting. If you know of any, please do share.

The way forward

What is the way forward with regards to policy? Ghana’s Open Data movement is young, and this means there is a lot to learn, understand and implement to reach the standard of a world-class open data community. Ensuring that the right laws and mandates are in place and executed is key to creating the foundation for stakeholders to research, innovate and build capacity with open data. Taking the steps to implement GODI is a great start. However, GODI is still lagging behind. As of this writing, the data portal is still down from when I first noticed it at the end of August, which does not help in building the reputation of the Ghana Open Data community. I hope the portal comes back online soon, with a well-defined strategy to improve access to quality data sets and tools.

With regards to the RTI bill, the great efforts by some of the advocacy groups mentioned above will eventually get this law passed. It is important that journalists and citizens remain invested in this issue in order to give it the necessary attention to be passed.

In the next post in this series, I will talk about the State of Open Data in Ghana from the research and innovation perspective.


Less buzzword, more engagement: Event report from School of Data and UP Politica

Sheena Carmel Opulencia-Calub - October 9, 2015 in Event report, Fellowship

On September 28, 2015, School of Data, in partnership with the University of the Philippines People-Oriented Leadership for the Interest of Community Awareness (UP POLITICA), organized a Forum with the theme “Open Data, Open Government and Freedom of Information: Effects on the Political Landscape of the Philippines” at the CM Recto Hall, UP Diliman, Philippines.


Legally open + technically open = open data

The first speaker was Mr. Gabriel Baleos, Program Manager and Policy Head of the Open Data Philippines Task Force. In his discussion, he explained the dilemmas in getting data from various government agencies and having them in machine-readable formats, e.g. .xls, .csv and .json, which can easily be re-used by citizens. Gabe characterized Open Data as being legally open – placed in the public domain or under liberal terms of use – and technically open – data published in electronic formats. The Philippines has a Cabinet Cluster on Good Governance and Anti-Corruption, and part of this cluster’s initiatives is the Open Data Task Force, which is in charge of the Open Data portal.

Ms. Happy Feraren, 2014 School of Data fellow and CEO of gave her insights on the role of civil society organizations in promoting Open Data and Freedom of Information. Happy shared the undertakings of in engaging with ordinary citizens, students, journalists and local government staff through their Local Government Scorecard – a survey done by students across different Metro Manila local government offices to score their processes related to starting a business. She also posed the challenge that citizens should be proactive in using the data being opened up by the government for access and use; otherwise, Open Data is useless if no one is analyzing it.


Forum speakers Gabe Baleos of the Open Data Task Force, Sheena Opulencia-Calub of School of Data and Happy Feraren of

Freedom to know as the new buzzword

In my discussion about Open Data and Freedom of Information, I reinforced that Freedom of Information is not something new: for the Philippines, the freedom to know has been in our constitution since as early as 1973. It has become a new buzzword because of the Freedom of Information Act, which has not been passed by our lawmakers. While most people think that Open Data is an alternative to Freedom of Information, I stressed that we still need a law that will support the advocacy of making government agencies share their data and information for public use.

During the open forum, there was a good discussion on how to build government staff skills relevant to open data and freedom of information. All speakers agreed that there must be cooperation among civil society organizations, the academic community and the government in organizing learning activities for government staff who produce and use the data. There was also a good discussion on how Open Data can be used to ensure the provision of basic services to the people.


UP POLITICA with Happy Feraren (first row, 1st to the left), Gabe Baleos (first row, 2nd to the left) and Sheena Opulencia-Calub.



School of Data Fellows: What Are They Up To?

Meg Foulkes - October 8, 2015 in Fellowship

Our brilliant 2015 School of Data Fellows are a busy bunch! We asked them to reflect on the first half of their fellowships; here’s a roundup of just a few of the highlights:

  • Camila has run numerous training events, working with Abriendo Datos Costa Rica and with Costa Rican university students. She has also run two data expeditions and a workshop at the NGO Festival FITS in Mexico City – in total, Camila has trained 177 participants! Camila looks forward to engaging wider audiences of Costa Rican NGOs and journalists in data-literacy training during the remainder of her fellowship.

  • In Macedonia, Goran has been making great progress on the Open Budgets project and work is underway with the Metamorphosis Foundation on upgrading their ‘Follow The Money’ website. He has also been busy finalising contracts with the winners of the Open Data Projects competition and facilitating their kick-off. Goran is also finalising his first skillshare on TimelineJS, which we look forward to!

  • In Nepal, Nirab has responded to the devastation caused by April’s earthquake by providing all manner of data-related support, working with a host of CSOs, INGOs, government agents, technologists, journalists and researchers. He has a particular interest in post-disaster transport management and has trained 78 road engineers in OpenStreetMap, who are applying this knowledge across 36 different districts of Nepal!

  • In Ecuador, Julio has been busy preparing a workshop for Campus Party Ecuador 2015, a fantastic technology festival kicking off later this week. He has also been collaborating recently with Innovation Lab Quito on an exciting upcoming training event in October and also with SocialTIC and the Ecuadorian Journalist Forum on an event planned for November.

  • Nkechi attended the Africa Open Data Conference (AODC) in Tanzania recently, where she did some fantastic networking at the School of Data booth. She also organised an Open Data Workshop for approximately 25 Tanzanian CSOs and journalists at the conference, comprising skill shares on data advocacy, finding and verifying data, the data pipeline, scraping and visualizing. Nkechi looks forward to consolidating her work in strengthening the Nigerian data-literacy community in the coming months of her fellowship.

  • In the Philippines, Sheena has worked extensively on data skills for effective disaster response, organising successful training events in Northern Mindanao and Leyte with a total of 77 participants. She recently participated in the Forum on Open Government Data organized by the Knowledge for Development Center, which provided powerful insights regarding School of Data’s role in supporting the Open Data movement. Sheena is focused on extending her network of local NGOs and media actors in the coming months, as she makes progress towards her goal of establishing a local School of Data instance.

  • In Ghana, David has hosted several workshops, including a data scraping workshop with Code for Ghana, and another during the Africa Open Data Conference with fellow School of Data and Code for Africa colleagues. He has presented two online skillshares on Data Scraping and R programming which have received very positive feedback! David is currently organising the first H/H Accra meetup. He intends to focus on data journalism for the rest of his fellowship, in anticipation of the national elections that will happen in Ghana next year.
