reading time: 12 minutes
The creation of a Data Responsibility Process at 510 – A brief history
Before the digital era
Before the use of the Internet, humanitarian aid organisations mostly exchanged information in person, on paper, by phone or by fax. Time delays in relaying information, responding to requests, or manually editing records and maps were considered standard procedure.
After the digital era
The Internet introduced a digital transformation in the early eighties, which was followed by several major developments in mobile communications in the mid-nineties.
As a result, the data generated have become increasingly accessible to those digitally connected, anywhere in the world, at any time and with any device. This transformation has allowed data to flow across systems and country borders and has resulted in widespread duplication of the same data.
Legislation around Data protection: GDPR
The caveat of this digital transformation concerns the privacy and protection of data.
Privacy is a fundamental human right, forming the basis for human dignity and other values. One of its forms, “the privacy of information/data”, is guided by privacy laws and regulations focusing on data protection.
Protecting the privacy of individuals has been addressed since the nineties by many laws, most notably the Data Protection Directive 95/46/EC, but these were of the nature “ask for forgiveness, not permission”.
The EU General Data Protection Regulation (GDPR) replaced the Data Protection Directive 95/46/EC on 25 May 2018. It has been designed to harmonise data privacy laws across Europe, to protect and empower all EU citizens’ data privacy, and to reshape the way organisations approach data privacy.
With the GDPR, the response to the earlier Data Protection Directive is simply: “ask for permission, not forgiveness”. Although this regulation is about protecting data, it does not address:
- any specific field of application or sector;
- any (IT) technology to achieve data protection, as the regulation is “technology neutral”; and
- the responsible handling of data, for example by addressing ethical considerations.
Even after the GDPR took effect, humanitarian aid organisations did not automatically comply in full from day one; rather, they made incremental steps towards compliance over time.
Complying over time is in essence also a type of transformation: an organisational transformation involving data, people and procedures. For humanitarian aid organisations this transformation may or may not be required, depending on the type of activity.
For example, in the context of providing immediate disaster relief aid or vital aid, many humanitarian aid organisations are exempt from complying with these privacy laws; adhering to and executing cumbersome processes for information privacy would simply slow down the provisioning of aid at the time of need. In this case an organisational transformation would not be required at all.
However, in the context of providing long-term recovery aid, where the speed of aid response is less critical, these organisations need to comply with the privacy laws and so need to undergo an organisational transformation.
What about cases where humanitarian aid organisations have the intention to comply, irrespective of the type of activity they perform, but may not be able to comply over time? We believe there are several main reasons for this:
- Procedures may be unknown, unclear or not yet developed in organisations;
Where procedures do exist in organisations, they may be:
- Difficult to follow up on consistently due to a lack of financial or human resources;
- Too complex and therefore too time-consuming to execute;
- Impractical, as the required information processing systems and/or communications networks have not been implemented yet or are not yet operational; and
- Challenged by donors, who demand maximum transparency and disclosure of information about those being helped.
There are several approaches to address these challenges:
1. recruiting volunteers to explore and provide the required expertise, help develop procedures, and then share any best practices with other organisations;
2. (in addition to 1.) temporarily recruiting paid staff with a specialised background in data protection, IT, international law, etc.;
3. replacing or simplifying any existing procedures altogether; or
4. not complying with any existing procedures by default.
Replacing, simplifying or not complying with procedures will inherently create risks to people’s dignity and an organisation’s reputation, and may lead to a reduction in donor funds.
We advise developing practical procedures bottom-up and learning from others, by recruiting volunteers and/or paid staff.
As 510’s vision is to use (big) data to enable faster and more (cost-)effective humanitarian aid, it is vital that the opportunities of using data go hand in hand with ethical standards on how to use data responsibly.
We define Data Responsibility as “the responsible usage of data (including collection, storage, processing and dissemination) with respect to ethical standards and principles in the humanitarian context, bearing in mind potential consequences and taking measures to avoid putting individuals or communities at risk”.
How do you create a Data Responsibility Process that provides practical guidance and simple-to-follow procedures for the team members of 510?
Data Responsibility Process in 2017
In April 2017, a Data Responsibility Project Team of several volunteers and one paid staff member with a multidisciplinary background was formed within 510, with the goal of creating a concise and practical policy that would ensure the responsible use of data in our daily work. The result was a concise document, delivered in November 2017, that incorporated seven core principles of data responsibility which we believed to be relevant when processing data in our projects: “Do no harm”, “Purpose specification”, “Respect for the rights of the data subjects”, “Legitimate and lawful use”, “Minimisation”, “Data security” and “Data quality”. Each of these principles is equally relevant, and collectively they cover respecting humans, the legitimate, lawful and fair use of data, and the quality and security aspects of data.
Lastly, the policy was complemented by a separate checklist for simplified application of the policy to specific projects, and by guidelines for a threat and risk assessment in case the checklist raised ‘red flags’ (i.e. highlighted potential risks).
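To illustrate the mechanism (a hypothetical sketch; the questions and flag logic are our own illustration, not 510’s actual checklist), such a checklist can be thought of as a set of yes/no questions, where certain answers raise a ‘red flag’ that triggers a threat and risk assessment:

```python
# Hypothetical data responsibility checklist: each question is paired with
# the answer that raises a "red flag" (i.e. indicates a potential risk).
CHECKLIST = [
    ("Does the project process personal data?", True),
    ("Is data shared with external parties?", True),
    ("Has a legitimate purpose been specified?", False),
    ("Is the data stored on an approved system?", False),
]

def red_flags(answers):
    """Return the questions whose answers indicate a potential risk."""
    return [q for (q, risky), a in zip(CHECKLIST, answers) if a == risky]

# Example: personal data is processed, nothing is shared externally, a
# purpose is specified, but storage is not yet approved -> two red flags,
# so a threat and risk assessment would be needed.
flags = red_flags([True, False, True, False])
print(len(flags))  # 2
```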
Awareness of Data Responsibility was created through several interactive workshops for the International Aid department of the NLRC and the IFRC, blog posts, and a team session during 510’s Team day.
Data Responsibility Process in 2018: Continuous improvement
Soon after the release of the first version of the policy, awareness of the data responsibility topic was clearly there in the team, but there was no uptake in applying the policy document, the checklist and the threat & risk assessment documentation in 510’s projects. What exactly was causing this? How and where would we need to make changes to improve the policy’s uptake?
We explored these questions in more detail by following a “continuous improvement life cycle process” comprising four stages: Assess, Design, Implement and Evaluate.
One-on-one interviews were conducted with team members to capture their feedback and observations on the added value and shortcomings of the first release. This formed the baseline for a redesign of the policy document and its attachments during the design stage.
During the assess stage we observed and learned that although the first version focused on the use of personal data and DII, in practice the team was only processing anonymous data. If the team were to process personal data, they would not know where to store it, because the IT functional requirements for the storage systems at 510 were still being defined and agreed. Identifying suitable IT systems would also mean addressing specific questions on roles and responsibilities, such as: “Who may have access to the personal data?”, “Who manages the access control?”, etc.
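Questions like these can be made concrete as a simple role-based access mapping. The sketch below is purely illustrative (the role names and permissions are our assumptions, not 510’s actual configuration):

```python
# Hypothetical role-based access mapping for personal data.
# Roles and permissions are illustrative only.
PERMISSIONS = {
    "data_owner":     {"read", "write", "grant_access"},
    "data_processor": {"read", "write"},
    "data_curator":   {"read"},
    "volunteer":      set(),  # no access to personal data by default
}

def may_access(role, action):
    """Check whether a role is allowed to perform an action on personal data."""
    return action in PERMISSIONS.get(role, set())

print(may_access("data_processor", "read"))      # True
print(may_access("volunteer", "read"))           # False
print(may_access("data_owner", "grant_access"))  # True
```

Writing the mapping down, even in this toy form, forces the questions “who may access?” and “who manages access control?” to be answered explicitly before personal data is stored.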
Acknowledging the feedback from the team during the Assess stage, we put emphasis on:
- providing an overview of current and new storage systems;
- outlining the roles and responsibilities typically associated with processing data in 510;
- designing icons for each of the 7 principles;
- revisiting the supporting materials for the policy document, i.e. the checklist document, the threat and risk assessment document, data sharing agreements, etc.
We provided an overview of all storage systems currently in use as well as those planned. Their main functionalities and key purposes were described, enabling users to select an appropriate storage system for their data. Microsoft Azure was selected as the central repository system, with supporting online collaboration tools such as MS Teams and MS Planner. After MS Azure was implemented and a channel structure and naming convention were realised by the end of March, we started migrating our data from the other platforms.
We associated the processing of data with “data-specific roles” within 510, such as those for curating, owning, controlling and processing personal data. These were added to a new design of the data life cycle in the form of process diagrams.
In order to easily refer to the 7 principles throughout the policy, we designed icons for each principle:
Lists of tools, templates and practical methods were added to the policy document to help in documenting the metadata of datasets, sharing data between parties and protecting data. The process diagrams described each stage of the data life cycle to help users execute the required steps in the right sequence.
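As a hypothetical example of what such a metadata template might capture (the field names below are our own illustration, not 510’s actual template), a dataset record could look like:

```python
from dataclasses import dataclass, field

# Hypothetical dataset metadata record -- field names are illustrative,
# not 510's actual template.
@dataclass
class DatasetMetadata:
    name: str
    purpose: str                  # "Purpose specification" principle
    contains_personal_data: bool  # drives "Do no harm" / "Minimisation" checks
    data_owner: str               # who is accountable for the dataset
    storage_system: str           # e.g. the approved central repository
    retention_months: int         # how long the data may be kept
    shared_with: list = field(default_factory=list)  # parties with a sharing agreement

record = DatasetMetadata(
    name="shelter-damage-assessment",
    purpose="Prioritise long-term recovery aid",
    contains_personal_data=False,
    data_owner="project-lead",
    storage_system="central-repository",
    retention_months=24,
)
print(record.contains_personal_data)  # False -> anonymised data only
```

Filling in such a record per dataset makes it easy to check, at a glance, whether the purpose, ownership, storage and retention of the data have been documented before processing begins.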
We also revisited the terminology and definitions in the policy and closely aligned them with the terminology used in the GDPR.
Implement: The new release of the policy has been in use at 510 since 5 October 2018. As part of its implementation, a steering committee comprising members of 510 and the NLRC was formed. Every month this committee will discuss and evaluate the experiences of users applying the policy in their data projects. “Best practices”, “practices worth replicating” and the advice given will be captured over time and will help in updating future releases of the policy in a targeted way.
Evaluate: Every six months the insights gained from the implementation stage will be used for updating and improving the policy. We will then go through the continuous improvement life cycle again, starting with the assess stage.
Several topics we would like to explore and address as part of future releases of the policy:
- guidelines for evaluating the ethics of machine learning algorithms in our data projects;
- aligning and referring to (updated) templates developed by the NLRC;
- referring to supporting materials such as instructional videos and infographics for quick introductions to the data responsibility work, and
- development of online training materials with the University of Groningen.
We would like to thank all internal and external reviewers for their valuable feedback and observations, helping us to improve 510’s policy for processing data responsibly.
We hope that this policy will contribute to ongoing policy debates and exchange of experiences and best practices – thereby encouraging the responsible use of data in the humanitarian ecosystem.
510’s Data Responsibility Team
You can use our policy for non-commercial purposes and as a base for adapting your own policy, but please be so kind as to give us some credit and reference it by including the following sentence where appropriate:
Please note that we used the data responsibility policy as initially developed and drafted upon initiative of NLRC 510 as a source of inspiration and starting point for the adaptation of our own policy, for the content and performance of which we carry sole responsibility.