By Julia Muraszkiewicz, Research Analyst at Trilateral Research Ltd.


Regrettably, the loss of and harm to humanitarian workers is a prevailing occurrence in today's conflict zones. One does not have to search far to find headlines such as 'Six Afghan Red Cross aid workers killed in an ambush' or 'Attack on aid workers in South Sudan'. As humanitarian laws are being ignored, how can we protect those who risk their lives to help others?


One solution could lie in Information and Communication Technologies (ICT), which can help humanitarian workers improve the efficiency of logistics, risk management and coordination of humanitarian missions. And this is already happening. As noted by Donini and Maxwell, humanitarian relationships and practice are increasingly shifting ‘from face-to-face to face-to-screen’. The shift to relying on technology is a rapid process with low barriers to entry; we can thus expect many new technological solutions for protecting humanitarian workers. Already today, ‘ICT use for humanitarian response runs the gamut from satellite imagery to drone deployment; to tablet and smartphone use; to crowd mapping and aggregation of big data’ (Raymond and Harrity, 10).


However, the development and deployment of technologies that can help in crisis situations, such as the Syria Tracker Crisis Map or mobile apps that serve different members of the humanitarian aid community, may have privacy and ethical consequences. Technologies used in digital humanitarianism typically rely on tracking, monitoring, social media or crowdsourcing. They operate in real time and involve the collection, transfer, processing and storage of large amounts of data, such as users' location or even their heart rate. The storage and transmission of personal information raises a host of privacy issues, such as the length of storage or who has access to the information. Ethical issues may also arise when humanitarians working in high-pressure emergency contexts, where there is not always time to be fully trained and informed about the details of a technology, are expected by their superiors to use it, possibly without due consideration of the ethical risks. The opacity of the range of processing operations means that it is difficult to know what is going on out of sight, which is problematic when personal data is involved. Raymond and Harrity warn that ‘rapid diversification of available technologies as well as the increase in actors utilising them for humanitarian purposes means that the use of these technologies has far outpaced the ethical and technical guidance’ (Raymond and Harrity, 10).


One way of enabling safer and more ethical use of digital technologies within humanitarian action is to integrate an Ethics and Privacy Impact Assessment (E/PIA) into the design process. The term was coined by Trilateral Research Ltd during the European Union funded project PRESCIENT (Privacy and Emerging Sciences and Technologies).


What is an E/PIA and how is it done?

An E/PIA is a systematic process for identifying and addressing current and future ethical and privacy issues in an information system or technology. It helps to highlight specific categories of ethical and privacy problems and provides recommendations on how to address them. The analysis goes beyond a discussion of the costs and benefits of surveillance, information and data collection devices. Instead, it seeks to understand the technology and the risks associated with information exchange, storage and transfer when using it. In conducting an E/PIA we ask questions such as: are people being treated fairly and justly? Is their dignity respected? Is there a legal basis for the processing?


Conducting an E/PIA requires, firstly, identifying possible threats – natural or human, accidental or deliberate – e.g., a hacker. Secondly, it obliges a consideration of possible vulnerabilities that the threats can exploit, e.g., a poor security system. The vulnerabilities can come from within the organization implementing the technology or from the design of the technology itself. When a threat acts on a vulnerability this can lead to a risk, e.g., loss of data, contravening the data subject's right to privacy. Such an understanding of risk is relied upon by the European Network and Information Security Agency (ENISA).
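To make the threat–vulnerability–risk relationship concrete, it can be sketched as a simple data structure. This is a hypothetical illustration only: the class names, example entries and the likelihood/exposure scoring scale are assumptions for the sketch, not ENISA's formal methodology.

```python
from dataclasses import dataclass

# Hypothetical sketch of the threat -> vulnerability -> risk model.
# Names and the 1-5 scoring scales are assumptions, not a formal standard.

@dataclass
class Threat:
    name: str          # e.g. "hacker" (human, deliberate)
    likelihood: int    # 1 (rare) to 5 (frequent)

@dataclass
class Vulnerability:
    name: str          # e.g. "poor security system"
    exposure: int      # 1 (hard to exploit) to 5 (easy to exploit)

@dataclass
class Risk:
    threat: Threat
    vulnerability: Vulnerability
    consequence: str   # e.g. "loss of personal data"

    @property
    def severity(self) -> int:
        # A simple likelihood x exposure score, useful for prioritising
        # which risks the E/PIA report should address first.
        return self.threat.likelihood * self.vulnerability.exposure

hacker = Threat("hacker", likelihood=4)
weak_security = Vulnerability("poor security system", exposure=5)
risk = Risk(hacker, weak_security, "loss of personal data")
print(risk.severity)  # 20
```

A risk only materialises when both parts are present: removing the vulnerability (e.g., fixing the security system) drives the severity of the risk down even if the threat itself cannot be eliminated.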


The E/PIA should be carried out at the beginning of a project to better ensure that ethics and privacy are accounted for from the start and become part of the technology, rather than being an afterthought, as per ‘privacy by design’. The process should be carried out by a person or persons with expertise in ethics, privacy and data protection, such as a data protection officer or an external expert. The key steps of the E/PIA process include:

  • Understanding the technology (by undertaking in-depth interviews with each individual responsible for developing the technological components).
  • Mapping information flows.
  • Identifying risks. This step includes drawing on the expertise and opinions of stakeholders (e.g., ICT researchers and developers, end users and data subjects, service providers, NGOs, citizens, authorities and experts in the field that the technology concerns).
  • Preparation of a report, which documents the risks (threats and vulnerabilities) and proposes recommendations to address them.
  • Review of the report by an independent third-party expert in ethics and privacy.
  • Publication of the report to ensure transparency.  
  • Adherence to the recommendations when building/designing the technology in question and subsequently when using it in the field.
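The steps above are sequential, and it can help to track them as an ordered checklist. The sketch below is a hypothetical illustration; the step wording is paraphrased from the list above and the function is not part of any official E/PIA tooling.

```python
# Hypothetical sketch: the E/PIA key steps as an ordered checklist.
# Wording is paraphrased; this is an illustration, not an official tool.
EPIA_STEPS = [
    "Understand the technology (interview those building each component)",
    "Map information flows",
    "Identify risks, drawing on stakeholder expertise",
    "Prepare a report documenting risks and recommendations",
    "Have the report reviewed by an independent third-party expert",
    "Publish the report to ensure transparency",
    "Adhere to the recommendations in design and in the field",
]

def next_step(completed):
    """Return the first step not yet done, or None when all are complete."""
    for index, step in enumerate(EPIA_STEPS):
        if index not in completed:
            return step
    return None

# After understanding the technology (0) and mapping flows (1),
# risk identification comes next.
print(next_step({0, 1}))
```

Keeping the steps ordered matters in practice: publishing a report before independent review, or identifying risks before the information flows are mapped, undermines the assessment.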


The involvement of relevant stakeholders (as identified above) is a key aspect of an E/PIA as it allows for a better chance of identifying risks and developing mitigating solutions. Furthermore, it helps to gauge the nature and intensity of relevant concerns and views with regard to the risks posed by the technology.


Those involved in the E/PIA should agree on an initial set of principles that will help identify the risks. These could include the consideration of over-arching ethical and privacy principles, as found in the Universal Declaration of Human Rights, the European Convention on Human Rights 1950, the General Data Protection Regulation, and ISO 29100:2011 (Privacy Framework), to name a few. Of course, much will also depend on the region where the technology will be used, and authors of an E/PIA should consider local legislation and standards. Some of the principles include, but are not limited to:

  • Dignity
  • Fairness
  • Justice
  • Purpose legitimacy and specification
  • Autonomy
  • Security
  • Non-discrimination
  • Data minimisation
  • Informed consent
  • Openness, transparency and notice
  • Use, retention and disclosure limitation
  • Trust
  • Avoidance of harm
  • Individual participation and access
  • Access and correction
  • Accountability


An E/PIA should be used as a dynamic tool rather than a one-off exercise, given that as a project moves forward the technology, and the privacy and ethical issues associated with it, may change.


In summary: creating ICT-based solutions for protecting humanitarian workers can offer an array of improvements to information analysis, communication, management and logistics. As such, the potential for improving the security and efficiency of humanitarian missions should not be underestimated. However, gaps between the design and operation of ICT solutions and our understanding of their ethical and privacy implications can have severe consequences, affecting individuals as well as groups and whole societies. To that end, we must engage in forward thinking around the design of such systems, through the use of an E/PIA, which aims to mitigate ethical and privacy risks.


Author: Julia Muraszkiewicz, Research Analyst at Trilateral Research Ltd. http://trilateralresearch.com/about-us/our-people/