May 23

Welcome, introductions and opening remarks

9:00AM - 9:10AM

Session 1: Threat modeling

9:10AM - 9:50AM

Interaction-based Privacy Threat Elicitation (slides)
Laurens Sion, Kim Wuyts, Koen Yskout, Dimitri Van Landuyt and Wouter Joosen
Threat modeling involves the systematic identification, elicitation, and analysis of privacy- and/or security-related threats in the context of a specific system. These modeling practices are performed at a specific level of architectural abstraction – the use of Data Flow Diagram (DFD) models, for example, is common in this context.
To identify and elicit threats, two fundamentally different approaches can be taken: (1) elicitation on a per-element basis, which iteratively singles out individual architectural elements and considers the applicable threats; (2) elicitation at the level of system interactions (each involving the local context of three elements: a source, a data flow, and a destination), which performs elicitation on the basis of system-level communication. Although ignoring the local context of the element under investigation makes the former approach easier for human analysts to adopt and use, it also leads to threat duplication and redundancy, relies more extensively on implicit analyst expertise, and requires more manual effort.
In this paper, we provide a detailed analysis of these issues with element-based threat elicitation in the context of LINDDUN, an element-driven privacy-by-design threat modeling methodology. Subsequently, we present a LINDDUN extension that implements interaction-based privacy threat elicitation, and we provide in-depth argumentation on how this approach leads to better process guidance and a more concrete interpretation of privacy threat types, ultimately requiring less effort and expertise. A third standalone contribution of this work is a catalog of realistic and illustrative LINDDUN privacy threats, which in turn facilitates practical threat elicitation using LINDDUN.
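The element-based vs. interaction-based distinction can be made concrete with a small sketch. The code below is purely illustrative (the element names, threat descriptions, and rule mapping are invented for this example, not taken from the LINDDUN catalog): it elicits threats once per (source, flow, destination) interaction of a DFD rather than once per isolated element, which avoids the duplication the abstract describes.

```python
# Hypothetical sketch of interaction-based threat elicitation over a DFD.
# Element names and threat types are illustrative, not LINDDUN's catalog.

# A DFD interaction is the triple (source, data flow, destination).
interactions = [
    ("User", "credentials", "Web Portal"),
    ("Web Portal", "profile data", "Database"),
]

# Toy mapping from destination elements to candidate privacy threat types.
threat_rules = {
    "Database": ["Linkability of stored records", "Non-compliance (retention)"],
    "Web Portal": ["Identifiability via session data"],
}

def elicit(interactions):
    """Elicit threats once per interaction, keeping the local context
    (source and flow) attached to each finding."""
    findings = []
    for source, flow, dest in interactions:
        for threat in threat_rules.get(dest, []):
            findings.append((source, flow, dest, threat))
    return findings

for finding in elicit(interactions):
    print(finding)
```

Because each finding carries its source and data flow, the analyst gets the local context for free, whereas a per-element pass over `Database` alone would have to rediscover it manually.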
In the upcoming General Data Protection Regulation (GDPR), privacy by design and privacy impact assessments are given an even more prominent role than before. It is now required that companies build privacy into the core of their technical products. Recently, researchers and industry players have proposed employing threat modeling methods, traditionally used in security engineering, as a way to bridge these two GDPR requirements in the process of engineering systems.
Threat modeling, however, typically assumes a waterfall process and monolithic design, assumptions that are disrupted with the popularization of Agile methodologies and Service Oriented Architectures. Moreover, agile service environments make it easier to address some privacy problems, while complicating others. To date, the challenges of applying threat modeling for privacy in agile service environments remain understudied.
This paper sets out to expose and analyze this gap. Specifically, we analyze what challenges and opportunities the shifts in software engineering practice introduce into traditional Threat Modeling activities; how they relate to the different Privacy Goals; and what Agile principles and Service properties have an impact on them.
Our results show that both agile and services make the end-to-end analysis of applications more difficult. At the same time, the former allows for more efficient communications and iterative progress, while the latter enables the parallelization of tasks and the documentation of some architecture decisions. Additionally, we open a new research avenue pointing to Amazon Macie as an example of Machine Learning applications that aim to provide a solution to the scalability and usability of Privacy Threat Modeling processes.

Session 2: Privacy risk measurement and analysis

9:50AM - 10:30AM

The work described in this paper is a contribution to enhancing individual control over personal data, which is promoted, inter alia, by the new EU General Data Protection Regulation. We propose a method to enable better-informed choices of privacy settings. The method relies on a privacy risk analysis parameterized by the privacy settings. Users can express their choices, visualize the impact on privacy risks through a user-friendly interface and, if needed, revise them to reduce the risks to an acceptable level.
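A minimal sketch of the idea of a risk analysis parameterized by privacy settings follows. The setting names and risk weights are invented for illustration and are not the paper's actual model; the point is only that recomputing the score under a revised choice lets the user see the impact before confirming.

```python
# Illustrative sketch (not the paper's model): privacy risk as a function
# of user-chosen settings, with invented per-setting risk weights.
SETTINGS_RISK = {
    "share_location": {"on": 0.4, "coarse": 0.2, "off": 0.0},
    "ad_tracking": {"on": 0.3, "off": 0.0},
}

def risk_score(choices: dict) -> float:
    """Aggregate the risk contributions of each selected setting value."""
    return round(sum(SETTINGS_RISK[s][v] for s, v in choices.items()), 2)

before = risk_score({"share_location": "on", "ad_tracking": "on"})
after = risk_score({"share_location": "coarse", "ad_tracking": "off"})
assert after < before  # revising the settings reduces the risk estimate
```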
FP-Tester: Automated Testing of Browser Fingerprint Resilience
Antoine Vastel, Walter Rudametkin and Romain Rouvoy
Despite recent regulations and growing user awareness, undesired browser tracking is increasing. In addition to cookies, browser fingerprinting is a stateless technique that exploits a device’s configuration for tracking purposes. In particular, browser fingerprinting builds on attributes made available from JavaScript and HTTP headers to create a unique and stable fingerprint. For example, browser plugins have been heavily exploited by state-of-the-art browser fingerprinters as a rich source of entropy. However, as browser vendors abandon plugins in favor of extensions, fingerprinters will adapt.
We present FP-TESTER, an approach to automatically test the effectiveness of browser fingerprinting countermeasure extensions. We implement a testing toolkit to be used by developers to reduce browser fingerprintability. While countermeasures aim to hinder tracking by changing or blocking attributes, they may easily introduce subtle side-effects that make browsers more identifiable, rendering the extensions counterproductive. FP-TESTER reports on the side-effects introduced by the countermeasure, as well as how they impact tracking duration from a fingerprinter’s point-of-view. To the best of our knowledge, FP-TESTER is the first tool to assist developers in fighting browser fingerprinting and reducing the exposure of end-users to such privacy leaks.
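The mechanism and its pitfall can be sketched in a few lines. This is not FP-TESTER itself; the attribute names and hashing scheme are invented for illustration. It shows how a stable fingerprint can be derived from browser-exposed attributes, and how a countermeasure that blocks one attribute can itself become a distinguishing side-effect.

```python
# Illustrative sketch (not FP-TESTER): deriving a stateless fingerprint
# from browser-exposed attributes. Attribute names are invented.
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a canonical, sorted view of the attributes into a stable ID."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 ...",
    "accept_language": "en-US",
    "plugins": "pdf-viewer,flash",
}
original = fingerprint(browser)

# A countermeasure replaces the plugins attribute with a blocked marker;
# the altered value is itself a distinguishing signal, so the browser
# remains uniquely fingerprintable (just with a different fingerprint).
spoofed = dict(browser, plugins="<blocked>")
assert fingerprint(spoofed) != original
```

This is exactly the class of side-effect the abstract warns about: changing or blocking an attribute does not remove it from the fingerprint surface, it merely changes its value.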

Break (30 Minutes)

10:30AM - 11:00AM

Session 3: General Data Protection Regulation

11:00AM - 11:20AM

In this position paper we posit that, for Privacy by Design to be viable, engineers must be effectively involved and endowed with methodological and technological tools closer to their mindset, which integrate with software and systems engineering methods and tools, realizing in fact the definition of Privacy Engineering. This position will be applied in the soon-to-start PDP4E project, where privacy will be introduced into existing general-purpose software engineering tools and methods, dealing with risk management, requirements engineering, model-driven design, and software/systems assurance.

Panel: GDPR and Privacy Engineering
Moderator: Nina Taft (Google)
Panel: Saikat Guha (Microsoft), Ben Livshits (Brave), Michael Wei (VMWare), Gary Young (Google)

11:20AM - 12:30PM

The panel will discuss how privacy engineering tools, methodologies and practices have evolved in light of the GDPR requirements. The focus will be on how solutions have evolved for data minimization, anonymization, flow control, privacy patterns, and retention. The panelists will share lessons learned and highlight challenges.

Saikat Guha is a researcher at Microsoft Research India.
Ben Livshits is the Chief Scientist for Brave and Associate Professor at Imperial College London.
Michael Wei is a researcher at VMWare Research. 
Gary Young is a Privacy Analyst Engineer at Google.


12:30PM - 2:00PM

Genomic Privacy: Are we heading in the right direction? 
Invited talk by Emiliano De Cristofaro (University College London)

2:00PM - 3:15PM

Rapid advances in human genomics are enabling researchers to gain a better understanding of the role of the genome w.r.t. our ancestry, health, and well-being. While this prompts hope for more cost efficient and effective healthcare, it also yields a number of security and privacy concerns, stemming from the distinctive characteristics of genomic data. Aiming to address them, a new research community has emerged, producing a large number of publications and initiatives.
In this talk, we will present an overview of the relevant results in the field, using a structured methodology to systematize the current knowledge around genome privacy research. We will focus on privacy-enhancing technologies used in the context of testing, storing, and sharing genomic data, and provide critical viewpoints as well as a comprehensive analysis on the timeliness and the relevance of the work produced by the community. In doing so, we will argue that proposed technologies can only offer protection in the short-term, scrutinizing assumptions made by the community, and analyzing the costs introduced by privacy defenses in terms of various types of utility and flexibility overhead.

Emiliano De Cristofaro is a Reader (British English for Associate Professor) in Security and Privacy Enhancing Technologies at University College London (UCL), where he is affiliated with the Computer Science Department and the Information Security Group. Before joining UCL in 2013, he was a research scientist at Xerox PARC. He received a summa-cum-laude Laurea degree in Computer Science from the University of Salerno, Italy (2005), then, in 2011, a PhD in Networked Systems from the University of California, Irvine, advised by Prof. Gene Tsudik (mostly, while running on the beach). In 2013-14, Emiliano co-chaired the Privacy Enhancing Technologies Symposium (PETS); in 2015, the Workshop on Genome Privacy and Security (GenoPri); and, in 2018, the security and privacy track at WWW. Recently, he also received a distinguished paper award at NDSS 2018.

Best Paper Award Ceremony

3:15PM - 3:30PM

Break (30 Minutes)

3:30PM - 4:00PM

Session 4: Privacy by Design

4:00PM - 4:50PM

As data-centric technologies are increasingly being considered in social contexts that intervene in marginalized peoples’ lives, we consider design paradigms to create systems that fulfill their unique privacy needs and requirements. Disempowered populations often experience disparate harms from the loss of privacy but, typically, have a limited role in formulating the scope and nature of such interventions, and accompanying (implicit or explicit) privacy policies and consequent engineering processes. This gap can be addressed by including recipient communities in designing these privacy policies.
We propose a participatory design model for data-centric applications where privacy policies (norms) emerge out of the participation of the community in the research/design process. The framework of Contextual Integrity, which articulates privacy as respect for normative rules of information flow in specific contexts, lends itself well to enabling a community-generated formulation of these privacy norms within the contexts of the proposed intervention. Employing formal logic, these privacy norms can then be used to engineer systems capable of regulating the flow of information as per the negotiated norms. This entire process, which we call Contextualized Participatory Privacy by Design, seeks to empower communities in negotiating and articulating their privacy norms, leading to the development of systems that are capable of enforcing what they deem as ethical, contextualized use of their data.
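The Contextual Integrity formulation lends itself to a direct encoding: a norm permits a flow of an information type between roles in a given context, and a system checks each proposed flow against the negotiated norms. The sketch below is hypothetical; the roles, information types, and contexts are invented examples, not from the paper.

```python
# Hypothetical sketch of enforcing community-negotiated CI norms.
# A flow is the tuple (sender role, recipient role, info type, context);
# all values here are invented for illustration.
from typing import NamedTuple

class Flow(NamedTuple):
    sender: str
    recipient: str
    info_type: str
    context: str

# Norms as negotiated by the community during participatory design.
allowed_norms = {
    Flow("clinic", "patient", "test_result", "healthcare"),
    Flow("patient", "clinic", "symptoms", "healthcare"),
}

def permitted(flow: Flow) -> bool:
    """A flow is permitted only if it matches a negotiated norm exactly."""
    return flow in allowed_norms

assert permitted(Flow("clinic", "patient", "test_result", "healthcare"))
# The same information type in a different context violates the norms:
assert not permitted(Flow("clinic", "advertiser", "test_result", "marketing"))
```

The key CI property this captures is that permissibility depends on the whole tuple: moving the same information type to a new recipient or context requires a new, explicitly negotiated norm.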
Enhancing Transparency and Consent in the IoT
Victor Morel, Daniel Le Métayer, Mathieu Cunche and Claude Castelluccia
The development of the IoT raises specific questions in terms of privacy, especially with respect to information to users and consent. We argue that (1) all necessary information about collected data and the collecting devices should be communicated electronically to all data subjects in their range and (2) data subjects should be able to reply also electronically and express their own privacy choices. In this position paper, we take some examples of technologies and initiatives to illustrate our position (including direct and registry-based communications) and discuss them in the light of the GDPR and the WP29 recommendations.
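The direct-communication variant of this position can be sketched as a simple message exchange: the device electronically announces what it collects, and the data subject replies electronically with their choices. The field names below are invented for illustration and are not from any standard or from the paper.

```python
# Hypothetical sketch of direct device-to-subject communication: the
# device announces its data collection, the subject replies with choices.
# All field names are illustrative, not a proposed standard.
import json

device_announcement = {
    "device_id": "cam-42",
    "data_collected": ["video", "audio"],
    "controller": "ExampleCorp",
    "retention_days": 30,
}

def subject_reply(announcement: dict, consented: list) -> str:
    """Build an electronic reply consenting only to the accepted data types
    and objecting to the rest."""
    refused = [d for d in announcement["data_collected"] if d not in consented]
    return json.dumps({
        "device_id": announcement["device_id"],
        "consent": consented,
        "objection": refused,
    })

reply = json.loads(subject_reply(device_announcement, consented=["video"]))
assert reply["objection"] == ["audio"]
```

A registry-based variant would publish the same announcement to a lookup service keyed by location or device identifier rather than broadcasting it directly.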
Privacy Compliance via Model Transformations (slides)
Thibaud Antignac, Riccardo Scandariato and Gerardo Schneider
Due to the upcoming, more restrictive regulations (like the European GDPR), designing privacy-preserving architectures for information systems is becoming a pressing concern for practitioners. In particular, verifying that a design is compliant with the regulations might be a challenging task for engineers. This work presents an approach based on model transformations, which guarantees that an architectural design encompasses regulation-oriented principles such as purpose limitation or accountability of the data controller. Our work improves the state of the art along two main dimensions. The approach we propose (i) embeds privacy principles coming from regulations, thus helping to bridge the gap between the technical and the legal worlds, and (ii) systematizes the embedding of those principles, thus enabling a constructive approach to privacy by design.
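The constructive flavor of such an approach can be illustrated with a toy transformation. This sketch is not the paper's formalism; the model representation and the purpose-limitation rule are invented for the example. The point is that the transformation rewrites any design into one that satisfies the principle by construction, rather than merely checking it after the fact.

```python
# Illustrative sketch (not the paper's approach): a model transformation
# that enforces purpose limitation by ensuring every data store in the
# architecture model declares a purpose. Model format is invented.
def enforce_purpose_limitation(model: dict, default_purpose: str) -> dict:
    """Return a transformed copy of the model in which every store
    carries a purpose annotation (inserted when missing)."""
    transformed = {"stores": []}
    for store in model["stores"]:
        store = dict(store)  # copy, so the input model is untouched
        store.setdefault("purpose", default_purpose)
        transformed["stores"].append(store)
    return transformed

design = {"stores": [{"name": "orders", "purpose": "billing"},
                     {"name": "logs"}]}  # 'logs' lacks a declared purpose
result = enforce_purpose_limitation(design, default_purpose="needs-review")
assert all("purpose" in s for s in result["stores"])
```

Because the transformation's output always satisfies the property, compliance with that principle holds by construction for any input design, which is the sense in which such approaches "guarantee" the principle.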

Panel: Portability and Authentication
Moderator: Jose Such (King's College London)
Panel: Engin Bozdag (Philips), Nikos Laoutaris (Telefonica), Gavin Ray (

4:50PM - 5:30PM

This panel will discuss new mechanisms that are emerging for inter-service data transfer, including new storage architectures as well as security and authentication mechanisms needed to support portability.

Engin Bozdag is a Privacy by Design expert at Philips.
Nikos Laoutaris is a senior researcher at the Internet Scientific Group of Telefonica Research.
Gavin Ray is the CTO of

Wrap-up and Concluding Remarks