Threat modeling involves the systematic identification, elicitation, and analysis of privacy- and/or security-related threats in the context of a specific system. These modeling practices are performed at a specific level of architectural abstraction; the use of Data Flow Diagram (DFD) models, for example, is common in this context.
To identify and elicit threats, two fundamentally different approaches can be taken: (1) elicitation on a per-element basis, which iteratively singles out individual architectural elements and considers the threats applicable to each, and (2) elicitation at the level of system interactions (each involving the local context of three elements: a source, a data flow, and a destination), which performs elicitation on the basis of system-level communication. Although ignoring the local context of the element under investigation makes the former approach easier for human analysts to adopt and use, it also leads to threat duplication and redundancy, relies more heavily on implicit analyst expertise, and requires more manual effort.
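The contrast between the two elicitation styles can be sketched programmatically. The following is a minimal illustration, assuming a toy DFD whose element names and (source, data flow, destination) triples are invented for this example; only the LINDDUN threat categories are taken from the methodology itself:

```python
from itertools import product

# The seven LINDDUN privacy threat categories.
LINDDUN = ["Linkability", "Identifiability", "Non-repudiation",
           "Detectability", "Disclosure of information",
           "Unawareness", "Non-compliance"]

# Toy DFD (illustrative names only): elements and the
# (source, data flow, destination) triples connecting them.
elements = ["user", "web_app", "user_db"]
interactions = [("user", "credentials", "web_app"),
                ("web_app", "account_record", "user_db")]

def elicit_per_element(elements, categories):
    """Approach (1): pair every element with every threat category,
    without considering the element's local context."""
    return [(elem, cat) for elem, cat in product(elements, categories)]

def elicit_per_interaction(interactions, categories):
    """Approach (2): pair each (source, data flow, destination)
    triple with every threat category, so the local communication
    context stays attached to each candidate threat."""
    return [(triple, cat) for triple, cat in product(interactions, categories)]

# Per-element elicitation examines 'web_app' in isolation; the
# interaction-based variant examines it once per communication it
# takes part in, with its counterpart and data flow in view.
print(len(elicit_per_element(elements, LINDDUN)))        # 3 elements x 7 categories
print(len(elicit_per_interaction(interactions, LINDDUN)))  # 2 interactions x 7 categories
```

In this sketch the interaction-based variant yields fewer, more contextualized candidate threats for the same system, which mirrors the duplication and redundancy argument made above.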
In this paper, we provide a detailed analysis of these issues with element-based threat elicitation in the context of LINDDUN, an element-driven privacy-by-design threat modeling methodology. Subsequently, we present a LINDDUN extension that implements interaction-based privacy threat elicitation, and we provide in-depth argumentation on how this approach leads to better process guidance and a more concrete interpretation of privacy threat types, ultimately requiring less effort and expertise. A third, standalone contribution of this work is a catalog of realistic and illustrative LINDDUN privacy threats, which in turn facilitates practical threat elicitation using LINDDUN.
In the upcoming General Data Protection Regulation (GDPR), privacy by design and privacy impact assessments are given an even more prominent role than before: companies are now required to build privacy into the core of their technical products. Recently, researchers and industry players have proposed employing threat modeling methods, traditionally used in security engineering, as a way to bridge these two GDPR requirements in the process of engineering systems.
Threat modeling, however, typically assumes a waterfall process and a monolithic design, assumptions that are disrupted by the popularization of Agile methodologies and Service-Oriented Architectures. Moreover, agile service environments make it easier to address some privacy problems while complicating others. To date, the challenges of applying threat modeling for privacy in agile service environments remain understudied.
This paper sets out to expose and analyze this gap. Specifically, we analyze what challenges and opportunities these shifts in software engineering practice introduce into traditional Threat Modeling activities; how those challenges and opportunities relate to the different Privacy Goals; and which Agile principles and Service properties have an impact on them.
Our results show that both Agile methodologies and service architectures make the end-to-end analysis of applications more difficult. At the same time, the former allows for more efficient communication and iterative progress, while the latter enables the parallelization of tasks and the documentation of some architectural decisions. Additionally, we open a new research avenue, pointing to Amazon Macie as an example of Machine Learning applications that aim to address the scalability and usability challenges of Privacy Threat Modeling processes.