Today’s safety video is brought to you by ScotRail. It examines in depth three cases of train drivers accidentally passing red signals.
*) SPAD = Signal Passed at Danger
We are organizing a workshop on agile secure software development in conjunction with the ARES’15 conference. Please find the full call for papers on the workshop website, http://www.ares-conference.eu/conference/workshops/assd-2015/. The conference takes place in Toulouse this year.
Submission Deadline: April 15, 2015
Author Notification: May 11, 2015
Proceedings version: June 8, 2015
Conference: August 24-28, 2015
Security, privacy, and safety by design sounds like a good idea. Alas, it is not going to happen, at least not with innovative technology. Collingridge’s dilemma gets in the way: when a technology is new and therefore easy to shape, we do not understand its downsides – and its non-issues-to-be – well enough to make informed design decisions; once we understand the problems, fundamentally changing the now established and deployed technology becomes hard. Expressed in terms of the Cognitive Dimensions framework, technology design combines premature commitment with high viscosity later on.
With digital technology evolving at a rapid pace, we face Collingridge’s dilemma continually. Big data and Internet-scale machine learning, the Internet of everything, self-driving cars, and many inventions yet to come challenge us to keep things under control without knowing what to aim for. Any technology we have never created before is subject to the dilemma.
A tempting but fallacious solution is the (strong) precautionary principle: taking all possible risks seriously and treating whatever we cannot rule out as a problem. This approach is fallacious because it ignores the cost of implementation. Every possible risk is not the same as every likely risk; trying to prevent everything that might go wrong, one inevitably ends up spending too much on possible but unlikely problems. Besides, Collingridge’s dilemma may apply to the chosen treatments as well.
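A back-of-the-envelope calculation makes the cost argument concrete; the numbers below are invented purely for illustration:

    # Hypothetical numbers, purely for illustration.
    # Expected loss = probability of the event * impact if it occurs.
    risks = {
        "likely, moderate impact": (0.20, 50_000),     # (probability, impact in EUR)
        "possible, severe impact": (0.001, 1_000_000),
    }

    for name, (probability, impact) in risks.items():
        print(f"{name}: expected loss {probability * impact:,.0f} EUR")

    # Output: 10,000 EUR for the first risk, 1,000 EUR for the second.
    # Spending, say, 50,000 EUR to prevent the second, merely possible
    # risk costs far more than accepting it ever would -- yet the strong
    # precautionary principle demands exactly that kind of spending.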
As an alternative we might try to design for corrigibility, so that mistakes can be corrected easily once we learn about them. In the information technology domain this idea seems to echo what David Parnas proposed in his seminal paper On the criteria to be used in decomposing systems into modules (DOI: 10.1145/361598.361623). Parnas argues that software modules should hide design decisions from their surroundings, so that the inner workings of a module can be modified without affecting dependent modules. Constructs supporting this found their way into modern programming paradigms and languages; object-oriented programming is the most obvious application of Parnas’ idea.
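A minimal sketch of Parnas-style information hiding (the example and names are mine, not from the paper): the module commits to an interface and hides the data representation behind it.

    class CustomerRegistry:
        """Hides how customers are stored; only add() and lookup() are public."""

        def __init__(self):
            self._by_id = {}  # the hidden design decision: a dict keyed by id

        def add(self, customer_id, name):
            self._by_id[customer_id] = name

        def lookup(self, customer_id):
            return self._by_id.get(customer_id)

    # Dependent modules use only the interface, so the dict could later
    # be swapped for a database without touching them -- mistakes in the
    # representation remain cheap to correct.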
But the dilemma is not that easily solved. First, software design is too narrow a scope. Technology is more than just software and can become quite viscous regardless of how easily the software is changed. Just think of the Internet and its core protocol, IP. Most operating systems come with support for both IPv4 and IPv6, and there are many good reasons to move on to the new protocol version. Yet we are still waiting for the day when the Internet will abandon IPv4 in favor of IPv6; the Internet as a system is really hard to change. Nevertheless, modularity helps. When attacks against Internet banking users became widespread roughly ten years ago, for example, banks in Germany managed to update their authorization mechanisms and increase security in a relatively short time and with little trouble for their customers.
In their recent paper Cyber Security as Social Experiment (NSPW’14, DOI: 10.1145/2683467.2683469), Wolter Pieters, Dina Hadžiosmanović and Francien Dechesne argue that experimentation could help us to learn more about the side effects of new technology. Since people are part of any interesting system, this amounts to running social experiments. If we do not care and just deploy a technology, this is an experiment as well, just less controlled and systematic. Particular to cyber security is the challenge of involving adversaries as they are the root of all security threats. The general challenge is to run social experiments responsibly within ethical bounds.
Even with experiments, some negative consequences will likely escape our attention. Some effects take too long to show, or show only after a technology has been deployed at global scale. Could James Watt have anticipated climate change due to the burning of fossil fuels? Probably not. But at least we understand what the meta-problem is.
(This post appeared first on the ESSE project blog.)
Earlier this year Andreas presented our paper An Asset to Security Modeling? Analyzing Stakeholder Collaborations Instead of Threats to Assets (DOI: 10.1145/2683467.2683474) at the New Security Paradigms Workshop. During our work with the GESIS Secure Data Center team it emerged that the common way of doing risk assessment may be flawed. In the paper we discuss what is missing and how analyzing collaboration networks helps us understand the consequences of security incidents.
Risk assessment, as described for example in ISO 31000, is a systematic process that prepares decisions. The goal of this process is to find appropriate risk responses and treatments. A risk can be accepted or even increased (if doing so entails an opportunity); it can be avoided, shared, or transferred; or it can be mitigated by reducing its likelihood or impact. As a prerequisite for informed decisions one goes through the risk assessment process, identifying, analyzing, and evaluating pertinent risks. The figure below, a more elaborate version of which can be found in ISO 31000, illustrates this process chain.
Stakeholders participate in this process as a source of information, knowing their respective business or business function and being able to assess likelihoods and impacts. This standard approach to risk assessment rests on the premise that risk treatments are variable and the objective is to find optimal values for them.
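In code terms, the standard approach treats the response as the variable to optimize. A toy sketch, assuming a crude expected-loss rule that ISO 31000 itself does not prescribe:

    from enum import Enum

    class RiskResponse(Enum):
        ACCEPT = "retain (or even increase) the risk to seize an opportunity"
        AVOID = "do not start or continue the risky activity"
        SHARE = "transfer parts of the risk, e.g. through insurance"
        MITIGATE = "reduce the likelihood or the impact"

    def choose_response(likelihood, impact, tolerance):
        """Toy decision rule: mitigate once expected loss exceeds tolerance."""
        if likelihood * impact > tolerance:
            return RiskResponse.MITIGATE
        return RiskResponse.ACCEPT

    # choose_response(0.05, 100_000, tolerance=1_000) -> RiskResponse.MITIGATE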
In our paper we propose a complementary approach. Our premise: stakeholders collaborate in complicated networks for mutual benefit. Risk and incident responses are to a large degree determined by their collaboration relationships. These pre-determined responses are not to be defined as a result of risk assessment; rather, they constitute a factor to be considered in risk analysis and evaluation. The figure below is a simplified version of Figure 8 in our paper:
The Secure Data Center serves its users, who are part of a larger research community; the SDC also needs its users, as serving them is its purpose. Beyond the individual user, the research community at large benefits from SDC services and influences their acceptance. Primary investigators provide data; they benefit from wider recognition of their work through secondary analyses and fulfil obligations by archiving their data. Survey participants are the source of all data. Everyone wants to preserve their willingness to participate in studies.
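One way to picture these relationships is as a small directed graph. The sketch below is my own simplification of the paper’s Figure 8, with edge labels paraphrased:

    # Stakeholder collaborations around the Secure Data Center (SDC).
    # Each edge: (from, to, what flows along the relationship).
    collaborations = [
        ("SDC", "users", "data access and services"),
        ("users", "SDC", "purpose: serving them is why the SDC exists"),
        ("SDC", "research community", "benefits from SDC services"),
        ("research community", "SDC", "acceptance of the services"),
        ("primary investigators", "SDC", "data for secondary analyses"),
        ("SDC", "primary investigators", "recognition and archiving"),
        ("survey participants", "primary investigators", "the source of all data"),
    ]

    # A security incident does not just destroy an asset; it strains
    # edges in this graph. Responses are largely pre-determined by
    # which relationships the stakeholders need to preserve.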
The need for an extension of risk assessment methodologies became apparent when we reviewed the threat models the participants of our study had produced and discussed the models with them. They expressed various concerns about the stakeholders involved and their possible reactions to security incidents. Although traditional approaches to risk assessment include the analysis of consequences, they fail to provide tools for this analysis. In the security domain in particular it is often assumed that consequences can be evaluated by identifying assets and assigning some monetary value to each of them. In our experience it is more complicated than that.
Andreas Poller; Sven Türpe; Katharina Kinder-Kurlanda: An Asset to Security Modeling? Analyzing Stakeholder Collaborations Instead of Threats to Assets. New Security Paradigms Workshop (NSPW’14), Victoria, BC, September 15-18, 2014. DOI: 10.1145/2683467.2683474 [BibTeX]
That was a nice bit of trolling. A rough timeline: (1) Apple and, later, Google announce modest improvements to a security building block of their respective mobile platforms, device encryption. (2) Government officials in the US publicly complain that this would obstruct law enforcement and request means to access encrypted device data. (3) The usual suspects are all up in arms and reiterate their arguments why crypto backdoors are a bad idea.
What is wrong with this debate, apart from it being a rerun? First, encryption is not as secure as claimed. Second, encryption is not as important as assumed.
Device encryption is just one small security building block. It protects data stored on the device against access without the encryption key if the adversary encounters the device in the turned-off state. Attacks against encryption typically go for the keys. As we were just reminded, police can compel suspects to hand over their fingerprints and unlock a device. Some countries have key disclosure laws.
Against a running device there are further attack options. If any key material is held in RAM, it can be extracted, at least in principle, with a cold boot attack. Whether Apple’s Secure Enclave design does anything to protect against such attacks remains unclear. As we have learned with Microsoft’s encryption scheme, BitLocker, even hardware-supported encryption can leave a number of loopholes (Trust 2009 paper).
Encryption has its limitations. It protects data subject to several conditions. In particular, the adversary must be unable to obtain the key or subvert the execution environment. While plug-and-play forensics would be more convenient for law enforcement, there are ways around device encryption.
Mobile platforms extend beyond the individual device. Not only do devices communicate liberally with other devices and with Internet services, they also depend on the platform operator. Apple and Google run appstores and supply software updates. Whatever the software of a device can or cannot do may change at any time.
Encryption protects files against access bypassing the operating system, not against access from within. Protection against rogue users or applications is a matter of authentication and access control — software making decisions, software that can be modified. While this channel entails some tampering-with-evidence problems for law enforcement, it seems technically quite feasible to use it.
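A hypothetical sketch (not any vendor’s actual code) of why access from within is the softer target: the gatekeeper is ordinary software.

    def storage_read(path):
        return b"\x9f\x12..."   # stand-in: raw encrypted bytes from flash

    def decrypt(ciphertext, key):
        return b"plaintext"     # stand-in: the device's file encryption

    def is_authorized(user, path):
        return user == "owner"  # the access decision is made in software

    def read_file(user, path, key):
        # Encryption guards the stored bytes; this check guards access.
        if not is_authorized(user, path):
            raise PermissionError(path)
        return decrypt(storage_read(path), key)

    # A modified OS or app need only change is_authorized() -- the
    # encryption layer below then decrypts on the requester's behalf.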
Encrypted equals secure only from a microscopic perspective. I have advocated before paying more attention to systemic and macroscopic aspects, and the crypto wars 2.0 debate nicely illustrates how too narrow a focus on a single security mechanism skews our debate. Encryption matters, but not as much as the loudest voices in the debate would have us believe.
The essence of security – your adversary won’t abide by the rules:
This video won’t teach you warfare (just like an introduction to cryptography won’t teach you security), but it is nevertheless interesting:
Two quotes and a thought:
»If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.« — Eric Schmidt
»A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper.« — Bruce Schneier
Is their message the same, or does it make a difference whether we talk about individuals or organizations? One limitation they surely have in common: the mere fact that one does not want others to know certain things does not imply those things are wrong, even if one rightfully fears the others’ reaction. Both quotes oversimplify the problem by ignoring the right/wrong dimension, thus omitting case distinctions that may be important to the discussion.