Studies

A quick tip in passing:

Over at Quantenwelt, Joachim Schulz explains briefly and to the point his criteria for when he lets something scare him and when not. The gist: if the state of the studies is unclear, the effect under investigation is probably small.

The same principle applies not only to suspected harmful effects but also to any claimed yet doubtful benefit. The helmet debate, for example, would then be over quickly, and for lack of empirical evidence nobody would believe most claims about IT security anyway.

Benefit claims can also be measured against a second, similar criterion. If something does not sell (e.g., particularly secure messengers) or sells only to devout fringe groups (e.g., homeopathy), the objective benefit is probably not as large as claimed. Only a strong belief shifts subjective preferences far enough to make the purchase economically rational.

OMG, public information found world-readable on mobile phones

In their Black Hat stage performance, employees of a security company showed how apps on certain mobile phones can access fingerprint data if the phone has a fingerprint sensor. The usual discussions ensued about rotating your fingerprints, biometrics being a bad idea, and biometric features being usernames rather than passwords. But was there a problem in the first place? Let’s start from scratch, slightly simplified:

  1. Authentication is about claims and the conditions under which one would believe certain claims.
  2. We need authentication when an adversary might profit from lying to us.
  3. Example: We’d need to authenticate banknotes (= pieces of printed paper issued by or on behalf of a particular entity, usually a national or central bank) because adversaries might profit from making us believe a printed piece of paper is a banknote when it really isn’t.
  4. Authentication per se has nothing to do with confidentiality and secrets, as the banknotes example demonstrates. All features that we might use to authenticate a banknote are public.
  5. What really matters is effort to counterfeit. The harder a feature or set of features is to reproduce for an adversary, the stronger it authenticates whatever it belongs to.
  6. Secrets, such as passwords, are only surrogates for genuine authenticating features. They remain bound to an entity only for as long as adversaries remain uncertain about the value chosen from a vast space of possible values (a rough back-of-the-envelope sketch follows this list).
  7. Fingerprints are neither usernames nor passwords. They are (sets of) biometric features. Your fingerprints are as public as the features of a banknote.
  8. We authenticate others by sets of biometric features every day, recognizing colleagues, friends, neighbours, and spouses by their voices, faces, ways of moving, and so on.
  9. We use even weaker (= easier to counterfeit) features to authenticate, for example, police officers. If someone is wearing a police uniform and driving a car with blinkenlights on its roof, we’ll treat this person as a police officer.
  10. As a side condition for a successful attack, the adversary must not only be able to counterfeit authenticating features but must also go through an actual authentication process.
  11. Stolen (or guessed) passwords are so easy to exploit on the Internet because the Internet does little to constrain their abuse.
  12. Attacks against geographically dispersed fingerprint sensors do not scale in the same way as Internet attacks.
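
A rough back-of-the-envelope sketch of point 6, with made-up numbers (the alphabet size, password length, and guessing rate are assumptions, not measurements):

    # Rough numbers for point 6: a secret authenticates only as long as the
    # adversary remains uncertain about its value. All figures are made up.
    import math

    alphabet = 26 + 26 + 10           # lowercase, uppercase, digits
    length = 10                       # characters in the password
    space = alphabet ** length        # values the adversary must consider
    print(f"possible passwords: {space:.2e} (~{math.log2(space):.0f} bits)")

    # An online attacker limited to, say, 10 guesses per second needs on
    # average space/2 guesses -- far beyond any patience (see points 11/12).
    guesses_per_second = 10
    years = space / 2 / guesses_per_second / (365 * 24 * 3600)
    print(f"expected time to guess online: {years:.2e} years")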

Conclusion: Not every combination of patterns-we-saw-in-security-problems makes a security problem. We are leaving fingerprints on everything we touch; they never were and never will be confidential.

Carcinogenic agents classified by evidence

Carcinogenic is a scary word, but what exactly does it mean? The International Agency for Research on Cancer (IARC), a part of the World Health Organization (WHO), has established a framework. The IARC reviews scientific evidence for and against carcinogenicity claims and places agents in one of the following categories:

  • Group 1 – carcinogenic: Sufficient evidence exists that an agent is carcinogenic in humans. In this group we find, as expected, agents like various forms of radiation (gamma, X-ray, ultraviolet), tobacco smoke, alcohol, estrogen-progestogen oral contraceptives, and asbestos.
  • Group 2A – probably carcinogenic: Strong evidence exists that an agent is carcinogenic, yet the evidence remains insufficient to be sure. More precisely, there is sufficient evidence of carcinogenicity in animals but limited evidence in humans. This group contains, for example, acrylamide and occupational exposure to oxidized bitumens and their emissions during roofing.
  • Group 2B – possibly carcinogenic: Some evidence exists for carcinogenicity of an agent, but it is neither sufficient nor strong. This class comprises agents like chloroform, coffee, DDT, marine diesel fuel, gasoline and gasoline engine exhaust, and certain surgical implants and other foreign bodies.
  • Group 3 – unclassifiable: Evidence is inadequate; the agent may or may not be carcinogenic, we do not know enough to tell. This category contains agents like caffeine, carbon nanotubes, static or extremely low-frequency electric fields, fluorescent lighting, phenol, polyvinyl chloride, and rock wool.
  • Group 4 – probably not carcinogenic: Evidence suggests that the agent is not carcinogenic. Perhaps due to publication bias, this group contains only one element, caprolactam.

The IARC publishes its classification criteria and lists of agents (by category and alphabetical). Wikipedia also maintains lists by these categories with links to the respective pages.

Keep in mind that this classification represents only the state of evidence regarding carcinogenicity in humans, not your risk of exposure to an agent, a dose-response assessment, or the overall health risk from exposure. Hint: anything that kills you quickly and reliably is unlikely to cause you cancer.

Confidentiality is overrated

Is security about keeping secrets? Not really, although it seems so at first glance. Perhaps this mismatch between perception and reality explains why threats are mounting in the news without much impact on our actual lives.

Confidentiality comes first in infosec’s C/I/A (confidentiality, integrity, availability) trinity. Secrets leaking in a data breach are the prototype of a severe security problem. Laypeople even use encryption and security synonymously. Now that the half-life of secrets is declining, are we becoming less and less secure?

Most real security problems are not about keeping secrets, they are about integrity of control. Think, for example, of the money in your wallet. What matters to you is control over this money, which should abide by certain rules. It’s your money, so you should remain in control of it until you voluntarily give up control in a transaction. The possibility of someone else taking control of your money without your consent, through force or trickery, is something to worry about and, if such adversaries exist, a real security problem. Keeping the contents of your wallet out of sight is, in contrast, only a minor concern. Someone peeking into your wallet without taking anything is not much of a threat. Your primary security objective is to remain in control of what is yours most of the time and to limit your losses in the exceptional cases when you are not.

This security objective remains just the same as you move on from a wallet to online banking. What matters most is who controls the balance in which way. In a nutshell, only you (or others with your consent), knowingly and voluntarily, should be able to withdraw money or transfer it from your account; you should not be able to increase your balance arbitrarily without handing in actual money; others should be able to transfer any amount to your account; exceptions apply if you don’t pay your debts.
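
The rules in the previous paragraph can be written down as explicit integrity checks. This is a purely illustrative sketch, not any bank’s actual logic; all names are made up:

    # Illustrative only: the control rules from the paragraph above as code.
    class Account:
        def __init__(self, owner, balance=0):
            self.owner = owner
            self.balance = balance

        def withdraw(self, amount, requested_by, owner_consents=False):
            # Only the owner, or someone acting with the owner's consent,
            # may take money out.
            if requested_by != self.owner and not owner_consents:
                raise PermissionError("no withdrawal without the owner's consent")
            if amount <= 0 or amount > self.balance:
                raise ValueError("invalid amount in this simplified model")
            self.balance -= amount

        def deposit(self, amount):
            # Anyone may pay money in, but the balance grows only when actual
            # money is handed in, never arbitrarily.
            if amount <= 0:
                raise ValueError("deposits must hand in actual money")
            self.balance += amount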

Confidentiality is only an auxiliary objective. We need confidentiality due to vulnerabilities. Many security mechanisms rely on secrets, such as passwords or keys, to maintain integrity. This is one source of confidentiality requirements. Another is economics: Attackers will spend higher amounts on valuable targets, provided they can identify them. If there is a large number of possible targets but only a few are really valuable, one might try to make the valuable target look like all the others so that attackers have to spread at least part of their effort across many candidate targets. However, strong defenses are still needed in case attackers identify the valuable target in whichever way, random or systematic.
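
A toy calculation of that economic argument, with invented numbers: if the one valuable target is indistinguishable from the rest, an attacker probing candidates in random order spends effort on about half of them before finding it.

    # Toy model of the "hide among many candidates" argument; numbers invented.
    n_candidates = 1000          # indistinguishable possible targets
    cost_per_probe = 50          # attacker's cost to examine one candidate

    expected_probes = (n_candidates + 1) / 2   # average position of the target
    print(f"expected attacker cost: {expected_probes * cost_per_probe:.0f}")
    # An identifiable target would cost the attacker a single probe.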

The better we maintain integrity of control, the more secure we are. Systems remain predictable and do what we want despite the presence of adversaries. Confidentiality is only a surrogate where we do not trust our defenses.

My God, it’s full of spam!

Spam appears wherever someone can reach many recipients with little effort and turn their reactions into profit. E-mail is the classic example: sending millions of messages costs next to nothing. If recipients’ clicks can be converted directly into money, spam pays off, because even at a tiny response rate more money comes back than the sending cost.
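
The break-even arithmetic behind this, with invented numbers:

    # Invented numbers illustrating why spam pays at tiny response rates.
    messages = 1_000_000
    cost_per_message = 0.0001        # sending is almost free
    revenue_per_response = 10.0      # what a single reaction brings in
    response_rate = 0.0001           # one in ten thousand recipients reacts

    profit = messages * (response_rate * revenue_per_response - cost_per_message)
    print(f"profit: {profit:.0f}")   # positive whenever rate > cost / revenue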

E-mail is only one carrier for spam. I came across another one in Google Analytics: referral spam. Here the spammers pose as a website linking to another website, either by having bots visit the target site or by delivering fake data to the Google Analytics API. In either case the referrer contains a URL to which they want to lure visitors; this URL then shows up in log files or in Google Analytics. On the one hand, this is a way to sneak in links if websites publish their log files. On the other hand, it piques the curiosity of individual Analytics users or log-file readers and thus lures visitors to the spammer’s site.
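
A sketch of how such a fake hit was presumably delivered, assuming the Universal Analytics Measurement Protocol of that era; the tracking ID and URLs are placeholders, and this is shown only to illustrate the mechanism:

    # Hedged sketch of "ghost" referral spam against Universal Analytics
    # (ca. 2015). Tracking ID and URLs are placeholders; do not send such hits.
    import uuid
    import requests

    payload = {
        "v": "1",                            # Measurement Protocol version
        "tid": "UA-XXXXXX-1",                # victim's tracking ID (placeholder)
        "cid": str(uuid.uuid4()),            # random client ID
        "t": "pageview",                     # hit type
        "dp": "/",                           # page supposedly visited
        "dr": "http://spam-site.example/",   # the URL the spammer advertises
    }
    # One POST per fake visit; no bot ever has to load the victim's site.
    requests.post("https://www.google-analytics.com/collect", data=payload)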

Referral spam is not yet as common as its e-mail counterpart. Right now, however, an annoying campaign is running that advertises some useless social-buttons gadget. If it bothers you, you can set up a filter in Google Analytics. Detailed explanations can be found here.

CfP: Workshop on Agile Secure Software Development @ ARES’15

We are organizing a workshop on agile secure software development in conjunction with the ARES’15 conference. Please find the full call for papers on the workshop website, http://www.ares-conference.eu/conference/workshops/assd-2015/. The conference takes place in Toulouse this year.

Important dates:

Submission Deadline: April 22, 2015 (extended from April 15)
Author Notification: May 11, 2015
Proceedings version: June 8, 2015
Conference: August 24-28, 2015

The Uncertainty Principle of New Technology

Security, privacy, and safety by design sounds like a good idea. Alas, it is not going to happen, at least not with innovative technology. Collingridge’s dilemma gets in the way: When a technology is new and therefore easy to shape, we do not understand its downsides – and the non-issues-to-be – well enough to make informed design decisions, and once we understand the problems, fundamentally changing the now established and deployed technology becomes hard. Expressed in terms of the Cognitive Dimensions framework, technology design combines premature commitment with high viscosity later on.

With digital technology evolving at high pace, we are continually facing Collingridge’s dilemma. Big data and Internet-scale machine learning, the Internet of everything, self-driving cars, and many inventions yet to come challenge us to keep things under control without knowing what to aim for. Any technology we never created before is subject to the dilemma.

A tempting but fallacious solution is the (strong) precautionary principle: taking all possible risks seriously and treating whatever we cannot rule out as a problem. This approach is fallacious because it ignores the cost of implementation. Not every possible risk is a likely one. Trying to prevent everything that might go wrong, one inevitably ends up spending too much on possible but unlikely problems. Besides, Collingridge’s dilemma may apply to the chosen treatments as well.

As an alternative we might try to design for corrigibility, so that mistakes can easily be corrected once we learn about them. In the information technology domain this idea seems to echo what David Parnas proposed in his seminal paper On the criteria to be used in decomposing systems into modules (DOI: 10.1145/361598.361623). Parnas argues that software modules should hide design decisions from their surroundings, so that the inner workings of a module can be modified without affecting dependent modules. Constructs supporting this found their way into modern-day programming paradigms and languages; object-oriented programming is the most obvious application of Parnas’ idea.
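
A minimal sketch of information hiding in present-day terms; the example and all names are invented:

    # The module hides one design decision -- how entries are stored -- so the
    # storage layout can change later without affecting any caller. Invented.
    class AddressBook:
        """Clients see only this interface, never the storage layout."""

        def __init__(self):
            self._entries = {}   # decision hidden here: a dict keyed by name;
                                 # could become a sorted list or a database

        def add(self, name, email):
            self._entries[name] = email

        def lookup(self, name):
            return self._entries.get(name)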

But the dilemma is not that easily solved. First, software design is too narrow a scope. Technology is more than just software and can become quite viscous regardless of how easily the software is changed. Just think of the Internet and its core protocol, IP. Most operating systems come with support for IPv4 and IPv6, and there are many good reasons to move on to the new protocol version. Yet we are still waiting for the day when the Internet will abandon IPv4 in favor of IPv6. The Internet as a system is really hard to change. Nevertheless, modularity helps. When attacks against Internet banking users became widespread about ten years ago, for example, banks in Germany managed to update their authorization mechanisms and increase security in a relatively short time and with little trouble for their customers.

In their recent paper Cyber Security as Social Experiment (NSPW’14, DOI: 10.1145/2683467.2683469), Wolter Pieters, Dina Hadžiosmanović and Francien Dechesne argue that experimentation could help us learn more about the side effects of new technology. Since people are part of any interesting system, this amounts to running social experiments. If we do not care and just deploy a technology, that is an experiment as well, only less controlled and systematic. Particular to cyber security is the challenge of involving adversaries, as they are the root of all security threats. The general challenge is to run social experiments responsibly, within ethical bounds.

Even with experiments, some negative consequences will likely escape our attention. Some effects take too long before they show or show only after a technology has been deployed at a global scale. Could James Watt have thought of climate change due to the burning of fossil fuel? Probably not. But at least we understand what the meta-problem is.

The Social Component of Risk Assessment

(This post appeared first on the ESSE project blog.)

Earlier this year, Andreas presented our paper An Asset to Security Modeling? Analyzing Stakeholder Collaborations Instead of Threats to Assets (DOI: 10.1145/2683467.2683474) at the New Security Paradigms Workshop. During our work with the GESIS Secure Data Center team it emerged that the common way of doing risk assessment may be flawed. In this paper we discuss what is missing and how to analyze collaboration networks to understand the consequences of security incidents.

Risk assessment, as described for example in ISO 31000, is a systematic process that prepares decisions. The goal of this process is to find appropriate risk responses and treatments. A risk can be accepted or even increased (if doing so entails an opportunity); it can be avoided, shared, or transferred; or it can be mitigated by reducing its likelihood or impact. As a prerequisite for informed decisions one goes through the risk assessment process, during which one identifies, analyzes, and evaluates pertinent risks. The figure below, a more elaborate version of which can be found in ISO 31000, illustrates this process chain.

[Figure: a 4-step process – (1) risk identification, (2) risk analysis, (3) risk evaluation, (4) risk treatment; risk assessment comprises steps 1–3.]

Stakeholders participate in this process as a source of information, knowing their respective business or business function and being able to assess likelihoods and impacts. This standard approach to risk assessment rests on the premise that risk treatments are variable and the objective is to find optimal values for them.

In our paper we propose a complementary approach. Our premise: Stakeholders collaborate in complicated networks for mutual benefit. Risk and incident responses are to a large degree determined by their collaboration relationships. These pre-determined responses are not to be defined as a result of risk assessment, they rather constitute a factor to be considered in risk analysis and evaluation. The figure below is a simplified version of Figure 8 in our paper:

[Figure: SDC stakeholder network]

The Secure Data Center serves its users, who are part of a larger research community; the SDC also needs its users, as serving them is its purpose. Beyond the individual user, the research community at large benefits from SDC services and influences their acceptance. Primary investigators provide data; they benefit from wider recognition of their work through secondary analyses and fulfil obligations by archiving their data. Survey participants are the source of all data. Everyone wants to preserve their willingness to participate in studies.

The need for an extension of risk assessment methodologies became apparent when we reviewed the threat models the participants of our study had produced and discussed them with the participants. They expressed various concerns about the stakeholders involved and their possible reactions to security incidents. Although traditional approaches to risk assessment include the analysis of consequences, they fail to provide tools for this analysis. In the security domain in particular it is often assumed that consequences can be evaluated by identifying assets and assigning some monetary value to each of them. In our experience, it is more complicated than that.

Read the paper on our website or in the ACM digital library:

Andreas Poller; Sven Türpe; Katharina Kinder-Kurlanda: An Asset to Security Modeling? Analyzing Stakeholder Collaborations Instead of Threats to Assets. New Security Paradigms Workshop (NSPW’14), Victoria, BC, September 15-18, 2014. DOI: 10.1145/2683467.2683474 [BibTeX]