Category Archives: English


Crypto Wars 2.0: Let the Trolling Commence (and don’t trust your phone)

That was a nice bit of trolling. A rough timeline: (1) Apple and, later, Google announce modest improvements to a security building block of their respective mobile platforms: device encryption. (2) Government officials in the US publicly complain that this would obstruct law enforcement and demand means to access encrypted device data. (3) The usual suspects are up in arms and reiterate their arguments why crypto backdoors are a bad idea.

What is wrong with this debate, apart from it being a rerun? First, encryption is not as secure as claimed. Second, encryption is not as important as assumed.

Device encryption is just one small security building block. It protects data stored on the device against access without the encryption key if the adversary encounters the device in the turned-off state. Attacks against encryption typically go for the keys. As we were just reminded, police can compel suspects to hand over their fingerprints and unlock a device. Some countries have key disclosure laws.

Against running devices there are further attack options. If any key material is held in RAM, it can in principle be extracted with a cold boot attack. Whether Apple’s Secure Enclave design does anything to protect against such attacks remains unclear. As we have learned from Microsoft’s encryption scheme, BitLocker, even hardware-supported encryption can leave a number of loopholes (Trust 2009 paper).

Encryption has its limitations. It protects data subject to several conditions. In particular, the adversary must be unable to obtain the key or subvert the execution environment. While plug-and-play forensics would be more convenient for law enforcement, there are ways around device encryption.
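The point that encryption only protects data at rest, and that attacks go for the key rather than the ciphertext, can be sketched in a few lines. This is a deliberately toy construction (a SHA-256 counter-mode keystream standing in for a real cipher; the passcode, salt, and data are all made up), not any vendor's actual scheme:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream from SHA-256. Illustration only,
    not a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# Data at rest: without the key, the ciphertext reveals nothing useful.
key = hashlib.sha256(b"user passcode" + b"device salt").digest()
nonce = b"\x00" * 8
plaintext = b"contacts, photos, messages"
ciphertext = xor(plaintext, keystream(key, nonce, len(plaintext)))

# On a running device the key sits in RAM. Whoever obtains it -- via
# cold boot, key disclosure laws, or a compelled unlock -- decrypts
# trivially, no matter how strong the cipher is.
recovered = xor(ciphertext, keystream(key, nonce, len(ciphertext)))
assert recovered == plaintext
```

The asymmetry is the whole story: breaking the cipher is hard, but the key is a short secret that must exist somewhere whenever the device is in use.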

Mobile platforms extend beyond the individual device. Not only do devices communicate liberally with other devices and with Internet services, they also depend on the platform operator. Apple and Google run app stores and supply software updates. What the software on a device can or cannot do may change at any time.

Encryption protects files against access bypassing the operating system, not against access from within. Protection against rogue users or applications is a matter of authentication and access control — software making decisions, software that can be modified. While this channel entails some tampering-with-evidence problems for law enforcement, it seems technically quite feasible to use it.
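The "access from within" channel can be made concrete with a toy sketch. The policy table, user names, and file contents below are all hypothetical; the point is only that the guard is ordinary software, and software can be changed:

```python
# Toy access-control check inside a running OS: files are served
# decrypted, and whether a caller gets them is a software decision.
POLICY = {"alice": {"diary.txt"}}

def read_file(user: str, name: str, files: dict) -> str:
    if name not in POLICY.get(user, set()):
        raise PermissionError(f"{user} may not read {name}")
    return files[name]  # plaintext: the OS already decrypted it

files = {"diary.txt": "dear diary ..."}
assert read_file("alice", "diary.txt", files) == "dear diary ..."

# A pushed software update can change the decision logic without
# touching the encryption layer at all:
POLICY["investigator"] = {"diary.txt"}
assert read_file("investigator", "diary.txt", files) == "dear diary ..."
```

Disk encryption never enters the picture here; the operating system decrypts on behalf of whatever code it trusts, and what it trusts is defined by mutable software.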

Encrypted equals secure only from a microscopic perspective. I have advocated before paying more attention to systemic and macroscopic aspects, and the crypto wars 2.0 debate nicely illustrates how too narrow a focus on a single security mechanism skews our debate. Encryption matters, but not as much as we are led to believe.

Post-privacy is for everyone

Two quotes and a thought:

»If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.« — Eric Schmidt


»A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper.« — Bruce Schneier

Is their message the same, or does it make a difference whether we talk about individuals or organizations? One limitation they surely share: the mere fact that one does not want others to know certain things does not imply those things are wrong, even if one rightfully fears the others’ reaction. Both quotes oversimplify the problem by ignoring the right/wrong dimension, thus omitting case distinctions that may be important to the discussion.

Learning from History

Everyone knows the story of Clifford Stoll and the West German KGB hackers (see the video below) in the late 80s. Does this history teach us something today? What strikes me as I watch this documentary again is the effort ratio between attackers and defenders. To fight a small adversary group, Stoll invested considerable effort, and from some point on involved further people and organizations in the hunt. In effect, once they had been detected, the attackers were on their way to being overpowered and apprehended.

Today, we take more organized approaches to security management and incident response. However, at the same time we try to become more efficient: we want to believe in automated mechanisms like data leakage prevention and policy enforcement. But these mechanisms work on abstractions – they are less complicated than actual attacks. We also want to believe in preventive security design, but soon find ourselves engaged in an eternal arms race as our designs never fully anticipate how attackers adapt. Can procedures and programs be smart enough to fend off intelligent attackers, or does it still take simply more brains on the defender’s than on the attacker’s part to win?


Sketchifying and the Instagram of Security Design

About a year ago, when I received the review comments on Point-and-Shoot Security Design (discussed in this blog before), I was confronted with a question I could not answer at that time. One reviewer took my photography analogy seriously and asked:

What is the Instagram of information security?

Tough one, all the more so as I had never used Instagram. But the question starts making sense in light of an explanation that I came across recently:

»Ask any teenager if they would want to share multi-photo albums on Instagram, and they wouldn’t understand you. The special thing about Instagram is that they focus on one photo at a time. Every single photo is a piece of handcrafted excellence. When you view an Instagram photo, you know the photo is the best photo of many photos. The user chose to upload that specific photo and spent a good amount of time picking a filter and editing it.«

(The Starbucks Theory — Thoughts on creativity)

One of the points I made in my paper was that design requires design space exploration before refinement and attention to detail. Even if one starts to consider security early in a development process, one may still end up bolting security on rather than designing it in if one lets non-security requirements drive the design process and simply looks for security mechanisms to add to the design. One should rather, I theorized, actively search the security design space for solution candidates and evaluate them against threat models to identify viable solutions to the security problems the operational environment will pose. Designing a secure system is not in the first place about defining a security policy or guaranteeing certain formal, microscopic security properties. Security design is rather about shaping the behavior of adversarial actors such that the resulting incident profile becomes predictable and acceptable.

Today I came across an article by Željko Obrenović, whose work I was unaware of at the time of writing the point-and-shoot paper. In his article Software Sketchifying: Bringing Innovation into Software Development (IEEE Software 30:3, May/June 2013) he outlines the ideas behind Sketchlet, a tool to help non-engineers try out different interaction designs. I haven’t tried Sketchlet yet, but apparently it allows interaction designers to work with technological components and services, combining and arranging them through a direct-manipulation user interface. Without having to program, the designer can take building blocks and play with them to try out ideas. Designers can thus quickly discard bad ideas before taking a selection of apparently good ones into the prototyping and later the implementation stage.

Conceptually this is pretty close to what I’d like to see for security design. There is a catch, however: security design deals with dimensions that cannot be experienced immediately and need to be made visible through evaluation and analysis. Design sketches need to be evaluated in a second dimension against threat models that capture and represent adversary behavior. Nevertheless, sketchifying looks like an interesting starting point for further exploration.

Attack-as-a-Service Market Prices

An article in the latest issue of CACM (paywalled) quotes some prices advertised by criminals for elementary attack building blocks:

  • DoS – $50…$500 per day
  • hacking a private e-mail address – $30…$50
  • forged identity documents – <$30
  • software opening fake accounts – <$500
  • custom-made malware – $1,500 + monthly service fee
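To see how cheap the business side is, one can add up the advertised prices for a small, entirely hypothetical campaign (all figures are taken from the list above; the campaign composition is my own illustration, and the open-ended items carry only an upper bound):

```python
# Advertised price ranges (low, high) in USD, from the CACM article's list.
prices = {
    "dos_per_day":   (50, 500),
    "email_hack":    (30, 50),
    "forged_id":     (0, 30),    # "< $30": upper bound only
    "account_tool":  (0, 500),   # "< $500": upper bound only
    "custom_malware": (1500, 1500),  # plus an unknown monthly service fee
}

# Hypothetical campaign: one week of DoS, ten mailbox compromises,
# one custom malware build (monthly fee not included).
low = 7 * prices["dos_per_day"][0] + 10 * prices["email_hack"][0] \
    + prices["custom_malware"][0]
high = 7 * prices["dos_per_day"][1] + 10 * prices["email_hack"][1] \
    + prices["custom_malware"][1]
print(f"campaign cost: ${low}...${high}")  # → campaign cost: $2150...$5500
```

A few thousand dollars buys a full toolkit; the scarce resource is not attack technology but a business model that turns it into revenue.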

If you want to get rich, don’t waste your time on developing sophisticated attack techniques. Look at the services available and devise a business model.

The misleading microscopic view

The Guardian lists 10 gross ingredients you didn’t know were in your food, ingredients like arsenic, hair, or silicone breast implant filler. Should we react with nausea and disgust? Of course not. Yummy food is yummy food; neither a just detectable trace of something (arsenic), nor the source of an ingredient (hair), nor possible other uses of the same ingredient (breast implants) have any noticeable impact. That’s by definition: if a dose of anything has a proven adverse health impact, it will be banned from being used in food. The Guardian‘s list is an example of microscopic properties that don’t matter macroscopically. Yummy food is yummy food.

We commit the same error when, in security, we look just at software defects and neglect their security impact. All software has defects; we might easily assemble a list of 10, or 100, or 1000 defects you didn’t know were in your programs. This does not mean they all matter and need to be removed. A system is secure if it evokes a predictable and controlled incident profile over its lifetime. Some software defects in some systems affect this incident profile in such a way that their removal matters. Others are just traces of poison, or issues that appear problematic only by analogy. The problem is: we often don’t know which is which.