Eat Less Bread?

“Eat less bread,” demands a British poster from WWI. We all know it makes sense, don’t we? Resources become scarce in wartime, so wasting them weakens one’s own position. Yet this kind of advice can be utterly useless: tell a hungry person to eat less bread and you will earn, at best, a blank stare. However reasonable your advice may seem to you and everyone else, a hungry person will be physically and mentally unable to comply.

“Do not call system()” or “Do not read uninitialized memory,” demand secure coding guides. Such advice is equally useless if directed at a person who lacks the cognitive ability to comply. Cognitive limitations do not mean a person is stupid. We are all limited in our ability to process information, and we are more similar to than dissimilar from each other in this regard.

Secure coding guidelines all too often dictate a large set of arbitrary dos and don’ts but fail to take human factors into account. Do X! Don’t do Y, do Z instead! Each of these recommendations has a sound technical basis; code becomes more secure if everyone follows this advice. However, only some of these recommendations are realistic for programmers to follow. Their sheer number alone should raise doubts and lead us to expect that only a subset will ever be adopted by a substantial number of programmers.

Some rules are better suited for adoption than others. Programmers often acquire idioms and conventions they perceive as helpful. Using additional parentheses for clarity, for example, even though not strictly necessary, improves readability; and the const == var convention prevents certain defects that are easy to introduce and sometimes hard to debug.
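
A minimal C sketch of both idioms (the variable names are illustrative, not from the original post): redundant parentheses spell out precedence, and writing the constant on the left of a comparison turns the classic typo of = instead of == into a compile error rather than a silent bug.

    #include <stdio.h>

    int main(void)
    {
        int status = 0;
        int a = 1, b = 2, c = 3;

        /* Accidental assignment: "if (status = 1)" compiles and is always
         * true, a defect that is easy to introduce and hard to spot. */

        /* const == var convention: the same typo, "if (1 = status)",
         * no longer compiles. */
        if (1 == status) {
            printf("status is one\n");
        }

        /* Redundant but clarifying parentheses make precedence explicit. */
        int within_range = (a < b) && (b < c);
        printf("within_range = %d\n", within_range);
        return 0;
    }

Both are cheap habits that a compiler or style checker can reinforce, which is exactly the kind of rule that stands a chance of being adopted.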

Other rules seem, from a programmer’s point of view, just ridiculous. Why is there a system() function in the first place if programmers are not supposed to use it? And if developers should not read uninitialized memory, what would warn them that memory has not been initialized? Such advice is inexpensive – and likely ineffective. If we want programmers to write secure code, we must offer them platforms that make secure programming easy and straightforward and insecure programming hard and difficult.
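
As an illustration of the technical basis behind the first rule, hedged and with hypothetical function names: system() hands its argument to a shell, so any attacker-influenced input becomes shell syntax, whereas the exec family passes arguments verbatim. A sketch for POSIX systems:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Risky: the filename is parsed by /bin/sh, so an input such as
     * "a.txt; rm -rf ~" would run as two commands. */
    void list_file_via_shell(const char *filename)
    {
        char cmd[256];
        snprintf(cmd, sizeof cmd, "ls -l %s", filename);
        system(cmd);
    }

    /* Safer: execvp() receives the filename as a single argument;
     * no shell ever interprets it. */
    void list_file_directly(const char *filename)
    {
        pid_t pid = fork();
        if (pid == 0) {
            char *const argv[] = { "ls", "-l", (char *)filename, NULL };
            execvp("ls", argv);
            _exit(127);            /* exec failed */
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);
        }
    }

Whether programmers actually reach for the safer pattern depends on how much friction the platform adds to it compared with the one-liner; that is the human factor the guidelines ignore.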

Security and protection systems guard persons and property against a broad range of hazards, including crime; fire and attendant risks, such as explosion; accidents; disasters; espionage; sabotage; subversion; civil disturbances; bombings (both actual and threatened); and, in some systems, attack by external enemies. Most security and protection systems emphasize certain hazards more than others. In a retail store, for example, the principal security concerns are shoplifting and employee dishonesty (e.g., pilferage, embezzlement, and fraud). A typical set of categories to be protected includes the personal safety of people in the organization, such as employees, customers, or residents; tangible property, such as the plant, equipment, finished products, cash, and securities; and intangible property, such as highly classified national security information or “proprietary” information (e.g., trade secrets) of private organizations. An important distinction between a security and protection system and public services such as police and fire departments is that the former employs means that emphasize passive and preventive measures.

(Encyclopædia Britannica)

Public e-voting lobbyism

A vendor of digital election technology enlightens us about how insecure conventional elections supposedly are and what advantages running future elections by machine would have. Us? Well, more precisely, the readers of European View, the journal of the Centre for European Studies. That is the think tank of the European People’s Party, the Euro-CDU/CSU.

Just read the article and mark every passage that makes you suspicious. Done? Then you will find the model answers here and there.

Consumer protection for #Neuland

Once again a program has drawn attention to itself by vandalizing its environment wherever it gets installed. Kristian Köhntopp sums up the problem vividly, and the comments under his post illustrate that this is not an isolated case. Technically, the problem could in principle be solved by letting a trustworthy provider run a closed platform that prevents such programs, or detects and excludes them. That, of course, rests on a number of assumptions that are not necessarily met.

At its core, however, this is an economic problem, one that cries out for an economic solution: “Moral hazard occurs under a type of information asymmetry where the risk-taking party to a transaction knows more about its intentions than the party paying the consequences of the risk. More broadly, moral hazard occurs when the party with more information about its actions or intentions has a tendency or incentive to behave inappropriately from the perspective of the party with less information.” — (Wikipedia: Moral Hazard)

Product liability does not necessarily solve the problem; it merely leads to risk-minimizing corporate structures. Every product gets its own disposable company without assets worth mentioning, which can be sacrificed cheaply in a crisis. This model has long been tried and tested in the security business as well (case study: DigiNotar). One would have to force companies to build reserves and pay into a liability fund or something of that kind. That can be done, but it is a better fit for nuclear power plants.

Compulsory transparency suggests itself as a way forward. In #Altland we have Stiftung Warentest for that, but it already has its hands full with comparative tests of sunscreen, bicycle helmets, and cordless screwdrivers. In #Neuland people used to believe the problem could be solved with positive certifications that attest defined security properties of an individual product. That works well only in niches. More recently, all kinds of bug bounties and initiatives such as Project Zero have joined in.

If I frankenstein these approaches together, I arrive at an independent and solidly funded European Foundation for IT Security that takes a close look at relevant software and publishes its findings. Its subject matter would be consumer and mass-market products, which it examines for security flaws and surprising functions. Concealing functions might also have to be made a punishable offence for this to work. In addition, we would have to figure out how testers get unhindered access to SaaS. Those are details, though; first we have to decide in principle what digital consumer protection beyond “be careful with the Internet” should look like.

(Slightly revised version of a post on Google+)

Security must come second, or else it gets in the way

For two decades, Germany has dreamt, without success, of promoting electronic government, state-affiliated systems such as the health telematics infrastructure, and the Internet in general by officially developing, standardizing, and regulating generic security services. Visible signs of this are the Signature Act, the electronic health card, De-Mail, and the eID function of the national identity card.

None of these cases has worked out well. The electronic signature is so rarely used that a procedure built on it, the electronic earnings statement ELENA, had to be discontinued. After more than a decade of development – gematik was founded longer ago than the first groundbreaking for BER – the health card can do little more than its predecessor. De-Mail, unlike its predecessor the eID, has not yet reached the stage of idea competitions, yet it suffers from comparable acceptance problems.

What these cases have in common is the attempt to create a generic security technology for a multitude of as-yet-unknown applications, on the assumption that this security technology is exactly what those applications are missing. Both parts are wrong. Whoever starts with the security technology wrecks the design process and ends up with design around security instead of security by design, and applications have to work first before one starts optimizing their security.

Studies

A quick tip in between:

Over at Quantenwelt, Joachim Schulz explains briefly and concisely his criteria for when he lets something scare him and when he does not. The gist: if the state of the studies is unclear, the effect under investigation is probably small.

The same principle can be applied not only to suspected harmful effects but also to any claimed yet doubtful benefit. The helmet debate, for example, would then be over quickly, and for lack of empirical evidence hardly anyone would believe most claims about IT security anyway.

We can measure claims of benefit against a second, similar criterion. If something does not sell (e.g., particularly secure messengers) or sells only to devout fringe groups (e.g., homeopathy), then its objective benefit is probably not as great as claimed. Only strong belief shifts subjective preferences far enough to make the purchase economically rational.

OMG, public information found world-readable on mobile phones

In their Black Hat stage performance, employees of a security company showed how apps on certain mobile phones can access fingerprint data if the phone has a fingerprint sensor. The usual discussions ensued about rotating your fingerprints, biometrics being a bad idea, and biometric features being usernames rather than passwords. But was there a problem in the first place? Let’s start from scratch, slightly simplified:

  1. Authentication is about claims and the conditions under which one would believe certain claims.
  2. We need authentication when an adversary might profit from lying to us.
  3. Example: We’d need to authenticate banknotes (= pieces of printed paper issued by or on behalf of a particular entity, usually a national or central bank) because adversaries might profit from making us believe a printed piece of paper is a banknote when it really isn’t.
  4. Authentication per se has nothing to do with confidentiality and secrets, as the banknotes example demonstrates. All features that we might use to authenticate a banknote are public.
  5. What really matters is effort to counterfeit. The harder a feature or set of features is to reproduce for an adversary, the stronger it authenticates whatever it belongs to.
  6. Secrets, such as passwords, are only surrogates for genuine authenticating features. They remain bound to an entity only for as long as any adversary remains uncertain about their choice from a vast space of possible values (see the sketch after this list).
  7. Fingerprints are neither usernames nor passwords. They are (sets of) biometric features. Your fingerprints are as public as the features of a banknote.
  8. We authenticate others by sets of biometric features every day, recognizing colleagues, friends, neighbours, and spouses by their voices, faces, ways of moving, and so on.
  9. We use even weaker (= easier to counterfeit) features to authenticate, for example, police officers. If someone is wearing a police uniform and driving a car with blinkenlights on its roof, we’ll treat this person as a police officer.
  10. As a side condition for a successful attack, the adversary must not only be able to counterfeit authenticating features; the adversary must also go through an actual authentication process.
  11. Stolen (or guessed) passwords are so easy to exploit on the Internet because the Internet does little to constrain their abuse.
  12. Attacks against geographically dispersed fingerprint sensors do not scale in the same way as Internet attacks.
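
To put rough numbers on point 6, here is a back-of-the-envelope sketch (the secret policies listed are illustrative): the strength of a secret is simply the size of the space an adversary has to search.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Illustrative secret policies: alphabet size and length. */
        struct { const char *name; double alphabet; double length; } policy[] = {
            { "4-digit PIN",            10,  4 },
            { "8 lowercase letters",    26,  8 },
            { "12 mixed-case + digits", 62, 12 },
        };

        for (int i = 0; i < 3; i++) {
            double combinations = pow(policy[i].alphabet, policy[i].length);
            double bits = policy[i].length * log2(policy[i].alphabet);
            /* On average an adversary must try half the space; the point is
             * how quickly that space grows with alphabet and length. */
            printf("%-24s %.0f combinations (~%.1f bits)\n",
                   policy[i].name, combinations, bits);
        }
        return 0;
    }

Whether those guesses can actually be tried, online against a rate-limited service or offline against stolen data, is exactly what points 10 to 12 are about.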

Conclusion: Not every combination of patterns-we-saw-in-security-problems makes a security problem. We are leaving fingerprints on everything we touch; they never were and never will be confidential.

Carcinogenic agents classified by evidence

Carcinogenic is a scary word, but what exactly does it mean? The International Agency for Research on Cancer (IARC), a part of the World Health Organization (WHO), has established a framework. The IARC reviews scientific evidence for and against carcinogenicity claims and places agents in one of the following categories:

  • Group 1 – carcinogenic: Sufficient evidence exists that an agent is carcinogenic in humans. In this group we find, as expected, agents like various forms of radiation (gamma, x-ray, ultraviolet), tobacco smoke, alcohol, estrogen-progestogen oral contraceptives, and asbestos.
  • Group 2A – probably carcinogenic: Strong evidence exists that an agent is carcinogenic, yet the evidence remains insufficient to be sure. More precisely, there is sufficient evidence of carcinogenicity in animals but limited evidence in humans. This group contains, for example, acrylamide and occupational exposure to oxidized bitumens and their emissions during roofing.
  • Group 2B – possibly carcinogenic: Some evidence exists for carcinogenicity of an agent, but it is neither sufficient nor strong. This class comprises agents like chloroform, coffee, DDT, marine diesel fuel, gasoline and gasoline engine exhaust, and certain surgical implants and other foreign bodies.
  • Group 3 – unclassifiable: Evidence is inadequate; the agent may or may not be carcinogenic, and we do not know enough to tell. This category contains agents like caffeine, carbon nanotubes, static or extremely low-frequency electric fields, fluorescent lighting, phenol, polyvinyl chloride, and rock wool.
  • Group 4 – probably not carcinogenic: We have evidence that an agent is not carcinogenic. Perhaps due to publication bias, this group contains only one element, caprolactam.

The IARC publishes classification criteria and lists (by category, alphabetical). Wikipedia also maintains lists by these categories with links to the respective pages.

Keep in mind that this classification represents only the state of evidence regarding carcinogenicity in humans, not your risk of becoming exposed to an agent, a dose-response assessment, or the overall health risk of exposure. Hint: everything that kills you quickly and reliably is unlikely to cause you cancer.

Confidentiality is overrated

Is security about keeping secrets? Not really, although it seems so at first glance. Perhaps this mismatch between perception and reality explains why threats are mounting in the news without much impact on our actual lives.

Confidentiality comes first in infosec’s C/I/A (confidentiality, integrity, availability) trinity. Secrets leaking in a data breach are the prototype of a severe security problem. Laypeople even use encryption and security synonymously. Now that the half-life of secrets is declining, are we becoming less and less secure?

Most real security problems are not about keeping secrets; they are about integrity of control. Think, for example, of the money in your wallet. What matters to you is control over this money, which should abide by certain rules. It’s your money, so you should remain in control of it until you voluntarily give up your control in a transaction. The possibility of someone else taking control of your money without your consent, through force or trickery, is something to worry about and, if such others exist, a real security problem. Keeping the contents of your wallet out of sight is, in contrast, only a minor concern. Someone peeking into your wallet without taking anything is not much of a threat. Your primary security objective is to remain in control of what is yours most of the time and to limit your losses in the exceptional cases when you are not.

This security objective remains just the same as you move on from a wallet to online banking. What matters most is who controls the balance in which way. In a nutshell, only you (or others with your consent), knowingly and voluntarily, should be able to withdraw money or transfer it from your account; you should not be able to increase your balance arbitrarily without handing in actual money; others should be able to transfer any amount to your account; exceptions apply if you don’t pay your debts.
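
A minimal sketch of what these rules look like as code, with entirely hypothetical names and a deliberately simplified model: every change to the balance must pass the control rules, while nothing here depends on keeping the balance secret.

    #include <stdbool.h>

    struct account {
        long balance_cents;
        int  owner_id;
    };

    /* Withdrawals and outgoing transfers: only the owner (or someone acting
     * with the owner's consent) may reduce the balance, and never below zero. */
    bool withdraw(struct account *acc, int requester_id, long amount_cents)
    {
        if (requester_id != acc->owner_id)        /* no consent, no control */
            return false;
        if (amount_cents <= 0 || amount_cents > acc->balance_cents)
            return false;
        acc->balance_cents -= amount_cents;
        return true;
    }

    /* Deposits: anyone may transfer money in, but the balance grows only by
     * amounts actually handed over, never by arbitrary edits. */
    bool deposit(struct account *acc, long amount_cents)
    {
        if (amount_cents <= 0)
            return false;
        acc->balance_cents += amount_cents;
        return true;
    }

Whether an onlooker can read the balance is a separate, and here clearly secondary, question.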

Confidentiality is only an auxiliary objective. We need confidentiality because of vulnerabilities. Many security mechanisms rely on secrets, such as passwords or keys, to maintain integrity. This is one source of confidentiality requirements. Another is economics: attackers will invest more in valuable targets, provided they can identify them. If there is a large number of possible targets but only a few are really valuable, one might try to make the valuable target look like all the others, so that attackers have to spread at least part of their effort across many candidate targets. However, strong defenses are still needed in case attackers identify the valuable target in whichever way, random or systematic.

The better we maintain integrity of control, the more secure we are. Systems remain predictable and do what we want despite the presence of adversaries. Confidentiality is only a surrogate where we do not trust our defenses.