Category Archives: Security

CfP: 3rd Workshop on Security Information Workers (WSIW 2017)

There will be another edition of WSIW this year and it will be part of SOUPS again, which is in turn co-located with the Usenix Annual Technical Conference. WSIW is concerned with all kinds of security information work and the people doing such work, such as developers, administrators, analysts, consultants, and so on. We were there last year with early results of our penetration testing in software development study. If the subject of your research is security work, consider submitting to WSIW.

CfP: https://www.usenix.org/conference/soups2017/call-for-papers-wsiw2017

Submissions due: 2017-05-25

Talk: “Security by Design?”

Trigger warning: work in progress

Last week I had the opportunity to speak at the 1. IT-Grundschutztag 2017 on the topic Security by Design? The event’s main theme was application security, and in keeping with our research I took a look at software development. Secure software is easily demanded, but putting it into practice in everyday development is difficult: in the early phases of development one struggles with uncertainty and decisions are hard to make; later one knows more, but change becomes difficult – not only technically, but also in the structures and routines of the development team.

The talk grew out of an earlier version that I gave together with Andreas Poller at the workshop „Partizipatives Privacy by Design“ in Kassel last October. I also borrowed a few slides from Andreas’ talk at CSCW’17.

If you would like to hear the audio track that goes with the slides: just ask.

Encryption Will Not Give You Free Speech

“Freedom of speech is the right to articulate one’s opinions and ideas without fear of government retaliation or censorship, or societal sanction.” (Wikipedia)

Reports of a vulnerability in WhatsApp are making the rounds today after The Guardian boosted the signal. Besides the fact that there is not really a backdoor, but rather a feature that represents a reasonable choice in a tradeoff between confidentiality and availability, the Guardian also repeats a common mistake: confounding encryption and free speech.

“Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech,’” writes The Guardian. This is bullshit. As per the definition cited above, free speech means you can say things without fear. Being able to say things only in private and needing strong technical privacy guarantees is the opposite of free speech. You need encryption for that which you cannot say without fear.

Yes, encryption can be a tool against those who suppress you (though a weak one, as your adversary can easily use your use of encryption against you – or deny you due process altogether and persecute you without any evidence or probable cause). But encryption will never give you free speech, it will only support your inner emigration.

Re: Open Letter on DNA Analysis in Forensics

Warnings of looming dangers always sell, since predicted problems can never be ruled out without taking a risk and trying something. One can thus speculate indefinitely about everything that might happen if one did what, in view of the risks, one had better leave alone. The latest subject of such “critical” reflection is the call to expand the permissible scope of DNA analysis in police work. Accordingly, social scientists have written an open letter on DNA analysis in forensics (Offener Brief zu DNA-Analysen in der Forensik) that urges caution and offers its authors as indispensable experts. The tenor: extended DNA analysis is far too complicated to let ordinary police officers work with its results unsupervised. In the end there is little more than the conclusion that mistakes could happen. That, however, is a trivial statement: mistakes are part of everyday police work, and the system of legislation, police, and justice copes with them quite well. Of course one has to consider the consequences of new methods, but there is no reason to panic. Our constitutional state errs quite reliably in favor of the suspect, and forensic scientists themselves know fairly well where the limits of the various analysis methods lie. We can attribute incalculable risks to any technology; that helps nobody.

 

An In-Depth Study of More Than Ten Years of Java Exploitation

My colleagues Philipp Holzinger, Stefan Triller, Alexandre Bartel, and Eric Bodden had a closer look at Java and the vulnerabilities discovered in the Java runtime environment during the last decade. They started from known exploits, identified the vulnerabilities exploited, and analyzed and grouped their root causes. Philipp’s presentation of the results at CCS’16 has been recorded and published on YouTube:

(YouTube)

The paper is also available online:

P. Holzinger, S. Triller, A. Bartel, E. Bodden: An In-Depth Study of More Than Ten Years of Java Exploitation. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS’16), Vienna, Austria, Oct. 24-28, 2016. DOI: 10.1145/2976749.2978361. Artifacts: ccs2016-artifacts-v01.zip

 

CAST Workshop „Sichere Software entwickeln“ on November 10

This year we are again organizing a CAST workshop on developing secure software („Sichere Software entwickeln“). The workshop takes place on Thursday, November 10, 2016 at Fraunhofer SIT in Darmstadt. On the eve of the workshop we invite you to a get-together. The program and all further information about the workshop can be found here: https://www.cast-forum.de/workshops/infos/227.

P.S. We now also have a flyer to print and hand out.

Classifying Vehicles

Security is a classification problem: Security mechanisms, or combinations of mechanisms, need to distinguish that which they should allow to happen from that which they should deny. Two aspects complicate this task. First, security mechanisms often only solve a proxy problem. Authentication mechanisms, for example, usually distinguish some form of token – passwords, keys, sensor input, etc. – rather than the actual actors. Second, adversaries attempt to shape their appearance to pass security mechanisms. To be effective, a security mechanism needs to cover these adaptations, at least the feasible ones.

An everyday problem illustrates this: closing roads for some vehicles but not for others. As a universal but costly solution one might install retractable bollards, issue the means to operate them to the drivers of permitted vehicles, and prosecute abuse. This approach is very precise, because classification rests on an artificial feature designed solely for security purposes.

Simpler mechanisms can work sufficiently well if (a) intrinsic features of vehicles are correlated with the desired classification well enough, and (b) modification of these features is subject to constraints so that evading the classifier is infeasible within the adversary model.

Bus traps and sump busters classify vehicles by size, letting lorries and buses pass while stopping common passenger cars. The real intention is to classify vehicles by purpose and operator, but physical dimensions happen to constitute a sufficiently good approximation. Vehicle size correlates with purpose. The distribution of sizes is skewed; there are many more passenger cars than buses, so keeping even just most of them out does a lot. Vehicle dimensions do not change on the fly, and are interdependent with other features and requirements. Although a straightforward way exists to defeat a bus trap – get a car that can pass – this is too expensive for most potential adversaries and their possible gain from the attack.
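To make the proxy-feature idea concrete, here is a minimal C sketch in the spirit of a bus trap; the vehicle attributes and threshold values are hypothetical, chosen purely for illustration rather than taken from real traffic engineering.

/* Minimal sketch of classification by proxy features, analogous to a bus trap:
 * vehicles are admitted or stopped based on physical dimensions alone, which
 * stand in for the purpose and operator we actually care about.
 * All attribute and threshold values below are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct vehicle {
    const char *name;
    double track_width_m;       /* distance between left and right wheels */
    double ground_clearance_m;  /* height of the underbody above the road */
};

/* A bus trap lets vehicles pass whose wheels straddle the pit and whose
 * underbody clears it; anything narrower or lower is stopped. */
static bool passes_bus_trap(const struct vehicle *v)
{
    const double min_track_width = 2.0;       /* hypothetical threshold */
    const double min_ground_clearance = 0.3;  /* hypothetical threshold */
    return v->track_width_m >= min_track_width
        && v->ground_clearance_m >= min_ground_clearance;
}

int main(void)
{
    const struct vehicle samples[] = {
        { "city bus",      2.3, 0.35 },
        { "passenger car", 1.6, 0.15 },
        { "large SUV",     1.7, 0.25 },  /* borderline: proxy features can misclassify */
    };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%-14s %s\n", samples[i].name,
               passes_bus_trap(&samples[i]) ? "passes" : "is stopped");
    return 0;
}

The borderline SUV in the sample data illustrates the tradeoff: the intrinsic feature correlates with the desired class, but not perfectly.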

The Key-Under-the-Doormat Analogy Has a Flaw

The crypto wars are back, and with them the analogy of putting keys under the doormat:

… you can’t build a backdoor into our digital devices that only good guys can use. Just like you can’t put a key under a doormat that only the FBI will ever find.

(Rainey Reitman: An Open Letter to President Obama: This is About Math, Not Politics)

This is only truthy. The problem of distinguishing desirable from undesirable interactions to permit the former and deny the latter lies indeed at the heart of any security problem. I have been arguing for years that security is a classification problem; any key management challenge reminds us of it. I have no doubt that designing a crypto backdoor only law enforcement can use only for legitimate purposes, or any sufficiently close approximation, is a problem we remain far from solving for the foreseeable future.

However, the key-under-the-doormat analogy misrepresents the consequences of not putting keys under the doormat, or at least does not properly explain them. Unlike (idealized) crypto, our houses and apartments are not particularly secure to begin with. Even without finding a key under the doormat, SWAT teams and burglars alike can enter with moderate effort. This allows legitimate law enforcement to take place at the cost of a burglary (etc.) risk.

Cryptography can be different. Although real-world implementations often have just as many weaknesses as the physical security of our homes, cryptography can create situations where only a backdoor would allow access to plaintext. If all we have is a properly encrypted blob, there is little hope of finding out anything about its plaintext. This does not imply we must have provisions to avoid that situation no matter what the downsides are, but it does contain a valid problem statement: How should we regulate technology that has the potential to reliably deny law enforcement access to certain data?

The answer will probably remain the same, but acknowledging the problem makes it more powerful. The idea that crypto is beyond negotiation is fundamentalist and therefore wrong. Crypto must be negotiated about, and all objective evidence speaks in favor of strong crypto.

Apple, the FBI, and the Omnipotence Paradox

“Can God create a rock so heavy He could not lift it?” This is one version of the omnipotence paradox. To make a long story short, omnipotence as a concept leads to similar logical problems as the naïve set-of-sets and sets-containing-themselves constructions in Russell’s paradox. Some use this paradox to question religion; others use it to question logic; and pondering such questions generally seems to belong to the realm of philosophy. But the ongoing new round of (civil) crypto wars is bringing a transformed version of this paradox into everyone’s pocket.

Can Apple create an encryption mechanism so strong that even Apple cannot break it? Apple claims so, at least for the particular situation, in their defense against the FBI’s request for help with unlocking a dead terrorist’s iPhone: “As a result of these stronger protections that require data encryption, we are no longer able to use the data extraction process on an iPhone running iOS 8 or later.” Although some residual risk of unknown vulnerabilities remains, this claim seems believable as far as it concerns retroactive circumvention of security defenses. Just as a locksmith can make a lock that will be as hard to break for its maker as for any other locksmith, a gadgetsmith can make gadgets without known backdoors or weaknesses that this gadgetsmith might exploit. This is challenging, but possible.

However, the security of any encryption mechanism hinges on the integrity of key components, such as the encryption algorithm, its implementation, auxiliary functions like key generation and their implementation, and the execution environment of these functions. The maker of a gadget can always weaken it for future access.

Should Apple be allowed to make and sell devices with security mechanisms so strong that neither Apple nor anyone else can break or circumvent them in the course of legitimate investigations? This is the real question here, and a democratic state based on justice and integrity has established institutions and procedures to come to a decision and enforce it. As long as Apple does not rise above states and governments, they will have to comply with laws and regulations if they are not to become the VW of Silicon Valley.

Thus far we do not understand very well how to design systems that allow legitimate law enforcement access while also keeping data secure against illegitimate access and abuse or excessive use of legitimate means. Perhaps in the end we will have to conclude that too much security would have to be sacrificed for guaranteed law enforcement access, as security experts warn almost in unison, or that a smartphone is too personal a mind extension for anyone to access it without its user’s permission. But this debate we must have: What should the FBI be allowed to access, what would be the design implications of guaranteed access requirements, and which side effects would we need to consider?

For all we know, security experts have a point when they warn against weakening what already breaks more often than not. To expect that companies could stand above the law because security, however, is just silly.

PS, remember Clarke’s first law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

PPS: Last Week Tonight with John Oliver: Encryption

Eat Less Bread?

“Eat less bread” requests a British poster from WWI. We all know it makes sense, don’t we? Resources become scarce at wartime, so wasting them weakens one’s own position. Yet this kind of advice can be utterly useless: tell a hungry person to eat less bread and you will earn, at best, a blank stare. However reasonable your advice may seem to you and everyone else, a hungry person will be physically and mentally unable to comply.

“Do not call system()” or “Do not read uninitialized memory” request secure coding guides. Such advice is equally useless if directed at a person who lacks the cognitive ability to comply. Cognitive limitations do not mean a person is stupid. We all are limited in our respective ability to process information, and we are more similar to than dissimilar from each other in this regard.

Secure coding guidelines all too often dictate a large set of arbitrary dos and don’ts, but fail to take human factors into account. Do X! Don’t do Y, do Z instead! Each of these recommendations has a sound technical basis; code becomes more secure if everyone follows this advice. However, only some of these recommendations are realistic for programmers to follow. Their sheer number alone should raise doubts and lead us to expect that only a subset will ever be adopted by a substantial number of programmers.

Some rules are better suited for adoption than others. Programmers often acquire idioms and conventions they perceive as helpful. Using additional parentheses for clarity, for example, even though not strictly necessary, improves readability; and the const == var convention prevents certain defects that are easy to introduce and sometimes hard to debug.
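As a short illustration of why the const == var convention earns its keep, consider the following C sketch; the variable and values are arbitrary examples, not taken from any particular codebase.

#include <stdio.h>

int main(void)
{
    int x = 1;

    /* Typo: assignment instead of comparison. This compiles (possibly with a
     * compiler warning), silently overwrites x, and the branch never runs. */
    if (x = 0)
        puts("never reached");

    /* The same typo under the const == var convention would read `0 = x`,
     * which does not compile, so the defect is caught at build time. */
    if (0 == x)
        puts("x is zero");

    return 0;
}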

Other rules seem, from a programmer’s point of view, just ridiculous. Why is there a system() function in the first place if programmers are not supposed to use it? And if developers should not read uninitialized memory, what would warn them about memory not being initialized? Such advice is inexpensive – and likely ineffective. If we want programmers to write secure code, we must offer them platforms that make secure programming easy and straightforward and insecure programming hard and difficult.
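To sketch what “make insecure programming hard” could mean at the API level, compare handing a command string to a shell with passing an argument vector; the filename below is a hypothetical stand-in for untrusted input, and the example assumes a POSIX system.

/* With system(), untrusted input reaches /bin/sh, where metacharacters such
 * as ';' become commands of their own. With an exec-style argument vector,
 * the same input stays inert data. The filename is hypothetical attacker input. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    const char *filename = "report.txt; rm -rf ~";  /* hypothetical untrusted input */

    /* Dangerous pattern (left commented out): the ';' starts a second command.
     *
     *   char cmd[256];
     *   snprintf(cmd, sizeof cmd, "ls -l %s", filename);
     *   system(cmd);
     */

    /* Safer pattern: the filename is a single argv entry and never sees a shell. */
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", filename, (char *)NULL);
        perror("execlp");
        _exit(127);
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);  /* ls merely reports that no such file exists */
    } else {
        perror("fork");
    }
    return 0;
}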

Security and protection systems guard persons and property against a broad range of hazards, including crime; fire and attendant risks, such as explosion; accidents; disasters; espionage; sabotage; subversion; civil disturbances; bombings (both actual and threatened); and, in some systems, attack by external enemies. Most security and protection systems emphasize certain hazards more than others. In a retail store, for example, the principal security concerns are shoplifting and employee dishonesty (e.g., pilferage, embezzlement, and fraud). A typical set of categories to be protected includes the personal safety of people in the organization, such as employees, customers, or residents; tangible property, such as the plant, equipment, finished products, cash, and securities; and intangible property, such as highly classified national security information or “proprietary” information (e.g., trade secrets) of private organizations. An important distinction between a security and protection system and public services such as police and fire departments is that the former employs means that emphasize passive and preventive measures.

(Encyclopædia Britannica)

Public Lobbying for Electronic Voting

A vendor of digital voting technology enlightens us about how insecure conventional elections supposedly are and what advantages machine-based handling of future elections would have. Us? Well, more precisely, the readers of European View, the journal of the Centre for European Studies. That is the think tank of the European People’s Party, the European-level CDU/CSU.

Just read the article and mark every passage that makes you pause. Done? Then you will find the model solution here and there.

Consumer Protection for #Neuland

Once again a program has attracted attention by vandalizing the environment it is installed into. Kristian Köhntopp sums up the problem vividly, and the comments under his post illustrate that this is not an isolated case. Technically, the problem can in principle be solved by having a trustworthy provider operate a closed platform that prevents such programs, or detects and excludes them. That, of course, rests on a number of assumptions that are not necessarily met.

At its core, however, this is an economic problem crying out for an economic solution: “Moral hazard occurs under a type of information asymmetry where the risk-taking party to a transaction knows more about its intentions than the party paying the consequences of the risk. More broadly, moral hazard occurs when the party with more information about its actions or intentions has a tendency or incentive to behave inappropriately from the perspective of the party with less information.” — (Wikipedia: Moral Hazard)

Product liability does not necessarily solve the problem; it merely leads to corporate structures designed to minimize risk. Each product gets its own disposable company without any assets worth mentioning, which can be sacrificed cheaply in a crisis. This model has long been tried and tested in the security business as well (case study: DigiNotar). One would have to force companies to build reserves and pay into a liability fund or something similar. That can be done, but it is a better fit for nuclear power plants.

Transparency created by force suggests itself as a way forward. In #Altland we have Stiftung Warentest for this, but it already has its hands full with comparative tests of sunscreen, bicycle helmets, and cordless screwdrivers. In #Neuland people used to believe the problem could be solved with positive certifications attesting defined security properties to an individual product. That works well only in niches. More recently, all sorts of bug bounties and initiatives like Project Zero have joined in.

If I frankenstein these approaches together, I arrive at an independent and solidly funded European IT Security Foundation that takes a closer look at relevant software and publishes its findings. Its work would cover consumer and mass-market products, which it would examine for security flaws and surprising functions. To make this work, concealing functions would perhaps also have to be made a criminal offense. One would also have to figure out how the testers get unhindered access to SaaS. Those, however, are matters of detail; first we have to decide, as a matter of principle, what digital consumer protection beyond “be careful with the Internet” should look like.

(Slightly revised version of a post on Google+)

Security Must Come Second, or It Gets in the Way

For two decades, Germany has been dreaming, without success, of promoting electronic government, state-affiliated systems such as health telematics, and the Internet in general by having generic security services officially developed, standardized, and regulated. Visible signs of this are the Signature Act (Signaturgesetz), the electronic health card, DE-Mail, and the eID function of the national identity card.

None of these cases has worked well. The electronic signature is so rarely used that a procedure based on it for electronic proof of income (ELENA) had to be discontinued. After more than a decade of development – gematik was founded longer ago than the first groundbreaking for BER – the electronic health card can do little more than its predecessor. De-Mail, unlike its predecessor the eID, has not even reached the stage of idea competitions yet, but it suffers from comparable acceptance problems.

What these cases have in common is the attempt to create a generic security technology for a multitude of yet unknown applications, on the assumption that this security technology is exactly what those applications are missing. Both parts of this are wrong. Whoever starts with the security technology breaks the design process and gets design around security instead of security by design, and applications must work first before one begins to optimize their security. Continue reading Security Must Come Second, or It Gets in the Way