What the Cypherpunks Got Wrong

Cypherpunk ideas have a long legacy and continue to influence how we discuss matters of security and privacy, particularly the relationship between citizens and governments. In a nutshell, cypherpunks posit that we can and should keep government intervention minimal by forcibly protecting our privacy through encryption.

Let us begin with what they got right:

“For privacy to be widespread it must be part of a social contract.”

 (Eric Hughes: A Cypherpunk’s Manifesto)

Social contracts are the basis of every society; they define a society and are represented in its formal and informal norms. Privacy is indeed a matter of social contract, as it concerns the fundamental questions of what the members of a society should know about each other, how they should learn it, and how they may or may not use what they know about others.

Privacy is not merely a matter of hiding information so that it cannot be found. The absence of expected features or utterances carries information, too. Some societies, for example, expect their members to actively demonstrate their allegiance. Members of such a society cannot merely hide what they think; they also have to perform their role as expected if they do not want to become outcasts.

What the cypherpunks got entirely wrong was their conception of social contracts and the hope that cryptography could be the foundation upon which they, the cypherpunks, would build their own. Cypherpunks believe that cryptography allows them to define their own social contract on top of, or next to, the existing ones. This has not worked and it cannot work. On the one hand, this is simply not how social contracts work: they are a dimension of a society’s culture that evolves, for better or worse, with that society.

On the other hand, cryptography (or security technology in general) does not change power relationships as much as cypherpunks hope it would. Governments are by definition institutions of power: “Government is the means by which state policy is enforced.” Cypherpunks believe that cryptography and other means of keeping data private would limit the power of their governments and place it in the cypherpunks’ own hands. However, the most fundamental power that any working government has is the power to detain members of the society it governs.

In an echo of cypherpunk thinking, some people react to the increased interest of the U.S. Customs and Border Protection (CBP) in travelers’ mobile devices by suggesting that one leave those devices at home while traveling. After all, so the reasoning goes, the CBP cannot force you to surrender what you do not have on you. This thinking has several flaws, however.

First, from a security point of view, leaving your phone at home means leaving it just as unattended as it would be in the hands of a CBP agent. If the government really wants your data, nothing better could happen to it than gaining access to your phone while you are not even in the country.

Second, the government is interested in phones for a reason. Cryptography and other security mechanisms do not solve security problems; they only transform them. Cryptography in particular transforms the problem of securing data into a problem of securing keys. The use of technology has evolved in many societies to the point where our phones have become our keys to almost everything we do and own online; they have become our primary window into the cloud. This makes phones and the data on them valuable in every respect, for those trying to exert power as well as for ourselves. You lose this value if you refrain from using your phone. Although it seems easy to just leave your phone at home, the hidden cost of doing so is hefty. Advice to leave your phone behind is therefore not very practical.
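A minimal sketch in Python (using the third-party cryptography package; the sample data is invented) illustrates this transformation: once data is encrypted, the only secret left worth protecting is the short key.

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()   # the new secret: a few dozen bytes instead of gigabytes
    box = Fernet(key)

    ciphertext = box.encrypt(b"years of mail, photos, and chat logs")
    # The ciphertext can cross any border or sit with any cloud provider;
    # without the key it reveals essentially nothing about the plaintext.
    print(Fernet(key).decrypt(ciphertext))  # whoever holds the key holds the data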

Third, this is not about you (or if it is, see the first point above). Everyone uses mobile phones and cloud services because of their tremendous value. Any government interested in private information will adapt its collection approach to the ways people usually behave. You can indeed sometimes gain an advantage by being different, by not behaving as everyone else would. This can work for you, particularly if the government’s interest in your affairs is only general and it spends only average effort on you. However, the same strategy cannot work for everyone, because not everyone can be different. If everyone left their phones at home, your government would find a different way of collecting information.

By ignoring a bit of context, cypherpunks manage to arrive at wrong conclusions from right axioms:

“We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence.”

“We must defend our own privacy if we expect to have any.”

(Eric Hughes: A Cypherpunk’s Manifesto)

This is true, but incomplete. Power must be contained at its source (and containment failures are a real possibility). Cryptography and other security technologies do not do that. Cryptography can perhaps help you evade power under certain circumstances, but it will by no means reverse power relationships. What you really need is a social contract that guarantees your freedom and dignity.

 

(Expanded version of a G+ comment)

 

Encryption Will Not Give You Free Speech

“Freedom of speech is the right to articulate one’s opinions and ideas without fear of government retaliation or censorship, or societal sanction.”

(Wikipedia)

Reports of a vulnerability in WhatsApp are making the rounds today after The Guardian boosted the signal. Besides the fact that there is not really a backdoor, but rather a feature that represents a reasonable choice in a tradeoff between confidentiality and availability, the Guardian also repeats a common mistake: confounding encryption and free speech.

“Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech,’” writes The Guardian. This is bullshit. As per the definition cited above, free speech means you can say things without fear. Being able to say things only in private and needing strong technical privacy guarantees is the opposite of free speech. You need encryption for that which you cannot say without fear.

Yes, encryption can be a tool against those who suppress you (though a weak one, as your adversary can easily hold your use of encryption against you, or deny you due process altogether and persecute you without any evidence or probable cause). But encryption will never give you free speech; it will only support your inner emigration.

Re: Open Letter on DNA Analysis in Forensics

Warnings of looming dangers always sell, since predicted problems can never be ruled out without taking a risk and trying something. One can thus speculate indefinitely about everything that might happen if one did what, because of the risks, one had better leave alone. The latest subject of such “critical” reflections is the demand to expand the range of DNA analyses permitted in police work. Accordingly, social scientists have written an open letter on DNA analysis in forensics (Offener Brief zu DNA-Analysen in der Forensik) that urges caution and offers its authors as indispensable experts. The gist: extended DNA analyses are far too complicated for ordinary police officers to be allowed to work with their results unsupervised. In the end, little remains beyond the conclusion that errors could occur. That, however, is a banal statement: errors are part of everyday police work, and the system of legislation, police, and judiciary copes with them quite well. Of course one has to consider the consequences of new methods, but there is no reason to panic. Our constitutional state errs rather reliably in favor of suspects, and forensic scientists themselves know quite well where the limits of the various analysis methods lie. We can attribute incalculable risks to any technology; that just helps no one.

 

An In-Depth Study of More Than Ten Years of Java Exploitation

My colleagues Philipp Holzinger, Stefan Triller, Alexandre Bartel, and Eric Bodden had a closer look at Java and the vulnerabilities discovered in the Java runtime environment during the last decade. They started from known exploits, identified the vulnerabilities exploited, and analyzed and grouped their root causes. Philipp’s presentation of the results at CCS’16 has been recorded and published on YouTube:

(YouTube)

The paper is also available online:

P. Holzinger, S. Triller, A. Bartel, E. Bodden: An In-Depth Study of More Than Ten Years of Java Exploitation. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS’16), Vienna, Austria, Oct. 24-28, 2016. DOI: 10.1145/2976749.2978361. Artifacts: ccs2016-artifacts-v01.zip

 

CAST Workshop „Sichere Software entwickeln“ on November 10

This year we are again organizing a CAST workshop on secure software development („Sichere Software entwickeln“). The workshop takes place on Thursday, November 10, 2016, at Fraunhofer SIT in Darmstadt. On the evening before, we invite you to a get-together. The program and all further information about the workshop can be found here: https://www.cast-forum.de/workshops/infos/227.

P.S. We now also have a flyer for printing and distribution.

A Manipulative Question

“Do you fear that there will be terrorist attacks in Germany in the near future, or do you not fear this?” This question (via) is impossible to answer cleanly, because it is really two questions:

  1. Do you expect that there will be terrorist attacks in Germany in the near future?
  2. Are you afraid of this?

I expect that there will be terrorist attacks in Germany in the near future, just as there have been time and again since terrorism was invented. The most recent one I heard of happened just two days ago.

Nevertheless, there is little to fear. We live in a functioning state that keeps threats to life and limb low. To die a violent death, you have to be quite unlucky.

The wording makes it all too easy to collect sober answers to the first question and later reinterpret them as alarmed answers to the second. But then, even the experts at infratest dimap are allowed a mistake now and then, aren’t they?

Learning Machine

Four years ago I wrote Datenkrake Google because I considered the common notion of Google as one big database misleading. In reality, so my thesis, machine learning is the core of Google. By now there is little reason left to doubt this. Google caused a stir with AlphaGo, an AI that beats human Go masters. With TensorFlow, Google provides an AI library as open source. Two weeks ago it became known that Google has even developed special hardware for deep-learning applications: tensor processing units, on which AlphaGo ran its computations. Fittingly, Nervana, a startup that has likewise developed hardware architectures optimized for machine learning, has just been acquired by Intel.

Things can continue at this pace for a while yet. Are our debates keeping up with the development?

Classifying Vehicles

Security is a classification problem: Security mechanisms, or combinations of mechanisms, need to distinguish that which they should allow to happen from that which they should deny. Two aspects complicate this task. First, security mechanisms often only solve a proxy problem. Authentication mechanisms, for example, usually distinguish some form of token – passwords, keys, sensor input, etc. – rather than the actual actors. Second, adversaries attempt to shape their appearance to pass security mechanisms. To be effective, a security mechanism needs to cover these adaptations, at least the feasible ones.
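As a small illustration of the proxy problem, consider a sketch of password authentication in Python (names and values are made up): the check classifies requests by knowledge of a token, not by who is actually asking.

    import hashlib, hmac

    def allow(presented_password: str, stored_hash: bytes, salt: bytes) -> bool:
        # The classifier's feature is the token, a proxy for the actor:
        # anyone presenting the right password is classified as legitimate.
        candidate = hashlib.pbkdf2_hmac("sha256", presented_password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_hash)

    salt = b"example-salt"
    stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
    print(allow("correct horse", stored, salt))  # True, regardless of who typed it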

An everyday problem illustrates this: closing roads for some vehicles but not for others. As a universal but costly solution, one might install retractable bollards, issue the drivers of permitted vehicles the means to operate them, and prosecute abuse. This approach is very precise, because classification rests on an artificial feature designed solely for security purposes.

Simpler mechanisms can work sufficiently well if (a) intrinsic features of vehicles are correlated with the desired classification well enough, and (b) modification of these features is subject to constraints so that evading the classifier is infeasible within the adversary model.

Bus traps and sump busters classify vehicles by size, letting lorries and buses pass while stopping common passenger cars. The real intention is to classify vehicles by purpose and operator, but physical dimensions happen to constitute a sufficiently good approximation. Vehicle size correlates with purpose. The distribution of sizes is skewed; there are many more passenger cars than buses, so keeping even just most of them out does a lot. Vehicle dimensions do not change on the fly, and are interdependent with other features and requirements. Although a straightforward way to defeat a bus trap exists (get a car that can pass), doing so is too expensive for most potential adversaries relative to their possible gain from the attack.
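Expressed as a classifier, and with made-up threshold values, a bus trap roughly implements the following rule; note that it never sees the feature it actually cares about, the vehicle’s purpose.

    # Hypothetical thresholds; a real bus trap "measures" only wheel track and
    # ground clearance, intrinsic features that correlate with vehicle purpose.
    def lets_pass(track_width_m: float, ground_clearance_m: float) -> bool:
        WIDE_TRACK = 1.9       # buses and lorries straddle the pit
        HIGH_CLEARANCE = 0.4   # ...or clear it entirely
        return track_width_m >= WIDE_TRACK or ground_clearance_m >= HIGH_CLEARANCE

    print(lets_pass(2.1, 0.30))   # bus           -> True  (allowed through)
    print(lets_pass(1.5, 0.15))   # passenger car -> False (stopped)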

Unexpected Moves

When AlphaGo played and won against Lee Sedol, it made innovative moves that were not only unexpected by human experts but also not easily understandable for humans. Apparently this shocked and scared some folks.

However, AI coming up with different concepts than humans is nothing new. Consider this article recounting the story of Eurisko, a genetic programming experiment in the late 1970s. This experiment, too, aimed at competing in a tournament; the game played, Traveller TCS, was apparently about designing fleets of ships and letting them fight against each other. Even this early, simple, and small-scale AI thing surprised human observers:

“To the humans in the tournament, the program’s solution to Traveller must have seemed bizarre. Most of the contestants squandered their trillion-credit budgets on fancy weaponry, designing agile fleets of about twenty lightly armored ships, each armed with one enormous gun and numerous beam weapons.”

(G. Johnson: Eurisko, The Computer With A Mind Of Its Own)

Keep in mind that there was nothing scary about the algorithm; it was really just simulated evolution in a rather small design space, and the computer needed some help from its programmers to succeed.

The Eurisko “AI” even rediscovered the concept of outnumbering the enemy rather than overpowering it, a concept humans might associate with Lanchester’s models of combat attrition (see the small simulation after the quotation below):

“Eurisko, however, had judged that defense was more important than offense, that many cheap, invulnerable ships would outlast fleets consisting of a few high-priced, sophisticated vessels. (…) In any single exchange of gunfire, Eurisko would lose more ships than it destroyed, but it had plenty to spare.”

(G. Johnson: Eurisko, The Computer With A Mind Of Its Own)
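The effect Eurisko exploited can be reproduced with a toy Lanchester-style attrition model; all numbers below are invented for illustration.

    def battle(a_ships, a_fire, b_ships, b_fire, dt=0.01):
        """Aimed-fire attrition: each side's loss rate is proportional to the
        opposing fleet's size times its per-ship firepower (Lanchester's square law)."""
        a, b = float(a_ships), float(b_ships)
        while a > 0 and b > 0:
            a, b = a - b_fire * b * dt, b - a_fire * a * dt
        return ("A", round(a)) if a > 0 else ("B", round(b))

    # 96 cheap ships with firepower 1 vs. 20 expensive ships with firepower 10:
    # square-law strength 96**2 * 1 = 9216 beats 20**2 * 10 = 4000, so the
    # numerous, cheap fleet wins with dozens of ships to spare.
    print(battle(96, 1.0, 20, 10.0))   # -> ('A', ~72)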

Although Eurisko’s approach seemed “un-human”, it really was not. Eurisko only ignored all human biases and intuition, making decisions strictly by cold, hard data. This is a common theme in data mining, machine learning, and AI applications. Recommender systems, for example, create and use concepts unlike those a human would apply to the same situation; an article in IEEE Spectrum a couple of years ago (J. A. Konstan, J. Riedl: Deconstructing Recommender Systems) outlined a food recommender example and pointed out that concepts like “salty” would not appear in their models.

Transparency and auditability are surely problems if such technology is being used in critical applications. Whether we should be scared beyond this particular problem remains an open question.

 

(This is a slightly revised version of my G+ post, https://plus.google.com/+SvenT%C3%BCrpe/posts/5QE9KeFKKch)

The Key-Under-the-Doormat Analogy Has a Flaw

The crypto wars are back, and with them the analogy of putting keys under the doormat:

… you can’t build a backdoor into our digital devices that only good guys can use. Just like you can’t put a key under a doormat that only the FBI will ever find.

(Rainey Reitman: An Open Letter to President Obama: This is About Math, Not Politics)

This is only truthy. The problem of distinguishing desirable from undesirable interactions, permitting the former and denying the latter, lies indeed at the heart of any security problem. I have been arguing for years that security is a classification problem; any key management challenge reminds us of it. I have no doubt that designing a crypto backdoor that only law enforcement can use, and only for legitimate purposes, or any sufficiently close approximation, is a problem we will remain far from solving for the foreseeable future.

However, the key-under-the-doormat analogy misrepresents the consequences of not putting keys under the doormat, or at least does not properly explain them. Unlike (idealized) crypto, our houses and apartments are not particularly secure to begin with. Even without finding a key under the doormat, SWAT teams and burglars alike can enter with moderate effort. This allows legitimate law enforcement to take place, at the cost of a burglary (and similar) risk.

Cryptography can be different. Although real-world implementations often have just as many weaknesses as the physical security of our homes, cryptography can create situations where only a backdoor would allow access to plaintext. If all we have is a properly encrypted blob, there is little hope of finding out anything about its plaintext. This does not imply we must have provisions to avoid that situation no matter what the downsides are, but it does contain a valid problem statement: How should we regulate technology that has the potential to reliably deny law enforcement access to certain data?

The answer will probably remain the same, but acknowledging the problem makes it more powerful. The idea that crypto is not open to negotiation at all is fundamentalist and therefore wrong. Crypto must be negotiated, and all objective evidence speaks in favor of strong crypto.

Apple, the FBI, and the Omnipotence Paradox

“Can God create a rock so heavy He could not lift it?” This is one version of the omnipotence paradox. To make a long story short, omnipotence as a concept leads to logical problems similar to the naïve set-of-sets and sets-containing-themselves constructions in Russell’s paradox. Some use this paradox to question religion; others use it to question logic; and pondering such questions generally seems to belong to the realm of philosophy. But the ongoing new round of (civil) crypto wars is bringing a transformed version of this paradox into everyone’s pocket.

Can Apple create an encryption mechanism so strong that even Apple cannot break it? Apple claims so, at least for the particular situation, in their defense against the FBI’s request for help with unlocking a dead terrorist’s iPhone: “As a result of these stronger protections that require data encryption, we are no longer able to use the data extraction process on an iPhone running iOS 8 or later.” Although some residual risk of unknown vulnerabilities remains, this claim seems believable as far as it concerns retroactive circumvention of security defenses. Just as a locksmith can make a lock that will be as hard to break for its maker as for any other locksmith, a gadgetsmith can make gadgets without known backdoors or weaknesses that this gadgetsmith might exploit. This is challenging, but possible.

However, the security of any encryption mechanism hinges on the integrity of key components, such as the encryption algorithm, its implementation, auxiliary functions like key generation and their implementation, and the execution environment of these functions. The maker of a gadget can always weaken it for future access.
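A hypothetical Python sketch illustrates the point about auxiliary functions; the function names and the serial-number scheme are invented here, and nothing about the cipher itself needs to change for the weakness to take effect.

    import hashlib, secrets

    def generate_key_honest() -> bytes:
        # 128 bits from the OS random source: infeasible to brute-force.
        return secrets.token_bytes(16)

    def generate_key_weakened(serial_no: int) -> bytes:
        # Looks like a 128-bit key but is derived from a device serial number
        # with only a few million possible values; whoever knows the derivation
        # can enumerate every candidate key offline.
        return hashlib.sha256(serial_no.to_bytes(4, "big")).digest()[:16]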

Should Apple be allowed to make and sell devices with security mechanisms so strong that neither Apple nor anyone else can break or circumvent them in the course of legitimate investigations? This is the real question here, and a democratic state based on justice and integrity has established institutions and procedures to come to a decision and enforce it. As long as Apple does not rise above states and governments, they will have to comply with laws and regulations if they are not to become the VW of Silicon Valley.

Thus far we do not understand very well how to design systems that allow legitimate law enforcement access while also keeping data secure against illegitimate access and against abuse or excessive use of legitimate means. Perhaps in the end we will have to conclude that too much security would have to be sacrificed for guaranteed law enforcement access, as security experts warn almost in unison, or that a smartphone is too personal a mind extension for anyone to access it without its user’s permission. But this debate we must have: What should the FBI be allowed to access, what would be the design implications of guaranteed access requirements, and which side effects would we need to consider?

For all we know, security experts have a point in warning against weakening what already breaks more often than not. To expect that companies could stand above the law in the name of security, however, is just silly.

PS, remember Clarke’s first law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

PPS: Last Week Tonight with John Oliver: Encryption