Some in the security community think of the Maginot Line as a failure in defense because German troops simply went around it. This kind of arrogance is, unfortunately, all too common, especially among security bullies. The following video argues that the Maginot Line was in fact a success, because it forced the German troops to go around it:
A spectre is haunting the Internet – the spectre of blockchain. Some claim it to be a disruptive technology, others think it is bullshit. In favor of the bullshit point of view, the blockchain narrative lacks convincing use cases and looks like a solution in search of problems. On the other hand, a number of reputable companies seem to believe that this blockchain thing does somehow matter. Are we witnessing a mass delusion or is there more to it?
In a recent post, Peak Blockchain, I argued that the blockchain craze has a lot in common with the way Second Life, an online 3D toy world, was being hyped and then forgotten a little more than a decade ago. Perhaps there is more to this analogy than just the way attention rises and then drops in both cases, blockchain and Second Life. A real but boring trend may lurk behind the exciting (at least to some) surface and its catchy name.
Second Life reached its peak of attention during the first half of 2007. Conceptually it never made sense: Except for certain types of computer games and some special-purpose applications, 3D worlds rarely make for usable user interfaces. Just imagine having to browse the shelves of a virtual 3D library instead of just googling, or having to use an on-screen replica of a good old typewriter to write a letter instead of using a word processor – a user interface should support the relevant tasks rather than needlessly replicate constraints of the physical world.
Despite the obvious flaws of Second Life as a tool and platform for anything, many well-known companies fell for the hype and built experimental presences in this virtual world. At least one of them, IBM, went even further and attempted to turn Second Life into a collaboration support business. Was everyone crazy?
Not entirely. The 3D toy environment of Second Life was merely the bullshit version of a real trend. Around 2007 the web had evolved from a distributed hypertext system into an interactive application platform (“Web 2.0”). Blogs had appeared on the scene (e.g., WordPress – 2003, wordpress.com – 2005). Cloud computing was on the rise, although nobody called it that yet (e.g., Google Docs/Sheets/Slides – 2006, Dropbox – 2007). Social networks and media were evolving (Myspace – 2003, Facebook – 2004, Twitter – 2006). While Second Life itself did not make much sense, it symbolized in its over-the-top manner what was going on, serving as a proxy instance and giving the trend a name.
Looking at the recent blockchain mania from this angle, today’s trend-behind-the-craze might be the automated interaction of networked artifacts in emergent systems of systems. The evolution of the Internet has not stopped after turning all our computers, tablets, phones, and watches into cloudtop devices, user interfaces to (collections of) services that reside on the network.
Today we are moving toward an Internet of Things (IoT) where thing means anything that is a computer without looking like one. Networked computers come in a variety of shapes today. At home we find entertainment devices streaming audio and video into our living rooms, voice assistants that let us talk to the Internet, and home automation letting us control anything from lights to heating through our smartphones. Then there are connected and, increasingly, autonomous cars as well as charging infrastructure for electric vehicles. Manufacturing equipment from machines and robots to entire factories continues to be automated and networked, as does agriculture.
Technically these cyber-physical systems are networked computers as we know them. However, they consist of increasingly autonomous entities forming emergent systems of (semi-)autonomous systems. On the one hand, autonomy is the key feature in cases like autonomous driving or autonomous robots. On the other hand, many IoT devices lack traditional user interfaces and their ability to interact with users in other ways than over the Internet remains rather limited.
As a consequence, IoT devices need the capability to interact with cloud services and with each other without requiring human intervention and perhaps also to make transactions autonomously. For example, one blockchain application that has been mentioned is charging of autonomous electric cars – a car buying electricity from a charger. This interaction must be secure, whatever this may mean in the particular case, and there is no such thing as a PGP key signing party for IoT devices.
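To make the charging example concrete, here is a deliberately minimal sketch of an authenticated machine-to-machine request. All names are hypothetical, and the pre-shared key is the crucial assumption: the code simply presumes both devices already hold it, which is precisely the provisioning problem that a "key signing party for IoT devices" would otherwise solve.

```python
import hashlib
import hmac
import json

# Assumption: car and charger share a key provisioned earlier, e.g. at
# manufacturing time. Distributing this key securely is the hard part.
SHARED_KEY = b"provisioned-at-manufacturing-time"

def sign_request(key: bytes, payload: dict) -> dict:
    """Car side: attach an HMAC-SHA256 tag so the charger can verify origin."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_request(key: bytes, message: dict) -> bool:
    """Charger side: recompute the tag and compare in constant time."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

# The car orders 20 kWh without any human in the loop.
request = sign_request(SHARED_KEY, {"device": "car-42", "kwh": 20})
assert verify_request(SHARED_KEY, request)

# A tampered request fails verification.
forged = {"payload": {"device": "car-42", "kwh": 2000}, "tag": request["tag"]}
assert not verify_request(SHARED_KEY, forged)
```

Note what the sketch does not cover: replay protection, payment, and above all how `SHARED_KEY` got onto both devices, which is where "secure, whatever this may mean in the particular case" becomes difficult.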
Blockchains and cryptocurrencies are unlikely to solve any of the pertinent problems, but the problems are real: transactions between devices, managing access to continuously produced machine data, keeping manufacturing instructions under control while getting them where they are needed, and so on. I suspect this is the grain of truth in blockchain mania. At some point everyone will drop the burnt term but continue to work on the actual problems and actual solutions.
This is Peak Blockchain. You can watch it in the rear-view mirror as it lies behind us, shrinking smaller and smaller until it disappears on the horizon. I am using Google Trends as my rear-view mirror, which allows for an easy and superficial yet striking visual argument.
Peak Blockchain took place in December 2017. Up to that time, starting ca. 2013, search interest grew seemingly exponentially, whereas it almost halved in the three months from Christmas 2017 to Easter 2018.
Inflated expectations, or rather exaggerated claims, there were many; this much of the hype cycle mapping holds true. The blockchain narrative emerged out of the cryptocurrency bubble. When the likes of Bitcoin, Ethereum, and IOTA reached the mainstream, it became obvious that they had little to do with actual currencies and that the nominal value of their tokens was not backed by any economic reality other than that of a speculative bubble.
Everyone saw that so-called cryptocurrencies were bullshit and the speculative bubble was about to burst, yet everyone also wanted to talk about it without looking like an idiot. Out of this dilemma the narrative was born that the first and most notorious of the artificial assets, Bitcoin, might collapse, but that the technology behind it, blockchain, was here to stay. Not only would the “blockchain technology” thus invented – until then it had been a mere design pattern – survive, it would disrupt everything from the financial industry and digital advertising to crybersecurity (sic!) and photography. Not the slightest example or explanation existed for any of this claimed disruptive potential, but everyone was keen to ride the mounting wave without being seen as just one of those Bitcoin dorks, and so everyone kept inventing what might happen, somehow, in the future.
Blockchain is only one of several related concepts – others being Bitcoin and cryptocurrencies – that peaked at about the same time:
In all cases we see the same steep rise followed by an equally steep decline. This observation alone should suffice to convince everyone that nothing is to be expected from blockchains, regardless of what the IBMs and SAPs of this world are claiming after having fallen for the ploy. We saw peaks like this before. Little more than a decade ago, Second Life was the blockchain of its time:
Second Life is long forgotten, and rightly so. There never was any real promise in it, there was only a popular (in certain circles) delusion and the fear of missing out that also drives our contemporary bullshit business fad.
Let us look at real technology trends for comparison. They do not peak and then collapse, but rather grow organically. Take cloud computing, for example:
We see no steep peak here, but rather a mostly linear growth over the course of four years from 2007 to 2011. After climbing to its all-time high, interest decreases slowly and remains at a high level for a long time. Unlike Second Life, cloud computing is here to stay.
Another real trend is REST APIs, a way of designing web services that are meant to be accessed by programs rather than users:
We see a mostly steady increase over more than a decade that at some point gained a bit of speed. Again, there is no sudden peak here. As a third example, NoSQL databases started a bit later but otherwise exhibit a similar trend graph:
Even topics that had some hype to them and were being discussed by the general public exhibit slower increases and declines if there is substance behind the hype. Take, for example, “big data” and some related terms:
Real technology trends and topics take time to develop and then they persist or even continue to grow for quite a while. A real technology trend is an evolutionary process. Something gets discovered or invented and turns out useful. People apply the idea and develop it further, which makes it more attractive.
Genuine technology does not suddenly appear like a Marian apparition and disrupt everything within months, nor does interest in it fade as quickly as it would for anything lacking substance after its hollowness became obvious even to its advocates. Peak Blockchain is not the result of a genuine technology trend, but rather the consequence of everyone in a crowd assuring one another of the beauty of the emperor’s clothes when in fact there never even was an empire.
Blockchain is over. Time to ridicule those who jumped on the bandwagon ignoring all obvious warning signs, determined to boost their careers. They will soon exercise their right to be forgotten.
There was a time when personal computers came with security built into their hardware. For about a decade from 1984 on, virtually every PC featured a key lock. Depending on the particular implementation, locking would prevent powering on the computer, keyboard input, hard drive access, opening the case, or a combination thereof. This video tells the story:
From today’s perspective the key lock looks like a weak if not pointless security mechanism. In the best case it makes tampering with the hardware slightly harder—attackers have to equip themselves with tools and spend some time using them—while not at all addressing all the software vulnerabilities that we care about so much today.
Nevertheless the design made a lot of sense.
First, a key lock is a usable security mechanism. Everyone is familiar with key locks and knows how to use them, no complicated setup is required, and there is little potential for mistakes.
Second, an attacker’s physical access to hardware internals defeats most software security mechanisms. Physical access control is therefore a prerequisite for security against certain threats.
Third, personal computers at that time were not really threatened by online or software attacks, perhaps with the exception of relatively harmless viruses spreading through exchanged floppy disks. Someone tampering with the computer was indeed one of the more realistic threats.
Fourth, seemingly minor security gains can be rather effective when put in context. While forcing attackers to carry tools and use them may not seem like a great complication for them, it may suffice to prevent opportunistic attacks as well as quick serial attacks against multiple consecutive targets.
Security technology has evolved to make key locks obsolete, but they made sense at the time of their introduction.
TL;DR: The author thinks Snowden’s home security app, Haven, is snake oil regardless of the algorithms it uses. Operational security is at least as hard as cryptography and no app is going to provide it for you.
Bogus cryptography is often referred to as snake oil—a remedy designed by charlatans for the sole purpose of selling it to the gullible. Discussions of snake oil traditionally focused on cryptography as such and technical aspects like the choice of algorithms, the competence of their designers and implementers, or the degree of scrutiny a design and its implementation received. As a rule of thumb, a set of algorithms and protocols is widely accepted as probably secure according to current public knowledge, and any poorly motivated deviation from this mainstream raises eyebrows.
However, reasonable choices of encryption algorithms and crypto protocols alone do not guarantee security. The overall application in which they serve as building blocks needs to make sense as well in light of the threat models this application purports to address. Snake oil is easy to mask at this level. While most low-level snake oil can be spotted by a few simple patterns, the application layer calls for a discussion of security requirements.
Enter Haven, the personal security app released by Freedom of the Press Foundation and Guardian Project and associated in public relations with Edward Snowden. Haven turns a smartphone into a remote sensor that alerts its user over confidential channels about activity in its surroundings. The intended use case is apparently to put the app on a cheap phone and leave this phone wherever one feels surveillance is needed; the user’s primary phone will then receive alerts and recordings of sensed activity.
Alas, these functions together create a mere securitoy that remains rather ineffective in real applications. The threat model is about the most challenging one can think of short of an alien invasion. A secret police that can make people disappear and get away with it is close to almighty. They will not go through court proceedings to decide who to attack and they will surely not be afraid of journalists reporting on them. Where a secret police makes people disappear there will be no public forum for anyone to report on their atrocities. Just imagine using Haven in North Korea—what would you hope to do, inside the country, after obtaining photos of their secret police?
Besides strongly discouraging your dissemination of any recordings, a secret police can also evade detection through Haven. They might, for example, jam wireless signals before entering your home or hotel room so that your phone has no chance of transmitting messages to you until they have dealt with it. Or they might simply construct a plausible pretense, such as a fire alarm going off and agents-dressed-as-firefighters checking the place. Even if they fail to convince you, you will not be able to react in any meaningful way to the alerts you receive. Even if you were close enough to do anything at all, you would not physically attack agents of a secret police that makes people disappear, would you?
What Haven is trying to sell is the illusion of control where the power differential is clearly in favor of the opponent. Haven sells this illusion to well-pampered Westerners and exploits their lack of experience with repression. To fall for Haven you have to believe the premise that repression means a secret police in an otherwise unchanged setting. This premise is false: A secret police making people disappear inevitably exists in a context that limits your access to institutions like courts or media and the amount of support you can expect from them. Secret communication as supported by Haven does not even try to address this problem.
While almost everyone understands the problems with low-level snake oil and how to detect and avoid it, securitoys and application layer snake oil continue to fool (some) journalists and activists. Here are a few warning signs:
Security is the only or primary function of a new product or service. Nothing interesting remains if you remove it.
The product or service is being advertised as a tool to evade repression by states.
The threat model and the security goals are not clearly defined and there is no sound argument relating the threat model, security goals, and security design.
Confidentiality or privacy are being over-emphasized and encryption is the core security function. Advertising includes references to “secure” services like Tor or Signal.
The product or service purports to solve problems of operational security with technology.
When somebody shows you a security tool or approach, take the time to ponder how contact with the enemy would end.
Software penetration testing has become a common practice in software companies. Penetration testers apply exploratory testing techniques to find vulnerabilities, giving developers feedback on the results of their security efforts—or lack thereof. This immediate benefit is mostly uncontested, although it comes with the caveat that testers find only a fraction of the present vulnerabilities and their work is rather costly.
Security experts commonly recommend that developers should respond to the results of a penetration test not merely by fixing the specific reported vulnerabilities, but rather analyze and mitigate the root causes of these vulnerabilities. Just as commonly, security experts bemoan that this does not seem to happen in practice. Is this true and what prevents penetration testing from having an impact beyond defect fixing?
Studying Software Developers
We studied this question by observing a product group of a software company over the course of a year. The company had hired security consultants to test one of its products for security defects and subsequently train the developers of this product. We wanted to know how effective this intervention was as a trigger for change in development. To this end we conducted two questionnaires, observed the training workshop, analyzed the contents of the group’s wiki and issue tracker, and interviewed developers and their managers.
The product group in our study comprised several Scrum teams, development managers, and product management. Scrum is a management framework for agile software development. Scrum defines three roles—Development Team, Product Owner, and Scrum Master—and artifacts and ceremonies for their collaboration. The Development Team is responsible for and has authority over technical matters and development work; the Product Owner is responsible for requirements and priorities; and the Scrum Master facilitates and moderates their collaboration.
The participants of our study appreciated the penetration test results as feedback and the training workshop as an opportunity to learn. They managed to address the particular vulnerabilities reported by the consultants. They also felt that security needed more attention in their development work. Yet, after addressing the vulnerabilities uncovered by the penetration test, they returned to their familiar ways of working without lasting change.
Analyzing our observations through the lens of organizational routines, we found three key factors inhibiting change in response to the penetration test and training: successful application of existing routines, the organizational structure of roles and responsibilities, and the overall security posture and attitude of the company.
(1) Existing Routines
To address the immediate findings of the penetration test, our developers used an existing bug-fixing and stabilization routine. Defect reports arrive asynchronously and sometimes require quick response; developers therefore commonly dedicate some—variable—percentage of their working time to defect fixing in response to reports. The penetration test fed the team dozens of new defects at once, but developers had a routine to deal with them. Moreover, management tracked the number of open defects, so that the sudden increase raised attention and created pressure on the team to get this number down.
Feature development, on the other hand—where change would have to occur—remained mostly unaffected. Feature development followed the process of Scrum and the penetration test neither had a specific impact here nor did it feed requests or ideas into this routine.
(2) Organizational Structure
Following the ideas of Scrum and agile development, a strong division of labor and responsibilities characterized the organizational structure in the setting of our study. Product management and product owners were responsible for the direction of the development work, whereas development teams enjoyed a certain autonomy in technical questions. This structure worked as a social contract: managers expected developers to take care of security as a matter of quality, and developers were keen to deliver the features requested by management. However, the penetration test had little impact on the managers’ priorities beyond the pressure to reduce the number of open defects. The developers thus found themselves in a situation where security was not explicitly required and additional security work could not be justified.
(3) Business Role of Security
Finally, security had limited perceived importance for the business of the studied company, which thus far had not experienced any public security disaster and did not actively sell security. The company therefore lacked a security narrative that could have been used to justify security efforts beyond defect fixing. This, together with the inherent low visibility of security and insecurity, shaped priorities. Product managers knew that features sell their products—new features are easy to show and explain, whereas security improvements are not. Security was perceived as contributing little to the success of the product and the company, making it a low-priority requirement.
Our study points to some of the complexities of managing software development and of triggering change by interventions. While it would be tempting to assign blame to a single factor, such as agile development or negligent management, the problem is really more complicated. Organizational structures and routines exist and they are shaped by business needs. Scrum, for example, is highly beneficial for the studied company. One might even ask whether the company’s dealing with security is a problem in the first place. Are they perhaps doing the right thing for their market and customers?
Cypherpunk ideas have a long legacy and continue to influence how we discuss matters of security and privacy, particularly in the relationship between citizens and governments. In a nutshell, cypherpunks posit that we can and should keep government intervention minimal by force-protecting our privacy by means of encryption.
Let us begin with what they got right:
“For privacy to be widespread it must be part of a social contract.”
Social contracts are the basis of every society; they define a society and are represented in its formal and informal norms. Privacy is indeed a matter of social contract as it concerns very fundamental questions of what the members of a society should know about each other, how they should learn it, and how they may or may not use what they know about others.
Privacy is not merely a matter of hiding information so that it cannot be found. The absence of expected features or utterances carries information, too. Some societies, for example, expect their members to actively demonstrate their allegiance. Members of such a society cannot merely hide what they think, they also have to perform their role as expected if they have no desire to become a leper.
What the cypherpunks got entirely wrong was their conception of social contracts and the hope that cryptography could be the foundation upon which they, the cypherpunks, would build their own. Cypherpunks believe that cryptography would allow them to define their own social contract on top of or next to the existing ones. This has not worked and it cannot work. On the one hand, this is not how social contracts work. They are a dimension of a society’s culture that evolves, for better or worse, with this society.
On the other hand, cryptography–or security technology in general–does not change power relationships as much as cypherpunks hope it would. Governments are by definition institutions of power: “Government is the means by which state policy is enforced.” Cypherpunks believe that cryptography and other means of keeping data private would limit the power of their governments and lay it into the cypherpunks’ hands. However, the most fundamental power that any working government has is the power to detain members of the society it is governing.
In an echo of cypherpunk thinking, some people react to an increased interest of the U.S. Customs and Border Protection (CBP) in travelers’ mobile devices with the suggestion to leave those devices at home while traveling. After all, the CBP cannot force you to surrender what you do not have on you, or so the reasoning goes. This thinking has, however, several flaws.
First, from a security point of view, leaving your phone at home means to leave it just as unattended as it would be in the hands of a CBP agent. If the government really wants your data, nothing better could happen to them than getting access to your phone while you are not even in the country.
Second, the government is interested in phones for a reason. Cryptography and other security mechanisms do not solve security problems, they only transform them. Cryptography in particular transforms the problem of securing data into a problem of securing keys. The use of technology has evolved in many societies to the point where our phones have become our keys to almost everything we do and own online; they have become our primary window into the cloud. This makes phones and the data on them valuable in every respect, for those trying to exert power as well as for ourselves. You lose this value if you refrain from using your phone. Although it seems easy to just leave your phone at home, the hidden cost of it is hefty. Advice suggesting that you do so is therefore not very practical.
Third, this is not about you (or if it is, see #1 above). Everyone is using mobile phones and cloud services because of their tremendous value. Any government interested in private information will adapt its approach to collecting this information to the ways people usually behave. You can indeed gain an advantage sometimes by just being different, by not behaving as everyone else would. This can work for you, particularly if the government’s interest in your affairs is only a general one and they spend only average effort on you. However, this same strategy will not work for everyone as everyone cannot be different. If everyone left their phones at home, your government would find a different way of collecting information.
By ignoring a bit of context, cypherpunks manage to arrive at wrong conclusions from right axioms:
“We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence.”
“We must defend our own privacy if we expect to have any.”
This is true, but incomplete. Power must be contained at its source (and containment failures are a real possibility). Cryptography and other security technology do not do that. Cryptography can perhaps help you evade power under certain circumstances, but it will by no means reverse power relationships. What you really need is a social contract that guarantees your freedom and dignity.
“Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech,’” writes The Guardian. This is bullshit. As per the definition cited above, free speech means you can say things without fear. Being able to say things only in private and needing strong technical privacy guarantees is the opposite of free speech. You need encryption for that which you cannot say without fear.
Yes, encryption can be a tool against those who suppress you (though a weak one, as your adversary can easily use your use of encryption against you – or deny you due process altogether and persecute you without any trace of evidence or probable cause). But encryption will never give you free speech, it will only support your inner emigration.
My colleagues Philipp Holzinger, Stefan Triller, Alexandre Bartel, and Eric Bodden had a closer look at Java and the vulnerabilities discovered in the Java runtime environment during the last decade. They started from known exploits, identified the vulnerabilities exploited, and analyzed and grouped their root causes. Philipp’s presentation of the results at CCS’16 has been recorded and published on YouTube: