Security is a classification problem: Security mechanisms, or combinations of mechanisms, need to distinguish that which they should allow to happen from that which they should deny. Two aspects complicate this task. First, security mechanisms often only solve a proxy problem. Authentication mechanisms, for example, usually distinguish some form of token – passwords, keys, sensor input, etc. – rather than the actual actors. Second, adversaries attempt to shape their appearance to pass security mechanisms. To be effective, a security mechanism needs to cover these adaptations, at least the feasible ones.
An everyday problem illustrates this: closing roads for some vehicles but not for others. As a universal but costly solution one might install retractable bollards, issue means to operate them to the drivers of permitted vehicles, and prosecute abuse. This approach is very precise, because classification rests on an artificial feature designed solely for security purposes.
Simpler mechanisms can work sufficiently well if (a) intrinsic features of vehicles are correlated with the desired classification well enough, and (b) modification of these features is subject to constraints so that evading the classifier is infeasible within the adversary model.
Bus traps and sump busters classify vehicles by size, letting lorries and buses pass while stopping common passenger cars. The real intention is to classify vehicles by purpose and operator, but physical dimensions happen to constitute a sufficiently good approximation. Vehicle size correlates with purpose. The distribution of sizes is skewed; there are many more passenger cars than buses, so keeping even just most of them out does a lot. Vehicle dimensions do not change on the fly, and are interdependent with other features and requirements. Although a straightforward way exists to defeat a bus trap – get a car that can pass – this is too expensive for most potential adversaries and their possible gain from the attack.
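A bus trap is, in effect, a one-feature threshold classifier. The following sketch illustrates the idea; the track-width threshold and vehicle measurements are invented for illustration, not real-world values:

```python
# A bus trap approximates the intended classification (vehicle purpose
# and operator) by an intrinsic proxy feature: the vehicle's track width.
# Wide-tracked buses and lorries straddle the pit; narrow cars fall in.

WIDE_TRACK_THRESHOLD_CM = 180  # assumed value, for illustration only

def bus_trap_allows(track_width_cm: int) -> bool:
    """Classify a vehicle by its track width alone."""
    return track_width_cm >= WIDE_TRACK_THRESHOLD_CM

# Illustrative measurements: the proxy feature correlates with purpose.
vehicles = {"passenger car": 150, "bus": 215, "lorry": 205}
allowed = {name: bus_trap_allows(width) for name, width in vehicles.items()}
```

Evading this classifier means changing an intrinsic, slow-to-modify feature – getting a wider vehicle – which is exactly the cost constraint that keeps most adversaries out.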
When AlphaGo played and won against Lee Sedol, it made innovative moves that were not only unexpected by human experts but also not easily understandable for humans. Apparently this shocked and scared some folks.
However, AI coming up with different concepts than humans is nothing new. Consider this article recounting the story of Eurisko, a genetic programming experiment in the late 1970s. This experiment, too, aimed at competing in a tournament; the game played, Traveller TCS, was apparently about designing fleets of ships and letting them fight against each other. Even this early, simple, and small-scale AI thing surprised human observers:
“To the humans in the tournament, the program’s solution to Traveller must have seemed bizarre. Most of the contestants squandered their trillion-credit budgets on fancy weaponry, designing agile fleets of about twenty lightly armored ships, each armed with one enormous gun and numerous beam weapons.”
(Eurisko, The Computer With A Mind Of Its Own)
Keep in mind there was nothing scary in the algorithm: it was really just simulated evolution in a rather small design space, and the computer needed some help from its programmers to succeed.
The Eurisko “AI” even rediscovered the concept of outnumbering the enemy instead of overpowering him, a concept humans might associate with Lanchester’s models of combat attrition:
“Eurisko, however, had judged that defense was more important than offense, that many cheap, invulnerable ships would outlast fleets consisting of a few high-priced, sophisticated vessels. (…) In any single exchange of gunfire, Eurisko would lose more ships than it destroyed, but it had plenty to spare.”
(Eurisko, The Computer With A Mind Of Its Own)
Although Eurisko’s approach seemed “un-human”, it really was not. Eurisko only ignored all human biases and intuition, making decisions strictly by cold, hard data. This is a common theme in data mining, machine learning, and AI applications. Recommender systems, for example, create and use concepts unlike those a human would apply to the same situation; an article in IEEE Spectrum a couple of years ago (J. A. Konstan, J. Riedl: Deconstructing Recommender Systems) outlined a food recommender example and pointed out that concepts like “salty” would not appear in their models.
Transparency and auditability are surely problems if such technology is being used in critical applications. Whether we should be scared beyond this particular problem remains an open question.
(This is a slightly revised version of my G+ post, https://plus.google.com/+SvenT%C3%BCrpe/posts/5QE9KeFKKch)
The crypto wars are back, and with them the analogy of putting keys under the doormat:
… you can’t build a backdoor into our digital devices that only good guys can use. Just like you can’t put a key under a doormat that only the FBI will ever find.
(Rainey Reitman: An Open Letter to President Obama: This is About Math, Not Politics)
This is only truthy. The problem of distinguishing desirable from undesirable interactions to permit the former and deny the latter lies indeed at the heart of any security problem. I have been arguing for years that security is a classification problem; any key management challenge reminds us of it. I have no doubt that designing a crypto backdoor only law enforcement can use only for legitimate purposes, or any sufficiently close approximation, is a problem we remain far from solving for the foreseeable future.
However, the key-under-the-doormat analogy misrepresents the consequences of not putting keys under the doormat, or at least does not properly explain them. Unlike (idealized) crypto, our houses and apartments are not particularly secure to begin with. Even without finding a key under the doormat, SWAT teams and burglars alike can enter with moderate effort. This allows legitimate law enforcement to take place at the cost of a burglary (etc.) risk.
Cryptography can be different. Although real-world implementations often have just as many weaknesses as the physical security of our homes, cryptography can create situations where only a backdoor would allow access to plaintext. If all we have is a properly encrypted blob, there is little hope of finding out anything about its plaintext. This does not imply we must have provisions to avoid that situation no matter what the downsides are, but it does contain a valid problem statement: How should we regulate technology that has the potential to reliably deny law enforcement access to certain data?
The answer will probably remain the same, but acknowledging the problem makes it more powerful. The idea that crypto is not open to negotiation is fundamentalist and therefore wrong. Crypto must be negotiated about, and all objective evidence speaks in favor of strong crypto.
“Can God create a rock so heavy He could not lift it?” This is one version of the omnipotence paradox. To make a long story short, omnipotence as a concept leads to similar logical problems as the naïve set-of-sets and sets-containing-themselves constructions in Russell’s paradox. Some use this paradox to question religion; others use it to question logic; and pondering such questions generally seems to belong to the realm of philosophy. But the ongoing new round of (civil) crypto wars is bringing a transformed version of this paradox into everyone’s pocket.
Can Apple create an encryption mechanism so strong that even Apple cannot break it? Apple claims so, at least for the particular situation, in their defense against the FBI’s request for help with unlocking a dead terrorist’s iPhone: “As a result of these stronger protections that require data encryption, we are no longer able to use the data extraction process on an iPhone running iOS 8 or later.” Although some residual risk of unknown vulnerabilities remains, this claim seems believable as far as it concerns retroactive circumvention of security defenses. Just as a locksmith can make a lock that will be as hard to break for its maker as for any other locksmith, a gadgetsmith can make gadgets without known backdoors or weaknesses that this gadgetsmith might exploit. This is challenging, but possible.
However, the security of any encryption mechanism hinges on the integrity of key components, such as the encryption algorithm, its implementation, auxiliary functions like key generation and their implementation, and the execution environment of these functions. The maker of a gadget can always weaken it for future access.
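To make this concrete, here is a hypothetical sketch of a weakened key generator: keys are derived from a 16-bit seed, so they look like random 256-bit keys, yet anyone who knows the weakness can recover them by brute force. The construction is invented purely for illustration and not taken from any real product:

```python
import hashlib

def weak_keygen(seed: int) -> bytes:
    # Looks like a 256-bit key, but only 2**16 values are ever produced.
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

def recover_key(key: bytes) -> int:
    # Whoever knows the weakness just searches the tiny seed space.
    for seed in range(2 ** 16):
        if weak_keygen(seed) == key:
            return seed
    raise ValueError("not generated by the weak generator")
```

An audit of the encryption algorithm alone would find nothing wrong with SHA-256 here; the weakness hides in the auxiliary key-generation step, which is exactly where a maker can cheat.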
Should Apple be allowed to make and sell devices with security mechanisms so strong that neither Apple nor anyone else can break or circumvent them in the course of legitimate investigations? This is the real question here, and a democratic state based on justice and integrity has established institutions and procedures to come to a decision and enforce it. As long as Apple does not rise above states and governments, they will have to comply with laws and regulations if they are not to become the VW of Silicon Valley.
Thus far we do not understand very well how to design systems that allow legitimate law enforcement access while also keeping data secure against illegitimate access and abuse or excessive use of legitimate means. Perhaps in the end we will have to conclude that too much security would have to be sacrificed for guaranteed law enforcement access, as security experts warn almost in unison, or that a smartphone is too personal a mind extension for anyone to access it without its user’s permission. But this debate we must have: What should the FBI be allowed to access, what would be the design implications of guaranteed access requirements, and which side effects would we need to consider?
For all we know, security experts have a point when they warn against weakening what already breaks more often than not. To expect that companies could stand above the law because of security, however, is just silly.
PS, remember Clarke’s first law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
PPS: Last Week Tonight with John Oliver: Encryption
This video explains the essence of machine learning in just two minutes:
“Eat less bread” requests a British poster from WWI. We all know it makes sense, don’t we? Resources become scarce at wartime, so wasting them weakens one’s own position. Yet this kind of advice can be utterly useless: tell a hungry person to eat less bread and you will earn, at best, a blank stare. However reasonable your advice may seem to you and everyone else, a hungry person will be physically and mentally unable to comply.
“Do not call system()” or “Do not read uninitialized memory” request secure coding guides. Such advice is equally useless if directed at a person who lacks the cognitive ability to comply. Cognitive limitations do not mean a person is stupid. We all are limited in our respective ability to process information, and we are more similar to than dissimilar from each other in this regard.
Secure coding guidelines all too often dictate a large set of arbitrary dos and don’ts, but fail to take human factors into account. Do X! Don’t do Y, do Z instead! Each of these recommendations has a sound technical basis; code becomes more secure if everyone follows this advice. However, only some of these recommendations are realistic for programmers to follow. Their sheer number should raise our doubt and let us expect that only a subset will ever be adopted by a substantial number of programmers.
Some rules are better suited for adoption than others. Programmers often acquire idioms and conventions they perceive as helpful. Using additional parentheses for clarity, for example, even though not strictly necessary, improves readability; and the const == var convention – writing the constant on the left-hand side of a comparison – prevents certain defects that are easy to introduce and sometimes hard to debug.
Other rules seem, from a programmer’s point of view, just ridiculous. Why is there a system() function in the first place if programmers are not supposed to use it? And if developers should not read uninitialized memory, what would warn them about memory being not initialized? Such advice is inexpensive – and likely ineffective. If we want programmers to write secure code, we must offer them platforms that make secure programming easy and straightforward and insecure programming hard and difficult.
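As an illustration of a platform making the secure path the easy one: Python’s subprocess module accepts an argument list instead of a command string, so no shell ever parses untrusted input. A minimal sketch:

```python
import subprocess
import sys

# Hostile input passed as a separate list element is delivered to the
# child process as a plain argument; there is no shell to interpret the
# ";" as a command separator.
hostile = "payload; rm -rf /"
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", hostile],
    capture_output=True,
    text=True,
).stdout
# `out` contains the hostile string verbatim; nothing extra was executed.
```

Here the straightforward API is also the secure one, in contrast to string-building around system(), where the easy path is the injectable one.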
Security and protection systems guard persons and property against a broad range of hazards, including crime; fire and attendant risks, such as explosion; accidents; disasters; espionage; sabotage; subversion; civil disturbances; bombings (both actual and threatened); and, in some systems, attack by external enemies. Most security and protection systems emphasize certain hazards more than others. In a retail store, for example, the principal security concerns are shoplifting and employee dishonesty (e.g., pilferage, embezzlement, and fraud). A typical set of categories to be protected includes the personal safety of people in the organization, such as employees, customers, or residents; tangible property, such as the plant, equipment, finished products, cash, and securities; and intangible property, such as highly classified national security information or “proprietary” information (e.g., trade secrets) of private organizations. An important distinction between a security and protection system and public services such as police and fire departments is that the former employs means that emphasize passive and preventive measures.
The way to learn whether a person is trustworthy is to trust him.
In their Black Hat stage performance, employees of a security company showed how apps on certain mobile phones can access fingerprint data if the phone has a fingerprint sensor. The usual discussions ensued about rotating your fingerprints, biometrics being a bad idea, and biometric features being usernames rather than passwords. But was there a problem in the first place? Let’s start from scratch, slightly simplified:
- Authentication is about claims and the conditions under which one would believe certain claims.
- We need authentication when an adversary might profit from lying to us.
- Example: We’d need to authenticate banknotes (= pieces of printed paper issued by or on behalf of a particular entity, usually a national or central bank) because adversaries might profit from making us believe a printed piece of paper is a banknote when it really isn’t.
- Authentication per se has nothing to do with confidentiality and secrets, as the banknotes example demonstrates. All features that we might use to authenticate a banknote are public.
- What really matters is effort to counterfeit. The harder a feature or set of features is to reproduce for an adversary, the stronger it authenticates whatever it belongs to.
- Secrets, such as passwords, are only surrogates for genuine authenticating features. They remain bound to an entity only for as long as any adversary remains uncertain about their choice from a vast space of possible values.
- Fingerprints are neither usernames nor passwords. They are (sets of) biometric features. Your fingerprints are as public as the features of a banknote.
- We authenticate others by sets of biometric features every day, recognizing colleagues, friends, neighbours, and spouses by their voices, faces, ways of moving, and so on.
- We use even weaker (= easier to counterfeit) features to authenticate, for example, police officers. If someone is wearing a police uniform and driving a car with blinkenlights on its roof, we’ll treat this person as a police officer.
- As a side condition for a successful attack, the adversary must not only be able to counterfeit authenticating features; the adversary must also go through an actual authentication process.
- Stolen (or guessed) passwords are so easy to exploit on the Internet because the Internet does little to constrain their abuse.
- Attacks against geographically dispersed fingerprint sensors do not scale in the same way as Internet attacks.
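The points above about effort and environment can be put into a back-of-the-envelope model. The guessing rates below are assumptions chosen only to illustrate how the environment, not the secret alone, determines the effort to counterfeit:

```python
# Effort to guess a secret chosen from a vast space, under two
# environments: a rate-limited online login and an offline attack
# on a leaked password hash. All rates are illustrative assumptions.

def search_space(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

def expected_seconds_to_guess(space: int, guesses_per_second: float) -> float:
    # On average, half the space is searched before the secret is hit.
    return (space / 2) / guesses_per_second

space = search_space(62, 10)                      # 10 chars, letters + digits
online = expected_seconds_to_guess(space, 10.0)   # throttled web login
offline = expected_seconds_to_guess(space, 1e10)  # GPU rig on a leaked hash
# The same secret holds up online but not necessarily offline: the
# environment constrains abuse, much as geographic dispersion constrains
# attacks against fingerprint sensors.
```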
Conclusion: Not every combination of patterns-we-saw-in-security-problems makes a security problem. We leave fingerprints on everything we touch; they never were and never will be confidential.
Carcinogenic is a scary word, but what exactly does it mean? The International Agency for Research on Cancer (IARC), a part of the World Health Organization (WHO), has established a framework. The IARC reviews scientific evidence for and against carcinogenicity claims and places agents in one of the following categories:
- Group 1 – carcinogenic: Sufficient evidence exists that an agent is carcinogenic in humans. In this group we find, as expected, agents like various forms of radiation (gamma, x-ray, ultraviolet), tobacco smoke, alcohol, estrogen-progestogen oral contraceptives, and asbestos.
- Group 2A – probably carcinogenic: Strong evidence exists that an agent is carcinogenic, yet the evidence remains insufficient to be sure. More precisely, there is sufficient evidence of carcinogenicity in animals but limited evidence in humans. This group contains, for example, acrylamide and occupational exposure to oxidized bitumens and their emissions during roofing.
- Group 2B – possibly carcinogenic: Some evidence exists for carcinogenicity of an agent, but it is neither sufficient nor strong. This class comprises agents like chloroform, coffee, DDT, marine diesel fuel, gasoline and gasoline engine exhaust, and certain surgical implants and other foreign bodies.
- Group 3 – unclassifiable: Evidence is inadequate; the agent may or may not be carcinogenic, we do not know enough to tell. This category contains agents like caffeine, carbon nanotubes, static or extremely low-frequency electric fields, fluorescent lighting, phenol, polyvinyl chloride, and rock wool.
- Group 4 – probably not: We have evidence that an agent is not carcinogenic. Perhaps due to publication bias, this group contains only one element, caprolactam.
The IARC publishes its classification criteria and lists (by category, alphabetical). Wikipedia also maintains lists by these categories with links to the respective pages.
Keep in mind that this classification represents only the state of evidence regarding carcinogenicity in humans, not your risk of becoming exposed to an agent, a dose-response assessment, or overall health risk from exposure. Hint: everything that kills you quickly and reliably is unlikely to cause you cancer.
Is security about keeping secrets? Not really, although it seems so at first glance. Perhaps this mismatch between perception and reality explains why threats are mounting in the news without much impact on our actual lives.
Confidentiality comes first in infosec’s C/I/A (confidentiality, integrity, availability) trinity. Secrets leaking in a data breach are the prototype of a severe security problem. Laypeople even use encryption and security synonymously. Now that the half-life of secrets is declining, are we becoming less and less secure?
Most real security problems are not about keeping secrets; they are about integrity of control. Think, for example, of the money in your wallet. What matters to you is control over this money, which should abide by certain rules. It’s your money, so you should remain in control of it until you voluntarily give up your control in a transaction. The possibility of someone else taking control of your money without your consent, through force or trickery, is something to worry about and, if such others exist, a real security problem. Keeping the contents of your wallet out of sight is in contrast only a minor concern. Someone peeking into your wallet without taking anything is not much of a threat. Your primary security objective is to remain in control of what is yours most of the time and to limit your losses across the exceptional cases when you are not.
This security objective remains just the same as you move on from a wallet to online banking. What matters most is who controls the balance in which way. In a nutshell, only you (or others with your consent), knowingly and voluntarily, should be able to withdraw money or transfer it from your account; you should not be able to increase your balance arbitrarily without handing in actual money; others should be able to transfer any amount to your account; exceptions apply if you don’t pay your debts.
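The rules above can be read as integrity invariants and sketched as code. The class below is a hypothetical illustration, not a real banking API; consent and debt handling are reduced to a single flag:

```python
class Account:
    """Toy model: integrity of control, reduced to two invariants."""

    def __init__(self) -> None:
        self.balance = 0

    def deposit(self, amount: int) -> None:
        # Others may transfer any (positive) amount to your account,
        # but no one can increase the balance without handing in money.
        if amount <= 0:
            raise ValueError("deposits must be positive")
        self.balance += amount

    def withdraw(self, amount: int, owner_consents: bool) -> None:
        # Money leaves the account only knowingly and voluntarily.
        if not owner_consents:
            raise PermissionError("withdrawal requires the owner's consent")
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid or uncovered amount")
        self.balance -= amount
```

Nothing in this model is secret; what it protects is who may change the balance, and how.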
Confidentiality is only an auxiliary objective. We need confidentiality due to vulnerabilities. Many security mechanisms rely on secrets, such as passwords or keys, to maintain integrity. This is one source of confidentiality requirements. Another is economics: Attackers will spend higher amounts on valuable targets, provided they can identify them. If there is a large number of possible targets but only a few are really valuable, one might try to make the valuable target look like all the others so that attackers have to spread at least part of their effort across many candidate targets. However, strong defenses are still needed in case attackers identify the valuable target in whichever way, random or systematic.
The better we maintain integrity of control, the more secure we are. Systems remain predictable and do what we want despite the presence of adversaries. Confidentiality is only a surrogate where we do not trust our defenses.
The following video teaches us the seven signs of terrorism:
- Surveillance
- Elicitation
- Tests of security
- Acquiring supplies
- Suspicious people who “don’t belong”
- Dry runs or trial runs
- Deploying assets or getting into position
Now watch out for terrorists.
We are organizing a workshop on agile secure software development in conjunction with the ARES’15 conference. Please find the full call for papers on the workshop website, http://www.ares-conference.eu/conference/workshops/assd-2015/. The conference takes place in Toulouse this year.
Submission Deadline: April
Author Notification: May 11, 2015
Proceedings version: June 8, 2015
Conference: August 24-28, 2015