August 28, 2013
Two quotes and a thought:
»If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.« — Eric Schmidt
»A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper.« — Bruce Schneier
Is their message the same, or does it make a difference whether we talk about individuals or organizations? One limitation they surely share: the mere fact that one doesn’t want others to know certain things does not imply those things are wrong, even if one rightfully fears the others’ reaction. Both quotes oversimplify the problem by ignoring the right/wrong dimension, thus omitting case distinctions that may be important to the discussion.
August 1, 2013
How Google determines which ad to display in a slot and how much to charge the advertiser:
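The mechanism, as publicly described, is a generalized second-price auction with quality scores: the highest ad rank (bid times quality score) wins the best slot, and each winner pays just enough to beat the ad ranked below it. A minimal sketch in Python – the ads, bids, and quality scores are invented, and the real system weighs many more factors:

```python
# Sketch of a generalized second-price (GSP) ad auction with quality scores.
# All numbers here are invented for illustration.

def run_auction(ads):
    """ads: list of (name, bid, quality_score).
    Returns ads in slot order with the price each pays per click."""
    # Ad rank = bid * quality score; higher rank gets the better slot.
    ranked = sorted(ads, key=lambda a: a[1] * a[2], reverse=True)
    results = []
    for i, (name, bid, quality) in enumerate(ranked):
        if i + 1 < len(ranked):
            # Pay just enough to beat the next ad's rank, given your own quality.
            next_rank = ranked[i + 1][1] * ranked[i + 1][2]
            price = round(next_rank / quality + 0.01, 2)
        else:
            price = 0.01  # last ad in the ranking pays only a minimum
        results.append((name, price))
    return results

ads = [("A", 4.00, 0.6), ("B", 3.00, 0.9), ("C", 1.00, 0.5)]
print(run_auction(ads))
```

Note how B outranks A despite the lower bid: quality enters both the ranking and the price, so a high-quality ad wins more often and pays less per click.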
July 5, 2013
Everyone knows the story of Clifford Stoll and the West German hackers who spied for the KGB in the late 80s (see the video below). Does this history teach us something today? What strikes me as I watch this documentary again is the effort ratio between attackers and defenders. To fight a small adversary group, Stoll invested considerable effort and, from some point on, involved further people and organizations in the hunt. In effect, once they had been detected, the attackers were on their way to being overpowered and apprehended.
Today, we take more organized approaches to security management and incident response. At the same time, however, we try to become more efficient: we want to believe in automated mechanisms like data leakage prevention and policy enforcement. But these mechanisms work on abstractions – they are less complicated than actual attacks. We also want to believe in preventive security design, but soon find ourselves engaged in an eternal arms race as our designs never fully anticipate how attackers adapt. Can procedures and programs be smart enough to fend off intelligent attackers, or does it still simply take more brains on the defender’s side than on the attacker’s to win?
June 21, 2013
Watch through the end for the twist.
June 20, 2013
An article in the latest issue of CACM (paywalled) quotes some prices advertised by criminals for elementary attack building blocks:
- DoS – $50…$500 per day
- hacking a private e-mail address – $30…$50
- forged identity documents – <$30
- software opening fake accounts – <$500
- custom-made malware – $1,500 + monthly service fee
If you want to get rich, don’t waste your time on developing sophisticated attack techniques. Look at the services available and devise a business model.
June 7, 2013
I particularly love the music.
May 24, 2013
A few weeks ago we saw the Russian way of obtaining soda from a vending machine. It was simple and robust. French nerds employ an entirely different style:
May 14, 2013
The Guardian lists 10 gross ingredients you didn’t know were in your food, ingredients like arsenic, hair, or silicone breast implant filler. Should we react with nausea and disgust? Of course not. Yummy food is yummy food; neither a barely detectable trace of something (arsenic), nor the source of an ingredient (hair), nor possible other uses of the same ingredient (breast implants) has any noticeable impact. That’s by definition: if a dose of anything has a proven adverse health impact, it is banned from use in food. The Guardian‘s list is an example of microscopic properties that don’t matter macroscopically. Yummy food is yummy food.
We commit the same error when, in security, we look just at software defects and neglect their security impact. All software has defects; we might easily assemble a list of 10, or 100, or 1000 defects you didn’t know were in your programs. This does not mean they all matter and need to be removed. A system is secure if it evokes a predictable and controlled incident profile over its lifetime. Some software defects in some systems affect this incident profile in such a way that their removal matters. Others are just traces of poison, or issues that appear problematic only by analogy. The problem is: we often don’t know which is which.
May 2, 2013
I really liked the headline, though I didn’t like what the article said, i.e. that the U.S. President might start a cyber war / launch a preemptive strike without a real conflict with the state in question. The commentator at the Homeland Security Blog gets the point: “While Presidential Policy Directive 20 is secret, what is known about it is sufficient to raise global concern. The US arsenal is stupefying as it is, and cyber capabilities add a new dimension; and preemption brings us back to the fear and insecurity of the chilliest Cold War years. The undertones of the new policy are aggressive and, as of now, there are no known restrictions.”
April 26, 2013
This video shows a gang of street robbers at work in Bogotá. The commentary is in Spanish, but you’ll probably get some of the interesting points from merely watching: how they change clothes, how one of them picks a victim and signals the pick to the others, or how they literally overpower their target.
April 7, 2013
What happens when a government enacts and enforces a mandatory helmet law for motorcycle riders? According to criminologists, a reduction in motorbike theft follows. The 1989 study Motorcycle Theft, Helmet Legislation and Displacement by Mayhew et al. (paywalled, see the Wikipedia summary) demonstrated this effect empirically, looking at crime figures from Germany, where riding a motorcycle without a helmet has been fined since 1980. This led to a 60% drop in motorcycle theft – interestingly, with limited compensation by increases in other types of vehicle theft.
The plausible explanation: motorcycle thieves incur a higher risk of being caught when riding without a helmet after a spontaneous, opportunistic theft. Adaptation is possible but costly and risky, too: looking for loot with a helmet in one’s hand is more conspicuous, and preparing specifically to steal motorcycles reduces flexibility and narrows the range of possible targets.
March 31, 2013
Which of the following statements do you agree or disagree with, and why?
- If you get robbed, it’s your fault. You should have carried a gun and used it to defend yourself.
- If your home gets burgled, it’s your fault. You should have secured your home properly.
- If you get raped, it’s your fault. You shouldn’t have worn those sexy clothes, and hey, what were you doing in that park at night?
- If your computer gets hacked, it’s your fault. You should have patched the computer every day and used a better password.
- If you get run over by a car and injured in the accident, it’s your fault. You should have worn a helmet and a high-viz jacket.
- If someone bullies you on the Internet, it’s your fault. You should have used a pseudonym on Facebook.
Do you agree with all, some, or none of these statements? Please elaborate in the comments. I’m not so much interested in nitpicking about the causation of certain incidents – read »your fault« as »in part your fault« if you like. What interests me is rather the consistency or inconsistency in our assessment of these matters. If you happen to agree with some of the statements but disagree with others, why is that?
March 30, 2013
Just came across a crime science paper that expresses an idea similar to my security property degrees:
»In addition, for any crime, opportunities occur at several levels of aggregation. To take residential burglary as an example, a macro level, societal-level cause might be that many homes are left unguarded in the day because most people now work away from home (cf. Cohen and Felson 1979). A meso-level, neighborhood cause could be that many homes in poor public housing estates once used coin-fed fuel meters which offered tempting targets for burglars (as found in Kirkholt, Pease 1991). A micro-level cause, determining the choices made by a burglar, could be a poorly secured door.«
(Ronald V Clarke: Opportunity makes the thief. Really? And so what?)
Clarke doesn’t elaborate any further on these macro/meso/micro levels of opportunity for crime. Maybe I’m reading too much into this paragraph, but in essence he seems to talk about security properties – in his paper he discusses the proposition that opportunity is a cause of crime and reviews the literature on the subject. Opportunity means properties of places and targets.
February 14, 2013
Now this is an interesting specialization of phishing attacks:
»The scam works like this: Someone obtains a list of articles submitted to — and under review by — a publisher, along with the corresponding author’s contact information. Then, pretending to be the publisher, the scammers send a bogus email to the corresponding author informing him that his paper has been accepted. The email also contains article processing fee information, advising the author to wire the fee to a certain person. The problem is that it’s not the publisher sending out the acceptance emails — it’s a bad guy.«
(Scholarly Open Access: Fraud Alert: Bogus Article Acceptance Letters)
I doubt this constitutes a sustainable attack business model, but the perpetrators surely deserve credit for being adaptive and creative.
January 7, 2013
Bruce Schneier dug out this little gem: Forbidden Spheres. A nuclear weapons lab, so the (apparently unconfirmed) story goes, classified all spherical objects as confidential, rather than just those related to nuclear weapons design. In the security-as-classification paradigm this makes a lot of sense. The desired security policy is to keep everything related to weapons design confidential. This includes objects that might give away information, which one will naturally find in an R&D lab. To enforce this policy, however, requires either comprehensive tracking of all objects, or complicated considerations to decide whether an object is classified as confidential under the policy or not. But a subset of the objects under consideration has a common, easy-to-detect property: they are spherical. The classification required as a prerequisite for policy enforcement becomes much simpler if one considers only this feature. The classification also becomes less perfect: there are classification errors. However, if it’s all about nuclear spheres, then the simplified classifier errs systematically towards the safe side, erroneously classifying innocuous spherical objects as confidential. As long as it doesn’t disturb the intended operations of the lab, this may well be acceptable. This approach would break down if an adversary, or an accident, could easily change relevant shapes without defeating their purpose.
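The trade-off can be sketched in a few lines of Python. The objects and their »sensitive« ground-truth labels below are invented; the point is only that a cheap proxy feature (shape) yields a classifier whose errors all fall on the safe side, provided every sensitive object really is spherical:

```python
# Simplified classification rule from the Forbidden Spheres story:
# instead of judging per object whether it relates to weapons design,
# classify every spherical object as confidential.

def is_confidential_simple(obj):
    # Easy-to-detect proxy feature: shape.
    return obj["shape"] == "sphere"

# Invented inventory with invented ground-truth sensitivity labels.
objects = [
    {"name": "pit mockup",  "shape": "sphere",   "sensitive": True},
    {"name": "desk globe",  "shape": "sphere",   "sensitive": False},  # false positive
    {"name": "cable drum",  "shape": "cylinder", "sensitive": False},  # true negative
]

# Under the story's assumption (all sensitive objects are spherical),
# the rule never produces a false negative: no sensitive object slips through.
for obj in objects:
    assert not obj["sensitive"] or is_confidential_simple(obj)
```

The cost is over-classification (the desk globe gets locked away); the benefit is that enforcement no longer needs per-object judgment calls.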
December 21, 2012
The Onion warns us about this new Internet scam: