The following video teaches us the seven signs of terrorism:
- Surveillance
- Elicitation (probing for information)
- Tests of security
- Acquiring supplies
- Suspicious people who “don’t belong”
- Dry runs or trial runs
- Deploying assets or getting into position
Now watch out for terrorists.
Today’s safety video is brought to you by ScotRail. It examines in depth three cases of train drivers accidentally passing red signals.
*) SPAD = Signal Passed at Danger
Jeremy Howard explains what machine learning is capable of in this TEDx Brussels talk, “The wonderful and terrifying implications of computers that can learn”:
All the things that can happen to a watermelon without a helmet:
Mikael Colville-Andersen explains why we should cycle, but should not promote helmet use:
The essence of security – your adversary won’t abide by the rules:
This video won’t teach you warfare (just like an introduction to cryptography won’t teach you security), but it is nevertheless interesting:
How would a rational autonomous system behave? What might go wrong and what can we do about it? Steve Omohundro discusses these questions in a thought-provoking talk:
Hat tip to Manu मनु for pointing me to this video in a comment.
How Google determines which ad to display in a slot and how much to charge the advertiser:
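The mechanism shown in the video is, in essence, a generalized second-price auction with quality scores: ads are ranked by bid times quality, and each winner pays just enough to beat the ad ranked below it. Here is a minimal sketch of that idea; the numbers, the reserve price, and the one-cent increment are illustrative assumptions, not Google’s actual parameters.

```python
def run_gsp_auction(bids):
    """Generalized second-price auction with quality scores (simplified).

    bids: dict mapping advertiser -> (max_cpc_bid, quality_score).
    Returns advertisers in slot order together with the price per click
    each would pay: the minimum needed to keep their rank above the
    next ad's rank, divided by their own quality score.
    """
    # Rank ads by ad rank = bid * quality score, highest first.
    ranked = sorted(bids.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)
    results = []
    for i, (name, (bid, quality)) in enumerate(ranked):
        if i + 1 < len(ranked):
            _, (next_bid, next_quality) = ranked[i + 1]
            next_rank = next_bid * next_quality
            # Pay just enough to outrank the next ad (increment assumed).
            price = next_rank / quality + 0.01
        else:
            price = 0.01  # assumed reserve price for the lowest-ranked ad
        results.append((name, round(price, 2)))
    return results

# Example: B's higher quality score wins the top slot despite a lower bid.
print(run_gsp_auction({
    "A": (4.00, 0.5),   # ad rank 2.0
    "B": (3.00, 1.0),   # ad rank 3.0
}))
# → [('B', 2.01), ('A', 0.01)]
```

Note the second-price property: B’s payment depends on A’s bid and quality, not on B’s own bid, which reduces the incentive to shade bids.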
Everyone knows the story of Clifford Stoll and the West German hackers spying for the KGB (see the video below) in the late 80s. Does this history teach us something today? What strikes me as I watch this documentary again is the effort ratio between attackers and defenders. To fight a small adversary group, Stoll invested considerable effort, and at some point involved further people and organizations in the hunt. In effect, once they had been detected, the attackers were on their way to being overpowered and apprehended.
Today, we take more organized approaches to security management and incident response. At the same time, however, we try to become more efficient: we want to believe in automated mechanisms like data leakage prevention and policy enforcement. But these mechanisms work on abstractions – they are less complicated than actual attacks. We also want to believe in preventive security design, but soon find ourselves engaged in an eternal arms race, as our designs never fully anticipate how attackers adapt. Can procedures and programs be smart enough to fend off intelligent attackers, or does it still simply take more brains on the defender’s side than on the attacker’s to win?
Watch through the end for the twist.
I particularly love the music.