Reverse Deception

Organized Cyber Threat Counter-Exploitation

Sean Bodmer

Dr. Max Kilger

Gregory Carpenter

Jade Jones

New York   Chicago   San Francisco

Lisbon   London   Madrid   Mexico City

Milan   New Delhi   San Juan

Seoul   Singapore   Sydney   Toronto

Reverse Deception: Organized Cyber Threat Counter-Exploitation

Copyright © 2012 by The McGraw-Hill Companies. All rights reserved. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

ISBN: 978-0-07-177250-1
MHID: 0-07-177250-2

The material in this eBook also appears in the print version of this title: ISBN 978-0-07-177249-5, MHID 0-07-177249-9.

All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.

McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. To contact a representative, please e-mail us at [email protected].

Information has been obtained by McGraw-Hill from sources believed to be reliable. However, because of the possibility of human or mechanical error by our sources, McGraw-Hill, or others, McGraw-Hill does not guarantee the accuracy, adequacy, or completeness of any information and is not responsible for any errors or omissions or the results obtained from the use of such information.

Sponsoring Editor
Amy Jollymore

Editorial Supervisor
Patty Mon

Project Manager
Harleen Chopra, Cenveo Publisher Services

Acquisitions Coordinator
Ryan Willard

Technical Editor
Alex Eisen

Copy Editor
Marilyn Smith

Proofreader
Lisa McCoy

Indexer
Karin Arrigoni

Production Supervisor
James Kussow

Composition
Cenveo Publisher Services

Illustration
Cenveo Publisher Services

Art Director, Cover
Jeff Weeks

Cover Designer
Jeff Weeks

TERMS OF USE

This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.

THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.

We, as a team, would like to dedicate this book to Brad “The Nurse” Smith. May he heal in spirit and mind. And, of course, also to Angelo Bencivenga, the Secret Cyber Weapon of the Federal Government, who we all love to get advice from and give some level of grief to whenever we can. Without Angelo, we all would not have met!

 

I would like to thank the powers that be in this Universe for making my life full of chaos and stability.

—Sean M. Bodmer

 

To my beautiful and wonderfully understanding wife Christine, who has been so supportive throughout the years for my passion to better understand the relationship between people and technology. I also want to convey my respect and gratitude to the people who sacrifice bits and pieces of their lives so that others may live their lives in peace.

—Dr. Max Kilger

 

To my wonderful wife and children, for all their patience and understanding over the past few years; Cameron Hunt, for a wild and untamed ride—if we wrote down all the ideas, dear God (that’s the next book); Jeff Willhite, a true friend who instilled in me the right perspective of life; and Angelo Bencivenga, the only person in the US government with a “big picture” perspective distancing himself from those who pretend to keep pace.

—Gregory Carpenter

 

To my loving and supportive family, thanks for putting up with my crazy work schedule. Thanks also to my father, James Edward Jones, and to my undergraduate geography professor, Dr. Armando DaSilva, who both taught me to think strategically.

—Jade Jones

About the Authors

Sean M. Bodmer
, CISSP and CEH, is an active senior threat intelligence analyst at Damballa. He specializes in the analysis of signatures and behaviors used by the cyber criminal community. Sean focuses on learning tools, techniques, and procedures behind attacks and intrusions related to various organized persistent threats. Sean has worked in several information systems security roles for various firms and customers across the United States over the past 15 years. Most notably, he has spent several years performing black box penetration testing, exploit development, incident response, and intrusion and intruder analysis for Fortune 100 companies, the Department of Defense, and other federal agencies. Sean has shared numerous accounts of his findings at various industry conferences relating to the inner workings of advanced cyber threats. Sean has been hacking and developing exploits since before he hit puberty, and has made a career for himself by specializing in the impossible or improbable when it comes to performing analysis and attribution on the criminal underground.

Dr. Max Kilger
received his doctorate from Stanford University in Social Psychology in 1993. He has written and coauthored research articles and book chapters in the areas of influence in decision making, the interaction of people with technology, motivations of malicious online actors, the changing social structure of the computer hacking community, and the nature of emerging cyber threats. He is a founding and former board member of the Honeynet Project—a 10-year-old not-for-profit international information security organization that serves the public good. Max was also a member of the National Academy of Engineering’s Combating Terrorism Committee, which was charged with recommending counter-terrorism methodologies to the Congress and relevant federal agencies. He is a frequent national and international speaker to law enforcement, the intelligence community, and military commands, as well as information security forums.

Gregory Carpenter
, CISM, is a retired US Army officer with 27 years of service. He has served in the infantry, chemical, medical service, and intelligence corps throughout his career. Gregory has received numerous professional awards, including the prestigious National Security Agency Military Performer of the Year in 2007. He earned a B.S. from Colorado Christian University in 1993, and an M.S. from Seton Hall University in 2001. Currently, Gregory is employed by the Army Research Laboratory.

Jade Jones
received his commission as a US Navy Judge Advocate General Corps (JAG) Officer in 1994 and currently holds the rank of Commander in the Navy Reserve. His practice areas of expertise include information operations, intelligence, and space law. Jade holds a B.A. in Geography and Asian Studies from Towson University, and a J.D. from Boston College Law School. Jade is a civilian employee with the Department of Defense.

About the Technical Editor

Alex Eisen
is a computer scientist, information security analyst, tech editor, and associate professor. He researches parallels between evolutionary progression of “disruptive tech” and advancement of digital global culture and society. He has spoken at conferences, edited books, advised startups, and taught at UAT.edu. Now he hopes to be involved in creating integral edutainment media for Generations Z, Alpha, and on.

Contents

Foreword
Acknowledgments
Introduction

Chapter 1   State of the Advanced Cyber Threat
    Have You Heard About the APT?
    APT Defined
    What Makes a Threat Advanced and Persistent?
    Examples of Advanced and Persistent Threats
        Moonlight Maze
        Stakkato
        Titan Rain
        Stormworm
        GhostNet
        Byzantine Hades/Foothold/Candor/Raptor
        Operation Aurora
        Stuxnet
        Russian Business Network
        New Generation of Botnets and Operators
        Operation Payback
    Conclusion

Chapter 2   What Is Deception?
    How Does Deception Fit in Countering Cyber Threats?
    Six Principles of Deception
        Focus
        Objective
        Centralized Planning and Control
        Security
        Timeliness
        Integration
    Traditional Deception
        Feints—Cowpens
        Demonstrations—Dorchester Heights
        Ruses—Operation Mincemeat (the Unlikely Story of Glyndwr Michael)
        Displays—A Big Hack Attack
    Why Use Deception?
        The First US Army Group Deception
        Russian Maskirovka
    Deception Maxims
        “Magruder’s Principle”—Exploitation of a COG’s Perception or Bias
        “Limitations to Human Information Processing”
        “Multiple Forms of Surprise”
        “Jones’ Dilemma”
        “Choice of Types of Deception”
        “Husbanding of Deception Assets”
        “Sequencing Rule”
        “Importance of Feedback”
        “Beware of Possible Unwanted Reactions”
        “Care in the Design of Planned Placement of Deceptive Material”
    Understanding the Information Picture
        Half-Empty Version
        Half-Full Version
        A Question of Bias
        Totally Full Version
        Step-Beyond Version
        Two-Steps-Beyond Version
    Conclusion

Chapter 3   Cyber Counterintelligence
    Fundamental Competencies
    Applying Counterintelligence to the Cyber Realm
    Sizing Up Advanced and Persistent Threats
        Attack Origination Points
        Numbers Involved in the Attack
        Risk Tolerance
        Timeliness
        Skills and Methods
        Actions
        Objectives
        Resources
        Knowledge Source
    Conclusion

Chapter 4   Profiling Fundamentals
    A Brief History of Traditional Criminal Profiling
    The Emergence of Cyber Profiling
    Acquiring an Understanding of the Special Population
    The Objectives of Profiling
    The Nature of Profiling
    Basic Types of Profiling
    Two Logical Approaches to Profiling: Inductive vs. Deductive
    Information Vectors for Profiling
        Time
        Geolocation
        Skill
        Motivation
        Weapons and Tactics
        Socially Meaningful Communications and Connections
    Conclusion
    References

Chapter 5   Actionable Legal Knowledge for the Security Professional
    How to Work with a Lawyer
    What You Should Know About Legal Research
        Online Legal Resources
        Common Legal Terms
        The Role of Statutes in Our Legal System
        How to Find a Law
        Do Your Background Homework
    Reading the Law
    Communicating with Lawyers
    Ethics in Cyberspace
    Conclusion

Chapter 6   Threat (Attacker) Tradecraft
    Threat Categories
        Targeted Attacks
        Opportunistic Attacks
        Opportunistic Turning Targeted
    Evolution of Vectors
    Meet the Team
    Criminal Tools and Techniques
        Tailored Valid Services
        Academic Research Abuse
        Circles of Trust
        Injection Vectors
    Conclusion

Chapter 7   Operational Deception
    Deception Is Essential
    Tall Tale 1
        Postmortem
    Tall Tale 2
        Postmortem
    Tall Tale 3
        Postmortem
    Tall Tale 4
        Honeypot 1
        Postmortem
    Conclusion

Chapter 8   Tools and Tactics
    Detection Technologies
    Host-Based Tools
        Antivirus Tools
        Digital Forensics
        Security Management Tools
    Network-Based Tools
        Firewalls
        Intrusion Detection/Prevention Systems
    Deception Technologies
        Honeywalls
        Honeynets as Part of Defense-in-Depth
        Research vs. Production Honeynets
        Honeynet Architectures
        Honeywall Accreditation
        Content Staging
        Content Filling
        Honeynet Training
        Honeynet Objectives
        Honeynet Risks and Issues
    Check Yourself Before You’re Wrecked
        What’s the Status of Your Physical Security?
        How Does Your Wireless Network Look?
        What’s Traveling on Your Network?
        What About Your Host/Server Security?
        How Are Your Passwords?
        How’s Your Operational Security?
    Crimeware/Analysis Detection Systems
        What Happened on Your Box?
        What Did That Malicious Software Do?
    Conclusion

Chapter 9   Attack Characterization Techniques
    Postincident Characterization
    Another Tall Tale
        Discovery
        Malware
        Aftermath
    Real-World Tactics
        Engaging an Active Threat
        Traffic, Targets, and Taxonomy
        Aftermath
    Conclusion

Chapter 10   Attack Attribution
    A Brief Note About Levels of Information Present in Objects
    Profiling Vectors
        Time
        Motivations
        Social Networks
        Skill Level
        Vector Summary
    Strategic Application of Profiling Techniques
    Example Study: The Changing Social Structure of the Hacking Community
    Micro- and Macro-Level Analyses
    The Rise of the Civilian Cyber Warrior
        The Balance of Power
        Potential Civilian Cyber Warrior Threats
    Conclusion
    References

Chapter 11   The Value of APTs
    Espionage
    Costs of Cyber Espionage
    Value Network Analysis
    APTs and Value Networks
        The RSA Case
        The Operation Aurora Case
        APT Investments
    APTs and the Internet Value Chain
        It’s All Good(s)
        Bitcoin in the Future?
    Conclusion

Chapter 12   When and When Not to Act
    Determining Threat Severity
        Application Vulnerability Scenario
        Targeted Attack Scenario
    What to Do When It Hits the Fan
        Block or Monitor?
        Isolating the Problem
        Distinguishing Threat Objectives
        Responding to Actionable Intelligence
    Cyber Threat Acquisition
        Distinguishing Between Threats
        Processing Collected Intelligence
        Determining Available Engagement Tactics
    Engaging the Threat
        Within Your Enterprise
        External to Your Enterprise
        Working with Law Enforcement
    To Hack or Not to Hack (Back)
        To What End?
        Understanding Lines (Not to Cross)
    Conclusion

Chapter 13   Implementation and Validation
    Vetting Your Operations
        Vetting Deceptions
        Vetting Perceptual Consistency in a Deception
        Vetting Engagements
    Putting This Book to Use with Aid from Professionals
    How to Evaluate Success
    Getting to the End Game
    Conclusion

Glossary
Index

Foreword

The purpose of locks is not to deter criminals; it is to keep honest people honest.

—Anonymous reformed thief

Cyberspace Is the Wild West

Making deception the major theme of this book is provocative: it makes explicit and unusual something that is inherent and commonplace. As readers of books such as this, we all know that we live in a world surrounded by deceptions, ranging from the trivial feints of sports competition to the commercial marketplace to the terrorist bomb maker.

What is different or unique about the deceptions involved in the defense of computer networks that makes them worthy of special study? Ubiquity and technology characterize cyberspace. Time and space hardly exist in the cyber world. Actions take place at nearly light speed. Data theft can occur very rapidly and leave no trace—that which was stolen may appear to have been undisturbed. That rapidity of communication virtually negates space. If the electronic means exist, connections can be made from virtually any point on the earth to any other with equal ease and speed. Unlike gold bullion, data copied is as good as the original data. Physical proximity is not required for theft.

Paradoxically, the highly structured and complex technology of computers gives the technically sophisticated thief unique advantages over the inexpert majority who are mere users of networks. It is the highly structured nature of the computer and its programs that makes them all at once so useful, predictable, reliable, and vulnerable to abuse and theft. The user and abuser alike are vulnerable to deceit precisely because the systems are so useful, predictable, reliable, and vulnerable. Only the humans in the system are vulnerable to deception. Yet the advantages of connection to the global network are so great that total isolation from it is possible only in the event of specific and urgent need. The advantages of connection trump the risk of compromise.

The instructions to computers must be unambiguous and specific. If computers are to communicate with each other, they must do so according to protocols understood by attacker and defender. There are, of course, many protocols and systems of instructions, each consistent within itself, intelligible, and unambiguous. The possibility of secret instructions exists, but someone must know them if secrets are to be useful. These necessities impose themselves on the technologies and hardware of networks.

A protected network is one that represents itself to users as protected by requiring users to show evidence of authorization to access it—typically by means of a password. Gaining unauthorized access to information or data from a protected network, however accomplished, is theft. We refer to the intruder who gains this access as the “adversary.”

Most often, attacks on networks have consisted of adversaries taking advantage of well-known, tried-and-true human failings:

Failures to follow best practices
Failures to heed warnings
Failures of management to provide adequately for personnel security issues
Failures of individuals to control their appetites

 

People have been, and almost certainly will continue to be, the primary points of entry to computer-related deception.

Adversaries attack, hack, and intrude on computer networks largely by using their technical skills to exploit human fallibilities. The higher the value of the data they seek and the more organized the effort, the more likely it is that the technical skills are leveraged from conventional manipulative criminal skills.

Each network is designed as an orderly world, which nevertheless is connected to a chaotic world. Is it possible to be connected and not be infected by the chaos? A few years ago, at a conference on network security, one participant complained that operating a network in the face of constant hacking attempts was like being naked in a hail storm. Was there nothing that could be done to protect oneself? Another participant replied, “No.” Legal and pragmatic constraints made it difficult, if not impossible. Has there been much change? Not if what we read in the newspapers is true.

Even without attackers, as networks expand and the data in them grows, apparently simple questions may lead to unexpected destinations, often by convoluted routes. On the Web at large, simple questions become complex. Settled truths lose their solidity. There is so much information. And it is so hard to keep the true sorted from the false. As the King of Siam said, “Some things nearly so, others nearly not!” (www.lyricsbay.com/a_puzzlement_lyrics-the_king_and_i.html).

As the Internet and cyber world grow in technical complexity and substantive variety, when will the possible permutations of connection with and between networks become infinite? Do any of us truly understand when we drop in a request exactly why we receive a particular answer? I think fondly of the Boston Metropolitan Transit Authority. It inspired the 1950 short story “A Subway Named Moebius,” by A. J. Deutsch, which told the tragic tale of what happens when a network inadvertently goes infinite.[1]

Even short of such drama, there is no certainty, no matter the perfection of the technology, that the seekers and users of information will ask the right questions, find the right information, or reach correct conclusions from the information they find.

Paradoxically, the search for the perfect vessel—a container for information impervious to unauthorized uses—motivates others to go to great lengths to penetrate it. Therefore, the hider/finder perplexity is always with us, and so are deception games.

Deception is most often thought of in terms of fooling or misleading, as something that adds to the uncertainty that characterizes real-world situations. Not true!

Properly considered, the purpose of deception is not to fool or mislead. Whether deployed by friend or foe, its purpose is to achieve some advantage unlikely to be conceded if the target or object of the deception understood the deceiver’s intent. The purpose of deception is, in fact, to increase predictability, though for only one side of a transaction. It increases the confidence one side may feel in the outcome to the disadvantage of the other side.

Having an advantage also gives one side the initiative. Seizing the initiative, exercising and benefiting from it, is the ultimate object of deception.

This view raises several questions that cannot be answered, but which must be considered and whose implications must be taken into account if deception is to be either deployed or defended against on behalf of computer networks:

What exactly is deception?
Why is deception necessary?
Given the necessity of deception, what general issues are, or ought to be, considered before one takes it up?

 

Definition of Deception

Deception in computer networks is our subject. We live in a sea of deception. Virtually all living things recognize that they are the prey of some other, and survival depends on some combination of physical attributes and wit. Four rules apply:

Do not be seen—hide.
If seen, run away.
Counterattack if there is no alternative.
When none of the preceding three are possible, use wits and resort to subterfuge.

 

Hog-nosed snakes and possums feign death.[2] Puffer fish make themselves too big and unpleasant to swallow, and skunks, well… you get the idea. The substance of this book explores the human, rational, and computer network analogs.

Deception’s distinguishing characteristic is that its purpose is to affect behavior. (You can’t deceive an inanimate object, after all.) So the purpose of the deception is to manipulate someone to act as he would not do if he understood what the deceiver were up to. However, taking that desired action probably will not be sufficient. Tricking the bank manager into giving up the combination to the vault still leaves the job of gathering up the money and hauling it away, not to mention avoiding the police long enough to enjoy it.

So deception has three parts:

Define the end state (after the deception succeeds, what is the state of things?).
Perform the action(s) that causes the adversary to cooperate, or at least not interfere with the deceiver’s action.
Execute the action required to secure the intended advantageous state.

 

We give these parts names: the objective, the deception, and the exploitation. Without all three, there can be no deception plan. It is possible to fool, mislead, or confuse. But to do so may cause the adversary to take some unforeseen or unfavorable action. And unless one has the intent and capability to exploit that action induced in the adversary to achieve a goal, what was the purpose of the whole exercise? Of what benefit was it?

Merely hiding something is not deception. Camouflage is an example. Camouflage hides or distorts the appearance of an object, but it does not alter the hunter’s behavior. A newborn deer is speckled and has no scent—essentially invisible to predators—so that it can be left alone while its mother browses. But deer make no effort to defend a fawn by distracting or attacking predators should the fawn be discovered. In contrast, some ground-nesting birds lay speckled eggs, make nests of local materials, and have chicks of a fuzzy form and indeterminate color to discourage predators. But they also will feign a broken wing in efforts to distract predators and lead them away from their nests. They are deceiving their enemies in a way deer do not. On the other hand, some birds will attack predators near their nest, attempting to drive those predators away, but they don’t try to lead the predators away from the nest.

Deception, then, is about behavior both induced in the adversary and undertaken by the deceiver to exploit it. To deceive, it is not sufficient to induce belief in the adversary; it is necessary also to prepare and execute the exploitation of resultant behavior.

As long as the target or object of our deception does what we want him to do, that should be sufficient for deceptive purposes. The adversary may have doubts. He may take precautions.[3] The deceiver’s response is not to embroil himself in attempting to discern the quality of his adversary’s beliefs—a fraught task in the best of times—but to make contingency plans of his own to maintain the initiative and achieve his aims whatever the adversary may do. The adversary’s actions are sufficient warranty for his beliefs.

Purely as a practical matter, how likely is it that the deceiver will be sufficiently certain of his knowledge of the state of mind of an adversary partially known and far away? As deceivers, we may know what the adversary knows because we told him or because we know what someone tells us he was told. But can we know what the adversary believes? What he intends? How today’s environment impacts this morning’s beliefs?

The only thing in the world anyone controls with certainty is his own behavior. From within an organization where action must take place through agents and intermediaries, there is little enough control. As deceivers, we may know only what we intended by acting in a certain way and what we intended if the adversary responded in the anticipated way. The purpose of the deception, after all, is to make the adversary’s actions predictable!

You will say that not knowing his state of mind or beliefs, we cannot know with assurance whether the adversary acted as he did in accord with our intentions or in a deception of his own in which he is using secret knowledge of our intentions. You are right to say so. That is why the deceiver, as well as—and perhaps more than—the deceived must have doubts and contingency plans. It is the deceiver who accepts the added risk of committing to exploiting activity he has initiated.

Card workers (magicians) use the theory of the “Out” as insurance that their tricks will amaze the audience even if they fail. An Out is a piece of business prepared in anticipation of something going wrong in front of live audiences (see “Outs”: Precautions and Challenges for Ambitious Card Workers by Charles H. Hopkins, illustrated by Walter S. Fogg, 1940). Failure, to one extent or another, is highly likely in any effort to manipulate another. By anticipating when and how failure may occur, it is possible to plan actions to not merely cover the failure, but to transition to an alternate path to a successful conclusion.

Does this differ from old-fashioned contingency planning? Perhaps radically. In a contingency plan, typically the rationale is: “I’ll do A. If the adversary does something unanticipated or uncooperative, then I’ll do C, or I’ll cope.” The theory of Outs would have it: “I’ll do A, but at some point the adversary may do something else, B or B’. If so, I am prepared to do C or C’ to enable me, nonetheless, to achieve A.” The emphasis is on having anticipated those points in the operation where circumstances may dictate change and, having prepared alternatives, enabling achievement of the original objective nonetheless. “It’s the end state, stupid!” to paraphrase.

Deception consists of all those things we must do to manipulate the behavior of the target or object of our operations. It follows that deception is not necessarily or even primarily a matter of technical mastery. In the context of this book, it is a state of mind that recognizes it is the value of the information in the network that attracts hostile interest. In order to penetrate protected networks, certain specific items of intelligence are needed. And, therefore, it is the adversary’s interest in these items of information and his need for the data on the network that make it possible to induce him to act against his own interest.

This insight was emphasized by Geoffrey Barkas, a British camouflage expert in North Africa. (Before and after the war, Barkas was a successful movie producer.) After the Germans had captured one of his more elaborate schemes, Barkas thought the Germans, now aware of the extent of British capabilities, could not be fooled again. They were though, and Barkas realized that as long as the enemy had a good intelligence service to which enemy commanders paid attention, it was possible to fool them again and again (as described in The Camouflage Story (From Aintree to Alamein) by Geoffrey and Natalie Barkas, London, Cassell & Company Ltd, 1952).

Barkas realized that it is the need for information and willingness to act on the information acquired that creates the vulnerability to deception. It is no more possible to avoid being deceived than it is to engage in competitive activity without seeking and using information. One can try to do so, and one might succeed for a time. Without intelligence, one could blunder, and in blundering, confuse an opponent into blundering also, but one could not deceive. Deception presupposes a conscious choice. Deception is in the very nature of competitive activity.

The interplay among competitive, conflicting interests must inform the extent, expense, and means used in defending the integrity of the information/data stored in networks. Both attack and defense are preeminently human, not technical.

An excellent, if trivial, example is this football ploy: A quarterback gets down to begin a play, as does the opposing line. He then stands up and calmly walks across the line between the opposing defenders, and then sprints for the end zone. By the time the defenders recover, it is too late. (You can see this in action at http://www.koreus.com/video/football-americain-culot.html.) This is perfectly legal. It’s done right in plain sight. Success depends entirely on surprise (and a speedy quarterback). It won’t work often, but when conditions are right, it meets all the requirements of a deception plan well executed.