David Omand – How Spies Think – 10 Lessons in Intelligence – Part 7

Lesson 5: It is our own demons that are most likely to mislead us

‘Well, you can kiss my ass in Macy’s window’ was the brutal one-line dismissal by Ava, the CIA’s Iraq Group Chief, of the over-reliance of US Biological Warfare (BW) expert analysts on a single human intelligence source on Saddam Hussein’s BW programmes, codenamed Curveball. When she challenged the experts’ faith in using information from that source, in her words, ‘they looked at me like pigs looking at a wristwatch’.1 Although not a weapons specialist, Ava, as an experienced intelligence officer, could sense that there might be a problem with the source. Her intervention was bound to be unpopular. Not least, pressure was mounting from the Bush administration to prepare for the invasion of Iraq and to justify this by revealing the extent of Saddam’s holdings of illegal weapons of mass destruction. That included exposing his past use of BW weapons – deliberately engineered to spread lethal and incapacitating disease and among the most horrific known to man – and what were assessed to be his current capabilities.

Curveball seemed the answer to the BW experts’ prayers. He was an Iraqi chemical engineer who had turned up in a German refugee camp claiming to have worked on Saddam’s BW programmes and ready to spill the beans. To the old operational hands in the CIA and Britain’s MI6 he seemed too good to be true. The German overseas intelligence service, the BND, took charge of Curveball and between January 2000 and September 2001 shared almost 100 reports based on his debriefing with defence intelligence in the US and UK. Crucially, Curveball claimed that Iraq had built several mobile BW production units and that one of those units had begun production of lethal BW agents as early as 1997. A diagram of a truck adapted to be a mobile BW production unit based on Curveball’s information was even included in the presentation by Colin Powell, Secretary of State, to the UN Security Council as part of the US justification for war.

The problem was, those mobile BW units did not exist. Curveball had invented them. The experts fell for his story.

After the war Curveball, real name Rafid Ahmed Alwan al-Janabi, was tracked down by journalists. He admitted that he had lied in his reporting, and said that he had watched in shock as it was used to justify the war. He told them he fabricated tales of mobile BW trucks and clandestine factories in an attempt to bring down the Saddam Hussein regime, from which he had fled. He added: ‘Maybe I was right, maybe I was not right … they gave me this chance. I had the chance to fabricate something to topple the regime. I and my sons are proud of that …’2

Before the invasion of Iraq in 2003 CIA and MI6 humint (human intelligence) professionals had developed doubts about the credibility of Curveball, not least the CIA Iraq section chief, Ava, quoted above, and her counterparts in London. Although believing that much of Curveball’s reporting was technically credible (he was after all a chemical engineer), they were not convinced that he was a wholly reliable source since not all his reporting checked out, and elements of his behaviour struck them as typical of individuals whom intelligence agencies would normally assess as fabricators.

One obstacle in checking out their suspicions was that the BND would not provide US or UK analysts with direct access to Curveball. The analysts did not know whether Curveball had been offered inducements, such as a German passport and assistance with resettlement. Nor how the questioning had been conducted. They wondered if the witness had been inadvertently led and had been able to infer what US analysts were most keen to know – and thus what information would most please them (always a problem with defectors). There were rumours about his drinking. Several inconsistencies were detected in Curveball’s reporting which heightened doubts about his reliability. Disturbingly, the quality of intelligence from him seemed to get better over time. That might be his increasing confidence in the good faith of those questioning him, or it might be he was working out what to say that would produce the best reward.

Great efforts were made by the US and UK intelligence services to check out Curveball. Investigation into his background and university records revealed that he had indeed been trained in Iraq as a chemical engineer. He was known to have been involved on the fringes of Saddam’s 1990 BW programme. On the one hand that made his reporting of what was currently going on entirely credible from a technical point of view; on the other hand, it put him in an ideal position to exaggerate or even make up details if he so chose.

In London, analysts pored over aerial photographs of Iraq trying to identify the locations for prohibited activity described by Curveball to see if his stories stacked up. One site seemed to be on the wrong side of the river from where he described it – perhaps a slip of memory. Or perhaps an ominous sign that he was fabricating. In 2001 Curveball’s description of a facility that he claimed was involved in the mobile BW programme was contradicted by imagery of the site, which showed a wall blocking the view to what Curveball had claimed were the mobile trailers. Analysts explained away this discrepancy by speculating that the wall spotted by imagery might be a temporary structure put up by the Iraqis to deceive US satellite reconnaissance efforts. In another instance, Iraq was said to be filling BW warheads at a transportable facility near Baghdad. When imagery was unable to locate the transportable BW systems at the reported site, analysts assumed this was another example of Iraq hiding activities from US satellite over-passes. There is a very human tendency to search for or interpret information in a way that confirms one’s preconceptions. It is comforting to think the information coming in bears out our prior beliefs. Psychologists call this confirmation bias. Confirmation bias is accentuated by the tendency for people to scrutinize fiercely information which contradicts their prior beliefs (sometimes known as disconfirmation bias) while accepting too readily without criticism information that is consistent with their preconceptions.

Placing too much weight on Curveball’s reporting on biological weapons was not the only error that Western intelligence encountered in trying to assess the state of Iraq’s capabilities. Analysts also misinterpreted the intelligence that was being reported on Saddam’s chemical weapons programmes. The analysts misled themselves, not, in that case, through being subject to deliberate deception, but through a series of individual and collective cognitive errors. Burned by the experience of having been deceived by Saddam over the extent of his WMD capabilities, as uncovered by UN inspectors after the first Gulf War, analysts started with the strong presumption that he was playing the same game in 2002. They felt able to dismiss contrary indications that Iraq might not actively be pursuing its prohibited programmes by chalking these indicators up to Iraq’s well-known denial and deception efforts. That outlook, that Saddam must be hiding prohibited materials not surrendered after the first Gulf War in 1991, was shared across all Western intelligence agencies.

Such was the strength of this pre-war ‘group think’ that when eventually UN inspectors returned to Iraq in 2003, the US and UK analysts were slow to admit openly to their bosses and to each other any secret thoughts they might have been harbouring that the reason the inspectors were not finding the predicted stocks of BW and CW weapons and material was because they no longer existed inside Iraq.

In hindsight, a key lesson was the failure by the Bush and Blair governments to distinguish between those parts of the intelligence assessments that were based on hard evidence (such as on Saddam’s prohibited missile-testing capabilities) and those that rested on extrapolations and assumptions made by the analysts confident they already knew the answer. As Colin Powell lectured CIA analysts after the war, in future he wanted them to ‘Tell me what you know. Tell me what you don’t know. And tell me what you think’, to which a highly experienced analyst added: ‘And make clear which is which.’3

Another conclusion that is evident is that once suspicion has taken hold it breeds yet more suspicion. Saddam Hussein found this out in 2002 as he tried to persuade the West that he no longer retained the chemical and biological weapons capability that he had used against Iran and against his own people, and that he had gone to such lengths previously to conceal. His assurances to the West that work on those programmes had stopped, while failing to comply with the UN’s demands for full accounting of his past capability, were not surprisingly disbelieved. As the CIA’s Director, George Tenet, wrote in his memoirs: ‘Before the war, we didn’t understand that he was bluffing, and he did not understand that we were not.’4

The need to check our reasoning

The errors in the intelligence assessments over Iraq were not the result of conscious politicization of intelligence by the analysts to please their customers. They resulted from the great capacity of the mind for self-deception and magical thinking, believing we are seeing what deep down at an emotional level we want to see. It is then natural to find reasons to rationalize that belief.

One of the advantages of using the four-part SEES model, as discussed in the chapters of Part I, is that it makes it easier to spot at each level the cognitive biases that lead us to see what we want to see. We met this phenomenon in Chapter 1 with the example of how British Second World War deception efforts fooled the German High Command by feeding them information that they wanted to believe. We saw a different form of cognitive bias in Chapter 2 when policymakers anxious not to get involved militarily resisted seeing the developing Bosnian conflict as potential genocide. In Chapter 3 we identified the cognitive problem of mirror-imaging on the part of Western analysts failing to forecast how the communist regime in Moscow would react to the reform movement in Czechoslovakia in 1968. In Chapter 4 we had the example of the head of Israeli military intelligence convincing himself he had a way of providing strategic notice of Egyptian readiness to consider an attack on Israel, an error in imagination that was almost literally fatal for the state of Israel.

The vulnerability of analysts to cognitive biases was systematically examined in the 1970s and 1980s by Richards ‘Dick’ Heuer, a long-term CIA intelligence officer with forty-five years of experience to call on. In his major work, The Psychology of Intelligence Analysis, Heuer warned that prior knowledge of the prevalence of individual cognitive biases does not seem to stop people falling under their spell.5 He argued that there have therefore to be systematic checks introduced to manage the risks they pose. After high-level inquiries into the evident intelligence failure to warn before the 1973 Yom Kippur war, the Israeli government set up a standing ‘devil’s advocate’ group within military intelligence, staffed by the best analysts, with direct access when required to the Prime Minister and a remit to adopt a contrarian approach and to challenge the prevailing orthodoxy.6 The motto of the group is ‘Ipcha Mistabra’, in Aramaic, translatable as ‘The opposite is most likely’ or ‘On the contrary …’

The good news is that there is a great deal of experimental psychological research available, as well as much practical experience from government and business, about the many cognitive traps and illusions that can ensnare us whether at the level of the individual, of the work group or of the institution:

Individual. Cognitive and emotional biases affect us as individuals. That is part of the human condition. These biases are usually not evident to us at the time, but a good supervisor or colleague will probably become aware of them if sensitized to what to look out for. Understandably, it may not be easy for us to acknowledge how our reasoning may have been influenced without our being consciously aware.

Group. Groups can develop their own distinctive dynamics, the equivalent of a collective personality that is more than just the sum of those of each of us in the group. Members of a group both consciously and unconsciously exercise a reciprocal influence on each other, such as a pressure for conformity or a desire for closure. The existence of such distinctive group behaviours has been established in many therapeutic settings by psychologists and psychoanalysts7 – for example, in relation to hostile feelings towards the ‘outgroup’, i.e. those who are not members of the group.

Institutional. Internal processes, rules, hierarchies and power structures can unconsciously influence the judgements and decisions reached by an analytic group, just as they can affect the institution’s interaction with its stakeholders or the public. Dynamics at the level of the organization arise from the way that those within have internalized its culture, history and structure. There may be complicated psychic relationships between the different groups of people within the organization, such as between intelligence analysts and policymakers, or generalists and specialists, or civilians and uniformed services. There may also be important dynamics generated by the way the institution interacts with other organizations, such as the inevitable differences of perspective between law enforcement and intelligence agencies working on understanding the same threat. These influences are hard to pin down for those who are thoroughly accustomed to living within the organization’s culture. Critics of the impact of institutional dynamics tend to be dismissed with ‘that is just the way things are done round here’.

Under each of these three headings we can now identify the most significant cognitive biases to watch out for when engaged in significant reasoning.

Cognitive biases and influences on the individual

Psychologists have replicated in experiments under a range of different conditions the existence of specific cognitive biases in individual subjects carrying out perceptual and other mental tasks.8 Some have entered into everyday speech, with labels such as cognitive dissonance, whereby the mind finds it hard to hold at the same time the favoured story and the contrary evidence that it might not be true. Such mental tension on the part of the intelligence analysts is liable to be transferred to the national security policymakers and operational commanders who can also fall victim to cognitive dissonance.9 And we are all liable to have to wrestle with inconsistent beliefs, often suffering stress as a result.

In a study for the UK JIC in 1980 a seasoned intelligence professional, Doug Nicoll (who had worked during the Second World War on German Army and Air Force Enigma in Hut 6 at Bletchley Park and risen to be Deputy Director of its successor organization, GCHQ), concluded that even the most experienced analyst (and we can generalize this to all of us when we are faced with problems to solve) has cognitive blind spots, especially when faced with incomplete or ambiguous information. Nicoll identified six specific biases that he held responsible for why Western governments had too often been caught out when faced by foreign aggression.10

Mirror-imaging. This is the trap to be wary of on a first date: the presumption that your prospective partner is bound to feel the same way as you do about what makes for an exciting evening out. Nicoll identified the unconscious tendency to assume that factors which would weigh heavily in the United Kingdom would be equally serious constraints on countries ruled by one-party governments or under the control of a single leader. Analysts had, for example, too readily assumed that the weight of international opinion was a factor that would affect the formation of policy in autocracies to the same extent as it did in the democracies. Nicoll observed that public servants brought up in the post-war liberal democracies ‘found it difficult to believe that the potential aggressor would indeed find the use of force politically acceptable’.11 Margaret Thatcher at the time also did not disguise her belief that there was an inbuilt tendency for diplomats to over-emphasize the role of peaceful negotiation in solving international problems. She once cruelly said on television when discussing the Foreign Office: ‘When I am out of politics, I’m going to run a business called “Rent-A-Spine”.’12

Transferred judgement. This is the implicit assumption that others will think about and assess situations as you do. A mistake often made in showrooms, where you can too easily assume that the salesperson is thinking about the merits of the product on display in the same terms as you are and therefore has your interests at heart. Like mirror-imaging, this bias comes from an inability to put oneself inside the mind of the other. It can reflect unconscious cultural or racial stereotyping, as with the Vietnam War assessments by US logistics officers that it would not be possible for the North Vietnamese to bring sufficient supplies down the Ho Chi Minh jungle trail to sustain major offensives in the South, given US bombing. Earlier in Indo-China, French staff officers could not believe it would be possible for the Viet Minh to bring artillery to bear from the hills surrounding the isolated French base of Dien Bien Phu since that was not a feat they would have attempted. General Giáp calculated differently and in 1954 inflicted a humiliating defeat on the French forces, precipitating their withdrawal from Indo-China. After the war, Giáp concluded that French defeat had stemmed fundamentally from a failure by their commander, General Navarre, to understand the mind of his adversary: he had not realized that it was a people’s war.13 Perhaps the lesson we need to learn is two-fold: ‘It is true we should avoid ethnocentrism, the idea that folks are all like us. But that doesn’t mean we should indulge in condescending exoticism, the notion that we are strategic, modern and political whereas they, our benighted enemies, are visceral and primitive.’14

Perseveration. Nicoll saw that even with mounting evidence to the contrary analysts tended to stick with their original interpretation. In personal relationships this can be the questionable virtue of sticking by someone even when new evidence shows they are behaving as a heel. We remember their good qualities that first endeared them to us. The JIC in a developing crisis tended to make up its mind early and was resistant to changing it, downplaying any last-minute signals of the enemy’s true intentions. Nicoll called this perseveration, from the psychological phenomenon whereby data (such as telephone numbers) if learned incorrectly the first time are more difficult subsequently to learn correctly. The medical profession also uses the term perseveration to describe an involuntary repetition of words or other acts long after the relevant stimulus has ceased. The bias affects policymakers too: even in the face of evidence that the policy is not working they will keep repeating the positive messages that led them to adopt it in the first place.

Perseveration can also be thought of as a special case of what psychologists would call choice-supportive bias. That is the tendency of the individual, after coming to a judgement from a choice of alternatives, to remember the positive features of their chosen hypothesis and to forget the negative indicators. We remember the best times with those who have been our friends over a long period of years. Without realizing it the analysts will end up skewing future assessments through having those features uppermost in their mind. That such an effect is shown in psychological experiments should be unsurprising. When a choice is made in good faith it is because the individual genuinely believed at the time that it was the right choice. As a result, it is the reasons that led to that choice that are more likely to stick in the memory and to come to the surface later since they help minimize any lingering feelings of doubt. As a general lesson it is unsurprising that most of us prefer to avoid the pain of regret at the choices we have made.

War as a deliberate act. Nicoll showed from his case studies that armed aggression is usually a deliberate act planned in advance and, he concluded, would very rarely be the result of an accidental crisis or of opportunistic chance. The JIC failure to forecast that Saddam Hussein in 1990 would invade Kuwait is an example. Saddam wanted to annul Iraq’s substantial debts to Kuwait for financial support during the earlier Iran–Iraq War. Starting a war deliberately for such a reason did not seem credible to the Western analysts (mirror-imaging), especially given that there was Arab League mediation in progress and diplomatic optimism that a settlement could be reached. This would similarly be the case for our natural reluctance to believe that a friend might deliberately betray our confidences for their own advantage.

Limited information. Nicoll had plenty of experience over his long intelligence career of the difficulty of assessing what might happen next in circumstances where there was very little secret intelligence to illuminate the situation. The cases he examined were all ones where the JIC had been surprised by the outcome. These tended to involve countries and regions in which the priority for intelligence collection had been low precisely because aggression was not expected. This is a vicious circle we can all experience: failure to have strategic notice of trouble ahead means a lower priority for our attention, so the information we need to warn us is less likely to be spotted when it is needed and therefore the risk of unwelcome surprises is greater.

Deliberate deception. Several of Nicoll’s case studies featured deliberate deception and he advised analysts always to look for attempts at deception by the potential aggressor, and sometimes by the victim trying to exaggerate their military strength to discourage an attack. These might be simple measures such as portraying troop movements as an exercise or very complex multi-dimensional deception programmes. A nation committing itself to military operations will almost certainly do everything in its power to preserve tactical surprise as to the time and place of operations through deception, even when as with D-Day in 1944 there could be no strategic surprise over the intention to open the second front in Europe.15 The Soviet Union had already successfully used deception in battles such as Stalingrad, and its belief in the power of deception must have been reinforced by the success of the D-Day landings. The teaching of deception (maskirovka) became an important part of the syllabus of the Soviet military staff college (the Frunze military academy) and for training intelligence officers of the GRU and KGB. Detecting deception is so important in analysis (and today for all of us in spotting ‘fake news’ and deceptive propaganda) that Chapter 7 is devoted to the topic.

The Nicoll Report was discussed by the JIC at its meeting on 4 March 1982, and within a few weeks a copy of the report had been sent to the Prime Minister, with assurances from the JIC that the Committee considered itself alert to the lessons to be learned. As is often the case, history cruelly intervened. Just a few days later, the Argentine military Junta authorized the invasion of the Falkland Islands, and caught the UK by surprise. The UK had again fallen into the same traps as Nicoll had identified.

Doug Nicoll concluded with lessons in the need for care in the use of language in describing intelligence judgements (something much later that Lord Butler in his post-Iraq 2004 WMD inquiry found still wanting16). Nicoll emphasized the importance of policymakers understanding the meaning of phrases such as ‘no evidence’. To take a contemporary example, an intelligence assessment today might state (as is very probably the case) that there is currently no evidence of terrorists within the UK having access to a surface-to-air missile with which a civilian airliner might be shot down. Such a statement does not mean that the intelligence community intends to convey a reassuring message that the risk of that happening over the next few years can be ignored (and so measures to detect such weapons being smuggled into the country are not needed) but only that at the moment there is no evidence of this having happened. A general lesson worth bearing in mind is that the answer you get depends upon the question you asked.

Group behaviours

Problems of bias also arise as a consequence of group dynamics. We have all heard of group think in which the desire for harmony or conformity within a group leads to judgements being nodded through without proper scrutiny. Most people want to feel they are valued members of a group and fear losing the respect and approbation of colleagues. When an individual group member has an insufficient sense of self-worth or of status within the group, then the individual may play to the group gallery and suppress private doubts. There are many examples of such feelings having inhibited dissenting voices. Resistance to an argument is often the result of a state of cognitive dissonance in which excuses are readily found for not accepting new relevant information that would conflict with a state of belief to which the group or individual is emotionally committed.17 The harder it may have been to reach the original conclusion the more the group or the individual analyst is likely to be invested in the result and thus to resist unconsciously the discomfort of having to hold in the mind a contrary view.

For most of us there is an inclination to believe opinions if we know many other people hold them. This is called the bandwagon effect, the tendency that encourages the individual to conform to consensus. In group discussions it helps to have one or more contrarians, those who by inclination like swimming against the tide and thus help surface all the relevant arguments. That effect can, when necessary, be contrived by the leader of the group choosing an individual to be the devil’s advocate with acknowledged licence to take contrary positions regardless of hierarchy. Or the whole group can indulge in red teaming to explore the problem from the adversary’s point of view. A different group (Team B) could be asked independently to examine the material seen by the first group (Team A) and come to their own conclusions on the same evidence. There is a danger of politicization here. If the customer for some piece of analysis does not like the outcome, they may call for another analytic group to be set up and invited to examine the evidence independently. If, for example, the conclusions of the original analysis were seen as too cautious, then it is likely that the members of the new group will be chosen for their boldness.

The leader of an analytic group can make a huge difference to its work especially by setting reasonable expectations and insisting at the outset upon the standards of debate and argument to which members must conform, and ensuring that the group too becomes self-aware of its own thinking processes. Being open about the risk of bias within a group is the best antidote to cognitive failures. Poor group leaders are, however, liable to become the focus for negative feelings if the task of the group is not going well.

In the conduct of an analytic group, the leader has to insist that all possible explanations are explored. There is a known psychological tendency (called the ambiguity effect) to skip over possible hypotheses for which there is little or no direct reporting, and which thus seem unjudgeable, and unknowingly to spend the time discussing hypotheses for which there is evidence. A rush to early judgement can be avoided by insisting upon working systematically through the evidence using structured analytic techniques, as already described in Chapter 2. But there will come a point in a prolonged debate at which the strong urge for the psychological relief of closure will come upon the group. In such circumstances taking time out to have a breather, to let interpersonal tensions relax and minds refocus, is usually a good idea, or even suggesting the group sleep on the issues and return the next morning to check whether they have had any second thoughts.

As the 2005 Robb–Silberman US Senate Inquiry into the intelligence misjudgements over Iraq concluded:

We do not fault the Intelligence Community for formulating the hypothesis, based on Saddam Hussein’s conduct, that Iraq had retained an unconventional weapons capability and was working to augment this capability. Nor do we fault the Intelligence Community for failing to uncover what few Iraqis knew; according to the Iraq Survey Group only a handful of Saddam Hussein’s closest advisors were aware of some of his decisions to halt work on his nuclear program and to destroy his stocks of chemical and biological weapons. Even if an extraordinary intelligence effort had gained access to one of these confidants, doubts would have lingered.

But with all that said, we conclude that the Intelligence Community could and should have come much closer to assessing the true state of Iraq’s weapons programs than it did. It should have been less wrong – and, more importantly, it should have been more candid about what it did not know. In particular, it should have recognized the serious – and knowable – weaknesses in the evidence it accepted as providing hard confirmation that Iraq had retained WMD capabilities and programs …18

Another way of describing this general lesson about being less wrong is to highlight the need to take time out to double-check the thinking. But it is also likely that the more important the issue, the more urgent will be the calls for information. It will take conscious and deliberate effort, and courage, on the part of the group leader to insist upon going back over all the workings to check the reasoning and weight of evidence.

Institutional dynamics

Institutions have their own distinctive cultures, in which corporate behaviours considered correct get passed on from generation to generation. That can be a great strength in adversity but can also lead to a bias when it comes to interpreting the world. Institutions also exhibit personality traits and suffer from time to time the equivalent of nervous breakdowns or exhibit paranoia towards other organizations. National intelligence and law enforcement agencies around the world, for example, are notorious for feuding between themselves and refusing to share information on their cases. In the case of Curveball that opened this chapter, after questions about Curveball’s credibility had begun to emerge, a CIA operational officer in February 2003 sent a message to Pentagon officials expressing concern that Curveball had not been vetted. The next day the Pentagon official who received that message forwarded it by electronic mail to a subordinate, requesting input to answer the CIA’s query, saying that he was ‘shocked’ by the CIA’s suggestion that Curveball might be unreliable. The reply (which was inadvertently sent to the CIA) observed that ‘CIA is up to their old tricks’ and that the CIA did not ‘have a clue’ about the process by which Curveball’s information was passed from the BND. That is an example of longstanding bureaucratic rivalry in action, resulting in the rationalizing away of awkward information.

There are inevitable cultural differences between domestic and external services, and between essentially human and technical services, and again between predominantly military and civilian organizations, and finally between those security organizations with law enforcement powers and functions and those that are primarily covert intelligence gatherers. Each of these distinctions – and the secrecy and danger that surround their work – can generate tensions, not least reflecting the sometimes very different personality types of the people they employ. These are the tribes that have to come together in analytic groups to draft all source intelligence assessments. Understanding the indirect influences that their institutions exert on their members is important knowledge for the leader of an analytic group.

Cognitive biases in everyday life

The individual, group and institutional biases that Nicoll identified for his case studies of intelligence failures can be seen as special cases of more general cognitive biases that we can see in business and everyday life. These biases are very common in political debate, as they are in intelligence analysis. The advent of social media with applications such as Twitter has, as we will discuss in Chapter 10, opened the way to deliberate exploitation of confirmation bias to sell political ideas and products alike.

A lesson that the founder of scientific intelligence in the Second World War, Professor R. V. Jones, highlighted (he called it Crow’s law) was do not believe what you want to believe until you know what you need to know.19 Those who subconsciously need the reassurance of confirmation would already be expecting intelligence to confirm their view (this common bias is known as the observer-expectancy effect).

Another example is what is called inattentional blindness. Looking is not the same as seeing. A related problem (known as the focusing effect) is that you can end up so focused on a task that you fail to spot what is going on around you. If you are not one of the 20 million who have already watched the YouTube video asking the viewer to count the fast passes between a team of basketball players, I invite you to try it.20 Given that tricky task of counting basketball passes, the first time most people see the video they fail to take note of the person dressed in a gorilla costume slowly walking across the court. A helicopter view of the basketball court would certainly reveal what a close focus on the passes being made by individual players will miss, that there is something beyond the immediate game going on.

In a comparable way close focus on what we already know can be at the expense of recognizing new information. This is a phenomenon known as attentional bias. Experimental evidence shows it particularly affects individuals who are in a state of anxiety in relation to material that is seen as threatening, or, likewise, those suffering from depression, who may have their focus unconsciously drawn to examples of negativity. What you fear most will grab your attention.

Psychological experiments also show the tendency for an item that appears to be in some way exceptional to be more likely to stick in the memory. We are liable to register and retain in our memory news of plane crashes as dramatic events but not to take in the implications of the tens of millions of miles flown without an accident. We should not therefore be surprised that there is nervousness about flying. The tendency is known today as the Von Restorff effect after the pre-war German child psychologist who first demonstrated it systematically. It is easy to show by giving someone a varied list of names or items to remember. If some of these are readily distinguished from others then those will be the ones most likely to be recalled to memory.21 The most striking intelligence material is liable to make more of an impact than its meaning may deserve. A case in point was the intelligence report received just before the war in Iraq which indicated that chemical munitions could be with military units and ready for firing within 20–45 minutes. This report was in itself unexceptional as a reference to the previous Iraqi battlefield capability, but after it was mentioned in the published British government dossier, the headline in the Sun newspaper was ‘Brits 45 mins from doom’ and in the Star ‘Mad Saddam ready to attack: 45 minutes from a chemical war’.22 It is those memorable reports that are likely to feature in discussion between intelligence chiefs and their ministerial masters, and between ministers and the media.

Managing the risk of cognitive bias

This chapter has been about the cognitive biases that can get in the way of our everyday thinking. We can all understand that they exist and why we might be susceptible to them. But managing the risk that they represent is much harder. That should not surprise us as most of the biases identified in this chapter operate at the unconscious level of the mind, and by definition are therefore not usually accessible to us. Having a developed academic understanding of them from reading textbooks is no guarantee that we will not still be susceptible to them. As the report of a 1977 CIA seminar on bias in intelligence analysis concluded: ‘Personal biases are the most troublesome. They are frequently unique and hard to detect.’23

The best antidote to cognitive biases such as group think is for the group to discuss openly the danger such biases represent. A good group leader can encourage challenge from within the group, and pose the question: Are we falling into group think here? (Which will normally elicit laughter and lighten any tension there may be over reaching a conclusion.) A process of self-recognition of common cognitive biases can be developed whereby individuals develop first an intellectual understanding of these phenomena (and a historical feel for how they matter) and then through working with others, preferably with a trained facilitator, come to an understanding that they too might be subject to them and how they might recognize when that is happening. But resistance is to be expected when others suggest that we have fallen into one of these errors. What is most important in my experience in managing the risk to the SEES process is to have a ‘safe space’ where the dangers of bias can be discussed as a matter of professionalism without arousing a defensive feeling on the part of the participants that they are being expected to admit to personal weaknesses.

We all suffer from cognitive biases.

Conclusions: mastering our internal demons of bias and prejudice

It is our own demons that are most likely to mislead our reasoning. In this chapter we examined our vulnerability to cognitive biases and prejudices and how they prevent us thinking straight. If we want to learn to think correctly we should:

Accept that completely objective analysis is impossible since we are human, and have to interpret reality for ourselves. But we should try to be as independent, honest and neutral in our analytic judgements as possible.

Remember that what you see or hear, or fail to see or hear, reflects your state of mind and degree of attention.

Try to make implicit biases and prejudices explicit, identifying the assumptions we are making in our reasoning.

Recognize that cognitive errors arise at the individual, group and institutional levels and that they may come together in the pressures on the individual from ‘group think’.

Do not believe what you want to believe until you know what you need to know. Remember that the answer you get is likely to depend upon the question you asked.

Recognize the sign of displacement activity that goes with mental stress and how cognitive dissonance increases resistance to taking in new information.

Beware transferred judgements and mirror-imaging in imputing motives to others.

Keep an open mind and be prepared to change it on Bayesian principles when new evidence arrives.
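
As a minimal illustration of the Bayesian updating referred to in the last point – a sketch with invented numbers, not anything drawn from the book – the arithmetic of revising a belief as evidence arrives looks like this:

```python
# A minimal sketch (hypothetical numbers) of Bayesian updating: revise the probability
# attached to a hypothesis each time a new piece of evidence arrives.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.30                                  # initial degree of belief in the hypothesis
belief = bayes_update(belief, 0.80, 0.20)      # evidence far more likely if the hypothesis is true
belief = bayes_update(belief, 0.50, 0.60)      # weaker evidence pointing slightly the other way
print(f"Updated belief: {belief:.2f}")         # about 0.59
```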

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 6

Lesson 4: Strategic notice – We do not have to be so surprised by surprise

Early in the blustery spring morning of 14 April 2010 an Icelandic volcano with a near unpronounceable name (Eyjafjallajökull) exploded, throwing a cloud of fine ash high into the sky. The debris was quickly swept south-east by the regular jet stream of wind across the Atlantic until the skies above Northern Europe were filled with ash. Deep under the Icelandic ice-sheet melt water from the heat of the magma had flowed into the site of the eruption, rapidly cooling the lava and causing the debris to be rich in corrosive glass particles. These are known to pose a potential hazard if ingested by aircraft jet engines. The next day alarmed air traffic authorities decided they had to play it safe since no one had prescribed in advance specific particle sizes and levels below which engines were considered not to be at risk and thus safe to fly. They closed airspace over Europe and grounded all civil aviation in the biggest shut-down since the Second World War.1

Yet there had been warning that such an extreme event might one day occur, an example of strategic notice that is the fourth component of the SEES model of intelligence analysis. The government authorities in Iceland had been asking airlines for years to determine the density and type of ash that is safe for jet engines to fly through. Had the tests been carried out, the 2010 disruption would have been much less. There would still have been no immediate forewarning of the volcano about to explode, but sensible preparations would have been in place for when it did.

The lesson is that we need sufficiently early notice of future developments that might pose potential danger to us (or might offer us opportunities) to be prepared to take precautionary measures just in case. Strategic notice enables us to anticipate. Governments had strategic notice of possible coronavirus pandemics – the COVID-19 outbreak should not have caught us unprepared.

There is an important difference between having strategic warning of the existence of a future risk, and a prediction of when such a risk might materialize. Scientists cannot tell us exactly when a specific volcano will erupt (or when a viral disease will mutate from animals to humans). But there can be warning signs. Based on historical data, some sense of scale of frequency of eruption can be given. In the Icelandic case it was to be expected that some such volcanic activity would occur within the next fifty years. But before the volcanic events of April 2010, aviation authorities and aircraft engine manufacturers had not recognized that they needed to prepare. Instead they had implicitly accepted the precautionary principle2 that if any measurable volcanic ash appeared in the atmosphere they would issue an advisory notice that all planes should be grounded even at the cost of considerable disruption to passengers.

The airlines had known of the baseline precaution that would be taken of grounding planes in the event of volcanic ash appearing in the atmosphere, but they had not thought in advance how such a major global dislocation would be handled. After the April 2010 closure of European airspace, the effects rapidly cascaded around the world. Planes were diverted on safety grounds to countries for which the passengers did not have visas, and could not leave the airport to get to hotels. Coming at the end of the Easter holidays, school parties were unable to return for the start of the term. Nobody had considered if stranded passengers should have priority over new passengers for booking on flights when they restarted. For millions of people the result was misery, camping in airports until finally aviation was allowed to resume just over a week later. At the same time, test flights were rapidly organized by the aero engine manufacturers. These provided data on which calibrated judgements could be made of when it is safe enough to fly through ash clouds. By the end of a week of chaos and confusion, 10 million passengers had been affected overall, with the aviation industry facing losses of over £1bn.

The same thing happened in the 1982 Falklands crisis. The British government was given strategic warning by the JIC that Argentine patience might run out, in which case the Junta could take matters into its own hands. That warning could have prompted the stationing of naval forces as a credible deterrent, while a permanent solution could have been created by extending the runway to handle long-distance transports and the stationing of fast jets (as has now been done). That would have been expensive. But the expense pales in comparison with the loss of over 1000 lives, not to mention an estimated price tag of over £3bn that was involved in recovering the Islands for the Crown once lost.

‘I just say it was the worst, I think, moment of my life’ was how Margaret Thatcher later described the surprise loss of the Falklands: yet she and her senior Cabinet members and the officials supporting them had not understood beforehand the dangers they were running. It was painful for me as a member of the Ministry of Defence to have to recognize later that we had all been so preoccupied by other problems, including managing defence expenditure, that we failed to pay sufficient attention to the vulnerability of the Falklands. We implicitly assumed (magical thinking) that the need would never arise. It was a salutary lesson learned early in my career and one that stayed with me.

Living with surprise

The fourth stage of the SEES method involves acquiring strategic notice of the important longer-term developments that could affect you. If you do not have these at the back of your mind, the chances are that you will not have prepared either mentally or physically for the possibility of their occurring. Nor will you be sufficiently alert to spot their first signs. We will experience what is known to intelligence officers as strategic surprise.

The distinction between having strategic and tactical surprise is an old one in military history. It is often hard for a general to conceal the strategy being followed. But when it comes to choosing tactically when and where to attack, a commander can obtain the advantages of surprise by, for example, picking a point in the enemy’s defences where at least initially he will have the advantage. In 1944 the Germans knew perfectly well that the Allies were preparing a major landing of US, British and Canadian troops on the continent of Europe. That intent was no surprise since the strategy of opening a second front in Europe was well known. But the tactics that would be adopted, the date of the invasion, and exactly where and how the landings would be mounted were secrets carefully kept from the German High Command. Come 6 June 1944, the Allies landed in Normandy and enjoyed the immediate advantage of tactical surprise.

A tragic example of tactical surprise was the events of 7 July 2005, when terrorist suicide bombers with rucksack bombs struck at the London Underground network and surface transport during the morning rush hour. Fifty-two innocent passengers lost their lives and very many more suffered horrific injuries. The attacks came without intelligence warning and the shock across London and round the world was considerable. But they were not a strategic surprise to the authorities.

The likelihood of terrorist attacks in London in 2005 had been assessed by the Joint Terrorism Analysis Centre based in MI5 headquarters. Intelligence had indicated that supporters of Al Qa’ida living inside the UK had both the capability and intent to mount some form of domestic terror attack. The possibility that the London Underground system would be an attractive target to terrorist suicide bombers had been anticipated and plans drawn up and staff trained just in case. A full-scale live rehearsal of the response to a terrorist attack on the Underground, including emergency services and hospitals that would be receiving casualties, had been held in September 2003. Just as well, since many practical lessons were learned that helped the response two years later.3 The same can be said of the experience of the 2016 pandemic exercise when the COVID-19 outbreak came in 2020. Exercises can never fully capture the real thing but if events come as a strategic surprise the damage done will be far greater.

The same is true for all of us. We have, for example, plenty of strategic notice that our possessions are at risk of theft, which is why we should think about insurance. If we do get our mobile phone stolen we will certainly feel it as an unwelcome tactical surprise, but if insured we can console ourselves that however inconvenient it is not as bad as if it had been a strategic surprise as well.

Forestalling surprise

Intelligence communities have the duty of trying to forestall unwelcome surprises by spotting international developments that would spell real trouble.4 In 1973 Israeli intelligence was carefully monitoring Egypt for evidence that President Sadat might be preparing to invade. Signs of mobilization were nevertheless discounted by the Israelis. That was because the Israeli Director of Military Intelligence, Major General Eli Zeira, had convinced himself that he would have strategic notice of a coming war. He reasoned that without major new arms imports from Russia, and a military alliance with Syria, Egypt would be bound to lose. Since no such imports or alliance with Syria had been detected he was certain war was not coming. What he failed to spot was that President Sadat of Egypt also knew that and had no illusions about defeating Israel militarily. His plan was to launch a surprise attack to seize the Sinai Peninsula, call for a ceasefire and then to negotiate from strength in peace talks. It was a crucial report from Israel’s top spy inside Egypt (the highly placed Israeli agent Ashraf Marwan was actually the millionaire son-in-law of Gamal Abdel Nasser, Egypt’s second President), which arrived literally on the eve of Yom Kippur, that just gave Israel enough time to mobilize to resist the attack when it came. The near disaster for Israel provides a warning of the dangerous double power of magical thinking, not only imagining that the world will somehow of its own accord fit in with your desires but also interpreting all evidence to the contrary so as to confirm your belief that all is well.

An important conclusion is that events which take us unawares will force us to respond in a hurry. We did not expect it to happen today, but it has happened. If we have not prepared for the eventuality we will be caught out, red-faced, improvising rapidly to recover the situation. That includes ‘slow burn’ issues that creep up on us (like COVID-19) until we suddenly realize with horror that some tipping point has been reached and we are forced to respond. Climate change due to global warming is one such ‘slow burn’ issue. It has been evident to scientists for decades and has now reached a tipping point with the melting of polar ice and weather extremes. It is only very recently, however, that this worsening situation has become a matter for general public concern.

The creation of ISIS in Syria and Iraq is another example where intelligence officers slowly began to recognize that something significant and dangerous was afoot as the terrorists began to occupy and control areas of the two countries. The failure of strategic notice was in not seeing how a combination of jihadist participation in the civil war in Syria and the strength of the remnants of the Sunni insurgency in Iraq could create a power vacuum. The early signals of major risks may be weak, and hard to perceive against the generally noisy background of reality. For example, we have only recently recognized the re-emergence of state threats through digital subversion and propaganda and the possibility of highly damaging cyberattacks against critical infrastructure such as power and telecommunications.

The product of the likelihood of something happening (a probability) and a measure of its impact if it does arise gives us a measure of what is called the expected value of the event. We are all familiar with the principle from assessing the expected value of a bet: the combination of the chances of winning and payoff (winnings minus our stake) if we do. At odds of 100 to 1 the chances are low but the payoff correspondingly large, and vice versa with an odds-on favourite. We also know that the expected value of a series of separate bets can be calculated by simply adding the individual net values together. Wins are sadly usually quickly cancelled out by losses. The effect is even more evident with games like roulette, in which, with a fair wheel, there is no skill involved in placing a bet. Over a period, the bookmakers and casino operators will always make their turn, which is why they continue to exist (but punters will still come back for more because of the non-monetary value to them of the thrill of the bet).
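
The arithmetic can be made concrete with a small sketch; the probabilities and payoffs below are invented for illustration, not taken from the book:

```python
# Illustrative only: expected value of a bet = chance of winning x payoff
# minus chance of losing x stake.

def expected_value(p_win, payoff, stake=1.0):
    """Net expected value of a single bet."""
    return p_win * payoff - (1 - p_win) * stake

long_shot = expected_value(p_win=0.008, payoff=100)   # 100-to-1 outsider: about -0.19
favourite = expected_value(p_win=0.60, payoff=0.5)    # odds-on favourite: about -0.10

# Expected values of separate bets simply add, so a season of such bets
# compounds the bookmaker's margin rather than cancelling it out.
season = sum(expected_value(0.008, 100) for _ in range(50))
print(long_shot, favourite, round(season, 2))
```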

Very unlikely events with big consequences (events that in our experience we do not expect to happen to us or only rarely) do nevertheless sometimes pop up and surprise us.5 Sometimes they are referred to as ‘long-tailed’ risks because of the way that they lie at the extreme end, or tail, of the distribution of risk likelihood rather than in the ‘expected’ middle range. An example might be the 2007 global financial crash.6 Our intuition also can mislead us into thinking that the outcome of some event that concerns us is as likely to be above the average (median) as below since so many large-scale natural processes are governed by the so-called ‘normal’ bell-shaped symmetrical probability distribution. But there are important exceptions in which there is a sizeable long tail of bad outcomes.
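
A tiny worked example (again with invented numbers) shows how a long tail of bad outcomes pulls the average well away from the typical, median case:

```python
# Invented loss figures: most outcomes are modest, one is catastrophic.
import statistics

losses = [1, 1, 2, 2, 3, 3, 4, 5, 8, 250]

print(statistics.median(losses))   # 3.0  -> the 'typical' outcome
print(statistics.mean(losses))     # 27.9 -> dragged up by the rare extreme loss in the tail
```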

That idea of expected value can be elaborated into what is known to engineers as the risk equation to provide a measure of overall loss or gain. We learned the value of this approach when I was the UK Security and Intelligence Coordinator in the Cabinet Office constructing the UK counter-terrorism strategy, CONTEST, after 9/11.7 Our risk equation separated out the factors that contribute to the overall danger to the public so that actions could be designed to reduce each of them, as shown below.
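
The risk-equation figure referred to above does not survive in this extract. A sketch of the kind of decomposition the next paragraph describes – the factor names here are illustrative, not necessarily the book’s exact wording – might look like this:

```python
# A reconstruction consistent with the surrounding text, not the book's own figure:
# overall risk to the public treated as the product of three factors, each of which
# a strand of the CONTEST strategy tries to drive down.

def expected_harm(likelihood, vulnerability, impact):
    # likelihood    - chance that an attack is attempted (reduced by Pursue and Prevent)
    # vulnerability - chance that an attempt succeeds    (reduced by Protect)
    # impact        - cost to the public if it succeeds  (reduced by Prepare)
    return likelihood * vulnerability * impact

print(expected_harm(likelihood=0.2, vulnerability=0.1, impact=1000))  # illustrative units
```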

We can thus reduce the probability of terrorists attempting to attack us by detecting and pursuing terrorist networks and by preventing radicalization to reduce the flow of new recruits. We reduce society’s vulnerability to particular types of attack by more protective security measures such as better airport screening. We reduce the cost to the public of an attack if the terrorists get through our defences by preparing the emergency services to face the initial impact when an attack takes place and by investing in infrastructure that can be repaired quickly. This logic is a major reason why CONTEST remains the UK’s counter-terrorism strategy, despite being on its fifth Prime Minister and ninth Home Secretary. Military planners would recognize this lesson as applying ‘layered defence’,8 just as the thief after an expensive bicycle might have first to climb over a high wall into the garden, dodge round the burglar alarms, break into the shed, then undo the bicycle lock. The chance of the thief succeeding undetected goes down with each layer of security that is added (and so does your overall risk).
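
A hedged numerical sketch of the layered-defence point, with invented probabilities for each layer:

```python
# Invented probabilities: each independent layer stops the intruder with some chance,
# so the chance of defeating every layer shrinks multiplicatively.

layers = {"garden wall": 0.6, "burglar alarm": 0.5, "shed door": 0.7, "bicycle lock": 0.5}

p_thief_succeeds = 1.0
for layer, p_stopped in layers.items():
    p_thief_succeeds *= (1 - p_stopped)   # the thief must get past this layer as well

print(f"Chance the thief gets away with the bicycle: {p_thief_succeeds:.1%}")  # 3.0%
```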

The search for strategic notice of long-term developments is sometimes referred to as horizon scanning, as in looking for the tops of the masts of the enemy ships just appearing. Global banks, consultancies and corporations such as Shell have acquired a reputation for horizon scanning to help their strategy and planning departments.9 But we should remember that some important developments are like ships that have not yet put to sea – they may never come to threaten us if pre-emptive measures are taken early enough.

In the UK the chief scientists of government departments have assumed a leading role in providing strategic notice, such as the UK Ministry of Defence 2016 Global Strategic Trends report looking ahead to 2045.10 Another example is the 2016 report by the UK Chief Scientific Adviser on the potential revolution that blockchain technology represents for government agencies, banks, insurance companies and other private sector organizations.11 The headline message is clear: watch out, a new disruptive technology is emerging that has the potential to transform the working of any organization that relies on keeping records. In the natural world we do have strategic notice of many serious issues that should make governments and companies sit up. One of the top risks flagged up by the UK government has long been a virus pandemic, alongside terrorism and cyberattacks.

It is worth bearing in mind when new technology appears which might pose risks to humans or the environment that most scientists prefer to hold back from offering advice until there is solid evidence on which to reach judgements. That understandable caution leads to the special category of ‘epistemic’ risk arising from a lack of knowledge or of agreed understanding, because experts are reluctant to commit as to whether the harm will ever crystallize or because they disagree among themselves as to its significance.

It is hard to predict when some theoretical scientific advance will result in brand-new technology that will impact heavily on our lives. Of the 2.5 million new scientific papers published each year12 very few represent breakthroughs in thinking. Even when a theoretical breakthrough opens the possibility of a revolution in technology it may be years in gestation. Quantum computing provides a striking example where we have strategic notice of its potential, once such a machine is built, to factorize the very large numbers on which all the commercial encryption systems rely for secure internet communication and online payments. At the time of writing, however, no workable quantum computer at scale has been built that can operate to fulfil the promise of the theory: it could be decades ahead. But we know that the impact when and if it happens will be significant. Wise governments will therefore be investing (as the US and the UK are13) in developing new types of cryptography that will be more resistant to quantum computers when they arrive; and no doubt asking their intelligence agencies to report any signs that somewhere else the practical problems of implementation look like being cracked.

At a personal level, where we find some of our risks easily visualized (such as coming back to the car park to find the car gone) and the costs are low, we can quickly learn to manage the risk (this causes us to get into the habit of checking whether we locked the car). Other personal risks that are more abstract, although more dangerous, may be unconsciously filed as the kind that happen to other people (such as returning home to find the fire brigade outside as a result of a short circuit in old wiring). We look but do not see the danger, just as in everyday life we can hear but not listen.

The term ‘risk’ conventionally carries the meaning of something bad that could happen. But as the economist F. H. Knight concluded many years ago, without risk there is no profit.14 A further lesson in using strategic notice is how it can allow advance notice of long-term opportunities that might present themselves. Perhaps the chosen route for the future high-speed rail link will go through the middle of the village (threat) or a station on the line will be built in a nearby town (opportunity).

Strategic notice has even become a fashionable theory governing the marketing of products in the internet age. Rather than the more traditional clustering of products around common types of goods and services, it is increasingly being found that it is the quirky niche products or services which might appear at first sight unlikely to have mass appeal that can go viral on social media and quickly generate large returns. Entrepreneurs expect most of such outlier efforts to fail, but those that succeed more than make up for them in profits earned. Who would have thought a few years ago that sportswear such as brightly coloured running shoes and jogging bottoms, then confined to the gym, would for many become staple outdoor wear.

Providing ourselves with strategic notice

Bangladesh Climate Geo-engineering Sparks Protests

April 4, 2033 – Dhaka

Bangladesh became the first country to try to slow climate change by releasing a metric ton of sulphate aerosol into the upper atmosphere from a modified Boeing 797 airplane in the first of six planned flights to reduce the warming effects of solar radiation. The unprecedented move provoked diplomatic warnings by 25 countries and violent public protests at several Bangladeshi Embassies, but government officials in Dhaka claimed its action was ‘critical to self-defense’ after a spate of devastating hurricanes, despite scientists’ warnings of major unintended consequences, such as intensified acid rain and depletion of the ozone layer.

Note the date on that news report. That surprising glimpse of the future in 2033 was included in the 2017 report on Global Trends published by the US National Intelligence Council.15 The intelligence officers drafting the report included such headlines to bring to life their strategic assessments of possible developments out to 2030 and beyond and the disruptive game-changers to be expected between now and then.

The then chair of the US National Intelligence Council, Professor Greg Treverton, explained in his foreword to the 2017 report that he examined global trends to identify their impact on power, governance and cooperation. In the near future, absent very different personal, political and business choices, he expects the current trajectory of trends and power dynamics to play out among rising international tensions. But what he expects to happen twenty years or more thereafter is explored through three stories or scenarios. The NIC report discusses the lessons these scenarios provide regarding potential opportunities and trade-offs in creating the future, rather than just responding to it.

It is possible to do long-term forecasting, starting with now and trying to work forwards into the far future, on the basis of mega-trends in technology, national wealth, population and so on. That quickly runs into the problem that there are too many possible intersecting options that humankind might or might not take to allow an overall estimate of where we will end up. That problem is getting worse with the interdependencies that globalization has brought. One of the fascinating aspects of the US NIC report quoted above is the use of ‘backcasting’ as well as forecasting, working backwards from a number of postulated long-term scenarios to identify the factors that might influence which of those futures we might end up near. It is important in all such work to challenge conventional wisdom (an approach known as red teaming). When preparing the report the US NIC team visited thirty-five countries, including the UK, and canvassed ideas from academics and former practitioners such as myself, as well as serving government and military planners.

Using risk management in practice

A number of things have to go right in order for strategic warning to be translated into effective action at both national and international level, inside the private sector and in the home. Forecasts of risk need to be communicated effectively to those who can make use of the information. They in turn must be able to mobilize some kind of response to reduce, mitigate or transfer the risk. And there must be the capacity to learn lessons from experience of this process.

Advised by the assessments of the Joint Intelligence Committee, and by the National Risk Assessment16 from the Civil Contingencies Secretariat, the UK’s National Security Council, chaired by the Prime Minister, promulgates the strategic threat priorities for government.17 The 2015 Risk Assessment identified a major human pandemic as one of the most significant national security risks (in terms of likelihood and impact) facing the UK. A test of plans in 2016 exposed major gaps in capability. By the time COVID-19 struck in 2020 there was at least a national biological security strategy in place, although shortcomings still emerged along with shortages of essential protective equipment.

A comparable role must be played by the boards of companies, charitable organizations and government agencies in ensuring that major risks are identified and monitored and that plans for managing the risks are in place. A useful lesson I have learned is to divide the risks into three groups. The first group consists of the risks that are outside the influence of the business, such as a major disease outbreak. These are known as the exogenous risks. The second group of risks are those inherent in the nature of the business: banks suffer fraud, retailers suffer pilfering or ‘shrinkage’ of stock, trucking companies have accidents and so on. The final group of risks are those that the company took on itself, such as investment in a major IT upgrade on which the viability of the whole enterprise depends.
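As an illustration of that three-way split, a board-level risk register might be organized along the following lines; the groupings follow the text, but the entries themselves are invented examples:

```python
# A toy risk register grouped the way the text suggests:
# exogenous risks, risks inherent in the business, and risks the
# organization has chosen to take on through its own decisions.

risk_register = {
    "exogenous": [            # outside the organization's influence
        "major disease outbreak",
        "regional conflict disrupting supply routes",
    ],
    "inherent": [             # go with the territory of the business
        "fraud (banking)",
        "stock shrinkage (retail)",
        "road accidents (trucking)",
    ],
    "chosen": [               # consequences of the board's own decisions
        "enterprise-wide IT upgrade on which viability depends",
    ],
}

for group, risks in risk_register.items():
    print(group, "->", ", ".join(risks))
```

The point of the grouping is that, as the following paragraphs set out, each group calls for a different kind of board response.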

There is nothing most companies can do to eliminate the risks in the first group. But they can conduct periodic impact assessments, and exercise contingency plans. Even MI6 got caught out in September 2000 when a PIRA terrorist fired a small rocket at their Vauxhall Cross headquarters and the police then declared the building a crime scene and refused to allow staff back in until their investigations were complete, a more than trivial problem for an organization that has to operate 24/7.

For the second group of risks, those inherent in the nature of the business, discussion should be around the systems of control – for example, for cash flow, and whether there is sufficient pooling or transfer of risk through insurance or commercial alliance or partnership.

For the third category, the questions a company board must ask itself are much more pointed. Since the future of the organization depends on managing such changes successfully, directors need to ensure they personally have visibility of progress and that they allocate enough of their time to ensuring that key change managers have access to expertise, the delegated authority and the finance needed for success.

We can all follow such a three-part discussion of the risks we face, even at the level of the family. Do we have sufficient medical insurance from work to cover unforeseen traffic and other accidents or do we need extra cover? Is there adequate holiday insurance? Who has to meet the upfront cost of cancellations due to external disruption (such as COVID-19 in 2020 or the shutdown in 2018 of the busy Gatwick airport in the UK due to illegal drone flying over the runway)? Who has spare keys to the car or house in case of loss?

Conclusions: strategic notice

Having strategic notice of possible futures means we will not be so surprised by surprise. In this chapter we have looked at the perils of surprise, at not having strategic notice, and at what it means to say something is likely to happen. We examined the nature of surprise itself, sudden crises and slow-burn crises, how to think about the likelihood of events, and strategies for managing their risk. We looked at some of the ways in which long-term risks can be spotted and at the importance of communicating the results to achieve an alerted, but not alarmed, public. To learn to live with the expectation of surprise we should:

Search for strategic notice of relevant developments in technology, international and economic affairs, the environment and potential threats.

Think in terms of the expected value of events and developments (probability times impact), not just their likelihood.

Think about a register of your major risks and use a risk equation to identify and link the factors that contribute to the value of overall outcomes.

Use strategic notice to spot opportunities as well as identify dangers.

Accept that you will usually suffer tactical surprise even when you have avoided strategic surprise.

Beware magical thinking, believing that one event occurs, or does not occur, as a result of another without plausible causation and thus wishing away what strategic notice is telling you.

Group risks into those you can do nothing about (but might want to prepare for and exercise contingency plans); those that go with the territory (where you can take sensible precautions); and those risks that accompany your major decisions (since your future depends on them you need to ensure they get the right priority and attention).

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 5

Lesson 3: Estimations: Predictions need an explanatory model as well as sufficient data

In mid-August 1968, I was driving an elderly Land Rover with friends from university along the Hungarian side of the border with Czechoslovakia on the first stage of an expedition to eastern Turkey. To our surprise we found ourselves having to dodge in and out of the tank transporters of a Soviet armoured column crawling along the border. We did not realize – and nor did the Joint Intelligence Committee in London – that those tank crews already had orders to cross the border and invade Czechoslovakia as part of a twin strategy of intimidation and deception being employed by Yuri Andropov, then KGB chairman, to undermine the reform-minded government in Prague led by Alexander Dubček.1

US, UK and NATO intelligence analysts were aware of the Soviet military deployments, which could not be hidden from satellite observation and signals intelligence (I joined GCHQ a year later and learned how that had been done). The Western foreign policy community was also following the war of words between Moscow and Prague over Dubček’s reform programme. They shared Czech hopes that, in Dubček’s memorable campaign slogan, ‘socialism with a human face’ would replace the rigidities of Stalinist doctrine.

Dubček had run for the post of First Secretary of the Party on a platform of increased freedom of the press and of speech and movement; an economic emphasis on consumer goods; a reduction in the powers of the secret police; and even the possibility of multi-party elections. Dubček was in a hurry, with the wind of popular support behind him. But he was clearly and repeatedly ignoring warnings from Moscow that he was going too far too fast. In 1968, Prague was at risk of slipping from under Moscow’s control.

In the JIC, senior intelligence and policy officials met with representatives of the ‘5-eyes’ to consider whether Moscow would use military force as it had done in Hungary in 1956.2 This is the stage of analysis that the layperson might consider the most important, trying to predict for the policymakers what will happen next. This is very satisfying when it is achieved, although intelligence professionals shun the word ‘prediction’ as an overstatement of what is normally possible.

Analysts had no difficulty explaining the massing of tanks just on the other side of the Czech border as putting pressure on the reformist Czech government. The JIC analysts must have felt they had good situational awareness and a credible explanation of what was going on at a military level. But they failed to take the next step and forecast the invasion and violent crushing of the reform movement. They reasoned that the Soviet Union would hold back from such crude direct intervention given the international condemnation that would undoubtedly follow. That verb reasoned carries the explanation of why the analysts got it wrong: they were reasonable people trying to predict the actions of an unreasonable regime. When they put themselves in the shoes of the decisionmakers in Moscow, they still thought exclusively from their own perspective.

We now know from historical research much more than the analysts would have known at the time about the resolve of the Soviet leadership to crush the Czech reforms. Western intelligence analysts would probably have come to a different conclusion about the Soviet willingness to take huge risks if they had known the active measures being taken against the Czech reformers being masterminded by Yuri Andropov, Head of the KGB.

That the key inner adviser to President Brezhnev in Moscow was Andropov should have triggered alarm. Andropov had form. As Soviet Ambassador in Budapest in 1956, he had played a decisive role in convincing the Soviet leader, Nikita Khrushchev, that only the ruthless use of military force would end the Hungarian uprising. It was a movement that had started with student protests but had ended up with an armed revolt to install a new government committed to free elections and a withdrawal from the Warsaw Pact.

One of the main instruments being employed by Andropov was the use of ‘illegals’. The West found that out much later in 1992 with the reporting of Vasili Mitrokhin, the Soviet KGB archivist and MI6 source. He revealed how specially selected and trained KGB officers had been sent in 1968 into Czechoslovakia, disguised as tourists, journalists, businessmen and students, equipped with false passports from West Germany, Austria, the UK, Switzerland and Mexico. Each illegal was given a monthly allowance of $300, travel expenses and enough money to rent a flat in the expectation that the Czech dissidents would more readily confide in Westerners. Their role was both to penetrate reformist circles such as the Union of Writers, radical journals, the universities and political groupings, but also to take ‘active measures’ to blacken the reputation of the dissidents. The Soviet Prime Minister loudly complained of Western provocations and sabotage (with the alleged uncovering of a cache of American weapons and with a faked document purporting to show a US plan for overthrowing the Prague regime). He used such arguments to justify Soviet interference in Czechoslovak affairs even though they were, in fact, the work of the KGB ‘illegals’.

In August 1968, under the pretext of preventing an imperialist plot, the Soviet Union despatched armies from Russia and four other Warsaw Pact countries to invade Czechoslovakia, taking over the airport and public buildings and confining Czech soldiers to barracks. Dubček and his colleagues were flown to Moscow under KGB escort, where, under considerable intimidation, they accepted the reality of complying with the demands of their occupiers.

Today we have seen Moscow using all these tactics from the Soviet playbook to prevent Ukraine orientating itself towards the EU. Yet, despite their understanding of Soviet history, Western analysts failed to predict the Russian seizure of Crimea and their armed intervention in eastern Ukraine. Analysts knew of past Soviet use of methods involving intimidation, propaganda and dirty tricks including the use of the little grey men of the KGB infiltrated into Czechoslovakia in 1968. Yet the appearance of ‘little green men’ in Ukraine, as the Russian special forces were dubbed by the media, came as a surprise.

Modelling the path to the future

The task of understanding how things will unfold is like choosing the most likely route to be taken across a strange country by a traveller you have equipped with a map that sets down only some of the features of the landscape. You know that all maps simplify to some extent; the perfect map, as described satirically by Jonathan Swift in Gulliver’s Travels, is one that has a scale of 1 to 1 and thus is as big and detailed as the ground being mapped.3 There are blank spots on the traveller’s map: ‘here be dragons’, as the medieval cartographers labelled areas where they did not have enough information. The important lesson is that reality itself has no blank spots: the problems you encounter are not with reality but with how well you are able to map it.

An example of getting the modelling of future international developments right was the 1990 US National Intelligence Council estimate ‘Yugoslavia Transformed’, a decade after the death of its autocratic ruler, the former Partisan leader Marshal Tito.4 The US analysts understood the dynamics of Tito’s long rule. He had forged a federation from very different and historically warring peoples: Serbs, Croats, Slovenes and Bosnian Muslims. As so often happens with autocrats ruling divided countries (think about Iraq under Saddam, Libya under Gaddafi), Tito ruled by balancing the tribal loyalties. For every advantage awarded to one group there had to be counter-balancing concessions in other fields to the other groups. Meanwhile a tough internal security apparatus loyal to Tito and the concept of Yugoslavia identified potential flashpoints to be defused and dissidents to be exiled. After Tito’s death the centre could not long hold. The Serb leadership increasingly played the Serb nationalist and religious card and looked for support to Moscow. The Croats turned to the sympathy of Catholic fellowship in Germany. The Bosnian Muslims put their faith in the international community and the United Nations for protection. The US 1990 estimate summarized the future of the former Yugoslavia in a series of unvarnished judgements that read well in the light of subsequent developments in the Balkans as described in the previous chapter:

Yugoslavia will cease to function as a federal state within one year and will probably break up within two. Economic reform will not stave off the break-up …

There will be a protracted armed uprising by the Albanians in Kosovo. A full-scale, interrepublic war is unlikely but serious intercommunal violence will accompany the breakup and will continue thereafter. The violence will be intractable and bitter.

There is little that the US and its European allies can do to preserve Yugoslav unity. Yugoslavs will see such efforts as contradictory to advocacy of democracy and self-determination … the Germans will pay lip service to the idea of Yugoslav integrity, whilst quietly accepting the dissolution of the Yugoslav state.

In London, analysts shared the thrust of the US intelligence assessment on Yugoslavia. But the government of John Major did not want to get involved in what promised to be an internecine Balkan civil war, always the bloodiest kind of conflict. The Chiefs of Staff could see no British interest worth fighting for. I recall attending the Chiefs of Staff Committee and reporting on the deteriorating situation but having Bismarck’s wisecrack thrown back at me, that the pacification of the turbulent Balkans was not worth the healthy bones of a single Pomeranian grenadier.

There can be many reasons for failure to predict developments correctly. One of the most common reasons is simply the human temptation to indulge in magical thinking, imagining that things will turn out as we want without any credible causal explanation of how that will come about. We do this to shield ourselves from the unwelcome truth that we may not be able to get what we want. The arguments over the handling of the UK Brexit process say it all.

The choice between being more right or less wrong

It is easy to criticize analysts when they fail to warn of some aggressive act. They know that they will be accused of an intelligence failure. As a rule of thumb, analysts will tend to risk a false positive by issuing a warning estimate rather than risk the accusation of failure after a negative report failed to warn. The costs of not having a timely warning if the event does happen are usually greater than the costs of an unnecessary warning when it does not. Cynics might also argue that analysts are realists and they know that if they issue a warning but the event does not take place there will be many exculpatory reasons that can be deployed for events not turning out that way. On the other hand, if policymakers are badly surprised by events after a failure to warn there will be no excuses accepted.

Analysts are faced in those circumstances with an example of the much studied false-positive/false-negative quality control problem.5 This is the same dilemma faced by car manufacturers who inspect as the cars leave the factory and have to set the testing to a desired rate of defective vehicles passing the inspection (taken to be safe but actually not, a false positive), knowing that such vehicles are likely to break down and have to be recalled at great cost and the company reputation and sales will suffer; but knowing as well that if too many vehicles are wrongly rejected as unsafe (taken to be unsafe but actually not, a false negative) the car company will also incur large unnecessary costs in reworking them. This logic applies even more forcibly with medicines and foodstuffs. As consumers it is essential to expect foods labelled as nut-free to be just that, in order to avoid the potentially lethal risk to those allergic to nuts. The consequence, however, is that we have to recognize that the manufacturer will need a rigorous testing system achieving very low false-positive rejection rates, and that will put up the false-negative rejection rates, which is likely to add significant cost to the product. We can expect the cursor on most overall manufacturing industry inspection systems to be set towards avoiding more false positives at the expense of more false negatives. The software industry, however, is notorious for cost reasons for tolerating a high false-positive rate, preferring to issue endless patches and updates as the customers themselves find the flaws the hard way by actually using the software.
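A small sketch of that trade-off, keeping the author's convention that ‘positive’ means passed as safe (so a false positive is a defective item waved through), with invented inspection scores standing in for real test results:

```python
import random

random.seed(1)

# Simulated inspection scores: higher score = looks safer.
# Defective items tend to score lower, but the two distributions overlap,
# which is what forces the trade-off.
good = [random.gauss(0.7, 0.15) for _ in range(1000)]       # actually safe
defective = [random.gauss(0.4, 0.15) for _ in range(1000)]  # actually unsafe

def rates(threshold):
    # Pass (call "safe") anything scoring above the threshold.
    false_pos = sum(d > threshold for d in defective) / len(defective)  # unsafe passed as safe
    false_neg = sum(g <= threshold for g in good) / len(good)           # safe rejected as unsafe
    return false_pos, false_neg

for t in (0.3, 0.5, 0.7):
    fp, fn = rates(t)
    print(f"threshold {t:.1f}: false positives {fp:.1%}, false negatives {fn:.1%}")

# Raising the threshold pushes false positives down but false negatives up;
# where to set it depends on the relative penalty of each kind of error.
```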

An obvious application in intelligence and security work is in deciding whether an individual has shown sufficient associations with violent extremism to be placed on a ‘no-fly’ list. Policymakers would want the system to err on the side of caution. That means accepting rather more false negatives, which will of course seriously inconvenience an individual falsely seen as dangerous because they will not be allowed to fly, as the price for having a very low level of false positives (falsely seen as safe when not, which could lead, in the worst case, to a terrorist destroying a passenger aircraft by smuggling a bomb on board). Another example is the design of algorithms for intelligence agencies to pull out information relating to terrorist suspects from digital communications data accessed in bulk. Set the cursor too far in the direction of false positives and too much material of no intelligence interest will be retrieved, wasting valuable analyst time and risking unnecessary invasion of privacy; set the cursor too far towards false negatives and the risk of not retrieving the material being sought and terrorists escaping notice rises. There is no optimal solution possible without weighing the relative penalties of a false positive as against a false negative. At one extreme, as we will see in the next chapter, is the so-called precautionary principle whereby the risk of harm to humans means there cannot be false positives. Application of such a principle comes at considerable cost.6

The false-positive/false-negative dilemma occurs with algorithms that have to separate data into categories. Such algorithms are trained on a large set of historic data where it is known which category each example falls into (such as genuinely suspect/not-suspect) and the AI programme then works out the most efficient indicators to use in categorizing the data. Before the algorithm is deployed into service, however, the accuracy of its output needs to be assessed against the known characteristics of the input. Simply setting the rule at a single number so that, say, 95 per cent of algorithmic decisions are expected to be correct in comparison with the known training data is likely to lead to trouble depending upon the ratio of false positives to false negatives in the result and the penalty associated with each. One way of assessing the accuracy of the algorithm in its task is to define its precision as the number of true positives as a proportion of positives that the algorithm thinks it has detected in the training data. Accuracy is often measured as the number of true positives and negatives as a proportion of the total number in the training set. A modern statistical technique that can be useful with big data sets is to chart the number of false positives and false negatives to be expected at each setting of the rule and to look at the area under the resulting curve (AUC) as a measure of overall success in the task.7
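A compact sketch of those three measures, computed directly from their definitions on a tiny invented test set rather than with any particular library:

```python
# Toy labelled data: 1 = genuinely suspect, 0 = not suspect,
# with the score the algorithm assigned to each example.
labels = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7, 0.35, 0.05]

threshold = 0.5
preds = [1 if s >= threshold else 0 for s in scores]

tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))

precision = tp / (tp + fp)          # true positives among everything flagged
accuracy = (tp + tn) / len(labels)  # correct calls, positive or negative, among all cases

# AUC: the probability that a randomly chosen positive outscores a randomly
# chosen negative (ties count half) -- equivalent to the area under the
# curve traced out as the decision threshold is varied.
pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)

print(f"precision={precision:.2f} accuracy={accuracy:.2f} AUC={auc:.2f}")
```

Unlike precision and accuracy, the AUC does not depend on any single threshold, which is why it is a useful summary of how well the scores themselves separate the two categories.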

Reluctance to act on intelligence warnings

The policy world may need shaking into recognizing that they have to take warnings seriously. In April 1993 I accompanied the British Defence Secretary, Malcolm Rifkind, to the opening of the Holocaust Museum in Washington. The day started with a moving tribute at Arlington Cemetery to the liberators of the concentration camps. I remembered the sole occasion my father had spoken to me of the horror of entering one such just liberated camp in 1944 when he was serving as an officer in the Black Watch on the Eighth Army A Staff. It was a memory that he had preferred to suppress. Later that day Elie Wiesel, the Nobel Peace Prize winner, spoke passionately in front of President Bill Clinton, President Chaim Herzog of Israel and a large crowd of dignitaries about the need to keep the memory of those horrors alive. He issued an emotional appeal to remember the failure of the Allied powers to support the Warsaw Ghetto uprising and the Jewish resistance.8 He quoted the motto chiselled in stone over the entrance to the Holocaust Museum: ‘For the dead and the living we must bear witness’. Then, turning directly to face President Clinton and the First Lady, Hillary Clinton, he reminded them: ‘We are also responsible for what we are doing with those memories … Mr President, I cannot not tell you something. I have been in the former Yugoslavia last Fall … I cannot sleep since over what I have seen. As a Jew I am saying that we must do something to stop the bloodshed in that country! People fight each other and children die. Why? Something, anything, must be done.’

His message, genocide is happening again in Europe, and it is happening on your watch, Mr President, and the Allies are once again doing nothing, was heard in an embarrassed silence, followed by loud applause from the survivors of the camps who were present. Later that year the UN Security Council did finally mandate a humanitarian operation in Bosnia, the UN Protection Force (UNPROFOR), for which the UK was persuaded to provide a headquarters and an infantry battle group. As the opening of the previous chapter recounted, that small peacekeeping force in their blue helmets and white-painted vehicles sadly proved inadequate faced with the aggression of both Bosnian Serbs and Croats, and was helpless to stop the massacre of Bosnian Muslims at Srebrenica in the summer of 1995.

Providing leaders with warnings is not easy. The ancient Greek myth of Cassandra, one of the princesses of Troy and daughter of King Priam, relates that she was blessed by the god Apollo with the gift of foreseeing the future. But when she refused the advances of Apollo she was placed under a curse which meant that, despite her gift, no one would believe her. She tried in vain to warn the inhabitants of Troy to beware Greeks bearing gifts. The giant wooden horse, left by the Greeks as they pretended to lift the siege of the city, was nevertheless pulled inside the walls. Odysseus and his soldiers who were hidden inside climbed out at night and opened the city gates to the invading Greek Army. As Cassandra had cried out in the streets of Troy: ‘Fools! ye know not your doom … Oh, ye believe not me, though ne’er so loud I cry!’9 Not to have their warnings believed has been the fate of many intelligence analysts over the years and will be again. The phenomenon is known to the intelligence world as the Cassandra effect.

It might have been doubts about Cassandra’s motives that led to her information being ignored. In 1982 there were warnings from the captain of the ice patrol ship HMS Endurance in the South Atlantic who was monitoring Argentine media that the point was coming close when the Junta would lose patience with diplomatic negotiations. But these warnings were discounted by a very human reaction of ‘Well, he would say that, wouldn’t he’, given his ship was to be withdrawn from service under the cuts in capability imposed by the 1981 defence expenditure review. It is also quite possible that Cassandra might have made too many predictions in the past that led to nothing and created what is known as warning fatigue. We know this as crying wolf, from Aesop’s fable. That might in turn imply the threshold for warning was set too low and should have been set higher than turning out the whole village on a single shout of ‘wolf’ (but remember the earlier discussion of false positives and false negatives and how raising the warning threshold increases the risk of a real threat being ignored). Sending signals which lead to repeated false alarms is an ancient tactic to inure the enemy to the real danger. Warnings also have to be sufficiently specific to allow sensible action to be taken. Simply warning that there is a risk of possible political unrest in the popular holiday destination of Ruritania does not help the tourist know whether or not to cancel their holiday on the Ruritanian coast.

Perhaps poor Cassandra was simply not thought a sufficiently credible source for reasons unconnected with the objective value of her intelligence reporting. Stalin was forewarned of the German surprise attack on the Soviet Union in 1941 by reports from well-placed Soviet intelligence sources, including the Cambridge spies, some of whom had access to Bletchley Park Enigma decrypts of the German High Command’s signals. But he discounted the reporting as too good to be true and therefore assumed a deliberate attempt by the Allies to get him to regard Germany as an enemy and to discount the guarantees of peace in the 1939 Molotov–Ribbentrop non-aggression pact that Stalin had approved two years earlier.

A final lesson from the failure of the Trojans to act on Cassandra’s warning might be that the cost of preventive action can be seen as too great.

Legend has it that the Trojans were concerned about angering their gods if they refused the Greek offering of the wooden horse. We may ignore troubling symptoms if we fear that a visit to the doctor will result in a diagnosis that prevents us being able to fly to a long-promised holiday in the sun.

Expressing predictions and forecasts as probabilities

It is sadly the case that only rarely can intelligence analysts be definitive in warning what will happen next. Most estimates have to be hedged with caveats and assumptions. Analysts speak therefore of their degree of belief in a forward-looking judgement. Such a degree of belief is expressed as a probability of being right. This is a different use of probability from that associated with gambling games like dice or roulette, where the frequency with which a number comes up provides data from which the probability of a particular outcome is estimated. When we throw a fair die we know that the probability that the next throw will come up with a six is 1/6. We know the odds we ought to accept on a bet that this is what will happen. That is the frequentist interpretation of probability. By analogy, we think of the odds that intelligence analysts would rationally accept on their estimate being right. That is the measure of their degree of belief in their judgement. It is of course a subjective interpretation of probability.10

Intelligence analysts prefer – like political pollsters – forecasts that associate a probability with each of a range of possible outcomes. For example, the US Director of National Intelligence, Dan Coats, predicted in a worldwide threat assessment given to the Senate Intelligence Committee that competitors such as Russia, China and Iran ‘probably already are looking to the 2020 U.S. elections as an opportunity to advance their interests’.11 ‘Probably’ here is likely to mean 55–70 per cent, which can be thought of as the gambling odds the analysts should accept for being right (in that case, just over 70 per cent probable equates to bookmakers offering odds of 2 to 1 on).
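As a check on that intuition, converting a degree of belief into the fair betting odds it implies is a one-line calculation; a minimal sketch, bearing in mind that real bookmakers' prices add a margin on top of these fair odds:

```python
def fair_odds(p):
    """Fair betting odds implied by a probability p (no bookmaker's margin)."""
    if p > 0.5:
        return f"{p / (1 - p):.2f} to 1 on"       # e.g. 2/3 -> 2.00 to 1 on
    return f"{(1 - p) / p:.2f} to 1 against"

# Sample the 'probably' range of roughly 55-70 per cent.
for p in (0.55, 2 / 3, 0.70):
    print(f"degree of belief {p:.0%} -> {fair_odds(p)}")
```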

When a forecast outcome is heavily dependent on external events, that is usually expressed as an assumption so that readers of the assessment understand that dependency. The use of qualifying words such as ‘unlikely’, ‘likely’ and so on is standardized by professional intelligence analysts. The UK yardstick was devised by the Professional Head of the Intelligence Analysis profession (PHIA) in the Cabinet Office, and is in use across the British intelligence community, including with law enforcement. The example of the yardstick below is taken from the annual National Strategic Assessment (NSA) by the UK National Crime Agency.12

Probability and Uncertainty

Throughout the NSA, the ‘probability yardstick’ (as defined by the Professional Head of Intelligence Assessment (PHIA)) has been used to ensure consistency across the different threats and themes when assessing probability. The following defines the probability ranges considered when such language is used:

The US Intelligence Community also has published a table showing how to express a likelihood in ordinary language (line 1 of the table below) and in probabilistic language (line 2 of the table, with the corresponding confidence level in line 3).13

One difference between the approach taken by the UK and the US analysts is in the use of gaps between the ranges in the UK case. The intention is to avoid the potential problem with the US scale over what term you use if your judgement is ‘around 20 per cent’. Two analysts can have a perfectly reasonable, but unnecessary, argument over whether something is ‘very unlikely’ or ‘unlikely’. The gaps obviate the problem. The challenge is over what to do if the judgement falls within one of the gaps. If an analyst can legitimately say that something is ‘a 75–80 per cent chance’, then they are free to do so. The yardstick is a guide and a minimum standard, but analysts are free to be more specific or precise in their judgements, if they can. It is sensible to think in 5 or 10 per cent increments to discourage unjustified precision for which the evidence is unlikely to be available. I recommend this framework in any situation in which you have to make a prediction. It is very flexible, universally applicable, and extremely helpful in aiding your decisionmaking and in communicating it to others. You could start off by reminding yourself the next time you say it is ‘unlikely’ to rain that that still leaves a one in five chance of a downpour. You might well accept that level of risk and not bother with a coat. But if you were badly run down after a bout of flu even a 20 per cent chance of getting soaked and developing a fever would be a risk not worth running. That is an example of examining the expected value of the outcome, not just its likelihood, formed by multiplying together the probability of an event and a measure of the consequences for you of it happening.
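That weighing of likelihood against consequence can be written out explicitly; a toy version of the rain example, with invented cost numbers standing in for how much you would mind getting soaked:

```python
def expected_cost(p_event, cost_if_it_happens):
    # Expected value of the bad outcome: probability times impact.
    return p_event * cost_if_it_happens

p_rain = 0.20  # 'unlikely' still leaves roughly a one-in-five chance

# Healthy: a soaking is a nuisance.  Run down after flu: it could mean a fever.
for condition, cost in (("healthy", 10), ("run down after flu", 200)):
    print(condition, "-> expected cost of skipping the coat:",
          expected_cost(p_rain, cost))

# Same probability, very different expected values -- which is why the
# decision about the coat can reasonably change with the consequences.
```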

The limits of prediction

The science fiction writer Isaac Asimov in his Foundation and Empire books imagined a future empirical science of psychohistory, where recurring patterns in civilizations on a cosmic scale could be modelled using sociology, history and mathematical statistics.14 Broad sweeps of history could, Asimov fantasized, be forecast in the same way as statistical mechanics allows the behaviour of large numbers of molecules in a gas to be predicted, although the behaviour of individual molecules cannot (being subject to quantum effects). Asimov’s fictional creator of psychohistory, Dr Hari Seldon, laid down key assumptions that the population whose behaviour was being modelled should be sufficiently large and that the population should remain in ignorance of the results of the application of psychohistorical analyses because, if it became so aware, there would be feedback changing its behaviour. Other assumptions include that there would be no fundamental change in human society and that human nature and reactions to stimuli would remain constant. Thus, Asimov reasoned, the occurrence of times of crisis at an intergalactic scale could be forecast, and guidance provided (by a hologram of Dr Seldon) by constructing time vaults that would be programmed to open when the crisis was predicted to arise and the need would be greatest.

Psychohistory will remain fantasy. Which is perhaps just as well. The main problem with such ideas is the impossibility of sufficiently specifying the initial conditions. Even with deterministic equations in a weather-forecasting model, after a week or so the divergence between what is forecast and what is observed becomes too large to allow the prediction to be useful. And often in complex systems the model is non-linear, so small changes can quickly become large ones. There are inherent limits to forecasting reality. Broad sweeps may be possible but not detailed predictions. There comes a point when the smallest disturbance (the iconic flapping of a butterfly’s wings) sets in train a sequence of cascading changes that tip weather systems over, resulting in a hurricane on the other side of the world. The finer the scale being used to measure forecasts in international affairs, the more variables that need to be taken into account, the greater the number of imponderables and assumptions, and the less accurate the long-term forecast is liable to be.15
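A standard way to see that sensitivity to small differences is the logistic map, used here as a generic stand-in for a non-linear system rather than as any kind of weather model:

```python
# Two runs of the logistic map x -> r*x*(1-x) started a hair's breadth apart.
r = 3.9                      # a parameter value in the chaotic regime
x, y = 0.500000, 0.500001    # initial conditions differing by one part in a million

for step in range(1, 31):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.6f}")

# The tiny initial difference grows until the two trajectories bear no
# relation to each other, which is why detailed long-range prediction of
# such systems fails even when the equations themselves are known exactly.
```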

Even at the level of physical phenomena, not every activity is susceptible to precise modelling. Exactly when a radioactive atom will spontaneously decay cannot be predicted, although the number of such events in a given time can be known in terms of its probability of occurrence. The exact path a photon of light or an electron will take when passing through a narrow pair of slits can also only be predicted in advance in terms of probabilities (the famous double slit experiment that demonstrates one of the key principles of quantum physics).

Secrets, mysteries and complex interactions

There is a deeper way of looking at intelligence, and that is to distinguish between secrets and mysteries. Secrets can be found out if the seeker has the ingenuity, skill and the means to uncover them. Mysteries are of a different order. More and more secrets will not necessarily unlock the mystery of a dictator’s state of mind. But intelligence officers trying to get inside the mind of a potential adversary have to do their best to make an assessment, since that will influence what the policymakers decide to do next. Inferences can certainly be drawn, based on knowledge of the individuals concerned and on reading of their motivations, together with general observation of human behaviour. But such a judgement will depend on who is making it. A neutral observer might come to a different view from one from a country at risk of being invaded.

Mysteries have a very different evidential status. They concern events that have not yet happened (and therefore may never happen). Yet it is solutions to such mysteries that the users of intelligence need. From the moment early in 1982 when the Argentine Junta’s Chief of Naval Staff, and chief hawk over the issue, Admiral Anaya, issued secret orders to his staff to begin planning the Falkland Islands invasion, there were secrets to collect. But whether, when it came to the crunch, the Junta as a whole would approve the resulting plan and order implementation would remain a mystery until much later.

To make matters harder, there is often an additional difficulty due to the complex interactions16 involved. We now know in the case of the Junta in 1982 that it completely misread what the UK reaction would be to an invasion of the Falkland Islands. And, just as seriously, the Junta did not take sufficient account of the longstanding US/UK defence relationship in assessing how the US would react. It may not have recognized the personal relationship that had developed between the UK’s Defence Secretary, John Nott, and his US counterpart, Caspar Weinberger. Margaret Thatcher’s iron response in sending a naval Task Force to recover the Islands met with Weinberger’s strong approval, in part because it demonstrated to the Soviet Union that armed aggression would not be allowed to pay.

These distinctions are important in everyday life. There are many secrets that can in principle be found out if your investigations are well designed and sufficiently intrusive. In your own life, your partner may have kept texts on their phone from an ex that they have kept private from you. Strictly speaking, these are secrets that you could probably find a way of accessing covertly (I strongly advise you don’t. Your curiosity is not a sufficient reason for violating their privacy rights. And once you have done so, your own behaviour towards your partner, and therefore your partner’s towards you, is likely unconsciously to change). But whether you uncover the secrets or not, the mystery of why your partner kept them and whether they ever intend in the future to contact the ex remains unanswered, and not even your partner is likely to be certain of the answer. You would have the secret but not the answer to the mystery, and that answer is likely to depend upon your own behaviour over coming months that will exercise a powerful influence on how your partner feels about the relationship. Prediction in such circumstances of complex interactions is always going to be hard.

Missing out on the lessons of Chapter 2 and leaping from situational awareness to prediction – for example, by extrapolating trends or assuming conditions will remain the same – is a common error, known as the inductive fallacy. It is equivalent to weather forecasting by simply looking out of the window and extrapolating: most of the time tomorrow’s weather follows naturally from today’s, but not when there is a rapidly developing weather front. Ignoring the underlying dynamics of weather systems will mean you get the forecast right much of the time but inevitably not always. When it happens that you are wrong, as you are bound to be from time to time, you are liable to be disastrously wrong – for example, as a flash flood develops or an unexpected hurricane sweeps in. That holds as true for international affairs as it does for all life as well: if you rely on assumptions, when you get it wrong, you get it really wrong. Experts are as likely to fall into this trap as anyone else.17

I am fond of the Greek term phronesis to describe the application of practical wisdom to the anticipation of risks. As defined by the art historian Edgar Wind, this term describes how good judgement can be applied to human conduct, consisting in a sound practical instinct for the course of events, an almost indefinable hunch that anticipates the future by remembering the past and thus judges the present correctly.18

Conclusions: estimates and predictions

Estimates of how events may unfold, and predictions of what will happen next, are crucially dependent on having a reliable explanatory model, as well as sufficient data. Even if we are not consciously aware of doing this, when we think about the future we are mentally constructing a model of our current reality and reaching judgements about how our chosen explanatory model would behave over time and in response to different inputs or stimuli. It will help to have identified what are the most important factors that are likely to affect the outcome, and how sensitive that outcome might be to changes in circumstances. We are here posing questions of the ‘what next and where next?’ type. In answering them we should:

Avoid the inductive fallacy of jumping straight from situational awareness to prediction and use an explanatory model of how you think the key variables interact.

Be realistic about the limitations of any form of prediction, expressing results as estimates between a range of likely possibilities. Point predictions are hazardous.

Express your degree of confidence in your judgements in probabilistic language, taking care over consistent use of terms such as ‘likely’.

Remember to consider those less likely but potentially damaging outcomes as well as the most probable.

Be aware that wanting to see a reduction in the level of false positives implies increasing the level of false negatives to be expected.

Do not confuse the capability of an individual or organization to act with an intent to act on their part.

Be aware of your cultural differences and prejudices when explaining the motivations and intent of another.

Distinguish between what you conclude based on information you have and what you think based on past experience, inference and intuition (secrets, mysteries and complexities).

Beware your own biases misleading you when you are trying to understand the motives of others.

Give warnings as active deliberative acts based on your belief about how events will unfold and with the intent of causing a change in behaviour or policy.

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 4

Lesson 2: Explanation: Facts need explaining

Belgrade, Sunday, 23 July 1995. It was getting dark when our military aircraft landed on an airfield just outside the Serbian capital. We were met by armed Serbian security officers and quickly hustled into cars, watched over cautiously by a diplomat from the British Embassy. After what seemed an endless drive into the country we arrived at a government guest house. Our mission was to deliver in person an ultimatum to its occupant, General Ratko Mladić, the commander of the Bosnian Serb Army, the man who became infamous as the ‘butcher of Srebrenica’.1

Two days before, at a conference in London, the international community had united to condemn in the strongest terms the actions of Mladić’s Bosnian Serb Army in overrunning the towns of Srebrenica and Zepa. These towns had been placed under the protection of the United Nations as ‘safe areas’, where the Bosnian Muslim population could shelter from the civil war raging around them. Sadly, there had been insufficient understanding in the UN of the ethnic-cleansing activities of Mladić and his army, and thus no proper plans made about how the safe areas were to be defended from him. The UN peacekeeping force in Bosnia, UNPROFOR, was small and lightly armed, and in accordance with UN rules wore blue-painted helmets and rode in white-painted vehicles. They were not a fighting force that could combat the Bosnian Serb Army when it defied the UN. The full extent of the genocidal mass killings and use of rape as a weapon of war by troops under Mladić’s command in Bosnia was not then known, but enough evidence had emerged from Srebrenica to persuade a reluctant international community, at the London Conference and in NATO, that enough was enough. Any further interference with the remaining safe areas would be met by the use of overwhelming air power. The purpose of the mission to Belgrade was to confront Mladić with the reality of that threat and make him desist from further aggression.

Leading the delegation were the three airmen who controlled NATO air power over Bosnia: the Commander of the US Air Force in Europe along with his British and French opposite numbers. I was the Deputy Under Secretary of State for Policy in the Ministry of Defence in London and I was acting as adviser to Air Chief Marshal Sir William Wratten, Commander-in-Chief of the RAF’s Strike Command, a man with a formidable reputation as the architect of British bombing strategy during the first Gulf War. I was there with my opposite numbers from the Ministry of Defence in Paris and the Office of the Secretary of Defense in the Pentagon (my friend Joe Kruzel, who was tragically to die on duty later in Bosnia when his armoured vehicle rolled off a narrow pass). One of our tasks was to use the opportunity to try to understand the motivations of Mladić, the ‘why and what for’ of his actions, and whether he was likely to be deterred by the formal NATO warning from the air commanders of the US, UK and France.

When we arrived at the guest house we were escorted to the dining room and invited to sit at one side of a long table already set with traditional sweetmeats and glasses of plum brandy. Mladić entered in jovial mood with his army jacket around his shoulders hanging unbuttoned, accompanied by the head of his secret police. We had been forewarned that in soldier-to-soldier company he was likely to be bluffly affable, one of the reasons his men adored him. We had therefore resolved on the flight that we would all refuse to accept the hospitality he was bound to offer, an act that we guessed would cause offence and thus jolt Mladić into recognizing this was not a friendly visit. That ploy worked.

Mladić became visibly agitated, defiantly questioning whether the three air forces could pose any real threat to his army given the puny use of NATO air power up to that point. The air commanders had wisely chosen to wear their leather jackets and aviator sunglasses, and not their best dress uniforms. They menacingly described the massive air power they could command and delivered their blunt ultimatum: further attacks against the safe areas would not be tolerated, and substantial air actions would be mounted, ‘if necessary at unprecedented levels’. The atmosphere in the room grew frosty.

Explanations and motives

In the Introduction I described understanding and explanation as the second component of my SEES model of intelligence analysis. Intelligence analysts have to ask themselves why the people and institutions that they are observing are acting as they appear to be, and what their motives and objectives are. That is what we were trying to establish in that visit to Mladić. That’s as true for you in everyday life as it is for intelligence analysts. The task is bound to be all the harder if the analysis is being done at a distance by those brought up in a very different culture from that of the intelligence target. Motives are also easily misread if there is projective identification of some of your own traits in your adversary. This can become dangerous in international affairs when a leader accuses another of behaviour of which they themselves are guilty. That may be a cynical ploy. But it may also be a worrying form of self-deception. The leader may be unconsciously splitting off his own worst traits in order to identify them in the other, allowing the leader then to live in a state of denial believing that they do not actually possess those traits themselves. I’m sure you recognize a similar process in your office every day, too.

If it is the actions of a military leader that are under examination then there may be other objective factors explaining his acts, including the relative capabilities of his and opposing forces, the geography and terrain, and the weather as well as the history, ethnology and cultural anthropology of the society being studied. There are bound to be complexities to unravel where it may be the response to perceived policies and actions by other states, or even internal opposition forces within the society, that provide the best explanation along with an understanding of the history that has led to this point. From the outset of the Bosnian conflict, reports from the region spoke of excesses by the different factions fighting each other, a common feature of civil wars. Such evidence was available. But it was not clear at first what the deeper motivations were that would eventually drive the troops of Ratko Mladić to the horrifying extremes of genocide.

The choice of facts is not neutral, nor do facts speak for themselves

One possible reason we may wrongly understand why we see what we do is because we have implicitly, or explicitly, chosen to find a set of facts that supports an explanation we like and not another. We saw in the preceding chapter that even situational awareness cannot be divorced from the mindset of the analyst. The action of selection of what to focus on is unlikely to be a fully neutral one. This is a problem with which biographers and historians have always had to grapple. As the historian E. H. Carr wrote: ‘By and large, the historian will get the kind of facts he wants. History means interpretation.’2

Reality is what it is. We cannot go back in time to change what we have observed. More correctly, then, for our purposes reality is what it was when we made our observations. Reality will have changed in the time it has taken us to process what we saw. And we can only perceive some of what is out there. But we can make a mental map of reality on which we locate the facts that we think we know, and when we got to know them. We can place these facts in relation to each other and, via our memory, fill in some detail from our prior knowledge. Then we look at the whole map and hope we recognize the country outlined.

More often than not, facts can bear different meanings. Therein lies the danger of mistakes of interpretation. A shopkeeper facing a young man asking to buy a large meat cleaver has to ask herself, gang member or trainee butcher? Let me adapt an example that Bertrand Russell used in his philosophy lectures to illustrate the nature of truth.3 Imagine a chicken farm in which the chickens conduct an espionage operation on the farmer, perhaps by hacking into his computer. They discover that he is ordering large quantities of chicken food. The Joint Intelligence Committee of chickens meets. What do they conclude? Is it that the farmer has finally recognized that they deserve more food; or that they are being fattened up for the kill? Perhaps if the experience of the chickens has been of a happy outdoor life, then their past experience may lead them to be unable to conceive of the economics of chicken farming as seen by the farmer. On the other hand, chickens kept in their thousands in a large tin shed may well be all too ready to attribute the worst motives to the farmer. It is the same secret intelligence, the same fact, but with two opposite interpretations. That is true of most factual information.

Context is therefore needed to infer meaning. And meaning is a construct of the human mind. It is liable to reflect our emotionally driven hopes and fears as much as it represents an objective truth. Intelligence analysts like to characterize themselves as ‘objective’, and great care is taken, as we see in Chapter 5, to identify the many possible types of cognitive bias that might skew their thinking. In the end, however, ‘independent’, ‘neutral’ and ‘honest’ might be better words to describe the skilled analysts who must avoid being influenced by what they know their customers desperately hope to hear.4 The great skill of the defence counsel in a criminal trial is to weave an explanatory narrative around the otherwise damning evidence so that the jury comes to believe in the explanation offered of what happened and thus in the innocence of the accused. The observed capability to act cannot be read as a real intention to do so. The former is easier to assess, given good situational awareness; the latter is always hard to know since it involves being able to ascribe motives in order to explain what is going on. You may know from your employment contract the circumstances under which your boss may fire you, but that does not mean they (currently) have the intention to do so.

We know from countless psychological experiments that we can convince ourselves we are seeing patterns where none really exist. Especially if our minds are deeply focused somewhere else. So how can we arrive at the most objective interpretation of what our senses are telling us? Put to one side the difficulties we discussed in the last chapter of knowing which are sufficiently reliable pieces of information to justify our labelling them as facts. Even if we are sure of our facts we can still misunderstand their import.

Imagine yourself late at night, for example, sitting in an empty carriage on the last train from the airport. A burly unkempt man comes into the carriage and sits behind you and starts talking aggressively to himself, apparently threatening trouble. Those sense impressions are likely at first to trigger the thought that you do not want to be alone with this individual. The stranger is exhibiting behaviour associated with someone in mental distress. Concern arises that perhaps he will turn violent; you start to estimate the distance to the door to the next carriage and where the emergency alarm is located; then you notice the tiny earphone he is wearing. You relax. Your mental mapping has flipped over and now provides a non-threatening explanation of what you heard as the simpler phenomenon of a very cross and tired man off a long flight making a mobile call to the car hire company that failed to pick him up.

What made you for a moment apprehensive in such a situation was how you instinctively framed the question. Our brains interpret facts within an emotional frame of mind that adds colour, in this case representing potential danger on the mental map we were making. That framing was initially almost certainly beyond conscious thought. It may have been triggered by memory of past situations or, more likely, simply by imaginative representation of possibilities. If you had been watching a scary movie such as Halloween on your flight, then the effect would probably have been even more pronounced.

The term ‘framing’ is a useful metaphor, a rough descriptor of the mental process that unconsciously colours our inferential map of a situation. The marvellous brightly coloured paintings of Howard Hodgkin, for example, extend from the canvas on to and over the frame. The frame itself is an integral part of the picture and conditions our perception of what we see on the canvas itself. The framing effect comes from within, as our minds respond to what we are seeing, and indeed feeling and remembering. It is part of the job of TV news editors to choose the clips of film that will provide visual and aural clues to frame our understanding of the news. And of course, as movie directors know, the effect of images playing together with sound is all the more powerful when they work in combination to help us create in our minds the powerful mental representation of the scene that the director wanted. The scrape of the violins as the murderer stalks up the staircase, knife in hand, builds tension; whereas the swelling orchestra releases that tension when the happy couple dance into the sunset at the end. Modern political advertisers have learned all these tricks to play on us to make their message one we respond to more emotionally than rationally.

Up to this point in history only a human being could add meaning. Tomorrow, however, it could be a machine that uses an artificial intelligence programme to infer meaning from data, and then to add appropriate framing devices to an artificially generated output. Computerized sentiment analysis of social media postings already exists that can gauge a crowd’s propensity to violence. Careful use of artificial intelligence could shorten the time taken to alert analysts to a developing crisis.

However, there are dangers in letting machines infer an explanation of what is going on. Stock exchanges have already suffered the problems of ‘flash crashes’, when a random fall in a key stock price triggers, via an artificial intelligence programme, automated selling that is detected by other trading algorithms, which in turn start selling and set off a chain reaction of dumping shares. Automatic brakes have therefore had to be constructed to prevent the market being driven down by such automation. A dangerous parallel would be if reliance were placed on such causal inference to trigger automatic changes in defence posture in response to detected cyberattacks. If both sides in an adversarial relationship have equipped themselves with such technology, then we might enter the world of Dr Strangelove. Even more so if there are more than two players in such an infernal game of automated inference. As AI increasingly seeps into our everyday lives, too, we must not allow ourselves to slip into letting it infer meaning on our behalf unchecked. Today the algorithm is selecting what online advertisements it thinks will best match our interests, irritating when wrong but not harmful. It would be harmful if it were a credit rating algorithm secretly deciding that your browsing and online purchasing history indicate a risk appetite too high to allow you to hold a credit card or obtain affordable motorbike insurance.

Back to Bayesics: scientifically choosing an explanatory hypothesis

The intelligence analyst is applying, in the second stage of SEES, generally accepted scientific method to the task of explaining the everyday world. The outcome should be the explanatory hypothesis that best fits the observed data, with the fewest extraneous assumptions having to be made, and with alternative hypotheses having been tested against the data and found less satisfactory. The very best ideas in science, after sufficient replication in different experiments, are dignified with the appellation ‘theories’. In intelligence work, as in everyday life, we normally remain at the level of an explanatory hypothesis, conscious that at any moment new evidence may appear that will force a re-evaluation. An example in the last chapter was the case of the Cuban missile crisis, when the USAF photographs of installations and vehicles seen in Cuba, coupled with the secret intelligence from the MI6/CIA agent Col. Penkovsky, led analysts to warn President Kennedy that he was now faced with the Soviet Union introducing medium-range nuclear missile systems on to the island.

In the last chapter I described the method of Bayesian inference as the scientific way of adjusting our degree of belief in a hypothesis in the light of new evidence. You have evidence and use it to work backwards to assess what the most likely situation was that could have led to it being created. Let me provide a personal example to show that such Bayesian reasoning can be applied to everyday matters.

I remember Tony Blair when Prime Minister saying that he would have guessed that my background was in Defence. When I asked why, he replied because my shoes were shined. Most of Whitehall, he commented, had gone scruffy, but those used to working with the military had retained the habit of cleaning their shoes regularly.

We can use Bayesian reasoning to test that hypothesis, D, that I came from the MOD. Say 5 per cent of senior civil servants work in Defence, so the prior probability of D being true p(D) = 1/20 (5 per cent), which is the chance of picking a senior civil servant at random and finding he or she is from the MOD.

E is the evidence that my shoes are shined. Observation in the Ministry of Defence and around Whitehall might show that 7 out of 10 Defence senior civil servants wear shiny shoes but only 4 out of 10 in civil departments do so. So the overall probability of finding shiny shoes is the sum of that for Defence and that for civil departments:

p(E) = (1/20)*(7/10) + (1 - 1/20)*(4/10) = 83/200

The posterior probability that I came from Defence is written as p(D|E) (where, remember, the vertical bar is to be read as ‘given’). From Bayes’s theorem, as described in Chapter 1:

p(D|E) = p(D).[p(E|D)/p(E)] = 1/20*[7/10*200/83] = 7/83 = approx. 1/12

Using Bayesian reasoning, the chance of the PM’s hypothesis being true is almost double what would be expected from a random guess.

Bayesian inference is a powerful way of establishing explanations, the second stage of the SEES method. The example can be set out in a 2 by 2 table (say, applied to a sample of 2000 civil servants) showing the classifications of shined shoes/not shined shoes and from Defence/not from Defence. I leave it to the reader to check that the posterior probability p(D|E) found above using Bayes’s theorem can be read from the first column of the table as 70/830 = approx. 1/12. Without seeing the shined shoes, the prior probability that I come from the MOD would be 100/2000, or 1/20.


             | E: shined shoes | Not shined shoes | Totals
D: from MOD  |              70 |               30 |    100
Not from MOD |             760 |             1140 |   1900
Totals       |             830 |             1170 |   2000
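
For readers who like to check the arithmetic, here is a minimal Python sketch of the shoe-shine calculation; the figures are the same illustrative ones used above, not data.

    # Bayesian update for the shined-shoes example (illustrative numbers from the text).
    prior_mod = 1 / 20            # p(D): chance a senior civil servant is from Defence
    p_shine_given_mod = 7 / 10    # p(E|D): proportion of MOD officials with shined shoes
    p_shine_given_other = 4 / 10  # p(E|not D): proportion elsewhere in Whitehall

    # Total probability of seeing shined shoes, p(E)
    p_shine = prior_mod * p_shine_given_mod + (1 - prior_mod) * p_shine_given_other

    # Bayes's theorem: p(D|E) = p(D) * p(E|D) / p(E)
    posterior_mod = prior_mod * p_shine_given_mod / p_shine

    print(f"p(E)   = {p_shine:.3f}")        # 0.415, i.e. 83/200
    print(f"p(D|E) = {posterior_mod:.3f}")  # about 0.084, roughly 1/12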




Now imagine a real ‘big data’ case with an array of hundreds or thousands of dimensions to cater for large numbers of different types of evidence. Bayes’s theorem still holds as the method of inferring posterior probabilities (although the maths gets complicated). That is how inferences are legitimately to be drawn from big data. The medical profession is already experiencing the benefits of this approach.5 The availability of personal data on internet use also provides many new opportunities to derive valuable results from data analysis. Cambridge Analytica boasted that it had 4000–5000 separate data points on each voter in the US 2016 Presidential election, guiding targeted political advertising, a disturbing application of Bayesian inference that we will return to in Chapter 10.
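
As a rough indication of how the same rule scales to many pieces of evidence, the sketch below simply chains Bayesian updates one indicator at a time. The assumption that the indicators are independent, and the numbers themselves, are mine for illustration; real large-scale systems use far more sophisticated models.

    # Illustrative only: combining several indicators with repeated Bayesian updates,
    # assuming (simplistically) that each indicator is independent of the others.
    def update(prior, p_e_if_true, p_e_if_false):
        """Posterior degree of belief after one piece of evidence."""
        p_e = prior * p_e_if_true + (1 - prior) * p_e_if_false
        return prior * p_e_if_true / p_e

    belief = 0.05  # assumed initial degree of belief in the hypothesis
    # Each pair: p(evidence | hypothesis true), p(evidence | hypothesis false)
    indicators = [(0.7, 0.4), (0.8, 0.3), (0.6, 0.5)]

    for p_true, p_false in indicators:
        belief = update(belief, p_true, p_false)
        print(f"belief after this indicator: {belief:.3f}")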

In all sustained thinking, assumptions do have to be made; the important thing is to be prepared, in the light of new evidence challenging those assumptions, to rethink the approach. A useful pragmatic test is to ask, at any given stage of serious thinking: if I make this assumption and it turns out not to be sensible, am I making myself worse off in terms of my chances of success than if I had not made it? Put another way, if my assumption turns out to be wrong, will I end up actually worse off in my search for the answer, or merely no better off?

For example, if you have a four-wheel combination bicycle lock and forget the number you could start at 0000, then 0001, 0002, and so on all the way up to 9999, knowing that at some point the lock will open. But you might make the reasonable assumption that you would not have picked a number commencing with 0, so you start at 1000. Chances are that saves you work. But if your assumption is wrong you are no worse off.

As a general rule it is the explanatory hypothesis with the least evidence against it that is most likely to be the best one for us to adopt. The logic is that one strong contrary result can disconfirm a hypothesis. Apparently confirmatory evidence, on the other hand, can still be consistent with other hypotheses being true. In that way the analyst can avoid the trap (the inductive fallacy6) of thinking that being able to collect more and more evidence in favour of a proposition necessarily increases confidence in it. If we keep looking in Europe to discover the colour of swans, then we will certainly conclude, by piling up as many reports as we like, that they are all white. If eventually we seek evidence from Australia then the infamous ‘black swan’ appears and contradicts our generalization.7 When there are more reports in favour of hypothesis A than its inverse, hypothesis B, it is not always sensible to prefer A to B if we suspect that the amount of evidence pointing to A rather than B has been affected by how we set about searching for it.

A well-studied lesson of the dangers of misinterpreting complex situations is the ‘security dilemma’, when rearmament steps taken by one nation with purely defensive intent trigger fears in a potential adversary, leading it to take its own defensive steps that then appear to validate the original fears. The classic example is a decision by country A to modernize by building a new class of battleships. That induces anxiety in country B that an adverse military balance is thereby being built up against it. That leads to decisions on the part of country B also to build up its forces. That rearmament intention in turn is perceived as threatening by country A, not only justifying the original decision to have a new class of battleships but prompting the ordering of yet more ships. The worst fears of country B about the intentions of country A are thus confirmed. And an arms race starts. As the Harvard scholar Ben Buchanan has pointed out, such mutual misassessments of motivation are even more likely to be seen today in cyberspace since the difference between an intrusion for espionage purposes and one for sabotage need only be a few lines of code.8 There is thus ample scope for interpreting detected intrusions as potentially hostile, on both sides. Acts justified as entirely defensive by one government are therefore liable to be labelled as offensive in motivation by another – and vice versa.

We can easily imagine an established couple, call them Alice and Bob, one of whom, Bob, is of a jealous nature. Alice one day catches Bob with her phone reading her texts. Alice feels this is an invasion of her privacy, and increases the privacy settings on her phone. Bob takes this as evidence that Alice must have something to hide and redoubles his efforts to read her text messages and social media posts, which in turn causes Alice to feel justified in her outrage at being mistrusted and spied on. She takes steps to be even more secretive, setting in train a cycle of mistrust likely, if not interrupted, to gravely damage their relationship.

Explaining your conclusions

Margaret Thatcher was grateful for the weekly updates she received from the JIC. She always wanted to be warned when previous assessments had changed. But she complained that the language the JIC employed was too often ‘nuanced’. ‘It would be helpful’, she explained, ‘if key judgments in the assessments could be highlighted by placing them in eye-catching sentences couched in plainly expressed language.’9

In the case of the Falklands that I mentioned in Chapter 1, the JIC had been guilty of such nuance in their July 1981 assessment. They had explained that they judged that the Argentine government would prefer to achieve its objective (transfer of sovereignty) by peaceful means. Thereby the JIC led readers to infer that if Argentina believed the UK was negotiating in good faith on the future of the Islands, then it would follow a peaceful policy, adding that if Argentina saw no hope of a peaceful transfer of sovereignty then a full-scale invasion of FI could not be discounted. Those in London privy to the Falklands negotiations knew the UK wanted a peaceful solution too. Objectively, nevertheless, the current diplomatic efforts seemed unlikely to lead to a mutually acceptable solution. But for the JIC to say that would look like it was straying into political criticism of ministerial policy and away from its brief of assessing the intelligence. There was therefore no trigger for reconsideration of the controversial cuts to the Royal Navy announced the year before, including the plan to scrap the Falklands-based ice patrol ship HMS Endurance. Inadvertently, and without consciously realizing they had done so, the UK had taken steps that would have reinforced in the minds of the Junta the thought that the UK did not see the Islands as a vital strategic interest worth fighting for. The Junta might reasonably have concluded that if Argentina took over the Islands by force the worst it would face would be strong diplomatic protest.

Explaining something that is not self-evident is a process that reduces a complex problem to simpler elements. When analysts write an intelligence assessment they have to judge which propositions they can rely on as known to their readers and thus do not need explaining or further justification. That Al Qa’ida under Bin Laden was responsible for the attacks on 9/11 is now such a building block. That the Russian military intelligence directorate, the GRU, was responsible for the attempted murder of the Skripals in Salisbury in 2018 is likewise a building block for discussions of Russian behaviour. That Saddam Hussein in Iraq was still pursuing an unlawful biological warfare programme in 2002 was treated as a building block – wrongly, and therein lies the danger. That was a proposition that had once been true but (unbeknown to the analysts) was no longer. The mental maps being used by the analysts to interpret the reports being received were out of date and were no longer an adequate guide to reality. As the philosopher Richard Rorty has written: ‘We do not have any way to establish the truth of a belief or the rightness of an action except by reference to the justifications we offer for thinking what we think or doing what we do.’10

Here, however, lies another lesson in trying to explain very complex situations in terms of simpler propositions.11 The temptation is to cut straight through complex arguments by presenting them in instantly recognizable terms that the reader or listener will respond to at an emotional level. We do this when we pigeonhole a colleague with a label like ‘difficult’ or ‘easy to work with’. We all know what we are meant to infer when a politician makes reference in a television interview or debate to the Dunkirk spirit, the appeasement of fascism in the 1930s, Pearl Harbor and the failure to anticipate surprise attacks, or Suez and the overestimation of British power in the 1956 occupation of the Egyptian canal zone. ‘Remember the 2003 invasion of Iraq’ is now a similarly instantly recognizable meme for the alleged dangers of getting too close to the United States. Such crude narrative devices serve as a shorthand for a much more complex reality. They are liable to mislead more than enlighten. History does not repeat itself, even as tragedy.

The lesson in all of this is that an accurate explanation of what you see is crucial.

Testing explanations and choosing hypotheses

How do we know when we have arrived at a sufficiently convincing explanation? The US and British criminal justice systems rest on the testing in court of alternative explanations of the facts presented respectively by counsel for the prosecution and for the defence in an adversarial process. For the intelligence analyst the unconscious temptation will be to try too hard to explain how the known evidence fits their favoured explanation, and why contrary evidence should not be included in the report.

Where there is a choice of explanations apply Occam’s razor (named after the fourteenth-century Franciscan friar William of Occam) and favour the explanation that does not rely on complex, improbable or numerous assumptions, all of which have to be satisfied for the hypothesis to stand up. By adding ever more baroque assumptions any set of facts can be made to fit a favoured theory. This is the territory where conspiracies lurk. In the words of the old medical training adage, when you hear rapid hoof-beats think first galloping horses, not zebras escaping from a zoo.12

Relative likelihood

It is important when engaged in serious thinking about what is going on to have a sense of the relative likelihood of alternative hypotheses being true. We might say, for example, after examining the evidence that it is much more likely that the culprit behind a hacking attack is a criminal group rather than a hostile state intelligence agency. Probability is the language in which likelihoods are expressed. For example, suppose a six-sided die is being used in a gambling game. If I have a suspicion that the die is loaded to give more sixes, I can test the hypothesis that the die is fair by throwing the die many times. I know from first principles that an unbiased die tossed properly will fall randomly on any one of its six faces with a probability of [1/6]. The result of each toss of the die should produce a random result independent of the previous toss. Thus I must expect some clustering of results by chance, with perhaps three or even four sixes being tossed in a row (the probability of four sixes in a row is small: [1/6]x[1/6]x[1/6]x[1/6] ≈ 0.0008, less than 1 in a thousand. But it is not zero). I will therefore not be too surprised to find a run of sixes. But, evidently, if I throw the die 100 times and I return 50 sixes, then it is a reasonable conclusion that the die is biased. The more tosses of that particular die the more stable the proportion of sixes will be. Throw it 1,000 times, 10,000 times, and, if the result is consistent, our conclusion becomes more likely. A rational degree of belief in the hypothesis that the die is not fair comes from analysis of the data, seeing the difference between what results would be associated with the hypothesis (a fair die) and the alternative hypothesis (a die biased to show sixes).
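
To put numbers on that intuition, the sketch below compares how likely 50 sixes in 100 throws would be under a fair die and under a die loaded, say, to show a six half the time; the loading figure is an assumption of mine for illustration.

    from math import comb

    def binomial_prob(k, n, p):
        """Probability of exactly k successes in n independent trials, each with probability p."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, k = 100, 50        # 100 throws, 50 sixes observed
    p_fair = 1 / 6        # hypothesis: the die is fair
    p_loaded = 1 / 2      # assumed alternative: the die shows a six half the time

    likelihood_fair = binomial_prob(k, n, p_fair)
    likelihood_loaded = binomial_prob(k, n, p_loaded)

    print(f"p(50 sixes | fair die)   = {likelihood_fair:.1e}")    # vanishingly small
    print(f"p(50 sixes | loaded die) = {likelihood_loaded:.3f}")  # around 0.08
    print(f"likelihood ratio for the loaded die: {likelihood_loaded / likelihood_fair:.1e}")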

The key question to ask in that case is: if the die was fair, how likely is it that we would have seen 50 sixes in 100 throws? That is the approach of Bayesian inference we saw earlier in the chapter. The greater the divergence the more it is rational to believe that the evidence points to it not being a fair die. We have conducted what intelligence officers call an analysis of competing hypotheses (ACH), one of the most important structured analytic techniques in use in Western intelligence assessment, pioneered by CIA analyst Richards J. Heuer.13 The method is systematically to list all the possible explanations (alternative hypotheses) and to test each piece of evidence, each inference and each assumption made as to whether it is significant in choosing between them (this is known, by an ugly term, as the discriminatability of the intelligence report). We then prefer the explanation with the least evidence pointing against it.

Alas, in everyday life, most situations we come across cannot be tested under repeated trials. Nor can we know in advance, or work out from first principles, what ideal results to compare with our observed data (such as the characteristics of a fair die). We cannot know that a boss is exhibiting unfair prejudice against one of their team in the way we can establish that a die is biased. But if we have a hypothesis of bias we can rationally test it against the evidence of observed behaviour. We will have to apply judgement in assessing the motives of the people involved and in testing possible alternative explanations for their behaviour against the evidence, discriminating between these hypotheses as best we can. When we apply Bayesian inference to everyday situations in that way, we end up with a degree of belief in the hypothesis that we conclude best explains the observed data. That result is inevitably subjective, but is the best achievable from the available evidence. And, of course, we must always therefore be open to correction if fresh evidence is obtained.

Stage 2 of SEES: explaining

The first step in stage 2 of SEES is therefore to decide what possible explanations (hypotheses) to test against each other. Let me start with an intelligence example. Suppose secret intelligence reveals that the military authorities of a non-nuclear weapon State A are seeking covertly to import specialist high-speed fuses of a kind associated with the construction of nuclear weapons but that also have some research civilian uses. I cannot be certain that State A is pursuing a nuclear weapons programme in defiance of the international Non-Proliferation Treaty, although I might know that it has the capability to enrich uranium. The covert procurement attempts might be explicable by the caution on the part of State A that open attempts to purchase such fuses for civil use would be bound to be misunderstood. And the civil research institutions of State A might be using the military procurement route just for convenience since the military budget is larger. One hypothesis might be that the fuses are for a prohibited nuclear weapons programme. The obvious alternative would be that the fuses are for an innocent civil purpose. But there might be other hypotheses to test: perhaps the fuses were for some other military use. The important thing is that all the possible explanations should be caught by one or other of the hypotheses to be tested (in the jargon, exhausting the solution space). A further refinement might be to split the first hypothesis into two: a government-approved procurement for a nuclear weapons programme and one conducted by the military keeping the government in ignorance.

In that way we establish mutually exclusive hypotheses to test. Now we can turn to our evidence and see whether it helps to discriminate between them. We start with identifying key assumptions that might be swaying our minds and ask ourselves how the weight of evidence might shift if we change the assumptions (the analysts might, for example, take for granted that any nuclear research would be in the hands of the military). We identify inferences that we have drawn and whether they are legitimate (the fact that the end-user was not revealed on the procurement documents may imply that there is something to hide, or it may be just that overseas government procurement is carried out in that country via an import–export intermediary). Finally, we examine each piece of intelligence (not just secret intelligence of course; there are likely to be open sources as well) to see in Bayesian fashion whether it would be more likely under each of the hypotheses, and can thus help us discriminate between them. In doing this we check at the same time how confident we are in each piece of information being reliable, as we discussed in the preceding chapter.

Some of the intelligence reports may be consistent with all our hypotheses and they must be put to one side, however fascinating they are to read. Frustratingly, that can happen with reports of hard-to-get intelligence where perhaps lives have been risked to acquire it. A table (known in the trade as a Heuer table, after the pioneer of the use of structured analytic techniques, Richards J. Heuer) can be drawn up with separate columns for each hypothesis and rows for each piece of evidence, whose consistency with each hypothesis can then be logged in the table.

The first few rows of such a table might look like this:


Evidence | Source type / Credibility / Relevance | Hypothesis 1: Is related to plan to conduct nuclear-weapon-related experiments | Hypothesis 2: Can be explained by research for civil purposes
Evidence 1: known capability to enrich uranium provides motive | An assumption / Medium / High | Consistent | Consistent
Evidence 2: procurement was via an import–export company | An inference / High / Medium | Consistent | Less consistent
Evidence 3: military security seen around warehouse | Imagery / High / Medium | Consistent | Less consistent
Evidence 4: covert channels were used to acquire high-speed fuses | Humint / New source on trial / High | Consistent | Much less consistent
Evidence 5: encrypted high-grade military comms to and from the warehouse | Sigint / High / High | Consistent | Much less consistent

A hypothetical example of part of a Heuer table

It may become apparent that one particular report provides the dominant evidence, in which case wise analysts will re-examine the sourcing of the report. A lesson from experience (including that of assessing Iraq’s holdings of chemical and biological weapons in 2002) is that once we have chosen our favoured explanation we become unconsciously resistant to changing our mind. Conflicting information that arrives is then too easily dismissed as unreliable or ignored as an anomaly. The table method makes it easier to establish an audit trail of how analysts went about reaching their conclusions. A record of that sort can be invaluable if later evidence casts doubt on the result, perhaps raising suspicions that some of the intelligence reporting was deliberately fabricated as a deception. We will see in Chapter 5 how German, US and UK analysts were deliberately deceived by the reporting of an Iraqi defector into believing that in 2003 Saddam Hussein possessed mobile biological warfare facilities.

The analysis of competing hypotheses using Heuer tables is an example of one of the structured analytic techniques in use today in the US and UK intelligence communities. The method is applicable to any problem you might have where different explanations have to be tested against each other in a methodical way. Heuer himself cites Benjamin Franklin, writing in 1772 to Joseph Priestley (the discoverer of oxygen), describing his approach to making up his mind:

divide half a sheet of paper by a line into two columns; writing over the one Pro and over the other Con … put down over the different heads short hints of the different motives … for or against the measure. When I have thus got them all together in one view, I endeavour to estimate their relative weights; and where I find two, one on each side, that seem equal I strike them out. Thus proceeding I find where the balance lies … and come to a determination accordingly.

In any real example there is likely to be evidence pointing both ways so a weighing up at the end is needed. Following the logic of scientific method it is the hypothesis that has least evidence against it that is usually to be favoured, not the one with most in favour. That avoids the bias that could come from unconsciously choosing evidence to collect that is likely to support a favoured hypothesis. I invite you to try this structured technique for yourself the next time you have a tricky decision to take.
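
To show the tallying logic in miniature, here is a sketch that scores the two hypotheses from the hypothetical fuse example by counting the evidence inconsistent with each; treating the ‘less consistent’ entries of the earlier table as simply counting against a hypothesis is a simplification of mine.

    # Illustrative analysis-of-competing-hypotheses tally: prefer the hypothesis
    # with the least evidence against it, not the one with the most in favour.
    hypotheses = ["H1: nuclear-weapon-related programme", "H2: civil research purposes"]

    # "C" = consistent, "I" = counts against the hypothesis (a simplification of the
    # 'less consistent' gradations in the hypothetical Heuer table above).
    evidence = {
        "enrichment capability provides motive":  ["C", "C"],
        "procurement via import-export company":  ["C", "I"],
        "military security around the warehouse": ["C", "I"],
        "covert channels used to buy the fuses":  ["C", "I"],
        "encrypted military comms at warehouse":  ["C", "I"],
    }

    against = {h: 0 for h in hypotheses}
    for ratings in evidence.values():
        for hypothesis, rating in zip(hypotheses, ratings):
            if rating == "I":
                against[hypothesis] += 1

    for hypothesis, count in sorted(against.items(), key=lambda item: item[1]):
        print(f"{hypothesis}: {count} piece(s) of evidence against")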

A striking example of the importance of falsifying alternative theories rather than confirming the most favoured comes from an unexpected quarter: the 2016 US Presidential election. It was an election campaign beset with allegations of ‘fake news’ (including the false stories created and spread by Russian intelligence agents to try to discredit one candidate, Hillary Clinton). One of the stories spread online featured a photograph of a young Donald Trump with the allegation that, in an interview with People magazine in 1998, he said: ‘If I were to run, I would run as a Republican. They’re the dumbest group of voters in the country. They believe anything on Fox News. I could lie and they’d still eat it up. I bet my numbers would be terrific.’ That sounds just like Trump, but the only flaw is that he never said it to People magazine. A search of People magazine disconfirms that hypothesis: he gave no such interview.14 This story is an example of a falsifiable assertion. The hypothesis that he did say it can be checked and quickly shown to be untrue (that may of course have been the scheming intent of its authors, in order to lend support to the assertion that other anti-Trump stories were equally false). Most statements about beliefs and motivations are non-falsifiable and cannot be disproved in such a clear way. Instead, judgement is needed in reaching a conclusion that involves weighing evidence for and against, as we have seen with the Heuer method.

Assumptions and sensitivity testing

In this second stage of SEES, it is essential to establish how sensitive your explanation is to your assumptions and premises. What would it have taken to change my mind? Often the choice of explanation that is regarded as most likely will itself depend upon a critical assumption, so the right course is to make that dependency clear and to see whether alternative assumptions might change the conclusion reached. Assumptions have to be made, but circumstances can change, and what it was reasonable to take as a given at one time may not remain so.
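
A minimal sketch of such a sensitivity test, reusing the illustrative shoe-shine numbers from earlier in the chapter and sweeping a single assumption (the proportion of non-Defence officials who shine their shoes) to see how far the conclusion depends on it; the swept values are mine.

    # Sensitivity test: vary one assumption and watch how the posterior moves.
    prior = 1 / 20             # p(D): prior that an official is from Defence
    p_shine_given_d = 7 / 10   # p(E|D), held fixed

    for p_shine_given_not_d in (0.2, 0.4, 0.6, 0.7):  # the assumption being swept
        p_e = prior * p_shine_given_d + (1 - prior) * p_shine_given_not_d
        posterior = prior * p_shine_given_d / p_e
        print(f"if {p_shine_given_not_d:.0%} of other officials shine their shoes, "
              f"p(D|E) = {posterior:.3f}")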

Structured diagnostic techniques, such as comparing alternative hypotheses, have the great advantage that they force an analytic group to argue transparently through all the evidence, perhaps prompting double-checking of the reliability of some piece of intelligence on which the choice of hypothesis seems to rest, or exposing an underlying assumption that may no longer hold or that would not be sensible to make in the context of the problem being examined.

As we will see in the next chapter, turning an explanation into a predictive model that allows us to estimate how events will unfold is crucially dependent on honesty over the assumptions we make about human behaviour. Marriages are predicated on the assumption that both partners will maintain fidelity. Many is the business plan that has foundered because assumptions made in the past about consumer behaviour turned out to no longer be valid. Government policies can come unstuck, for example, when implicit assumptions, such as about whether the public will regard them as fair, turn out not to reflect reality. A striking example was the British Criminal Justice Act 1991 that made fines proportionate to the income of the offender, and collapsed on the outcry when two men fighting, equally to blame, were fined £640 and £64 respectively because they belonged to different income brackets.

Back in Serbia in 1995, General Mladić, to our surprise, simplified our assessment task of trying to understand and explain his motivations.

Pulling out a brown leather-backed notebook, every page filled with his own cramped handwriting, Mladić proceeded to read to us from it for over half an hour, recounting the tribulations of the Serb people at the hands both of the Croats and, as he put it, the Turks. He gave us his version of the history of his people, including the devastating Serbian defeat by the Ottoman Empire in 1389 at the Battle of the Field of Blackbirds. That was a defeat he saw as resulting in 500 years of Serbian enslavement. He recounted the legend that the angel Elijah had appeared to the Serb commander, Lazar, on the eve of the battle saying that victory would win him an earthly kingdom, but martyrdom would win a place for the Serb people in heaven. Thus even defeat was a spiritual triumph, and justified the long Serbian mission to recover their homeland from their external oppressors.

According to Mladić’s candid expression of his world view in that dining room in Serbia, he felt it was a continuing humiliation to have Muslims and Croats still occupying parts of the territory of Bosnia–Herzegovina, and an insult to have the West defending Bosnian Muslims in enclaves inside what he saw as his own country. In a dramatic climax to his narrative he ripped open his shirt and cried out, kill me now if you wish but I will not be intimidated, swearing that no foreign boot would be allowed to desecrate the graves of his ancestors.

Mladić had effectively given us the explanation we were seeking and answered our key intelligence question on his motivation for continuing to fight. We returned to our capitals convinced that the ultimatum had been delivered and understood, but Mladić would not be deterred from further defiance of the UN. The West would have to execute a policy U-turn to stop him, by replacing the UN peacekeepers with NATO combat troops under a UN mandate that could be safely backed by the use of air power. And so it worked out, first with the Anglo-French rapid reaction force on Mount Igman protecting Sarajevo and then the deployment of NATO forces including 20,000 US troops, all supported by a major air campaign.

I should add my satisfaction that the final chapter in the story concluded on 22 November 2017, when the Hague war crimes tribunal, with judges from the Netherlands, South Africa and Germany, ruled that, as part of Mladić’s drive to terrorize Muslims and Croats into leaving a self-declared Serb mini-state, his troops had systematically murdered several thousand Bosnian Muslim men and boys, and that groups of women, and girls as young as twelve years old, were routinely and brutally raped by his forces. The judges detailed how soldiers under Mladić’s command killed, brutalized and starved unarmed Muslim and Croat prisoners. Mladić was convicted of war crimes and sentenced to life imprisonment.

Conclusions: explaining why we are seeing what we do

Facts need explaining to understand why the world and the people in it are behaving as they appear to be. In this chapter, we have looked at how to seek the best ‘explanation’ of what we have observed or discovered about what is going on. If we wish to interpret the world as correctly as we can we should:

Recognize that the choice of facts is not neutral and may be biased towards a particular explanation.

Remember that facts do not speak for themselves and are likely to have plausible alternative explanations. Context matters in choosing the most likely explanation. Correlations between facts do not imply a direct causal connection.

Treat explanations as hypotheses each with a likelihood of being true.

Specify carefully alternative explanatory hypotheses to cover all the possibilities, including the most straightforward in accordance with Occam’s razor.

Test hypotheses against each other, using evidence that helps discriminate between them, an application of Bayesian inference.

Take care over how we may be unconsciously framing our examination of alternative hypotheses, risking emotional, cultural or historical bias.

Accept the explanatory hypothesis with the least evidence against it as most likely to be the closest fit to reality.

Generate new insights from sensitivity analysis of what it would take to change our mind.

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 3


Part One

AN ANALYST SEES: FOUR LESSONS IN ORDERING OUR THOUGHTS

1

Lesson 1: Situational awareness
Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong

London, 11 p.m., 20 April 1961. In room 360 of the Mount Royal Hotel, Marble Arch, London, four men are waiting anxiously for the arrival of a fifth. Built in 1933 as rented apartments and used for accommodation by US Army officers during the war, the hotel was chosen by MI6 as a suitably anonymous place for the first face-to-face meeting of Colonel Oleg Penkovsky of Soviet military intelligence, the GRU, with the intelligence officers who would jointly run him as an in-place agent of MI6 and CIA. When Penkovsky finally arrived he handed over two packets of handwritten notes on Soviet missiles and other military secrets that he had smuggled out of Moscow as tokens of intent. He then talked for several hours explaining what he felt was his patriotic duty to mother Russia in exposing to the West the adventurism and brinkmanship of the Soviet leader, Nikita Khrushchev, and the true nature of what he described as the rotten two-faced Soviet regime he was serving.1

The huge value of Penkovsky as a source of secret intelligence came from the combination of his being a trained intelligence officer and his access to the deepest secrets of the Soviet Union – military technology, high policy and personalities. He was one of the very few with his breadth of access allowed to visit London, tasked with talent spotting of possible sources for Soviet intelligence to cultivate in Western business and scientific circles.

Penkovsky had established an acquaintance with a frequent legitimate visitor to Moscow, a British businessman, Greville Wynne, and entrusted him with his life when he finally asked Wynne to convey his offer of service to MI6. From April 1961 to August 1962 Penkovsky provided over 5500 exposures of secret material on a Minox camera supplied by MI6. His material alone kept busy twenty American and ten British analysts, and his 120 hours of face-to-face debriefings occupied thirty translators, producing 1200 pages of transcript.

At the same time, on the other side of the Atlantic, intelligence staffs worried about the military support being provided by the Soviet Union to Castro’s Cuba. On 14 October 1962 a U2 reconnaissance aircraft over Cuba photographed what looked to CIA analysts like a missile site under construction. They had the top secret plans Penkovsky had passed to MI6 showing the typical stages of construction and operation for Soviet medium-range missile sites. In the view of the CIA, without this information it would have been very difficult to identify which type of nuclear-capable missiles were at the launch sites and track their operational readiness. On 16 October President Kennedy was briefed on the CIA assessment and shown the photographs. By 19 October he was told a total of nine such sites were under construction and had been photographed by overflights. On 21 October the British Prime Minister, Harold Macmillan, was informed by President Kennedy that the entire US was now within Soviet missile range with a warning time of only four minutes. Macmillan’s response is recorded as ‘now the Americans will realize what we here in England have lived through these past many years’. The next day, after consultation with Macmillan, the President instituted a naval blockade of Cuba.2

The Cuban missile crisis is a clear example of the ability intelligence has to create awareness of a threatening situation, the first component of the SEES model of intelligence analysis. The new evidence turned US analysts’ opinion on its head. They had previously thought the Soviets would not dare to attempt introducing nuclear missile systems in the Western hemisphere. Now they had a revised situational awareness of what the United States was facing.

There is a scientific way of assessing how new evidence should alter our beliefs about the situation we face, the task of the first stage of the SEES method. That is the Bayesian approach to inference, widely applied in intelligence analysis, modern statistics and data analysis.3 The method is named after the Rev. Thomas Bayes, the eighteenth-century Tunbridge Wells cleric who first described it in a note on probability found among his papers after his death in 1761.

The Bayesian approach uses conditional probability to work backwards from seeing evidence to the most likely causes of that evidence existing. Think of the coin about to be tossed by a football referee to decide which side gets to pick which goal to attack in the first half of the game. To start with it would be rational to estimate that there is a 50 per cent probability that either team will win the toss. But what should we think if we knew that in every one of the last five games involving our team and the same referee we had lost the toss? We would probably suspect foul play and reduce our belief that we stand an even chance of winning the toss this time. That is what we describe as the conditional probability, given that we now know the outcome of previous tosses. It is different from our prior estimate. What Bayesian inference does in that case is give us a scientific method of starting with the evidence of past tosses to arrive at the most likely cause of those results, such as a biased coin.
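
As a rough sketch of the update the coin-toss story implies, assume for illustration a prior belief of 5 per cent that the toss is rigged and that a rigged toss always goes against us; neither number comes from the text.

    # Illustrative update after losing the toss five games running.
    p_rigged = 0.05                  # assumed prior belief that the toss is rigged
    p_lose_all_if_fair = 0.5 ** 5    # 1/32: chance of losing five fair tosses in a row
    p_lose_all_if_rigged = 1.0       # assumed: a rigged toss is always lost

    p_evidence = (p_rigged * p_lose_all_if_rigged
                  + (1 - p_rigged) * p_lose_all_if_fair)
    posterior_rigged = p_rigged * p_lose_all_if_rigged / p_evidence

    print(f"belief that the toss is rigged rises from {p_rigged:.0%} to {posterior_rigged:.0%}")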

Bayesian inference helps us to revise our degree of belief in the likelihood of any proposition being true given our learning of evidence that bears on it. The method applies even when, unlike the coin-tossing example, we only have a subjective initial view of the likelihood of the proposition being true. An example would be the likelihood of our political party winning the next election. In that case it might then be new polling evidence that causes us to want to revise our estimate. We can ask ourselves how far the new evidence helps us discriminate between alternative views of the situation or, as we should term them, alternative hypotheses, about what the outcome is likely to be. If we have a number of alternatives open to us, and the evidence is more closely associated with one of them than the alternatives, then it points us towards believing more strongly that that is the best description of what we face.

The Bayesian method of reasoning therefore involves adjusting our prior degree of belief in a hypothesis on receipt of new evidence to form a posterior degree of belief in it (‘posterior’ meaning after seeing the evidence). The key to that re-evaluation is to ask the question: if the hypothesis was actually true how likely is it that we would have been able to see that evidence? If we think that evidence is strongly linked to the hypothesis being true, then we should increase our belief in the hypothesis.

The analysts in the Defense Intelligence Agency in the Pentagon had originally thought it was very unlikely that the Soviet Union would try to introduce nuclear missiles into Cuba. That hypothesis had what we term a low prior probability. We can set this down precisely using notation that will come in handy in the next chapter. Call the hypothesis that nuclear missiles would be introduced N. We can write their prior degree of belief in N as a prior probability p(N) lying between 0 and 1. In this case, since they considered N very unlikely, they might have given p(N) a probability value of 0.1, meaning only 10 per cent likely.

The 14 October 1962 USAF photographs forced them to a very different awareness of the situation. They saw evidence, E, consistent with the details Penkovsky had provided of a Soviet medium-range nuclear missile installation under construction. The analysts suddenly had to face the possibility that the Soviet Union was introducing such a capability into Cuba by stealth. They needed to find the posterior probability p(N|E) (read as the reassessed probability of the hypothesis N given the evidence E where the word ‘given’ is written using the vertical line |).

The evidence in the photographs was much more closely associated with the hypothesis that these were Soviet nuclear missile launchers than any alternative hypothesis. Given the evidence in the photographs, they did not appear to be big trucks carrying large pipes for a construction site, for instance. The chances of the nuclear missile hypothesis being true given the USAF evidence will be proportionate to p(E|N), which is the likelihood of finding that evidence on the assumption that N is true. That likelihood was estimated as much greater than the overall probability that such photographs might have been seen in any case (which we can write as p(E)). The relationship between the nuclear missile hypothesis and the evidence seen, that of p(E|N) to p(E), is the factor we need to convert the prior probability p(N) to the posterior probability that the decisionmaker needs, p(N|E).

The Rev. Bayes gave us the rule to calculate what the posterior probability is:

p(N|E) = p(N). [p(E|N)/p(E)]

Or, the new likelihood of something being the case given the evidence is found by adjusting what you thought was likely (before you saw the evidence) by how well the new evidence supports the claim of what could be happening.

This is the only equation in this book. Despite wanting to talk as plainly as possible, I’ve included it because it turns words into precise calculable conditional likelihoods which is what so much of modern data science is about. In the next chapter we examine how we can apply Bayes’s great insight to work backwards, inferring from observations what are the most likely causes of what we see.

The example of the Cuban missile crisis shows Bayesian logic in action to provide new situational awareness. For example, if the analysts had felt that the photographs could equally well have been of a civil construction site, and so were equally likely whether or not N was true (i.e. whether or not these were nuclear missile launchers), then p(E|N) would be the same as p(E), the factor in Bayes’s rule would be unity and the posterior probability would be no different from the prior. The President would not be advised to change his low degree of belief that Khrushchev would dare try to introduce nuclear missiles into Cuba. If, on the other hand, E would be much more likely to be seen in cases where N is true (which is what the Penkovsky intelligence indicated), then it is a strong indicator that N is indeed true and p(E|N) will be greater than p(E). So p(N|E) rises significantly. For the Pentagon analysts p(N|E) would have been much nearer to 1, a near certainty. The President was advised to act on the basis that Soviet nuclear missiles were in America’s backyard.
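
To make that factor concrete, here is a small sketch with illustrative figures: the prior of 0.1 is the one suggested above, while the two likelihoods are assumptions of mine.

    # Posterior belief that Soviet nuclear missiles (N) are being installed,
    # given the U2 photographs (E). The prior follows the text; the two
    # likelihoods below are illustrative assumptions.
    p_n = 0.1                # prior p(N)
    p_e_given_n = 0.9        # photographs very likely if missile sites are being built
    p_e_given_not_n = 0.01   # such imagery very unlikely to arise otherwise

    p_e = p_n * p_e_given_n + (1 - p_n) * p_e_given_not_n
    p_n_given_e = p_n * p_e_given_n / p_e   # Bayes's rule: p(N|E) = p(N) * p(E|N) / p(E)

    print(f"p(E)   = {p_e:.3f}")
    print(f"p(N|E) = {p_n_given_e:.2f}")    # about 0.91 with these assumed numbers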

Kennedy’s key policy insight in 1962 was recognition that Khrushchev would only have taken such a gamble over Cuba having been persuaded that it would be possible to install the missiles on Cuba covertly, and arm them with nuclear warheads, before the US found out. The US would then have discovered that the Soviet Union was holding at immediate risk the entire Eastern seaboard of the US, but would have been unable to take action against Cuba or the missiles without running unacceptable risk. Once the missiles had been discovered before they were operational, it was then the Soviet Union that was carrying the risk of confrontation with the naval blockade Kennedy had ordered. Kennedy privately suggested a face-saving ladder that Khrushchev could climb down (by offering later withdrawal of the old US medium-range missiles based in Turkey), which Khrushchev duly accepted. The crisis ended without war.

The story of President Kennedy’s handling of the Cuban missile crisis has gone down as a case study in bold yet responsible statecraft. It was made possible by having situational awareness – providing the what, who, where and when that the President needed based on Penkovsky’s intelligence on technical specifications about Soviet nuclear missiles, their range and destructive power, and how long they took to become operational after they were shipped to a given location. That last bit of intelligence persuaded Kennedy that he did not need to order air strikes to take out the missile sites immediately. His awareness of the time he had gave him the option of trying to persuade Khrushchev that he had miscalculated.

Bayesian inference is central to the SEES method of thinking. It can be applied to everyday matters, especially where we may be at risk of faulty situational awareness. Suppose you have recently been assigned to a project that looks, from the outside, almost impossible to complete successfully on time and in budget. You have always felt well respected by your line manager, and your view of the situation is that you have been given this hard assignment because you are considered highly competent and have an assured future in the organization. However, at the bottom of an email stream that she had forgotten to delete before forwarding, you notice that your manager calls you ‘too big for your boots’. Working backwards from this evidence you might be wise to infer that it is more likely your line manager is trying to pull you down a peg or two, perhaps by getting you to think about your ability to work with others, by giving you a job that will prove impossible. Do try such inferential reasoning with a situation of your own.

Most intelligence analysis is a much more routine activity than the case of the Cuban missile crisis. The task is to try to piece together what’s going on by looking at fragmentary information from a variety of sources. The Bayesian methodology is the same in weighing information in order to be able to answer promptly the decisionmakers’ need to know what is happening, when and where and who is involved.

When data is collected in the course of intelligence investigations, scientific experiments or just in the course of web browsing and general observation, there is a temptation to expect that it will conform to a known pattern. Most of the data may well fit nicely. But some may not. That may be because there are problems with the data (source problems in intelligence, experimental error for scientists) or because the sought-for pattern is not an accurate enough representation of reality. It may be that the bulk of the observations fit roughly the expected pattern. But more sensitive instruments or sources with greater access may also be providing data that reveals a new layer of reality to be studied. In the latter case, data that does not fit what has been seen before may be the first sighting of a new phenomenon that cries out to be investigated, or, for an intelligence officer, that could be the first sign that there is a deception operation being mounted. How to treat such ‘outliers’ is thus often the beginning of new insights. Nevertheless, it is a natural human instinct to discard or explain away information that does not fit the prevailing narrative. ‘Why spoil a good story’ is the unconscious thought process. Recognizing the existence of such cases is important in learning to think straight.

Penkovsky had quickly established his bona fides with MI6 and the CIA. But our judgements depend crucially on assessing how accurate and reliable the underlying information base is. What may be described to you as a fact about some event of interest deserves critical scrutiny to test whether we really do know the ‘who, what, where and when’. In the same way, an intelligence analyst would insist when receiving a report from a human informant on knowing whether this source had proved to be regular and reliable, like Penkovsky, or was a new untested source. Like the historian who discovers a previously unknown manuscript describing some famous event in a new way, the intelligence officer has to ask searching questions about who wrote the report and when, and whether they did so from first-hand knowledge, or from a sub-source, or even from a sub-sub-source with potential uncertainty, malicious motives or exaggeration being introduced at each step in the chain. Those who supply information owe the recipient a duty of care to label carefully each report with descriptions to help the analyst assess its reliability. Victims of village gossip and listeners to The Archers on BBC Radio 4 will recognize the effect.

The best way to secure situational awareness is when you can see for yourself what is going on, although even then be aware that appearances can be deceptive, as optical illusions demonstrate. It would always repay treating with caution a report on a social media chat site of outstanding bargains to be had on a previously unknown website. Most human eyewitness reporting needs great care to establish how reliable it is, as criminal courts know all too well. A good intelligence example where direct situational awareness was hugely helpful comes from the Falklands conflict. The British authorities were able to see the flight paths of Argentine air force jets setting out to attack the British Task Force because they had been detected by a mountaintop radar in Chile, and the Chilean government had agreed their radar picture could be accessed by the UK.

Experienced analysts know that their choice of what deserves close attention and what can be ignored is a function of their mental state at the time.4 They will be influenced by the terms in which they have been tasked but also by how they may have unconsciously formulated the problem. The analysts will have their own prejudices and biases, often from memories of previous work. In the words of the tradecraft primer for CIA officers:

‘These are experience-based constructs of assumptions and expectations both about the world in general and more specific domains. These constructs strongly influence what information analysts will accept – that is, data that are in accordance with analysts’ unconscious mental models are more likely to be perceived and remembered than information that is at odds with them.’5 Especial caution is needed therefore when the source seems to be showing you what you had most hoped to see.

The interception and deciphering of communications and the product of eavesdropping devices usually have high credibility with intelligence analysts because it is assumed those involved do not realize their message or conversation is not secure and therefore will be speaking honestly. But that need not be the case, since one party to a conversation may be trying to deceive the other, or both may be participating in an attempt to deceive a third party, such as the elaborate fake communications generated before the D-Day landings in June 1944 to create the impression of a whole US Army Corps stationed near Dover. That, combined with the remarkable double agent operation that fed back misleading information to German intelligence, provided the basis of the massive deception operation mounted for D-Day (Operation Fortitude). The main purpose was to convince the German High Command that the landings in Normandy were only the first phase with the main invasion to follow in the Pas de Calais. That intelligence-led deception may have saved the Normandy landings from disaster by persuading the German High Command to hold back an entire armoured division from the battle.

Unsubstantiated reports (at times little more than rumour) swirl around commercial life, are picked up in the business sections of the media and are a driver of market behaviour. As individuals, the sophisticated analysts of the big investment houses may well not be taken in by some piece of market gossip. But they may well believe that the average investor will be, and that the market will move, and as a consequence they have to make their investment decisions as if the rumour were true. It was that insight that enabled the great economist John Maynard Keynes to make so much money for his alma mater, King’s College Cambridge, in words much quoted today in the marketing material of investment houses: ‘successful investing is anticipating the anticipation of others’.6 Keynes described this process in his General Theory as a beauty contest:

It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.7

The Penkovsky case had a tragic ending. His rolls of film had to be delivered by dead drop in the teeth of Soviet surveillance, using methods later made famous by John le Carré’s fictional spies, including the mark on the lamppost to indicate there was material to pick up. That task fell to Janet Chisholm, the wife of Penkovsky’s SIS case officer working under diplomatic cover in the Moscow Embassy. She had volunteered to help and was introduced to Penkovsky during one of his official visits to London. It was no coincidence therefore that her children were playing on the pavement of Tsvetnoy Boulevard while she watched from a nearby bench, at the exact moment Oleg Penkovsky in civilian clothes walked past. He chatted to the children and offered them a small box of sweets (that he had been given for that purpose during his meeting in London) within which were concealed microfilms of documents that Penkovsky knew would meet London’s and Washington’s urgent intelligence requirements. Similar drops of film followed. She was, however, later put under routine surveillance and by mischance she was seen making a ‘brush contact’ with a Russian whom the KGB could not immediately identify but who triggered further investigations. That and other slips made by Penkovsky led finally to his arrest. His go-between, the British businessman Greville Wynne, was then kidnapped during a business trip to Budapest, and put on show trial in Moscow alongside Penkovsky. Both were found guilty. Penkovsky was severely tortured and shot. Wynne spent several years in a Soviet prison until exchanged in 1964 in a spy swap for the convicted KGB spy Gordon Lonsdale (real name Konon Molody) and his cut-outs, an antiquarian bookseller and his wife, Peter and Helen Kroger, who had helped him run a spy ring against the UK Admiralty research establishment at Portland.

The digital revolution in information gathering

Today a Penkovsky could more safely steal secret missile plans by finding a way of accessing the relevant database. That is true for digital information of all kinds if there is access to classified networks. Digital satellite imagery provides global coverage. The introduction of remotely piloted aircraft with high-resolution cameras provides pin-sharp digitized imagery for operational military, security and police purposes, as well as for farming, pollution control, investigative journalism and many other public uses. At any incident there are bound to be CCTV cameras and individuals with mobile phones (or drones) that have high-resolution cameras able to take video footage of the event – and media organizations such as broadcasters advertise the telephone numbers to which such footage can be instantly uploaded. Every one of us is potentially a reconnaissance agent.

There is the evident risk that we end up with simply too much digital data to make sense of. The availability of such huge quantities of digitized information increases the importance of devising artificial intelligence algorithms to sort through it and highlight what appears to be important.8 Such methods rely upon applying Bayesian inference to learn how best to search for the results we want the algorithms to detect. They can be very powerful (and more reliable than a human would be) if the task they are given is clear-cut, such as checking whether a given face appears in a large set of facial images or whether a specimen of handwriting matches any of those in the database. But these algorithms are only as reliable as the data on which they were trained, and spurious correlations are to be expected. The human analyst is still needed to examine the selected material and to add meaning to the data.9
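
As an illustration of the kind of clear-cut matching task described above, the sketch below (my own, not drawn from the book) compares a query face embedding against a stored database by cosine similarity and reports matches above a threshold. In a real system the embeddings would come from a trained face-recognition model, and the threshold would be tuned to balance false matches against misses; here random stand-ins are used just to show the mechanics.

```python
import numpy as np

# Illustrative matching sketch: embeddings would normally come from a trained model;
# random vectors stand in for them here.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
query = database["person_42"] + rng.normal(scale=0.05, size=128)  # a noisy new image of person_42

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.9  # invented value; chosen in practice from training data
matches = [(name, cosine(query, emb)) for name, emb in database.items()]
matches = [(name, score) for name, score in matches if score >= THRESHOLD]
print(sorted(matches, key=lambda x: -x[1])[:3])  # best candidate matches, if any
```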

At the same time, we should remember that the digital world also provides our adversaries with ample opportunities to operate anonymously online and to hack our systems and steal our secrets. Recognition of these cyber-vulnerabilities has led the liberal democracies to give their security and intelligence agencies access to powerful digital intelligence methods, under strict safeguards, to be able to search data in bulk for evidence about those who are attacking us.

One side effect of the digitization of information is the democratization of situational awareness. We can all play at being intelligence analysts given our access to powerful search engines. Anyone with a broadband connection and a mobile device or computer has information power undreamed of in previous eras. There is a new domain of open-source intelligence, or OSINT. We use this ourselves when trying to decide which party to vote for in an election and want to know what each candidate stands for, or ascertaining the level of property prices in a particular area, or researching which university offers us the most relevant courses. The Internet potentially provides the situational awareness that you need to make the right decision. But like intelligence officers you have to be able to use it with discrimination.

The tools available to all of us are remarkable. Catalogues of image libraries can be searched to identify in fractions of a second a location, person, artwork or other object. Google Images has indexed over 10 billion photographs, drawings and other images. By entering an address almost anywhere in the world, Google Street View will enable you to see the building and take a virtual drive round the neighbourhood with maps providing directions and overlays of information. The position of ships and shipping containers can be displayed on a map, as can the location of trains across much of Europe.

With ingenuity and experience, an internet user can often generate situational awareness to rival that of intelligence agencies and major broadcasting corporations. The not-for-profit organization Bellingcat10 is named after Aesop’s fable in which the mice propose placing a bell around the neck of the cat so that they are warned in good time of its approach, but none will volunteer to put the bell on it. Bellingcat publishes the results of non-official investigations by private citizens and journalists into war crimes, conditions in war zones and the activities of serious criminals. Its most recent high-profile achievement was to publish the real identities of the two GRU officers responsible for the attempted murder of the former MI6 agent and GRU officer Sergei Skripal and his daughter in Salisbury and the death of an innocent citizen.

It requires practice to become proficient in retrieving situational information from the 4.5 billion indexed pages of the World Wide Web (growing by about one million documents a day) and the hundreds of thousands of accessible databases. Many sites are specialized and may take skill, effort and the inclination to find (a location map of fishing boats around the UK, for example, should you ever want to know, can be found at fishupdate.com).

Although huge, the indexed surface web accessible by a search engine is estimated to be only 0.03 per cent of the total Internet. Most of the Internet, the so-called deep web, is hidden from normal view, for largely legitimate reasons, since it is not intended for casual access by an average user. These are sites that can only be accessed if you already know their location, such as corporate intranets and research data stores, and most will be password-protected. In addition to the deep web, a small part of the Internet is the so-called ‘dark web’ or ‘dark net’ with its own indexing, which can only be reached if specialist anonymization software such as Tor is being used to hide the identity of the inquirer from law enforcement.11 The dark net thus operates according to different rules from the rest of the Internet that has become so much a part of all of our daily lives.

An analogy for the deep web would be the many commercial buildings, research laboratories and government facilities in any city that the average citizen has no need to access, but when necessary can be entered by the right person with the proper pass. The dark net, to develop that cityscape analogy, can be thought of like the red-light district in a city with a small number of buildings (sometimes very hard to find), where access is controlled because the operators want what is going on inside to remain deeply private. At one time, these would have been speakeasies, illegal gambling clubs, flophouses and brothels, but also the meeting places of impoverished young artists and writers, political radicals and dissidents. Today it is where the media have their secure websites which their sources and whistleblowers can access anonymously.

I guess we have all cursed when clicking on the link for a web page we wanted, only to be met with the error message ‘404 Page Not Found’. Your browser communicated with the server, but the server could not locate the web page where it had been indexed. The average lifespan of a web page is under 100 days, so skill is needed in using archived web material to retrieve sites that have been mislabelled, moved or removed from the web. Politicians may find it useful that claims they make to the electorate can thus quickly disappear from sight, but there are search methods that can retrieve old web pages and enable comparison with their views today.12 Most search engines use asterisks to denote wild cards, so a query that includes ‘B*n Lad*n’ will search through the different spellings of his name such as Ben Laden, Bin Laden (the FBI-preferred spelling), Bin Ladin (the CIA-preferred spelling) and so on. Another useful lesson is the use of the tilde, the ~ character on the keyboard: prefacing a query term with ~ will result in a search for synonyms as well as the specific query term, and will also look for alternative endings. Finally, you can ask the search to ignore a word by placing a minus sign in front of it, as –query. The meta-search engine Dogpile will return answers taken from other search engines, including Google and Yahoo.
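
As a small, hedged illustration of retrieving an old web page, the sketch below queries the Internet Archive’s public Wayback Machine availability endpoint, one commonly used archive service, for the snapshot of a URL closest to a given date. This is my own example rather than one given in the book, and the response format assumed here is the simple JSON the availability endpoint returns; other archive services work differently.

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str = "20150101") -> str | None:
    """Return the archived snapshot URL closest to the given YYYYMMDD timestamp, if one exists."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    api = f"https://archive.org/wayback/available?{query}"
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Example: find how a news front page looked around the start of 2015.
print(closest_snapshot("bbc.co.uk/news"))
```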

The order in which results are presented to you after entering a query into a search engine can give a misleading impression of what is important. The answers that are returned (miraculously, in a very small fraction of a second) may have been selected in a number of different ways. The top answer may be there as the result of a publicity-based search – a form of product placement where a company, interest group or political party has paid to have its results promoted in that way (or has used one of the specialist companies that offer, for a fee, to deliver that result for advertisers). A search on property prices in an area will certainly flag up local estate agents who have paid for the marketing advantage of appearing high up on the page. The answers will also take account of the accumulated knowledge in the search database of past answers, and of which answers have been most frequently clicked for further information (a popularity-based search, thus tapping into a form of ‘wisdom of the crowd’). This can be misleading. While it may be interesting to see the results of a search for information about university courses sorted by which such searches were most popular, it is hardly helpful if what you want to know about is all the courses available that match your personal interests.

Finally, and perhaps most disturbingly, the suggested answers to the query may represent a sophisticated attempt by the algorithm to conduct a personalized search by working out what it is that the user is most likely to want to know (in other words, inferring why the question is being asked) from the user’s previous internet behaviour and any other personal information about the individual accessible by the search engine. Two different people entering the same search terms on different devices will therefore get a different ranking of results. My query ‘1984?’ using the Google Chrome browser and the Google search engine brings up George Orwell’s dystopian novel along with suggestions of how I can most conveniently buy or download a copy. Helpfully, the Wikipedia entry on the book is also high up on the first page of the 1.49 billion results I am being offered (in 0.66 seconds). The same query using the Apple Safari browser and its search engine brings up first an article about the year 1984 telling me it was a leap year. And a very different past browsing history might highlight references to the assassination of Indira Gandhi in 1984, or news that the release of the forthcoming film Wonder Woman 1984 has been postponed to 2020.

Internet searching is therefore a powerful tool for acquiring the components of situational awareness. That is, for as long as we can rely on an open Internet. If the authorities were to insist that the search algorithms did not reference Orwell’s book in response to queries from their citizens about 1984 then we would indeed have entered Orwell’s dystopian world. That, sadly, is likely to be the ambition of authoritarian regimes that will try to use internet technology for social control.

Conclusions: lessons in situational awareness

In this chapter, we have been thinking about the first stage of SEES, the task of acquiring what I have termed situational awareness, knowing about the here and now. Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong. But something has attracted our attention and we need to know more. It may be because we have already thought about what the future may bring and had strategic notice of areas we needed to monitor. Or it may be that some unexpected observation or report we have received triggers us to focus our attention. There are lessons we can learn about how to improve our chances of seeing clearly what is going on when answering questions that begin with ‘who, what, where and when’.

We should in those circumstances:

Ask how far we have access to sufficient sources of information.

Understand the scope of the information that exists and what we need to know but do not.

Review how reliable the sources of information we do have are.

If time allows, collect additional information as a cross-check before reaching a conclusion.

Apply Bayesian inference, using new information to adjust our degree of belief about what is going on.

Be open and honest about the limitations of what we know, especially in public, and be conscious of the public reactions that may be triggered.

Be alive to the possibility that someone is deliberately trying to manipulate, mislead, deceive or defraud us.

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 2

Daniel Craig as James Bond in Spectre

SEES: a model of analytical thinking

I am now a visiting professor teaching intelligence studies in the War Studies Department at King’s College London, at Sciences Po in Paris and also at the Defence University in Oslo. My experience is that it really helps to have a systematic way of unpacking the process of arriving at judgements and establishing the appropriate level of confidence in them. The model I have developed – let me call it by an acronym that recalls what analysts do as they look at the world, the SEES model – leads you through the four types of information that can form an intelligence product, derived from different levels of analysis:

Situational awareness of what is happening and what we face now.

Explanation of why we are seeing what we do and the motivations of those involved.

Estimates and forecasts of how events may unfold under different assumptions.

Strategic notice of future issues that may come to challenge us in the longer term.

There is a powerful logic behind this four-part SEES way of thinking. Take as an example the investigation of far-right extremist violence. The first step is to find out as accurately as possible what is going on. As a starting point, the police will have had crimes reported to them and will have questioned witnesses and gathered forensic evidence. These days there is also a lot of information available on social media and the Internet, but the credibility of such sources will need careful assessment. Indeed, even well-attested facts are susceptible to multiple interpretations, which can lead to misleading exaggeration or underestimation of the problem.

We need to add meaning so that we can explain what is really going on. We do that in the second stage of SEES by constructing the best explanation consistent with the available evidence, including an understanding of the motives of those involved. We see this process at work in every criminal court when prosecution and defence barristers offer the jury their alternative versions of the truth. For example, why are the fingerprints of an accused on the fragments of a beer bottle used for a petrol bomb attack? Was it because he threw the bottle, or is the explanation that it was taken out of his recycling box by the mob looking for material to make weapons? The court has to test these narratives and the members of the jury have then to choose the explanation that they think best fits the available evidence. The evidence rarely speaks for itself. In the case of an examination of extremist violence, in the second stage we have to arrive at an understanding of the causes that bring such individuals together. We must learn what factors influence their anger and hatred. That provides the explanatory model that allows us to move on to the third stage of SEES, when we can estimate how the situation may change over time, perhaps following a wave of arrests made by the police and successful convictions of leading extremists. We can estimate how likely it is that arrest and conviction will lead to a reduction in threats of violence and public concern overall. It is this third step that provides the intelligence feedstock for evidence-based policymaking.

The SEES model has an essential fourth component: to provide strategic notice of longer-term developments. Relevant to our example we might want to examine the further growth of extremist movements elsewhere in Europe or the impact on such groups were there to be major changes in patterns of refugee movements as a result of new conflicts or the effects of climate change. That is just one example, but there are very many others where anticipating future developments is essential to allow us to prepare sensibly for the future.

The four-part SEES model can be applied to any situation that concerns us and where we want to understand what has happened, why, and what may happen next, from being stressed out by a situation at work to your sports team losing badly. SEES is applicable to any situation where you have information and want to make a decision on how best to act on it.

We should not be surprised to find patterns in the different kinds of error tending to occur when working on each of the four components of the SEES process. For example:

Situational awareness suffers from all the difficulties of assessing what is going on. Gaps in information exist and often evoke a reluctance to change our minds in the face of new evidence.

Explanations suffer from weaknesses in understanding others: their motives, upbringing, culture and background.

Estimates of how events will unfold can be thrown out by unexpected developments that were not considered in the forecast.

Strategic developments are often missed due to too narrow a focus and a lack of imagination as to future possibilities.

The four-part SEES approach to assessment is not just applicable to affairs of state. At heart it contains an appeal to rationality in all our thinking. Our choices, even between unpalatable alternatives, will be sounder as a result of adopting systematic ways of reasoning. That includes being able to distinguish between what we know, what we do not know and what we think may be. Such thinking is hard. It demands integrity.

Buddhists teach that there are three poisons that cripple the mind: anger, attachment and ignorance.7 We have to be conscious of how emotions such as anger can distort our perception of what is true and what is false. Attachment to old ideas with which we feel comfortable and that reassure us that the world is predictable can blind us to threatening developments. This is what causes us to be badly taken by surprise. But it is ignorance that is the most damaging mental poison. The purpose of intelligence analysis is to reduce such ignorance, thereby improving our capacity to make sensible decisions and better choices in our everyday lives.

On that fateful day in March 1982 Margaret Thatcher had immediately grasped what the intelligence reports were telling her. She understood what the Argentine Junta appeared to be planning and the potential consequences for her premiership. Her next words demonstrated her ability to use that insight: ‘I must contact President Reagan at once. Only he can persuade Galtieri [General Leopoldo Galtieri, the Junta’s leader] to call off this madness.’ I was deputed to ensure that the latest GCHQ intelligence was being shared with the US authorities, including the White House. No. 10 rapidly prepared a personal message from Thatcher to Reagan asking him to speak to Galtieri and to obtain confirmation that he would not authorize any landing, let alone any hostilities, and warning that the UK could not acquiesce in any invasion. But the Argentine Junta stalled requests for a Reagan conversation with Galtieri until it was much too late to call off the invasion.

Only two days later, on 2 April 1982, the Argentine invasion and military occupation of the Islands duly took place. There was only a small detachment of Royal Marines on the Islands and a lightly armed ice patrol ship, HMS Endurance, operating in the area. No effective resistance was possible. The Islands were too far away for sea reinforcements to arrive within the two days’ notice the intelligence had given us, and the sole airport had no runway capable of taking long-distance troop-carrying aircraft.

We had lacked adequate situational awareness from intelligence on what the Junta was up to. We had failed to understand the import of what we did know, and therefore had not been able to predict how events would unfold. Furthermore, we had failed over the years to provide strategic notice that this situation was one that might arise, and so had failed to take steps that would have deterred an Argentine invasion. Failures in each of the four stages of SEES analysis.

All lessons to be learned.

How this book is organized

The four chapters in the first part of this book are devoted to the aforementioned SEES model. Chapter 1 covers how we can establish situational awareness and test our sources of information. Chapter 2 deals with causation and explanation, and how the scientific method of Bayesian inference allows us to use new information to alter our degree of belief in our chosen hypothesis. Chapter 3 explains the process of making estimates and predictions. Chapter 4 describes the advantage that comes from having strategic notice of long-term developments.

There are lessons from these four phases of analysis in how to avoid different kinds of error: failing to see what is in front of us, misunderstanding what we do see, misjudging what is likely to follow and failing to have the imagination to conceive of what the future may bring.

Part Two of this book has three chapters, each drawing out lessons in how to keep our minds clear and check our reasoning.

We will see in Chapter 5 how cognitive biases can subconsciously lead us to the wrong answer (or to fail to be able to answer the question at all). Being forewarned of those very human errors helps us sense when we may be about to make a serious mistake of interpretation.

Chapter 6 introduces us to the dangers of the closed-loop conspiratorial mindset, and how it is that evidence which ought to ring alarm bells can too often be conveniently explained away.

The lesson of Chapter 7 is to beware deliberate deceptions and fakes aimed at manipulating our thinking. There is misinformation, which is false but circulated innocently; malinformation, which is true but is exposed and circulated maliciously; and disinformation, which is false, and that was known to be false when circulated for effect. The ease with which digital text and images can be manipulated today makes these even more serious problems than in the past.

Part Three explores three areas of life that call for the intelligent use of intelligence.

The lessons of Chapter 8 are about negotiating with others, something we all have to do. The examples used come from extraordinary cases of secret intelligence helping to shape perceptions of those with whom governments have to negotiate, and of how intelligence can help build mutual trust – necessary for any arms control or international agreement to survive – and help uncover cheating. We will see how intelligence can assist in unravelling the complex interactions that arise from negotiations and confrontations.

Chapter 9 identifies how you go about establishing and maintaining lasting partnerships. The example here is the successful longstanding ‘5-eyes’ signals intelligence arrangement between the US, the UK, Canada, Australia and New Zealand, drawing out principles that are just as applicable to business and even to personal life.

The lesson of Chapter 10 is that our digital life provides new opportunities for the hostile and unscrupulous to take advantage of us. We can end up in an echo chamber of entertaining information that unconsciously influences our choices, whether over products or politics. Opinion can be mobilized by controlled information sources, with hidden funding and using covert opinion formers. When some of that information is then revealed to be knowingly false, confidence in democratic processes and institutions slowly ebbs away.

The concluding chapter, Chapter 11, is a call to shake ourselves awake and recognize that we are all capable of being exploited through digital technology. The lessons of this book put together an agenda to uphold the values that give legitimacy to liberal democracy: the rule of law; tolerance; the use of reason in public affairs and the search for rational explanations of the world around us; and our ability to make free and informed choices. When we allow ourselves to be over-influenced by those with an agenda, we erode our free will, and that is the gradual erosion of an open society. Nobody should be left vulnerable to the arguments of demagogues or snake oil salesmen. The chapter, and the book, therefore end on an optimistic note.

We can learn the lessons of how to live safely in this digital world.

David Omand – How Spies Think – 10 Lessons in Intelligence

Book launch - Professor Sir David Omand: How Spies Think | The Strand Group
Sir David Omand, Former Director of the Government Communications Headquarters (GCHQ)



Contents

Introduction. Why we need these lessons in seeking independence of mind, honesty and integrity

PART ONE: AN ANALYST SEES: FOUR LESSONS IN ORDERING OUR THOUGHTS

Lesson 1: Situational awareness. Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong

Lesson 2: Explanation. Facts need explaining

Lesson 3: Estimations. Predictions need an explanatory model as well as sufficient data

Lesson 4: Strategic notice. We do not have to be so surprised by surprise

PART TWO: THREE LESSONS IN CHECKING OUR REASONING

Lesson 5: It is our own demons that are most likely to mislead us

Lesson 6: We are all susceptible to obsessive states of mind

Lesson 7: Seeing is not always believing: beware manipulation, deception and faking

PART THREE: THREE LESSONS IN MAKING INTELLIGENT USE OF INTELLIGENCE

Lesson 8: Imagine yourself in the shoes of the person on the other side

Lesson 9: Trustworthiness creates lasting partnerships

Lesson 10: Subversion and sedition are now digital

PART FOUR

A final lesson in optimism

Acknowledgements

Notes and further reading

Index

About the Author

David Omand was the first UK Security and Intelligence Coordinator, responsible to the Prime Minister for the professional health of the intelligence community, national counter-terrorism strategy and ‘homeland security’. He served for seven years on the Joint Intelligence Committee. He was Permanent Secretary of the Home Office from 1997 to 2000, and before that Director of GCHQ.

For Keir, Robert, Beatrice and Ada, in the hope that you will grow up in a better world

Introduction

Why we need these lessons in seeking independence of mind, honesty and integrity

Westminster, March 1982. ‘This is very serious, isn’t it?’ said Margaret Thatcher. She frowned and looked up from the intelligence reports I had handed her. ‘Yes, Prime Minister,’ I replied, ‘this intelligence can only be read one way: the Argentine Junta are in the final stages of preparing to invade the Falkland Islands, very likely this coming Saturday.’

It was the afternoon of Wednesday, 31 March 1982.

I was the Principal Private Secretary to the Defence Secretary, John Nott. We were in his room in the House of Commons drafting a speech when an officer from the Defence Intelligence Staff rushed down Whitehall with a locked pouch containing several distinctive folders. I knew immediately from the red diagonal crosses on their dark covers that they contained top secret material with its own special codeword (UMBRA), denoting that they came from the Government Communications Headquarters (GCHQ).

The folders contained decrypted intercepts of Argentine naval communications. The messages showed that an Argentine submarine had been deployed on covert reconnaissance around the Falklands capital, Port Stanley, and that the Argentine Fleet, which had been on exercises, was reassembling. A further intercept referred to a task force said to be due to arrive at an unstated destination in the early hours of Friday, 2 April. From their analysis of the coordinates of the naval vessels, GCHQ had concluded that its destination could only be Port Stanley.1

John Nott and I looked at each other with but one thought: loss of the Falkland Islands would bring a major existential crisis for the government of Margaret Thatcher. The Prime Minister must be told at once. We hurried down the Commons corridor to her room and burst in on her.

The last assessment she had received from the UK Joint Intelligence Committee (JIC) had told her that Argentina did not want to use force to secure its claim to the sovereignty of the Falkland Islands. However, the JIC had warned that if there was highly provocative action by the British towards Argentine nationals, who had landed illegally on the British South Atlantic island of South Georgia, then the Junta might use this as a pretext for action. Since the UK had no intention of provoking the Junta, the assessment was wrongly interpreted in Whitehall as reassuring. That made the fresh intelligence reports all the more dramatic. It was the first indication that the Argentine Junta was ready to use force to impose its claim.

The importance for us of being able to reason

The shock of seeing the nation suddenly pitched into the Falklands crisis is still deeply etched in my memory. It demonstrated to me the impact that errors in thinking can have. This is as true for all life as it is for national statecraft. My objective in writing this book therefore is an ambitious one: I want to empower people to make better decisions by learning how intelligence analysts think. I will provide lessons from our past to show how we can know more, explain more and anticipate more about what we face in the extraordinary age we now live in.

There are important life lessons in seeing how intelligence analysts reason. By learning what intelligence analysts do when they tackle problems, by observing them in real cases from recent history, we will learn how they order their thoughts and how they distinguish the likely from the unlikely and thus make better judgements. We will learn how to test alternative explanations methodically and judge how far we need to change our minds as new information arrives. Sound thinkers try to understand how their unconscious feelings as individuals, as members of a group and within an institution might affect their judgement. We will also see how we can fall victim to conspiracy thinking and how we can be taken in by deliberate deception.

We all face decisions and choices, at home, at work, at play. Today we have less time to make up our minds than ever before. We are in the digital age, bombarded with contradictory, false and confusing information from more sources than ever. Information is all around us and we feel compelled to respond at its speed. There are influential forces at play ranged against us, pushing specific messages and opinions through social media. Overwhelmed by all this information, are we less, or more, ignorant than in previous times? Today more than ever, we need those lessons from the past.

Looking over the shoulder of an intelligence analyst

Over the centuries, generals naturally learned the advantage that intelligence can bring. Governments today deliberately equip themselves with specialist agencies to access and analyse information that can help them make better decisions.2 Britain’s Secret Intelligence Service (MI6) runs human agents overseas. The Security Service (MI5) and its law enforcement partners investigate domestic threats and conduct surveillance on suspects. The Government Communications Headquarters (GCHQ) intercepts communications and gathers digital intelligence. The armed forces conduct their share of intelligence gathering in their operations overseas (including photographic intelligence from satellites and drones). It is the job of the intelligence analyst to fit all the resulting pieces together. They then produce assessments that aim to reduce the ignorance of the decisionmakers. They find out what is happening, they explain why it is happening and they outline how things might develop.3

The more we understand about the decisions we have to take, the less likely it is that we will duck them, make bad choices or be seriously surprised. Much of what we need can come from sources that are open to anyone, provided sufficient care is taken to apply critical reasoning to them.

Reducing the ignorance of the decisionmaker does not necessarily mean simplifying. Often the intelligence assessment has to warn that the situation is more complicated than they had previously thought, that the motives of an adversary are to be feared and that a situation may develop in a bad way. But it is better to know than not. Harbouring illusions on such matters leads to poor, or even disastrous, decisions. The task of the intelligence officer is to tell it as it is to government. When you make decisions, it is up to you to do the same to yourself.

The work of intelligence officers involves stealing the secrets of the dictators, terrorists and criminals who mean us harm. This is done using human sources or technical means to intrude into the privacy of personal correspondence or conversations. We therefore give our intelligence officers a licence to operate by ethical standards different from those we would hope to see applied in everyday life, justified by the reduction in harm to the public they can achieve.4 Authoritarian states may well feel that they can dispense with such considerations and encourage their officers to do whatever they consider necessary, regardless of law or ethics, to achieve the objectives they have been set. For the democracies such behaviours would quickly undermine confidence in both government and intelligence services. Consequently, intelligence work is carefully regulated under domestic law to ensure it remains necessary and proportionate. I should therefore be clear. This book does not teach you how to spy on others, nor should it encourage you to do so. I want, however, to show that there are lessons from the thinking behind secret intelligence from which we can all benefit. This book is a guide to thinking straight, not a manual for bad behaviour.

Nor does thinking straight mean emotionless, bloodless calculation. ‘Negative capability’ was how the poet John Keats described the writer’s ability to pursue a vision of artistic beauty even when it led to uncertainty, confusion and intellectual doubt. For analytic thinkers the equivalent ability is tolerating the pain and confusion of not knowing, rather than imposing ready-made or omnipotent certainties on ambiguous situations or emotional challenges. To think clearly we must have a scientific, evidence-based approach which nevertheless holds a space for the ‘negative capability’ needed to retain an open mind.5

Intelligence analysts like to look ahead, but they do not pretend to be soothsayers. There are always going to be surprise outcomes, however hard we try to forecast events. The winner of the Grand National or the Indy 500 cannot be known in advance. Nor does the favourite with the crowds always come out in front. Events sometimes combine in ways that seem destined to confound us. Importantly, risks can also provide opportunities if we can use intelligence to position ourselves to take advantage of them.

Who am I to say this?

Intelligence agencies prefer to keep quiet about successes so that they can repeat them, but failures can become very public. I have included examples of both, together with a few glimpses from my own experience – one that spans the startling development of the digital world. It is sobering to recall that in my first paid job, in 1965, in the mathematics department of an engineering company in Glasgow, we learned to write machine code for the early computers then available using five-character punched paper tape for the input. Today, the mobile device in my pocket has immediate access to more processing power than there was then in the whole of Europe. This digitization of our lives brings us huge benefits. But it is also fraught with dangers, as we will examine in Chapter 10.

In 1969, fresh out of Cambridge, I joined GCHQ, the British signals intelligence and communications security agency, and learned of their pioneering work applying mathematics and computing to intelligence. I gave up my plans to pursue a doctorate in (very) theoretical economics, and the lure of an offer to become an economic adviser in HM Treasury. I chose instead a career in public service that would take me into the worlds of intelligence, defence, foreign affairs and security. In the Ministry of Defence (MOD), as a policy official, I used intelligence to craft advice for ministers and the Chiefs of Staff. I had three tours in the Private Office of the Secretary of State for Defence (serving six of them, from Lord Carrington in 1973 to John Nott in 1981) and saw the heavy burden of decisionmaking in crisis that rests at the political level. I saw how valuable good intelligence can be, and the problems its absence causes. When I was working as the UK Defence Counsellor in NATO Brussels it was clear how intelligence was shaping arms control and foreign policy. And as the Deputy Under Secretary of State for Policy in the MOD I was an avid senior customer for operational intelligence on the crisis in the former Yugoslavia. In that role I became a member of the Joint Intelligence Committee (JIC), the most senior intelligence assessment body in the UK, on which I served for a total of seven years.

When I left the MOD to go back to GCHQ as its Director in the mid-1990s, computing was transforming the ability to process, store and retrieve data at scale. I still recall the engineers reporting triumphantly to me that they had achieved for the first time stable storage of a terabyte of rapidly accessible data memory – a big step then, although my small laptop today has half as much again. Even more significantly, the Internet had arrived as an essential working domain for professionals, with the World Wide Web gaining in popularity and Microsoft’s new Hotmail service making email a fast and reliable form of communication. We knew digital technology would eventually penetrate into every aspect of our lives and that organizations like GCHQ would have to change radically to cope.6

The pace of digital change has been faster than predicted. Then, smartphones had not been invented, nor of course had Facebook, Twitter, YouTube and all the other social media platforms and apps that go with them. What would become Google was at that point a research project at Stanford. Within this small part of my working lifetime, I saw those revolutionary developments, and much more, come to dominate our world. In less than twenty years, our choices in economic, social and cultural life have become dependent on accessing networked digital technology and learning to live safely with it. There is no way back.

When I was unexpectedly appointed Permanent Secretary of the Home Office in 1997, it brought close contact with MI5 and Scotland Yard. Their use of intelligence was in investigations to identify and disrupt domestic threats, including terrorist and organized crime groups. It was in that period that the Home Office drew up the Human Rights Act and legislation to regulate and oversee investigatory powers to ensure a continual balancing act between our fundamental rights to life and security and the right to privacy for our personal and family life. My career as a Permanent Secretary continued with three years in the Cabinet Office after 9/11 as the first UK Security and Intelligence Coordinator. In that post, rejoining the JIC, I had responsibility for ensuring the health of the British intelligence community and for drawing up the first UK counter-terrorism strategy, CONTEST, still in force in 2020 as I write.

I offer you in this book my choice of lessons drawn from the world of secret intelligence both from the inside and from the perspective of the policymaker as a user of intelligence. I have learned the hard way that intelligence is difficult to come by, and is always fragmentary and incomplete, and is sometimes wrong. But used consistently and with understanding of its limitations, I know it shifts the odds in the nation’s favour. The same is true for you.

 

 

Recommended Book – Spies in the Vatican: The Soviet Union’s Cold War Against the Catholic Church

Already notorious for the aggressive, paranoid oppression of its own citizens, the Soviet Union also waged a ruthless espionage war against the Catholic Church and its faithful. From the persecution of local priests to an assassination order against Pope John Paul II, the KGB saw Catholicism as a threat to security in Eastern Europe and treated the Church as an enemy of the State.

Veteran journalist and former U.S. Army Intelligence Officer John Koehler has written the definitive book on this remarkable history. Drawing on previously unseen documents and transcripts, including details of how the KGB, Gorbachev and the Politburo supported and encouraged the 1981 assassination attempt against Pope John Paul II, Koehler portrays the Soviet network of spies and sleeper agents, from Dominican priests to Vatican secretaries, who helped the KGB penetrate the Church’s structures even at the community level. But what is often most striking is the extraordinary courage of ordinary believers who offered shelter and protection to persecuted clergy, despite the threat of their own arrest or execution.

The KGB’s efforts to purge the Soviet Union of the Church’s ‘subversive influence’ would eventually backfire. The shared sense of solidarity created by these attacks, compounded by the host of grievances brought on by decades of harsh Soviet rule, would culminate in the birth of the Solidarity movement after a visit by the Pope in 1979. This extraordinary history of the Soviet Union’s cold war against the Catholic Church is an essential and significant contribution to the writing of twentieth-century history.

An investigation into the Cold War efforts by Soviet and East European Communist powers to penetrate the Vatican and undermine its popular influence.

Journalist and former Army intelligence officer Koehler (Stasi: The Untold Story of the East German Secret Police, 1999) mines reports obtained from the files of the East German and Hungarian secret police, as well as Moscow’s Politburo, to build the account of a sustained effort over decades to blunt the power of the anti-communist Roman Church in communist countries. Following the Soviet takeover in Russia, the author asserts, the revolutionary council may have planted its first agent against the Catholic Church in that country as early as 1922. Purges and even executions of clergy followed, sparing no Christian denomination at first, but when the Soviets later struck a deal with the Russian Orthodox Church it created a rift that effectively drove the Roman church underground by 1941. As the Cold War continued, the Russian KGB obtained a significant intelligence report on the Vatican’s ‘Ostpolitik’ policy – to resist the suppression of religious freedom in Eastern Europe and support anti-communist movements – via the Polish services. The Soviet use of clerical agents, many of them Polish, became a regular threat, countered by the Vatican’s measures, which at one point included an American Jesuit priest who became the Vatican’s top spy-catcher. The two sides occasionally ‘turned’ each other’s operatives into double agents. The CIA became actively involved, especially during the Reagan administration, using the Vatican as an intelligence asset but also as a ‘leak’ point through which to feed selected information to Moscow. Koehler diligently tracks the story through the decades, but the account is overloaded with facts and short on dramatic tension.

The heavy reliance on official archives allows for little human drama and undercuts the intrigue the author frequently overstates.