Lesson 5: It is our own demons that are most likely to mislead us

‘Well, you can kiss my ass in Macy’s window’ was the brutal one-line dismissal by Ava, the CIA’s Iraq Group Chief, of the over-reliance of US Biological Warfare (BW) expert analysts on a single human intelligence source on Saddam Hussein’s BW programmes, codenamed Curveball. When she challenged the experts’ faith in using information from that source, in her words, ‘they looked at me like pigs looking at a wristwatch’.1 Although not a weapons specialist, Ava, as an experienced intelligence officer, could sense that there might be a problem with the source. Her intervention was bound to be unpopular, not least because pressure was mounting from the Bush administration to prepare for the invasion of Iraq and to justify this by revealing the extent of Saddam’s holdings of illegal weapons of mass destruction. That included exposing his past use of BW weapons – deliberately engineered to spread lethal and incapacitating disease and among the most horrific known to man – and what were assessed to be his current capabilities.

Curveball seemed the answer to the BW experts’ prayers. He was an Iraqi chemical engineer who had turned up in a German refugee camp claiming to have worked on Saddam’s BW programmes and ready to spill the beans. To the old operational hands in the CIA and Britain’s MI6 he seemed too good to be true. The German overseas intelligence service, the BND, took charge of Curveball and between January 2000 and September 2001 shared almost 100 reports based on his debriefing with defence intelligence in the US and UK. Crucially, Curveball claimed that Iraq had built several mobile BW production units and that one of those units had begun production of lethal BW agents as early as 1997. A diagram of a truck adapted to be a mobile BW production unit based on Curveball’s information was even included in the presentation by Colin Powell, Secretary of State, to the UN Security Council as part of the US justification for war.

The problem was, those mobile BW units did not exist. Curveball had invented them. The experts fell for his story.

After the war Curveball, real name Rafid Ahmed Alwan al-Janabi, was tracked down by journalists. He admitted that he had lied in his reporting, and said that he had watched in shock as it was used to justify the war. He told them he fabricated tales of mobile BW trucks and clandestine factories in an attempt to bring down the Saddam Hussein regime, from which he had fled. He added: ‘Maybe I was right, maybe I was not right … they gave me this chance. I had the chance to fabricate something to topple the regime. I and my sons are proud of that …’2

Before the invasion of Iraq in 2003 CIA and MI6 humint (human intelligence) professionals had developed doubts about the credibility of Curveball, not least the CIA Iraq section chief, Ava, quoted above, and her counterparts in London. Although believing that much of Curveball’s reporting was technically credible (he was after all a chemical engineer), they were not convinced that he was a wholly reliable source since not all his reporting checked out, and elements of his behaviour struck them as typical of individuals whom intelligence agencies would normally assess as fabricators.

One obstacle in checking out their suspicions was that the BND would not provide US or UK analysts with direct access to Curveball. The analysts did not know whether Curveball had been offered inducements, such as a German passport and assistance with resettlement. Nor how the questioning had been conducted. They wondered if the witness had been inadvertently led and had been able to infer what US analysts were most keen to know – and thus what information would most please them (always a problem with defectors). There were rumours about his drinking. Several inconsistencies were detected in Curveball’s reporting which heightened doubts about his reliability. Disturbingly, the quality of intelligence from him seemed to get better over time. That might be his increasing confidence in the good faith of those questioning him, or it might be he was working out what to say that would produce the best reward.

Great efforts were made by the US and UK intelligence services to check out Curveball. Investigation into his background and university records revealed that he had indeed been trained in Iraq as a chemical engineer. He was known to have been involved on the fringes of Saddam’s 1990 BW programme. On the one hand that made his reporting of what was currently going on entirely credible from a technical point of view; on the other hand, it put him in an ideal position to exaggerate or even make up details if he so chose.

In London, analysts pored over aerial photographs of Iraq trying to identify the locations for prohibited activity described by Curveball to see if his stories stacked up. One site seemed to be on the wrong side of the river from where he described it – perhaps a slip of memory. Or perhaps an ominous sign that he was fabricating. In 2001 Curveball’s description of a facility that he claimed was involved in the mobile BW programme was contradicted by imagery of the site, which showed a wall blocking the view to what Curveball had claimed were the mobile trailers. Analysts explained away this discrepancy by speculating that the wall spotted by imagery might be a temporary structure put up by the Iraqis to deceive US satellite reconnaissance efforts. In another instance, Iraq was said to be filling BW warheads at a transportable facility near Baghdad. When imagery was unable to locate the transportable BW systems at the reported site, analysts assumed this was another example of Iraq hiding activities from US satellite over-passes.

There is a very human tendency to search for or interpret information in a way that confirms one’s preconceptions. It is comforting to think the information coming in bears out our prior beliefs. Psychologists call this confirmation bias. Confirmation bias is accentuated by the tendency for people to scrutinize fiercely information which contradicts their prior beliefs (sometimes known as disconfirmation bias) while accepting too readily without criticism information that is consistent with their preconceptions.
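For readers who like to see the arithmetic, here is a minimal sketch, my own illustration rather than anything drawn from the Iraq case, of how confirmation and disconfirmation bias distort judgement. Two notional analysts receive the same evenly balanced stream of reports; one applies Bayes’ rule symmetrically, while the other quietly discounts every report that contradicts the favoured hypothesis. The numbers and the discounting rule are assumptions chosen purely for illustration.

```python
# A minimal sketch of disconfirmation bias: contrary reports are 'explained away'
# (their force is blunted), so belief drifts upwards on evidence that is in fact a wash.

def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Each report is a likelihood ratio: >1 supports the hypothesis, <1 undermines it.
reports = [3.0, 1 / 3, 3.0, 1 / 3, 3.0, 1 / 3]   # an evenly balanced stream of evidence

neutral = biased = 0.5                            # both analysts start from the same prior
for lr in reports:
    neutral = update(neutral, lr)
    # The biased analyst scrutinizes and discounts contrary reports (illustrative rule).
    if lr < 1:
        lr = lr ** 0.5
    biased = update(biased, lr)

print(f"Neutral analyst's final belief: {neutral:.2f}")   # ends back at 0.50
print(f"Biased analyst's final belief:  {biased:.2f}")    # drifts to about 0.84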

Placing too much weight on Curveball’s reporting on biological weapons was not the only error that Western intelligence encountered in trying to assess the state of Iraq’s capabilities. Analysts also misinterpreted the intelligence that was being reported on Saddam’s chemical weapons programmes. The analysts misled themselves, not, in that case, through being subject to deliberate deception, but through a series of individual and collective cognitive errors. Burned by the experience of having been deceived by Saddam over the extent of his WMD capabilities, as uncovered by UN inspectors after the first Gulf War, analysts started with the strong presumption that he was playing the same game in 2002. They felt able to dismiss contrary indications that Iraq might not actively be pursuing its prohibited programmes by chalking these indicators up to Iraq’s well-known denial and deception efforts. That outlook, that Saddam must be hiding prohibited materials not surrendered after the first Gulf War in 1991, was shared across all Western intelligence agencies.

Such was the strength of this pre-war ‘group think’ that when eventually UN inspectors returned to Iraq in 2003, the US and UK analysts were slow to admit openly to their bosses and to each other any secret thoughts they might have been harbouring that the reason the inspectors were not finding the predicted stocks of BW and CW weapons and material was because they no longer existed inside Iraq.

In hindsight, a key lesson was the failure to distinguish by the Bush and Blair governments between those parts of the intelligence assessments that were based on hard evidence (such as on Saddam’s prohibited missile-testing capabilities) and those that rested on extrapolations and assumptions made by the analysts confident they already knew the answer. As Colin Powell lectured CIA analysts after the war, in future he wanted them to ‘Tell me what you know. Tell me what you don’t know. And tell me what you think’, to which a highly experienced analyst added: ‘And make clear which is which.’3

Another conclusion that is evident is that once suspicion has taken hold it breeds yet more suspicion. Saddam Hussein found this out in 2002 as he tried to persuade the West that he no longer retained the chemical and biological weapons capability that he had used against Iran and against his own people, and that he had gone to such lengths previously to conceal. His assurances to the West that work on those programmes had stopped, while failing to comply with the UN’s demands for full accounting of his past capability, were not surprisingly disbelieved. As the CIA’s Director, George Tenet, wrote in his memoirs: ‘Before the war, we didn’t understand that he was bluffing, and he did not understand that we were not.’4

The need to check our reasoning

The errors in the intelligence assessments over Iraq were not the result of conscious politicization of intelligence by the analysts to please their customers. They resulted from the great capacity of the mind for self-deception and magical thinking, believing we are seeing what deep down at an emotional level we want to see. It is then natural to find reasons to rationalize that belief.

One of the advantages of using the four-part SEES model, as discussed in the chapters of Part I, is that it makes it easier to spot at each level the cognitive biases that lead us to see what we want to see. We met this phenomenon in Chapter 1 with the example of how British Second World War deception efforts fooled the German High Command by feeding them information that they wanted to believe. We saw a different form of cognitive bias in Chapter 2 when policymakers anxious not to get involved militarily resisted seeing the developing Bosnian conflict as potential genocide. In Chapter 3 we identified the cognitive problem of mirror-imaging on the part of Western analysts failing to forecast how the communist regime in Moscow would react to the reform movement in Czechoslovakia in 1968. In Chapter 4 we had the example of the head of Israeli military intelligence convincing himself he had a way of providing strategic notice of Egyptian readiness to consider an attack on Israel, an error in imagination that was almost literally fatal for the state of Israel.

The vulnerability of analysts to cognitive biases was systematically examined in the 1970s and 1980s by Richards ‘Dick’ Heuer, a long-term CIA intelligence officer with forty-five years of experience to call on. In his major work, The Psychology of Intelligence Analysis, Heuer warned that prior knowledge of the prevalence of individual cognitive biases does not seem to stop people falling under their spell.5 He argued that there have therefore to be systematic checks introduced to manage the risks they pose. After high-level inquiries into the evident intelligence failure to warn before the 1973 Yom Kippur war, the Israeli government set up a standing ‘devil’s advocate’ group within military intelligence, staffed by the best analysts, with direct access when required to the Prime Minister and a remit to adopt a contrarian approach and to be expected to challenge the prevailing orthodoxy.6 The motto of the group is ‘Ipcha Mistabra’, in Aramaic, translatable as ‘The opposite is most likely’ or ‘On the contrary …’

The good news is that there is a great deal of experimental psychological research available, as well as much practical experience from government and business, about the many cognitive traps and illusions that can ensnare us whether at the level of the individual, of the work group or of the institution:

Individual. Cognitive and emotional biases affect us as individuals. That is part of the human condition. These biases are usually not evident to us at the time, but a good supervisor or colleague will probably become aware of them if sensitized to what to look out for. Understandably, it may not be easy for us to acknowledge how our reasoning may have been influenced without our being consciously aware.

Group. Groups can develop their own distinctive dynamics, the equivalent of a collective personality that is more than just the sum of those of each of us in the group. Members of a group both consciously and unconsciously exercise a reciprocal influence on each other, such as a pressure for conformity or a desire for closure. The existence of such distinctive group behaviours has been established in many therapeutic settings by psychologists and psychoanalysts7 – for example, in relation to hostile feelings towards the ‘outgroup’, i.e. those who are not members of the group.

Institutional. Internal processes, rules, hierarchies and power structures can unconsciously influence the judgements and decisions reached by an analytic group, just as they can affect the institution’s interaction with its stakeholders or the public. Dynamics at the level of the organization arise from the way that those within have internalized its culture, history and structure. There may be complicated psychic relationships between the different groups of people within the organization, such as between intelligence analysts and policymakers, or generalists and specialists, or civilians and uniformed services. There may also be important dynamics generated by the way the institution interacts with other organizations, such as the inevitable differences of perspective between law enforcement and intelligence agencies working on understanding the same threat. These influences are hard to pin down for those who are thoroughly accustomed to living within the organization’s culture. Critics of the impact of institutional dynamics tend to be dismissed with ‘that is just the way things are done round here’.

Under each of these three headings we can now identify the most significant cognitive biases to watch out for when engaged in significant reasoning.

Cognitive biases and influences on the individual

Psychologists have replicated in experiments under a range of different conditions the existence of specific cognitive biases in individual subjects carrying out perceptual and other mental tasks.8 Some have entered into everyday speech, with labels such as cognitive dissonance, whereby the mind finds it hard to hold at the same time the favoured story and the contrary evidence that it might not be true. Such mental tension on the part of the intelligence analysts is liable to be transferred to the national security policymakers and operational commanders who can also fall victim to cognitive dissonance.9 And we are all liable to have to wrestle with inconsistent beliefs, often suffering stress as a result.

In a study for the UK JIC in 1980 a seasoned intelligence professional, Doug Nicoll (who had worked during the Second World War on German Army and Air Force Enigma in Hut 6 at Bletchley Park and risen to be Deputy Director of its successor organization, GCHQ), concluded that even the most experienced analyst (and we can generalize this to all of us when we are faced with problems to solve) has cognitive blind spots, especially when faced with incomplete or ambiguous information. Nicoll identified six specific biases that he held responsible for why Western governments had too often been caught out when faced by foreign aggression.10

Mirror-imaging. This is the trap to be wary of on a first date: the presumption that your prospective partner is bound to feel the same way as you do about what makes for an exciting evening out. Nicoll identified the unconscious tendency to assume that factors which would weigh heavily in the United Kingdom would be equally serious constraints on countries ruled by one-party governments or under the control of a single leader. Analysts had, for example, too readily assumed that the weight of international opinion was a factor that would affect the formation of policy in autocracies to the same extent as it did in the democracies. Nicoll observed that public servants brought up in the post-war liberal democracies ‘found it difficult to believe that the potential aggressor would indeed find the use of force politically acceptable’.11 Margaret Thatcher at the time also did not disguise her belief that there was an inbuilt tendency for diplomats to over-emphasize the role of peaceful negotiation in solving international problems. She once cruelly said on television when discussing the Foreign Office: ‘When I am out of politics, I’m going to run a business called Rent-A-Spine.’12

Transferred judgement. This is the implicit assumption that others will think about and assess situations as you do. A mistake often made in showrooms, where you can too easily assume that the salesperson is thinking about the merits of the product on display in the same terms as you are and therefore has your interests at heart. Like mirror-imaging, this bias comes from an inability to put oneself inside the mind of the other. It can reflect unconscious cultural or racial stereotyping, as with the Vietnam War assessments by US logistics officers that it would not be possible for the North Vietnamese to bring sufficient supplies down the Ho Chi Minh jungle trail to sustain major offensives in the South, given US bombing. Earlier in Indo-China, French staff officers could not believe it would be possible for the Viet Minh to bring artillery to bear from the hills surrounding the isolated French base of Dien Bien Phu since that was not a feat they would have attempted. General Giáp calculated differently and in 1954 inflicted a humiliating defeat on the French forces, precipitating their withdrawal from Indo-China. After the war, Giáp concluded that French defeat had stemmed fundamentally from a failure by their commander, General Navarre, to understand the mind of his adversary: he had not realized that it was a people’s war.13 Perhaps the lesson we need to learn is two-fold: ‘It is true we should avoid ethnocentrism, the idea that folks are all like us. But that doesn’t mean we should indulge in condescending exoticism, the notion that we are strategic, modern and political whereas they, our benighted enemies, are visceral and primitive.’14

Perseveration. Nicoll saw that even with mounting evidence to the contrary analysts tended to stick with their original interpretation. In personal relationships this can be the questionable virtue of sticking by someone even when new evidence shows they are behaving as a heel. We remember their good qualities that first endeared them to us. The JIC in a developing crisis tended to make up its mind early and was resistant to changing it, downplaying any last-minute signals of the enemy’s true intentions. Nicoll called this perseveration, from the psychological phenomenon whereby data (such as telephone numbers) if learned incorrectly the first time are more difficult subsequently to learn correctly. The medical profession also uses the term perseveration to describe an involuntary repetition of words or other acts long after the relevant stimulus has ceased. The bias affects policymakers too: even in the face of evidence that the policy is not working they will keep repeating the positive messages that led them to adopt it in the first place.

Perseveration can also be thought of as a special case of what psychologists would call choice-supportive bias. That is the tendency of the individual, after coming to a judgement from a choice of alternatives, to remember the positive features of their chosen hypothesis and to forget the negative indicators. We remember the best times with those who have been our friends over a long period of years. Without realizing it the analysts will end up skewing future assessments through having those features uppermost in their mind. That such an effect is shown in psychological experiments should be unsurprising. When a choice is made in good faith it is because the individual genuinely believed at the time that it was the right choice. As a result, it is the reasons that led to that choice that are more likely to stick in the memory and to come to the surface later since they help minimize any lingering feelings of doubt. As a general lesson it is unsurprising that most of us prefer to avoid the pain of regret at the choices we have made.

War as a deliberate act. Nicoll showed from his case studies that armed aggression is usually a deliberate act planned in advance and, he concluded, would very rarely be the result of a response to some accidental crisis or of opportunistic chance. The JIC failure to forecast that Saddam Hussein in 1990 would invade Kuwait is an example. Saddam wanted to annul Iraq’s substantial debts to Kuwait for financial support during the earlier Iran–Iraq War. Starting a war deliberately for such a reason did not seem credible to the Western analysts (mirror-imaging), especially given that there was Arab League mediation in progress and diplomatic optimism that a settlement could be reached. This would similarly be the case for our natural reluctance to believe that a friend might deliberately betray our confidences for their own advantage.

Limited information. Nicoll had plenty of experience over his long intelligence career of the difficulty of assessing what might happen next in circumstances where there was very little secret intelligence to illuminate the situation. The cases he examined were all ones where the JIC had been surprised by the outcome. These tended to involve countries and regions in which the priority for intelligence collection had been low precisely because aggression was not expected. This is a vicious circle we can all experience: failure to have strategic notice of trouble ahead means a lower priority for our attention, so the information we need to warn us is less likely to be spotted when it is needed and therefore the risk of unwelcome surprises is greater.

Deliberate deception. Several of Nicoll’s case studies featured deliberate deception and he advised analysts always to look for attempts at deception by the potential aggressor, and sometimes by the victim trying to exaggerate their military strength to discourage an attack. These might be simple measures such as portraying troop movements as an exercise or very complex multi-dimensional deception programmes. A nation committing itself to military operations will almost certainly do everything in its power to preserve tactical surprise as to the time and place of operations through deception, even when as with D-Day in 1944 there could be no strategic surprise over the intention to open the second front in Europe.15 The Soviet Union had already successfully used deception in battles such as Stalingrad, and its belief in the power of deception must have been reinforced by the success of the D-Day landings. The teaching of deception (maskirovka) became an important part of the syllabus of the Soviet military staff college (the Frunze military academy) and for training intelligence officers of the GRU and KGB. Detecting deception is so important in analysis (and today for all of us in spotting ‘fake news’ and deceptive propaganda) that Chapter 7 is devoted to the topic.

The Nicoll Report was discussed by the JIC at its meeting on 4 March 1982, and within a few weeks a copy of the report had been sent to the Prime Minister, with assurances from the JIC that the Committee considered itself alert to the lessons to be learned. As often is the case, history cruelly intervened. Just a few days later, the Argentine military Junta authorized the invasion of the Falkland Islands, and caught the UK by surprise. The UK had again fallen into the same traps as Nicoll had identified.

Doug Nicoll concluded with lessons in the need for care in the use of language in describing intelligence judgements (something much later that Lord Butler in his post-Iraq 2004 WMD inquiry found still wanting16). Nicoll emphasized the importance of policymakers understanding the meaning of phrases such as ‘no evidence’. To take a contemporary example, an intelligence assessment today might state (as is very probably the case) that there is currently no evidence of terrorists within the UK having access to a surface-to-air missile with which a civilian airliner might be shot down. Such a statement does not mean that the intelligence community intends to convey a reassuring message that the risk of that happening over the next few years can be ignored (and so measures to detect such weapons being smuggled into the country are not needed) but only that at the moment there is no evidence of this having happened. A general lesson worth bearing in mind is that the answer you get depends upon the question you asked.
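The point about ‘no evidence’ can also be made with a little Bayesian arithmetic. The sketch below is my own illustration with invented figures, not an official calculation: it shows that failing to find something only reassures us to the extent that we would have expected to find it had it been there.

```python
# A minimal worked example: how much should a clean search reduce our belief
# that a threat exists? Only as much as our chance of detection justifies.

def posterior_given_no_evidence(prior, p_detect):
    """P(threat | nothing found), assuming for simplicity no false positives."""
    p_none_if_threat = 1.0 - p_detect      # threat present but collection missed it
    p_none_if_clear = 1.0                  # nothing there, so nothing to find
    numerator = p_none_if_threat * prior
    return numerator / (numerator + p_none_if_clear * (1.0 - prior))

prior = 0.10                               # an assumed prior that the capability exists
for p_detect in (0.1, 0.5, 0.9):
    post = posterior_given_no_evidence(prior, p_detect)
    print(f"Chance of detection {p_detect:.0%}: belief falls from 10% to {post:.1%}")
```

With only a 10 per cent chance of detecting the activity were it present, a clean search barely moves the needle (from 10 per cent to about 9 per cent); the reassurance comes almost entirely from how good we believe our collection to be.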

Group behaviours

Problems of bias also arise as a consequence of group dynamics. We have all heard of group think in which the desire for harmony or conformity within a group leads to judgements being nodded through without proper scrutiny. Most people want to feel they are valued members of a group and fear losing the respect and approbation of colleagues. When an individual group member has an insufficient sense of self-worth or of status within the group, then the individual may play to the group gallery and suppress private doubts. There are many examples of such feelings having inhibited dissenting voices. Resistance to an argument is often the result of a state of cognitive dissonance in which excuses are readily found for not accepting new relevant information that would conflict with a state of belief to which the group or individual is emotionally committed.17 The harder it may have been to reach the original conclusion the more the group or the individual analyst is likely to be invested in the result and thus to resist unconsciously the discomfort of having to hold in the mind a contrary view.

There is for most of us an inclination to believe opinions more readily if we know that many other people hold them. This is called the bandwagon effect, the tendency that encourages the individual to conform to consensus. In group discussions it helps to have one or more contrarians, those who by inclination like swimming against the tide and thus help surface all the relevant arguments. That effect can, when necessary, be contrived by the leader of the group choosing an individual to be the devil’s advocate with acknowledged licence to take contrary positions regardless of hierarchy. Or the whole group can indulge in red teaming to explore the problem from the adversary’s point of view. A different group (Team B) could be asked independently to examine the material seen by the first group (Team A) and come to their own conclusions on the same evidence. There is a danger of politicization here. If the customer for some piece of analysis does not like the outcome, they may call for another analytic group to be set up and invited to examine the evidence independently. If, for example, the conclusions of the original analysis were seen as too cautious, then it is likely that the members of the new group will be chosen for their boldness.

The leader of an analytic group can make a huge difference to its work especially by setting reasonable expectations and insisting at the outset upon the standards of debate and argument to which members must conform, and ensuring that the group too becomes self-aware of its own thinking processes. Being open about the risk of bias within a group is the best antidote to cognitive failures. Poor group leaders are, however, liable to become the focus for negative feelings if the task of the group is not going well.

In the conduct of an analytic group, the leader has to insist that all possible explanations are explored. There is a known psychological tendency (called the ambiguity effect) to skip over possible hypotheses for which there is little or no direct reporting, and which thus seem unjudgeable, and unknowingly to spend the time discussing hypotheses for which there is evidence. A rush to early judgement can be avoided by insisting upon working systematically through the evidence using structured analytic techniques, as already described in Chapter 2. But there will come a point in a prolonged debate at which the strong urge for the psychological relief of closure will come upon the group. In such circumstances taking time out to have a breather, to let interpersonal tensions relax and minds refocus, is usually a good idea, or even suggesting the group sleep on the issues and return the next morning to check whether they have had any second thoughts.
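As a way of seeing why a structured technique helps counter the ambiguity effect, here is a toy sketch, a much-simplified scheme of my own rather than the formal Analysis of Competing Hypotheses method described in Chapter 2, of a consistency matrix that keeps every hypothesis in play and scores ‘no reporting’ explicitly as neutral instead of silently dropping the hypothesis. The evidence items echo the Iraq narrative of this chapter; the scores are illustrative judgements only.

```python
# A toy ACH-style matrix: score each piece of evidence against every hypothesis
# (-1 inconsistent, 0 no reporting / neutral, +1 consistent), then compare.

hypotheses = [
    "H1: the prohibited programme continues",
    "H2: the programme has halted and the concealment is bluff",
]

# (description, score against H1, score against H2) -- illustrative values only
evidence = [
    ("Defector reports mobile BW production units",     +1,  0),
    ("Imagery contradicts the defector's description",  -1, +1),
    ("History of concealment before 1991",              +1, +1),
    ("Inspectors find nothing at the named sites",      -1, +1),
]

# In ACH the diagnostic weight lies mainly in the inconsistencies, so tally both.
for index, name in enumerate(hypotheses):
    consistent = sum(1 for item in evidence if item[index + 1] > 0)
    inconsistent = sum(1 for item in evidence if item[index + 1] < 0)
    print(f"{name}: {consistent} consistent, {inconsistent} inconsistent")
```

Laid out this way, the hypothesis with no inconsistent evidence stands out, which is exactly the comparison the ambiguity effect tempts a group never to make.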

As the 2005 Robb–Silberman Commission, the US presidential inquiry into the intelligence misjudgements over Iraq, concluded:

We do not fault the Intelligence Community for formulating the hypothesis, based on Saddam Hussein’s conduct, that Iraq had retained an unconventional weapons capability and was working to augment this capability. Nor do we fault the Intelligence Community for failing to uncover what few Iraqis knew; according to the Iraq Survey Group only a handful of Saddam Hussein’s closest advisors were aware of some of his decisions to halt work on his nuclear program and to destroy his stocks of chemical and biological weapons. Even if an extraordinary intelligence effort had gained access to one of these confidants, doubts would have lingered.

But with all that said, we conclude that the Intelligence Community could and should have come much closer to assessing the true state of Iraq’s weapons programs than it did. It should have been less wrong – and, more importantly, it should have been more candid about what it did not know. In particular, it should have recognized the serious – and knowable – weaknesses in the evidence it accepted as providing hard confirmation that Iraq had retained WMD capabilities and programs …18

Another way of describing this general lesson about being less wrong is to highlight the need to take time out to double-check the thinking. But it is also likely that the more important the issue the more urgent will be the calls for information. It will take conscious and deliberate effort, and courage, on the part of the group leader to insist upon going back over all the workings to check the reasoning and the weight of evidence.

Institutional dynamics

Institutions have their own distinctive cultures, in which corporate behaviours considered correct get passed on from generation to generation. That can be a great strength in adversity but can also lead to a bias when it comes to interpreting the world. Institutions also exhibit personality traits and suffer from time to time the equivalent of nervous breakdowns or exhibit paranoia towards other organizations. National intelligence and law enforcement agencies around the world, for example, are notorious for feuding between themselves and refusing to share information on their cases. In the case of Curveball that opened this chapter, after questions about Curveball’s credibility had begun to emerge, a CIA operational officer in February 2003 sent a message to Pentagon officials expressing concern that Curveball had not been vetted. The next day the Pentagon official who received that message forwarded it by electronic mail to a subordinate, requesting input to answer the CIA’s query, saying that he was ‘shocked’ by the CIA’s suggestion that Curveball might be unreliable. The reply (which was inadvertently sent to the CIA) observed that ‘CIA is up to their old tricks’ and that the CIA did not ‘have a clue’ about the process by which Curveball’s information was passed from the BND. That is an example of longstanding bureaucratic rivalry in action, resulting in the rationalizing away of awkward information.

There are inevitable cultural differences between domestic and external services, and between essentially human and technical services, and again between predominantly military and civilian organizations, and finally between those security organizations with law enforcement powers and functions and those that are primarily covert intelligence gatherers. Each of these distinctions – and the secrecy and danger that surround their work – can generate tensions, not least reflecting the sometimes very different personality types of the people they employ. These are the tribes that have to come together in analytic groups to draft all-source intelligence assessments. Understanding the indirect influences that their institutions exert on their members is important knowledge for the leader of an analytic group.

Cognitive biases in everyday life

The individual, group and institutional biases that Nicoll identified for his case studies of intelligence failures can be seen as special cases of more general cognitive biases that we can see in business and everyday life. These biases are very common in political debate, as they are in intelligence analysis. The advent of social media with applications such as Twitter has, as we will discuss in Chapter 10, opened the way to deliberate exploitation of confirmation bias to sell political ideas and products alike.

A lesson that the founder of scientific intelligence in the Second World War, Professor R. V. Jones, highlighted (he called it Crow’s law) was: do not believe what you want to believe until you know what you need to know.19 Those who subconsciously need the reassurance of confirmation would already be expecting intelligence to confirm their view (this common bias is known as the observer-expectancy effect).

Another example is what is called inattentional blindness. Looking is not the same as seeing. A related problem (known as the focusing effect) is that you can end up so focused on a task that you fail to spot what is going on around you. If you are not one of the 20 million who have already watched the YouTube video asking the viewer to count the fast passes between a team of basketball players, I invite you to try it.20 Given that tricky task of counting basketball passes, the first time most people see the video they fail to take note of the person dressed in a gorilla costume slowly walking across the court. A helicopter view of the basketball court would certainly reveal what a close focus on the passes being made by individual players will miss: that there is something beyond the immediate game going on.

In a comparable way close focus on what we already know can be at the expense of recognizing new information. This is a phenomenon known as attentional bias. Experimental evidence shows that it particularly affects individuals in a state of anxiety, whose attention is drawn to material seen as threatening, and likewise those suffering from depression, whose focus may be unconsciously drawn to examples of negativity. What you fear most will grab your attention.

Psychological experiments also show the tendency for an item that appears to be in some way exceptional to be more likely to stick in the memory. We are liable to register and retain in our memory news of plane crashes as dramatic events but not to take in the implications of the tens of millions of miles flown without an accident. We should not therefore be surprised that there is nervousness about flying. The tendency is known today as the Von Restorff effect after the pre-war German child psychologist who first demonstrated it systematically. It is easy to show by giving someone a varied list of names or items to remember. If some of these are readily distinguished from others then those will be the ones most likely to be recalled to memory.21 The most striking intelligence material is liable to make more of an impact than its meaning may deserve. A case in point was the intelligence report received just before the war in Iraq which indicated that chemical munitions could be with military units and ready for firing within 20–45 minutes. This report was in itself unexceptional as a reference to the previous Iraqi battlefield capability, but after it was mentioned in the published British government dossier, the headline in the Sun newspaper was ‘Brits 45 mins from doom’ and in the Star ‘Mad Saddam ready to attack: 45 minutes from a chemical war’.22 It is those memorable reports that are likely to feature in discussion between intelligence chiefs and their ministerial masters, and between ministers and the media.

Managing the risk of cognitive bias

This chapter has been about the cognitive biases that can get in the way of our everyday thinking. We can all understand that they exist and why we might be susceptible to them. But managing the risk that they represent is much harder. That should not surprise us as most of the biases identified in this chapter operate at the unconscious level of the mind, and by definition are therefore not usually accessible to us. Having a developed academic understanding of them from reading textbooks is no guarantee that we will not still be susceptible to them. As the report of a 1977 CIA seminar on bias in intelligence analysis concluded: ‘Personal biases are the most troublesome. They are frequently unique and hard to detect.’23

The best antidote to cognitive biases such as group think is for the group to discuss openly the danger such biases represent. A good group leader can encourage challenge from within the group, and pose the question: Are we falling into group think here? (Which will normally elicit laughter and lighten any tension there may be over reaching a conclusion.) A process of self-recognition of common cognitive biases can be developed whereby individuals develop first an intellectual understanding of these phenomena (and a historical feel for how they matter) and then through working with others, preferably with a trained facilitator, come to an understanding that they too might be subject to them and how they might recognize when that is happening. But resistance is to be expected when others suggest that we have fallen into one of these errors. What is most important in my experience in managing the risk to the SEES process is to have a ‘safe space’ where the dangers of bias can be discussed as a matter of professionalism without arousing a defensive feeling on the part of the participants that they are being expected to admit to personal weaknesses.

We all suffer from cognitive biases.

Conclusions: mastering our internal demons of bias and prejudice

It is our own demons that are most likely to mislead our reasoning. In this chapter we examined vulnerability to our cognitive biases and prejudices and how they prevent us thinking straight. If we want to learn to think correctly we should:

Accept that completely objective analysis is impossible since we are human and have to interpret reality for ourselves. But we should try to be as independent, honest and neutral in our analytic judgements as possible.

Remember that what you see or hear, or fail to see or hear, reflects your state of mind and degree of attention.

Try to make implicit biases and prejudices explicit, identifying the assumptions we are making in our reasoning.

Recognize that cognitive errors arise at the individual, group and institutional levels and that they may come together in the pressures on the individual from ‘group think’.

Do not believe what you want to believe until you know what you need to know. Remember that the answer you get is likely to depend upon the question you asked.

Recognize the signs of displacement activity that go with mental stress and how cognitive dissonance increases resistance to taking in new information.

Beware transferred judgements and mirror-imaging in imputing motives to others.

Keep an open mind and be prepared to change it on Bayesian principles when new evidence arrives.
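As a closing illustration of that last point, here is a minimal sketch, with numbers of my own choosing, of what updating a belief on Bayesian principles involves: the same new report shifts different starting beliefs by different amounts, which is one reason for being explicit about the prior assumptions we carry into an assessment.

```python
# A minimal sketch of a single Bayesian update when one new report arrives.

def bayes_update(prior, p_report_if_true, p_report_if_false):
    """Return P(hypothesis | report) from a prior and the two likelihoods."""
    joint_true = p_report_if_true * prior
    joint_false = p_report_if_false * (1.0 - prior)
    return joint_true / (joint_true + joint_false)

for prior in (0.1, 0.3, 0.7):
    posterior = bayes_update(prior, 0.6, 0.2)   # report three times likelier if true
    print(f"prior {prior:.0%} -> posterior {posterior:.0%}")
```

The update itself is mechanical; the judgement lies in the prior and in honestly assessing how much more likely the report is if the hypothesis is true than if it is false.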