David Omand – How Spies Think – 10 Lessons in Intelligence – Part 4


Lesson 2: Explanation. Facts need explaining

Belgrade, Sunday, 23 July 1995. It was getting dark when our military aircraft landed on an airfield just outside the Serbian capital. We were met by armed Serbian security officers and quickly hustled into cars, watched over cautiously by a diplomat from the British Embassy. After what seemed an endless drive into the country we arrived at a government guest house. Our mission was to deliver in person an ultimatum to its occupant, General Ratko Mladić, the commander of the Bosnian Serb Army, the man who became infamous as the ‘butcher of Srebrenica’.1

Two days before, at a conference in London, the international community had united to condemn in the strongest terms the actions of Mladić’s Bosnian Serb Army in overrunning the towns of Srebrenica and Zepa. These towns had been placed under the protection of the United Nations as ‘safe areas’, where the Bosnian Muslim population could shelter from the civil war raging around them. Sadly, there had been insufficient understanding in the UN of the ethnic-cleansing activities of Mladić and his army, and thus no proper plans made about how the safe areas were to be defended from him. The UN peacekeeping force in Bosnia, UNPROFOR, was small and lightly armed, and in accordance with UN rules wore blue-painted helmets and rode in white-painted vehicles. They were not a fighting force that could combat the Bosnian Serb Army when it defied the UN. The full extent of the genocidal mass killings and use of rape as a weapon of war by troops under Mladić’s command in Bosnia was not then known, but enough evidence had emerged from Srebrenica to convince a reluctant international community, gathered at the London Conference and in NATO, that enough was enough. Any further interference with the remaining safe areas would be met by the use of overwhelming air power. The purpose of the mission to Belgrade was to confront Mladić with the reality of that threat and make him desist from further aggression.

Leading the delegation were the three airmen who controlled NATO air power over Bosnia: the Commander of the US Air Force in Europe along with his British and French opposite numbers. I was the Deputy Under Secretary of State for Policy in the Ministry of Defence in London and I was acting as adviser to Air Chief Marshal Sir William Wratten, Commander-in-Chief of the RAF’s Strike Command, a man with a formidable reputation as the architect of British bombing strategy during the first Gulf War. I was there with my opposite numbers from the Ministry of Defence in Paris and the Office of the Secretary of Defense in the Pentagon (my friend Joe Kruzel, who was tragically to die on duty later in Bosnia when his armoured vehicle rolled off a narrow pass). One of our tasks was to use the opportunity to try to understand the motivations of Mladić, the ‘why and what for’ of his actions, and whether he was likely to be deterred by the formal NATO warning from the air commanders of the US, UK and France.

When we arrived at the guest house we were escorted to the dining room and invited to sit at one side of a long table already set with traditional sweetmeats and glasses of plum brandy. Mladić entered in jovial mood with his army jacket around his shoulders hanging unbuttoned, accompanied by the head of his secret police. We had been forewarned that in soldier-to-soldier company he was likely to be bluffly affable, one of the reasons his men adored him. We had therefore resolved on the flight that we would all refuse to accept the hospitality he was bound to offer, an act that we guessed would cause offence and thus jolt Mladić into recognizing this was not a friendly visit. That ploy worked.

Mladić became visibly agitated, defiantly questioning whether the three air forces could pose any real threat to his army given the puny use of NATO air power up to that point. The air commanders had wisely chosen to wear their leather jackets and aviator sunglasses, and not their best dress uniforms. They menacingly described the massive air power they could command and delivered their blunt ultimatum: further attacks against the safe areas would not be tolerated, and substantial air actions would be mounted, ‘if necessary at unprecedented levels’. The atmosphere in the room grew frosty.

Explanations and motives

In the Introduction I described understanding and explanation as the second component of my SEES model of intelligence analysis. Intelligence analysts have to ask themselves why the people and institutions that they are observing are acting as they appear to be, and what their motives and objectives are. That is what we were trying to establish in that visit to Mladić. That’s as true for you in everyday life as it is for intelligence analysts. The task is bound to be all the harder if the analysis is being done at a distance by those brought up in a very different culture from that of the intelligence target. Motives are also easily misread if there is projective identification of some of your own traits in your adversary. This can become dangerous in international affairs when a leader accuses another of behaviour of which they themselves are guilty. That may be a cynical ploy. But it may also be a worrying form of self-deception. The leader may be unconsciously splitting off his own worst traits in order to identify them in the other, allowing the leader then to live in a state of denial believing that they do not actually possess those traits themselves. I’m sure you recognize a similar process in your office every day, too.

If it is the actions of a military leader that are under examination then there may be other objective factors explaining his acts, including the relative capabilities of his and opposing forces, the geography and terrain, and the weather as well as the history, ethnology and cultural anthropology of the society being studied. There are bound to be complexities to unravel where it may be the response to perceived policies and actions by other states, or even internal opposition forces within the society, that provide the best explanation along with an understanding of the history that has led to this point. From the outset of the Bosnian conflict, reports from the region spoke of excesses by the different factions fighting each other, a common feature of civil wars. Such evidence was available. But it was not clear at first what the deeper motivations were that would eventually drive the troops of Ratko Mladić to the horrifying extremes of genocide.

The choice of facts is not neutral, nor do facts speak for themselves

One possible reason we may wrongly understand why we see what we do is because we have implicitly, or explicitly, chosen to find a set of facts that supports an explanation we like and not another. We saw in the preceding chapter that even situational awareness cannot be divorced from the mindset of the analyst. The act of selecting what to focus on is unlikely to be a fully neutral one. This is a problem with which biographers and historians have always had to grapple. As the historian E. H. Carr wrote: ‘By and large, the historian will get the kind of facts he wants. History means interpretation.’2

Reality is what it is. We cannot go back in time to change what we have observed. More correctly, then, for our purposes reality is what it was when we made our observations. Reality will have changed in the time it has taken us to process what we saw. And we can only perceive some of what is out there. But we can make a mental map of reality on which we locate the facts that we think we know, and when we got to know them. We can place these facts in relation to each other and, via our memory, fill in some detail from our prior knowledge. Then we look at the whole map and hope we recognize the country outlined.

More often than not, facts can bear different meanings. Therein lies the danger of mistakes of interpretation. A shopkeeper facing a young man asking to buy a large meat cleaver has to ask herself, gang member or trainee butcher? Let me adapt an example that Bertrand Russell used in his philosophy lectures to illustrate the nature of truth.3 Imagine a chicken farm in which the chickens conduct an espionage operation on the farmer, perhaps by hacking into his computer. They discover that he is ordering large quantities of chicken food. The Joint Intelligence Committee of chickens meets. What do they conclude? Is it that the farmer has finally recognized that they deserve more food; or that they are being fattened up for the kill? Perhaps if the experience of the chickens has been of a happy outdoor life, then their past experience may lead them to be unable to conceive of the economics of chicken farming as seen by the farmer. On the other hand, chickens kept in their thousands in a large tin shed may well be all too ready to attribute the worst motives to the farmer. It is the same secret intelligence, the same fact, but with two opposite interpretations. That is true of most factual information.

Context is therefore needed to infer meaning. And meaning is a construct of the human mind. It is liable to reflect our emotionally driven hopes and fears as much as it represents an objective truth. Intelligence analysts like to characterize themselves as ‘objective’, and great care is taken, as we see in Chapter 5, to identify the many possible types of cognitive bias that might skew their thinking. In the end, however, ‘independent’, ‘neutral’ and ‘honest’ might be better words to describe the skilled analysts who must avoid being influenced by what they know their customers desperately hope to hear.4 The great skill of the defence counsel in a criminal trial is to weave an explanatory narrative around the otherwise damning evidence so that the jury comes to believe in the explanation offered of what happened and thus in the innocence of the accused. The observed capability to act cannot be read as a real intention to do so. The former is easier to assess, given good situational awareness; the latter is always hard to know since it involves being able to ascribe motives in order to explain what is going on. You may know from your employment contract the circumstances under which your boss may fire you, but that does not mean they (currently) have the intention to do so.

We know from countless psychological experiments that we can convince ourselves we are seeing patterns where none really exist. Especially if our minds are deeply focused somewhere else. So how can we arrive at the most objective interpretation of what our senses are telling us? Put to one side the difficulties we discussed in the last chapter of knowing which are sufficiently reliable pieces of information to justify our labelling them as facts. Even if we are sure of our facts we can still misunderstand their import.

Imagine yourself late at night, for example, sitting in an empty carriage on the last train from the airport. A burly unkempt man comes into the carriage and sits behind you and starts talking aggressively to himself, apparently threatening trouble. Those sense impressions are likely at first to trigger the thought that you do not want to be alone with this individual. The stranger is exhibiting behaviour associated with someone in mental distress. Concern arises that perhaps he will turn violent; you start to estimate the distance to the door to the next carriage and where the emergency alarm is located; then you notice the tiny earphone he is wearing. You relax. Your mental mapping has flipped over and now provides a non-threatening explanation of what you heard as the simpler phenomenon of a very cross and tired man off a long flight making a mobile call to the car hire company that failed to pick him up.

What made you for a moment apprehensive in such a situation was how you instinctively framed the question. Our brains interpret facts within an emotional frame of mind that adds colour, in this case that represented potential danger on the mental map we were making. That framing was initially almost certainly beyond conscious thought. It may have been triggered by memory of past situations or more likely simply imaginative representation of possibilities. If you had been watching a scare movie such as Halloween on your flight, then the effect would probably have been even more pronounced.

The term ‘framing’ is a useful metaphor, a rough descriptor of the mental process that unconsciously colours our inferential map of a situation. The marvellous brightly coloured paintings of Howard Hodgkin, for example, extend from the canvas on to and over the frame. The frame itself is an integral part of the picture and conditions our perception of what we see on the canvas itself. The framing effect comes from within, as our minds respond to what we are seeing, and indeed feeling and remembering. It is part of the job of TV news editors to choose the clips of film that will provide visual and aural clues to frame our understanding of the news. And of course, as movie directors know, the effect of images playing together with sound are all the more powerful when working in combination to help us create in our minds the powerful mental representation of the scene that director wanted. The scrape of the violins as the murderer stalks up the staircase, knife in hand, builds tension; whereas the swelling orchestra releases that tension when the happy couple dance into the sunset at the end. Modern political advertising has learned all these tricks to play on us to make their message one we respond to more emotionally than rationally.

Up to this point in history only a human being could add meaning. Tomorrow, however, it could be a machine that uses an artificial intelligence programme to infer meaning from data, and then to add appropriate framing devices to an artificially generated output. Computerized sentiment analysis of social media postings already exists that can gauge a crowd’s propensity to violence. Careful use of artificial intelligence could shorten the time taken to alert analysts to a developing crisis.

However, there are dangers in letting machines infer an explanation of what is going on. Stock exchanges have already suffered the problems of ‘flash crashes’, when a random fall in a key stock price triggers, via an artificial intelligence programme, automated selling that is detected by other trading algorithms, which in turn start selling and set off a chain reaction of dumping shares. So automatic brakes have had to be constructed to prevent the market being driven down by such automation. A dangerous parallel would be if reliance were placed on such causal inference to trigger automatic changes in defence posture in response to detected cyberattacks. If both sides in an adversarial relationship have equipped themselves with such technology, then we might enter the world of Dr Strangelove. Even more so if there are more than two players in such an infernal game of automated inference. As AI increasingly seeps into our everyday lives, too, we must not allow ourselves to slip into allowing it to infer meaning on our behalf unchecked. Today the algorithm is selecting what online advertisements it thinks will best match our interests, irritating when wrong but not harmful. It would be harmful, though, if a credit rating algorithm secretly decided that your browsing and online purchasing history indicate a risk appetite too high to allow you to hold a credit card or obtain affordable motorbike insurance.

Back to Bayesics: scientifically choosing an explanatory hypothesis

In the second stage of SEES the intelligence analyst is applying generally accepted scientific method to the task of explaining the everyday world. The outcome should be the explanatory hypothesis that best fits the observed data, with the least extraneous assumptions having to be made, and with alternative hypotheses having been tested against the data and found less satisfactory. The very best ideas in science, after sufficient replication in different experiments, are dignified with the appellation ‘theories’. In intelligence work, as in everyday life, we normally remain at the level of an explanatory hypothesis, conscious that at any moment new evidence may appear that will force a re-evaluation. An example in the last chapter was the case of the Cuban missile crisis, when the USAF photographs of installations and vehicles seen in Cuba, coupled with the secret intelligence from the MI6/CIA agent Col. Penkovsky, led analysts to warn President Kennedy that he was now faced with the Soviet Union introducing medium-range nuclear missile systems on to the island.

In the last chapter I described the method of Bayesian inference as the scientific way of adjusting our degree of belief in a hypothesis in the light of new evidence. You have evidence and use it to work backwards to assess what the most likely situation was that could have led to it being created. Let me provide a personal example to show that such Bayesian reasoning can be applied to everyday matters.

I remember Tony Blair when Prime Minister saying that he would have guessed that my background was in Defence. When I asked why, he replied because my shoes were shined. Most of Whitehall, he commented, had gone scruffy, but those used to working with the military had retained the habit of cleaning their shoes regularly.

We can use Bayesian reasoning to test that hypothesis, D, that I came from the MOD. Say 5 per cent of senior civil servants work in Defence, so the prior probability of D being true p(D) = 1/20 (5 per cent), which is the chance of picking a senior civil servant at random and finding he or she is from the MOD.

E is the evidence that my shoes are shined. Observation in the Ministry of Defence and around Whitehall might show that 7 out of 10 Defence senior civil servants wear shiny shoes but only 4 out of 10 in civil departments do so. So the overall probability of finding shiny shoes is the sum of that for Defence and that for civil departments:

p(E) = (1/20)*(7/10) + (1 - 1/20)*(4/10) = 83/200

The posterior probability that I came from Defence is written as p(D|E) (where, remember, the vertical bar is to be read as ‘given’). From Bayes’s theorem, as described in Chapter 1:

p(D|E) = p(D)*[p(E|D)/p(E)] = (1/20)*(7/10)*(200/83) = 7/83, or approximately 1/12

Using Bayesian reasoning, the chances of the PM’s hypothesis being true are almost double what would be expected from a random guess.

Bayesian inference is a powerful way of establishing explanations, the second stage of the SEES method. The example can be set out in a 2 by 2 table (say, applied to a sample of 2,000 civil servants) showing the classifications of shined shoes/not shined shoes and from Defence/not from Defence. I leave it to the reader to check that the posterior probability p(D|E) found above using Bayes’s theorem can be read from the first column of the table as 70/830, or approximately 1/12. Without seeing the shined shoes, the prior probability that I come from the MOD would be 100/2000, or 1/20.

                 E: shined shoes   Not shined shoes   Totals
D: from MOD                   70                 30      100
Not from MOD                 760               1140     1900
Totals                       830               1170     2000
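The whole shined-shoes calculation can be reproduced in a few lines of code. Exact fractions make it easy to check the arithmetic against the table:

```python
from fractions import Fraction

# Bayes's theorem applied to the shined-shoes example.
p_d = Fraction(1, 20)              # p(D): prior that a senior civil servant is from the MOD
p_e_given_d = Fraction(7, 10)      # p(E|D): shined shoes, given MOD
p_e_given_not_d = Fraction(4, 10)  # p(E|not D): shined shoes, given a civil department

# Total probability of the evidence (shined shoes), summed over both cases.
p_e = p_d * p_e_given_d + (1 - p_d) * p_e_given_not_d

# Posterior: p(D|E) = p(D) * p(E|D) / p(E)
p_d_given_e = p_d * p_e_given_d / p_e

print(p_e)          # 83/200
print(p_d_given_e)  # 7/83, roughly 1/12
```

The posterior 7/83 matches the first column of the table: 70 shined-shoe MOD officials out of 830 shined-shoe officials in all.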


Now imagine a real ‘big data’ case with an array of hundreds or thousands of dimensions to cater for large numbers of different types of evidence. Bayes’s theorem still holds as the method of inferring posterior probabilities (although the maths gets complicated). That is how inferences are legitimately to be drawn from big data. The medical profession is already experiencing the benefits of this approach.5 The availability of personal data on internet use also provides many new opportunities to derive valuable results from data analysis. Cambridge Analytica boasted that it had 4000–5000 separate data points on each voter in the US 2016 Presidential election, guiding targeted political advertising, a disturbing application of Bayesian inference that we will return to in Chapter 10.

In all sustained thinking, assumptions do have to be made; the important thing is to be prepared to rethink the approach in the light of new evidence challenging the assumptions. A useful pragmatic test is to ask, at any given stage of serious thinking: if I make this assumption and it turns out not to be sensible, are my chances of success worse than if I had not made it? Put another way, if my assumption turns out to be wrong, do I end up actually worse off in my search for the answer, or am I just no better off?

For example, if you have a four-wheel combination bicycle lock and forget the number, you could start at 0000, then try 0001, 0002, all the way up to 9999, knowing that at some point the lock will open. But you might make the reasonable assumption that you would not have picked a number commencing with 0, so you start at 1000. Chances are that saves you work. But if your assumption is wrong you are no worse off.
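A minimal sketch of that search makes the claim checkable (the combination values here are arbitrary). Starting at 1000 encodes the assumption that the number does not begin with 0; wrapping round at the end guarantees the search still terminates even when the assumption is wrong:

```python
# Count the tries needed to open a four-wheel lock, scanning upward
# from `start` and wrapping round to cover the combinations below it.
def attempts(start, secret):
    order = list(range(start, 10_000)) + list(range(0, start))
    return order.index(secret) + 1

print(attempts(0, 4321))     # 4322 tries with no assumption
print(attempts(1000, 4321))  # 3322 tries: the assumption saved 1000
print(attempts(1000, 42))    # 9043 tries: the assumption was wrong,
                             # but the worst case is still 10,000
```

For any combination that really avoids a leading zero the assumption saves exactly 1,000 tries; when it is wrong, the wrap-around search is still bounded by the same exhaustive worst case.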

As a general rule it is the explanatory hypothesis with the least evidence against it that is most likely to be the best one for us to adopt. The logic is that one strong contrary result can disconfirm a hypothesis. Apparently confirmatory evidence, on the other hand, can still be consistent with other hypotheses being true. In that way the analyst can avoid the trap (the inductive fallacy6) of thinking that being able to collect more and more evidence in favour of a proposition necessarily increases confidence in it. If we keep looking in Europe to discover the colour of swans, then we will certainly conclude, by piling up as many reports as we like, that they are all white. If eventually we seek evidence from Australia then the infamous ‘black swan’ appears and contradicts our generalization.7 When there are more reports in favour of hypothesis A than its inverse, hypothesis B, it is not always sensible to prefer A to B if we suspect that the amount of evidence pointing to A rather than B has been affected by how we set about searching for it.

A well-studied lesson in the dangers of misinterpreting complex situations is the ‘security dilemma’, when rearmament steps taken by one nation with purely defensive intent trigger fears in a potential adversary, leading it to take its own defensive steps that then appear to validate the original fears. The classic example is a decision by country A to modernize by building a new class of battleships. That induces anxiety in country B that an adverse military balance is thereby being built up against it. That leads to decisions on the part of country B also to build up its forces. That rearmament intention in turn is perceived as threatening by country A, not only justifying the original decision to have a new class of battleships but prompting the ordering of yet more ships. The worst fears of country B about the intentions of country A are thus confirmed. And an arms race starts. As the Harvard scholar Ben Buchanan has pointed out, such mutual misassessments of motivation are even more likely to be seen today in cyberspace since the difference between an intrusion for espionage purposes and one for sabotage need only be a few lines of code.8 There is thus ample scope for interpreting detected intrusions as potentially hostile, on both sides. Acts justified as entirely defensive by one government are therefore liable to be labelled as offensive in motivation by another – and vice versa.

We can easily imagine an established couple, call them Alice and Bob, one of whom, Bob, is of a jealous nature. Alice one day catches Bob with her phone reading her texts. Alice feels this is an invasion of her privacy, and increases the privacy settings on her phone. Bob takes this as evidence that Alice must have something to hide and redoubles his efforts to read her text messages and social media posts, which in turn causes Alice to feel justified in her outrage at being mistrusted and spied on. She takes steps to be even more secretive, setting in train a cycle of mistrust likely, if not interrupted, to gravely damage their relationship.

Explaining your conclusions

Margaret Thatcher was grateful for the weekly updates she received from the JIC. She always wanted to be warned when previous assessments had changed. But she complained that the language the JIC employed was too often ‘nuanced’. ‘It would be helpful’, she explained, ‘if key judgments in the assessments could be highlighted by placing them in eye-catching sentences couched in plainly expressed language.’9 In the case of the Falklands that I mentioned in Chapter 1, the JIC had been guilty of such nuance in their July 1981 assessment. They had explained that they judged that the Argentine government would prefer to achieve its objective (transfer of sovereignty) by peaceful means. Thereby the JIC led readers to infer that if Argentina believed the UK was negotiating in good faith on the future of the Islands, then it would follow a peaceful policy, adding that if Argentina saw no hope of a peaceful transfer of sovereignty then a full-scale invasion of FI could not be discounted. Those in London privy to the Falklands negotiations knew the UK wanted a peaceful solution too. Objectively, nevertheless, the current diplomatic efforts seemed unlikely to lead to a mutually acceptable solution. But for the JIC to say that would look like it was straying into political criticism of ministerial policy and away from its brief of assessing the intelligence. There was therefore no trigger for reconsideration of the controversial cuts to the Royal Navy announced the year before, including the plan to scrap the Falklands-based ice patrol ship HMS Endurance. Inadvertently, and without consciously realizing they had done so, the UK had taken steps that would have reinforced in the minds of the Junta the thought that the UK did not see the Islands as a vital strategic interest worth fighting for. The Junta might reasonably have concluded that if Argentina took over the Islands by force the worst it would face would be strong diplomatic protest.

Explaining something that is not self-evident is a process that reduces a complex problem to simpler elements. When analysts write an intelligence assessment they have to judge which propositions they can rely on as known to their readers and thus do not need explaining or further justification. That Al Qa’ida under Bin Laden was responsible for the attacks on 9/11 is now such a building block. That the Russian military intelligence directorate, the GRU, was responsible for the attempted murder of the Skripals in Salisbury in 2018 is likewise a building block for discussions of Russian behaviour. That Saddam Hussein in Iraq was still pursuing an unlawful biological warfare programme in 2002 was treated as a building block – wrongly, and therein lies the danger. That was a proposition that had once been true but (unbeknown to the analysts) was no longer. The mental maps being used by the analysts to interpret the reports being received were out of date and were no longer an adequate guide to reality. As the philosopher Richard Rorty has written: ‘We do not have any way to establish the truth of a belief or the rightness of an action except by reference to the justifications we offer for thinking what we think or doing what we do.’10

Here, however, lies another lesson in trying to explain very complex situations in terms of simpler propositions.11 The temptation is to cut straight through complex arguments by presenting them in instantly recognizable terms that the reader or listener will respond to at an emotional level. We do this when we pigeonhole a colleague with a label like ‘difficult’ or ‘easy to work with’. We all know what we are meant to infer when a politician makes reference in a television interview or debate to the Dunkirk spirit, the appeasement of fascism in the 1930s, Pearl Harbor and the failure to anticipate surprise attacks, or Suez and the overestimation of British power in the 1956 occupation of the Egyptian canal zone. ‘Remember the 2003 invasion of Iraq’ is now a similarly instantly recognizable meme for the alleged dangers of getting too close to the United States. Such crude narrative devices serve as a shorthand for a much more complex reality. They are liable to mislead more than enlighten. History does not repeat itself, even as tragedy.

The lesson in all of this is that an accurate explanation of what you see is crucial.

Testing explanations and choosing hypotheses

How do we know when we have arrived at a sufficiently convincing explanation? The US and British criminal justice systems rest on the testing in court of alternative explanations of the facts presented respectively by counsel for the prosecution and for the defence in an adversarial process. For the intelligence analyst the unconscious temptation will be to try too hard to explain how the known evidence fits their favoured explanation, and why contrary evidence should not be included in the report.

Where there is a choice of explanations, apply Occam’s razor (named after the fourteenth-century Franciscan friar William of Occam) and favour the explanation that does not rely on complex, improbable or numerous assumptions, all of which have to be satisfied for the hypothesis to stand up. By adding ever more baroque assumptions, any set of facts can be made to fit a favoured theory. This is the territory where conspiracies lurk. In the words of the old medical training adage, when you hear rapid hoof-beats, think first of galloping horses, not zebras escaping from a zoo.12

Relative likelihood

It is important when engaged in serious thinking about what is going on to have a sense of the relative likelihood of alternative hypotheses being true. We might say, for example, after examining the evidence that it is much more likely that the culprit behind a hacking attack is a criminal group rather than a hostile state intelligence agency. Probability is the language in which likelihoods are expressed. For example, suppose a six-sided die is being used in a gambling game. If I have a suspicion that the die is loaded to give more sixes, I can test the hypothesis that the die is fair by throwing it many times. I know from first principles that an unbiased die tossed properly will fall randomly on any one of its six faces with probability 1/6. The result of each toss of the die should be random and independent of the previous toss. Thus I must expect some clustering of results by chance, with perhaps three or even four sixes being tossed in a row (the probability of four sixes in a row is small: (1/6) x (1/6) x (1/6) x (1/6) is roughly 0.0008, less than 1 in a thousand, but it is not zero). I will therefore not be too surprised to find a run of sixes. But, evidently, if I throw the die 100 times and I return 50 sixes, then it is a reasonable conclusion that the die is biased. The more tosses of that particular die, the more stable the proportion of sixes will be. Throw it 1,000 times, 10,000 times, and, if the result is consistent, our conclusion becomes more likely. A rational degree of belief in the hypothesis that the die is not fair comes from analysis of the data, seeing the difference between what results would be associated with the hypothesis (a fair die) and the alternative hypothesis (a die biased to show sixes).
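Those likelihoods can be computed directly. A minimal sketch using the binomial distribution for the number of sixes in repeated throws (comparing the fair die against an assumed, purely illustrative, bias of a six half the time):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Four sixes in a row on a fair die, as in the text:
print((1/6)**4)  # roughly 0.0008, less than 1 in a thousand

# Probability that a FAIR die shows 50 or more sixes in 100 throws:
p_fair_tail = sum(binom_pmf(k, 100, 1/6) for k in range(50, 101))

# The same tail under an illustrative bias hypothesis (a six half the time):
p_biased_tail = sum(binom_pmf(k, 100, 1/2) for k in range(50, 101))

# The evidence is astronomically more probable under the bias hypothesis;
# that likelihood ratio is what drives the Bayesian update.
print(p_fair_tail, p_biased_tail)
```

The fair-die tail probability is vanishingly small, which is why 50 sixes in 100 throws makes the bias hypothesis rational to believe.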

The key question to ask in that case is: if the die were fair, how likely is it that we would have seen 50 sixes in 100 throws? That is the approach of Bayesian inference we saw earlier in the chapter. The greater the divergence, the more it is rational to believe that the evidence points to it not being a fair die. We have conducted what intelligence officers call an analysis of competing hypotheses (ACH), one of the most important structured analytic techniques in use in Western intelligence assessment, pioneered by CIA analyst Richards J. Heuer.13 The method is systematically to list all the possible explanations (alternative hypotheses) and to test each piece of evidence, each inference and each assumption made as to whether it is significant in choosing between them (this is known, by an ugly term, as the discriminatability of the intelligence report). We then prefer the explanation with the least evidence pointing against it.

Alas, in everyday life, most situations we come across cannot be tested under repeated trials. Nor can we know in advance, or work out from first principles, what ideal results to compare with our observed data (such as the characteristics of a fair die). We cannot know that a boss is exhibiting unfair prejudice against one of their team in the way we can establish that a die is biased. But if we have a hypothesis of bias we can rationally test it against the evidence of observed behaviour. We will have to apply judgement in assessing the motives of the people involved and in testing possible alternative explanations for their behaviour against the evidence, discriminating between these hypotheses as best we can. When we apply Bayesian inference to everyday situations in that way, we end up with a degree of belief in the hypothesis that we conclude best explains the observed data. That result is inevitably subjective, but is the best achievable from the available evidence. And, of course, we must always therefore be open to correction if fresh evidence is obtained.
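The kind of everyday Bayesian updating described above can be sketched numerically. The priors and likelihoods below are invented purely for illustration; in a real case they would be subjective judgements about how probable each observed behaviour is under each hypothesis:

```python
# Minimal sketch: updating a degree of belief in H1 ("the boss is biased")
# against H2 ("no bias") as successive pieces of behaviour are observed.

def posterior(prior_h1, p_evidence_given_h1, p_evidence_given_h2):
    """Return P(H1 | evidence) for two mutually exclusive hypotheses."""
    numerator = p_evidence_given_h1 * prior_h1
    return numerator / (numerator + p_evidence_given_h2 * (1 - prior_h1))

# Start undecided, then update on each observation in turn.
# Each pair is (P(observation | H1), P(observation | H2)) - assumed values.
belief = 0.5
for p_h1, p_h2 in [(0.7, 0.3), (0.6, 0.5), (0.8, 0.4)]:
    belief = posterior(belief, p_h1, p_h2)
```

After three observations that each fit the bias hypothesis somewhat better, the degree of belief has risen from 0.5 to roughly 0.85 – subjective, but disciplined by the evidence.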

Stage 2 of SEES: explaining

The first step in stage 2 of SEES is therefore to decide what possible explanations (hypotheses) to test against each other. Let me start with an intelligence example. Suppose secret intelligence reveals that the military authorities of a non-nuclear weapon State A are seeking covertly to import specialist high-speed fuses of a kind associated with the construction of nuclear weapons but that also have some civilian research uses. I cannot be certain that State A is pursuing a nuclear weapons programme in defiance of the international Non-Proliferation Treaty, although I might know that it has the capability to enrich uranium. The covert procurement attempts might be explicable by caution on the part of State A, fearing that open attempts to purchase such fuses for civil use would be bound to be misunderstood. And the civil research institutions of State A might be using the military procurement route just for convenience since the military budget is larger. One hypothesis might be that the fuses are for a prohibited nuclear weapons programme. The obvious alternative would be that the fuses are for an innocent civil purpose. But there might be other hypotheses to test: perhaps the fuses were for some other military use. The important thing is that all the possible explanations should be caught by one or other of the hypotheses to be tested (in the jargon, exhausting the solution space). A further refinement might be to split the first hypothesis into two: a government-approved procurement for a nuclear weapons programme and one conducted by the military keeping the government in ignorance.

In that way we establish mutually exclusive hypotheses to test. Now we can turn to our evidence and see whether it helps to discriminate between them. We start with identifying key assumptions that might be swaying our minds and ask ourselves how the weight of evidence might shift if we change the assumptions (the analysts might, for example, take for granted that any nuclear research would be in the hands of the military). We identify inferences that we have drawn and whether they are legitimate (the fact that the end-user was not revealed on the procurement documents may imply that there is something to hide, or it may be just that overseas government procurement is carried out in that country via an import–export intermediary). Finally, we examine each piece of intelligence (not just secret intelligence of course; there are likely to be open sources as well) to see in Bayesian fashion whether it would be more likely under some hypotheses than others, and can thus help us discriminate between them. In doing this we check at the same time how confident we are in each piece of information being reliable, as we discussed in the preceding chapter.

Some of the intelligence reports may be consistent with all our hypotheses and they must be put to one side, however fascinating they are to read. Frustratingly, that can happen with reports of hard-to-get intelligence where perhaps lives have been risked to acquire it. A table (known in the trade as a Heuer table, after the pioneer of the use of structured analytic techniques, Richards J. Heuer) can be drawn up with separate columns for each hypothesis and rows for each piece of evidence, whose consistency with each hypothesis can then be logged in the table.

The first few rows of such a table might look like this:

| Evidence | Source type | Credibility / relevance | Hypothesis 1: is related to plan to conduct nuclear-weapon-related experiments | Hypothesis 2: can be explained by research for civil purposes |
| --- | --- | --- | --- | --- |
| Evidence 1: known capability to enrich uranium | An assumption | Medium | Consistent | Consistent |
| Evidence 2: procurement was via an import–export company | An inference | High / Medium | Consistent | Less consistent |
| Evidence 3: military security seen around the site | Imagery | High | Consistent | Less consistent |
| Evidence 4: covert channels were used to acquire high-speed fuses | Humint (new source on trial) | High | Consistent | Much less consistent |
| Evidence 5: encrypted high-grade military comms to and from the warehouse | Sigint | High / High | Consistent | Much less consistent |

A hypothetical example of part of a Heuer table

It may become apparent that one particular report provides the dominant evidence, in which case wise analysts will re-examine the sourcing of the report. A lesson from experience (including that of assessing Iraq’s holdings of chemical and biological weapons in 2002) is that once we have chosen our favoured explanation we become unconsciously resistant to changing our mind. Conflicting information that arrives is then too easily dismissed as unreliable or ignored as an anomaly. The table method makes it easier to establish an audit trail of how analysts went about reaching their conclusions. A record of that sort can be invaluable if later evidence casts doubt on the result, perhaps raising suspicions that some of the intelligence reporting was deliberately fabricated as a deception. We will see in Chapter 5 how German, US and UK analysts were deliberately deceived by the reporting of an Iraqi defector into believing that in 2003 Saddam Hussein possessed mobile biological warfare facilities.
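One minimal way to picture the mechanics of such a table in code – a toy sketch only, with evidence labels and consistency scores loosely modelled on the hypothetical example above rather than drawn from any real assessment – is:

```python
# Toy analysis of competing hypotheses (ACH): log each piece of evidence
# as consistent ("C") or inconsistent ("I") with each hypothesis, then
# prefer the hypothesis with the LEAST evidence against it.

evidence = {
    "enrichment capability":        {"weapons": "C", "civil": "C"},
    "import-export intermediary":   {"weapons": "C", "civil": "I"},
    "military security at site":    {"weapons": "C", "civil": "I"},
    "covert procurement channels":  {"weapons": "C", "civil": "I"},
}

def evidence_against(hypothesis):
    """Count the items of evidence inconsistent with a hypothesis."""
    return sum(1 for scores in evidence.values() if scores[hypothesis] == "I")

# Heuer's rule: choose the hypothesis with the fewest inconsistencies.
best = min(["weapons", "civil"], key=evidence_against)
```

Note that the first row, being consistent with both hypotheses, contributes nothing to the choice – exactly the point made above about non-discriminating reports.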

The analysis of competing hypotheses using Heuer tables is an example of one of the structured analytic techniques in use today in the US and UK intelligence communities. The method is applicable to any problem you might have where different explanations have to be tested against each other in a methodical way. Heuer himself cites Benjamin Franklin who, writing in 1772 to Joseph Priestley (the discoverer of oxygen), described his approach to making up his mind:

‘divide half a sheet of paper by a line into two columns; writing over the one Pro and over the other Con … put down over the different heads short hints of the different motives … for or against the measure. When I have thus got them all together in one view, I endeavour to estimate their relative weights; and where I find two, one on each side, that seem equal I strike them out. Thus proceeding I find where the balance lies … and come to a determination accordingly.’

In any real example there is likely to be evidence pointing both ways so a weighing up at the end is needed. Following the logic of scientific method it is the hypothesis that has least evidence against it that is usually to be favoured, not the one with most in favour. That avoids the bias that could come from unconsciously choosing evidence to collect that is likely to support a favoured hypothesis. I invite you to try this structured technique for yourself the next time you have a tricky decision to take.
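Franklin's procedure of striking out equally weighted pros and cons can itself be sketched in a few lines of code; the weights below are purely illustrative:

```python
# Franklin's 'moral algebra': cancel pro/con pairs of equal weight,
# then see which side the remaining balance favours.

def moral_algebra(pros, cons):
    """Return the net weight after cancelling equal pairs (positive => Pro)."""
    pros, cons = list(pros), list(cons)
    for w in list(pros):
        if w in cons:          # 'where I find two ... that seem equal
            pros.remove(w)     #  I strike them out'
            cons.remove(w)
    return sum(pros) - sum(cons)

# Illustrative weights for a decision: pros of 3, 2, 2, 1 vs cons of 3, 2, 4.
balance = moral_algebra([3, 2, 2, 1], [3, 2, 4])
```

Here the 3s and one pair of 2s cancel, leaving a balance of −1: the decision tilts, narrowly, against the measure.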

A striking example of the importance of falsifying alternative theories rather than confirming the most favoured comes from an unexpected quarter: the 2016 US Presidential election. It was an election campaign beset with allegations of ‘fake news’ (including the false stories created and spread by Russian intelligence agents to try to discredit one candidate, Hillary Clinton). One of the stories spread online featured a photograph of a young Donald Trump with the allegation that, in an interview with People magazine in 1998, he said: ‘If I were to run, I would run as a Republican. They’re the dumbest group of voters in the country. They believe anything on Fox News. I could lie and they’d still eat it up. I bet my numbers would be terrific.’ That sounds just like Trump, but the flaw is that he never said it to People magazine. A search of People magazine disconfirms that hypothesis – he gave no such interview.14 This story is an example of a falsifiable assertion. The hypothesis that he did say it can be checked and quickly shown to be untrue (that may of course have been the scheming intent of its authors, in order to lend support to the assertion that other anti-Trump stories were equally false). Most statements about beliefs and motivations are non-falsifiable and cannot be disproved in such a clear way. Instead, judgement is needed in reaching a conclusion that involves weighing evidence for and against, as we have seen with the Heuer method.

Assumptions and sensitivity testing

In this second stage of SEES, it is essential to establish how sensitive your explanation is to your assumptions and premises. What would it have taken to change my mind? Often the choice of explanation that is regarded as most likely will itself depend upon a critical assumption, so the right course is to make that dependency clear and to see whether alternative assumptions might change the conclusion reached. Assumptions have to be made, but circumstances can change, and what was reasonable to take as a given may no longer be so with time.

Structured diagnostic techniques, such as comparing alternative hypotheses, have the great advantage that they force an analytic group to argue transparently through all the evidence, perhaps prompting double-checking of the reliability of some piece of intelligence on which the choice of hypothesis seems to rest, or exposing an underlying assumption that may no longer hold or that would not be sensible to make in the context of the problem being examined.
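Sensitivity testing of this kind can be illustrated numerically: sweep the assumed prior and find the point at which the preferred hypothesis would flip. The likelihood values here are invented for the sketch:

```python
# Sensitivity analysis sketch: how strong does our prior belief in H1 need
# to be before H1 becomes the preferred explanation of the evidence?

def posterior(prior_h1, likelihood_h1, likelihood_h2):
    """P(H1 | evidence) for two mutually exclusive hypotheses."""
    num = prior_h1 * likelihood_h1
    return num / (num + (1 - prior_h1) * likelihood_h2)

# Assumed likelihoods of the observed evidence under each hypothesis.
L1, L2 = 0.8, 0.3

# Sweep priors from 0.01 to 0.99 and find the smallest prior at which
# H1 comes out as more likely than not.
preferred = [p / 100 for p in range(1, 100) if posterior(p / 100, L1, L2) >= 0.5]
threshold = min(preferred)
```

If the conclusion only holds for priors above the threshold, an analyst can ask whether the underlying assumption really justifies a prior that strong – exactly the ‘what would change my mind’ question.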

As we will see in the next chapter, turning an explanation into a predictive model that allows us to estimate how events will unfold is crucially dependent on honesty over the assumptions we make about human behaviour. Marriages are predicated on the assumption that both partners will maintain fidelity. Many is the business plan that has foundered because assumptions made in the past about consumer behaviour turned out to no longer be valid. Government policies can come unstuck, for example, when implicit assumptions, such as about whether the public will regard them as fair, turn out not to reflect reality. A striking example was the British Criminal Justice Act 1991 that made fines proportionate to the income of the offender, and collapsed on the outcry when two men fighting, equally to blame, were fined £640 and £64 respectively because they belonged to different income brackets.

Back in Serbia in 1995, General Mladić, to our surprise, simplified our assessment task of trying to understand and explain his motivations.

Pulling out a brown leather-backed notebook, every page filled with his own cramped handwriting, Mladić proceeded to read to us from it for over half an hour, recounting the tribulations of the Serb people at the hands both of the Croats and, as he put it, the Turks. He gave us his version of the history of his people, including the devastating Serbian defeat by the Ottoman Empire in 1389 at the Battle of the Field of Blackbirds. That was a defeat he saw as resulting in 500 years of Serbian enslavement. He recounted the legend that the angel Elijah had appeared to the Serb commander, Lazar, on the eve of the battle saying that victory would win him an earthly kingdom, but martyrdom would win a place for the Serb people in heaven. Thus even defeat was a spiritual triumph, and justified the long Serbian mission to recover their homeland from their external oppressors.

According to Mladić’s candid expression of his world view in that dining room in Serbia, he felt it was a continuing humiliation to have Muslims and Croats still occupying parts of the territory of Bosnia–Herzegovina, and an insult to have the West defending Bosnian Muslims in enclaves inside what he saw as his own country. In a dramatic climax to his narrative he ripped open his shirt and cried out, ‘Kill me now if you wish, but I will not be intimidated’, swearing that no foreign boot would be allowed to desecrate the graves of his ancestors.

Mladić had effectively given us the explanation we were seeking and answered our key intelligence question on his motivation for continuing to fight. We returned to our capitals convinced that the ultimatum had been delivered and understood, but Mladić would not be deterred from further defiance of the UN. The West would have to execute a policy U-turn to stop him, by replacing the UN peacekeepers with NATO combat troops under a UN mandate that could be safely backed by the use of air power. And so it worked out, first with the Anglo-French rapid reaction force on Mount Igman protecting Sarajevo and then the deployment of NATO forces including 20,000 US troops, all supported by a major air campaign.

I should add my satisfaction that the final chapter in the story concluded on 22 November 2017, when the Hague war crimes tribunal, with judges from the Netherlands, South Africa and Germany, ruled that, as part of Mladić’s drive to terrorize Muslims and Croats into leaving a self-declared Serb mini-state, his troops had systematically murdered several thousand Bosnian Muslim men and boys, and that groups of women, and girls as young as twelve years old, were routinely and brutally raped by his forces. The judges detailed how soldiers under Mladić’s command killed, brutalized and starved unarmed Muslim and Croat prisoners. Mladić was convicted of war crimes and sentenced to life imprisonment.

Conclusions: explaining why we are seeing what we do

Facts need explaining to understand why the world and the people in it are behaving as they appear to be. In this chapter, we have looked at how to seek the best ‘explanation’ of what we have observed or discovered about what is going on. If we wish to interpret the world as correctly as we can we should:

Recognize that the choice of facts is not neutral and may be biased towards a particular explanation.

Remember that facts do not speak for themselves and are likely to have plausible alternative explanations. Context matters in choosing the most likely explanation. Correlations between facts do not imply a direct causal connection.

Treat explanations as hypotheses each with a likelihood of being true.

Specify carefully alternative explanatory hypotheses to cover all the possibilities, including the most straightforward in accordance with Occam’s razor.

Test hypotheses against each other, using evidence that helps discriminate between them, an application of Bayesian inference.

Take care over how we may be unconsciously framing our examination of alternative hypotheses, risking emotional, cultural or historical bias.

Accept the explanatory hypothesis with the least evidence against it as most likely to be the closest fit to reality.

Generate new insights from sensitivity analysis of what it would take to change our mind.
