David Omand – How Spies Think – 10 Lessons in Intelligence – Part 5

Lesson 3: Estimations. Predictions need an explanatory model as well as sufficient data

In mid-August 1968, I was driving an elderly Land Rover with friends from university along the Hungarian side of the border with Czechoslovakia on the first stage of an expedition to eastern Turkey. To our surprise we found ourselves having to dodge in and out of the tank transporters of a Soviet armoured column crawling along the border. We did not realize – and nor did the Joint Intelligence Committee in London – that those tank crews already had orders to cross the border and invade Czechoslovakia as part of a twin strategy of intimidation and deception being employed by Yuri Andropov, then KGB chairman, to undermine the reform-minded government in Prague led by Alexander Dubček.1

US, UK and NATO intelligence analysts were aware of the Soviet military deployments, which could not be hidden from satellite observation and signals intelligence (I joined GCHQ a year later and learned how that had been done). The Western foreign policy community was also following the war of words between Moscow and Prague over Dubček’s reform programme. They shared Czech hopes that, in Dubček’s memorable campaign slogan, ‘socialism with a human face’ would replace the rigidities of Stalinist doctrine.

Dubček had run for the post of First Secretary of the Party on a platform of increased freedom of the press and of speech and movement; an economic emphasis on consumer goods; a reduction in the powers of the secret police; and even the possibility of multi-party elections. Dubček was in a hurry, with the wind of popular support behind him. But he was clearly and repeatedly ignoring warnings from Moscow that he was going too far too fast. In 1968, Prague was at risk of slipping from under Moscow’s control.

In the JIC, senior intelligence and policy officials met with representatives of the ‘5-eyes’ to consider whether Moscow would use military force as it had done in Hungary in 1956.2 This is the stage of analysis that the layperson might consider the most important, trying to predict for the policymakers what will happen next. This is very satisfying when it is achieved, although intelligence professionals shun the word ‘prediction’ as an overstatement of what is normally possible.

Analysts had no difficulty explaining the massing of tanks just on the other side of the Czech border as putting pressure on the reformist Czech government. The JIC analysts must have felt they had good situational awareness and a credible explanation of what was going on at a military level. But they failed to take the next step and forecast the invasion and violent crushing of the reform movement. They reasoned that the Soviet Union would hold back from such crude direct intervention given the international condemnation that would undoubtedly follow. That verb ‘reasoned’ carries the explanation of why the analysts got it wrong: they were reasonable people trying to predict the actions of an unreasonable regime. When they put themselves in the shoes of the decision-makers in Moscow, they still thought exclusively from their own perspective.

We now know from historical research much more than the analysts would have known at the time about the resolve of the Soviet leadership to crush the Czech reforms. Western intelligence analysts would probably have come to a different conclusion about Soviet willingness to take huge risks had they known of the active measures against the Czech reformers then being masterminded by Yuri Andropov, head of the KGB.

That the key inner adviser to President Brezhnev in Moscow was Andropov should have triggered alarm. Andropov had form. As Soviet Ambassador in Budapest in 1956, he had played a decisive role in convincing the Soviet leader, Nikita Khrushchev, that only the ruthless use of military force would end the Hungarian uprising. It was a movement that had started with student protests but had ended up with an armed revolt to install a new government committed to free elections and a withdrawal from the Warsaw Pact.

One of the main instruments being employed by Andropov was the use of ‘illegals’. The West found that out much later, in 1992, with the reporting of Vasili Mitrokhin, the Soviet KGB archivist and MI6 source. He revealed how specially selected and trained KGB officers had been sent into Czechoslovakia in 1968, disguised as tourists, journalists, businessmen and students, equipped with false passports from West Germany, Austria, the UK, Switzerland and Mexico. Each illegal was given a monthly allowance of $300, travel expenses and enough money to rent a flat, in the expectation that the Czech dissidents would more readily confide in Westerners. Their role was not only to penetrate reformist circles such as the Union of Writers, radical journals, the universities and political groupings, but also to take ‘active measures’ to blacken the reputation of the dissidents. The Soviet Prime Minister loudly complained of Western provocations and sabotage (with the alleged uncovering of a cache of American weapons and a faked document purporting to show a US plan for overthrowing the Prague regime). He used such arguments to justify Soviet interference in Czechoslovak affairs even though they were, in fact, the work of the KGB ‘illegals’.

In August 1968, under the pretext of preventing an imperialist plot, the Soviet Union despatched armies from Russia and four other Warsaw Pact countries to invade Czechoslovakia, taking over the airport and public buildings and confining Czech soldiers to barracks. Dubček and his colleagues were flown to Moscow under KGB escort, where, under considerable intimidation, they accepted the reality of complying with the demands of their occupiers.

Today we have seen Moscow using all these tactics from the Soviet playbook to prevent Ukraine orientating itself towards the EU. Yet, despite their understanding of Soviet history, Western analysts failed to predict the Russian seizure of Crimea and their armed intervention in eastern Ukraine. Analysts knew of past Soviet use of methods involving intimidation, propaganda and dirty tricks including the use of the little grey men of the KGB infiltrated into Czechoslovakia in 1968. Yet the appearance of ‘little green men’ in Ukraine, as the Russian special forces were dubbed by the media, came as a surprise.

Modelling the path to the future

The task of understanding how things will unfold is like choosing the most likely route to be taken across a strange country by a traveller you have equipped with a map that sets down only some of the features of the landscape. You know that all maps simplify to some extent; the perfect map, as described satirically by Jonathan Swift in Gulliver’s Travels, is one that has a scale of 1 to 1 and is thus as big and detailed as the ground being mapped.3 There are blank spots on the traveller’s map: ‘here be dragons’, as the medieval cartographers labelled areas where they did not have enough information. The important lesson is that reality itself has no blank spots: the problems you encounter are not with reality but with how well you are able to map it.

An example of getting the modelling of future international developments right was the 1990 US National Intelligence Council estimate ‘Yugoslavia Transformed’, produced a decade after the death of its autocratic ruler, the former Partisan leader Marshal Tito.4 The US analysts understood the dynamics of Tito’s long rule. He had forged a federation from very different and historically warring peoples: Serbs, Croats, Slovenes and Bosnian Muslims. As so often happens with autocrats ruling divided countries (think of Iraq under Saddam, or Libya under Gaddafi), Tito ruled by balancing the tribal loyalties. For every advantage awarded to one group there had to be counter-balancing concessions in other fields to the other groups. Meanwhile a tough internal security apparatus loyal to Tito and the concept of Yugoslavia identified potential flashpoints to be defused and dissidents to be exiled. After Tito’s death the centre could not long hold. The Serb leadership increasingly played the Serb nationalist and religious card and looked for support to Moscow. The Croats turned to the sympathy of Catholic fellowship in Germany. The Bosnian Muslims put their faith in the international community and the United Nations for protection. The 1990 US estimate summarized the future of the former Yugoslavia in a series of unvarnished judgements that read well in the light of the subsequent developments in the Balkans described in the previous chapter:

Yugoslavia will cease to function as a federal state within one year and will probably break up within two. Economic reform will not stave off the break-up …

There will be a protracted armed uprising by the Albanians in Kosovo. A full-scale, interrepublic war is unlikely but serious intercommunal violence will accompany the breakup and will continue thereafter. The violence will be intractable and bitter.

There is little that the US and its European allies can do to preserve Yugoslav unity. Yugoslavs will see such efforts as contradictory to advocacy of democracy and self-determination … the Germans will pay lip service to the idea of Yugoslav integrity, whilst quietly accepting the dissolution of the Yugoslav state.

In London, analysts shared the thrust of the US intelligence assessment on Yugoslavia. But the government of John Major did not want to get involved in what promised to be an internecine Balkan civil war, always the bloodiest kind of conflict. The Chiefs of Staff could see no British interest worth fighting for. I recall attending the Chiefs of Staff Committee and reporting on the deteriorating situation, only to have Bismarck’s wisecrack thrown back at me: that the pacification of the turbulent Balkans was not worth the healthy bones of a single Pomeranian grenadier.

There can be many reasons for failure to predict developments correctly. One of the most common reasons is simply the human temptation to indulge in magical thinking, imagining that things will turn out as we want without any credible causal explanation of how that will come about. We do this to shield ourselves from the unwelcome truth that we may not be able to get what we want. The arguments over the handling of the UK Brexit process say it all.

The choice between being more right or less wrong

It is easy to criticize analysts when they fail to warn of some aggressive act. They know that they will be accused of an intelligence failure. As a rule of thumb, analysts will tend to risk a false positive by issuing a warning estimate rather than risk the accusation of failure after a negative report failed to warn. The costs of not having a timely warning if the event does happen are usually greater than the costs of an unnecessary warning when it does not. Cynics might also argue that analysts are realists and they know that if they issue a warning but the event does not take place there will be many exculpatory reasons that can be deployed for events not turning out that way. On the other hand, if policymakers are badly surprised by events after a failure to warn there will be no excuses accepted.

Analysts are faced in those circumstances with an example of the much-studied false-positive/false-negative quality control problem.5 This is the same dilemma faced by car manufacturers who inspect cars as they leave the factory and have to decide where to set the testing so as to achieve an acceptable rate of defective vehicles passing the inspection (taken to be safe but actually not, a false positive), knowing that such vehicles are likely to break down and have to be recalled at great cost, and that the company’s reputation and sales will suffer; but knowing as well that if too many vehicles are wrongly rejected as unsafe (taken to be unsafe but actually not, a false negative) the car company will also incur large unnecessary costs in reworking them. This logic applies even more forcibly with medicines and foodstuffs. As consumers we have to be able to trust that foods labelled as nut-free are just that, in order to avoid the potentially lethal risk to those allergic to nuts. The consequence, however, is that the manufacturer will need a rigorous testing system achieving a very low false-positive rate, and that will push up the false-negative rejection rate, which is likely to add significant cost to the product. We can expect the cursor on most manufacturing industry inspection systems to be set towards avoiding more false positives at the expense of more false negatives. The software industry, however, is notorious, for cost reasons, for tolerating a high false-positive rate, preferring to issue endless patches and updates as the customers themselves find the flaws the hard way by actually using the software.
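
To make the trade-off concrete, here is a minimal sketch in Python of how moving a single inspection threshold changes the balance between defective vehicles wrongly passed (the author's false positives) and sound vehicles wrongly rejected (false negatives). The scores, population sizes and threshold values are invented for illustration, not data from any real inspection regime.

```python
import random

random.seed(1)

# Illustrative data only: a 'quality score' for each vehicle leaving the factory.
# Sound vehicles tend to score higher than defective ones, but the two overlap.
good = [random.gauss(75, 8) for _ in range(1000)]       # actually sound
defective = [random.gauss(60, 8) for _ in range(50)]    # actually faulty

def inspection_outcomes(threshold):
    """Pass (a 'positive' in the author's sense) any vehicle scoring above threshold."""
    false_positives = sum(1 for s in defective if s > threshold)  # faulty but passed as safe
    false_negatives = sum(1 for s in good if s <= threshold)      # sound but rejected
    return false_positives, false_negatives

# Raising the threshold cuts the number of faulty vehicles passed
# at the price of rejecting ever more sound ones.
for threshold in (55, 65, 75, 85):
    fp, fn = inspection_outcomes(threshold)
    print(f"threshold={threshold}: faulty-but-passed={fp:3d}  sound-but-rejected={fn:4d}")
```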

An obvious application in intelligence and security work is in deciding whether an individual has shown sufficient association with violent extremism to be placed on a ‘no-fly’ list. Policymakers would want the system to err on the side of caution. That means accepting rather more false negatives, which will of course seriously inconvenience individuals falsely seen as dangerous, because they will not be allowed to fly, as the price of having a very low level of false positives (individuals falsely seen as safe when they are not, which could lead, in the worst case, to a terrorist destroying a passenger aircraft by smuggling a bomb on board). Another example is the design of algorithms for intelligence agencies to pull out information relating to terrorist suspects from digital communications data accessed in bulk. Set the cursor too far in the direction of false positives and too much material of no intelligence interest will be retrieved, wasting valuable analyst time and risking unnecessary invasion of privacy; set the cursor too far towards false negatives and the risk rises of not retrieving the material being sought and of terrorists escaping notice. There is no optimal solution possible without weighing the relative penalties of a false positive as against a false negative. At one extreme, as we will see in the next chapter, is the so-called precautionary principle, whereby the risk of harm to humans means there can be no false positives. Application of such a principle comes at considerable cost.6
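
There is, as the paragraph says, no optimal setting without weighing the relative penalties. A hedged sketch of that weighing follows, using the bulk-data sense of ‘positive’ (an item flagged for analyst attention); the error rates, population sizes and cost figures are invented purely for illustration.

```python
# A sketch of weighing the two error penalties, with invented numbers.
# Here 'positive' means an item flagged for analyst attention.
COST_MISSED_THREAT = 1_000_000  # assumed penalty for each real threat not flagged
COST_FALSE_ALARM = 1            # assumed penalty for each harmless item flagged

# Hypothetical error rates at different cursor settings (fractions of each population).
settings = {
    "cautious (flag a lot)":     {"missed": 0.001, "false_alarm": 0.20},
    "balanced":                  {"missed": 0.01,  "false_alarm": 0.05},
    "strict (flag very little)": {"missed": 0.05,  "false_alarm": 0.01},
}

N_THREATS, N_HARMLESS = 100, 1_000_000   # assumed population sizes

# With these invented penalties the cautious setting has the lowest expected cost,
# which is the logic behind policymakers erring on the side of caution.
for name, rates in settings.items():
    expected_cost = (rates["missed"] * N_THREATS * COST_MISSED_THREAT
                     + rates["false_alarm"] * N_HARMLESS * COST_FALSE_ALARM)
    print(f"{name:28s} expected cost = {expected_cost:,.0f}")
```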

The false-positive/false-negative dilemma occurs with algorithms that have to separate data into categories. Such algorithms are trained on a large set of historic data where it is known which category each example falls into (such as genuinely suspect/not suspect) and the AI programme then works out the most efficient indicators to use in categorizing the data. Before the algorithm is deployed into service, however, the accuracy of its output needs to be assessed against the known characteristics of the input. Simply setting the rule at a single number so that, say, 95 per cent of algorithmic decisions are expected to be correct in comparison with the known training data is likely to lead to trouble, depending upon the ratio of false positives to false negatives in the result and the penalty associated with each. One way of assessing the algorithm in its task is to define its precision as the number of true positives as a proportion of all the positives that the algorithm thinks it has detected in the training data. Accuracy is often measured as the number of true positives and true negatives as a proportion of the total number in the training set. A modern statistical technique that can be useful with big data sets is to chart the trade-off between false positives and false negatives to be expected at each setting of the rule and to look at the area under the resulting curve (AUC) as a measure of overall success in the task.7
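
The definitions in this paragraph can be written out directly. The sketch below computes precision and accuracy from a confusion matrix and approximates the area under the curve by sweeping the decision threshold; the labelled scores are invented stand-ins for training data, not output from any real system.

```python
# Illustrative labelled scores: 1 = genuinely suspect, 0 = not suspect.
# A higher score means the algorithm leans more strongly towards 'suspect'.
data = [(0.95, 1), (0.90, 1), (0.85, 0), (0.80, 1), (0.70, 0),
        (0.60, 1), (0.55, 0), (0.40, 0), (0.30, 1), (0.20, 0),
        (0.15, 0), (0.05, 0)]

def confusion(threshold):
    tp = sum(1 for s, y in data if s >= threshold and y == 1)
    fp = sum(1 for s, y in data if s >= threshold and y == 0)
    fn = sum(1 for s, y in data if s < threshold and y == 1)
    tn = sum(1 for s, y in data if s < threshold and y == 0)
    return tp, fp, fn, tn

tp, fp, fn, tn = confusion(0.5)
precision = tp / (tp + fp)                  # true positives among everything flagged
accuracy = (tp + tn) / (tp + fp + fn + tn)  # correct decisions among all cases
print(f"precision={precision:.2f} accuracy={accuracy:.2f}")

# Chart the true-positive rate against the false-positive rate at every threshold
# setting, then take the area under the resulting curve (AUC) as a single measure.
points = [(0.0, 0.0)]
for threshold in sorted({s for s, _ in data}, reverse=True) + [0.0]:
    tp, fp, fn, tn = confusion(threshold)
    points.append((fp / (fp + tn), tp / (tp + fn)))   # (false-positive rate, true-positive rate)
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.2f}")
```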

Reluctance to act on intelligence warnings

The policy world may need shaking into recognizing that it has to take warnings seriously. In April 1993 I accompanied the British Defence Secretary, Malcolm Rifkind, to the opening of the Holocaust Museum in Washington. The day started with a moving tribute at Arlington Cemetery to the liberators of the concentration camps. I remembered the sole occasion my father had spoken to me of the horror of entering one such just-liberated camp in 1944, when he was serving as an officer in the Black Watch on the Eighth Army A Staff. It was a memory that he had preferred to suppress. Later that day Elie Wiesel, the Nobel Peace Prize winner, spoke passionately in front of President Bill Clinton, President Chaim Herzog of Israel and a large crowd of dignitaries about the need to keep the memory of those horrors alive. He issued an emotional appeal to remember the failure of the Allied powers to support the Warsaw Ghetto uprising and the Jewish resistance.8 He quoted the motto chiselled in stone over the entrance to the Holocaust Museum: ‘For the dead and the living we must bear witness’. Then, turning directly to face President Clinton and the First Lady, Hillary Clinton, he reminded them: ‘We are also responsible for what we are doing with those memories … Mr President, I cannot not tell you something. I have been in the former Yugoslavia last Fall … I cannot sleep since over what I have seen. As a Jew I am saying that we must do something to stop the bloodshed in that country! People fight each other and children die. Why? Something, anything, must be done.’

His message – genocide is happening again in Europe, it is happening on your watch, Mr President, and the Allies are once again doing nothing – was heard in an embarrassed silence, followed by loud applause from the survivors of the camps who were present. Later that year the UN Security Council did finally mandate a humanitarian operation in Bosnia, the UN Protection Force (UNPROFOR), for which the UK was persuaded to provide a headquarters and an infantry battle group. As the opening of the previous chapter recounted, that small peacekeeping force in their blue helmets and white-painted vehicles sadly proved inadequate when faced with the aggression of both Bosnian Serbs and Croats, and was helpless to stop the massacre of Bosnian Muslims at Srebrenica in the summer of 1995.

Providing leaders with warnings is not easy. The ancient Greek myth of Cassandra, one of the princesses of Troy and daughter of King Priam, relates that she was blessed by the god Apollo with the gift of foreseeing the future. But when she refused the advances of Apollo she was placed under a curse which meant that, despite her gift, no one would believe her. She tried in vain to warn the inhabitants of Troy to beware Greeks bearing gifts. The giant wooden horse, left by the Greeks as they pretended to lift the siege of the city, was nevertheless pulled inside the walls. Odysseus and his soldiers who were hidden inside climbed out at night and opened the city gates to the invading Greek Army. As Cassandra had cried out in the streets of Troy: ‘Fools! ye know not your doom … Oh, ye believe not me, though ne’er so loud I cry!’9 Not to have their warnings believed has been the fate of many intelligence analysts over the years and will be again. The phenomenon is known to the intelligence world as the Cassandra effect.

It might have been doubts about Cassandra’s motives that led to her information being ignored. In 1982 there were warnings from the captain of the ice patrol ship HMS Endurance in the South Atlantic who was monitoring Argentine media that the point was coming close when the Junta would lose patience with diplomatic negotiations. But these warnings were discounted by a very human reaction of ‘Well, he would say that, wouldn’t he’, given his ship was to be withdrawn from service under the cuts in capability imposed by the 1981 defence expenditure review. It is also quite possible that Cassandra might have made too many predictions in the past that led to nothing and created what is known as warning fatigue. We know this as crying wolf, from Aesop’s fable. That might in turn imply the threshold for warning was set too low and should have been set higher than turning out the whole village on a single shout of ‘wolf’ (but remember the earlier discussion of false positives and false negatives and how raising the warning threshold increases the risk of a real threat being ignored). Sending signals which lead to repeated false alarms is an ancient tactic to inure the enemy to the real danger. Warnings also have to be sufficiently specific to allow sensible action to be taken. Simply warning that there is a risk of possible political unrest in the popular holiday destination of Ruritania does not help the tourist know whether or not to cancel their holiday on the Ruritanian coast.

Perhaps poor Cassandra was simply not thought a sufficiently credible source, for reasons unconnected with the objective value of her intelligence reporting. Stalin was forewarned of the German surprise attack on the Soviet Union in 1941 by reports from well-placed Soviet intelligence sources, including the Cambridge spies, some of whom had access to Bletchley Park Enigma decrypts of the German High Command’s signals. But he discounted the reporting as too good to be true, assuming a deliberate attempt by the Allies to get him to regard Germany as an enemy and to discount the guarantees of peace in the 1939 Molotov–Ribbentrop non-aggression pact that he had approved two years earlier.

A final lesson from the failure of the Trojans to act on Cassandra’s warning might be that the cost of preventive action can be seen as too great. Legend has it that the Trojans were worried about angering their gods if they refused the Greek offering of the wooden horse. We may ignore troubling symptoms if we fear that a visit to the doctor will result in a diagnosis that prevents us from being able to fly to a long-promised holiday in the sun.

Expressing predictions and forecasts as probabilities

It is sadly the case that only rarely can intelligence analysts be definitive in warning what will happen next. Most estimates have to be hedged with caveats and assumptions. Analysts speak therefore of their degree of belief in a forward-looking judgement. Such a degree of belief is expressed as a probability of being right. This is a different use of probability from that associated with gambling games like dice or roulette, where the frequency with which a number comes up provides data from which the probability of a particular outcome is estimated. When we throw a fair die we know that the probability that the next throw will come up with a six is 1/6. We know the odds we ought to accept on a bet that this is what will happen. That is the frequentist interpretation of probability. By analogy, we think of the odds that intelligence analysts would rationally accept on their estimate being right. That is the measure of their degree of belief in their judgement. It is of course a subjective interpretation of probability.10

Intelligence analysts prefer – like political pollsters – forecasts that associate a probability with each of a range of possible outcomes. For example, the US Director of National Intelligence, Dan Coats, predicted in a worldwide threat assessment given to the Senate Intelligence Committee that competitors such as Russia, China and Iran ‘probably already are looking to the 2020 U.S. elections as an opportunity to advance their interests’.11 ‘Probably’ here is likely to mean 55–70 per cent, which can be thought of as the gambling odds the analysts should accept for being right (odds of 2 to 1 on, for example, correspond to a probability of about two-thirds, towards the top of that range).
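
The betting analogy can be made exact: fractional odds of ‘x to y on’ imply a probability of x divided by (x plus y). A small illustrative conversion, assuming nothing beyond that arithmetic:

```python
from fractions import Fraction

def probability_from_odds_on(x, y):
    """Fractional odds of 'x to y on' (stake x to win y) imply probability x / (x + y)."""
    return Fraction(x, x + y)

def odds_against(p):
    """Express a probability as 'a to b against': the ratio (1 - p) : p."""
    p = Fraction(p).limit_denominator(1000)
    ratio = (1 - p) / p
    return f"{ratio.numerator} to {ratio.denominator} against"

print(float(probability_from_odds_on(2, 1)))  # odds of 2 to 1 on imply roughly a 0.67 probability
print(odds_against(0.2))                      # an 'unlikely' 20 per cent chance -> 4 to 1 against
```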

When a forecast outcome is heavily dependent on external events, that is usually expressed as an assumption so that readers of the assessment understand that dependency. The use of qualifying words such as ‘unlikely’, ‘likely’ and so on is standardized by professional intelligence analysts. The UK yardstick was devised by the Professional Head of Intelligence Assessment (PHIA) in the Cabinet Office, and is in use across the British intelligence community, including with law enforcement. The example of the yardstick below is taken from the annual National Strategic Assessment (NSA) by the UK National Crime Agency.12

Probability and Uncertainty

Throughout the NSA, the ‘probability yardstick’ (as defined by the Professional Head of Intelligence Assessment (PHIA)) has been used to ensure consistency across the different threats and themes when assessing probability. The following defines the probability ranges considered when such language is used:
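
The yardstick table itself is not reproduced here, but the idea can be sketched in code: a numerical judgement is mapped onto a standard phrase, with deliberate gaps between the bands. The band boundaries below are my approximation of the published PHIA yardstick, not a quotation of the NSA table, so treat the exact figures as assumptions.

```python
# Approximate PHIA-style probability yardstick (band boundaries are assumptions,
# paraphrased from published UK guidance rather than quoted from the NSA table).
YARDSTICK = [
    ((0.00, 0.05), "remote chance"),
    ((0.10, 0.20), "highly unlikely"),
    ((0.25, 0.35), "unlikely"),
    ((0.40, 0.50), "realistic possibility"),
    ((0.55, 0.75), "likely / probable"),
    ((0.80, 0.90), "highly likely"),
    ((0.95, 1.00), "almost certain"),
]

def yardstick_phrase(p):
    """Return the standard phrase for probability p, or note that it falls in a gap."""
    for (low, high), phrase in YARDSTICK:
        if low <= p <= high:
            return phrase
    return f"{p:.0%} (falls in a gap: state the figure explicitly)"

for p in (0.03, 0.20, 0.37, 0.65, 0.97):
    print(f"{p:.0%}: {yardstick_phrase(p)}")
```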

The US Intelligence Community also has published a table showing how to express a likelihood in ordinary language (line 1 of the table below) and in probabilistic language (line 2 of the table, with the corresponding confidence level in line 3).13

One difference between the approach taken by the UK and the US analysts is in the use of gaps between the ranges in the UK case. The intention is to avoid the potential problem with the US scale over what term you use if your judgement is ‘around 20 per cent’. Two analysts can have a perfectly reasonable, but unnecessary, argument over whether something is ‘very unlikely’ or ‘unlikely’. The gaps obviate the problem. The challenge is over what to do if the judgement falls within one of the gaps. If an analyst can legitimately say that something is ‘a 75–80 per cent chance’, then they are free to do so. The yardstick is a guide and a minimum standard, but analysts are free to be more specific or precise in their judgements, if they can. It is sensible to think in 5 or 10 per cent increments to discourage unjustified precision for which the evidence is unlikely to be available. I recommend this framework in any situation in which you have to make a prediction. It is very flexible, universally applicable, and extremely helpful in aiding your decision-making and in communicating it to others. You could start off by reminding yourself the next time you say it is ‘unlikely’ to rain that that still leaves a one in five chance of a downpour. You might well accept that level of risk and not bother with a coat. But if you were badly run down after a bout of flu, even a 20 per cent chance of getting soaked and developing a fever would be a risk not worth running. That is an example of examining the expected value of the outcome – formed by multiplying together the probability of an event and a measure of the consequences for you of it happening – and not just its likelihood.
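
The expected-value point at the end of that paragraph is a single multiplication, and is worth writing out. The ‘cost of a soaking’ figures below are invented for illustration:

```python
def expected_loss(p_event, cost_if_it_happens):
    """Expected value of the bad outcome: probability multiplied by consequence."""
    return p_event * cost_if_it_happens

P_RAIN = 0.20  # 'unlikely' still leaves roughly a one-in-five chance of a downpour

# Invented consequence scores: how bad a soaking would be for you today.
print(expected_loss(P_RAIN, 1))    # healthy: minor nuisance, expected loss 0.2 -> skip the coat
print(expected_loss(P_RAIN, 50))   # run down after flu: fever risk, expected loss 10 -> take the coat
```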

The limits of prediction

The science fiction writer Isaac Asimov, in his Foundation and Empire books, imagined a future empirical science of psychohistory, in which recurring patterns in civilizations on a cosmic scale could be modelled using sociology, history and mathematical statistics.14 Broad sweeps of history could, Asimov fantasized, be forecast in the same way as statistical mechanics allows the behaviour of large numbers of molecules in a gas to be predicted, although the behaviour of individual molecules cannot (being subject to quantum effects). Asimov’s fictional creator of psychohistory, Dr Hari Seldon, laid down key assumptions: that the population whose behaviour was being modelled should be sufficiently large, and that the population should remain in ignorance of the results of the application of psychohistorical analyses because, if it became so aware, there would be feedback changing its behaviour. Other assumptions include that there would be no fundamental change in human society and that human nature and reactions to stimuli would remain constant. Thus, Asimov reasoned, the occurrence of times of crisis on an intergalactic scale could be forecast, and guidance provided (by a hologram of Dr Seldon) by constructing time vaults programmed to open when the crisis was predicted to arise and the need would be greatest.

Psychohistory will remain fantasy. Which is perhaps just as well. The main problem with such ideas is the impossibility of sufficiently specifying the initial conditions. Even with deterministic equations in a weather-forecasting model, after a week or so the divergence between what is forecast and what is observed becomes too large to allow the prediction to be useful. And often in complex systems the model is non-linear, so small changes can quickly become large ones. There are inherent limits to forecasting reality. Broad sweeps may be possible but not detailed predictions. There comes a point when the smallest disturbance (the iconic flapping of a butterfly’s wings) sets in train a sequence of cascading changes that tip weather systems over, resulting in a hurricane on the other side of the world. The finer the scale being used to measure forecasts in international affairs, the more variables that need to be taken into account, the greater the number of imponderables and assumptions, and the less accurate the long-term forecast is liable to be.15
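
The sensitivity to initial conditions described here can be illustrated with the simplest non-linear model available, the logistic map; it is a toy system, not a weather model, and the one-part-in-a-million starting difference is an arbitrary illustrative choice.

```python
# Logistic map x -> r * x * (1 - x): a toy non-linear system, not a weather model.
def trajectory(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)   # the 'true' initial state
b = trajectory(0.200001)   # the same state, mismeasured by one part in a million

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
# The difference starts at 0.000001 and grows until it is of the same order as the
# values themselves: beyond a certain horizon the forecast carries no useful information.
```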

Even at the level of physical phenomena not every activity is susceptible to precise modelling. Exactly when a radioactive atom will spontaneously decay cannot be predicted, although the number of such events to be expected in a given time can be known in terms of its probability of occurrence. The exact path a photon of light or an electron will take when passing through a pair of narrow slits can likewise only be predicted in advance in terms of probabilities (the famous double-slit experiment that demonstrates one of the key principles of quantum physics).
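
A brief sketch of that distinction: the timing of any individual decay is unpredictable, while the distribution of counts in a fixed window is not. The decay rate and number of trials below are invented for illustration.

```python
import random

random.seed(42)

RATE = 3.0        # assumed average decays per second for an imaginary sample
WINDOW = 1.0      # observation window in seconds
TRIALS = 100_000

counts = []
for _ in range(TRIALS):
    # Individual decays: exponential waiting times, each one unpredictable.
    t, n = 0.0, 0
    while True:
        t += random.expovariate(RATE)
        if t > WINDOW:
            break
        n += 1
    counts.append(n)

# The count per window is predictable in distribution (Poisson with mean RATE * WINDOW),
# even though no single decay time can be predicted.
mean = sum(counts) / TRIALS
print(f"observed mean counts per window: {mean:.2f} (theory: {RATE * WINDOW:.2f})")
print(f"fraction of windows with exactly 3 decays: {counts.count(3) / TRIALS:.3f}")
# Poisson probability of exactly 3 events with mean 3 is about 0.224.
```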

Secrets, mysteries and complex interactions

There is a deeper way of looking at intelligence, and that is to distinguish between secrets and mysteries. Secrets can be found out if the seeker has the ingenuity, skill and means to uncover them. Mysteries are of a different order. More and more secrets will not necessarily unlock the mystery of a dictator’s state of mind. But intelligence officers trying to get inside the mind of a potential adversary have to do their best to make an assessment, since that will influence what the policymakers decide to do next. Inferences can certainly be drawn, based on knowledge of the individuals concerned and on a reading of their motivations, together with general observation of human behaviour. But such a judgement will depend on who is making it. A neutral observer might come to a different view from that of someone from a country at risk of being invaded.

Mysteries have a very different evidential status. They concern events that have not yet happened (and therefore may never happen). Yet it is solutions to such mysteries that the users of intelligence need. From the moment early in 1982 when the Argentine Junta’s Chief of Naval Staff, Admiral Anaya, the chief hawk on the issue, issued secret orders to his staff to begin planning the invasion of the Falkland Islands, there were secrets to collect. But whether, when it came to the crunch, the Junta as a whole would approve the resulting plan and order its implementation would remain a mystery until much later.

To make matters harder, there is often an additional difficulty due to the complex interactions16 involved. We now know in the case of the Junta in 1982 that it completely misread what the UK reaction would be to an invasion of the Falkland Islands. And, just as seriously, the Junta did not take sufficient account of the longstanding US/UK defence relationship in assessing how the US would react. It may not have recognized the personal relationship that had developed between the UK’s Defence Secretary, John Nott, and his US counterpart, Caspar Weinberger. Margaret Thatcher’s iron response in sending a naval Task Force to recover the Islands met with Weinberger’s strong approval, in part because it demonstrated to the Soviet Union that armed aggression would not be allowed to pay.

These distinctions are important in everyday life. There are many secrets that can in principle be found out if your investigations are well designed and sufficiently intrusive. In your own life, your partner may have texts on their phone from an ex that they have kept private from you. Strictly speaking, these are secrets that you could probably find a way of accessing covertly (I strongly advise you don’t: your curiosity is not a sufficient reason for violating their privacy rights, and once you have done so, your own behaviour towards your partner, and therefore your partner’s towards you, is likely unconsciously to change). But whether you uncover the secrets or not, the mystery of why your partner kept them, and whether they ever intend to contact the ex in the future, remains unanswered, and not even your partner is likely to be certain of the answer. You would have the secret but not the answer to the mystery, and that answer is likely to depend upon your own behaviour over the coming months, which will exercise a powerful influence on how your partner feels about the relationship. Prediction in such circumstances of complex interactions is always going to be hard.

Missing out on the lessons of Chapter 2 and leaping from situational awareness to prediction – for example, by extrapolating trends or assuming conditions will remain the same – is a common error, known as the inductive fallacy. It is equivalent to weather forecasting by simply looking out of the window and extrapolating: most of the time tomorrow’s weather follows naturally from today’s, but not when there is a rapidly developing weather front. Ignoring the underlying dynamics of weather systems will mean you get the forecast right much of the time but inevitably not always. When it happens that you are wrong, as you are bound to be from time to time, you are liable to be disastrously wrong – for example, as a flash flood develops or an unexpected hurricane sweeps in. That holds as true for international affairs as it does for the rest of life: if you rely on assumptions, when you get it wrong, you get it really wrong. Experts are as likely to fall into this trap as anyone else.17

I am fond of the Greek term phronesis to describe the application of practical wisdom to the anticipation of risks. As defined by the art historian Edgar Wind, the term describes how good judgement can be applied to human conduct, consisting in a sound practical instinct for the course of events, an almost indefinable hunch that anticipates the future by remembering the past and thus judges the present correctly.18

Conclusions: estimates and predictions

Estimates of how events may unfold, and predictions of what will happen next, are crucially dependent on having a reliable explanatory model as well as sufficient data. Even if we are not consciously aware of doing this, when we think about the future we are mentally constructing a model of our current reality and reaching judgements about how our chosen explanatory model would behave over time and in response to different inputs or stimuli. It will help to have identified the most important factors that are likely to affect the outcome, and how sensitive that outcome might be to changes in circumstances. We are here posing questions of the ‘what next and where next?’ type. In answering them we should:

Avoid the inductive fallacy of jumping straight from situational awareness to prediction and use an explanatory model of how you think the key variables interact.

Be realistic about the limitations of any form of prediction, expressing results as estimates between a range of likely possibilities. Point predictions are hazardous.

Express your degree of confidence in your judgements in probabilistic language, taking care over consistent use of terms such as ‘likely’.

Remember to consider those less likely but potentially damaging outcomes as well as the most probable.

Be aware that wanting to see a reduction in the level of false positives implies increasing the level of false negatives to be expected.

Do not confuse the capability of an individual or organization to act with an intent to act on their part.

Be aware of cultural differences and your own prejudices when explaining the motivations and intent of another.

Distinguish between what you conclude based on information you have and what you think based on past experience, inference and intuition (secrets, mysteries and complexities).

Beware your own biases misleading you when you are trying to understand the motives of others.

Give warnings as active deliberative acts based on your belief about how events will unfold and with the intent of causing a change in behaviour or policy.