David Omand – How Spies Think – 10 Lessons In Intelligence – Part 3



Part One

AN ANALYST SEES: FOUR LESSONS IN ORDERING OUR THOUGHTS

1

Lesson 1: Situational awareness

Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong

London, 11 p.m., 20 April 1961. In room 360 of the Mount Royal Hotel, Marble Arch, London, four men are waiting anxiously for the arrival of a fifth. Built in 1933 as rented apartments and used for accommodation by US Army officers during the war, the hotel was chosen by MI6 as a suitably anonymous place for the first face-to-face meeting of Colonel Oleg Penkovsky of Soviet military intelligence, the GRU, with the intelligence officers who would jointly run him as an in-place agent of MI6 and CIA. When Penkovsky finally arrived he handed over two packets of handwritten notes on Soviet missiles and other military secrets that he had smuggled out of Moscow as tokens of intent. He then talked for several hours explaining what he felt was his patriotic duty to mother Russia in exposing to the West the adventurism and brinkmanship of the Soviet leader, Nikita Khrushchev, and the true nature of what he described as the rotten two-faced Soviet regime he was serving.1

The huge value of Penkovsky as a source of secret intelligence came from the combination of his being a trained intelligence officer and his access to the deepest secrets of the Soviet Union – military technology, high policy and personalities. He was one of the very few with his breadth of access allowed to visit London, tasked with talent spotting of possible sources for Soviet intelligence to cultivate in Western business and scientific circles.

Penkovsky had established an acquaintance with a frequent legitimate visitor to Moscow, a British businessman, Greville Wynne, and entrusted him with his life when he finally asked Wynne to convey his offer of service to MI6. From April 1961 to August 1962 Penkovsky provided over 5500 exposures of secret material on a Minox camera supplied by MI6. His material alone kept busy twenty American and ten British analysts, and his 120 hours of face-to-face debriefings occupied thirty translators, producing 1200 pages of transcript.

At the same time, on the other side of the Atlantic, intelligence staffs worried about the military support being provided by the Soviet Union to Castro’s Cuba. On 14 October 1962 a U-2 reconnaissance aircraft over Cuba photographed what looked to CIA analysts like a missile site under construction. They had the top secret plans Penkovsky had passed to MI6 showing the typical stages of construction and operation for Soviet medium-range missile sites. In the view of the CIA, without this information it would have been very difficult to identify which type of nuclear-capable missiles were at the launch sites and track their operational readiness. On 16 October President Kennedy was briefed on the CIA assessment and shown the photographs. By 19 October he was told a total of nine such sites were under construction and had been photographed by overflights. On 21 October the British Prime Minister, Harold Macmillan, was informed by President Kennedy that the entire US was now within Soviet missile range with a warning time of only four minutes. Macmillan’s response is recorded as ‘now the Americans will realize what we here in England have lived through these past many years’. The next day, after consultation with Macmillan, the President instituted a naval blockade of Cuba.2

The Cuban missile crisis is a clear example of the ability intelligence has to create awareness of a threatening situation, the first component of the SEES model of intelligence analysis. The new evidence turned US analysts’ opinion on its head. They had previously thought the Soviets would not dare to attempt introducing nuclear missile systems in the Western hemisphere. Now they had a revised situational awareness of what the United States was facing.

There is a scientific way of assessing how new evidence should alter our beliefs about the situation we face, the task of the first stage of the SEES method. That is the Bayesian approach to inference, widely applied in intelligence analysis, modern statistics and data analysis.3 The method is named after the Rev. Thomas Bayes, the eighteenth-century Tunbridge Wells cleric who first described it in a note on probability found among his papers after his death in 1761.

The Bayesian approach uses conditional probability to work backwards from seeing evidence to the most likely causes of that evidence existing. Think of the coin about to be tossed by a football referee to decide which side gets to pick which goal to attack in the first half of the game. To start with it would be rational to estimate that there is a 50 per cent probability that either team will win the toss. But what should we think if we knew that in every one of the last five games involving our team and the same referee we had lost the toss? We would probably suspect foul play and reduce our belief that we stand an even chance of winning the toss this time. That is what we describe as the conditional probability, given that we now know the outcome of previous tosses. It is different from our prior estimate. What Bayesian inference does in that case is give us a scientific method of starting with the evidence of past tosses to arrive at the most likely cause of those results, such as a biased coin.
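For readers who like to see the arithmetic, here is a minimal sketch in Python of the coin-toss reasoning above. The 0.9 chance that a rigged coin produces a loss is an illustrative assumption of mine, not a figure from the text; the point is only how the evidence of five straight losses shifts the prior belief of an even chance.

```python
# A minimal sketch of Bayesian updating for the referee's coin.
# Two hypotheses: the toss is fair, or it is rigged against our team.
# The 0.9 'loss rate' for a rigged coin is an assumed, illustrative figure.

p_fair = 0.5            # prior belief that the toss is fair
p_rigged = 0.5          # prior belief that the toss is rigged against us

p_loss_if_fair = 0.5    # chance of losing one toss with a fair coin
p_loss_if_rigged = 0.9  # assumed chance of losing one toss with a rigged coin

# Evidence: we lost the toss in each of the last five games.
likelihood_fair = p_loss_if_fair ** 5      # 0.5**5, about 0.031
likelihood_rigged = p_loss_if_rigged ** 5  # 0.9**5, about 0.590

# Bayes: posterior is proportional to prior x likelihood, then normalise.
posterior_fair = (p_fair * likelihood_fair) / (
    p_fair * likelihood_fair + p_rigged * likelihood_rigged
)

print(f"Belief the toss is fair after five straight losses: {posterior_fair:.2f}")
# Roughly 0.05 - the evidence shifts us strongly towards suspecting foul play.
```

The same working backwards, from the evidence seen to the most likely cause of it, is what the intelligence examples that follow rely on.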

Bayesian inference helps us to revise our degree of belief in the likelihood of any proposition being true given our learning of evidence that bears on it. The method applies even when, unlike the coin-tossing example, we only have a subjective initial view of the likelihood of the proposition being true. An example would be the likelihood of our political party winning the next election. In that case it might then be new polling evidence that causes us to want to revise our estimate. We can ask ourselves how far the new evidence helps us discriminate between alternative views of the situation or, as we should term them, alternative hypotheses, about what the outcome is likely to be. If we have a number of alternatives open to us, and the evidence is more closely associated with one of them than the alternatives, then it points us towards believing more strongly that that is the best description of what we face.

The Bayesian method of reasoning therefore involves adjusting our prior degree of belief in a hypothesis on receipt of new evidence to form a posterior degree of belief in it (‘posterior’ meaning after seeing the evidence). The key to that re-evaluation is to ask the question: if the hypothesis were actually true, how likely is it that we would have been able to see that evidence? If we think that evidence is strongly linked to the hypothesis being true, then we should increase our belief in the hypothesis.

The analysts in the Defense Intelligence Agency in the Pentagon had originally thought it was very unlikely that the Soviet Union would try to introduce nuclear missiles into Cuba. That hypothesis had what we term a low prior probability. We can set this down precisely using notation that will come in handy in the next chapter. Call the hypothesis that nuclear missiles would be introduced N. We can write their prior degree of belief in N as a prior probability p(N) lying between 0 and 1. In this case, since they considered N very unlikely, they might have given p(N) a probability value of 0.1, meaning only 10 per cent likely.

The 14 October 1962 USAF photographs forced them to a very different awareness of the situation. They saw evidence, E, consistent with the details Penkovsky had provided of a Soviet medium-range nuclear missile installation under construction. The analysts suddenly had to face the possibility that the Soviet Union was introducing such a capability into Cuba by stealth. They needed to find the posterior probability p(N|E) (read as the reassessed probability of the hypothesis N given the evidence E where the word ‘given’ is written using the vertical line |).

The evidence in the photographs was much more closely associated with the hypothesis that these were Soviet nuclear missile launchers than any alternative hypothesis. Given the evidence in the photographs, they did not appear to be big trucks carrying large pipes for a construction site, for instance. The chances of the nuclear missile hypothesis being true given the USAF evidence will be proportionate to p(E|N), which is the likelihood of finding that evidence on the assumption that N is true. That likelihood was estimated as much greater than the overall probability that such photographs might have been seen in any case (which we can write as p(E)). The relationship between the nuclear missile hypothesis and the evidence seen, that of p(E|N) to p(E), is the factor we need to convert the prior probability p(N) to the posterior probability that the decisionmaker needs, p(N|E).

The Rev. Bayes gave us the rule to calculate what the posterior probability is:

p(N|E) = p(N) × [p(E|N) / p(E)]

Or, the new likelihood of something being the case given the evidence is found by adjusting what you thought was likely (before you saw the evidence) by how well the new evidence supports the claim of what could be happening.

This is the only equation in this book. Despite wanting to talk as plainly as possible, I’ve included it because it turns words into precise, calculable conditional likelihoods, which is what so much of modern data science is about. In the next chapter we examine how we can apply Bayes’s great insight to work backwards, inferring from observations what are the most likely causes of what we see.

The example of the Cuban missile crisis shows Bayesian logic in action to provide new situational awareness. For example, if the analysts had felt that the photographs could equally well have been of a civil construction site, and so the photographs were equally likely whether or not N was true (i.e. whether or not these were nuclear missile launchers), then p(E|N) would be the same as p(E), the factor in Bayes’s rule would be unity and the posterior probability would be no different from the prior. The President would not be advised to change his low degree of belief that Khrushchev would dare try to introduce nuclear missiles into Cuba. If, on the other hand, E would be much more likely to be seen in cases where N is true (which is what the Penkovsky intelligence indicated), then it is a strong indicator that N is indeed true and p(E|N) will be greater than p(E). p(N|E) therefore rises significantly. For the Pentagon analysts p(N|E) would have been much nearer to 1, a near certainty. The President was advised to act on the basis that Soviet nuclear missiles were in America’s backyard.
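To make the arithmetic concrete, here is a small sketch applying Bayes’s rule with the 0.1 prior mentioned above. The values chosen for p(E|N) and p(E) are illustrative assumptions of mine, picked only to show how the factor p(E|N)/p(E) moves the posterior in the two cases just described; they are not figures from the book.

```python
def posterior(prior, p_e_given_n, p_e):
    """Bayes's rule: p(N|E) = p(N) * [p(E|N) / p(E)]."""
    return prior * (p_e_given_n / p_e)

prior_n = 0.1  # analysts' low prior that Soviet missiles would be introduced

# Case 1: the photographs are equally likely whether or not N is true,
# e.g. they could just as well show a civil construction site.
print(posterior(prior_n, p_e_given_n=0.05, p_e=0.05))  # 0.1 - belief unchanged

# Case 2: the photographs match Penkovsky's missile-site plans, so the
# evidence is far more likely if N is true (illustrative numbers).
print(posterior(prior_n, p_e_given_n=0.45, p_e=0.05))  # 0.9 - near certainty
```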

Kennedy’s key policy insight in 1962 was recognition that Khrushchev would only have taken such a gamble over Cuba having been persuaded that it would be possible to install the missiles on Cuba covertly, and arm them with nuclear warheads before the US found out. The US would then have discovered that the Soviet Union was holding at immediate risk the entire Eastern seaboard of the US, but would have been unable to take action against Cuba or the missiles without running unacceptable risk. Once the missiles had been discovered before they were operational, it was then the Soviet Union that was carrying the risk of confrontation with the naval blockade Kennedy had ordered. Kennedy privately suggested a face-saving ladder that Khrushchev could climb down (by offering later withdrawal of the old US medium-range missiles based in Turkey), which Khrushchev duly accepted. The crisis ended without war.

The story of President Kennedy’s handling of the Cuban missile crisis has gone down as a case study in bold yet responsible statecraft. It was made possible by having situational awareness – providing the what, who, where and when that the President needed based on Penkovsky’s intelligence on technical specifications about Soviet nuclear missiles, their range and destructive power, and how long they took to become operational after they were shipped to a given location. That last bit of intelligence persuaded Kennedy that he did not need to order air strikes to take out the missile sites immediately. His awareness of the time he had gave him the option of trying to persuade Khrushchev that he had miscalculated.

Bayesian inference is central to the SEES method of thinking. It can be applied to everyday matters, especially where we may be at risk of faulty situational awareness. Suppose you have recently been assigned to a project that looks, from the outside, almost impossible to complete successfully on time and in budget. You have always felt well respected by your line manager, and your view of the situation is that you have been given this hard assignment because you are considered highly competent and have an assured future in the organization. However, at the bottom of an email stream that she had forgotten to delete before forwarding, you notice that your manager calls you ‘too big for your boots’. Working backwards from this evidence you might be wise to infer that it is more likely your line manager is trying to pull you down a peg or two, perhaps by getting you to think about your ability to work with others, by giving you a job that will prove impossible. Do try such inferential reasoning with a situation of your own.

Most intelligence analysis is a much more routine activity than the case of the Cuban missile crisis. The task is to try to piece together what’s going on by looking at fragmentary information from a variety of sources. The Bayesian methodology is the same in weighing information in order to be able to answer promptly the decisionmakers’ need to know what is happening, when and where and who is involved.

When data is collected in the course of intelligence investigations, scientific experiments or just in the course of web browsing and general observation, there is a temptation to expect that it will conform to a known pattern. Most of the data may well fit nicely. But some may not. That may be because there are problems with the data (source problems in intelligence, experimental error for scientists) or because the sought-for pattern is not an accurate enough representation of reality. It may be that the bulk of the observations fit roughly the expected pattern. But more sensitive instruments or sources with greater access may also be providing data that reveals a new layer of reality to be studied. In the latter case, data that does not fit what has been seen before may be the first sighting of a new phenomenon that cries out to be investigated, or, for an intelligence officer, that could be the first sign that there is a deception operation being mounted. How to treat such ‘outliers’ is thus often the beginning of new insights. Nevertheless, it is a natural human instinct to discard or explain away information that does not fit the prevailing narrative. ‘Why spoil a good story’ is the unconscious thought process. Recognizing the existence of such cases is important in learning to think straight.
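As a loose illustration of the point about outliers, the sketch below flags observations that sit far from the bulk of the data using a simple standard-score test on made-up readings. This is only one crude way of surfacing candidates for closer scrutiny; whether a flagged value is a data problem, a deception or the first sign of something genuinely new remains a judgement for the analyst.

```python
import statistics

# Illustrative readings; the last value does not fit the expected pattern.
readings = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 14.7]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag anything more than two standard deviations from the mean
# as a candidate outlier to investigate rather than quietly discard.
outliers = [x for x in readings if abs(x - mean) > 2 * stdev]
print(outliers)  # [14.7]
```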

Penkovsky had quickly established his bona fides with MI6 and the CIA. But our judgements depend crucially on assessing how accurate and reliable the underlying information base is. What may be described to you as a fact about some event of interest deserves critical scrutiny to test whether we really do know the ‘who, what, where and when’. In the same way, an intelligence analyst would insist when receiving a report from a human informant on knowing whether this source had proved to be regular and reliable, like Penkovsky, or was a new untested source. Like the historian who discovers a previously unknown manuscript describing some famous event in a new way, the intelligence officer has to ask searching questions about who wrote the report and when, and whether they did so from first-hand knowledge, or from a sub-source, or even from a sub-sub-source with potential uncertainty, malicious motives or exaggeration being introduced at each step in the chain. Those who supply information owe the recipient a duty of care to label carefully each report with descriptions to help the analyst assess its reliability. Victims of village gossip and listeners to The Archers on BBC Radio 4 will recognize the effect.

The best way to secure situational awareness is when you can see for yourself what is going on, although even then be aware that appearances can be deceptive, as optical illusions demonstrate. It would always repay treating with caution a report on a social media chat site of outstanding bargains to be had on a previously unknown website. Most human eyewitness reporting needs great care to establish how reliable it is, as criminal courts know all too well. A good intelligence example where direct situational awareness was hugely helpful comes from the Falklands conflict. The British authorities were able to see the flight paths of Argentine air force jets setting out to attack the British Task Force because they had been detected by a mountaintop radar in Chile, and the Chilean government had agreed their radar picture could be accessed by the UK.

Experienced analysts know that their choice of what deserves close attention and what can be ignored is a function of their mental state at the time.4 They will be influenced by the terms in which they have been tasked but also by how they may have unconsciously formulated the problem. The analysts will have their own prejudices and biases, often from memories of previous work. In the words of the tradecraft primer for CIA officers:

‘These are experience based constructs of assumptions and expectations both about the world in general and more specific domains. These constructs strongly influence what information analysts will accept – that is, data that are in accordance with analysts’ unconscious mental models are more likely to be perceived and remembered than information that is at odds with them.’5 Especial caution is needed therefore when the source seems to be showing you what you had most hoped to see.

The interception and deciphering of communications and the product of eavesdropping devices usually have high credibility with intelligence analysts because it is assumed those involved do not realize their message or conversation is not secure and therefore will be speaking honestly. But that need not be the case, since one party to a conversation may be trying to deceive the other, or both may be participating in an attempt to deceive a third party, such as the elaborate fake communications generated before the D-Day landings in June 1944 to create the impression of a whole US Army Corps stationed near Dover. That, combined with the remarkable double agent operation that fed back misleading information to German intelligence, provided the basis of the massive deception operation mounted for D-Day (Operation Fortitude). The main purpose was to convince the German High Command that the landings in Normandy were only the first phase with the main invasion to follow in the Pas de Calais. That intelligence-led deception may have saved the Normandy landings from disaster by persuading the German High Command to hold back an entire armoured division from the battle.

Unsubstantiated reports (at times little more than rumour) swirl around commercial life, are picked up in the business sections of the media and are a driver of market behaviour. As individuals, the sophisticated analysts of the big investment houses may well not be taken in by some piece of market gossip. But they may well believe that the average investor will be, and that the market will move, and as a consequence they have to make their investment decisions as if the rumour is true. It was that insight that enabled the great economist John Maynard Keynes to make so much money for his alma mater, King’s College Cambridge, in words much quoted today in the marketing material of investment houses: ‘successful investing is anticipating the anticipation of others’.6 Keynes described this process in his General Theory as a beauty contest:

It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.7

The Penkovsky case had a tragic ending. His rolls of film had to be delivered by dead drop in the teeth of Soviet surveillance using methods later made famous by John le Carré’s fictional spies, including the mark on the lamppost to indicate there was material to pick up. That task fell to Janet Chisholm, the wife of Penkovsky’s SIS case officer working under diplomatic cover in the Moscow Embassy. She had volunteered to help and was introduced to Penkovsky during one of his official visits to London. It was no coincidence therefore that her children were playing on the pavement of Tsvetnoy Boulevard while she watched from a nearby bench, at the exact moment Oleg Penkovsky in civilian clothes walked past. He chatted to the children and offered them a small box of sweets (that he had been given for that purpose during his meeting in London) within which were concealed microfilms of documents that Penkovsky knew would meet London’s and Washington’s urgent intelligence requirements. Similar drops of film followed. She was, however, later put under routine surveillance and by mischance she was seen making a ‘brush contact’ with a Russian who the KGB could not immediately identify but who triggered further investigations. That and other slips made by Penkovsky led finally to his arrest. His go-between, the British businessman Greville Wynne, was then kidnapped during a business trip to Budapest, and put on show trial in Moscow alongside Penkovsky. Both were found guilty. Penkovsky was severely tortured and shot. Wynne spent several years in a Soviet prison until exchanged in 1964 in a spy swop for the convicted KGB spy Gordon Lonsdale (real name Konon Molody) and his cut-outs, an antiquarian bookseller and his wife, Peter and Helen Kroger, who had helped him run a spy ring against the UK Admiralty research establishment at Portland.

The digital revolution in information gathering

Today a Penkovsky could more safely steal secret missile plans by finding a way of accessing the relevant database. That is true for digital information of all kinds if there is access to classified networks. Digital satellite imagery provides global coverage. The introduction of remotely piloted aircraft with high-resolution cameras provides pin-sharp digitized imagery for operational military, security and police purposes, as well as for farming, pollution control, investigative journalism and many other public uses. At any incident there are bound to be CCTV cameras and individuals with mobile phones (or drones) that have high-resolution cameras able to take video footage of the event – and media organizations such as broadcasters advertise the telephone numbers to which such footage can be instantly uploaded. Every one of us is potentially a reconnaissance agent.

There is the evident risk that we end up with simply too much digital data to make sense of. The availability of such huge quantities of digitized information increases the importance of devising artificial intelligence algorithms to sort through it and highlight what appears to be important.8 Such methods rely upon applying Bayesian inference to learn how best to search for the results we want the algorithms to detect. They can be very powerful (and more reliable than a human would be) if the task they are given is clear-cut, such as checking whether a given face appears in a large set of facial images or whether a specimen of handwriting matches any of those in the database. But these algorithms are only as reliable as the data on which they were trained, and spurious correlations are to be expected. The human analyst is still needed to examine the selected material and to add meaning to the data.9

At the same time, we should remember that the digital world also provides our adversaries with ample opportunities to operate anonymously online and to hack our systems and steal our secrets. Recognition of these cyber-vulnerabilities has led the liberal democracies to give their security and intelligence agencies access to powerful digital intelligence methods, under strict safeguards, to be able to search data in bulk for evidence about those who are attacking us.

One side effect of the digitization of information is the democratization of situational awareness. We can all play at being intelligence analysts given our access to powerful search engines. Anyone with a broadband connection and a mobile device or computer has information power undreamed of in previous eras. There is a new domain of open-source intelligence, or OSINT. We use it ourselves when trying to decide which party to vote for in an election and wanting to know what each candidate stands for, when ascertaining the level of property prices in a particular area, or when researching which university offers us the most relevant courses. The Internet potentially provides the situational awareness that you need to make the right decision. But like intelligence officers you have to be able to use it with discrimination.

The tools available to all of us are remarkable. Catalogues of image libraries can be searched to identify in fractions of a second a location, person, artwork or other object. Google Images has indexed over 10 billion photographs, drawings and other images. By entering an address almost anywhere in the world, Google Street View will enable you to see the building and take a virtual drive round the neighbourhood with maps providing directions and overlays of information. The position of ships and shipping containers can be displayed on a map, as can the location of trains across much of Europe.

With ingenuity and experience, an internet user can often generate situational awareness to rival that of intelligence agencies and major broadcasting corporations. The not-for-profit organization Bellingcat10 is named after Aesop’s fable in which the mice propose placing a bell around the neck of the cat so that they are warned in good time of its approach but none will volunteer to put the bell on it. Bellingcat publishes the results of non-official investigations by private citizens and journalists into war crimes, conditions in war zones and the activities of serious criminals. Its most recent high-profile achievement was to publish the real identities of the two GRU officers responsible for the attempted murder of the former MI6 agent and GRU officer Sergei Skripal and his daughter in Salisbury and the death of an innocent citizen.

It requires practice to become proficient in retrieving situational information from the 4.5 billion indexed pages of the World Wide Web (growing by about one million documents a day) and the hundreds of thousands of accessible databases. Many sites are specialized and may take skill, effort and the inclination to find (a location map of fishing boats around the UK, for example, should you ever want to know, can be found at fishupdate.com).

Although huge, the indexed surface web accessible by a search engine is estimated to be only 0.03 per cent of the total Internet. Most of the Internet, the so-called deep web, is hidden from normal view, for largely legitimate reasons since it is not intended for casual access by an average user. These are sites that can only be accessed if you already know their location, such as corporate intranets and research data stores, and most will be password-protected. In addition to the deep web, a small part of the Internet is the so-called ‘dark web’ or ‘dark net’ with its own indexing, which can only be reached if specialist anonymization software such as Tor is being used to hide the identity of the inquirer from law enforcement.11 The dark net thus operates according to different rules from the rest of the Internet that has become so much a part of all of our daily lives. An analogy for the deep web would be the many commercial buildings, research laboratories and government facilities in any city that the average citizen has no need to access, but when necessary can be entered by the right person with the proper pass. The dark net, to develop that cityscape analogy, can be thought of like the red-light district in a city with a small number of buildings (sometimes very hard to find), where access is controlled because the operators want what is going on inside to remain deeply private. At one time, these would have been speakeasies, illegal gambling clubs, flophouses and brothels, but also the meeting places of impoverished young artists and writers, political radicals and dissidents. Today it is where the media have their secure websites which their sources and whistleblowers can access anonymously.

I guess we have all cursed when clicking on the link for a web page we wanted brought up the error message ‘404 Page Not Found’. Your browser communicated with the server, but the server could not locate the web page where it had been indexed. The average lifespan of a web page is under 100 days so skill is needed in using archived web material to retrieve sites that have been mislabelled, moved or removed from the web. Politicians may find it useful that claims they make to the electorate can thus quickly disappear from sight, but there are search methods that can retrieve old web pages and enable comparison with their views today.12 Most search engines use asterisks to denote wild cards, so a query that includes ‘B*n Lad*n’ will search through the different spellings of his name such as Ben Laden, Bin Laden (the FBI-preferred spelling), Bin Ladin (the CIA-preferred spelling) and so on. Another useful lesson is the use of the tilde, the ~ character on the keyboard. So prefacing a query term with ~ will result in a search for synonyms as well as the specific query term, and will also look for alternative endings. Finally, you can ask the search to ignore a word by placing a minus in front of it, as –query. The meta-search engine Dogpile will return answers taken from other search engines, including from Google and Yahoo.
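To show what a wildcard pattern is actually doing (the matching idea only, not any particular search engine’s internals), here is a minimal Python sketch using the standard fnmatch module, where ‘*’ stands for any run of characters.

```python
from fnmatch import fnmatch

# The '*' wildcard stands for any run of characters, so one pattern
# covers the different transliterations mentioned above.
spellings = ["Ben Laden", "Bin Laden", "Bin Ladin", "Osama"]

pattern = "B*n Lad*n"
matches = [name for name in spellings if fnmatch(name, pattern)]
print(matches)  # ['Ben Laden', 'Bin Laden', 'Bin Ladin']
```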

The order in which results are presented to you after entering a search query into a search engine can give a misleading impression of what is important. The answers that are returned (miraculously in a very small fraction of a second) may have been selected in a number of different ways. The top answer may be as a result of publicity-based search – a form of product placement where a company, interest group or political party has paid to have its results promoted in that way (or has used one of the specialist companies that offer for a fee to deliver that result to advertisers). A search on property prices in an area will certainly flag up local estate agents who have paid for the marketing advantage of appearing high up on the page. The answers will also take account of the accumulated knowledge in the search database of past answers, and also which answers have been most frequently clicked for further information (a popularity-based search, thus tapping into a form of ‘wisdom of the crowd’). This can be misleading. While it may be interesting to see the results of a search for information about university courses that has been sorted by what were the most popular such searches, it is hardly helpful if what you want to know about is all the courses available that match your personal interests.

Finally, and perhaps most disturbingly, the suggested answers to the query may represent a sophisticated attempt by the algorithm to conduct a personalized search by working out what it is that the user is most likely to want to know (in other words, inferring why the question is being asked) from the user’s previous internet behaviour and any other personal information about the individual accessible by the search engine. Two different people entering the same search terms on different devices will therefore get a different ranking of results. My query ‘1984?’ using the Google Chrome browser and the Google search engine brings up George Orwell’s dystopian novel along with suggestions of how I can most conveniently buy or download a copy. Helpfully, the Wikipedia entry on the book is also high up on the first page of the 1.49 billion results I am being offered (in 0.66 seconds). The same query using the Apple Safari browser and its search engine brings up first an article about the year 1984 telling me it was a leap year. And a very different past browsing history might highlight references to the assassination of Indira Gandhi in 1984, or news that the release of the forthcoming film Wonder Woman 1984 has been postponed to 2020.

Internet searching is therefore a powerful tool for acquiring the components of situational awareness. That is, for as long as we can rely on an open Internet. If the authorities were to have insisted that the search algorithms did not reference Orwell’s book in response to queries from their citizens about 1984 then we would indeed have entered Orwell’s dystopian world. That, sadly, is likely to be the ambition of authoritarian regimes that will try to use internet technology for social control.

Conclusions: lessons in situational awareness

In this chapter, we have been thinking about the first stage of SEES, the task of acquiring what I have termed situational awareness, knowing about the here and now. Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong. But something has attracted our attention and we need to know more. It may be because we have already thought about what the future may bring and had strategic notice of areas we needed to monitor. Or it may be that some unexpected observation or report we have received triggers us to focus our attention. There are lessons we can learn about how to improve our chances of seeing clearly what is going on when answering questions that begin with ‘who, what, where and when’.

We should in those circumstances:

Ask how far we have access to sufficient sources of information.

Understand the scope of the information that exists and what we need to know but do not.

Review how reliable the sources of information we do have are.

If time allows, collect additional information as a cross-check before reaching a conclusion.

Apply Bayesian inference, using new information to adjust our degree of belief about what is going on.

Be open and honest about the limitations of what we know, especially in public, and be conscious of the public reactions that may be triggered.

Be alive to the possibility that someone is deliberately trying to manipulate, mislead, deceive or defraud us.
