Dutch Government Expels Two Russians Using Diplomatic Cover

The Dutch Minister of the Interior, Kajsa Ollongren, sent a letter (in Dutch) to the House of Representatives to inform parliament about the disruption of a Russian espionage operation.

Two Russians who used diplomatic cover to commit espionage for the Russian civilian foreign intelligence service SVR have been expelled from the Netherlands. Both were accredited as diplomats at the Russian embassy in The Hague. The minister says the SVR intelligence officer built a “substantial” network of sources (i.e., he was a case officer) working in the Dutch high-tech sector. He sought information about AI, semiconductors and nanotechnology; knowledge that has both civilian and military applications. In some cases the sources were paid for their cooperation.

The Dutch civilian intelligence and security service AIVD disrupted the operation. On 9 December 2020, the Russian ambassador to the Netherlands was summoned by the Dutch Ministry of Foreign Affairs. The ambassador was informed that the two Russians have been designated Persona Non Grata (PNG), i.e., they are expelled from the Netherlands.

This case involves multiple companies and one educational institution, whose identities are not disclosed. The minister states that the espionage “has likely caused damage to the organizations where the sources are or were active, and thereby to the Dutch economy and national security.”

The minister states that the Immigration and Naturalization Service (IND) will take legal action against one source on the basis of immigration law.

The minister also reports that the Dutch government will explore possibilities to criminalize the act of cooperating with a foreign intelligence service. Currently, that act in and of itself is not a punishable offense. Legal possibilities do already exist regarding violations of confidentiality of official secrets and company secrets, however. For related developments at the EU level, see the Trade secrets page of the European Commission.

Finally, the minister points out that this case shows “that threats from foreign states against the Netherlands are real”, and that a broader follow-up will take place of the parliamentary letters “Countering foreign state threats” of 18 April 2019 and “Knowledge security in higher education and science” of 27 November 2020.

A few side notes:

For a brief look into one aspect of the working life of case officers (and their occasional supporting officers), see Physical Counter Surveillance – Dry Cleaning and Evading Capture (September 2019).

For (self-)protection, see New brochure on espionage from the Dutch General Intelligence and Security Service (AIVD) – unofficial English translation (May 2020).

For further (background) reading, check out the website of the U.S. National Counterintelligence and Security Center, part of the U.S. Office of the Director of National Intelligence (ODNI); it has a lot of good training and reading material.

For even further (background) reading, notably on the role of individuals and their vulnerability/susceptibility to being recruited as spies/sources, I recommend the Selected Reports page of the U.S. DOD Defense Human Resources Activity website. Also, search Google for “MICE” and “RASCLS”.

Also for (self-)protection, Dutch readers may want to inform themselves about the appendix of the parliamentary letter of 18 April 2019 that the minister refers to: Nederlandse aanpak tegengaan statelijke dreigingen (April 2019).

If you’re at an organization that needs insight into protection against insider threats, I recommend checking out Signpost Six. It was founded by @Elsine_van_Os, who formerly worked at the Dutch military intelligence and security service MIVD.

The remainder of this post is a translation of the main body of the minister’s letter on the disrupted espionage operation.

[… ]


As mentioned in the annual reports of the AIVD, the Netherlands is a target of Russian intelligence services that covertly collect information valuable to Russia, including economic and scientific information.

The AIVD recently ended the activities of a Russian intelligence officer of the civilian foreign intelligence service SVR. The Russian national, who was employed at the Russian embassy as an accredited diplomat, was involved in espionage on technology and science. He built a substantial network of sources, all of whom are or were employed in the Dutch high-tech sector. The intelligence officer was interested in information about, among other things, artificial intelligence, semiconductors, and nanotechnology. Much of this technology is useful for both civilian and military applications.

The Russian intelligence officer approached individuals who have access to sensitive information within the high-tech sector, and in some cases paid them for it. A second Russian SVR officer, also accredited as a diplomat, fulfilled a supporting role.

Companies and educational institution have been informed

The high-tech sector in the Netherlands holds high-quality and unique knowledge. The espionage has likely caused damage to the organizations where the sources are or were active, and thereby to the Dutch economy and national security.

The sources of the Russian intelligence officer have been contacted by the AIVD to disrupt their activities. In several cases, the AIVD has submitted an official notification to the companies and educational institution involved so that they can take measures. In one case, an official notification was sent to the Immigration and Naturalization Service (IND). The IND will take legal measures against one source. The AIVD is investigating whether further official notifications can be sent to the IND.

No comments can be made about the identities of the sources or about which companies and educational institution are involved.

Persona Non Grata

As a result of the identified espionage activities, the Russian ambassador was summoned by the Dutch Ministry of Foreign Affairs on 9 December 2020, and was informed that the intelligence officer, as well as the supporting SVR worker, have been designated Persona Non Grata (PNG).

Criminalization of espionage

Due to the increased vulnerability of the Netherlands to espionage, the Dutch government has examined the added value of criminalizing espionage. Criminal law already provides legal possibilities to act against crimes involving violations of confidentiality of official secrets and company secrets. However, espionage in the sense of individuals covertly cooperating with a foreign intelligence service is currently not a punishable offense. The government has established that additional criminalization is desirable, will examine how that can be pursued, and will then start a legislative process.


This case shows, once again, that threats from foreign states against the Netherlands are real. We will further inform you about the broader approach in follow-up to the Parliamentary Letters “Countering foreign state threats” of 18 April 2019 and “Knowledge security in higher education and science” of 27 November 2020.


The AIVD is committed to raising awareness about espionage risks and, where possible, explains to companies, governments and educational institutions how they can prevent this, both now and in the future.

CRYPTOME REVEALS DHS Fusion Center China Problems

Screenshot of texasarmytrail.com

1 September 2020

DHS Fusion Center China Problems

1. Over 100 DHS Fusion Center sites were involved in the recent #BlueLeaks database breach. All of the sites were ultimately hosted on a computer server in a Data Foundry data center in Houston. Data Foundry, also called GigaNews, is a central Texas-based operator of several data centers.

2. Despite its small size, Data Foundry appears to be one of the larger distributors of child pornography in the world via the Usenet groups it hosts. This claim was already made in some detail back in 2014 by a former engineer, as well as in 2018 by the OAG of New Mexico.

3. Data Foundry at one time served as one of the world’s largest bulk intel metadata collection points for the NSA program “BOUNDLESS INFORMANT” and was given the codename WAXTITAN. This was revealed as part of the Snowden leaks in 2013.

4. Data Foundry has an unusual history with mainland China. The Yokubaitis family, which runs the company (along with other related firms), have frequently attended Peking University. This school is probably the 2nd most prestigious in all of China (behind Tsinghua), and has developed most of the breakthroughs for China’s nuclear weapons program over the last three decades. During SXSW 2015 it was mentioned that their 2nd-largest customer base is in China. This is unusual, as no effective marketing seems to take place there, raising the question of how these customers are acquired. The sysadmin who first made claims against Data Foundry in 2014 alleged that their facilities would follow requests made from the data center in Hong Kong they colocate with, Powerline HK. Such requests could only come from the government of China, which raises serious questions regarding the company’s independence and what could and could not be accessed.

5. We find the story of Nick Caputo highly credible, as all of the technical information can be verified, even years later. Other messages throughout the years on Usenet, Reddit, and elsewhere seem to corroborate the general story / character of the firm as well. Additionally, the unregistered FBI office address he provides in his original message (12515 Research Blvd) actually turns up dozens of times in the #BlueLeaks files for FBI agents. We are unsure if these are police impersonators or simply a unit that is operating out of scope and without authority (more likely the latter). We have reached out to law enforcement officials in Australia and Britain in the meanwhile out of an abundance of caution.

dan.ehrlich@12security.com

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 6


Lesson 4: Strategic notice

We do not have to be so surprised by surprise

Early in the blustery spring morning of 14 April 2010 an Icelandic volcano with a near-unpronounceable name (Eyjafjallajökull) exploded, throwing a cloud of fine ash high into the sky. The debris was quickly swept south-east by the regular jet stream of wind across the Atlantic until the skies above Northern Europe were filled with ash. Deep under the Icelandic ice-sheet, melt water from the heat of the magma had flowed into the site of the eruption, rapidly cooling the lava and causing the debris to be rich in corrosive glass particles. These are known to pose a potential hazard if ingested by aircraft jet engines. The next day, alarmed air traffic authorities decided they had to play it safe, since no one had prescribed in advance specific particle sizes and levels below which engines were considered not to be at risk and thus safe to fly. They closed airspace over Europe and grounded all civil aviation in the biggest shut-down since the Second World War.


Yet there had been warning that such an extreme event might one day occur, an example of strategic notice that is the fourth component of the SEES model of intelligence analysis. The government authorities in Iceland had been asking airlines for years to determine the density and type of ash that is safe for jet engines to fly through. Had the tests been carried out, the 2010 disruption would have been much less. There would still have been no immediate forewarning of the volcano about to explode, but sensible preparations would have been in place for when it did.

The lesson is that we need sufficiently early notice of future developments that might pose potential danger to us (or might offer us opportunities) to be prepared to take precautionary measures just in case. Strategic notice enables us to anticipate. Governments had strategic notice of possible coronavirus pandemics – the COVID-19 outbreak should not have caught us unprepared.

There is an important difference between having strategic warning of the existence of a future risk, and a prediction of when such a risk might materialize. Scientists cannot tell us exactly when a specific volcano will erupt (or when a viral disease will mutate from animals to humans). But there can be warning signs. Based on historical data, some sense of the scale and frequency of eruptions can be given. In the Icelandic case it was to be expected that some such volcanic activity would occur within the next fifty years. But before the volcanic events of April 2010, aviation authorities and aircraft engine manufacturers had not recognized that they needed to prepare. Instead they had implicitly accepted the precautionary principle[2] that if any measurable volcanic ash appeared in the atmosphere they would issue an advisory notice that all planes should be grounded, even at the cost of considerable disruption to passengers.

The airlines had known of the baseline precaution that would be taken of grounding planes in the event of volcanic ash appearing in the atmosphere, but they had not thought in advance how such a major global dislocation would be handled. After the April 2010 closure of European airspace, the effects rapidly cascaded around the world. Planes were diverted on safety grounds to countries for which the passengers did not have visas, and could not leave the airport to get to hotels. Coming at the end of the Easter holidays, school parties were unable to return for the start of the term. Nobody had considered if stranded passengers should have priority over new passengers for booking on flights when they restarted. For millions of people the result was misery, camping in airports until finally aviation was allowed to resume just over a week later. At the same time, test flights were rapidly organized by the aero engine manufacturers. These provided data on which calibrated judgements could be made of when it is safe enough to fly through ash clouds. By the end of a week of chaos and confusion, 10 million passengers had been affected overall, with the aviation industry facing losses of over £1bn.

The same thing happened in the 1982 Falklands crisis. The British government was given strategic warning by the JIC that Argentine patience might run out, in which case the Junta could take matters into its own hands. That warning could have prompted the stationing of naval forces as a credible deterrent, while a permanent solution could have been created by extending the runway to handle long-distance transports and the stationing of fast jets (as has now been done). That would have been expensive. But the expense pales in comparison with the loss of over 1000 lives, not to mention an estimated price tag of over £3bn that was involved in recovering the Islands for the Crown once lost.

‘I just say it was the worst, I think, moment of my life’ was how Margaret Thatcher later described the surprise loss of the Falklands: yet she and her senior Cabinet members and the officials supporting them had not understood beforehand the dangers they were running. It was painful for me as a member of the Ministry of Defence to have to recognize later that we had all been so preoccupied by other problems, including managing defence expenditure, that we failed to pay sufficient attention to the vulnerability of the Falklands. We implicitly assumed (magical thinking) that the need would never arise. It was a salutary lesson learned early in my career and one that stayed with me.

Living with surprise

The fourth stage of the SEES method involves acquiring strategic notice of the important longer-term developments that could affect you. If you do not have these at the back of your mind, the chances are that you will not have prepared either mentally or physically for the possibility of their occurring. Nor will you be sufficiently alert to spot their first signs. We will experience what is known to intelligence officers as strategic surprise.

The distinction between having strategic and tactical surprise is an old one in military history. It is often hard for a general to conceal the strategy being followed. But when it comes to choosing tactically when and where to attack, a commander can obtain the advantages of surprise by, for example, picking a point in the enemy’s defences where at least initially he will have the advantage. In 1944 the Germans knew perfectly well that the Allies were preparing a major landing of US, British and Canadian troops on the continent of Europe. That intent was no surprise, since the strategy of opening a second front in Europe was well known. But the tactics that would be adopted, the date of the invasion, and exactly where and how the landings would be mounted were secrets carefully kept from the German High Command. Come 6 June 1944, the Allies landed in Normandy and enjoyed the immediate advantage of tactical surprise.

A tragic example of tactical surprise was the events of 7 July 2005, when terrorist suicide bombers with rucksack bombs struck at the London Underground network and surface transport during the morning rush hour. Fifty-two innocent passengers lost their lives and very many more suffered horrific injuries. The attacks came without intelligence warning and the shock across London and round the world was considerable. But they were not a strategic surprise to the authorities.

The likelihood of terrorist attacks in London in 2005 had been assessed by the Joint Terrorism Analysis Centre based in MI5 headquarters. Intelligence had indicated that supporters of Al Qaida living inside the UK had both the capability and intent to mount some form of domestic terror attack. The possibility that the London Underground system would be an attractive target to terrorist suicide bombers had been anticipated, and plans drawn up and staff trained just in case. A full-scale live rehearsal of the response to a terrorist attack on the Underground, including emergency services and hospitals that would be receiving casualties, had been held in September 2003. Just as well, since many practical lessons were learned that helped the response two years later.[3] The same can be said of the 2016 pandemic exercise in relation to the COVID-19 outbreak in 2020. Exercises can never fully capture the real thing, but if events come as a strategic surprise the damage done will be far greater.

The same is true for all of us. We have, for example, plenty of strategic notice that our possessions are at risk of theft, which is why we should think about insurance. If we do get our mobile phone stolen we will certainly feel it as an unwelcome tactical surprise, but if insured we can console ourselves that however inconvenient it is not as bad as if it had been a strategic surprise as well.

Forestalling surprise

Intelligence communities have the duty of trying to forestall unwelcome surprises by spotting international developments that would spell real trouble.[4] In 1973 Israeli intelligence was carefully monitoring Egypt for evidence that President Sadat might be preparing to invade. Signs of mobilization were nevertheless discounted by the Israelis. That was because the Israeli Director of Military Intelligence, Major General Eli Zeira, had convinced himself that he would have strategic notice of a coming war. He reasoned that without major new arms imports from Russia, and a military alliance with Syria, Egypt would be bound to lose. Since no such imports or alliance with Syria had been detected, he was certain war was not coming. What he failed to spot was that President Sadat of Egypt also knew that, and had no illusions about defeating Israel militarily. His plan was to launch a surprise attack to seize the Sinai Peninsula, call for a ceasefire and then negotiate from strength in peace talks. It was a crucial report from Israel’s top spy inside Egypt (the highly placed agent Ashraf Marwan was actually the millionaire son-in-law of Gamal Abdel Nasser, Egypt’s second President), arriving literally on the eve of Yom Kippur, that just gave Israel enough time to mobilize to resist the attack when it came. The near disaster for Israel provides a warning of the dangerous double power of magical thinking: not only imagining that the world will somehow of its own accord fit in with your desires, but also interpreting all evidence to the contrary so as to confirm your belief that all is well.

An important conclusion is that events which take us unawares will force us to respond in a hurry. We did not expect it to happen today, but it has happened. If we have not prepared for the eventuality we will be caught out, red-faced, improvising rapidly to recover the situation. That includes ‘slow burn’ issues that creep up on us (like COVID-19) until we suddenly realize with horror that some tipping point has been reached and we are forced to respond. Climate change due to global warming is one such ‘slow burn’ issue. It has been evident to scientists for decades and has now reached a tipping point with the melting of polar ice and weather extremes. It is only very recently, however, that this worsening situation has become a matter for general public concern.

The creation of ISIS in Syria and Iraq is another example, where intelligence officers slowly began to recognize that something significant and dangerous was afoot as the terrorists began to occupy and control areas of the two countries. The failure of strategic notice was not to see how a combination of jihadist participation in the civil war in Syria together with the strength of the remnants of the Sunni insurgency in Iraq could create a power vacuum. The early signals of major risks may be weak, and hard to perceive against the generally noisy background of reality. For example, we have only recently recognized the re-emergence of state threats through digital subversion and propaganda, and the possibility of highly damaging cyberattacks against critical infrastructure such as power and telecommunications.

The product of the likelihood of something happening (a probability) and a measure of its impact if it does arise gives us a measure of what is called the expected value of the event. We are all familiar with the principle from assessing the expected value of a bet: the combination of the chances of winning and the payoff (winnings minus our stake) if we do. At odds of 100 to 1 the chances are low but the payoff correspondingly large, and vice versa with an odds-on favourite. We also know that the expected value of a series of separate bets can be calculated by simply adding the individual net values together. Wins are sadly usually quickly cancelled out by losses. The effect is even more evident with games like roulette, in which, with a fair wheel, there is no skill involved in placing a bet. Over a period, the bookmakers and casino operators will always make their turn, which is why they continue to exist (but punters will still come back for more because of the non-monetary value to them of the thrill of the bet).
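The arithmetic above can be sketched in a few lines of Python. The probabilities and payoffs here are illustrative numbers of my own, not figures from the book; the point is only that a bet's expected value combines chance and payoff, and that expected values of independent bets add.

```python
def expected_value(p_win: float, winnings: float, stake: float) -> float:
    """Expected net return of a single bet: win probability times winnings,
    minus lose probability times the stake lost."""
    return p_win * winnings - (1.0 - p_win) * stake

# A long shot at roughly 100-to-1: low chance, large payoff.
long_shot = expected_value(p_win=0.01, winnings=1000, stake=10)

# An odds-on favourite: high chance, small payoff.
favourite = expected_value(p_win=0.8, winnings=2.5, stake=10)

# Expected values of a series of separate bets simply add.
combined = long_shot + favourite

print(round(long_shot, 2), round(favourite, 2), round(combined, 2))
```

With these made-up numbers the long shot and the favourite end up with almost identical expected values, which is exactly why intuition about "likely" versus "unlikely" events is a poor guide on its own: the impact term matters as much as the probability.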

Very unlikely events with big consequences (events that in our experience we do not expect to happen to us, or only rarely) do nevertheless sometimes pop up and surprise us.[5] Sometimes they are referred to as ‘long-tailed’ risks because of the way that they lie at the extreme end, or tail, of the distribution of risk likelihood rather than in the ‘expected’ middle range. An example might be the 2007 global financial crash.[6] Our intuition can also mislead us into thinking that the outcome of some event that concerns us is as likely to be above the average (median) as below, since so many large-scale natural processes are governed by the so-called ‘normal’ bell-shaped symmetrical probability distribution. But there are important exceptions in which there is a sizeable long tail of bad outcomes.

That idea of expected value can be elaborated into what is known to engineers as the risk equation, to provide a measure of overall loss or gain. We learned the value of this approach when I was the UK Security and Intelligence Coordinator in the Cabinet Office constructing the UK counter-terrorism strategy, CONTEST, after 9/11.[7] Our risk equation separated out the factors that contribute to the overall danger to the public, so that actions could be designed to reduce each of them, as shown below.
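The equation itself appears as a figure in the book and is not reproduced in this excerpt. A sketch consistent with the three factors discussed in the following paragraph (likelihood of an attempted attack, society's vulnerability, and the cost to the public) would be, with the factor names being my inference rather than the book's own notation:

```latex
\text{Risk} \;\approx\; P(\text{attack attempted}) \times \text{Vulnerability} \times \text{Impact}
```

Each factor is a separate lever: reduce any one of them and the overall expected harm falls.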

We can thus reduce the probability of terrorists attempting to attack us by detecting and pursuing terrorist networks and by preventing radicalization to reduce the flow of new recruits. We reduce society’s vulnerability to particular types of attack by more protective security measures, such as better airport screening. We reduce the cost to the public of an attack, if the terrorists get through our defences, by preparing the emergency services to face the initial impact when an attack takes place and by investing in infrastructure that can be repaired quickly. This logic is a major reason why CONTEST remains the UK’s counter-terrorism strategy, despite being on its fifth Prime Minister and ninth Home Secretary. Military planners would recognize this lesson as applying ‘layered defence’,[8] just as the thief after an expensive bicycle might have first to climb over a high wall into the garden, dodge round the burglar alarms, break into the shed, then undo the bicycle lock. The chance of the thief succeeding undetected goes down with each layer of security that is added (and so does your overall risk).
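The layered-defence point is simple probability: if each layer independently stops the intruder with some chance, the probability of defeating every layer is the product of the per-layer success probabilities, so each added layer multiplies the residual risk down. The per-layer figures below are invented for illustration only.

```python
def p_breach(per_layer_success: list[float]) -> float:
    """Probability an intruder defeats all layers, assuming the layers
    act independently: the product of the per-layer success chances."""
    p = 1.0
    for success in per_layer_success:
        p *= success
    return p

# Wall, burglar alarm, shed door, bicycle lock: the thief's (assumed)
# chance of getting past each layer undetected.
layers = [0.5, 0.4, 0.6, 0.3]

print(p_breach(layers))      # all four layers in place
print(p_breach(layers[:2]))  # only the first two layers: higher residual risk
```

The independence assumption is a simplification (a skilled thief who beats the wall may also be better at the lock), but it captures why adding even a weak extra layer still lowers the overall chance of a successful breach.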

The search for strategic notice of long-term developments is sometimes referred to as horizon scanning, as in looking for the tops of the masts of the enemy ships just appearing. Global banks, consultancies and corporations such as Shell have acquired a reputation for horizon scanning to help their strategy and planning departments.[9] But we should remember that some important developments are like ships that have not yet put to sea – they may never come to threaten us if pre-emptive measures are taken early enough.

In the UK the chief scientists of government departments have assumed a leading role in providing strategic notice, such as the UK Ministry of Defence 2016 Global Strategic Trends report looking ahead to 2045.[10] Another example is the 2016 report by the UK Chief Scientific Adviser on the potential revolution for government agencies, banks, insurance companies and other private sector organizations of blockchain technology.[11] The headline message is clear: watch out, a new disruptive technology is emerging that has the potential to transform the working of any organization that relies on keeping records. In the natural world we do have strategic notice of many serious issues that should make governments and companies sit up. One of the top risks flagged up by the UK government has long been a virus pandemic, alongside terrorism and cyberattacks.

It is worth bearing in mind when new technology appears which might pose risks to humans or the environment that most scientists prefer to hold back from offering advice until there is solid evidence on which to reach judgements. That understandable caution leads to the special category of ‘epistemic’ risk arising from a lack of knowledge or of agreed understanding, because experts are reluctant to commit as to whether the harm will ever crystallize or because they disagree among themselves as to its significance.

It is hard to predict when some theoretical scientific advance will result in brand-new technology that will impact heavily on our lives. Of the 2.5 million new scientific papers published each year,[12] very few represent breakthroughs in thinking. Even when a theoretical breakthrough opens the possibility of a revolution in technology, it may be years in gestation. Quantum computing provides a striking example where we have strategic notice of its potential, once such a machine is built, to factorize the very large numbers on which all the commercial encryption systems rely for secure internet communication and online payments. At the time of writing, however, no workable quantum computer at scale has been built that can operate to fulfil the promise of the theory: it could be decades ahead. But we know that the impact when and if it happens will be significant. Wise governments will therefore be investing (as the US and the UK are[13]) in developing new types of cryptography that will be more resistant to quantum computers when they arrive; and no doubt asking their intelligence agencies to report any signs that somewhere else the practical problems of implementation look like being cracked.

At a personal level, where we find some of our risks easily visualized (such as coming back to the car park to find the car gone) and the costs are low, we can quickly learn to manage the risk (this causes us to get into the habit of checking whether we locked the car). Other personal risks that are more abstract, although more dangerous, may be unconsciously filed as the kind that happen to other people (such as returning home to find the fire brigade outside as a result of a short circuit in old wiring). We look but do not see the danger, just as in everyday life we can hear but not listen.

The term ‘risk’ conventionally carries the meaning of something bad that could happen. But as the economist F. H. Knight concluded many years ago, without risk there is no profit.[14] A further lesson in using strategic notice is how it can allow advance notice of long-term opportunities that might present themselves. Perhaps the chosen route for the future high-speed rail link will go through the middle of the village (threat), or a station on the line will be built in a nearby town (opportunity).

Strategic notice has even become a fashionable theory governing the marketing of products in the internet age. Rather than the more traditional clustering of products around common types of goods and services, it is increasingly being found that it is the quirky niche products or services, which might appear at first sight unlikely to have mass appeal, that can go viral on social media and quickly generate large returns. Entrepreneurs expect most of such outlier efforts to fail, but those that succeed more than make up for them in profits earned. Who would have thought a few years ago that sportswear such as brightly coloured running shoes and jogging bottoms, then confined to the gym, would for many become staple outdoor wear?

Providing ourselves with strategic notice

Bangladesh Climate Geo-engineering Sparks Protests

April 4, 2033 – Dhaka

Bangladesh became the first country to try to slow climate change by releasing a metric ton of sulphate aerosol into the upper atmosphere from a modified Boeing 797 airplane in the first of six planned flights to reduce the warming effects of solar radiation. The unprecedented move provoked diplomatic warnings by 25 countries and violent public protests at several Bangladeshi Embassies, but government officials in Dhaka claimed its action was ‘critical to self-defense’ after a spate of devastating hurricanes, despite scientists’ warnings of major unintended consequences, such as intensified acid rain and depletion of the ozone layer.

Note the date on that news report. That surprising glimpse of the future in 2033 was included in the 2017 report on Global Trends published by the US National Intelligence Council.[15] The intelligence officers drafting the report included such headlines to bring to life their strategic assessments of possible developments out to 2030 and beyond, and the disruptive game-changers to be expected between now and then.

The then chair of the US National Intelligence Council, Professor Greg Treverton, explained in his foreword to the 2017 report that he examined global trends to identify their impact on power, governance and cooperation. In the near future, absent very different personal, political and business choices, he expects the current trajectory of trends and power dynamics to play out among rising international tensions. But what he expects to happen twenty years or more thereafter is explored through three stories or scenarios. The NIC report discusses the lessons these scenarios provide regarding potential opportunities and trade-offs in creating the future, rather than just responding to it.

It is possible to do long-term forecasting, starting with now and trying to work forwards into the far future, on the basis of mega-trends in technology, national wealth, population and so on. That quickly runs into the problem that there are too many possible intersecting options that humankind might or might not take to allow an overall estimate of where we will end up. That problem is getting worse with the interdependencies that globalization has brought. One of the fascinating aspects of the US NIC report quoted above is the use of ‘backcasting’ as well as forecasting, working backwards from a number of postulated long-term scenarios to identify the factors that might influence which of those futures we might end up near. It is important in all such work to challenge conventional wisdom (an approach known as red teaming). When preparing the report the US NIC team visited thirty-five countries, including the UK, and canvassed ideas from academics and former practitioners such as myself, as well as serving government and military planners.

Using risk management in practice

A number of things have to go right in order for strategic warning to be translated into effective action at both national and international level, inside the private sector and in the home. Forecasts of risk need to be communicated effectively to those who can make use of the information. They in turn must be able to mobilize some kind of response to reduce, mitigate or transfer the risk. And there must be the capacity to learn lessons from experience of this process.

Advised by the assessments of the Joint Intelligence Committee, and by the National Risk Assessment16 from the Civil Contingencies Secretariat, the UK’s National Security Council, chaired by the Prime Minister, promulgates the strategic threat priorities for government.17 The 2015 Risk Assessment identified a major human pandemic as one of the most significant national security risks (in terms of likelihood and impact) facing the UK. A test of plans in 2016 exposed major gaps in capability. By the time COVID-19 struck in 2020 there was at least a national biological security strategy in place, although shortcomings still emerged along with shortages of essential protective equipment.

A comparable role must be played by the boards of companies, charitable organizations and government agencies in ensuring that major risks are identified and monitored and that plans for managing the risks are in place. A useful lesson I have learned is to divide the risks into three groups. The first group consists of the risks that are outside the influence of the business, such as a major disease outbreak. These are known as the exogenous risks. The second group of risks are those inherent in the nature of the business: banks suffer fraud, retailers suffer pilfering or ‘shrinkage’ of stock, trucking companies have accidents and so on. The final group of risks are those that the company took on itself, such as investment in a major IT upgrade on which the viability of the whole enterprise depends.

There is nothing most companies can do to eliminate the risks in the first group. But they can conduct periodic impact assessments, and exercise contingency plans. Even MI6 got caught out in September 2000 when a PIRA terrorist fired a small rocket at their Vauxhall Cross headquarters and the police then declared the building a crime scene and refused to allow staff back in until their investigations were complete, a more than trivial problem for an organization that has to operate 24/7.

For the second group of risks, those inherent in the nature of the business, discussion should be around the systems of control – for example, for cash flow, and whether there is sufficient pooling or transfer of risk through insurance or commercial alliance or partnership.

For the third category, the questions a company board must ask itself are much more pointed. Since the future of the organization depends on managing such changes successfully, directors need to ensure they personally have visibility of progress and that they allocate enough of their time to ensuring that key change managers have access to expertise, the delegated authority and the finance needed for success.

We can all follow such a three-part discussion of the risks we face, even at the level of the family. Do we have sufficient medical insurance from work to cover unforeseen traffic and other accidents or do we need extra cover? Is there adequate holiday insurance? Who has to meet the upfront cost of cancellations due to external disruption (such as COVID-19 in 2020 or the shutdown in 2018 of the busy Gatwick airport in the UK due to illegal drone flying over the runway)? Who has spare keys to the car or house in case of loss?

Conclusions: strategic notice

Having strategic notice of possible futures means we will not be so surprised by surprise. In this chapter we have looked at the perils of surprise, at not having strategic notice, and at what it means to say something is likely to happen. We examined the nature of surprise itself, sudden crises and slow-burn crises, how to think about the likelihood of events, and strategies for managing their risk. We looked at some of the ways in which long-term risks can be spotted and at the importance of communicating the results to achieve an alerted, but not alarmed, public. To learn to live with the expectation of surprise we should:

Search for strategic notice of relevant developments in technology, international and economic affairs, the environment and potential threats.

Think in terms of the expected value of events and developments (probability times impact), not just their likelihood.

Think about a register of your major risks and use a risk equation to identify and link the factors that contribute to the value of overall outcomes.

Use strategic notice to spot opportunities as well as identify dangers.

Accept that you will usually suffer tactical surprise even when you have avoided strategic surprise.

Beware magical thinking, believing that one event occurs, or does not occur, as a result of another without plausible causation and thus wishing away what strategic notice is telling you.

Group risks into those you can do nothing about (but might want to prepare for and exercise contingency plans); those that go with the territory (where you can take sensible precautions); and those risks that accompany your major decisions (since your future depends on them you need to ensure they get the right priority and attention).
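Two of the points above carry a small amount of arithmetic: ranking risks by expected value (probability times impact) rather than by likelihood alone, and sorting them into the three groups. A minimal Python sketch, in which every risk name, probability and impact score is invented purely for illustration, shows how a simple risk register might apply the expected-value rule:

```python
# A toy risk register illustrating "expected value = probability x impact".
# All names, probabilities and impact scores here are invented examples.
risks = [
    {"name": "major disease outbreak", "group": "exogenous", "probability": 0.02, "impact": 100},
    {"name": "IT upgrade failure",     "group": "chosen",    "probability": 0.10, "impact": 50},
    {"name": "stock shrinkage",        "group": "inherent",  "probability": 0.80, "impact": 2},
]

# Rank by expected value, not by likelihood alone: a rare event with a
# large impact can outrank a frequent but minor one.
for r in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    print(r["group"], r["name"], round(r["probability"] * r["impact"], 2))
```

On these invented numbers the rare disease outbreak (expected value 2.0) ranks above the far more likely stock shrinkage (1.6), which is exactly the point of weighting likelihood by impact.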

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 4

Lesson 2: Explanation. Facts need explaining

Belgrade, Sunday, 23 July 1995. It was getting dark when our military aircraft landed on an airfield just outside the Serbian capital. We were met by armed Serbian security officers and quickly hustled into cars, watched over cautiously by a diplomat from the British Embassy. After what seemed an endless drive into the country we arrived at a government guest house. Our mission was to deliver in person an ultimatum to its occupant, General Ratko Mladić, the commander of the Bosnian Serb Army, the man who became infamous as the ‘butcher of Srebrenica’.1

Two days before, at a conference in London, the international community had united to condemn in the strongest terms the actions of Mladić’s Bosnian Serb Army in overrunning the towns of Srebrenica and Zepa. These towns had been placed under the protection of the United Nations as ‘safe areas’, where the Bosnian Muslim population could shelter from the civil war raging around them. Sadly, there had been insufficient understanding in the UN of the ethnic-cleansing activities of Mladić and his army, and thus no proper plans made about how the safe areas were to be defended from him. The UN peacekeeping force in Bosnia, UNPROFOR, was small and lightly armed, and in accordance with UN rules wore blue-painted helmets and rode in white-painted vehicles. They were not a fighting force that could combat the Bosnian Serb Army when it defied the UN. The full extent of the genocidal mass killings and use of rape as a weapon of war by troops under Mladić’s command in Bosnia was not then known, but enough evidence had emerged from Srebrenica to convince a reluctant international community at the London Conference, and NATO, that enough was enough. Any further interference with the remaining safe areas would be met by the use of overwhelming air power. The purpose of the mission to Belgrade was to confront Mladić with the reality of that threat and make him desist from further aggression.

Leading the delegation were the three airmen who controlled NATO air power over Bosnia: the Commander of the US Air Force in Europe along with his British and French opposite numbers. I was the Deputy Under Secretary of State for Policy in the Ministry of Defence in London and I was acting as adviser to Air Chief Marshal Sir William Wratten, Commander-in-Chief of the RAF’s Strike Command, a man with a formidable reputation as the architect of British bombing strategy during the first Gulf War. I was there with my opposite numbers from the Ministry of Defence in Paris and the Office of the Secretary of Defense in the Pentagon (my friend Joe Kruzel, who was tragically to die on duty later in Bosnia when his armoured vehicle rolled off a narrow pass). One of our tasks was to use the opportunity to try to understand the motivations of Mladić, the ‘why and what for’ of his actions, and whether he was likely to be deterred by the formal NATO warning from the air commanders of the US, UK and France.

When we arrived at the guest house we were escorted to the dining room and invited to sit at one side of a long table already set with traditional sweetmeats and glasses of plum brandy. Mladić entered in jovial mood with his army jacket around his shoulders hanging unbuttoned, accompanied by the head of his secret police. We had been forewarned that in soldier-to-soldier company he was likely to be bluffly affable, one of the reasons his men adored him. We had therefore resolved on the flight that we would all refuse to accept the hospitality he was bound to offer, an act that we guessed would cause offence and thus jolt Mladić into recognizing this was not a friendly visit. That ploy worked.

Mladić became visibly agitated, defiantly questioning whether the three air forces could pose any real threat to his army given the puny use of NATO air power up to that point. The air commanders had wisely chosen to wear their leather jackets and aviator sunglasses, and not their best dress uniforms. They menacingly described the massive air power they could command and delivered their blunt ultimatum: further attacks against the safe areas would not be tolerated, and substantial air actions would be mounted, ‘if necessary at unprecedented levels’. The atmosphere in the room grew frosty.

Explanations and motives

In the Introduction I described understanding and explanation as the second component of my SEES model of intelligence analysis. Intelligence analysts have to ask themselves why the people and institutions that they are observing are acting as they appear to be, and what their motives and objectives are. That is what we were trying to establish in that visit to Mladić. That’s as true for you in everyday life as it is for intelligence analysts. The task is bound to be all the harder if the analysis is being done at a distance by those brought up in a very different culture from that of the intelligence target. Motives are also easily misread if there is projective identification of some of your own traits in your adversary. This can become dangerous in international affairs when a leader accuses another of behaviour of which they themselves are guilty. That may be a cynical ploy. But it may also be a worrying form of self-deception. The leader may be unconsciously splitting off his own worst traits in order to identify them in the other, allowing the leader then to live in a state of denial believing that they do not actually possess those traits themselves. I’m sure you recognize a similar process in your office every day, too.

If it is the actions of a military leader that are under examination then there may be other objective factors explaining his acts, including the relative capabilities of his and opposing forces, the geography and terrain, and the weather as well as the history, ethnology and cultural anthropology of the society being studied. There are bound to be complexities to unravel where it may be the response to perceived policies and actions by other states, or even internal opposition forces within the society, that provide the best explanation along with an understanding of the history that has led to this point. From the outset of the Bosnian conflict, reports from the region spoke of excesses by the different factions fighting each other, a common feature of civil wars. Such evidence was available. But it was not clear at first what the deeper motivations were that would eventually drive the troops of Ratko Mladić to the horrifying extremes of genocide.

The choice of facts is not neutral, nor do facts speak for themselves

One possible reason we may wrongly understand why we see what we do is because we have implicitly, or explicitly, chosen to find a set of facts that supports an explanation we like and not another. We saw in the preceding chapter that even situational awareness cannot be divorced from the mindset of the analyst. The act of selecting what to focus on is unlikely to be a fully neutral one. This is a problem with which biographers and historians have always had to grapple. As the historian E. H. Carr wrote: ‘By and large, the historian will get the kind of facts he wants. History means interpretation.’2

Reality is what it is. We cannot go back in time to change what we have observed. More correctly, then, for our purposes reality is what it was when we made our observations. Reality will have changed in the time it has taken us to process what we saw. And we can only perceive some of what is out there. But we can make a mental map of reality on which we locate the facts that we think we know, and when we got to know them. We can place these facts in relation to each other and, via our memory, fill in some detail from our prior knowledge. Then we look at the whole map and hope we recognize the country outlined.

More often than not, facts can bear different meanings. Therein lies the danger of mistakes of interpretation. A shopkeeper facing a young man asking to buy a large meat cleaver has to ask herself, gang member or trainee butcher? Let me adapt an example that Bertrand Russell used in his philosophy lectures to illustrate the nature of truth.3 Imagine a chicken farm in which the chickens conduct an espionage operation on the farmer, perhaps by hacking into his computer. They discover that he is ordering large quantities of chicken food. The Joint Intelligence Committee of chickens meets. What do they conclude? Is it that the farmer has finally recognized that they deserve more food; or that they are being fattened up for the kill? Perhaps if the experience of the chickens has been of a happy outdoor life, then their past experience may lead them to be unable to conceive of the economics of chicken farming as seen by the farmer. On the other hand, chickens kept in their thousands in a large tin shed may well be all too ready to attribute the worst motives to the farmer. It is the same secret intelligence, the same fact, but with two opposite interpretations. That is true of most factual information.

Context is therefore needed to infer meaning. And meaning is a construct of the human mind. It is liable to reflect our emotionally driven hopes and fears as much as it represents an objective truth. Intelligence analysts like to characterize themselves as ‘objective’, and great care is taken, as we see in Chapter 5, to identify the many possible types of cognitive bias that might skew their thinking. In the end, however, ‘independent’, ‘neutral’ and ‘honest’ might be better words to describe the skilled analysts who must avoid being influenced by what they know their customers desperately hope to hear.4 The great skill of the defence counsel in a criminal trial is to weave an explanatory narrative around the otherwise damning evidence so that the jury comes to believe in the explanation offered of what happened and thus in the innocence of the accused. The observed capability to act cannot be read as a real intention to do so. The former is easier to assess, given good situational awareness; the latter is always hard to know since it involves being able to ascribe motives in order to explain what is going on. You may know from your employment contract the circumstances under which your boss may fire you, but that does not mean they (currently) have the intention to do so.

We know from countless psychological experiments that we can convince ourselves we are seeing patterns where none really exist. Especially if our minds are deeply focused somewhere else. So how can we arrive at the most objective interpretation of what our senses are telling us? Put to one side the difficulties we discussed in the last chapter of knowing which are sufficiently reliable pieces of information to justify our labelling them as facts. Even if we are sure of our facts we can still misunderstand their import.

Imagine yourself late at night, for example, sitting in an empty carriage on the last train from the airport. A burly unkempt man comes into the carriage and sits behind you and starts talking aggressively to himself, apparently threatening trouble. Those sense impressions are likely at first to trigger the thought that you do not want to be alone with this individual. The stranger is exhibiting behaviour associated with someone in mental distress. Concern arises that perhaps he will turn violent; you start to estimate the distance to the door to the next carriage and where the emergency alarm is located; then you notice the tiny earphone he is wearing. You relax. Your mental mapping has flipped over and now provides a non-threatening explanation of what you heard as the simpler phenomenon of a very cross and tired man off a long flight making a mobile call to the car hire company that failed to pick him up.

What made you for a moment apprehensive in such a situation was how you instinctively framed the question. Our brains interpret facts within an emotional frame of mind that adds colour, in this case that represented potential danger on the mental map we were making. That framing was initially almost certainly beyond conscious thought. It may have been triggered by memory of past situations or more likely simply imaginative representation of possibilities. If you had been watching a scare movie such as Halloween on your flight, then the effect would probably have been even more pronounced.

The term ‘framing’ is a useful metaphor, a rough descriptor of the mental process that unconsciously colours our inferential map of a situation. The marvellous brightly coloured paintings of Howard Hodgkin, for example, extend from the canvas on to and over the frame. The frame itself is an integral part of the picture and conditions our perception of what we see on the canvas itself. The framing effect comes from within, as our minds respond to what we are seeing, and indeed feeling and remembering. It is part of the job of TV news editors to choose the clips of film that will provide visual and aural clues to frame our understanding of the news. And of course, as movie directors know, the effect of images playing together with sound is all the more powerful when they work in combination to help us create in our minds the powerful mental representation of the scene the director wanted. The scrape of the violins as the murderer stalks up the staircase, knife in hand, builds tension; whereas the swelling orchestra releases that tension when the happy couple dance into the sunset at the end. Modern political advertising has learned all these tricks to play on us, making its message one we respond to more emotionally than rationally.

Up to this point in history only a human being could add meaning. Tomorrow, however, it could be a machine that uses an artificial intelligence programme to infer meaning from data, and then to add appropriate framing devices to an artificially generated output. Computerized sentiment analysis of social media postings already exists that can gauge a crowd’s propensity to violence. Careful use of artificial intelligence could shorten the time taken to alert analysts to a developing crisis.

However, there are dangers in letting machines infer an explanation of what is going on. Stock exchanges have already suffered the problems of ‘flash crashes’ when a random fall in a key stock price triggers, via an artificial intelligence programme, automated selling that is detected by other trading algorithms, which in turn start selling and set off a chain reaction of dumping shares. So automatic brakes have had to be constructed to prevent the market being driven down by such automation. A dangerous parallel would be if reliance were placed on such causal inference to trigger automatic changes in defence posture in response to detected cyberattacks. If both sides in an adversarial relationship have equipped themselves with such technology, then we might enter the world of Dr Strangelove. Even more so if there are more than two players in such an infernal game of automated inference. As AI increasingly seeps into our everyday lives, too, we must not allow ourselves to slip into allowing it to infer meaning on our behalf unchecked. Today the algorithm is selecting what online advertisements it thinks will best match our interests, irritating when wrong but not harmful. Which it would be if it were a credit rating algorithm secretly deciding that your browsing and online purchasing history indicate a risk appetite too high to allow you to hold a credit card or obtain affordable motorbike insurance.

Back to Bayesics: scientifically choosing an explanatory hypothesis

In the second stage of SEES the intelligence analyst is applying generally accepted scientific method to the task of explaining the everyday world. The outcome should be the explanatory hypothesis that best fits the observed data, with the least extraneous assumptions having to be made, and with alternative hypotheses having been tested against the data and found less satisfactory. The very best ideas in science, after sufficient replication in different experiments, are dignified with the appellation ‘theories’. In intelligence work, as in everyday life, we normally remain at the level of an explanatory hypothesis, conscious that at any moment new evidence may appear that will force a re-evaluation. An example in the last chapter was the case of the Cuban missile crisis, when the USAF photographs of installations and vehicles seen in Cuba, coupled with the secret intelligence from the MI6/CIA agent Col. Penkovsky, led analysts to warn President Kennedy that he was now faced with the Soviet Union introducing medium-range nuclear missile systems on to the island.

In the last chapter I described the method of Bayesian inference as the scientific way of adjusting our degree of belief in a hypothesis in the light of new evidence. You have evidence and use it to work backwards to assess what the most likely situation was that could have led to it being created. Let me provide a personal example to show that such Bayesian reasoning can be applied to everyday matters.

I remember Tony Blair when Prime Minister saying that he would have guessed that my background was in Defence. When I asked why, he replied because my shoes were shined. Most of Whitehall, he commented, had gone scruffy, but those used to working with the military had retained the habit of cleaning their shoes regularly.

We can use Bayesian reasoning to test that hypothesis, D, that I came from the MOD. Say 5 per cent of senior civil servants work in Defence, so the prior probability of D being true p(D) = 1/20 (5 per cent), which is the chance of picking a senior civil servant at random and finding he or she is from the MOD.

E is the evidence that my shoes are shined. Observation in the Ministry of Defence and around Whitehall might show that 7 out of 10 Defence senior civil servants wear shiny shoes but only 4 out of 10 in civil departments do so. So the overall probability of finding shiny shoes is the sum of that for Defence and that for civil departments

p(E) = (1/20) × (7/10) + (1 − 1/20) × (4/10) = 83/200

The posterior probability that I came from Defence is written as p(D|E) (where, remember, the vertical bar is to be read as ‘given’). From Bayes’s theorem, as described in Chapter 1:

p(D|E) = p(D) × [p(E|D)/p(E)] = (1/20) × [(7/10) × (200/83)] = 7/83 ≈ 1/12

Using Bayesian reasoning, the chances of the PM’s hypothesis being true are almost double what would be expected from a random guess.

Bayesian inference is a powerful way of establishing explanations, the second stage of the SEES method. The example can be set out in a 2 by 2 table (say, applied to a sample of 2000 civil servants) showing the classifications of shined shoes/not shined shoes and from Defence/not from Defence. I leave it to the reader to check that the posterior probability p(D|E) found above using Bayes’s theorem can be read from the first column of the table as 70/830 = approx. 1/12. Without seeing the shined shoes, the prior probability that I come from the MOD would be 100/2000, or 1/20.

                  E: shined shoes   Not shined shoes   Totals
D: from MOD                    70                 30      100
Not from MOD                  760               1140     1900
Totals                        830               1170     2000
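As a cross-check on the arithmetic, the whole calculation can be run in a few lines of Python using exact fractions; this sketch is my own illustration of the example in the text, not part of the original:

```python
from fractions import Fraction

# Prior: 5 per cent of senior civil servants work in Defence (D).
p_D = Fraction(1, 20)
# Likelihoods of the evidence E (shined shoes) under each hypothesis.
p_E_given_D = Fraction(7, 10)      # 7 out of 10 MOD officials
p_E_given_not_D = Fraction(4, 10)  # 4 out of 10 elsewhere in Whitehall

# Total probability of seeing shined shoes.
p_E = p_D * p_E_given_D + (1 - p_D) * p_E_given_not_D

# Bayes's theorem: posterior = prior x likelihood / evidence.
p_D_given_E = p_D * p_E_given_D / p_E

print(p_E)          # 83/200
print(p_D_given_E)  # 7/83, roughly 1/12
```

Using `Fraction` keeps every quantity exact, so the printed values match the 83/200 and 7/83 worked out by hand above.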


Now imagine a real ‘big data’ case with an array of hundreds or thousands of dimensions to cater for large numbers of different types of evidence. Bayes’s theorem still holds as the method of inferring posterior probabilities (although the maths gets complicated). That is how inferences are legitimately to be drawn from big data. The medical profession is already experiencing the benefits of this approach.5 The availability of personal data on internet use also provides many new opportunities to derive valuable results from data analysis. Cambridge Analytica boasted that it had 4000–5000 separate data points on each voter in the US 2016 Presidential election, guiding targeted political advertising, a disturbing application of Bayesian inference that we will return to in Chapter 10.
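To see why Bayes's theorem scales to many types of evidence, here is a sketch that extends the shoe example with a second, invented piece of evidence under a naive conditional-independence assumption. The "uses_military_jargon" likelihoods are mine for illustration, not figures from the text:

```python
from fractions import Fraction

prior = {"MOD": Fraction(1, 20), "other": Fraction(19, 20)}
# Likelihood of each piece of evidence under each hypothesis. The
# "uses_military_jargon" figures are invented for illustration.
likelihood = {
    "shined_shoes":         {"MOD": Fraction(7, 10), "other": Fraction(4, 10)},
    "uses_military_jargon": {"MOD": Fraction(6, 10), "other": Fraction(2, 10)},
}

def posterior(evidence):
    # Naive Bayes: multiply the prior by each likelihood, then normalise.
    scores = dict(prior)
    for e in evidence:
        for h in scores:
            scores[h] *= likelihood[e][h]
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

print(posterior(["shined_shoes"])["MOD"])                          # 7/83
print(posterior(["shined_shoes", "uses_military_jargon"])["MOD"])  # 21/97
```

Each extra piece of evidence is just one more multiplication before normalising, which is why the method extends, in principle, to the thousands of data points of a genuine big-data case.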

In all sustained thinking, assumptions do have to be made – the important thing is to be prepared in the light of new evidence challenging the assumptions to rethink the approach. A useful pragmatic test about making assumptions is to ask at any given stage of serious thinking, if I make this assumption, am I making myself worse off in terms of chances of success if it turns out not to be sensible than if I had not made it? Put another way, if my assumption turns out to be wrong then would I end up actually worse off in my search for the answer or am I just no better off?

For example, if you have a four-wheel combination bicycle lock and forget the number you could start at 0000, then 0001, 0002, all the way up, aiming for 9999, knowing that at some point the lock will open. But you might make the reasonable assumption that you would not have picked a number commencing with 0, so you start at 1000. Chances are that saves you work. But if your assumption is wrong you are no worse off.
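The lock example can be made concrete with a short sketch (my own illustration): start the search at the assumed point and wrap round past 9999 back to 0000, so a wrong assumption delays you for some combinations but never changes the worst case of 10,000 tries.

```python
# Brute-force a four-digit combination lock, starting from an assumed
# first guess and wrapping round past 9999 back to 0000.
def tries_to_open(secret, start=1000):
    for i in range(10000):
        if (start + i) % 10000 == secret:
            return i + 1  # number of combinations tried
    return None  # unreachable: every combination is covered

print(tries_to_open(1234))  # assumption pays off: 235 tries, not 1235
print(tries_to_open(42))    # assumption wrong: still opens after wrapping
```

Because the search wraps round, every combination is still reached; the assumption reorders the search rather than shrinking it, which is the sense in which a wrong assumption leaves you no worse off.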

As a general rule it is the explanatory hypothesis with the least evidence against it that is most likely to be the best one for us to adopt. The logic is that one strong contrary result can disconfirm a hypothesis. Apparently confirmatory evidence on the other hand can still be consistent with other hypotheses being true. In that way the analyst can avoid the trap (the inductive fallacy6) of thinking that being able to collect more and more evidence in favour of a proposition necessarily increases confidence in it. If we keep looking in Europe to discover the colour of swans, then we will certainly conclude by piling up as many reports as we like that they are all white. If eventually we seek evidence from Australia then the infamous ‘black swan’ appears and contradicts our generalization.7 When there are more reports in favour of hypothesis A than its inverse, hypothesis B, it is not always sensible to prefer A to B if we suspect that the amount of evidence pointing to A rather than B has been affected by how we set about searching for it.

A well-studied lesson of the dangers of misinterpreting complex situations is the ‘security dilemma’ when rearmament steps taken by one nation with purely defensive intent trigger fears in a potential adversary, leading it to take its own defensive steps that then appear to validate the original fears. The classic example is a decision by country A to modernize by building a new class of battleships. That induces anxiety in country B that an adverse military balance is thereby being built up against it. That leads to decisions on the part of country B also to build up its forces. That rearmament intention in turn is perceived as threatening by country A, not only justifying the original decision to have a new class of battleships but prompting the ordering of yet more ships. The worst fears of country B about the intentions of country A are thus confirmed. And an arms race starts. As the Harvard scholar Ben Buchanan has pointed out, such mutual misassessments of motivation are even more likely to be seen today in cyberspace since the difference between an intrusion for espionage purposes and for sabotage need only be a few lines of code.8 There is thus ample scope for interpreting detected intrusions as potentially hostile, on both sides. Acts justified as entirely defensive by one government are therefore liable to be labelled as offensive in motivation by another – and vice versa.

We can easily imagine an established couple, call them Alice and Bob, one of whom, Bob, is of a jealous nature. Alice one day catches Bob with her phone reading her texts. Alice feels this is an invasion of her privacy, and increases the privacy settings on her phone. Bob takes this as evidence that Alice must have something to hide and redoubles his efforts to read her text messages and social media posts, which in turn causes Alice to feel justified in her outrage at being mistrusted and spied on. She takes steps to be even more secretive, setting in train a cycle of mistrust likely, if not interrupted, to gravely damage their relationship.

Explaining your conclusions

Margaret Thatcher was grateful for the weekly updates she received from the JIC. She always wanted to be warned when previous assessments had changed. But she complained that the language the JIC employed was too often ‘nuanced’. ‘It would be helpful’, she explained, ‘if key judgments in the assessments could be highlighted by placing them in eye-catching sentences couched in plainly expressed language.’9 In the case of the Falklands that I mentioned in Chapter 1, the JIC had been guilty of such nuance in their July 1981 assessment. They had explained that they judged that the Argentine government would prefer to achieve its objective (transfer of sovereignty) by peaceful means. Thereby the JIC led readers to infer that if Argentina believed the UK was negotiating in good faith on the future of the Islands, then it would follow a peaceful policy, adding that if Argentina saw no hope of a peaceful transfer of sovereignty then a full-scale invasion of FI could not be discounted. Those in London privy to the Falklands negotiations knew the UK wanted a peaceful solution too. Objectively, nevertheless, the current diplomatic efforts seemed unlikely to lead to a mutually acceptable solution. But for the JIC to say that would look like it was straying into political criticism of ministerial policy and away from its brief of assessing the intelligence. There was therefore no trigger for reconsideration of the controversial cuts to the Royal Navy announced the year before, including the plan to scrap the Falklands-based ice patrol ship HMS Endurance. Inadvertently, and without consciously realizing they had done so, the UK had taken steps that would have reinforced in the minds of the Junta the thought that the UK did not see the Islands as a vital strategic interest worth fighting for. The Junta might reasonably have concluded that if Argentina took over the Islands by force the worst it would face would be strong diplomatic protest.

Explaining something that is not self-evident is a process that reduces a complex problem to simpler elements. When analysts write an intelligence assessment they have to judge which propositions they can rely on as known to their readers and thus do not need explaining or further justification. That Al Qa’ida under Bin Laden was responsible for the attacks on 9/11 is now such a building block. That the Russian military intelligence directorate, the GRU, was responsible for the attempted murder of the Skripals in Salisbury in 2018 is likewise a building block for discussions of Russian behaviour. That Saddam Hussein in Iraq was still pursuing an unlawful biological warfare programme in 2002 was treated as a building block – wrongly, and therein lies the danger. That was a proposition that had once been true but (unbeknown to the analysts) was no longer. The mental maps being used by the analysts to interpret the reports being received were out of date and were no longer an adequate guide to reality. As the philosopher Richard Rorty has written: ‘We do not have any way to establish the truth of a belief or the rightness of an action except by reference to the justifications we offer for thinking what we think or doing what we do.’10

Here, however, lies another lesson in trying to explain very complex situations in terms of simpler propositions.11 The temptation is to cut straight through complex arguments by presenting them in instantly recognizable terms that the reader or listener will respond to at an emotional level. We do this when we pigeonhole a colleague with a label like ‘difficult’ or ‘easy to work with’. We all know what we are meant to infer when a politician makes reference in a television interview or debate to the Dunkirk spirit, the appeasement of fascism in the 1930s, Pearl Harbor and the failure to anticipate surprise attacks, or Suez and the overestimation of British power in the 1956 occupation of the Egyptian canal zone. ‘Remember the 2003 invasion of Iraq’ is now a similarly instantly recognizable meme for the alleged dangers of getting too close to the United States. Such crude narrative devices serve as a shorthand for a much more complex reality. They are liable to mislead more than enlighten. History does not repeat itself, even as tragedy.

The lesson in all of this is that an accurate explanation of what you see is crucial.

Testing explanations and choosing hypotheses

How do we know when we have arrived at a sufficiently convincing explanation? The US and British criminal justice systems rest on the testing in court of alternative explanations of the facts presented respectively by counsel for the prosecution and for the defence in an adversarial process. For the intelligence analyst the unconscious temptation will be to try too hard to explain how the known evidence fits their favoured explanation, and why contrary evidence should not be included in the report.

Where there is a choice of explanations apply Occam’s razor (named after the fourteenth-century Franciscan friar William of Occam) and favour the explanation that does not rely on complex, improbable or numerous assumptions, all of which have to be satisfied for the hypothesis to stand up. By adding ever more baroque assumptions any set of facts can be made to fit a favoured theory. This is the territory where conspiracies lurk. In the words of the old medical training adage: when you hear rapid hoof-beats, think first of galloping horses, not zebras escaping from a zoo.12

Relative likelihood

It is important when engaged in serious thinking about what is going on to have a sense of the relative likelihood of alternative hypotheses being true. We might say, for example, after examining the evidence that it is much more likely that the culprit behind a hacking attack is a criminal group rather than a hostile state intelligence agency. Probability is the language in which likelihoods are expressed. For example, suppose a six-sided die is being used in a gambling game. If I have a suspicion that the die is loaded to give more sixes, I can test the hypothesis that the die is fair by throwing the die many times. I know from first principles that an unbiased die tossed properly will fall randomly on any one of its six faces with a probability of 1/6. The result of each toss of the die should produce a random result independent of the previous toss. Thus I must expect some clustering of results by chance, with perhaps three or even four sixes being tossed in a row (the probability of four sixes in a row is small – 1/6 × 1/6 × 1/6 × 1/6 ≈ 0.0008, less than 1 in a thousand – but it is not zero). I will therefore not be too surprised to find a run of sixes. But, evidently, if I throw the die 100 times and I return 50 sixes, then it is a reasonable conclusion that the die is biased. The more tosses of that particular die the more stable the proportion of sixes will be. Throw it 1,000 times, 10,000 times, and, if the result is consistent, our conclusion becomes more likely. A rational degree of belief in the hypothesis that the die is not fair comes from analysis of the data, seeing the difference between what results would be associated with the hypothesis (a fair die) and the alternative hypothesis (a die biased to show sixes).
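The arithmetic in the dice example can be checked in a few lines of Python. This is only an illustrative sketch using the standard library; the numbers follow the text:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Four sixes in a row with a fair die, as in the text.
p_four_sixes = (1/6) ** 4          # about 0.0008, less than 1 in a thousand

# If the die were fair, how likely are 50 or more sixes in 100 throws?
p_tail = sum(binom_pmf(k, 100, 1/6) for k in range(50, 101))
print(p_four_sixes, p_tail)        # the tail probability is vanishingly small
```

Fifty sixes is roughly nine standard deviations above the sixteen or seventeen a fair die would average over 100 throws, which is why the conclusion of bias is so secure.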

The key question to ask in that case is: if the die were fair, how likely is it that we would have seen 50 sixes in 100 throws? That is the approach of Bayesian inference we saw earlier in the chapter. The greater the divergence the more it is rational to believe that the evidence points to it not being a fair die. We have conducted what intelligence officers call an analysis of competing hypotheses (ACH), one of the most important structured analytic techniques in use in Western intelligence assessment, pioneered by CIA analyst Richards J. Heuer.13 The method is systematically to list all the possible explanations (alternative hypotheses) and to test each piece of evidence, each inference and each assumption made as to whether it is significant in choosing between them (known, by an ugly term, as the discriminatability of the intelligence report). We then prefer the explanation with the least evidence pointing against it.
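The ACH bookkeeping can be sketched in code. The hypotheses and evidence items below are invented for illustration, not drawn from any real assessment; the point is the scoring rule, preferring the hypothesis with the least evidence against it:

```python
# Illustrative ACH table: each evidence item scored for consistency with each
# hypothesis. Items and hypothesis names are hypothetical.
evidence_scores = {
    "covert procurement channel": {"weapons": "consistent", "civil": "inconsistent"},
    "military security at site":  {"weapons": "consistent", "civil": "inconsistent"},
    "enrichment capability":      {"weapons": "consistent", "civil": "consistent"},
}
hypotheses = ["weapons", "civil"]

def evidence_against(hypothesis):
    """Count items inconsistent with the hypothesis - Heuer's key metric."""
    return sum(1 for scores in evidence_scores.values()
               if scores[hypothesis] == "inconsistent")

# Prefer the hypothesis with the LEAST evidence against it,
# not the one with the most evidence for it.
best = min(hypotheses, key=evidence_against)
print(best, {h: evidence_against(h) for h in hypotheses})
```

Note that the item consistent with both hypotheses contributes nothing to the choice between them, exactly as the text goes on to observe about intelligence reports that fit every explanation.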

Alas, in everyday life, most situations we come across cannot be tested under repeated trials. Nor can we know in advance, or work out from first principles, what ideal results to compare with our observed data (such as the characteristics of a fair die). We cannot know that a boss is exhibiting unfair prejudice against one of their team in the way we can establish that a die is biased. But if we have a hypothesis of bias we can rationally test it against the evidence of observed behaviour. We will have to apply judgement in assessing the motives of the people involved and in testing possible alternative explanations for their behaviour against the evidence, discriminating between these hypotheses as best we can. When we apply Bayesian inference to everyday situations in that way, we end up with a degree of belief in the hypothesis that we conclude best explains the observed data. That result is inevitably subjective, but is the best achievable from the available evidence. And, of course, we must always therefore be open to correction if fresh evidence is obtained.

Stage 2 of SEES: explaining

The first step in stage 2 of SEES is therefore to decide what possible explanations (hypotheses) to test against each other. Let me start with an intelligence example. Suppose secret intelligence reveals that the military authorities of a non-nuclear weapon State A are seeking covertly to import specialist high-speed fuses of a kind associated with the construction of nuclear weapons but that also have some civilian research uses. I cannot be certain that State A is pursuing a nuclear weapons programme in defiance of the international Non-Proliferation Treaty, although I might know that it has the capability to enrich uranium. The covert procurement attempts might be explicable by caution on the part of State A that open attempts to purchase such fuses for civil use would be bound to be misunderstood. And the civil research institutions of State A might be using the military procurement route just for convenience since the military budget is larger. One hypothesis might be that the fuses are for a prohibited nuclear weapons programme. The obvious alternative would be that the fuses are for an innocent civil purpose. But there might be other hypotheses to test: perhaps the fuses were for some other military use. The important thing is that all the possible explanations should be caught by one or other of the hypotheses to be tested (in the jargon, exhausting the solution space). A further refinement might be to split the first hypothesis into two: a government-approved procurement for a nuclear weapons programme and one conducted by the military keeping the government in ignorance.

In that way we establish mutually exclusive hypotheses to test. Now we can turn to our evidence and see whether our evidence helps to discriminate between them. We start with identifying key assumptions that might be swaying our minds and ask ourselves how the weight of evidence might shift if we change the assumptions (the analysts might, for example, take for granted that any nuclear research would be in the hands of the military). We identify inferences that we have drawn and whether they are legitimate (the fact that the end-user was not revealed on the procurement documents may imply that there is something to hide, or it may be just that overseas government procurement is carried out in that country via an import–export intermediary). Finally, we examine each piece of intelligence (not just secret intelligence of course; there are likely to be open sources as well) to see in Bayesian fashion whether it would be more likely under each of the hypotheses, and thus helps us discriminate between them. In doing this we check at the same time how confident we are in each piece of information being reliable, as we discussed in the preceding chapter.
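The Bayesian sifting described above can be made concrete with a toy calculation for the hypothetical State A case. The priors and the likelihoods p(report | hypothesis) below are invented numbers purely for illustration:

```python
# Toy sequential Bayesian update over three mutually exclusive hypotheses.
beliefs = {"weapons programme": 0.2, "civil research": 0.5, "other military use": 0.3}

reports = [
    # equally likely under every hypothesis: will leave beliefs unchanged
    {"weapons programme": 0.9, "civil research": 0.9, "other military use": 0.9},
    # covert channels used for the purchase
    {"weapons programme": 0.8, "civil research": 0.2, "other military use": 0.5},
    # high-grade military communications at the warehouse
    {"weapons programme": 0.7, "civil research": 0.1, "other military use": 0.4},
]

for likelihood in reports:
    # Bayes's rule: multiply prior by likelihood, then renormalise to sum to 1.
    unnormalised = {h: beliefs[h] * likelihood[h] for h in beliefs}
    total = sum(unnormalised.values())
    beliefs = {h: v / total for h, v in unnormalised.items()}

print({h: round(p, 3) for h, p in beliefs.items()})
```

With these assumed numbers the weapons hypothesis ends up the most likely, even though it did not start that way, because the later reports are much harder to explain under the civil-research hypothesis.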

Some of the intelligence reports may be consistent with all our hypotheses and they must be put to one side, however fascinating they are to read. Frustratingly, that can happen with reports of hard-to-get intelligence where perhaps lives have been risked to acquire it. A table (known in the trade as a Heuer table, after the pioneer of the use of structured analytic techniques, Richards J. Heuer) can be drawn up with separate columns for each hypothesis and rows for each piece of evidence, whose consistency with each hypothesis can then be logged in the table.

The first few rows of such a table might look like this:

Evidence | Source type; credibility, relevance | Hypothesis 1: is related to a plan to conduct nuclear-weapon-related experiments | Hypothesis 2: can be explained by research for civil purposes
Evidence 1: known capability to enrich uranium | An assumption; Medium | Consistent | Consistent
Evidence 2: procurement was via an import–export company | An inference; credibility High, relevance Medium | Consistent | Less consistent
Evidence 3: military security seen around | Imagery; High | Consistent | Less consistent
Evidence 4: covert channels were used to acquire high-speed fuses | Humint (new source on trial); High | Consistent | Much less consistent
Evidence 5: encrypted high-grade military comms to and from the warehouse | Sigint; credibility High, relevance High | Consistent | Much less consistent

A hypothetical example of part of a Heuer table

It may become apparent that one particular report provides the dominant evidence, in which case wise analysts will re-examine the sourcing of the report. A lesson from experience (including that of assessing Iraq’s holdings of chemical and biological weapons in 2002) is that once we have chosen our favoured explanation we become unconsciously resistant to changing our mind. Conflicting information that arrives is then too easily dismissed as unreliable or ignored as an anomaly. The table method makes it easier to establish an audit trail of how analysts went about reaching their conclusions. A record of that sort can be invaluable if later evidence casts doubt on the result, perhaps raising suspicions that some of the intelligence reporting was deliberately fabricated as a deception. We will see in Chapter 5 how German, US and UK analysts were deliberately deceived by the reporting of an Iraqi defector into believing that in 2003 Saddam Hussein possessed mobile biological warfare facilities.

The analysis of competing hypotheses using Heuer tables is an example of one of the structured analytic techniques in use today in the US and UK intelligence communities. The method is applicable to any problem you might have where different explanations have to be tested against each other in a methodical way. Heuer himself cites Benjamin Franklin who, writing from London in 1772, described to Joseph Priestley (the discoverer of oxygen) his approach to making up his mind:

    ‘divide half a sheet of paper by a line into two columns; writing over the one Pro and over the other Con … put down over the different heads short hints of the different motives … for or against the measure. When I have thus got them all together in one view, I endeavour to estimate their relative weights; and where I find two, one on each side, that seem equal I strike them out. Thus proceeding I find where the balance lies … and come to a determination accordingly.’

In any real example there is likely to be evidence pointing both ways so a weighing up at the end is needed. Following the logic of scientific method it is the hypothesis that has least evidence against it that is usually to be favoured, not the one with most in favour. That avoids the bias that could come from unconsciously choosing evidence to collect that is likely to support a favoured hypothesis. I invite you to try this structured technique for yourself the next time you have a tricky decision to take.
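Franklin’s ‘moral algebra’ is simple enough to express directly in code. The items and weights below are purely illustrative inventions:

```python
# Franklin's method: pros and cons with rough weights; equal-weight pairs
# cancel, and the decision follows where the balance lies.
pros = [("forces transparency", 3), ("leaves an audit trail", 2), ("surfaces assumptions", 3)]
cons = [("slower than intuition", 3), ("needs training", 1)]

def franklin_balance(pros, cons):
    """Total pro weight minus total con weight: 'find where the balance lies'."""
    return sum(w for _, w in pros) - sum(w for _, w in cons)

balance = franklin_balance(pros, cons)
print("for" if balance > 0 else "against", balance)
```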

A striking example of the importance of falsifying alternative theories rather than confirming the most favoured comes from an unexpected quarter: the 2016 US Presidential election. It was an election campaign beset with allegations of ‘fake news’ (including the false stories created and spread by Russian intelligence agents to try to discredit one candidate, Hillary Clinton). One of the stories spread online featured a photograph of a young Donald Trump with the allegation that, in an interview with People magazine in 1998, he said: ‘If I were to run, I would run as a Republican. They’re the dumbest group of voters in the country. They believe anything on Fox News. I could lie and they’d still eat it up. I bet my numbers would be terrific.’ That sounds just like Trump, but the only flaw is that he never said it to People magazine. A search of People magazine disconfirms that hypothesis – he gave no such interview.14 This story is an example of a falsifiable assertion. The hypothesis that he did say it can be checked and quickly shown to be untrue (that may of course have been the scheming intent of its authors, in order to lend support to the assertion that other anti-Trump stories were equally false). Most statements about beliefs and motivations are non-falsifiable and cannot be disproved in such a clear way. Instead, judgement is needed in reaching a conclusion that involves weighing evidence for and against, as we have seen with the Heuer method.

Assumptions and sensitivity testing

In this second stage of SEES, it is essential to establish how sensitive your explanation is to your assumptions and premises. What would it have taken to change my mind? Often the choice of explanation that is regarded as most likely will itself depend upon a critical assumption, so the right course is to make that dependency clear and to see whether alternative assumptions might change the conclusion reached. Assumptions have to be made, but circumstances can change, and what was reasonable to take as a given at one time may no longer be so later.
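Sensitivity testing of this kind can be sketched numerically: hold the likelihoods fixed and sweep the prior to see how strongly the conclusion depends on the assumption it encodes. All the numbers below are illustrative:

```python
# How much does the posterior depend on the prior we happened to choose?
p_e_given_h = 0.8       # assumed: evidence fairly likely if the hypothesis is true
p_e_given_not_h = 0.1   # assumed: evidence unlikely otherwise

for prior in (0.1, 0.3, 0.5, 0.7):
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h   # total probability
    posterior = prior * p_e_given_h / p_e                        # Bayes's rule
    print(f"prior {prior:.1f} -> posterior {posterior:.2f}")
```

If the conclusion only holds for a narrow band of priors, it rests on the assumption rather than on the evidence, and that dependency should be made explicit to the reader.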

Structured diagnostic techniques, such as comparing alternative hypotheses, have the great advantage that they force an analytic group to argue transparently through all the evidence, perhaps prompting double-checking of the reliability of some piece of intelligence on which the choice of hypothesis seems to rest, or exposing an underlying assumption that may no longer hold or that would not be sensible to make in the context of the problem being examined.

As we will see in the next chapter, turning an explanation into a predictive model that allows us to estimate how events will unfold is crucially dependent on honesty over the assumptions we make about human behaviour. Marriages are predicated on the assumption that both partners will maintain fidelity. Many is the business plan that has foundered because assumptions made in the past about consumer behaviour turned out to no longer be valid. Government policies can come unstuck, for example, when implicit assumptions, such as about whether the public will regard them as fair, turn out not to reflect reality. A striking example was the British Criminal Justice Act 1991 that made fines proportionate to the income of the offender, and collapsed on the outcry when two men fighting, equally to blame, were fined £640 and £64 respectively because they belonged to different income brackets.

Back in Serbia in 1995, General Mladić, to our surprise, simplified our assessment task of trying to understand and explain his motivations.

Pulling out a brown leather-backed notebook, every page filled with his own cramped handwriting, Mladić proceeded to read to us from it for over half an hour recounting the tribulations of the Serb people at the hands both of the Croats and, as he put it, the Turks. He gave us his version of the history of his people, including the devastating Serbian defeat by the Ottoman Empire in 1389 at the Battle of the Field of Blackbirds. That was a defeat he saw as resulting in 500 years of Serbian enslavement. He recounted the legend that the angel Elijah had appeared to the Serb commander, Lazar, on the eve of the battle saying that victory would win him an earthly kingdom, but martyrdom would win a place for the Serb people in heaven. Thus even defeat was a spiritual triumph, and justified the long Serbian mission to recover their homeland from their external oppressors.

According to Mladić’s candid expression of his world view in that dining room in Serbia, he felt it was a continuing humiliation to have Muslims and Croats still occupying parts of the territory of Bosnia–Herzegovina, and an insult to have the West defending Bosnian Muslims in enclaves inside what he saw as his own country. In a dramatic climax to his narrative he ripped open his shirt and cried out, ‘Kill me now if you wish, but I will not be intimidated’, swearing that no foreign boot would be allowed to desecrate the graves of his ancestors.

Mladić had effectively given us the explanation we were seeking and answered our key intelligence question on his motivation for continuing to fight. We returned to our capitals convinced that the ultimatum had been delivered and understood, but Mladić would not be deterred from further defiance of the UN. The West would have to execute a policy U-turn to stop him, by replacing the UN peacekeepers with NATO combat troops under a UN mandate that could be safely backed by the use of air power. And so it worked out, first with the Anglo-French rapid reaction force on Mount Igman protecting Sarajevo and then the deployment of NATO forces including 20,000 US troops, all supported by a major air campaign.

I should add my satisfaction that the final chapter in the story concluded on 22 November 2017, when the Hague war crimes tribunal, with judges from the Netherlands, South Africa and Germany, ruled that, as part of Mladić’s drive to terrorize Muslims and Croats into leaving a self-declared Serb mini-state, his troops had systematically murdered several thousand Bosnian Muslim men and boys, and that groups of women, and girls as young as twelve years old, were routinely and brutally raped by his forces. The judges detailed how soldiers under Mladić’s command killed, brutalized and starved unarmed Muslim and Croat prisoners. Mladić was convicted of war crimes and sentenced to life imprisonment.

Conclusions: explaining why we are seeing what we do

Facts need explaining to understand why the world and the people in it are behaving as they appear to be. In this chapter, we have looked at how to seek the best ‘explanation’ of what we have observed or discovered about what is going on. If we wish to interpret the world as correctly as we can we should:

Recognize that the choice of facts is not neutral and may be biased towards a particular explanation.

Remember that facts do not speak for themselves and are likely to have plausible alternative explanations. Context matters in choosing the most likely explanation. Correlations between facts do not imply a direct causal connection.

Treat explanations as hypotheses each with a likelihood of being true.

Specify carefully alternative explanatory hypotheses to cover all the possibilities, including the most straightforward in accordance with Occam’s razor.

Test hypotheses against each other, using evidence that helps discriminate between them, an application of Bayesian inference.

Take care over how we may be unconsciously framing our examination of alternative hypotheses, risking emotional, cultural or historical bias.

Accept the explanatory hypothesis with the least evidence against it as most likely to be the closest fit to reality.

Generate new insights from sensitivity analysis of what it would take to change our mind.

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 3


Part One



Lesson 1: Situational awareness Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong

London, 11 p.m., 20 April 1961. In room 360 of the Mount Royal Hotel, Marble Arch, London, four men are waiting anxiously for the arrival of a fifth. Built in 1933 as rented apartments and used for accommodation by US Army officers during the war, the hotel was chosen by MI6 as a suitably anonymous place for the first face-to-face meeting of Colonel Oleg Penkovsky of Soviet military intelligence, the GRU, with the intelligence officers who would jointly run him as an in-place agent of MI6 and CIA. When Penkovsky finally arrived he handed over two packets of handwritten notes on Soviet missiles and other military secrets that he had smuggled out of Moscow as tokens of intent. He then talked for several hours explaining what he felt was his patriotic duty to mother Russia in exposing to the West the adventurism and brinkmanship of the Soviet leader, Nikita Khrushchev, and the true nature of what he described as the rotten two-faced Soviet regime he was serving.1

The huge value of Penkovsky as a source of secret intelligence came from the combination of his being a trained intelligence officer and his access to the deepest secrets of the Soviet Union – military technology, high policy and personalities. He was one of the very few with his breadth of access allowed to visit London, tasked with talent spotting of possible sources for Soviet intelligence to cultivate in Western business and scientific circles.

Penkovsky had established an acquaintance with a frequent legitimate visitor to Moscow, a British businessman, Greville Wynne, and entrusted him with his life when he finally asked Wynne to convey his offer of service to MI6. From April 1961 to August 1962 Penkovsky provided over 5500 exposures of secret material on a Minox camera supplied by MI6. His material alone kept busy twenty American and ten British analysts, and his 120 hours of face-to-face debriefings occupied thirty translators, producing 1200 pages of transcript.

At the same time, on the other side of the Atlantic, intelligence staffs worried about the military support being provided by the Soviet Union to Castro’s Cuba. On 14 October 1962 a U2 reconnaissance aircraft over Cuba photographed what looked to CIA analysts like a missile site under construction. They had the top secret plans Penkovsky had passed to MI6 showing the typical stages of construction and operation for Soviet medium-range missile sites. In the view of the CIA, without this information it would have been very difficult to identify which type of nuclear-capable missiles were at the launch sites and track their operational readiness. On 16 October President Kennedy was briefed on the CIA assessment and shown the photographs. By 19 October he was told a total of nine such sites were under construction and had been photographed by overflights. On 21 October the British Prime Minister, Harold Macmillan, was informed by President Kennedy that the entire US was now within Soviet missile range with a warning time of only four minutes. Macmillan’s response is recorded as ‘now the Americans will realize what we here in England have lived through these past many years’. The next day, after consultation with Macmillan, the President instituted a naval blockade of Cuba.


The Cuban missile crisis is a clear example of the ability intelligence has to create awareness of a threatening situation, the first component of the SEES model of intelligence analysis. The new evidence turned US analysts’ opinion on its head. They had previously thought the Soviets would not dare to attempt introducing nuclear missile systems in the Western hemisphere. Now they had a revised situational awareness of what the United States was facing.

There is a scientific way of assessing how new evidence should alter our beliefs about the situation we face, the task of the first stage of the SEES method. That is the Bayesian approach to inference, widely applied in intelligence analysis, modern statistics and data analysis.3 The method is named after the Rev. Thomas Bayes, the eighteenth-century Tunbridge Wells cleric who first described it in a note on probability found among his papers after his death in 1761.

The Bayesian approach uses conditional probability to work backwards from seeing evidence to the most likely causes of that evidence existing. Think of the coin about to be tossed by a football referee to decide which side gets to pick which goal to attack in the first half of the game. To start with it would be rational to estimate that there is a 50 per cent probability that either team will win the toss. But what should we think if we knew that in every one of the last five games involving our team and the same referee we had lost the toss? We would probably suspect foul play and reduce our belief that we stand an even chance of winning the toss this time. That is what we describe as the conditional probability, given that we now know the outcome of previous tosses. It is different from our prior estimate. What Bayesian inference does in that case is give us a scientific method of starting with the evidence of past tosses to arrive at the most likely cause of those results, such as a biased coin.
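The referee’s coin can be worked through numerically. The prior and the ‘rigged coin’ likelihood below are assumptions chosen for illustration:

```python
# A fair coin loses the toss for the same team five games in a row
# with probability (1/2)^5 = 1/32.
p_fair_prior = 0.95            # assumed: we start out strongly believing the coin is fair
p_streak_if_fair = 0.5 ** 5    # 1/32: five lost tosses with a fair coin
p_streak_if_rigged = 1.0       # assumed: a coin rigged against us always loses the toss

# Bayes's rule: posterior belief that the coin is fair, given the streak.
posterior_fair = (p_fair_prior * p_streak_if_fair) / (
    p_fair_prior * p_streak_if_fair + (1 - p_fair_prior) * p_streak_if_rigged)
print(round(posterior_fair, 3))
```

Even a strong prior belief in fairness is substantially eroded by five losses in a row, which is exactly the intuition the text describes of working backwards from the evidence to its most likely cause.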

Bayesian inference helps us to revise our degree of belief in the likelihood of any proposition being true given our learning of evidence that bears on it. The method applies even when, unlike the coin-tossing example, we only have a subjective initial view of the likelihood of the proposition being true. An example would be the likelihood of our political party winning the next election. In that case it might then be new polling evidence that causes us to want to revise our estimate. We can ask ourselves how far the new evidence helps us discriminate between alternative views of the situation or, as we should term them, alternative hypotheses, about what the outcome is likely to be. If we have a number of alternatives open to us, and the evidence is more closely associated with one of them than the alternatives, then it points us towards believing more strongly that that is the best description of what we face.

The Bayesian method of reasoning therefore involves adjusting our prior degree of belief in a hypothesis on receipt of new evidence to form a posterior degree of belief in it (‘posterior’ meaning after seeing the evidence). The key to that re-evaluation is to ask the question: if the hypothesis was actually true, how likely is it that we would have been able to see that evidence? If we think that evidence is strongly linked to the hypothesis being true, then we should increase our belief in the hypothesis.

The analysts in the Defense Intelligence Agency in the Pentagon had originally thought it was very unlikely that the Soviet Union would try to introduce nuclear missiles into Cuba. That hypothesis had what we term a low prior probability. We can set this down precisely using notation that will come in handy in the next chapter. Call the hypothesis that nuclear missiles would be introduced N. We can write their prior degree of belief in N as a prior probability p(N) lying between 0 and 1. In this case, since they considered N very unlikely, they might have given p(N) a probability value of 0.1, meaning only 10 per cent likely.

The 14 October 1962 USAF photographs forced them to a very different awareness of the situation. They saw evidence, E, consistent with the details Penkovsky had provided of a Soviet medium-range nuclear missile installation under construction. The analysts suddenly had to face the possibility that the Soviet Union was introducing such a capability into Cuba by stealth. They needed to find the posterior probability p(N|E) (read as the reassessed probability of the hypothesis N given the evidence E where the word ‘given’ is written using the vertical line |).

The evidence in the photographs was much more closely associated with the hypothesis that these were Soviet nuclear missile launchers than any alternative hypothesis. Given the evidence in the photographs, they did not appear to be big trucks carrying large pipes for a construction site, for instance. The chances of the nuclear missile hypothesis being true given the USAF evidence will be proportionate to p(E|N), which is the likelihood of finding that evidence on the assumption that N is true. That likelihood was estimated as much greater than the overall probability that such photographs might have been seen in any case (which we can write as p(E)). The relationship between the nuclear missile hypothesis and the evidence seen, that of p(E|N) to p(E), is the factor we need to convert the prior probability p(N) to the posterior probability that the decision-maker needs, p(N|E).

The Rev. Bayes gave us the rule to calculate what the posterior probability is:

p(N|E) = p(N) × [p(E|N) / p(E)]

Or, the new likelihood of something being the case given the evidence is found by adjusting what you thought was likely (before you saw the evidence) by how well the new evidence supports the claim of what could be happening.

This is the only equation in this book. Despite wanting to talk as plainly as possible, I’ve included it because it turns words into precise, calculable conditional likelihoods, which is what so much of modern data science is about. In the next chapter we examine how we can apply Bayes’s great insight to work backwards, inferring from observations what are the most likely causes of what we see.

The example of the Cuban missile crisis shows Bayesian logic in action to provide new situational awareness. For example, if the analysts had felt that the photographs could equally well have been of a civil construction site and so the photographs were equally likely whether or not N was true (i.e. whether or not these were nuclear missile launchers) then p(E|N) would be the same as p(E), and so the factor in Bayes’s rule is unity and the posterior probability is no different from the prior. The President would not be advised to change his low degree of belief that Khrushchev would dare try to introduce nuclear missiles into Cuba. If, on the other hand, E would be much more likely to be seen in cases where N is true (which is what the Penkovsky intelligence indicated), then it is a strong indicator that N is indeed true and p(E|N) will be greater than p(E). So p(N|E) therefore rises significantly. For the Pentagon analysts p(N|E) would have been much nearer to 1, a near certainty. The President was advised to act on the basis that Soviet nuclear missiles were in America’s backyard.
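As a sketch, the analysts’ update can be put into numbers. Only the low prior of 0.1 comes from the text; the likelihood values are invented for illustration:

```python
def bayes_posterior(prior, p_e_given_n, p_e):
    """Bayes's rule as given in the text: p(N|E) = p(N) * [p(E|N) / p(E)]."""
    return prior * p_e_given_n / p_e

p_n = 0.1   # the analysts' low prior that missiles would be introduced

# If the photographs were equally likely either way, the belief is unchanged:
assert bayes_posterior(p_n, 0.5, 0.5) == p_n

# Assumed likelihoods: photographs near-certain given missiles, rare otherwise.
p_e_given_n = 0.9
p_e = p_n * p_e_given_n + (1 - p_n) * 0.02   # total probability of seeing E
posterior = bayes_posterior(p_n, p_e_given_n, p_e)
print(round(posterior, 3))   # belief jumps from 0.1 to about 0.83
```

Pushing the ‘rare otherwise’ likelihood still lower drives the posterior towards 1, matching the near certainty the Pentagon analysts reached.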

Kennedy’s key policy insight in 1962 was recognition that Khrushchev would only have taken such a gamble over Cuba having been persuaded that it would be possible to install the missiles on Cuba covertly, and arm them with nuclear warheads before the US found out. The US would then have discovered that the Soviet Union was holding at immediate risk the entire Eastern seaboard of the US, but would have been unable to take action against Cuba or the missiles without running unacceptable risk. Once the missiles had been discovered before they were operational, it was then the Soviet Union that was carrying the risk of confrontation with the naval blockade Kennedy had ordered. Kennedy privately suggested a face-saving ladder that Khrushchev could climb down (by offering later withdrawal of the old US medium-range missiles based in Turkey), which Khrushchev duly accepted. The crisis ended without war.

The story of President Kennedy’s handling of the Cuban missile crisis has gone down as a case study in bold yet responsible statecraft. It was made possible by having situational awareness – providing the what, who, where and when that the President needed based on Penkovsky’s intelligence on technical specifications about Soviet nuclear missiles, their range and destructive power, and how long they took to become operational after they were shipped to a given location. That last bit of intelligence persuaded Kennedy that he did not need to order air strikes to take out the missile sites immediately. His awareness of the time he had gave him the option of trying to persuade Khrushchev that he had miscalculated.

Bayesian inference is central to the SEES method of thinking. It can be applied to everyday matters, especially where we may be at risk of faulty situational awareness. Suppose you have recently been assigned to a project that looks, from the outside, almost impossible to complete successfully on time and in budget. You have always felt well respected by your line manager, and your view of the situation is that you have been given this hard assignment because you are considered highly competent and have an assured future in the organization. However, at the bottom of an email stream that she had forgotten to delete before forwarding, you notice that your manager calls you ‘too big for your boots’. Working backwards from this evidence you might be wise to infer that it is more likely your line manager is trying to pull you down a peg or two, perhaps by getting you to think about your ability to work with others, by giving you a job that will prove impossible. Do try such inferential reasoning with a situation of your own.

Most intelligence analysis is a much more routine activity than the case of the Cuban missile crisis. The task is to try to piece together what’s going on by looking at fragmentary information from a variety of sources. The Bayesian methodology is the same in weighing information in order to be able to answer promptly the decisionmakers’ need to know what is happening, when and where and who is involved.

When data is collected in the course of intelligence investigations, scientific experiments or just in the course of web browsing and general observation, there is a temptation to expect that it will conform to a known pattern. Most of the data may well fit nicely. But some may not. That may be because there are problems with the data (source problems in intelligence, experimental error for scientists) or because the sought-for pattern is not an accurate enough representation of reality. It may be that the bulk of the observations fit roughly the expected pattern. But more sensitive instruments or sources with greater access may also be providing data that reveals a new layer of reality to be studied. In the latter case, data that does not fit what has been seen before may be the first sighting of a new phenomenon that cries out to be investigated, or, for an intelligence officer, that could be the first sign that there is a deception operation being mounted. How to treat such ‘outliers’ is thus often the beginning of new insights. Nevertheless, it is a natural human instinct to discard or explain away information that does not fit the prevailing narrative. ‘Why spoil a good story’ is the unconscious thought process. Recognizing the existence of such cases is important in learning to think straight.

Penkovsky had quickly established his bona fides with MI6 and the CIA. But our judgements depend crucially on assessing how accurate and reliable the underlying information base is. What may be described to you as a fact about some event of interest deserves critical scrutiny to test whether we really do know the ‘who, what, where and when’. In the same way, an intelligence analyst would insist when receiving a report from a human informant on knowing whether this source had proved to be regular and reliable, like Penkovsky, or was a new untested source. Like the historian who discovers a previously unknown manuscript describing some famous event in a new way, the intelligence officer has to ask searching questions about who wrote the report and when, and whether they did so from first-hand knowledge, or from a sub-source, or even from a sub-sub-source with potential uncertainty, malicious motives or exaggeration being introduced at each step in the chain. Those who supply information owe the recipient a duty of care to label carefully each report with descriptions to help the analyst assess its reliability. Victims of village gossip and listeners to The Archers on BBC Radio 4 will recognize the effect.

The best way to secure situational awareness is when you can see for yourself what is going on, although even then be aware that appearances can be deceptive, as optical illusions demonstrate. It would always repay treating with caution a report on a social media chat site of outstanding bargains to be had on a previously unknown website. Most human eyewitness reporting needs great care to establish how reliable it is, as criminal courts know all too well. A good intelligence example where direct situational awareness was hugely helpful comes from the Falklands conflict. The British authorities were able to see the flight paths of Argentine air force jets setting out to attack the British Task Force because they had been detected by a mountaintop radar in Chile, and the Chilean government had agreed their radar picture could be accessed by the UK.

Experienced analysts know that their choice of what deserves close attention and what can be ignored is a function of their mental state at the time.[4] They will be influenced by the terms in which they have been tasked but also by how they may have unconsciously formulated the problem. The analysts will have their own prejudices and biases, often from memories of previous work. In the words of the tradecraft primer for CIA officers:

‘These are experience based constructs of assumptions and expectations both about the world in general and more specific domains. These constructs strongly influence what information analysts will accept – that is, data that are in accordance with analysts’ unconscious mental models are more likely to be perceived and remembered than information that is at odds with them.’[5] Especial caution is needed therefore when the source seems to be showing you what you had most hoped to see.

The interception and deciphering of communications and the product of eavesdropping devices usually have high credibility with intelligence analysts because it is assumed those involved do not realize their message or conversation is not secure and therefore will be speaking honestly. But that need not be the case, since one party to a conversation may be trying to deceive the other, or both may be participating in an attempt to deceive a third party, such as the elaborate fake communications generated before the D-Day landings in June 1944 to create the impression of a whole US Army Corps stationed near Dover. That, combined with the remarkable double agent operation that fed back misleading information to German intelligence, provided the basis of the massive deception operation mounted for D-Day (Operation Fortitude). The main purpose was to convince the German High Command that the landings in Normandy were only the first phase with the main invasion to follow in the Pas de Calais. That intelligence-led deception may have saved the Normandy landings from disaster by persuading the German High Command to hold back an entire armoured division from the battle.

Unsubstantiated reports (at times little more than rumour) swirl around commercial life and are picked up in the business sections of the media and are a driver of market behaviour. As individuals, the sophisticated analysts of the big investment houses may well not be taken in by some piece of market gossip. But they may well believe that the average investor will be, and that the market will move, and as a consequence they have to make their investment decisions as if the rumour is true. It was that insight that enabled the great economist John Maynard Keynes to make so much money for his alma mater, King’s College Cambridge, in words much quoted today in the marketing material of investment houses: ‘successful investing is anticipating the anticipation of others’.[6] Keynes described this process in his General Theory as a beauty contest:

It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.[7]

The Penkovsky case had a tragic ending. His rolls of film had to be delivered by dead drop in the teeth of Soviet surveillance using methods later made famous by John le Carré’s fictional spies, including the mark on the lamppost to indicate there was material to pick up. That task fell to Janet Chisholm, the wife of Penkovsky’s SIS case officer working under diplomatic cover in the Moscow Embassy. She had volunteered to help and was introduced to Penkovsky during one of his official visits to London. It was no coincidence therefore that her children were playing on the pavement of Tsvetnoy Boulevard while she watched from a nearby bench, at the exact moment Oleg Penkovsky in civilian clothes walked past. He chatted to the children and offered them a small box of sweets (that he had been given for that purpose during his meeting in London) within which were concealed microfilms of documents that Penkovsky knew would meet London’s and Washington’s urgent intelligence requirements. Similar drops of film followed. She was, however, later put under routine surveillance and by mischance she was seen making a ‘brush contact’ with a Russian who the KGB could not immediately identify but who triggered further investigations. That and other slips made by Penkovsky led finally to his arrest. His go-between, the British businessman Greville Wynne, was then kidnapped during a business trip to Budapest, and put on show trial in Moscow alongside Penkovsky. Both were found guilty. Penkovsky was severely tortured and shot. Wynne spent several years in a Soviet prison until exchanged in 1964 in a spy swop for the convicted KGB spy Gordon Lonsdale (real name Konon Molody) and his cut-outs, an antiquarian bookseller and his wife, Peter and Helen Kroger, who had helped him run a spy ring against the UK Admiralty research establishment at Portland.

The digital revolution in information gathering

Today a Penkovsky could more safely steal secret missile plans by finding a way of accessing the relevant database. That is true for digital information of all kinds if there is access to classified networks. Digital satellite imagery provides global coverage. The introduction of remotely piloted aircraft with high-resolution cameras provides pin-sharp digitized imagery for operational military, security and police purposes, as well as for farming, pollution control, investigative journalism and many other public uses. At any incident there are bound to be CCTV cameras and individuals with mobile phones (or drones) that have high-resolution cameras able to take video footage of the event – and media organizations such as broadcasters advertise the telephone numbers to which such footage can be instantly uploaded. Every one of us is potentially a reconnaissance agent.

There is the evident risk that we end up with simply too much digital data to make sense of. The availability of such huge quantities of digitized information increases the importance of devising artificial intelligence algorithms to sort through it and highlight what appears to be important.[8] Such methods rely upon applying Bayesian inference to learn how best to search for the results we want the algorithms to detect. They can be very powerful (and more reliable than a human would be) if the task they are given is clear-cut, such as checking whether a given face appears in a large set of facial images or whether a specimen of handwriting matches any of those in the database. But these algorithms are only as reliable as the data on which they were trained, and spurious correlations are to be expected. The human analyst is still needed to examine the selected material and to add meaning to the data.[9]
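The Bayesian matching idea can be sketched in the odds form of Bayes’s rule: each compared feature multiplies the prior odds that two specimens match by a likelihood ratio. The numbers below are toy values for illustration, not those of any real matching system:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes's rule: odds(H|E) = odds(H) * product of
    p(feature|match) / p(feature|no match) over the observed features."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# Toy feature comparisons between two handwriting specimens: each ratio
# says how much more likely that feature is under "same writer".
ratios = [3.0, 5.0, 0.5, 4.0]   # one feature (0.5) actually counts against
prior = 1 / 99                  # 1-in-100 prior that the writers match

odds = posterior_odds(prior, ratios)
print(round(odds_to_probability(odds), 2))  # 0.23
```

The same arithmetic also shows the training-data caveat in the text: if the likelihood ratios were learned from unrepresentative data, every update inherits that error.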

At the same time, we should remember that the digital world also provides our adversaries with ample opportunities to operate anonymously online and to hack our systems and steal our secrets. Recognition of these cyber-vulnerabilities has led the liberal democracies to give their security and intelligence agencies access to powerful digital intelligence methods, under strict safeguards, to be able to search data in bulk for evidence about those who are attacking us.

One side effect of the digitization of information is the democratization of situational awareness. We can all play at being intelligence analysts given our access to powerful search engines. Anyone with a broadband connection and a mobile device or computer has information power undreamed of in previous eras. There is a new domain of open-source intelligence, or OSINT. We use this ourselves when trying to decide which party to vote for in an election and want to know what each candidate stands for, or ascertaining the level of property prices in a particular area, or researching which university offers us the most relevant courses. The Internet potentially provides the situational awareness that you need to make the right decision. But like intelligence officers you have to be able to use it with discrimination.

The tools available to all of us are remarkable. Catalogues of image libraries can be searched to identify in fractions of a second a location, person, artwork or other object. Google Images has indexed over 10 billion photographs, drawings and other images. By entering an address almost anywhere in the world, Google Street View will enable you to see the building and take a virtual drive round the neighbourhood with maps providing directions and overlays of information. The position of ships and shipping containers can be displayed on a map, as can the location of trains across much of Europe.

With ingenuity and experience, an internet user can often generate situational awareness to rival that of intelligence agencies and major broadcasting corporations. The not-for-profit organization Bellingcat[10] is named after Aesop’s fable in which the mice propose placing a bell around the neck of the cat so that they are warned in good time of its approach but none will volunteer to put the bell on it. Bellingcat publishes the results of non-official investigations by private citizens and journalists into war crimes, conditions in war zones and the activities of serious criminals. Its most recent high-profile achievement was to publish the real identities of the two GRU officers responsible for the attempted murder of the former MI6 agent and GRU officer Sergei Skripal and his daughter in Salisbury and the death of an innocent citizen.

It requires practice to become proficient in retrieving situational information from the 4.5 billion indexed pages of the World Wide Web (growing by about one million documents a day) and the hundreds of thousands of accessible databases. Many sites are specialized and take skill, effort and the inclination to find (a location map of fishing boats around the UK, for example, should you ever want to know, can be found at fishupdate.com).

Although huge, the indexed surface web accessible by a search engine is estimated to be only 0.03 per cent of the total Internet. Most of the Internet, the so-called deep web, is hidden from normal view, for largely legitimate reasons since it is not intended for casual access by an average user. These are sites that can only be accessed if you already know their location, such as corporate intranets and research data stores, and most will be password-protected. In addition to the deep web, a small part of the Internet is the so-called ‘dark web’ or ‘dark net’ with its own indexing, which can only be reached if specialist anonymization software such as Tor is being used to hide the identity of the inquirer from law enforcement.[11] The dark net thus operates according to different rules from the rest of the Internet that has become so much a part of all of our daily lives. An analogy for the deep web would be the many commercial buildings, research laboratories and government facilities in any city that the average citizen has no need to access, but when necessary can be entered by the right person with the proper pass. The dark net, to develop that cityscape analogy, can be thought of like the red-light district in a city with a small number of buildings (sometimes very hard to find), where access is controlled because the operators want what is going on inside to remain deeply private. At one time, these would have been speakeasies, illegal gambling clubs, flophouses and brothels, but also the meeting places of impoverished young artists and writers, political radicals and dissidents. Today it is where the media have their secure websites which their sources and whistleblowers can access anonymously.

I guess we have all cursed when clicking on the link for a web page we wanted brought up the error message ‘404 Page Not Found’. Your browser communicated with the server, but the server could not locate the web page where it had been indexed. The average lifespan of a web page is under 100 days so skill is needed in using archived web material to retrieve sites that have been mislabelled, moved or removed from the web. Politicians may find it useful that claims they make to the electorate can thus quickly disappear from sight, but there are search methods that can retrieve old web pages and enable comparison with their views today.[12] Most search engines use asterisks to denote wild cards, so a query that includes ‘B*n Lad*n’ will search through the different spellings of his name such as Ben Laden, Bin Laden (the FBI-preferred spelling), Bin Ladin (the CIA-preferred spelling) and so on. Another useful lesson is the use of the tilde, the ~ character on the keyboard. So prefacing a query term with ~ will result in a search for synonyms as well as the specific query term, and will also look for alternative endings. Finally, you can ask the search to ignore a word by placing a minus in front of it, as –query. The meta-search engine Dogpile will return answers taken from other search engines, including from Google and Yahoo.
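The asterisk-wildcard idea can be tried offline with Python’s standard fnmatch module (search engines implement their own pattern syntax, so this is only an analogy):

```python
import fnmatch

# Candidate spellings from the text; '*' matches any run of characters,
# so the single pattern below covers Ben Laden, Bin Laden and Bin Ladin.
spellings = ["Ben Laden", "Bin Laden", "Bin Ladin", "Osama bin Laden"]
pattern = "B*n Lad*n"

matches = [s for s in spellings if fnmatch.fnmatchcase(s, pattern)]
print(matches)  # ['Ben Laden', 'Bin Laden', 'Bin Ladin']
```

The last entry fails to match only because the pattern anchors on an initial capital ‘B’, a reminder that a wildcard query is only as good as the pattern you write.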

The order in which results are presented to you after entering a search query into a search engine can give a misleading impression of what is important. The answers that are returned (miraculously in a very small fraction of a second) may have been selected in a number of different ways. The top answer may be as a result of publicity-based search – a form of product placement where a company, interest group or political party has paid to have its results promoted in that way (or has used one of the specialist companies that offer for a fee to deliver that result to advertisers). A search on property prices in an area will certainly flag up local estate agents who have paid for the marketing advantage of appearing high up on the page. The answers will also take account of the accumulated knowledge in the search database of past answers, and also which answers have been most frequently clicked for further information (a popularity-based search, thus tapping into a form of ‘wisdom of the crowd’). This can be misleading. While it may be interesting to see the results of a search for information about university courses that has been sorted by what were the most popular such searches, it is hardly helpful if what you want to know about is all the courses available that match your personal interests.

Finally, and perhaps most disturbingly, the suggested answers to the query may represent a sophisticated attempt by the algorithm to conduct a personalized search by working out what it is that the user is most likely to want to know (in other words, inferring why the question is being asked) from the user’s previous internet behaviour and any other personal information about the individual accessible by the search engine. Two different people entering the same search terms on different devices will therefore get a different ranking of results. My query ‘1984?’ using the Google Chrome browser and the Google search engine brings up George Orwell’s dystopian novel along with suggestions of how I can most conveniently buy or download a copy. Helpfully, the Wikipedia entry on the book is also high up on the first page of the 1.49 billion results I am being offered (in 0.66 seconds). The same query using the Apple Safari browser and its search engine brings up first an article about the year 1984 telling me it was a leap year. And a very different past browsing history might highlight references to the assassination of Indira Gandhi in 1984, or news that the release of the forthcoming film Wonder Woman 1984 has been postponed to 2020. Internet searching is therefore a powerful tool for acquiring the components of situational awareness. That is, for as long as we can rely on an open Internet. If the authorities were to have insisted that the search algorithms did not reference Orwell’s book in response to queries from their citizens about 1984 then we would indeed have entered Orwell’s dystopian world. That, sadly, is likely to be the ambition of authoritarian regimes that will try to use internet technology for social control.

Conclusions: lessons in situational awareness

In this chapter, we have been thinking about the first stage of SEES, the task of acquiring what I have termed situational awareness, knowing about the here and now. Our knowledge of the world is always fragmentary and incomplete, and is sometimes wrong. But something has attracted our attention and we need to know more. It may be because we have already thought about what the future may bring and had strategic notice of areas we needed to monitor. Or it may be that some unexpected observation or report we have received triggers us to focus our attention. There are lessons we can learn about how to improve our chances of seeing clearly what is going on when answering questions that begin with ‘who, what, where and when’.

We should in those circumstances:

Ask how far we have access to sufficient sources of information.

Understand the scope of the information that exists and what we need to know but do not.

Review how reliable the sources of information we do have are.

If time allows, collect additional information as a cross-check before reaching a conclusion.

Use Bayesian inference to adjust our degree of belief about what is going on in the light of new information.

Be open and honest about the limitations of what we know, especially in public, and be conscious of the public reactions that may be triggered.

Be alive to the possibility that someone is deliberately trying to manipulate, mislead, deceive or defraud us.

David Omand – How Spies Think – 10 Lessons in Intelligence – Part 2

Daniel Craig as James Bond in Spectre

SEES: a model of analytical thinking

I am now a visiting professor teaching intelligence studies in the War Studies Department at King’s College London, at Sciences Po in Paris and also at the Defence University in Oslo. My experience is that it really helps to have a systematic way of unpacking the process of arriving at judgements and establishing the appropriate level of confidence in them. The model I have developed – let me call it by an acronym that recalls what analysts do as they look at the world, the SEES model – leads you through the four types of information that can form an intelligence product, derived from different levels of analysis:

Situational awareness of what is happening and what we face now.

Explanation of why we are seeing what we do and the motivations of those involved.

Estimates and forecasts of how events may unfold under different assumptions.

Strategic notice of future issues that may come to challenge us in the longer term.

There is a powerful logic behind this four-part SEES way of thinking. Take as an example the investigation of far-right extremist violence. The first step is to find out as accurately as possible what is going on. As a starting point, the police will have had crimes reported to them and will have questioned witnesses and gathered forensic evidence. These days there is also a lot of information available on social media and the Internet, but the credibility of such sources will need careful assessment. Indeed, even well-attested facts are susceptible to multiple interpretations, which can lead to misleading exaggeration or underestimation of the problem.

We need to add meaning so that we can explain what is really going on. We do that in the second stage of SEES by constructing the best explanation consistent with the available evidence, including an understanding of the motives of those involved. We see this process at work in every criminal court when prosecution and defence barristers offer the jury their alternative versions of the truth. For example, why are the fingerprints of an accused on the fragments of a beer bottle used for a petrol bomb attack? Was it because he threw the bottle, or is the explanation that it was taken out of his recycling box by the mob looking for material to make weapons? The court has to test these narratives and the members of the jury have then to choose the explanation that they think best fits the available evidence. The evidence rarely speaks for itself. In the case of an examination of extremist violence, in the second stage we have to arrive at an understanding of the causes that bring such individuals together. We must learn what factors influence their anger and hatred. That provides the explanatory model that allows us to move on to the third stage of SEES, when we can estimate how the situation may change over time, perhaps following a wave of arrests made by the police and successful convictions of leading extremists. We can estimate how likely it is that arrest and conviction will lead to a reduction in threats of violence and public concern overall. It is this third step that provides the intelligence feedstock for evidence-based policymaking.

The SEES model has an essential fourth component: to provide strategic notice of longer-term developments. Relevant to our example we might want to examine the further growth of extremist movements elsewhere in Europe or the impact on such groups were there to be major changes in patterns of refugee movements as a result of new conflicts or the effects of climate change. That is just one example, but there are very many others where anticipating future developments is essential to allow us to prepare sensibly for the future.

The four-part SEES model can be applied to any situation that concerns us and where we want to understand what has happened, why, and what may happen next, from a stressful situation at work to your sports team losing badly. SEES is applicable to any situation where you have information and want to make a decision on how best to act on it.

We should not be surprised to find patterns in the different kinds of error tending to occur when working on each of the four components of the SEES process. For example:

Situational awareness suffers from all the difficulties of assessing what is going on. Gaps in information exist and often evoke a reluctance to change our minds in the face of new evidence.

Explanations suffer from weaknesses in understanding others: their motives, upbringing, culture and background.

Estimates of how events will unfold can be thrown out by unexpected developments that were not considered in the forecast.

Strategic developments are often missed due to too narrow a focus and a lack of imagination as to future possibilities.

The four-part SEES approach to assessment is not just applicable to affairs of state. At heart it contains an appeal to rationality in all our thinking. Our choices, even between unpalatable alternatives, will be sounder as a result of adopting systematic ways of reasoning. That includes being able to distinguish between what we know, what we do not know and what we think may be. Such thinking is hard. It demands integrity.

Buddhists teach that there are three poisons that cripple the mind: anger, attachment and ignorance.[7] We have to be conscious of how emotions such as anger can distort our perception of what is true and what is false. Attachment to old ideas with which we feel comfortable and that reassure us that the world is predictable can blind us to threatening developments. This is what causes us to be badly taken by surprise. But it is ignorance that is the most damaging mental poison. The purpose of intelligence analysis is to reduce such ignorance, thereby improving our capacity to make sensible decisions and better choices in our everyday lives.

On that fateful day in March 1982 Margaret Thatcher had immediately grasped what the intelligence reports were telling her. She understood what the Argentine Junta appeared to be planning and the potential consequences for her premiership. Her next words demonstrated her ability to use that insight: ‘I must contact President Reagan at once. Only he can persuade Galtieri [General Leopoldo Galtieri, the Junta’s leader] to call off this madness.’ I was deputed to ensure that the latest GCHQ intelligence was being shared with the US authorities, including the White House. No. 10 rapidly prepared a personal message from Thatcher to Reagan asking him to speak to Galtieri and to obtain confirmation that he would not authorize any landing, let alone any hostilities, and warning that the UK could not acquiesce in any invasion. But the Argentine Junta stalled requests for a Reagan conversation with Galtieri until it was much too late to call off the invasion.

Only two days later, on 2 April 1982, the Argentine invasion and military occupation of the Islands duly took place. There was only a small detachment of Royal Marines on the Islands and a lightly armed ice patrol ship, HMS Endurance, operating in the area. No effective resistance was possible. The Islands were too far away for sea reinforcements to arrive within the two days’ notice the intelligence had given us, and the sole airport had no runway capable of taking long-distance troop-carrying aircraft.

We had lacked adequate situational awareness from intelligence on what the Junta was up to. We had failed to understand the import of what we did know, and therefore had not been able to predict how events would unfold. Furthermore, we had failed over the years to provide strategic notice that this situation was one that might arise, and so had failed to take steps that would have deterred an Argentine invasion. Failures in each of the four stages of SEES analysis.

All lessons to be learned.

How this book is organized

The four chapters in the first part of this book are devoted to the aforementioned SEES model. Chapter 1 covers how we can establish situational awareness and test our sources of information. Chapter 2 deals with causation and explanation, and how the scientific method called Bayesian inference allows us to use new information to alter our degree of belief in our chosen hypothesis. Chapter 3 explains the process of making estimates and predictions. Chapter 4 describes the advantage that comes from having strategic notice of long-term developments.
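The Bayesian updating mentioned above can be illustrated with a minimal numerical sketch. The scenario and all numbers below are invented for illustration only; they are not drawn from the book.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(H|E) from P(H), P(E|H) and P(E|not H) via Bayes' theorem."""
    # Total probability of seeing the evidence under either hypothesis.
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    # Posterior: how strongly we should now believe the hypothesis.
    return p_evidence_if_true * prior / p_evidence

# Hypothetical example: an analyst starts with a 20% prior that an
# invasion is being prepared. A new intercept is judged 90% likely if
# an invasion is planned, but only 10% likely otherwise.
posterior = bayes_update(prior=0.2,
                         p_evidence_if_true=0.9,
                         p_evidence_if_false=0.1)
print(round(posterior, 3))  # belief rises from 0.2 to roughly 0.692
```

A single piece of strongly diagnostic evidence can more than triple the analyst's degree of belief; repeated updates chain in the same way, with each posterior becoming the next prior.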

There are lessons from these four phases of analysis in how to avoid different kinds of error, failing to see what is in front of us, misunderstanding what we do see, misjudging what is likely to follow and failing to have the imagination to conceive of what the future may bring.

Part Two of this book has three chapters, each drawing out lessons in how to keep our minds clear and check our reasoning.

We will see in Chapter 5 how cognitive biases can subconsciously lead us to the wrong answer (or to fail to be able to answer the question at all). Being forewarned of those very human errors helps us sense when we may be about to make a serious mistake of interpretation.

Chapter 6 introduces us to the dangers of the closed-loop conspiratorial mindset, and how it is that evidence which ought to ring alarm bells can too often be conveniently explained away.

The lesson of Chapter 7 is to beware deliberate deceptions and fakes aimed at manipulating our thinking. There is misinformation, which is false but circulated innocently; malinformation, which is true but is exposed and circulated maliciously; and disinformation, which is false, and that was known to be false when circulated for effect. The ease with which digital text and images can be manipulated today makes these even more serious problems than in the past.

Part Three explores three areas of life that call for the intelligent use of intelligence.

The lessons of Chapter 8 are about negotiating with others, something we all have to do. The examples used come from extraordinary cases of secret intelligence helping to shape perceptions of those with whom governments have to negotiate, and of how intelligence can help build mutual trust – necessary for any arms control or international agreement to survive – and help uncover cheating. We will see how intelligence can assist in unravelling the complex interactions that arise from negotiations and confrontations.

Chapter 9 identifies how you go about establishing and maintaining lasting partnerships. The example here is the successful longstanding ‘5-eyes’ signals intelligence arrangement between the US, the UK, Canada, Australia and New Zealand, drawing out principles that are just as applicable to business and even to personal life.

The lesson of Chapter 10 is that our digital life provides new opportunities for the hostile and unscrupulous to take advantage of us. We can end up in an echo chamber of entertaining information that unconsciously influences our choices, whether over products or politics. Opinion can be mobilized by controlled information sources, with hidden funding and using covert opinion formers. When some of that information is then revealed to be knowingly false, confidence in democratic processes and institutions slowly ebbs away.

The concluding chapter, Chapter 11, is a call to shake ourselves awake and recognize that we are all capable of being exploited through digital technology. The lessons of this book put together an agenda to uphold the values that give legitimacy to liberal democracy: the rule of law; tolerance; the use of reason in public affairs and the search for rational explanations of the world around us; and our ability to make free and informed choices. When we allow ourselves to be over-influenced by those with an agenda, we erode our free will, and that is the gradual erosion of an open society. Nobody should be left vulnerable to the arguments of demagogues or snake-oil salesmen. The chapter, and the book, therefore end on an optimistic note.

We can learn the lessons of how to live safely in this digital world.

Must See Video – Russian GRU-Agent Colonel Georgy Viktorovich Kleban meets Serbian Spy

Russian spies are corrupting Serbia. This is a video of Russian military main intelligence directorate (GRU) officer Colonel Georgy Viktorovich Kleban paying his Serbian agent, a senior Serbian official. Kleban works at the Russian Embassy in Belgrade. This is what the Russians do to us, their 'friends'.

Prohibiting Procurement from Huawei, ZTE, and Other Chinese Companies

The National Reconnaissance Office (NRO) Acquisition Manual is hereby amended by adding new sub-part N4.21, Prohibition on Contracting for Certain Telecommunications and Video Surveillance Services or Equipment, to implement a provision of the 2019 National Defense Authorization Act prohibiting the procurement and use of covered equipment and services produced or provided by Huawei Technologies Company, ZTE Corporation, Hytera Communications Corporation, Hangzhou Hikvision Digital Technology Company, and Dahua Technology Company. New provision N52.204-016, Representation Regarding Certain Telecommunications and Video Surveillance Services or Equipment, is prescribed for use in all solicitations in lieu of FAR provision 52.204-24, and new clause N52.204-017, Prohibition on Contracting of Certain Telecommunications and Video Surveillance Services or Equipment, is prescribed for all solicitations and contracts in lieu of FAR clause 52.204-25. These revisions are effective immediately, and will be incorporated into NRO Acquisition Circular 2019-03.