
Why the hell did we miss it? A story of missed signals, bad warnings, mind games and other reasons why intelligence fails

“Why did no one see this coming?” is what Queen Elizabeth II asked a group of economists a few weeks into the 2008 financial crisis. In fact, it is the question most often raised in the wake of crises and surprises of all kinds. Sometimes, things happen that we simply do not see coming. Events like the attacks of September 11, 2001 (9/11), Pearl Harbor, natural disasters, and full-scale invasions seem to fit this description. But the truth is that, more often than not, these disasters were not impossible to detect. Often, the information exists but, as Professor James Wirtz explained, it is simply obscured by noise. So what are the elements of noise that lead governments and their intelligence agencies to miss these events? This article delves into some, although not all, of the factors that often lead to so-called intelligence failures, i.e. situations where the information exists but is not identified, interpreted, or acted upon effectively.

Cognitive Factors

It may seem obvious, but we must remember that this information needs to be found, interpreted, delivered, and acted on by people. Thus, the process cannot be perfect. Our human brains are not machines; they run on biases, shortcuts, and assumptions about the world. These inevitably shape the way intelligence is conducted and applied by decision-makers. What's more, we often do not even know this is happening, as our brains disguise these biases and shortcuts as logical thinking. Although the psychological processes affecting how information is interpreted and decisions are made are incredibly complex and vast, here are a few examples of the cognitive limitations that have led to poor crisis detection in the past.

Firstly, we have confirmation bias. This is the cognitive tendency to look for and favor information that supports our assumptions and beliefs, which can lead intelligence analysts and politicians to ignore, discount, or misinterpret contradicting evidence. For example, in the lead-up to the British-American invasion of Iraq in 2003, both governments assumed that Saddam Hussein possessed Weapons of Mass Destruction (WMD). This led them to focus only on the intelligence supporting this narrative, such as Iraq's negotiations with African nations for a uranium purchase (which never came through) and the fabricated reports of the “Curveball” source, who was really just looking for preferential treatment. Similarly, in the days leading up to the Yom Kippur War in 1973, Israeli officials strongly believed that Egypt would not launch a surprise attack until it had achieved aerial superiority (which it did not have). These underlying assumptions (known as “the concept”; ha-konseptsia in Hebrew) resulted in the reinterpretation of the massive body of evidence suggesting an imminent attack. Instead, the large-scale movement of troops to the border was read as a series of defensive exercises; after all, Egypt did not yet have the capacity to strike deep into Israel, and launching a war without such capacity was simply not reasonable (at least according to the Israeli analysts).

The confirmation phenomenon is related to a second cognitive problem: reasoning by analogy, also known as the availability bias. When confronted with a problem, people are likely to draw on previous similar cases, usually cases that come to mind quickly or that carry some sort of personal importance. For example, pretend that you are the head of intelligence of an imaginary country and you hear that a group of hijackers has taken over a plane that could reach a major city in 20 minutes. What would you do? The best decision would probably be to take down the plane before it enters the city, right? If you think so, and you reached that conclusion quickly, that is the availability bias at work. As political science students in 2026, it is safe to assume that we have all heard and read about 9/11. So when confronted with a situation that seems similar, our brain fills in the blanks and immediately associates the two scenarios. We quickly conclude that the hijackers will crash into a major piece of infrastructure or building, killing many innocent civilians. Yet in this case we have very little information; we do not know what the hijackers want, whether they are indeed heading into the city, how many people are on the plane, and so on. Still, we were quick to decide what to do based on the assumption that this is just another terrorist attack following the script of 2001. This bias can lead to an excessive focus on situations following a familiar script “while failing to anticipate truly novel developments”. In fact, this is what happened in the actual 9/11 attacks. Prior to September 2001, most plane hijackings were pursued with the intent to negotiate: rather than large-scale terrorist attacks using the plane as a weapon, they were closer to traditional hostage situations. The American intelligence community (IC) failed to anticipate the 9/11 attacks for many different reasons (some of which this article explores), but part of this failure was undeniably due to this common bias.

Next we have mirror imaging. This is when we assume that other actors will “think, perceive, and decide” much as we would under similar circumstances. When this bias comes into play, analysts fail to consider the cultural, historical, ideological, and institutional elements that shape decision-making elsewhere. Mirror imaging could be seen, for example, in the intelligence assessments surrounding the Japanese attack on Pearl Harbor in December 1941. Although the United States (US) had received warning reports about a possible Japanese attack in the region, it ruled out Pearl Harbor as a likely target. In American eyes, it was irrational for the Japanese to attack a key base of a far superior adversary. Yet “rationality” clearly meant something different to the Japanese decision-makers who went ahead with the attack. Due to mirror imaging, the American authorities wrongly assumed that rationality meant the same thing in both contexts and failed to prepare for one of the most devastating attacks in US history.

What's more, these cognitive issues can be further exacerbated in group settings. One of the most problematic group dynamics is the creation of echo chambers, or groupthink. This dysfunction is most common in cohesive groups of like-minded individuals operating under stress. It describes situations where groups experience an “excessive desire for consensus, conformity pressure, and suppression of dissenting views”. It can easily lead to illusions of invulnerability, collective reinterpretation of contradicting evidence and, perhaps most importantly, self-censorship of doubts or dissenting opinions. When a group seems so in sync, no one wants to be the person slowing the momentum and contradicting everyone else. As a result, individuals who could present crucial evidence against the established narrative may keep the information to themselves or, worse, face backlash and be silenced by the rest of the group. Alternative scenarios are thus never even considered, let alone prepared for, by the intelligence analysts and decision-makers in charge of national security. This is what happened during the Bay of Pigs invasion, as President Kennedy's advisors avoided challenging the assumptions and narrative inherited from President Eisenhower's team. They kept pushing the idea that the invasion would succeed, despite repeated concerns from Central Intelligence Agency (CIA) officials about its unpreparedness and poor planning, as well as the knowledge that Castro (just like everyone else with a New York Times subscription) was aware of the planned invasion.

Coordination and Disjunctive Information

While we are discussing groups, it is important to note that, more often than not, the information that could lead to early detection of a crisis, the “dots that need to be connected”, is not neatly placed in front of a single actor. Instead, it is usually scattered, with many different individuals, organizations, and government agencies each holding at least one piece of the larger puzzle. This is called “disjunctive information”, and it is more common in countries with a large and complex intelligence community. The United States currently has one of the largest ICs in the world, with 18 intelligence agencies, including the CIA, the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA), among many others. Each has different goals, skill sets, and capabilities and, as a result, focuses on different things. For example, the CIA is mostly a preventive institution that looks for potential threats and tries to tackle them before they materialize, while the FBI is a law enforcement agency focused on gathering evidence that can stand up in court and secure criminal convictions. Ideally, all of these agencies would respect each other and share crucial information promptly and efficiently. Unfortunately, this is not always the case.

As previously mentioned, 9/11, the largest attack on the US since WWII and the deadliest terrorist attack in British history (67 UK nationals died), is considered a massive intelligence failure for many reasons. It was mainly an issue of “connecting the dots”: as political scientist Amy Zegart argues, the information really was all out there, but flawed bureaucratic design and competing institutional cultures (not to mention psychological limitations) meant that the organizations holding it did not cooperate adequately. Already in 1997, the White House Commission on Aviation Safety and Security produced a report warning about the possibility of weaponizing aircraft and calling for stronger aviation security. A few years later, in July 2001, the FBI learned that Al-Qaeda operatives were attending flight schools. Interestingly, the flight instructors pointed out that the students were interested in learning how to fly, but not that curious about how to land (a point that, somehow, the FBI did not dwell on). Then, in August 2001, the CIA warned the President that Al-Qaeda was threatening to attack the US and that it had identified the pilots as Al-Qaeda fighters. While all these reports existed, the CIA was not the one who talked to the flight instructors, the FBI did not receive the Al-Qaeda threat reporting, the immigration services allowed the pilots into the country (probably not knowing who they were), and everyone was left in the dark. The failure to share information adequately did not just lead to failed detection and a flawed response to the crisis; it also created internal frustration and deepened divides along distrust-plagued agency lines.

Dissemination Issues

Supposing that intelligence agencies have managed to overcome their psychological shortcuts and to coordinate effectively with each other, they still need to relay that information to the policymakers who must act on it. This seems pretty straightforward: you just let the president, your minister, or the senior officer know what the threat is. Yet, as you can probably tell by now, even easy-looking things get much more complicated in this field.

Firstly, crisis detectors need to consider the right timing to send warnings up the chain. According to Robert Jervis, “for intelligence to be welcomed and to have an impact, it must arrive at the right time, which is after leaders have become seized with the problem but before they made up their minds”. On a daily basis, political leaders have many different problems to deal with and often prioritize them in terms of possible political gains and losses. If an analyst interrupts their work with a little-known and seemingly small threat, they might not pay attention and may place the issue low on their priority list. On the other hand, if intelligence reports arrive after politicians have already made a decision on the matter, they will probably be unable to change any minds. As we saw earlier, once a politician's brain has created a convincing narrative about the situation, it will stick to it despite contradicting evidence. Thus, as Jervis points out, intelligence analysts are left with a short window of opportunity to make sure that their work is actually used.

Aside from the careful timing puzzle, intelligence officials need to ensure that their warnings are convincing enough. Their reports must present an accurate portrayal of events that clarifies the seriousness of the situation. If a warning makes the crisis seem minor and probably under control, politicians will likely decide not to do anything about it. Similarly, if the warning is confusing or paints a far-fetched scenario, politicians may discard it as unrealistic and unserious.

Two phenomena can sometimes play an important role in determining how convincing a warning is: the cry-wolf syndrome and the Cassandra effect. The former is probably best explained through an example. In May 1973, Ashraf Marwan (codenamed “The Angel”), a high-ranking Egyptian official working with Mossad, alerted Israeli intelligence that an Egyptian attack was imminent. Acting on the intelligence provided by their trusted source, Israel mobilized thousands of soldiers to the Sinai Peninsula, only for no attack to come. The unnecessary mobilization cost Israel around US$35 million. There is much debate over why the warning was inaccurate, with some claiming the Egyptian plans simply changed and others speculating that Marwan was a double agent acting on Egyptian instructions. Regardless of his intentions, the false alarm led to a loss of trust in any further information he could provide, including an accurate (though last-minute) warning of the actual attack on October 6th. Since he was no longer trusted, officials dismissed this warning and were caught off guard when the attack that started the Yom Kippur War materialized. The second phenomenon takes its name from the Greek mythological figure of Cassandra, a gifted prophet cursed so that no one would believe her. “Intelligence Cassandras” face a similar conundrum: they are the analysts, experts, or officials who accurately detect crises but whose warnings are ignored or dismissed. Often, Cassandras are officials who use unconventional methods, hold a lower rank, or contradict the aforementioned groupthink-created consensus. An example could be seen in the October 7 attacks (2023), ahead of which Cassandra-type figures produced no fewer than five warnings of a Hamas kidnapping plan. Since these came from lower-ranking or unconventional officials, the warnings were easily explained away and rationalized to fit the established narrative of calm. As a result, Israeli forces failed to anticipate the attack and, as they say, the rest is history.

As for exactly how much information and detail a warning should contain, the answer is somewhat muddier. Generally speaking, warnings can be categorized into three types based on how much certainty there is. Tactical or unambiguous warnings are the most specific: they establish in detail that an event will occur. Strategic and operational warnings are vaguer: they indicate that there might be a threat and may offer an idea of when and where it will materialize, but usually do not go into specifics. Seeing this, we may be immediately inclined to prioritize tactical warnings over the others at all times. In fact, this is what most politicians do. However, as Regan Copple argues, this can be an unrealistic and inadequate expectation. Unambiguous warnings are the hardest to obtain and, when they do exist, they are often produced so close to the event that action is no longer feasible. In other words, politicians who wait only for tactical warnings will probably never get one with enough time to prevent an attack. A related problem is that of length. Busy politicians cannot spend their time reading every single bit of intelligence collected. If they did, they would risk information overload, becoming distracted by irrelevant details and missing crucial data. Equally important is that warnings are written in language policymakers can actually understand, as too much jargon can obscure the real message. Thus, warnings should be short, clear, and convincing. Additionally, policymakers should understand what realistic expectations for intelligence products actually are.

Politicization

Politicization is the idea that intelligence (its analysis and products) can be influenced, tainted, or biased by policy considerations. Ideally, there should be a boundary between the (objective) production of intelligence and the policy decisions it later informs. In reality, this boundary is sometimes blurred, and we get politicization instead. Politicization is a specific and well-studied phenomenon on which entire books have been written, so for the sake of brevity what follows is an extremely short and simplified overview. Intelligence scholars and historians have identified two main types: (1) the subtle, unconscious contamination of analysis with policy considerations due to analysts' awareness of policy preferences; and (2) the conscious, top-down dictation of analytical conclusions by policymakers or senior intelligence staff. The former is mostly driven by a combination of the cognitive factors discussed at the beginning, contextual influences on the analyst, and a desire for their work to be relevant. Intelligence analysts, whether through direct interaction or just from watching the news, are usually aware of policymakers' preferences. Since they want their reports to actually be read and used, they may unconsciously cater to those preferences. The second type of politicization, perhaps the most problematic, amounts to manipulating or cherry-picking intelligence to push a specific political conclusion or goal. A common example is the Iraqi WMD claim. Leaders in the Bush and Blair administrations publicly presented intelligence about Iraqi WMD programs with far greater certainty than the evidence warranted, while suppressing doubts and dissent within their ICs. They made sure the intelligence products supported a pre-existing policy and framed the government's approach as appropriate. These intelligence reports have been amply criticized for having “sexed-up” the information and producing an estimate that was “either overstated or not supported by the underlying intelligence reporting”. Despite the evidence being weak or contradictory, the way key government figures reported the intelligence unmistakably supported the invasion of Iraq (their political goal).

Politicization is extremely hard to avoid. After all, the consumers of intelligence, the people for whom it is intended and who define its purpose, are usually politicians who are always thinking about the next election.

Finally, a related problem is that political preferences can sometimes lead policymakers to overlook or underprioritize what the intelligence is warning about. For example, President Clinton's response to the 1998 Al-Qaeda bombings of the American embassies in Nairobi and Dar es Salaam has been amply criticized, particularly in the wake of 9/11. Yet, as Steve Coll explains in his book “Ghost Wars”, given the Clinton-Lewinsky scandal of that same year, the President “had neither the credibility nor the political strength” to act on the intelligence by entering into a sustained military conflict. At the time, Clinton was more worried about getting impeached than about Al-Qaeda. Retaliation against the group was thus limited to an operation (Operation Infinite Reach) that killed 21 civilians but not Osama bin Laden. Intelligence assessments, in short, are constantly competing with other agenda items and are judged in terms of electoral benefit.

Are surprises always a mistake?

Having said all of that, we must also recognize that sometimes there really is no way intelligence officials could have known about a coming crisis. The so-called “shit happens” phenomenon is real. Sometimes things just go wrong, or you are simply in the wrong place at the wrong time. For example, before the Russian invasion of Ukraine in 2022, Bruno Kahl, the head of the German foreign intelligence agency (BND), travelled to talk to senior officials in Kyiv. When the invasion started, he was unable to leave the country and was left “stranded”. This was painted as a huge embarrassment and an intelligence failure. However, he was perfectly aware that the invasion was coming soon; in fact, that is what he had gone to discuss with the Ukrainians. It was simply bad timing and bad luck that the invasion began before he could return to Germany.

Related to this is the fact that sometimes the information really does not exist. After all, the purpose of intelligence is both to uncover other actors' secrets and to keep your own plans hidden. If adversaries do their jobs right, there should be no way for analysts to know what is happening that far in advance. Additionally, intelligence agencies often employ deception campaigns and techniques to mislead adversaries into forming a distorted perception of reality. To return to the Ashraf Marwan example: if his false warning really was issued on instructions from Cairo (Marwan was, after all, the late President Nasser's son-in-law), then it was a great deception technique that completely threw off Israel's readiness for an attack.

Finally, we must note that intelligence agencies are often subjected to impossible expectations. As Will Inboden explains, policymakers seem to expect the IC to simply predict the future perfectly. Yet, as we all know, that is impossible. At the end of the day, human behavior is often unpredictable, and there are far too many contextual, cognitive, cultural, and institutional variables shaping how a given event will pan out. Policymakers instead need to understand that prediction is unattainable. The best they can hope for are detailed and accurate warnings delivered in a timely and convincing manner.

Are intelligence failures inevitable?

Although this article paints a relatively grim picture, intelligence failures are often not inevitable. For each of the examples provided here, we can easily list a number of different decisions, approaches, or techniques that would have led to so-called successes. For example, the events of September 11th would probably have looked very different had the institutions of the intelligence community cooperated effectively and had analysts engaged in more creative risk assessments that considered the possible weaponization of airplanes. After conducting various assessments of what went wrong, the US government made significant changes to avoid such mistakes in the future. It abolished the position of Director of Central Intelligence (DCI), who was also in charge of the CIA, and created a new Director of National Intelligence (DNI). The DNI is now exclusively in charge of overseeing all the intelligence agencies and making sure that they work together when necessary. Similarly, to combat politicization, many countries have adopted the Sherman Kent model, which calls for a strict separation of intelligence analysis from the influence of policy preferences.

Although listing all the possible changes and solutions to the problems raised in this article would take too long, they are definitely out there. Learning from past mistakes is a great tool that all governments and agencies should use, not only to identify errors but to find ways to fix them.

Photo by kalhh via https://pixabay.com/illustrations/megaphone-dollar-money-banknotes-1189870/

Mariana Goldsmit
My name is Mariana Goldsmit, and this is my third year studying IRO. I am a writer and editor for DEBAT. I am highly passionate about writing and reading, and I am very interested in US and Latin American politics.