America’s Serial Foreign Policy Failures

Every few years for the past forty years, I have found one subject or another to be particularly fascinating. My inquisitiveness has then led me to undertake a major research project that has usually lasted for three or four years. I am now in the midst of another one of these projects. I have spent the last year and a half reading everything I can that in any way relates to what I believe are two key questions:

  • Why have there been so many American foreign policy failures in the Middle East, especially since the Cold War ended? and,
  • Since the United States is a democracy with almost unlimited freedom of speech, why has the American public not sought to remedy that failing—especially since those failures have been so costly in lives and national treasure?

What follows is a preliminary assessment and a précis of the vast amount of material that I have been able to piece together so far.

In terms of methodology, from the outset, I understood that I would first have to try to determine whether one of the two subject areas that I was investigating caused the other…or whether there was merely a correlation between the two. In other words, since the same mistakes have been made by both Republican and Democratic presidents, has the consistent failure by American administrations to produce a cogent foreign policy, especially in the Middle East, only been due to a fault in the policy-making system? Or, has it been caused by government outsiders’ inability to correct mistakes both at an early stage and even later while that policy was being implemented? Or, did those two trends develop independently of each other and then only interact at a late stage in their evolution when there was a desire for post-mortems?

Full disclosure: I have to admit that I have also had personal questions that I hoped that this research effort would answer. Specifically, I will soon be entering my jubilee year of living and working in Israel. In recent years I have felt that the efforts I have invested in studying Israel and the region have been paying especially rich dividends; and my work is becoming more incisive and relevant than ever. I have also become particularly successful at predicting the course of events in the Middle East as a whole.

At the same time, though, I have had to witness two very negative and highly disturbing trends. First, the policies toward the Middle East of most of the great Western democracies have degenerated into a shambles. Second, I have also discovered that the prime market for my writing and analyses—media outlets in the United States—is no longer interested in what I have to say. I therefore cannot help but wonder whether the reaction to my work says something relevant about the twin issues that lie at the heart of my study. Is the latest crop of editors more ignorant than those who came before? Or are there other reasons why editors are less interested in in-depth, non-ideological and rational analysis? In particular, are the editors merely reacting to their markets? For example, have the media’s audiences, as expressed in part by Trumpmania, been sending a message to the administration that careful, rational policy-planning is no longer (or maybe never was) considered important by American voters?

Much of what I have discovered has been included in articles that I have written in the past year. However, those articles focused primarily on other issues, and this research material was used largely for illustrative purposes. Those articles can be found elsewhere on my blog. This article is an attempt to focus on the specific issue of American foreign policy-making and to provide both a more encompassing and a more detailed overview of that subject.

In brief, what I have discerned so far is that there is a matrix of relationships between the American public, the American government, American academe and the American media that is even more complex than anything that I have found in the supposedly “inscrutable” Middle East. I’ll go even further. Understanding the Middle East is a breeze by comparison with comprehending how Americans conjure up their foreign policies.

In general, the citizens of the United States have yet to come up with a full, coherent set of principles for making domestic policy. For example, each person is supposed to be equal before the law. Yet a disproportionate number of unarmed blacks are killed each year by the police. It should, therefore, come as no surprise that, in the absence of such a set of standards, the government of the United States and non-governmental bodies in the United States, whether working separately or together, have been unable, since the end of the Cold War, to come up with and to foster a coherent and comprehensive policy towards the Middle East.

Much of the detail about recent events that I am including in this article has been published in the media and in professional journals; and so some of the details will undoubtedly be familiar to many readers. However, to my considerable surprise, while seemingly millions of words have been used to produce post-mortems on subjects such as the invasion of Iraq, the failure of the US to mediate even a temporary and partial resolution to the Israeli-Palestinian dispute, and America’s waffling following the outbreak of the so-called “Arab Spring,” I have been unable to find anything of note that discusses whether there is something inherent in the American policy-making system or in American society that has led to such consistency in the production of failures and disasters.

Fortunately, I was able to find a totally different topic that parallels my areas of interest and that has been studied in some depth: the epidemic of national, irrational anti-scientism, which expresses itself in many policy-making fields—for example, in the reluctance of many parents to vaccinate their children and in the widespread refusal to accept that humankind is altering the climate of this planet. The explanations provided by researchers studying American anti-scientism, especially of the analytical fallacies to which many Americans have been prey, have enabled me to create a very useful and effective skeleton for my research. The flesh that I could put on that skeleton came from two primary sources—a careful reading of American history, and the research that I have conducted on the Middle East and the media over almost 50 years.

I now believe that I have done sufficient research that I can safely articulate some fairly firm conclusions. Many of them are both dispiriting and alarming.


Niall Ferguson, in the introduction to his recently published biography of Henry Kissinger, offers up the following remarkable condemnation of the American foreign policy-making fraternity and sorority: “In researching the life and times of Henry Kissinger, I have come to realize that…I had missed the crucial importance in American foreign policy of the history deficit: The fact that key decision-makers know almost nothing not just of other countries’ pasts but also of their own. Worse, they often do not see what is wrong with their ignorance.”[1]

I could not agree more with this observation. However, Ferguson interprets his insight too narrowly. His primary concern after reaching this conclusion is to demonstrate how Kissinger’s profound knowledge of history informed all of his work as a policy advisor. My research has shown that the implications of this reality are far broader and far deeper. Only if one comprehends the impact that this history deficit has had on Americans in general, and on American policy-makers other than Kissinger in particular, is it possible to understand how America’s serial foreign policy failures came about. In this case, when I talk about “history,” I am referring less to specific dates and events, and more to how a people’s culture (their values, beliefs and practices) has affected the way they have behaved; and how their behavior during events has affected their culture.

My emphasis on this particular aspect of history came about for several reasons. I have found that too many political pontificators, regardless of whether they are of the left/liberal, realist or neocon persuasions, base their tirades and tracts about Israel and the Palestinians on the same patently false data and assumptions—and have equal difficulty in abandoning these falsehoods even when presented with hard evidence. When I have attempted to counteract this ignorance, I have found that no matter how much effort I put into simplifying and clarifying my discussions about the background to events in the Middle East, my American readers and listeners in particular have often tuned me out. And to top things off, I have yet to understand why I have been consistently criticized by well-known American colleagues for even trying to place events into their historical context…and for trying to explain those events as part of an extended historical process that is underway.

Recognizing that many people are bored by history, I nonetheless now have little choice but to provide some necessary context for dealing with the subjects of my research. The historical material that follows may initially seem abstruse. However, I believe that current American policy-making—the stage when actions are being planned but have not yet been implemented—cannot be analyzed adequately in any other way. In particular, only by comprehending how history has created and shaped the domestic tensions that invariably accompany foreign policy decision-making is it possible to understand why and how the Americans have made so many mistakes in their dealings with the rest of the world.

Some of the origins of today’s follies in American foreign policy-making can be traced to the endemic conflict that arose when those events and models of thinking that were shaped by rationalism, such as

  • The Treaty of Westphalia (1648), and
  • The conceptual breakthrough of Amerigo Vespucci (1454-1512) and the subsequent cerebral ruminations of Galileo Galilei (1564-1642), Francis Bacon (1561-1626) and René Descartes (1596-1650),

were challenged by those events and models of thinking that were shaped by non-rationalism and anti-rationalism, such as

  • Those American Protestant founding fathers who were both devout and who played an instrumental role in the creation of the American national narrative, and
  • The religious “Great Awakenings” (1730-1743, 1800-1840, 1858-1900, late 1960s-today).

The Treaty of Westphalia (or, rather, the treaties of Westphalia) brought to an end the religious wars that had convulsed Europe for three decades and had led to the deaths of about a third of the people living in the German-speaking regions of the continent. The various treaties were based on what were then the revolutionary propositions that:

  • Europe was made up of co-existing sovereign states whose boundaries were immutable.
  • No state should be allowed to interfere in the internal affairs of another state.
  • Stability on the continent should be fostered through a balance of power among the varying states.

Initially, the treaties applied primarily to existing empires and principalities. However, the main principle upon which the agreements were based—the concept of the sovereign nation-state—eventually formed the foundation stone upon which the entire modern system of international governance was constructed. One of the first of the states-to-come whose system of governance was specifically designed with this model in mind was the United States of America.

Over the years, though, the idea of the sovereign nation state with inviolable borders has been found to be less of a peacemaker than had been hoped. For that reason it has undergone numerous revisions as time has passed. Among other things, those changes have also had a direct impact on how Americans view the world.

Initially, the modifications to the nation-state model were merely designed to ameliorate the impact that one crisis or another had had. One of the first such corrections occurred because the Treaty of Westphalia had made no provision for the possibility that one state (Napoleonic France) could become disproportionately more powerful than the other states—even when most of them were acting in concert politically.

The Congress of Vienna in 1815, which brought the Napoleonic wars to a formal end, sought to overcome that shortcoming by resizing the existing states and principalities and by adding a new, unwritten principle that stability in Europe should be assured by the creation of shifting political alliances that would constantly adjust the balance of power to contend with new events and new political and economic trends. Creating political stability became more important than promoting one form of government over another.

The system established by the Congress of Vienna worked reasonably well for almost a century. However, it eventually could not cope with the sum of a myriad of factors such as political cowardice, competitive colonialism, the asymmetrical modernization of Europe, the growth in populist nationalism, and the social impact of the advent of liberalism, progressivism and socialism. To make a long and very complicated story ultra-short, these weaknesses led to two world wars.

Americans were often appalled at the outcome of Europeans’ political manipulations both prior to and after World War I. Domestically, these reactions led to the consolidation of two existing, competing schools of thought. One was made up of secular political evangelists who sought to intervene in happenings taking place in venues far away so that they could “correct” the mistakes that had been made there. The other preferred to isolate America from any impact those events might have.

The decision by America to take part in the First World War was the first major step it took to alter its previous preference for isolationism. When the fighting ended, the US, willy-nilly, ended up becoming a charter supporter of the Treaty of Versailles, which had, among other things, revised the concept of nation-statism to take into account the emergence of ethno-based states in place of the Hapsburg Empire.


After the treaty was signed, the Western political order was controlled by countries that had adopted one of three different forms of governance—liberal democracy, Communism or authoritarian nationalism. There was an attempt to overcome the competition for superiority that the cohabitation of these very different forms of governance on the land mass of Europe was engendering. However, residual isolationism led the US to reject the supranationalism that was inherent in the thinking of those who had founded the League of Nations.


Eventually, Europe emerged from the Second World War with two major additions to the continent’s political mix. The United States had effectively become Western Europe’s hegemon. In order to provide legitimization for that reality, Washington now agreed to the founding of the United Nations. However, America’s distrust of foreigners remained largely intact. For that reason, Washington demanded de facto control over the UN through the institution of the right of veto.

A second significant change over which the Americans would have no control was the growth in support of a new, humanistic political philosophy that began to sweep through some of the countries in the western part of the European continent. That philosophy seeks to prioritize human rights issues above all others and has led to a dramatic expansion of the terms of the Hague and Geneva agreements. The US ratified those changes, but then, like many other states, ignored the terms of those pacts when they were inconvenient.

America’s behavior as a hegemon and the new European political philosophy actually had common roots. They arose out of the ideas of Bacon, Locke, Descartes and other thinkers who had emerged during the late Renaissance/early Enlightenment period.

But, crucially, these thinkers’ cogitations had even earlier roots that are too often ignored but that occasionally also have a significant impact on American foreign policy decision-making. By the time that Bacon & Co arrived on the scene, the Renaissance had been underway for just over 200 years. As Yuval Noah Harari has noted[2], one of the major breakthroughs in cogent thinking during that period had come from Amerigo Vespucci—who, among many other things, also ended up giving the New World its name.

Vespucci had made several trips to the New World soon after Columbus’s discoveries. Up until his voyages of exploration, mapmakers’ renderings of the edges of the known world had petered out into broad swaths of meaningless squiggles, drawings of mythical figures and other graphic embellishments that filled what might otherwise have been huge empty spaces. Maps of this sort reinforced the reigning intellectual conceit that everything that needed to be known about the geography of the world was already known.

Vespucci’s maps of the Americas, though, were conceptually different. They included vast white spaces, whose purpose was to indicate graphically that much about the world had remained unknown and was available for discovery.

The maps that had been crafted before Vespucci were dramatic exemplars of a more general mindset of the period. Difficult as it must be for most of us to believe today, throughout the two initial centuries of intellectual reawakening in Europe, one unproven belief had remained steadfast among all the leading thinkers of the age. It was that everything that needed to be known had already been revealed to the ancients. All one needed to do was to search for the appropriate, ancient document. Until that mental block was overcome, a methodical search for new knowledge and understanding would be delayed indefinitely.

Building on Vespucci’s insight, Galileo’s and Bacon’s most important contribution to humanity was, therefore, not so much their actual scientific endeavours as their success in creating an analytical framework that could enable anyone to challenge any belief.

Their framework was formed by two intellectual pillars that were abhorrent to the intellectual censor of the age, the Catholic Church. Those intellectual weight-bearing columns were a belief in the inherent value of skepticism, and an acceptance of the proposition that every human notion about the world may be wrong. Hard as it may be for us to comprehend, until Bacon & Co arrived on the scene, Western scholars simply could not and would not accept the idea that the great thinkers of the past had made mistakes. Bacon, in particular, insisted that one should not accept any idea or proposition as the truth unless it is accompanied by detailed evidence gathered through careful observation. Once this proposition became widely accepted, its impact was huge. For the first time, for example, many of Aristotle’s suppositions were questioned and disproved; and the way was opened to eventually refute almost everything that Galen had written about how to heal the human body.

However, it cannot be over-emphasized that rational thought based on careful observation has never overwhelmed the marketplace of ideas. Far from it. All that these men really did was to kick start what has become one of the greatest conflicts in human history—the battle between those for whom reason and the use of what has come to be called “the scientific method” is the guiding principle directing their thoughts, and those who prefer to rely on a set of untested and often speculative beliefs.

Bacon, together with two of his successors, Isaac Newton and John Locke, had a particularly profound effect on Thomas Jefferson, and therefore a direct impact on the writing of the US constitution and the way Americans think—or the way that they say they think. Jefferson called these creators of physics, inductive reasoning and empiricism his “trinity of three greatest men.”[3]

Of particular importance to my study is Jefferson’s reasoning that if anyone can discover the truth by using reason and science, then no one is naturally closer to the truth than anyone else. One conclusion that he drew from that realization was that those in positions of authority do not have the right to impose their beliefs on other people. The people themselves retain this as an inalienable right of their own. This perception was to find its fulfillment in the Bill of Rights that, especially in the First Amendment, is an adulation of skepticism and a declaration of the right to deny the validity of previously accepted wisdom.

It was this realization that led me to include the second question in my study. For that reason, let me now repeat that question using slightly different wording: If the assumption that anyone can discover the truth is so fundamental to the American system of governance, why have those people outside government had so little impact on policy-making?

The third historical element in the American policy-making process is one that I have not seen discussed recently either by scholars or journalists. It is the powerful influence that is still being exerted on American foreign policy-making by Protestant Christianity. In fact, if I had to name only one historical influence on American foreign policy, my first choice would be the secularization of the Calvinist/Baptist religious world view. After all, the European Enlightenment that so influenced America’s founding fathers was supercharged by Martin Luther’s protests; and the Treaty of Westphalia was the product of the Thirty Years War between Protestantism and Catholicism. The American secular catechism about the right to offer credit without being demeaned as a usurer, the duty to repay loans, the sacredness of labour and the right to gain and keep the wealth emanating from that labour is the direct offspring of what is often called the “Puritan Ethic”.

In its public appearances today, the “Protestant Ethic,” as it is used to formulate foreign policy, may have been scrubbed clean of its most overt religious appurtenances and the evidence of its religious origins. However, the religious/cultural beliefs that inform it remain. As a result, it is inevitable that its true-believer adherents will invariably misunderstand and come into conflict with believers in a different world-view.

My ideas about the continuing influence of Protestantism on American policy-making in general are not entirely new. A century ago, taking his cue from the writings of American founding father Benjamin Franklin, sociologist Max Weber postulated[4] that Northern European Calvinist/Baptist Protestantism was the foundation upon which modern American capitalism and the American economy were built. It is a fact that much, if not most, of America’s foreign policy has always been directed towards spreading the doctrine of capitalism and strengthening and protecting American capitalist enterprises.

Arguably, the most simplified and idealized summation of the content of modern American foreign policy can be found in a document that continues to influence American policy-making today, but which has been largely forgotten by the public at large and is almost never mentioned in current discussions of American foreign policy-making. The Atlantic Charter[5] was prepared in the wake of a meeting held between US President Franklin Delano Roosevelt and British Prime Minister Winston Churchill on August 14, 1941, months before the US entered World War II.

The eight principal points of the Charter were:

  1. no territorial gains were to be sought by the United States or the United Kingdom;
  2. territorial adjustments must be in accord with the wishes of the peoples concerned;
  3. all people had a right to self-determination;
  4. trade barriers were to be lowered;
  5. there was to be global economic cooperation and advancement of social welfare;
  6. the participants would work for a world free of want and fear;
  7. the participants would work for freedom of the seas;
  8. there was to be disarmament of aggressor nations, and a post-war common disarmament.

In an act of consummate hypocrisy designed solely to get the United States to enter the war and keep it there, all the allies eventually signed on to the charter—even though none of the imperial powers such as Britain, France, Belgium and Holland had any intention of giving up their colonies. More important for my purposes, though, is the fact that these objectives were never prioritized. This meant that there has never been any indication of which goals would or should take precedence if they came into conflict with each other—or if they came into conflict with domestic considerations, economic considerations or what eventually became America’s all-consuming campaign against Communism. This unwillingness or inability to prioritize then led to endemic incoherence in the foreign policies adopted by successive American governments. For example, America constantly preaches the institution of democracy as the remedy for all political ills. However, it has also shown no compunction about overthrowing democratically-elected governments such as those of Mohammed Mossadegh in Iran in 1953 and Salvador Allende in Chile in 1973.

Setting the Work Agenda

Another source of follies is nurtured by the bureaucratic environment in which policy decisions are made.

In principle, the basic list of American foreign policy advisors’ work priorities is set out in the President’s National Intelligence Priorities Framework, which is formally updated every six months. These guidelines are then used to produce the bulk of foreign-policy-related intelligence analyses that land on the president’s desk—the so-called “authoritative” analyses of events taking place in the world that appear, most importantly, each day in the form of The President’s Daily Brief. Those analyses also form the basis for the narratives that the spokespeople at the White House, the Pentagon and the State Department are likely to propagate and publicize in the months ahead.

The agenda for the policy-advisors is based on what the president determines are “America’s interests.” Those “interests” can vary depending on whether long-range or short-range issues are foremost in the president’s mind, whether constitutional or other idealistic principles, or economic concerns, determine his approach to policy-making, and how much of a rationalist or a “gut believer” he is.

The nature of the final version of the document is determined by what US Nobel Prize-winning economist Herbert Simon called humankind’s “bounded,” or limited, rationality[6]. Simon believed that the human mind is inherently incapable of coping with the complexity of the modern world.[7] For that reason, not only are people often caught by surprise by events, most individuals also have great difficulty in dealing with the already-existing “big picture.” As a result, even the brightest people create simplified mental models that they then use to make sense of events taking place around them. Instead of working hard at assembling a jumble of pieces of information into a single, coherent picture, most people—including presidents—prefer to create an image that is to their liking, and then find those bits and pieces of data that appear to support and validate that picture.

In the wake of the 9/11 intelligence fiasco, the RAND Corporation was asked by the US government to prepare an extensive study, which has since been declassified, of what had led to that ignominious calamity[8]. The study determined that the most unresolvable problem American government analysts face is that they can really only expect a proper hearing if they can shape what they have to offer to make it directly relevant to the policy-maker’s existing agenda—and especially only if they can present it in a form that the policy-maker “finds congenial” (my emphasis).

In a separate article, Gregory Treverton, one of the authors of the RAND report,[9] reveals some of the other dynamics that were actually at work among American intelligence analysts prior to another American debacle, the US invasion of Iraq:

“…whether Saddam Hussein’s Iraq had weapons of mass destruction in 2002, drives home the point that because intelligence is a service industry, what policy officials expect from it shapes its work. In the WMD case, neither the US investigating panel nor the British Butler report found evidence that political leaders had directly pressured intelligence agencies to come to a particular conclusion. Yet it is also fair to report that some analysts on both sides of the Atlantic did feel they were under pressure to produce the “right” answer: that Saddam Hussein had weapons of mass destruction.

“The interaction of intelligence and policy shaped the results in several other ways. Policy officials, particularly on the US side, when presented with a range of assessments by different agencies, cherry-picked their favourites (and sometimes grew their own cherries by giving credibility to information sources the intelligence services had discredited). As elsewhere in life, how the question was asked went a long way toward determining the answer…Moreover, American intelligence was asked over and over about links between Iraq and al-Qaeda. It stuck to its analytic guns—the link was tenuous at best—but the repeated questions served both to elevate the debate over the issue and to contribute to intelligence’s relative lack of attention to other questions.

“In the end, however, the most significant part of the WMD story was what intelligence and policy shared: a deeply held mindset that Saddam must have WMD… For intelligence, the mindset was compounded by history, for the previous time around, in the early 1990s, US intelligence had underestimated Iraqi WMD; it was not going to make that mistake again. In the end, if most people believe one thing, arguing for another is hard. There is little pressure to rethink the issue, and the few dissenters in intelligence are lost in the wilderness.”

The RAND study found that because of factors such as these, and despite the tens of thousands who are employed by the various intelligence agencies, there is a woeful lack of long-term analysis. Most of the work done by these employees is “tactical, operational or current.” They do not attempt to attain a “deep understanding of our adversaries.” (I should add that they also, too often, do not attempt to attain a deep understanding of their country’s friends.) Exacerbating the situation is the fact that too often, analysis within government entities is structured to meet organizational needs, not to deal with issues and problems.

History has shown that another element, one not mentioned in the RAND report, also has a profound impact on the material being made available to policy-makers. Despite their country’s status as a world superpower, Americans, as I have noted, have remained remarkably insular. The triumphalism that accompanied America’s victory in World War II, when combined with Americans’ belief in their uniqueness and exceptionalism, has led to a morbid syndrome often labelled “NIH” (Not Invented Here), which, in turn, leads policy-makers and policy-advisors to ignore or reject out of hand insights and data gathered by people who are not members of the Washington foreign policy-making guild.

Possibly the best example of this syndrome occurred in 1976. At the time, all of Washington was totally preoccupied by the “Red Threat” of the “Evil Empire”—the Communist Soviet Union. In that year, however, Emmanuel Todd, a demographer with a fanatical devotion to statistical analysis, published an essay based on an exhaustive examination of every set of numbers he could find about the Soviet Union[10]. He concluded that the Soviet Union was on the verge of collapse. The prescient article made no waves at all in Washington, because Todd had three strikes against him even before he dared challenge such an entrenched and devoutly-held piece of American conventional wisdom. He was French. He wrote in French. And, worst of all, he had been a member of the Communist youth club at his high school.

This American linguistic deficit does not surprise me. A friend who worked for the CIA once told me that almost all the assessments his office produced on the Middle East were based on what could be garnered from reading the English-language press, and only the English-language press. I will come back to the language issue in a moment.

The end product of all these factors is a frightening one: According to former “spooks” I have spoken to, answers to even the most critical questions are rarely sought by civil-service underlings unless they have first been assigned to do so; and if, for some reason, these subordinates happen to be in possession of answers to unasked questions, they do not usually volunteer them. The answers are offered up only if the “client” (the president or his immediate advisors) asks the right question.

The problem that then arises is: How is the “client” to know enough to ask the “right” question?

America’s founding fathers were apparently aware that such a situation might occur. Jefferson may have believed that any person who has the will, the intelligence and the resources to do so can provide the input needed. However, while America’s founding fathers may have been influenced by the great rationalist philosophers, American history has also been marked by successive periods of intense, belief-based public arousal. The best known of these are the so-called “Great Awakenings”—eras of mass religious fervor that have always been accompanied by non-rationalism and even anti-rationalism on a huge scale. During these periods, the scientific method of thinking, as formulated by Bacon and Descartes, had to compete openly for public acceptance with unverifiable religious and other spiritual or ideologically-based belief systems. The first of these “awakenings” took place about 40 years prior to the drafting of the constitution, so the drafters had to have been aware of the dangers that mass irrationality presented.

Incidentally, some scholars believe that we are now in the midst of the fourth such religiously-based “great awakening.” According to their calculations, this latest period of mass irrationality began in the 1960s[11] and heavily influenced policy-making in the White House of George W. Bush. Certainly, many of the positions adopted by the Tea Party today clearly have their roots in the beliefs propounded and promoted by anti-rationalist, true-believing, evangelical preachers.

Because the danger of irrationality was so palpable, the founding fathers created three separate, partly-competitive governing institutions so that public debates based on different sorts of inputs would be fostered. Their system of checks and balances soon expanded when the Bill of Rights, which guaranteed freedom of speech and freedom of the press, was added to the constitution.

However, as I have shown extensively in my other writings, neither the American public nor the American policy-makers are likely to get the kind of information necessary to formulate incisive questions from the popular media today[12].

There are two other bodies that have the resources to independently produce the kind of information and assessments that policy-makers need—think tanks and universities. The material produced by think tanks, though, is very often slanted so that it concurs with the interests and concerns of the think tank’s patrons—both financial and otherwise.

In theory, universities have both the resources and the independence to produce the material needed; and America today is home to some of the most respected research universities in the world.

All the leading universities claim to be working repositories of the approach adopted by Jefferson and his “trinity” of the three greatest minds: clear thinking, combined with a respect for evidence, especially inconvenient and unwanted evidence that challenges our preconceptions[13]. Unfortunately, the universities’ claims have too often been false.

To begin with, America has always had isolationist tendencies. In fact, between the end of World War I and a Supreme Court ruling in 1923, it was a criminal offence in about half of the states to teach a foreign language. Today, because of the budget battles in Washington, government support for programmes designed to produce competent foreign policy analysts has been evaporating. The Foreign Language Assistance Program, created in 1988 to provide local schools with matching grants from the Department of Education for teaching foreign languages, ended in 2012. The previous year, Title VI funding for university-based regional studies fell by 40 percent, and it has flatlined since then. If today’s Title VI appropriation were funded at the level it reached during the Johnson administration, it would total almost half a billion dollars after adjusting for inflation. Instead, the 2014 figure stood at slightly below $64 million[14].

That could explain, in part, why a survey conducted by The College of William and Mary found that a third of the international relations specialists at American universities have no working language other than English[15].

In addition, for historical reasons, the teaching of Middle Eastern studies in American universities has become among the most ideologically-influenced and politically-laden fields of endeavor imaginable. For that reason, it is almost inevitable that many if not most of the graduates of departments of Middle Eastern studies, some of whom go on to work in government, end up making huge errors in assessing events in the region.

One major reason for this phenomenon is that, beginning in the mid-1970s, in the wake of the first great oil shock, Saudi Arabia’s national income rose by a gargantuan amount. The Saudis decided to use some of that newly acquired money not only to enhance their political position in the world through such things as massive arms purchases, but also to influence hearts and minds by funding massive educational projects. Madrassahs teaching the fundamentalist Wahhabi version of Islam were set up throughout the third world. It is those schools that have produced the Salafist movement that now provides the bulk of the fighters who have joined al Qaeda, ISIS and other radical Islamic movements.

Other monies were directed at establishing endowed chairs in Islamic and Arab studies in leading universities world-wide. That, in turn, led American Jewish philanthropists both to strengthen AIPAC as a lobbying arm and to endow university chairs in Jewish and Israeli studies.

Getting one of those plum tenured positions implicitly, if not explicitly, meant promoting the political positions propounded by the funders and producing graduates who would adhere to the particular political line being funded. Thus academe, especially when it came to subjects that could somehow relate to issues in the Middle East, became increasingly polarized and politicized. The polarization, in turn, led many politically-oriented academics to focus on subjects and policy issues that are of use to political advocates and lobbyists acting on behalf of the moneybags—instead of trying to seek imaginative solutions to seemingly-intractable problems.

Of course not all university positions are externally funded and not all academics have been “bought” or become politicized. However, the expectations that the universities have of people entering into tenure-track positions in social sciences faculties can be even more bizarre and counterproductive for policymaking.

For several generations, scholars in the social sciences have been waging a campaign to have their work recognized as “science.” For the most part they have done so by using mathematics and statistics wherever possible. Schools of political science and international relations are no different. This has, in turn, led to the widespread introduction of abstract theories and an emphasis on methodologies in many academic discussions of foreign policy…without any concern for whether the discussion has any practical relevance whatsoever.

A no less worrisome recent phenomenon has been the growing demand within both think tanks and universities that international affairs practitioners be assessed by many of the same criteria currently used in the business world. Increasingly, these specialists are being judged for promotion—to a far greater extent than before—by the number of what are now termed “actionable ideas” they can create and the “measurable impact” they have. Academics have always been afflicted by the “publish or perish” demands of universities and foundations. But, while promotion committees publicly deny it, tenure-track teachers today are increasingly also being judged by the number of op-ed pieces they can get published, the number of “likes” registered on their Facebook pages, and the number of television appearances they have made.

Aligned with that phenomenon has been the dramatic change that has taken place in the op-ed pages themselves. When they were first introduced in the early 1970s, their purpose was to provide an outlet for policy-oriented research that was new or had been overlooked by the newspaper’s news pages. However, the degeneration from that worthy purpose into verbal entertainment has been swift. For that reason, it is not at all surprising that this part of the newspaper has now been renamed “The Opinion Page.”

The end product of these syndromes has often been appalling. I have spoken to many university audiences over the years. Inevitably, when I review the state of the Israeli-Palestinian dispute, the discussion gets heated and passionate—especially if devoted activist advocates of one side or the other are present. However, I have found that the more passionately people speak, the less they tend to know. If I ask audience members whether they have read UN Security Council Resolution 242, which is the basic text of the peace process, a majority have not done so…and even those who have cannot even approximately repeat the hundred or so words that relate to resolving the issue of the Israeli occupation. Likewise, if I pull out a map, almost invariably they cannot use a finger to trace a free-hand approximation of the so-called “Green Line” that is the focus of almost all the boundary negotiations.

One would have thought that the Viet Nam War disaster, which was the product of an unfettered belief in the unproven “domino theory” by some of the best minds that American academe has ever produced, would have taught America’s decision-makers a profound lesson. But that has not been the case. Instead, as Allan Bloom later noted in his seminal book, “The Closing of the American Mind,”[16] the classical canon of Western thought was being replaced by irrational philosophizing and skepticism about standards of truth.

This particular form of anti-rationalism appears to be part of a much broader, deeper and more worrisome phenomenon than first appears to be the case. All the evidence points to an ineradicable desire by all too many academics to escape from having to confront, and to find real, effective solutions to, seemingly-insoluble problems. They do so by inventing a jargon whose sole real purpose is to hide the intellectual vacuity that lies behind the words and propositions.

Bloom may have been a conservative curmudgeon. But the basic fact is that by the 1970s, rational thinking’s place in the American intellectual cauldron was being supplanted by a group of usually-unprovable beliefs or ways of thinking that undermined the scientific method’s approach to analyzing the world.

The most famous, or infamous, of these idiocies were post-modernism and post-structuralism, which still have an honoured place in many university humanities and sociology departments.

Instead of seeking factual truth, these theoreticians satisfied themselves with the belief that any narrative is as good as any other. What scientists would call “objective knowledge,” the post-structuralists claimed, is a mere “social construction,” similar to myths and religious beliefs. Furthermore, they asserted that facts are made up; that there is no such thing as natural, unmediated, unbiased access to truth; and that we are always “prisoners of language” (whatever that phrase may mean).

This set of beliefs was backed up by what came to be known as “political correctness.” In its original form, “political correctness” was a legitimate attempt to eliminate the gratuitous insults to women, blacks and other minorities that had become part of everyday speech in the United States. Very soon, however, it deteriorated into an often-ludicrous distortion of the normal use of language, and a descent into nonsense. A classic example of this sort of twaddle was Katherine Hayles’s analysis of fluid mechanics—a dry, “objective” field of study if ever there was one. Hayles was a professor of literature at Duke University and, even more significantly, a peer-approved former president of the Society for Literature and Science. The following is how she ended up treating the field of fluid mechanics:

“Despite their names, conservation laws are not inevitable facts of nature but constructions that foreground some experiences and marginalize others. . . . Almost without exception, conservation laws were formulated, developed, and experimentally tested by men. If conservation laws represent particular emphases and not inevitable facts, then people living in different kinds of bodies and identifying with different gender constructions might well have arrived at different models for [fluid] flow[17].”

In Hayles’s exploration of the imaginary wonderland her neurons had constructed, the very fact that basic laws of nature were tested by men was reason enough to question the validity of those laws. It might be possible to laugh off this kind of writing were it not for the fact that such beliefs did become all too common in academe, influencing students’ (future analysts’ and journalists’) view of the world around them…and especially of the world of problem-solving. This situation then led to the legitimization of all sorts of bumpf, such as an increased use of euphemisms that ended up blanketing discourse and thwarting honest criticism. My personal favourite example of the new euphemisms introduced at that time is the word “spin”[18] in place of more direct and honest words such as “lying” and “fraud.”

Most recently, there has been a growing movement in American academe to shut down critical discourse entirely through the increasing acceptance of the idea that people should be silenced for innocently enunciating what are now termed “micro-aggressions.”[19] Essentially, anyone in a place that accepts this concept can claim that anyone who upsets them should be censored. This, of course, leaves anyone who has successfully used this escapist technique incapable of coping with situations that require clear thinking, because they are untrained to do so. When combined with the general dumbing down of American education, this fear of offending becomes a blanket excuse for thoughtlessness.

The results of this approach can be seen everywhere on the internet. More shockingly, these modes of anti-rationalism can now also be found in places that play a significant role in shaping the public’s thinking. For example, the most recent edition of the AP Stylebook, which influences almost everything written in today’s established American media, includes the following admonition: “…use climate change doubters or those who reject climate science and avoid the use of skeptics and deniers” (my emphasis). Fascinating, is it not, that skepticism, one of the pillars of constitutionally-protected American journalism, should now be denigrated by what is, arguably, the most powerful journalistic institution in America.

Since government policy-making is all-too-often the product of a government/academe revolving door, and because unprovable beliefs and “spin” had become so acceptable in other august government bodies such as the Fed, it should come as no surprise that these sorts of mental shenanigans have also been affecting the American foreign policy community. As things have turned out, the foreign policy community has often been even more susceptible to this sort of foolishness than most other government bodies. And the foreign policy community that deals with Middle Eastern issues seems to be the most susceptible of all.

One of the seminal problems in American foreign policy-making that I have discovered is that once a subject has been added to the policy-makers’ agenda, it is shaped so that its objectives accord with existing work patterns at the State Department or the White House. I have never been able to find an example of the opposite case where work patterns were adjusted in a dramatic fashion in order to achieve an objective.

One reason for this is that, almost always, American leaders’ mental models are influenced by one of the most remarkable aspects of American foreign policy-making. Because they have been taught to be so proud of their own system of governance, I have found that American diplomats are almost universally unable to conceive of any need to change their standard operating procedures (SOPs)—even when those SOPs were installed only as a matter of convenience, and even when they are decidedly counter-productive. In particular, policy-makers seem to feel no need to study political systems with which they may be unfamiliar—and especially to investigate whether those systems may have evolved to meet the specific needs of a particular society.

One major outcome of this sort of mindset is the mindless routinization of working procedures. For example, during the Cold War, foreign policy-making was relatively easy because the belief that Communism had to be contained made it particularly easy for “bounded” policy-makers to act almost unthinkingly. American policy-makers could and did divide the world into two camps—“those who are with us and those who are against us.” So-called “neutrals” such as those countries that labelled themselves as “non-aligned” were then either arbitrarily dumped into one of the two categories based on largely irrelevant criteria such as whether they bought Western or Soviet military equipment and how vociferously they objected to America’s practices in developing societies…or they were just ignored or berated.

This desire to treat the world in a binary fashion led the US to appear catatonic when the so-called “Arab Spring” sprang up. Even today, Americans tend to view the violence in the Arab world in binary terms—Sunni versus Shia, religious versus secular, and democratic versus dictatorial. The fact is that this simplistic approach has led to one disaster after another because the Middle East is not, and has never been, “binary.” There are dozens of shades and inflections of religion. Even more importantly, the most influential divisions in the Middle East are tribal ones—a subject that I have never seen discussed in depth by any American diplomat or journalist whom I have met or whose work I have read.

Added to this is the fact that from the time of Woodrow Wilson onwards, American foreign policy has sought to “transform” the world. As the civil wars in Libya, Syria and Yemen have demonstrated so dramatically, though, the problem that then arises is how should any would-be mediator or international aid specialist approach and cope with the legacy of multi-sided tribalism? The evidence to date is that these events have left the Americans flatfooted.

One reason is that, as has been their wont since time immemorial, many of the parties to the recent bloodshed in the Middle East have been interested solely in tactical transactionalism, not strategic transformation. In other words, many of America’s allies, such as Saudi Arabia, the Gulf States and Turkey, and the tribes that live there, have had no desire to see the region transformed according to some American-created ideal. The very opposite has been true. Since the violence in the region broke out in earnest five years ago, the main protagonists have been preoccupied almost solely with entering into small-scale transactions intended to produce momentary tactical advantages. Thus, the US ended up supporting the Iranian proxy government in Iraq, while opposing the continued existence of the Iranian proxy government of Bashar Assad in Syria and, at the same time, assisting the Saudis to bomb the Iranian-proxy Houthis in Yemen.

America’s relationship with Israel highlights a different problem, one that is totally unlike any that American foreign policy-makers have to deal with elsewhere in the Middle East—or anywhere else in the world, for that matter. Israelis do not behave like the Arabs. However, instead of examining and taking into account the components of Israel’s uniqueness, decision-makers in Washington have almost invariably tended to view Jewish Israelis as a cultural extension of the American Jews with whom they are familiar. Barack Obama is an excellent case in point[20]. Among other things, this leads both the Americans and the Israelis to talk constantly about the two countries’ “shared values” when, in fact, because of their very different histories, there are as many differences in their value systems as there are commonalities. This then leads to otherwise avoidable misunderstandings and tensions.

In some cases, the Americans’ blindness to the history and culture of others, when combined with the routinization of their own working procedures, can be so counter-productive that the actions taken by American officials end up defeating the very objectives that a stated American policy claims to be seeking. For example, the American standard operating procedure for negotiations focusses on what are called “Track I negotiations” (elites talking to elites) and “Track II negotiations” (informal contacts between influential non-officials). That approach, when applied to the Israeli-Palestinian dispute, has repeatedly proven not only ineffective; it has even undermined good government in both the Israeli and the Palestinian camps.

Because of considerations of length, I will confine myself in this article to discussing only a small portion of the impact that this factor has had on the Israelis. Israel is a democracy, but not a “normal” one. As I have shown in detail in my past writings, there is no majority rule in Israel. Instead, for decades, the country has been governed by a succession of federations of minority sectoral political interests. In most cases, each of these sectoral interests is itself a federation of sorts that is controlled by an even smaller minority group. Both in intra-party forums and in coalition governments, these minorities make policy trade-offs with each other to such an extent that the needs and interests of the majority are usually ignored. For a decade, between 1977 and the beginning of the first intifada in 1987, members of this rationalist majority carried out an extensive debate in the media on how the peace process should be pursued. However, they were ignored by successive American governments that were focused on dealing solely with the governing Israeli elite. Eventually, members of the majority simply opted out of the political system—a situation that, in turn, has further strengthened the influence of the extremist minorities on Israeli governments.

There are many other quirks of American foreign policy-making that intensify the syndromes I have mentioned. Possibly the most dangerous is the craving for, and often the objective need for, peer acceptance among academics and policy advisors. One product of the matrix of relations between those who believe themselves to be peers is that, once an agenda item is set, it is almost impossible to remove it from public discussion, no matter how inaccurate or foolish it may be. The classic case was the continued insistence by Washington policy-makers in the 1960s that China and the Soviet Union presented a joint threat—at the very moment when the ties between the two Communist states had reached a breaking point. Even more absurd today is the continued insistence by many that reaching a deal between Israel and the Palestinians is the precondition for stability in the Middle East.

The reasons why incorrect perceptions and stupid ideas continue to influence policies are many. To begin with, there is the problem of “sunk costs,” where policy-makers and advisors feel that they have invested so much prestige in a particular position that they come to believe that they can’t abandon it without suffering grievous personal harm. A variant of this feeling is the fear of criticism from colleagues who have also adopted the same position and who don’t want to give it up. A break from supporting colleagues can mean exclusion from what may be a useful and career-enhancing, self-supporting “club” of like-minded aficionados.

The reverse can also be true. Many people need a constant, known foil whom they can bash in support of their side.

The most often proposed solution to this problem is to call for open debate and/or peer review by scholars. However, even in some of the worst cases, peer review will not work if the peers, or the general audience, remain committed to the status quo or to focusing on the same agenda item. (General audiences are often fooled because they are not always told whether an academic interviewee or public speaker has a hidden political agenda.)

Advocates have any number of techniques that they use to try to undermine whatever policy-making checks and balances may be in place. Their task has been made all the easier in recent years by the dramatic fall in the level of reading comprehension in many Western countries[21], by schools’ toleration of an emphasis on “bottom line” conclusions, by the increasingly-accepted craving for extreme brevity in writing, and by the recent explosion in the use of computer-generated “eye candy” in the form of pictures and dramatic graphics.

Worst of all, attention spans have been falling dramatically. In the late 1950s, Americans were shocked when a study undertaken by the J. Walter Thompson advertising agency found that the average American’s attention span was only 28 seconds. According to a Microsoft/University of Western Ontario study, attention spans had fallen to 12 seconds by the year 2000 and, following the introduction of smartphones, to only 8.25 seconds in 2014. That, incidentally, is less than the 9 seconds for which a goldfish can concentrate.[22]

A Pew Foundation study has also found that the current generation of internet consumers lives in a world of “instant gratification and quick fixes” which leads to a “loss of patience and a lack of deep thinking”.[23]

To top things off, Philip Tetlock, who has researched why “experts” are so often only about as accurate at forecasting events as a chimpanzee throwing darts at a yes/no target, has found that some of these so-called “experts” will go as far as altering data in order to “support the side.”[24]

Thus, it is now clear that the Americans’ capacity (capability and will) to formulate thoughtful, rational foreign policies is low; and a supportive environment for high-quality, thoughtful criticism of policy decision-making in real time is, at best, small and possibly insignificant.

Under these circumstances, the use of skilled rhetoric takes on ever-greater importance. The policy-explainers’ attempts to influence both the experts and the general public make use of one or more of the following, well-documented techniques. Over the years, I have found that the techniques of persuasion listed below have provided me with a highly-useful checklist that I now use regularly to assess the validity of any policy proposal (and also any other form of non-fiction writing).

Does the proposal:

  • Use false equivalents? One favourite example everywhere is to compare anyone to “the Nazis.” Another is the misuse of the “fairness doctrine” in order to give one position more value than it deserves.
  • Make selected observations so as to exclude anything that might weaken a given thesis?
  • Engage in “confirmation bias” and “disconfirmation bias”? This includes “tuning in to” or “tuning out” things we do or do not want to hear. It includes refusing to listen to or accept what people who make us uncomfortable have to say. This affliction is particularly common among media outlets that have taken strong editorial positions, among advocates, and among those for whom rationalism is not a priority.
  • Appeal to people’s beliefs, not their rationality?
  • Rationalize—try to find reasons for having adopted a position and continuing to support a position that is objectively wrong?
  • Neglect probability, including exaggerating threats and risks?
  • Engage in or play to people’s negativity bias? There is a natural human tendency to pay more attention to bad news than to good news. However, in some cases, when people are craving good news it is also possible to play to their optimism bias.
  • Deliberately try to create a bandwagon effect by carefully choosing an audience and then claiming that there is a majority or a consensus decision on a particular issue?
  • Focus on the current moment without any reference to context or history?
  • Play the game of fact fixation by taking one fact, whether relevant or not, and then building an entire argument around it? One particularly egregious example was the often-repeated claim during the 1960s and 1970s, in media outlets such as Time Magazine, that because the children of the Soviet nomenklatura were willing to pay huge prices for smuggled jeans, they wanted the American way of life.
  • Induce within people the sense that they are already suffering from information overload so that they will pay less attention to vital details?


A Possible Solution

Clearly, the now well-documented drop in the teaching and use of reading comprehension skills that I mentioned earlier, the drop in attention spans, and the general incapacity of most people to spot the sorts of fallacies listed above have left many, if not most, Americans without the tools they need to competently critique the policy proposals being raised[25]. They thus become vulnerable to accepting as truth whatever professional spinmeisters produce. No less significantly, the absence of acute reading and listening comprehension skills has enabled idiocies of all sorts to flourish in academe.

Quite obviously, it will be impossible in the short or medium term to undo the damage that has been wrought by this mass collapse in comprehension skills—or to prevent future forays into intellectual and analytical folly.

At the same time, however, the situation has reached such a critical point—too many lives and too much treasure are being lost—that some sort of near-term solution must be found and embraced quickly. The greatest single need is for the adoption of some sort of semi-automatic policy circuit-breaker.

Possibly the best remedy for controlling the number, if not eliminating all, of these intellectual fallacies and institutional stupidities can be found in a particularly perceptive lecture that the philosopher Karl Popper delivered at Cambridge University[26]. In it he set out criteria for distinguishing between science and pseudo-science. The most important of these criteria was what he called the “falsifiability” of a theory. He held that any viable scientific proposition must include within it a listing of the empirical conditions that would invalidate it. Otherwise, he declared, it is pseudoscience. If this same test is applied to political analysis by both the writer and the reader of an assessment or policy proposal, it can be used to determine whether what might best be called “pseudoanalysis” has been employed in the production of the work in question.

When I reviewed all the American policy failures that I could reasonably research in depth, I found that in each case a test of falsifiability had not been built into the plan when it was first proposed—and no outside critic had formulated such a test. I am not suggesting that foreign policy-making should have the assuredness of Einstein’s Theory of Relativity. Far from it. However, it is important to remember that one of the reasons why Einstein’s theory was accepted for serious consideration was that, together with its mind-bending assertions, Einstein included a test that could be and soon was used to try to disprove some of his most outrageous-sounding ideas.

My reason for suggesting that the same technique be used in assessing policy-making is that including tests of this sort in a review of a policy suggestion can do one crucial thing: It forces the mind to remain open to new data or to ideas that may end up conflicting with the theory upon which the policy is based. That openness then makes it possible to obey the cardinal law undergirding the scientific method—the requirement that all theories and all the results of a policy be treated as being provisional and subject to alteration at any time. Only in that way can a policy-maker make critical mid-course corrections before a bad policy leads to yet another disastrous conclusion.
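The semi-automatic “circuit-breaker” proposed above can be sketched, very loosely, in code: a policy carries an explicit list of Popper-style falsifying conditions, and a periodic review checks the observed results against them, flagging the policy for mid-course correction when any condition is met. The sketch below is purely illustrative; the `Policy` class, the condition names, and the numbers are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch of a Popper-style policy "circuit breaker".
# Each falsifier is a named empirical condition that, if observed,
# invalidates the theory behind the policy.

@dataclass
class Policy:
    name: str
    # Maps a human-readable condition to a test over observed data.
    falsifiers: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def review(self, observations: dict) -> list[str]:
        """Return the falsifying conditions met by the observed data."""
        return [label for label, test in self.falsifiers.items()
                if test(observations)]

# Hypothetical policy with two invented falsification criteria,
# stated in advance, when the policy is proposed.
policy = Policy(
    name="example-intervention",
    falsifiers={
        "violence rose after year one":
            lambda obs: obs["incidents_y1"] > obs["incidents_y0"],
        "cost exceeded twice the estimate":
            lambda obs: obs["cost"] > 2 * obs["estimate"],
    },
)

# Invented observations; both conditions are met, so the review
# trips the circuit breaker and the policy is flagged for revision.
tripped = policy.review({"incidents_y0": 100, "incidents_y1": 140,
                         "cost": 120, "estimate": 50})
print(tripped)
```

The essential point the sketch captures is that the invalidating conditions are written down before the policy is implemented, so a later reviewer cannot quietly move the goalposts.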


[1]Niall Ferguson, Kissinger 1923-1968: The Idealist, Penguin Press, 2015

[2] Yuval Noah Harari, Sapiens: A Brief History of Humankind, Harper, 2015

[4] Max Weber, The Protestant Ethic and the “Spirit” of Capitalism and Other Writings, trans. Peter R. Baehr and Gordon C. Wells, Penguin, 2002

[6] Herbert Simon, Models of Man, Wiley, 1957

[7] This concept has been further explored by Daniel Kahneman in his book Thinking, Fast and Slow, 2013

[8] Gregory F. Treverton and C. Bryan Gabbard, Assessing the Tradecraft of Intelligence Analysis, RAND Corporation, 2008

[9] Gregory Treverton, “What should we expect of our spies?” Prospect Magazine, July 11, 2011

[10] Emmanuel Todd, “The Final Fall: An Essay on the Decomposition of the Soviet Sphere” (La chute finale: Essais sur la décomposition de la sphère Soviétique), 1976.

[11] Robert Fogel, The Fourth Great Awakening and the Future of Egalitarianism Chicago: University of Chicago Press, 2000; John B. Carpenter, “The Fourth Great Awakening or Apostasy: Is American Evangelicalism Cycling Upward or Spiraling Downward,” Journal of the Evangelical Theological Society, 44/4 (December 2001), p. 647.

[12] See, for example, Jim Lederman, Battlelines: The American Media and the Intifada, Henry Holt, 1992

[14] Charles King, “The Decline of International Studies, Why Flying Blind Is Dangerous,” Foreign Affairs, July/August 2015

[16] Allan Bloom, The Closing of the American Mind, New York: Simon & Schuster, 1987

[17] N. Katherine Hayles, “Gender Encoding in Fluid Mechanics: Masculine Channels and Feminine Flows,” Differences: A Journal of Feminist Cultural Studies 4(2), 1992, pp. 31-32

[18] The term first appeared in print in its current, politicized form in a 1984 New York Times editorial

[19] Greg Lukianoff and Jonathan Haidt, “The Coddling of the American Mind,” The Atlantic, September 2015

[25] In the 1890s, farm boys in Kansas could get no more education than the eighth grade. Nonetheless, they seem to have learned basic skills better than many PhD holders today. The grade-eight graduation exam can be found here:

[26] Karl Popper, “Science: Conjectures and Refutations,” in Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge, 1963
