Meeting

CFR Master Class With Paul Stares: Foreign Policy Crises—Learning From the Past, Adapting to the Future

Tuesday, October 27, 2020
Speaker

Paul Stares
General John W. Vessey Senior Fellow for Conflict Prevention and Director of the Center for Preventive Action, Council on Foreign Relations; @CFR_CPA

Presider

Shannon O'Neil
Vice President, Deputy Director of Studies, and Nelson and David Rockefeller Senior Fellow for Latin America Studies, Council on Foreign Relations; @shannonkoneil

Paul Stares discusses how the United States prepares for potential foreign policy crises, including how the U.S. has historically tried to avoid being blindsided by threatening developments and what it should do to be better prepared in the future.  

The CFR Master Class Series is a biweekly 45-minute session hosted by Vice President and Deputy Director of Studies Shannon O'Neil in which a CFR fellow takes a step back from the news and discusses the fundamentals essential to understanding a given country, region of the world, or issue pertaining to U.S. foreign policy or international relations.

O'NEIL: Thank you, and good afternoon, everyone. Welcome to the CFR Master Class. I'm Shannon O'Neil, and I'll be presiding over our session. Our subject today will be why the U.S. gets blindsided by foreign policy crises again and again, what lessons we have learned from the past, and what we can do to avoid being blindsided in the future. To lead us and guide us through all this, we have Paul Stares with us today. Many of you know Paul. Paul is the General John W. Vessey senior fellow for conflict prevention and director of our Center for Preventive Action. Paul has written or edited nine books touching on this very broad subject. His latest book is called Preventive Engagement: How America Can Avoid War, Stay Strong, and Keep the Peace. He has also been involved in several CFR Task Forces, so he's been thinking about these issues for many years. We are lucky to have him enlighten us, or at least kick us off on the discussion that we will have over the next forty-five minutes. So I'm going to turn it over to him for opening remarks of about eight to ten minutes, and then we'll open it up to all of you for a broader discussion. Please, Paul, go ahead.

 

STARES: Well, thanks, Shannon. And thanks to all of you for joining us in this conversation today. As everybody knows only too well, we're in the midst of a national crisis that originated overseas. Our failure to manage this crisis is mostly due to what we did, or didn't do, domestically. But part of it also had to do with our preparedness, or lack thereof, to respond to a foreign health threat or risk to our nation's well-being. We've been aware of this risk for some time. We even got a foretaste of it in 2014 with the Ebola crisis, yet we were still caught off guard and off balance when the coronavirus outbreak started. As a result, we did not respond well in the early days of the crisis. And as you said, Shannon, this is not the first time this has happened. There have been many examples, particularly after the Cold War, of our being blindsided by, or reacting poorly to, emerging threats abroad. Let me just give you some obvious examples: the Iraqi invasion of Kuwait in 1990, the breakup of Yugoslavia in the early 1990s, the Rwandan genocide, North Korea's nuclear weapons program, various crises between India and Pakistan, 9/11, the Arab Spring uprisings, Russian interventions in Georgia, Ukraine, and Syria, the rise of the self-proclaimed Islamic State, cyberattacks, and Russian meddling in the 2016 election. Each time we were to some extent surprised, or, even when intelligence warnings meant we were not strictly surprised, we were nevertheless unprepared for those particular events. On many occasions, this led to postmortems that usually prompted reforms in how we collect or analyze intelligence, along with technological fixes and organizational tweaks to improve our early warning and response capabilities. I will give some examples. In the 1990s, we had the creation of the CIA's Political Instability Task Force, with its emphasis on forecasting civil unrest, particularly in fragile and weak states. And 9/11, as many of you know, led to the better integration of domestic and foreign intelligence on terrorist threats, as well as the creation of the Department of Homeland Security and the Office of the Director of National Intelligence. The Iraqi weapons of mass destruction (WMD) debacle led to better tradecraft protocols to avoid groupthink and other analytical biases. In recent years, we have also seen the creation of mission centers for the intelligence community to integrate and focus intelligence collection, tasking, and analytical work. Given these precedents, I would not be surprised if a Biden administration carries out its own inquiry into the coronavirus pandemic. Nevertheless, we keep getting blindsided by these events overseas. And when we're not strictly blindsided, as I said earlier, we fail to respond adequately.

 

So what I thought I would do is briefly explain why this keeps happening, and also what we might be able to do to make it happen less often in the future. I don't think I have to convince this audience of the need to do this. The risk environment is becoming more complex and multifaceted. We have growing competition with great powers, regional instability, and the continuing threat of substate and transnational actors. And then there's a whole new class of threats that don't derive from human agency, whether they are health threats or environmental sources of danger. Okay, so why does this keep happening? Those who study the problem usually distinguish between failures of intelligence and failures of policy response. The former is a failure of the intelligence community to warn senior decision-makers about an impending threat in a clear, compelling, and timely fashion so that they can respond to neutralize that threat in some way. Intelligence failures, however, can derive from many sources. They can be a failure to gather intelligence on a particular part of the world for a variety of reasons, or they can result from deliberate efforts by an adversary to conceal its activities. They can also stem from human error in interpreting ambiguous evidence, such as mirror imaging and other biases. There is also a reluctance to cry wolf too often, since doing so can undermine the credibility of your warnings. And perversely, admitting uncertainty can also undermine how seriously decision-makers take your warnings.

 

In contrast, policy failures occur when decision-makers fail to respond to the warning they receive. This happens when decision-makers are just too busy or too distracted to take on board what they're being told. They also have their own cognitive or political biases that lead them to dismiss warnings, devalue them, or simply ignore them. And of course, they're also mindful of the costs and consequences of responding to a false warning, which can often deter them from taking early action. So all of this means that there are many reasons why we keep getting blindsided or just don't respond very well. What should we do in the future? To me, the solution lies in moving away from the mindset that believes we can contain or significantly reduce strategic uncertainty. It's a kind of Holy Grail that always seems to be within reach, but that we can never quite get hold of. Instead, we should accept and even embrace the fact that strategic uncertainty can never be eliminated, though it can be managed and mitigated. This is not to say that trying to improve our intelligence capabilities, the quality of our analytical tradecraft, or the speed at which we can inform senior decision-makers of particular threats is a waste of time and money. It's not, but it can only take you so far.

 

So what do I mean by accepting and embracing strategic uncertainty? So far, we have been following a mindset that goes back to Pearl Harbor. It's about responding to threats that have already formed or been identified with a high degree of confidence as threats to national security. This means that the early warning and early response mindset is essentially reactive. We're always responding to threats once they have formed and once we are sure that they actually represent a threat. And that means we're always subject to these problems, whether on the information side or the policy side. So what does this mean in practical terms? Let me focus on three initiatives, and I'll be happy to explain them further in the Q&A. First, I think we should complement our early warning systems with a dedicated national risk assessment process. This would take stock of the universe of threats to our security and gauge both their likelihood and, most importantly, their potential impact. And this can be carried out over certain timeframes: short, medium, or long term. Why focus on impact? Because the only way to make decision-makers more sensitive to the information they receive is to convey to them what is at stake. Just telling them that some faraway country faces a coup, a crop failure, or a terrorist incident they should pay attention to probably won't make an impression. They have to understand why it is important to U.S. interests and what the second-, third-, or fourth-order consequences of that event could be. This approach allows us to rank risks and therefore prioritize them, which is what we do in our annual Preventive Priorities Survey. The most serious risks, moreover, can receive more attention, both in terms of intelligence collection and analytical work. To me, this represents the most rational way to task the intelligence community.

 

Secondly, we should design our foreign and security policies in terms of their ability to mitigate external risks, both upstream and downstream. What do I mean by that? Upstream risk reduction refers to measures that lower the likelihood of risks materializing, whereas downstream measures are designed to reduce their negative impact if and when they do materialize. By the way, we use this basic upstream and downstream risk mitigation approach in other areas of public policy, whether it's health risks, public safety, policing, and so on. Finally, and I think this is particularly important, we should professionalize our crisis preparedness. Currently, it's very ad hoc and improvised. Our senior officials receive little or no training in crisis management. If they're lucky, they might take part in an exercise or simulation. But this doesn't happen very often. It typically gets shoehorned into their busy schedules, and often they have to take time out of weekends to do it, which is not great. Moreover, what is learned in these exercises is rarely captured and transmitted as lessons for subsequent decision-makers. Crisis management techniques are not taught in our professional schools and academies. And so we end up approaching every crisis more or less de novo, as if it were entirely new. We have to keep reinventing the wheel. That's basically what I would suggest we need to do. I'll end there, and I'd be happy to answer any questions you might have.

 

O'NEIL: Great. Well, I want to dive into how we solve this problem, but first, I thought I'd go back to some of the examples you gave of where we've been blindsided. I saw maybe three buckets. Some were state actors, like Pearl Harbor, where states attacked us; some were nonstate actors, like 9/11; and others were sort of nonhuman actors, like COVID-19 or perhaps climate change issues, tornadoes, fires, and things like that. Could you maybe talk a little bit about what we may be better or worse at across these three buckets? Does it make a difference if it's a state actor versus a nonstate actor versus a nonhuman actor, and how has that evolved over time? I want to get your take.

 

STARES: Yeah, that's a terrific point. And you're absolutely right that there are obviously different types of threats. Each of these threats represents a different level of concern in terms of the damage or harm it can do to the U.S. State actors are inherently easier to warn about just because you can observe them better. You can design indicators and warning systems, kind of like a checklist, in which you look for identifiable evidence that they are doing certain things that might be threatening to you. You can parse what they're saying, and you can look at what their troops are doing or whether they're being mobilized. That's clearly the easiest target. That doesn't guarantee that we will always detect what's going to happen in advance.

 

As I said, some of those intelligence failures didn't involve state actors. Nonstate actors are harder to assess the risk from. You don't always know who they are, you don't necessarily know their motives, and they may be speaking in languages or using communication channels that are just very difficult to monitor. Some of these threats are also inherently difficult to forecast, such as the Arab Spring, for example. As you know, that was essentially triggered by the actions of a lowly fruit seller in Tunisia who set fire to himself. And I bet the leader of Tunisia did not know that morning that he was at risk of losing his job that night or maybe later that week. So this is just an inherently harder task to warn about. And then, of course, you've got the non-agency threats. Those threats derive from biological or other natural processes and are not the result of someone deliberately trying to harm us. They're even harder to predict. So if you're trying to design a system to forecast this, it is inherently difficult. That's why I think it's better to focus on the circumstantial risk factors you can monitor, assess whether the situation is looking more or less dangerous, and produce a kind of overall picture of whether the likelihood of something is increasing or decreasing. You're not making a prediction; you're just saying that there's a shift in the evidentiary base toward something, and therefore that should trigger you to be more sensitive and improve your general level of preparedness in response. But it's a difficult thing. And we shouldn't assume that we can, as I say, develop some perfect forecasting technique. Anybody who says they can do that is selling a bill of goods, frankly.

 

O'NEIL: Right, let's take our first question from a member.

 

STAFF: Certainly, we will take our first question from Joan Spero.

 

Q: Thank you for this interesting introduction. I'm wondering if there are any success cases. Are there situations where we have been able to identify a threat, address it, and mitigate the problem? Thank you.

 

STARES: Yeah, great question. I neglected to say that what we have done in terms of early warning hasn't been one long litany of failure; the systems we have put in place have helped on many occasions to warn of impending activities that posed a threat to us. We have responded, and the stars have aligned in those circumstances. I think we've often had good warning when it comes to, for instance, the India-Pakistan crises in the early 1990s. In fact, Richard Haass was involved in some of these efforts, in which we saw that the risk of a flare-up in Kashmir was growing, and we sent diplomats to the scene and were able to calm down what was going on. So that's an obvious case. I think there have been some potential coups in Africa that we anticipated, where we were able to see through intercepts what was likely to happen, and we were able to intervene and reduce the likelihood of things escalating and becoming dangerous. I can probably think of some others. I'm sure there have been many terrorist attacks thwarted because of our ability to detect them while they were being developed or were about to be launched. So I don't want to belittle the terrific work that the intelligence community does or the dedication of many senior officials who are doing excellent jobs and trying their hardest. It's just that on many occasions, as I mentioned, we have been blindsided. But that shouldn't obscure the fact that we've had successes too.

 

O'NEIL: Let's take the next question.

 

STAFF: We will take our next question from Merrill McPeak.

 

Q: Thank you, Paul, for an interesting pitch. Appreciate it. You just said that out of the various kinds of threats we face, state actors are the easiest to deal with. And you mentioned a long string of failures concerning our ability to deal with state actors, and you left out some good ones, by the way.

 

STARES: You would know, General McPeak; you're a professional. And so you're better informed than I am about some of these threats.

 

Q: I would say the Chinese army's intervention in Korea, which was a surprise to MacArthur, still has me shaking my head. I don't know how we missed that one. And how about India's nuclear weapons tests, and, in a way, the collapse of the Soviet Union? And by the way, I don't see how we correct this by spending more money. We already spend, by an order of magnitude, more money than anybody else collecting information. So you suggested something different: could we take a different approach using this risk assessment business? But I wonder if you think we're well organized to do this work. And I don't mean just the fourteen, or fifteen, or x number of organizations that constitute the intelligence community, although that ought to be something to worry about also. I mean the fact that we've separated intelligence, as though it were a commodity or a product. Someone said, "To see and not to act is not to see." We have separated the information from the people who have the power to act on it. And it's always bothered me, as if we took a boxer, blindfolded him, put him in the ring, and expected him to fight. But that's essentially what we've done with the intelligence community. Anyway, over to you.

 

STARES: So, you know, people often refer to the church-state division between the intelligence community and the policy community. It's a deliberate separation to avoid the politicization of intelligence, political biases, and other problems. As you allude to, the downside is that it can be difficult for the intelligence community to convey what it sees to a skeptical group of policymakers who may be unfamiliar with the sources, and it may be difficult for them to grasp the significance of what they're seeing. I've always been in favor of the way the Brits do it, and not just because I originate from the UK. The Brits have an organization called the Joint Intelligence Committee, consisting of both senior bureaucrats and senior intelligence officials, who come up with a joint assessment to present to senior officials. It's a way of conveying to senior officials why it's politically, not just militarily or otherwise, important to heed these warnings. I particularly like that approach. In my book, I've advocated that something similar, such as a strategic assessments directorate, be formed in the National Security Council (NSC). What I'm worried about is that there's been a real rupture in relations between the policy community, the White House in particular, and the intelligence community. The next administration will have to try to bridge that divide and restore mutual faith. And that's going to be a big challenge. If there is a Biden administration, I would like to see some of the mechanisms that were employed in the past restored as well. We used to have a national intelligence officer for warning, but that position was dismantled, unfortunately. I would like to see a regular process in which the head of the NSC can convene meetings when they believe there's a significant threat. There also used to be a regular lunch meeting between the national security advisor, the director of national intelligence, and the director of the CIA to have a kind of informal discussion about emerging threats, which was also very helpful. But all that, I think, has fallen by the wayside. So I think there are ways to bridge this divide that you refer to, General McPeak.

 

O'NEIL: Let's take the next question.

 

STAFF: We will take the next question from David Scheffer. Mr. Scheffer, please go ahead.

 

Q: Great discussion, Paul. I wonder if you could say something about the information flow that comes across policymakers' desks with respect to prevention. What I never really saw come across the desk, in my experience, is a good psychological profile of the individuals posing a threat through their leadership of governments, militias, or nonstate entities, whereby policymakers evaluating a prevention strategy would have a good psychological understanding of how the adversary thinks and what their personality traits are. That would give you a much better sense of how to deal with the human beings involved in posing threats that you would have an opportunity to prevent.

 

STARES: Another great question. My understanding is that the CIA and maybe other parts of the intelligence community do psychological profiles, particularly in advance of summit meetings or conversations between the president and foreign leaders. They will give a sort of background on the person the president is speaking to. I've heard of other profiling at the CIA in the past, and there are some folks around town who continue to do this, having been trained by the CIA. But whether that's still going on, or whether it's a regular part of the preparation and, you know, informing of decision-makers, I just can't tell; I'm not privy to that. But I suspect it tends to be done more with senior leaders in foreign countries and less when it comes to the leadership of substate groups or terrorist organizations, where, frankly, there may not be that much information. But I think you make a great point. It's not just the psychological profile; it's also the cultural background. We are routinely guilty of mirror imaging, believing that others think and act like us when, in fact, their personality or cultural background will lead them to do something completely different.

 

O'NEIL: Next question, please.

 

STAFF: We will take our next question from Anders Åslund.

 

Q: Thank you very much for the great presentation, a very interesting theme. I would like to pose a rather unorthodox point: all these big bureaucracies are wrong. We have seen the CIA being wrong about virtually everything regarding the Soviet Union. It thought that nothing was going on until 1990. It didn't believe in Gorbachev's reforms; that was simply too big a change. But you make your career in the CIA by being a good bureaucrat, not by being a good intelligence officer. I see the best part of U.S. intelligence in the Bureau of Intelligence and Research at the State Department and the National Intelligence Council, because they are small and they think, rather than dealing with huge amounts of data that concern nobody. So isn't this to say that we need a small group of intelligent people, rather than one hundred thousand people sitting somewhere, we don't know where?

 

STARES: At the risk of offending a lot of friends in the intelligence community, I tend to agree with you; we spend a huge amount of money. Frankly, I don't think the American public understands how much we really spend on intelligence; the figure was classified for many years. I think it's close to a hundred billion dollars now, if not more, covering the technical side, the analytical side, and so on. It's a lot of money. And whenever I visit Langley or the Office of the Director of National Intelligence, and these are like huge university campuses, I ask myself: What are these people doing? Do we know how they spend their day? Are there so many issues that require this number of people working on them? How nimble can they possibly be? Are they duplicating work, or competing with one another in a sensible way? So I share your skepticism. The Bureau of Intelligence and Research (INR) has done remarkably well. As you and others know, it was one of the more skeptical voices in the lead-up to the 2003 invasion of Iraq. It consistently questioned the analysis out of the CIA and elsewhere about the state of WMD development in Iraq. And I think it is a testament, as you say, to what a small, informed group of experts who specialize in a particular country and language can do.

I'm reluctant to say that we should just gut the intelligence budget and take it down to a few million. But nevertheless, I think there is a lot of duplication. Unfortunately, I'm not totally convinced that the post-9/11 reform that created yet another layer of bureaucracy through the Office of the Director of National Intelligence was a good idea. It has added a lot of bureaucracy. As General McPeak alluded to, there are seventeen organizations that make up the intelligence community, and I think there's a lot of overlap and duplication in what they do. So there are ways in which we could streamline activities, and I hope that we can look at this a little more rationally down the road. For instance, I would not want to see my suggestion for a risk assessment approach bring about the creation of yet another organization or yet another set of bureaucrats to work on this. That should, frankly, come out of what exists now.

 

O'NEIL: You know, Paul, let me follow up on that, because I think it is a really interesting thread. You mentioned in your remarks the two areas where we fail. One is the intelligence failure, and I think Anders just made a really interesting point about bureaucracy and how that leads to intelligence failure. But the other one you mentioned was a failure in the policy response. Could you talk a little bit about government structures? Which ones are more vulnerable to failure in policy response? Is democracy more vulnerable to a failure in the policy response to some of these crises due to political concerns, election concerns, or a divided government and the like? Are the authoritarians better at this or not? You can see how China and others didn't get information on SARS or other things because people were scared to give bad news to those up the chain. But when they did decide to act, they were much better this time around in Wuhan, if we just look at the health side. So talk a little bit about that policy response and how democracy is or isn't a strength or a weakness.

 

STARES: I think authoritarian governments are probably more exposed to errors of judgment that derive from the politicization of intelligence. There have been famous cases in history in which authoritarian leaders have been blindsided and fallen victim to their own groupthink or biases. Stalin, before the German invasion of the Soviet Union in 1941, is a classic case. And so is Saddam Hussein; I think he was a victim of his own biases and preferences. Democracies are healthy inasmuch as they can create multiple sources of information, debate those sources of intelligence in an honest fashion, and provide a balanced assessment. The problem is that you have to sign off on a consensus assessment. You need to get seventeen or so agencies all more or less agreeing. We've seen this with what are known as NIEs, National Intelligence Estimates, which become a huge negotiation within the intelligence community. People add annotations and footnotes showing their dissent on certain things, which makes a senior leader reading the estimate wonder: Why should I believe this, when there seem to be so many competing voices? But I think, on balance, that is still better than the opposite. Where authoritarian countries have more of an advantage is in being able to move the government in a rapid, almost instantaneous fashion to deal with threats. That can have negative consequences as well. But on the whole, an authoritarian government tends to be a little more responsive. It is somewhat easier to steer than a democratic one, with its multiple arms of government and agencies that all have to get in line and approve a certain line of effort. So I think there is a trade-off in that respect too.

 

O'NEIL: Let's take another question.

 

STAFF: We will take the next question from Jim Bergeron.

 

Q: Hello, thanks very much, Paul. I really enjoyed the conversation. Jim Bergeron, political advisor at NATO Maritime Command, outside London. I was particularly struck by your comment because it speaks to my life, which is that sometimes there's the problem of getting the political response. This was just addressed in the discussion of policy-response failures. But let me push it in one other small direction. When we're not in the Pearl Harbor model, where there isn't a certainty but a probability of harm coming, and there are consequences to acting now, in your experience, how do we get senior leaders and decision-makers into the right space to deal with that? Because there are huge institutional pressures, you know, not to act until it's obvious, and late.

 

STARES: Right, terrific question. And I should have added that the challenges are even greater when it comes to warning and response in multilateral organizations and alliance frameworks like NATO. I could give you the whole background on some of the challenges that occur in those settings. But your point is a very good one: why would decision-makers be any more responsive to risk assessments than to early warning information? I think the key is to have essentially an a priori buy-in from decision-makers, in advance of the intelligence community's warning, where they have already conveyed to the intelligence community what their top threats or top risks are and ranked them in different tiers of relative priority. So when the assessment or the warning comes in, and it's tagged with a certain level of priority in terms of its impact, it is more likely to get their attention. I draw an analogy with hurricane warnings. You know, if you issue a general weather report that there's going to be a bad storm coming, what does that mean? It could mean anything. What do I do? But if you say it's actually a category three or category four hurricane warning, it gets people's attention; they know what to do. There's a whole protocol for responding to those situations, and institutions are triggered and prompted to do certain things under that kind of warning. I think you can do the same with risk assessments. Then decision-makers are not trying to figure out: Well, why should I worry about this particular problem, and what have I got to lose? Why do I have to act now? They have already understood that this fits into a certain category of warning. I say a lot more about this in my book, and I'd be happy to follow up with you further on the reasoning behind my argument.

 

O'NEIL: Let's go on to the next question.

 

STAFF: We will take our next question from Hani Findakly.

 

Q: Thank you very much for this discussion, Paul. It's very good. I'm just wondering about these two issues of intelligence failure versus policy failure under the structure we have right now. You are advocating the creation of some kind of risk assessment mechanism. Is the current structure flexible enough to make that assessment possible? Or would the biases and the competition among the different agencies interfere with the process, slow it down, and make the response and the identification of risk priorities more of a random event rather than one we can assess rationally?

 

STARES: So if I understand your question correctly, you're asking whether the process I advocate would be any less subject to those kinds of cognitive biases?

 

Q: I'm wondering whether it solves that issue by bringing those issues to the surface quicker, for a more rational response.

 

STARES: To me, it sensitizes decision-makers to a latent risk before it materializes into a formed threat. It allows them to understand the nature of the threat they're worried about before it materializes. There are obviously going to be differences of opinion about how to assess those risks, how likely they are, and what impact they could have. But I think it's better to have that kind of assessment before a crisis hits than during one, because that just slows you down. You are never going to eliminate the kinds of inputs that can distort the analysis, but if the system is set up in a certain way, you can reduce them. As I said, I've seen this in the British case, and in the Australian and Canadian cases, where they have a more integrated assessment mechanism, rather than the intelligence community producing something and shipping it off to decision-makers. That's kind of like the boy who used to deliver your New York Times to your doorstep in the morning: he didn't know whether you were even going to open the package, much less read the paper. A more integrated approach would help reduce the challenge of convincing decision-makers of the significance of the information they receive. I'll be happy to follow up on this with you, Hani.

 

O'NEIL: Unfortunately, we have reached the end of our time. I sincerely hope that whoever occupies the White House come next January listens to some of these ideas and restructures accordingly. But Paul, I want to thank you on behalf of all of us for guiding us today. Thank you very much.

 

STARES: Thank you. And I'm always happy to field questions by email and follow up with any of you who have additional questions.

 

O'NEIL: Great. So to all of you who have more questions, please reach out to Paul. And for those of you in this Master Class Series, please join us on November 10, when Sheila Smith will be talking about U.S.-Japan relations. Until then, everyone, please stay well.
