EconBuff Podcast #27 with Rex Pjesky
Dr. Rex Pjesky talks with me about some recent research on mask mandates and their effectiveness. Dr. Pjesky walks us through what the research claims to find and what the implications of the study are. We discuss the magnitude of the estimates and the nature of exponential growth. We explore the particular strengths of the research, and Dr. Pjesky explains how the authors controlled for certain features in their study. Dr. Pjesky argues the research has several important limitations including the difference between a mandate and actual mask use, the importance of how interactions are changed by masks and mandates, and other restrictions that might go into place along with mask mandates. Finally, Dr. Pjesky outlines key economic research ideas at play here including policy endogeneity, omitted variable bias, and confidence intervals, and we explore how critical it is to have clear interpretations of the findings. Near the end, I challenge Dr. Pjesky with a surprise question and we work out a key way to interpret the study's findings.
Photo by Adam Nieścioruk on Unsplash
Transcript
Stitzel: Hello, and welcome to the EconBuff Podcast. I'm your host Lee Stitzel. With me today is Dr. Rex Pjesky, Professor of Economics at West Texas A&M University. Rex, welcome.
Pjesky: Good to be here.
Stitzel: So Rex, today we want to talk about some mask research. We're going to start with --- let me be careful with --- some research on mask mandates. We're going to make that distinction, I think, several times today. So, I'm going to read the title of the paper here. And then I'm going to have you, kind of, just give us a brief sense of what happens here. And then we'll just go from there. All right? So, the title of the paper is “Association of State-Issued Mask Mandates and Allowing On-Premises Restaurant Dining with County-Level COVID-19 Case and Death Growth Rates.” This is from March 12th. So, we're not so interested in the restaurant thing; if we get into that, [then] we get into that. Can you, kind of, summarize what the research is on the mask mandates themselves?
Pjesky: Yeah. I mean, the paper has been out about a month or so, and I saw it briefly. A dear colleague of ours asked me to look at it and give my opinion. And basically, what it does --- it's a quasi-experimental look at county-level mask mandates. And what it tries to do is [that] it tries to measure the association between a county government issuing a mask mandate, and then the number of cases that happened in that county after the mandate was established. So, you know quite a bit about quasi-experimental methods. So, you know, I expect you have a lot of thoughts on the paper as well. But, you know, the paper really interests me because, you know, at first I gave it a really, really quick reading (not very carefully). And I saw that the result was that after 100 days there was a 1.8% reduction in cases in the counties that had issued the mask mandates. And I thought: well, that's really statistically significant. But, you know, I thought that's really small. There'd be all kinds of, you know, confounding factors that, you know, might make us conclude that: well, maybe mask mandates, you know, have absolutely no effect, you know, whatsoever; because, you know, a 1% reduction in cases after 100 days sounded pretty small to me. But I read some more commentary, you know, about the paper that caused me to go back and read it extremely carefully. And so, on a second reading I discovered that I misread it the first time. And actually, what this paper is finding is not a 1.8% total reduction at the end of 100 days; it's a 1.8 percentage point reduction in the daily growth of cases, compounded over the 100 days, all right? And that's a huge effect, all right? That's a huge effect. So, you know, this paper's gotten, you know, quite a bit of discussion around it.
It's almost certainly going to be an incredibly important piece of research moving forward, as governments think about what sort of policies we should adopt in response to pandemics; [meanwhile] both as COVID is, sort of, maybe winding down hopefully, and in preparation for the next one that, you know, almost inevitably will happen sometime in the future. So, you know, to me this looked like a really, really important piece of work, you know, due to the nature of the data, and the nature of the methodology. So, let me, kind of, you know, tell you a little bit about how the paper was set up. And people who are listening or watching this might not know about these techniques, you know. I'll kind of fill them in as well. So, what the authors did is [that] they looked at the 3,000-4,000 counties that there are in the United States, and they looked at their responses across the pandemic; [moreover] they identified the counties that established a mask mandate, and then they found counties that also removed a mask mandate. And they gathered data on the number of daily cases that were reported by county, you know, during basically the first nine or ten months of the pandemic. And what they wanted to do is [that] they wanted to find out what happened to cases after a mandate was established in a county, compared to what happened to cases in counties that did not have a mandate. So, they looked at all of these counties, and they found out which counties issued a mandate --- when these counties put down a mandate, like, whether it's like May 12th or June 5th or whenever, you know, whenever it was --- and then what they would want to do is [that] they would look at what happened to cases leading up to the mandate. And then they would track what happened to cases in these counties, compared to counties [that] didn't have a mask mandate, after the mandate. So, like at 20 days after, 40 days after, 60 days after, 80 and 100 days after.
So, the idea here is that with the proper controls, the counties that put on mask mandates could be compared to the counties that didn't have mask mandates as, sort of, an experiment, all right?
Not a controlled experiment, not even a natural experiment, but what researchers call a quasi-experiment; because obviously we can't, you know, randomly assign counties mask mandates. And then, you know, once they had that data prepared, the authors, you know, thought very carefully about what they wanted to control for. And they controlled for a lot of things that, I think, most people would think would need to be controlled for. So, they wanted to hold constant whether or not large gatherings were prohibited in these places, [and] they wanted to control for whether or not, you know, in-restaurant dining was permitted. They even controlled for the number of tests that were being administered. So, you know, some counties might be testing more heavily than others. And that might, you know, sort of, make the number of cases reported in those counties differ (or vary a little bit) across counties, in a way that would [or] could confound these results. So, you know, they put all that together, and, you know, sort of, shook it up with, you know, regression magic to try to find out, controlling for all of these things, what impact the mask mandates had in the counties that established them. And it turns out their result --- what they came up with --- was actually a huge number. So, from 0 to 20 days after a mask mandate was established in a county, the daily growth in cases was reduced by 0.5 percentage points, all right? Then from…
Stitzel: So, talk about why it's so important to highlight percentage points versus percent...
Pjesky: O.K.
Stitzel:…because that would change things dramatically if they didn't.
Pjesky: Yes. Yeah. That's good. Anytime you're reading a piece of research in this context --- and it doesn't make any difference what the topic is --- it's really, really important to know the difference between a percent change and a percentage point change, all right? So, you know, let's think about taxes, you know, for a second. I'll make up some numbers that'll illustrate the point. Let's say that we have a 50% tax rate, all right? Let's say that we want to reduce taxes by 20%. Well, if we reduce taxes by 20%, we'll end up with a 40% tax rate, O.K. [= .50 – (.20 x .50) = .40]. If we say that we're reducing taxes by 20 percentage points, then we would end up with a 30% tax rate [= .50 - .20 = .30]. So, you know, depending on the scale of your numbers, the difference in interpretation between a percent difference and a percentage point difference could be humongous.
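The tax example here can be worked through in a few lines of code. The 50% rate and 20% cut are the made-up illustrative numbers from the conversation, not figures from the paper.

```python
# Illustrative numbers from the conversation: a 50% tax rate.
rate = 0.50

# A 20% (percent) cut scales the rate itself.
percent_cut = rate - 0.20 * rate   # 0.50 - 0.10 = 0.40

# A 20-percentage-point cut subtracts 0.20 outright.
point_cut = rate - 0.20            # 0.50 - 0.20 = 0.30

print(f"after a 20% cut:       {percent_cut:.2f}")  # 0.40
print(f"after a 20-point cut:  {point_cut:.2f}")    # 0.30
```

The two readings of "reduce taxes by 20" land ten percentage points apart, which is exactly the ambiguity being flagged.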
Stitzel: Yeah.
Pjesky: So, you know, daily cases generally --- at the start of a pandemic --- might be rising by maybe 5% a day, or 8% a day, probably not much more than that over long periods of time. But if you're talking about just an example of 5% growth in daily cases, then there's a huge difference in the interpretation of, say, a 1% reduction in cases and a 1 percentage point reduction in cases. Because if you have a policy that causes case growth to be reduced by 1 percentage point, then you've gone from a 5% growth rate to a 4% growth rate.
Stitzel: Right.
Pjesky: O.K. And if you've studied any kind of finance, or understand the magic of compound interest at all, [then] you know the difference between 4% and 5% is huge.
Stitzel: Yes.
Pjesky: It is huge, all right?
Stitzel: Or the difference between 5-10%.
Pjesky: Yeah. The difference between 5% --- yeah, the difference between 5% --- and 1% less than 5%, which would be 4.95%. That, I mean, that adds up too. But that's much, much smaller.
Stitzel: Yes.
Pjesky: All right. It's much smaller. The difference between those two numbers going forward --- it takes a long time to see the difference. But, you know, the difference between 5% and 4% growth becomes evident really, really quickly. So, from, you know, 1-20 days after the mask mandate, daily growth in cases was reduced by 0.5 percentage points; [whereas] all the way up to 80-100 days, the difference between having a mask mandate and not having a mask mandate was 1.8 percentage points.
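The compounding point can be checked with a quick sketch. The 5%, 4%, and 4.95% daily rates are the hypothetical figures used in the discussion; the 100-day horizon matches the study period mentioned.

```python
# Compound a daily case-growth rate over 100 days and compare:
# 5% versus 4% (one percentage point less) versus 4.95% (one percent less).
days = 100
at_5_percent = 1.05 ** days        # ~131x the starting level
at_4_percent = 1.04 ** days        # ~51x -- the percentage-point cut
at_4_95_percent = 1.0495 ** days   # ~125x -- the mere percent cut

print(f"5% daily:    {at_5_percent:.1f}x")
print(f"4% daily:    {at_4_percent:.1f}x")
print(f"4.95% daily: {at_4_95_percent:.1f}x")
```

A one percentage point cut leaves daily cases at well under half the baseline level after 100 days, while a one percent cut barely moves the needle, which is why the distinction matters so much here.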
Stitzel: Hmm mmm.
Pjesky: So, if you're talking about, sort of, a baseline case where you imagine a pandemic spreading through a county, the county that doesn't institute the mask mandate, all right, might have a 5% growth rate, all right; whereas at the end of the 100 days, right, the county that does institute the mask mandate would have a 3.2% increase in daily cases. And if you take that out over 100 days, which was the time period of the study, [then] the difference between those two growth rates ends up being huge. So, if you assume that you start out with --- and these numbers are not in the paper; I've made them up as an example just so I could see for myself the magnitude of the results that this research, sort of, implied. If you start out in the county with 100 cases (100 new cases a day), and that grows at 5% for 100 days, [then] at the end of that 100 days, you're, you know, [going to] have a quarter million cases or so.
Stitzel: Yes.
Pjesky: Right. You have a very large number of cases. If you follow the pattern of the estimates of the effectiveness of the mask mandate in the paper, all right, [then] at the end of that 100 days that particular county has about 110,000 or so cases. So, the implication of the research put forward in the CDC paper --- if, you know, if you strictly interpret it the way that it's written --- is that a mask mandate, controlling for the number of tests, controlling for dining, controlling for legal restrictions on large gatherings, controlling for all of these things…
Stitzel: Hmm mmm.
Pjesky:…controlling for population of course, [and] controlling for all of these things, will result in about 60% fewer cases at the end of the 100 days after the mandate. Now that is just a tremendous result, a tremendously large result. And it's, you know, much different from the result that I thought the paper was giving at, you know, my first cursory reading of it, which was, you know, 1.8%.
Stitzel: On net for the whole time period?
Pjesky: Yeah, on net for the whole time period. You know, you're going from, you know, 250,000 cases to, you know, 245,000 cases --- somewhere in that range. So, it's just not a very big impact. 5,000 [is a] big number, all right, but still not anywhere near as large as the 60% impact that the paper implies. So…
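The back-of-the-envelope scenario Dr. Pjesky describes can be sketched as follows. Only the 100-cases-a-day starting point, the 5% baseline growth, and the 0.5 and 1.8 percentage point reductions come from the conversation; the intermediate 20-day steps are an assumed ramp for illustration, not estimates from the paper.

```python
# Per-20-day-block reductions in the daily growth rate, in percentage
# points: 0.5 and 1.8 are the figures discussed; the middle three
# steps are an assumed interpolation.
reductions = [0.005, 0.009, 0.013, 0.016, 0.018]

def total_cases(start_daily, growth_by_block, block_len=20):
    """Sum daily new cases over compounded per-block growth rates."""
    daily, total = start_daily, 0.0
    for g in growth_by_block:
        for _ in range(block_len):
            daily *= 1 + g   # compound the daily case count
            total += daily
    return total

no_mandate = total_cases(100, [0.05] * 5)                   # ~274,000 total
mandate = total_cases(100, [0.05 - r for r in reductions])  # ~121,000 total

print(f"no mandate: {no_mandate:,.0f}")
print(f"mandate:    {mandate:,.0f}")
print(f"reduction:  {1 - mandate / no_mandate:.0%}")
```

With these assumptions the totals land near the quarter-million and 110,000 figures mentioned, for a reduction somewhat under the 60% mark; the exact gap depends on how the ramp between 0.5 and 1.8 points is filled in.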
Stitzel: Yeah.
Pjesky:…you know, this is a study that, you know, is getting a lot of attention because it deserves it. I think that's a humongous --- it's a humongous result. Nothing against them, because I'm sure they're getting massive quantities of email. But I've emailed the authors asking for the data, because I'd like to take a look at this myself, you know. I've done some replication in my, you know, in my past. I kind of want, like, to look at this data for myself. I haven't heard from them yet, but again, they're probably swamped with communication and stuff like that. So, I don't...
Stitzel: Yeah you have to be charitable to them, right?
Pjesky: Yes, of course. Of course.
Stitzel: So, let me step in here really quickly and just talk about that. So, the nice thing about this is [that] this is the type of thing that economists do. This is the type of thing that applied microeconomists do. It's the type of thing that applied microeconomists interested in policy do. Those are the things that you and I do. So, the reason that you actually reached out to me and said let's do this podcast is because this is the kind of thing where, if they did reply to you and send you the data, [then] you and I would have these estimates redone and tested. And we would know what it looks like if we were to model it in a matter of hours, probably, because both of us have research in peer-reviewed journals that does these types of things --- you know, not COVID and mask mandate specific things, but exactly these types of things --- that deal with interventions, that deal with timeframes, [and] that deal with policies. You did a beautiful job laying out quasi-experimental earlier. I was actually going to interject and ask you to explain what that meant, but you hit that right on the head. So, I thought that was really good. And I think you've laid out the picture really well for us. And I think what we want to turn to now is, sort of, what are some strengths and what are some weaknesses here. And I'm afraid I may have a curve ball for you, because a thought has occurred to me since you and I chatted about potentially doing this. But before we do that, I want to talk about this magnitude stuff for a minute. And we've spent some time talking about this and puzzling over it. If you have a result that says (I forget what number you calculated) 64% or something, [then] if it adds up to that, shouldn't that be the headline of the paper?
So, talk to us a little bit about --- maybe let the listeners peek behind the mask (that was a Freudian Slip), [or] peek behind the curtain and see what's happening behind the research, and talk to us a little bit about --- why you think that might be, or any comments you have on that. It's O.K. if you don't go super in depth.
Pjesky: Well, I mean, yes, of course --- I mean, that's the story here. And it makes me extremely doubtful that my interpretation of the words on the page of this paper is correct; because, you know, my first reading was that it's a small effect, 1.8%. We can just ignore that, and this probably is not very important. You know, it's probably not a very important piece of research to guide policy, you know, going forward in reference to mask mandates. But then, you know, when I recalculated and I found out how large the effect is, you know, I can't believe that this isn't, you know, being trumpeted in that context more.
Stitzel: Yeah.
Pjesky: Because, you know, if I had a result that big, [then] that would be, you know, my introductory sentence. It would be prominently in the, you know, in the summary. It would be in the title even of the paper [that] mask mandates reduce COVID cases by 60% after 100 days.
Stitzel: Yeah.
Pjesky: That is what I would say. And I've played around, you know --- I don't think we have time to get into this, but I, you know, played around --- with other interpretations that could be had with this research, and none of them fit the actual description in the paper. Now this is a report produced by the CDC. It's not a journal article, so their actual mathematical model is not in the paper. So, I'm having to rely on their verbal description of it. And, you know, to be charitable with them, I assume they know what they're doing, and they're clear in their language. But, you know, if somebody like you and I had the actual equation that was estimated, you know, interpretation wouldn't be a problem. We'd automatically know…
Stitzel: Uh huh.
Pjesky:…what their time period, you know, what their time period is. But, you know, other commentators that have talked about this paper --- you know, I've never seen, sort of, an arithmetic model laid out for this like, you know, like I did. But they all talk about it being a substantial effect and a huge effect. I mean, it's so huge right now, [that] to be honest with you, I'm not sure I believe it.
Stitzel: Yeah. And I think that may be why…
Pjesky: Yeah. I mean,
Stitzel:…that's not in the title.
Pjesky: Yeah. A 60% reduction in cases --- you would see that. That would be --- you wouldn't need a study, all right? That would be apparent. So, you know, one state or county has a mask mandate, [but] the neighboring state or county does not, O.K.? And, you know, one county is completely inundated, and the other county has a much, much, much better track record; because, you know, hospitalizations and deaths are going to follow these case numbers as well. So, you know, I would expect hospitalization to be significantly less. And the paper also tracks deaths, and they get very similar results, which is absolutely not surprising whatever the method is. [If you] use the same method, [then] you're going to come up with very similar results for cases, and other measures of the severity of COVID, no matter what they are.
Stitzel: So, I've read the paper in detail too, because you sent over the insights article on it as well. So, I'll confirm for the readers [that] there are several points --- the percentage point thing, the daily thing, and even some more technical things that we don't really want to get into here, things like fixed effects --- that all point to the interpretation that you're talking about being the one that they intended. So it's not just sloppy writing; it would almost have to be [that] they do it in every place that you would look for it in that paper. So, it seems like they picked their words carefully. So, I just want the listeners to understand, you know, we're doing this as researchers. And this is part of how the research process needs to go --- that people check and double check your work. We believe in the results of science because there are people out there checking to see if you did anything wrong. So, you know, maybe the listeners are a little bit more used to, like, a political thing, where people have some kind of bias or narrative that they're pushing. But this is how science goes. So, I do want to say I agree with you. I can't read it any other way. And I spent time trying to read it another way to see if I could. I even made some proposals to you as well, [that] maybe it means this, and then, yeah, I just can't find that in the paper. So, that's a big part of what we're doing here. Do you want to start now with, sort of, what you think is really nice about the paper? Or do you want to get into some limitations? Which order do you want to do that in?
Pjesky: Yeah. Well, let's go with what's really nice about the paper. The, you know, the first word in the paper's title is “association.” So, the authors never claim that there's a causal link here.
Stitzel: I think it's huge.
Pjesky: Yes. The authors, you know, also admit that, you know, the study is limited by a few factors. But what's really nice about the paper, I think, is that the data that's obviously present in the paper is extremely, extremely detailed. The controls that they picked out, among the ones that were available to them, were probably the best ones to pick out. So, you know, somebody reading this paper for the first time without any knowledge of that might say: well, what about the number of tests that were performed? And the authors could say: well, we controlled for that, so that's not a factor in these results. And, you know, what about other large-scale restrictions on behavior that could coincide [with] different times, [as] different counties might try a different approach? What about that? So, maybe you're picking up some of these other restrictions like limitations on dine-in restaurants, and the authors would say: we controlled for that, all right? So, you know, given the framework that they had --- this is a really, really nice paper. So, methodologically I have no concerns with it, in terms of what the authors did. Now the limitations of the paper I think are many, and they're actually pretty severe in some sense. What I would like to see in this research is not necessarily a measure of mask mandates, but I'd like to see a measure of mask usage, all right? So, the claim of the paper --- and I think everybody's belief --- is that when the government puts on a mask mandate, all right, the government tells people: you know, starting Monday or whenever, you must wear a mask under these conditions, all right? You know, the mask mandates turn out to be very similar: [wear a mask] anytime you're inside. Some are a little more relaxed than others. Some might say things like: you must wear a mask if you're ever going to be within six feet of someone, or, you know, something like that.
But the basic idea of the mask mandate is to have people wear masks in any situation where they might be around other people, [and] close enough for the mechanism of the spread of COVID to come into play, and for COVID to be spread from one person to another. And I think that, you know, everybody, sort of, understands the mechanics of how masks work, and there's a pretty broad consensus among everyone that masks work. But that's a different question than whether or not mask mandates work, and the exact mechanism that could cause a mask mandate to be associated with this kind of drop in cases.
Stitzel: Yeah. Let's talk about that just a little bit, because I do want to be as nuanced as we can be. I think the consensus is that masks have some effect, some moderate effect. And I think we understand the, like, mechanistic process. You and I had both listened to and, sort of, compared notes on a podcast earlier this week with Tyler Cowen on EconTalk with Russ Roberts --- highly recommended, or as you would say, self-recommending --- where he says, you know, if you cover your mouth when you sneeze, [then] you believe that masks work. But that leaves open a lot of room for nuance, as I told you when we were preparing this podcast. Yes. But if I were standing close to you and sneezed in your direction, but put my hand over my mouth, you still wouldn't be very happy with me, right? So they have limitations. The implementation of them --- there's a lot of nuance there. And we, sort of, leave aside other types of concerns. But because of that, how would you react to a comment like this? I'll probably butcher his last name, I apologize, but Jeremy Horpedahl was in a debate with the University of San Diego --- Ethics and I forget the name of the institute there. And he cited that something like 80% of people seem to be wearing masks consistently, you know, in surveys. What consistently means is going to be a real big point of debate. And surveys have that important limitation. So, talk a little bit about why you think that point that you made is so important --- the difference between mask wearing and mask mandates, and, you know, people, sort of, [being in] compliance. And then maybe I'll follow that up with a comment if you don't get all the way to the thought that I had.
Pjesky: O.K. well, the, you know, the concern that I have is that, you know, we don't really want to measure the effect of mask mandates. We really want to measure the effect of mask usage.
Stitzel: Yeah.
Pjesky: And to the extent that masks are helpful, then we want to encourage their use, all right? A mask can be a really, really cheap technology used by human beings to stop other human beings around them from getting sick, all right? So, you know, the effectiveness of that is a really, really important thing to tie down. Everyone, I think --- or at least the authors of this paper, and I would agree with them, all right --- starts with the belief that masks work, all right? We start with the belief that mask mandates are going to change people's behavior in some way.
Stitzel: Hmm.
Pjesky: I think this is one of the reasons why the authors can only use the word association and not, you know, some phrase that would imply causation. So, you know, if the statistics of this paper hold up, O.K., [and] if they are indeed true, all right --- let's take the paper seriously and say that the results are true --- then I think it's important from a policy perspective to understand the mechanism of why these mask mandates seem to work. Because if the public takes mask mandates seriously, then there's going to be some kind of change in behavior that is sparked by the mask mandate. And I would really like to know what that is. Are people actually wearing masks more? Are they wearing their masks more correctly, all right, which might also make a difference? Or is it changing people's behavior in other ways that might be associated with a mask mandate that's not directly associated with masks? So, maybe the mask mandates scare people and they don't go out anymore.
Stitzel: Hmm mmm.
Pjesky: All right? Maybe the mask mandates give people confidence and they go out more.
Stitzel: Yeah.
Pjesky: Right. It could go either way. We're not sure. So, a mask mandate could be a part of a bundle of policies and attitudes that are taking place in a discrete moment in time that radically changed people's behavior enough to affect the course of the disease. And if we're going to be serious about policy in this realm, [then] we will make much better policy if we thoroughly understand those mechanisms.
Stitzel: Hmm mmm.
Pjesky: Now it's too late, all right? And this would be a rather big undertaking. I have absolutely no idea if anyone actually did this. But if I could go back to, say, May of 2020, you know, and have, you know, unlimited resources basically to do this kind of research, [then] I'd like to set up cameras in restaurants, and grocery stores, and other places where people congregate, all right? And I would like to just film their behavior, you know, as anonymously as possible. And [I would] have, you know, armies of graduate students analyzing these films, you know. Or, you know, maybe I'd analyze the films and let my graduate students do the math. I don't know. But, you know, analyze these films and see if there is a discrete change in people's behavior when the mask mandates occur, all right? Is there a change in the distance that people keep from each other? Is there a change in the number of people wearing a mask? Is there a change in the number of people that are going through grocery stores, restaurants, [or] to and from wherever people congregate, all right? Is there a discrete change in people's behavior that is associated with the mask mandate, that could also cause these numbers that represent the change in the course of the disease? Because the one thing that we know is not happening is that we never had a situation where leading up to a mask mandate no one wore a mask, and then after the mask mandate everyone wore a mask.
Stitzel: Yes.
Pjesky: O.K. So, we don't know for sure, all right. I think there have been surveys done on this, but I wouldn't be convinced by any survey, you know; because if it suddenly became illegal for me not to wear a mask, [then] I'm much more likely to tell someone that I'm wearing a mask.
Stitzel: Yeah.
Pjesky: So, you know, I'd have --- anyone would have --- good reason to doubt the results of surveys on this. So, in short, we don't know how these mask mandates change people's behaviors...
Stitzel: Right.
Pjesky:…all right, at exactly the time of the mask mandates. And without that information about what these results mean, we come up with some results that are not going to be entirely convincing to me. So, you know, I might have my beliefs on the effects of masks. I might have my beliefs on the effects of mask mandates, and any one piece of empirical research is only going to have the capacity to move those beliefs that I have by just a little bit, all right? And without that piece of information in this paper, you know, my belief about the effectiveness of both masks and mask mandates --- it just moved a little bit.
Stitzel: Hmm mmm.
Pjesky: So, you know, if the paper would have said something like: well, we've measured people's interactions, and we found out that traffic through grocery stores remained constant before and after the mask mandate. We found that, you know, compliance with wearing a mask went up by so many percent, discretely, at this time. If there was a statement like that in this paper, then I would be thoroughly convinced…
Stitzel: Yeah.
Pjesky:…that mask mandates caused a radical change…
Stitzel: Hmm mmm.
Pjesky:…all right, in the course of the COVID pandemic with the mechanism that this paper implies.
Stitzel: Yeah.
Pjesky: Meaning that it's actual changes in behavior with wearing masks.
Stitzel: Yes.
Pjesky: All right. That's what's at work here. It's not changes in people being more careful or more cautious in other ways in conjunction with wearing a mask a little bit more than they otherwise would.
Stitzel: O.K. So, you've come right up to the point of, I think, the biggest critique; although I've since thought of another one that I think is interesting. It's not as big. But, so, tell us what policy endogeneity is. And tell us how policy endogeneity would work in this situation, and why this study is likely to be endogenous. But it's almost in a weird way, right? Because policy endogeneity in this case would, sort of, work one direction. But we might actually think that the effect will go the other way --- which is to say the things that trigger the policy kicking on will also be the things that will most change exactly the type of interactions that you just talked about. So, define policy endogeneity for us a little bit, and then expound on that kind of idea.
Pjesky: Well, I mean, endogeneity in this context would be, you know, the reality that there was, sort of, an unmeasured or unobserved event or process that is causing changes in two variables, O.K.? And if we just measure the two variables that we can see, all right, we will come up with perhaps erroneous estimates of the causality, or even the association, between the two variables that we see. So, you know, sort of, in a wonky way, that's what endogeneity is. It shows up with almost any empirical research involving human beings that you could think of, you know? You and I could sit around and think of all the ways that endogeneity might affect this piece of scholarship, or virtually, you know, virtually any piece of scholarship. And it can really throw you; you know, it can give you conclusions that can take your thinking in radically wrong ways. Let me give you a really classic example involving disease that I think relates in a really weird way to this. You know, there have been times in the past, when there have been pandemics, where residents of a village, or a town, or whatever will observe the incidence of the disease rising at the same time that they see physicians (or people that would care for people that had diseases) showing up in their town. And they look at that and they think: well gee, these people are bringing in the disease, so let's get rid of them.
Stitzel: Right.
Pjesky: All right? When in fact, that's not what happened. That's not what's happening at all, right? The doctors are following the disease.
Stitzel: Yeah.
Pjesky: It's not the other way around. So, endogeneity can be this, you know, this unforeseen factor that influences two observed factors. It can also be things happening that cause reverse causality, all right? So, endogeneity is basically any force that causes us to misinterpret the relationship between two variables. Now with this mask mandate, all kinds of things could be going on. You know, citizens in a county that are already wearing masks are going to be much more likely to be in favor of a mask mandate.
Stitzel: Yeah.
Pjesky: O.K. So, if you have a county where everybody is already wearing masks, and the county officials say: well, let's just have a mask mandate, [then] nobody is really going to say no, all right? Because people are generally going to think: well, I'm already wearing a mask, so that's fine, all right? And people in counties that wear masks with great frequency are also going to call on their politicians to institute mask mandates. Because if you wear a mask, [then] you believe it works. And so, you think everyone else should wear it as well for, you know, for various reasons. And then the opposite might be true as well. If you have a county with a lot of people that, you know --- proportionately or relatively speaking --- think that masks don't work, all right, then the officials (or the politicians) in those counties are going to have a much more difficult time passing a mask mandate to begin with. So, that's just one way that endogeneity could show up in a problem like this.
Stitzel: So, that's a bang-on description there. So, you know, my thought here --- and I want to get your thought on this, because I could genuinely see it going the other way, right? But, you know, my thought here is [that] there's a lot of the principal observations that economists should bring to this; [namely that] we can't estimate out the way that these things will grow, because behavior will change, right? So, this is the classic example: as you mandate seat belts, [then] people drive faster, [and] they get into more fatal crashes because they feel safer. Like, that kind of idea is one of the principal things that economists can bring to this type of discussion. So, one way we could explain why there's an overestimate (we'd call it a bias), right, in these estimates that they're putting forth [is that] they're measuring this mask mandate, but those mask mandates are coming along with just the normal course of reaction that people are going to take in a particular area. So, really the mask mandate is (excuse the pun) a symptom of the fact that people in this area are beginning to take things more seriously. And the local government might literally be reacting to pressure they're getting from people in their community going: it's getting out of hand, [and] we need to begin to react to this kind of thing. And so, you know, the listeners at home hear me when I say this: there's a lot of nuance there. That's not saying that mask mandates and these kinds of things don't have any effect. But just --- it would be baked into that estimate, and that's why the things that you brought up about the way that interactions change are so very important.
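[Editor's note: a minimal simulation can make this bias concrete. Here a single hypothetical confounder --- how seriously a county takes the outbreak --- both raises the chance of a mandate and independently lowers case growth, so a naive comparison overstates the mandate's effect. All numbers are invented for illustration, not taken from the study.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical counties

# unobserved confounder: how seriously a county takes the outbreak
seriousness = rng.normal(size=n)

# serious counties are more likely to adopt a mandate...
mandate = (seriousness + rng.normal(size=n) > 0).astype(float)

# ...and independently reduce growth; the mandate's true effect is -0.5
true_effect = -0.5
growth = 5.0 + true_effect * mandate - 1.0 * seriousness + rng.normal(size=n)

# a naive difference in means attributes the seriousness effect to the mandate
naive = growth[mandate == 1].mean() - growth[mandate == 0].mean()
print(round(naive, 2))  # much more negative than the true -0.5
```

[Note that in this sketch the bias makes the mandate look better than it is; flipping the sign of the confounder's effect on adoption --- mandates triggered by worsening outbreaks --- flips the direction of the bias, which is the "other way" Stitzel raises above.]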
Pjesky: Yeah. I mean, that's one of the reasons why it would be extremely beneficial to have an accurate and reliable measure of people's actual individual behavior.
Stitzel: Right.
Pjesky: You know, the day before and the day after the mask mandate --- did it change people's behavior? You know, sort of, along the same lines --- it isn't necessarily, you know, endogeneity, but related to that --- another problem with this research is that these mask mandates happen at different points in time of the natural process of the disease. So, the disease has to get so bad in order to trigger citizens to request (or to demand) a mask mandate from the county officials, basically. So, you might have some, sort of, systematic pattern that causes different counties to pass these mandates, O.K.? And the independent course of the disease in each county, all right, is going to fall at slightly different time frames, all right? Because this didn't hit areas of the country at the same time with the same severity. So, it was bad in New York and, you know, places like that before it was really bad in Texas, and other states, and counties around the union. They didn't see big upticks in cases until, you know, later. So, that really causes problems, I think, you know, perhaps for, you know, the estimates. So, controlling for people's actual behavior, again, would be absolutely crucial…
Stitzel: Yeah.
Pjesky:…for this, you know, for this study's result to be taken even more seriously.
Stitzel: So, let me bring up a couple of ideas that would get at other limitations. One thing that occurs to me that's exactly the same thing that you're just now talking about, but sort of in a different dimension, is [that] you have to model a counterfactual. I mean, that's the important part of quasi-experimental [work]. And the nice thing is they have different counties at different times, and that should provide them some variation there that would allow the counterfactual. But the problem is the counterfactual in this case is a growth rate of the cases --- which would be a non-linear thing. It'd be exponential. And this is why we're very concerned about this. This is why many economists said this is the kind of thing that you have to take seriously. It's exactly the kind of reason that a 0.5 percentage-point decrease could have the kind of total effect that you're talking about over 100 days. But I also think that makes it very difficult to model what the counterfactual would have been. So, talk to us a little bit about how they used, I think, what, 60 days prior, and how they use that to model the counterfactual, and why the different counties are important, and whether you think that does enough to address forming what the counterfactual is. Maybe start us out with what a counterfactual is, since I've said that 17 times.
Pjesky: Well, I mean a counterfactual would be, you know --- we're not interested in comparing county A to county B, where county A had a mask mandate and county B did not have a mask mandate. What we're really interested in, all right, is finding out what would happen in county A…
Stitzel: Hmm mmm.
Pjesky:…in one state of the world where they had a mask mandate, and county A in another state of the world where they did not have a mask mandate, O.K.? This is what a randomized control trial [does], all right. [It is] the kind of thing that they're able to do quite a bit in the natural sciences. This is the true experimental technique that is very useful in many contexts for getting at the truth of causality. So, if you are wondering how a chemical reacts with another chemical at varying temperatures, [then] it's extremely easy for you to set up an experiment where you hold everything constant, all right, and the chemicals are exactly the same in each case. And you vary the temperature, and you measure the reaction. And the difference that you get --- if you've done your experiment properly, all right --- is the effect of temperature on the reaction. In the social sciences that's an incredibly rare thing to see, you know, in any research that anybody does, because people are different, all right? We don't have time machines, all right? So, we can't run time forward where we do one treatment, [then] go back with the same people in exactly the same circumstances and run it again with a different treatment. We cannot do that with people. So, the best that we can do, all right, is try to build comparisons that we think are the same, all right? A really good example of this would be what they did to test the efficacy or the effectiveness of the various vaccines.
Stitzel: Hmm mmm.
Pjesky: All right? So, they get large numbers of people, O.K.? And they're able to measure their demographic characteristics, and anything else that they might think is important. And they give, you know, half of them the actual vaccine, and they give the other half of them a placebo or something that's not the vaccine. And they follow those two groups of people through time and see how many of each group gets COVID. And then they compare those two, and that will give you a reasonable approximation of how effective the vaccine is under those conditions.
Stitzel: Right.
Pjesky: All right? This research is not really like that. It's not really like that at all, O.K.? Because the treatments of the mask mandate are not randomly assigned.
Stitzel: Yes.
Pjesky: O.K. Some counties are going to be more likely than others to institute a mask mandate for many different reasons, all right? And those different reasons are really, really hard to control for. So, when we compare county A to county B, all right, we are really comparing apples to oranges to a certain, you know, degree. So, that makes measuring the differences in outcomes in these two counties always, you know --- there's always doubt about it, all right? There's always a reason to think that there might be some factor that we haven't been able to control for, [or] some factor that we haven't been able to observe or measure, that's actually driving the differences in our estimates, as opposed to the treatment itself.
Stitzel: So, in this case, right, even those numbers that you were referencing where you modeled out what the difference would be with the different growth rates --- it holds that daily growth rate constant. And if that daily growth rate --- you cited, like, 5% --- if that would naturally go down, because even in the absence of a mask mandate people will change their behavior --- whether that's mask wearing, more distancing, just staying at home, or, you know, being more sensitive to public places and people that, you know, might be ill or something like that --- [then] if that rate naturally goes down, in this study they pick that up as a part of the effect of the mask mandates, if the timing is correlated with the intervention date, right? So, I think you've picked that up really nicely. Any comments about that before I, sort of, spring one on you?
Pjesky: No.
Stitzel: Because I know we didn't talk about that.
Pjesky: Yeah. The timing of this is extremely important, all right? And sort of, the daily fluctuations of a pandemic going through the population is going to, you know, have daily waxing and waning, right? So, it's very, very unlikely that a pandemic is going to grow at a constant 5% a day for 100 days.
Stitzel: Yeah.
Pjesky: All right?
Pjesky: The disease itself is going to speed up sometimes. It's going to slow down sometimes based on all kinds of factors, you know --- good weather, bad weather, you know, whatever, you know, whatever else [such as] payday, everybody going grocery shopping, you know, [and] things like this are going to have these subtle impacts that could also add up.
Stitzel: Yeah.
Pjesky: O.K. When you're talking about periods of 100 days, O.K., like this research does, [then] you're going to pick up all of those kinds of things. And at best, those kinds of things are going to add up to be noise in your estimates...
Stitzel: Yeah.
Pjesky:…all right, and make them really, really uncertain. So, you know, anytime you do statistics, you get a point estimate. You also get a standard error. The standard error of these estimates is going to be extremely shaky, O.K.? My guess is that they're probably not going to be robust, you know, to different estimation techniques. They're probably not going to be robust if you use different time periods.
Stitzel: Yeah.
Pjesky: You know, now in the study's defense, all right, it was really, really interesting to me that when they compared counties that passed mandates and counties that did not pass mandates, there wasn't a significant difference in the daily growth rates leading up to the mandates. So, looking at, like, 20 days to zero days before the mandate was passed, there wasn't a statistically significant difference in the pattern of growth rates for counties that were about to pass a mandate, [compared] with the counties that were not about to pass the mandate. And that's really interesting. It strengthens their results.
Stitzel: Yes.
Pjesky: It strengthens their results quite a bit that they were able to report that.
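[Editor's note: the magnitude point from earlier in the conversation --- how a 0.5 percentage-point cut in the daily growth rate compounds over 100 days --- is just arithmetic. A back-of-the-envelope check, assuming a constant hypothetical 5% baseline rate:]

```python
# constant-rate compounding over 100 days (a simplification; real daily
# rates wax and wane, as discussed above)
no_mandate = 1.05 ** 100    # 5.0% daily growth
mandate = 1.045 ** 100      # 4.5% daily growth after a 0.5-point reduction

# after 100 days the no-mandate path ends roughly 61% higher
print(round(no_mandate / mandate - 1, 2))  # 0.61
```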
Stitzel: So, it just occurred to me --- another thing, because you mentioned, like, weather. Weather would be a huge factor here, just based on the timeline of when mandates started coming down in, you know, late spring [and] early summer. That's a huge effect potentially in the way that the virus evolves. And you also have a situation here where the COVID, you know, came in these waves that we're always talking about, and there is a sort of natural falling off that's happening. And initially I had given them credit for that, because I thought: well, they've got these (what are called) day fixed effects. You can lay that out for us here in a moment if you want. But now I'm realizing those day fixed effects may actually be a day fixed effect relative to the intervention date. And so, they may actually not get credit for that, which I think would be a huge criticism there. So, tell us what day fixed effects are really quick, and then whether you think I'm right or not about that being a potential huge problem, given that this research is happening in summer potentially.
Pjesky: That is a good point. I hadn't thought of that. What a fixed effect is, is something that you'd want to put into your regression that you hope controls for some factors that you cannot observe…
Stitzel: Hmm mmm.
Pjesky:…or that you otherwise can't control for. So, you know, when you set up a regression like this --- whether it's with the, you know, 3,000-4,000 counties, or whether it's with the, you know, 50 or 51 state units that we look at in researching, you know, various topics in economics --- [then] often you'll put a fixed effect for those states, all right? So, let's say that you think that population density is important in whatever you're studying. Well, if you're studying, you know, a phenomenon that happens over the course of 100 days or 120 or 140 days like this study did, there's not much change in population density…
Stitzel: Yes.
Pjesky:…all right, within a county over that period of time. So, if you think population density is important, all right, [then] what you'll want to do is [that] you'll want to include what's called a fixed effect, all right? It's very well named. You'll want a fixed effect for each county, that will control for characteristics that are unique to that county, but don't change very much over time.
Stitzel: Sure.
Pjesky: All right? Likewise, with your daily, or monthly, or quarterly, or yearly fixed effects, all right, will the same purpose. So, if you put a daily fixed effect, O.K., everything that is important (you hope everything that is important) to your research question, that doesn't vary, all right, across your treatment groups within a certain day…
Stitzel: Hmm mmm.
Pjesky:…will be controlled for --- and I say the word hope [because] you hope that those things will be controlled for. So, if you're talking about weather, O.K., [then] you might not observe weather directly.
Stitzel: Yeah.
Pjesky: All right? But if you have a small enough area, [then] the weather is going to be the same in that area for that day, O.K.? So, if you're studying two adjacent counties, all right, or if you're studying, like, two cities that are close together --- like, you know, Dallas or Fort Worth or whatever --- and you wanted to control for weather over time, [then] well, you know, today the weather is the same in Fort Worth as it is in Dallas.
Stitzel: Yes.
Pjesky: All right. So, your daily fixed effect --- you hope --- will control for that. And when you're looking at the whole country --- and as an aside, this is what I think your point is --- [then] if you're looking at the whole country, these daily fixed effects might not do it.
Stitzel: Hmm mmm.
Pjesky: All right? So, there might be variations, O.K.? There might be variations in weather. There might be variations in all kinds of things, all right, that are unobserved across counties, [and] that are different across counties, all right? So, these are going to basically confound your results --- in the best-case scenario, in a random way.
Stitzel: Yeah.
Pjesky: O.K. So, if it truly is random, O.K., [then] you don't really have a problem.
Stitzel: Yeah.
Pjesky: O.K. So, if weather varies from county to county, O.K., [then] that is going to show up somehow when you estimate these things, all right? But if weather isn't correlated in a way that affects your treatment, then that's just going to, you know --- (to use a statistical term) that's probably just going to --- increase the size of your standard errors, all right? It's going to make your estimates a little bit more imprecise, perhaps. So, you know, for anybody listening or watching this that has had statistics, [then] your t-stats are going to be a little higher…
Stitzel: Yeah.
Pjesky:…you know --- sorry, a little lower, you know, for instance. So, instead of a t of 2, you might have a t of 1.9 or something like that. But weather is going to affect whether or not people go out, all right, and whether or not people go out is going to affect whether or not people get COVID. So, if it's rainy in some areas of the country, all right, and that causes people to delay their grocery shopping, or if that causes people to not go out and eat at an outdoor restaurant or something like that, [then] well, that's going to have an impact perhaps on whether or not people get COVID in that area. So, the county fixed effects and the daily fixed effects that the authors put in their model --- they're going to do quite a bit of lifting in this model. But --- and this is a problem with any model that uses fixed effects, O.K., not just this one --- there's always going to be some question about whether or not those fixed effects somehow leave things out, all right? So, you want these fixed effects to control for a lot of things that you otherwise can't control for…
Stitzel: Right.
Pjesky:…that are fixed within your groups. Whether they're time groups or space groups, you want those fixed effects to do that lifting for you, to do those controls for you. But they don't always do that. They don't always do that.
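[Editor's note: what a county fixed effect "absorbs" can be shown with the within transformation --- demeaning each county's data wipes out anything constant within that county, like population density. Toy numbers, not from the paper:]

```python
import numpy as np

# toy panel: 3 counties observed over 4 days; density is constant per county
density = np.repeat([100.0, 250.0, 30.0], 4)  # time-invariant regressor
county = np.repeat([0, 1, 2], 4)              # county index per observation

# county fixed effects via the within transformation: demean by county
demeaned = density.copy()
for c in range(3):
    demeaned[county == c] -= density[county == c].mean()

print(np.allclose(demeaned, 0))  # True: the fixed effect absorbs density
```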
Stitzel: So, you've laid out beautifully what fixed effects are. And you picked up, I think, a very important point. But I'm making a slightly more (I don't know if nuanced is the word), [but] a slightly different point. If your day fixed effects are dates --- if they're August 1st, August 2nd, [and] August 3rd --- [then] that probably does a good job on average for picking up weather. But those are different in different areas, which is a good point that you've made. But I'm not sure that that's what these day fixed effects are. The other way to code a day fixed effect is: this is day one since the implementation, day two, day three, [and] day four; which I'd have to think for a minute about whether you can even code them that way with a difference-in-differences, because you may not be able to, since you're trying to pick up the grouping. But that is something that occurred to me along the way. So, which way these day fixed effects are coded really matters for the weather. And of course, your point still stands exactly as that. Do you have a comment on that before I bring up the next thing?
Pjesky: No, I don't.
Stitzel: I'm thinking now if you've got a day fixed effect --- I don't think you can code it as day one, two, [or] three since the intervention, because that is actually perfectly collinear with your treatment variable, right? So…
Pjesky: Yeah. So that couldn't be in there.
Stitzel: I don't think...
Pjesky: I never thought of it.
Stitzel: I never thought of it coded that way.
Stitzel: I actually think you're…
Pjesky: This is more of a stream-of-consciousness aside, but no, that couldn't be in there, though.
Stitzel: Yeah. O.K.
Pjesky: You know there we go. You know, that would be like adding the county's area…
Stitzel: Yeah.
Pjesky:…which does not change.
Stitzel: Right.
Pjesky: So, if you added the county's area to your regression, and you also had a county fixed effect…
Stitzel: Yes.
Pjesky:…then those would be perfectly collinear, and you'd get a, you know --- in Stata or whatever you're using, you would say: uh-oh.
Stitzel: Yeah.
Pjesky: It's, like: we did something wrong, because you'd have a bunch of dots in it instead of numbers, which…
Stitzel: I thought they would be both.
Pjesky: Yeah, we've all seen that before.
Stitzel: O.K. so.
Pjesky: It's kind of a more sophisticated version of the dummy variable trap…
Stitzel: Yes.
Pjesky:…for people that have had a little bit more advanced statistics. This is another reason why the actual mathematical expression of what they estimated would need to be in this paper.
Stitzel: Yeah.
Pjesky: So, it would make it a lot easier for, you know, people like you and me to talk about. This might be a 5-minute conversation instead of an hour conversation.
Stitzel: Right.
Pjesky: [If] the mathematical expression that they estimated were in there, [then] that would specify the nature of every variable, including the time and space fixed effects --- the county and day fixed effects.
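[Editor's note: the collinearity worry can be checked with a toy design matrix. If the day fixed effects were coded relative to the intervention, the treatment indicator would equal the sum of the post-treatment event-time dummies, making the matrix rank-deficient. Hypothetical two-county panel, illustrating the trap rather than the paper's actual specification:]

```python
import numpy as np

# 2 counties × 6 days; county B adopts a mandate on its 4th observed day
treated = np.array([0, 0, 0, 0, 0, 0,    # county A, never treated
                    0, 0, 0, 1, 1, 1])   # county B, treated from day 4 on

# event-time dummies: one per post-treatment day (days 0, 1, 2 after adoption)
event = np.zeros((12, 3))
event[9, 0] = event[10, 1] = event[11, 2] = 1

# the treatment indicator equals the sum of the event-time dummies, so
# stacking both leaves the design matrix rank-deficient
X = np.column_stack([treated, event])
print(np.linalg.matrix_rank(X))  # 3, not 4: perfect collinearity
```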
Stitzel: Right. So, let me make it, sort of, clear in that case. Like, the day fixed effects [question] would be very easily solved by, you know, seeing the paper. And potentially, you know, they've done that in a way [where] that's not a problem at all. I probably think in this case --- I would more or less retract my comment about fixed effects (except for in the manner that you laid out), which is I think very good and important. But it wouldn't solve the endogeneity-type things. Hopefully they would talk about that, and that would be important. But yeah, us being able to see the math --- that wouldn't change any of the comments that we've made up to this point, I think, except for that one. But so, I want to bring up one more big point, and then turn to --- I'm gonna spring one last problem on you, so we can try this on the fly --- one big problem. And this is really important with forecasting. This is a point made by Nassim Taleb, which is: when you're in a situation like this, you don't actually have a normal distribution of outcomes (which is important). You know, all your research --- when you're estimating these kinds of things --- that's kind of your default setting. And that's what we're trying to work with now. You can do things to address having a non-normal distribution. But his point is [that] when you're estimating things like this --- [and] saying it reduces the daily growth rate by 0.5 percentage points --- that's a point estimate. What it is that you're estimating in that case, you know, is sensitive to the type of distribution that we have. Taleb would argue you've got what's called fat tails. So, lots of very extreme outcomes --- more than under a normal distribution. So in that case, a forecast --- now I'm being careful here, [but] a forecast --- in that situation is very, very problematic, right?
And so, these point estimates that we saw before --- oh, we're gonna see a million people die, or two million people die --- those fall into that trap. Now, this isn't a forecast. But I think a similar problem arises here, which is, as you've said, because this is an exponential thing, [and] the difference between 4.95% and 4.5% is really (there's a very) big difference here. So, talk to us a little bit about how the confidence interval of these point estimates is very important --- which I have this in front of me. They're estimating this at a 2% and 3% significance level, which we would in economics consider statistically significant. But it does leave room for a confidence interval where, what happens if instead of a 0.5 percentage-point decrease, you've got a 0.2 percentage-point decrease? And how would that play with the exponential growth that we talked about? And why is this potentially a problem overall?
Pjesky: Well, I mean, if I interpret Taleb correctly (I might not be), [and] if the factors that he would be concerned about are indeed in play here, then both your point estimates and your standard errors wouldn't really mean anything.
Stitzel: That is right. So, that's why I was contrasting between forecasting and what they're doing. Because they're not forecasting here, right? But what I'm saying is a similar critique. You're actually very right, and that's a good point. I think it's, sort of, beyond the scope of the podcast; because now you're talking about efficiency of your estimates, which is really, like --- that's graduate-level econometrics. So, let's set that aside for a moment. So, what I'm asking you to focus on is, like, the confidence interval here. We say: oh, we take a 0.5 percentage-point decrease. What happens if within their confidence interval you got 0.2%? But that, like you said, is a really big difference when you have exponential compounding over a 100-day period.
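[Editor's note: the sensitivity Stitzel is asking about is easy to sketch. Under a hypothetical constant 5% baseline daily rate, the point estimate and the low end of its confidence interval imply very different 100-day outcomes:]

```python
# cumulative growth over 100 days at a hypothetical 5% baseline daily rate
baseline = 1.05 ** 100     # no mandate
point_est = 1.045 ** 100   # 0.5 percentage-point reduction (point estimate)
low_end = 1.048 ** 100     # 0.2-point reduction (low end of the interval)

# both reductions are "statistically significant", but the implied
# 100-day outcomes differ enormously
print(round(baseline / point_est, 2))  # 1.61: ~61% more growth without a mandate
print(round(baseline / low_end, 2))    # 1.21: only ~21% more
```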
Pjesky: Yeah. I mean, this isn't exactly addressing your point, I don't think. But these estimates --- the kinds of things that we talked about when we began --- these estimates are so large that I think it should be clear to anyone that looked at the raw data that there was an impact. So, in other words, if after 100 days your mask mandate caused your cases to be 60% less-ish than counties without a mask mandate, [then] I don't think you need a statistical study at all.
Stitzel: Hmm mmm.
Pjesky: O.K.? I don't think that you would need a statistical study at all. We would be, you know, committing a travesty against philosophy, where we would be, you know, painfully elaborating on the obvious, all right? It would be a complete waste of time for me to do any fancy statistical tricks to try to convince someone that mask mandates worked if we saw this vast reduction…
Stitzel: Yeah.
Pjesky:…in cases in areas that had the mask mandates, versus…
Stitzel: Right.
Pjesky:…the areas without the mask mandates. Since we don't see that, all right, then obviously there's a role for more fancy statistics, which are basically trying to disentangle and uncover things that are not obvious, all right?
Stitzel: Hmm mmm.
Pjesky: That are not obvious. And, you know, that doesn't mean that the effects aren't there, all right? That doesn't mean that the effects aren't there. But since the major impacts of mask mandates that exist in this paper, all right, don't seem to correlate very well with what I think we observe in the real world, that causes me to doubt the paper.
Stitzel: Yeah.
Pjesky: O.K.? You know, that causes me to doubt the paper. So, you know, if the paper were saying that mask mandates, you know, after 100 days reduced cases by, say, 8% or something like that, [then] that is something I would actually take more seriously. Because, you know, the human brain --- I can't see 8%.
Stitzel: Right.
Pjesky: All right? So if the, you know, if mask mandates were effective at reducing cases by 8% after a 100 days, [then] A, that's a policy that is extremely worth considering.
Stitzel: Yeah.
Pjesky: All right. It is probably absolutely something that we should consider. And B, I'm not going to see that with my unaided eye.
Stitzel: Yes. You have to have this.
Pjesky: Yeah, we would have to have that study to reveal that. And if mask mandates were indeed effective to almost any degree whatsoever, all right, [then] it's really a fairly cheap policy to implement. And it's something that public health officials should, sort of, push for, all right? You know, absolutely, that's why research like this is really important. So, when facing a pandemic, you want optimal strategies, all right? The optimal strategies are going to be unknown to a certain degree. You need to know how it spreads, for instance. You know, you might need to know how severe it is, for instance. There might be all kinds of things that we would need to know leading up to the next pandemic, all right, which will occur sometime, you know, in the future (there's no telling how long it will be). But it's not a matter of if, it's a matter of when, all right? So, the optimal response is going to vary with the point at which the pandemic is, all right? So, you know, getting testing right early is important, all right? So, you know, if mask mandates are as effective as this paper suggests, all right, then having mask mandates very early in the pandemic would have slowed it down, and perhaps even stopped it, all right, in its tracks. I think everyone would agree that testing and isolating people that are sick, O.K. --- that works, all right? That's something else that works better the earlier you start it. You know, so if you're, you know, testing a lot of people and trying to isolate a handful to stop the spread --- that's much more effective than if you're trying to isolate a million people.
Stitzel: Yeah.
Pjesky: O.K. So, once this disease gets out into the community, as they say, [then] testing and contact tracing becomes (maybe not less effective), but it becomes in a practical sense much harder.
Stitzel: Hmm mmm.
Pjesky: All right? So, you know, if you're a county health department, you know, and there's 100,000 people in your county, [then] it's a lot easier to keep track of 50 of them than it is to keep track of 50,000 of them…
Stitzel: Right.
Pjesky:…all right, to have enough contact tracers, and to have the resources to actually, you know, do that. So, regardless of what kind of intervention you're talking about, O.K., earlier is almost certainly [and] almost always better…
Stitzel: Yeah.
Pjesky:…all right, for real resource reasons. And because, you know, most people model these things as --- and most people believe the reality of these things is --- exponential growth.
Stitzel: Yeah.
Pjesky: So, if this study turns out to be true, all right --- so let's assume that this study turns out to be true --- then it was a massive error in policy, that back in February and early March, the entire world wasn't talking about masking…
Stitzel: Right.
Pjesky:…and mask mandates. Because if --- they’re…
Stitzel: Well.
Pjesky:…a technology that works. You know, they could have ordered and sent, you know, every household a box of…
Stitzel: Yeah.
Pjesky:…N95 masks or something like that. But early in the pandemic, very few people thought that masks were actually an answer. So, that error, all right, was incredibly critical in a lot of different practical and political ways.
Stitzel: Yeah.
Pjesky: It opened up the door for people to be contentious about masks, when the scientific community switched from advising that we not wear them to believing that they worked.
Stitzel: Well.
Pjesky: But if you've got a technology available that is extremely cheap, that can cut down a pandemic spread by 60% every 100 days…
Stitzel: Yeah.
Pjesky:…all right, then that is a technology that needs to be encouraged in some way, all right? [It] could be a mandate, could be free masks, could be education about how to wear them properly, [or] it could be any number of things like that.
Stitzel: Well Rex, it's worse than that, right? Because early on in the time period you're talking about, there was a period there where, in the United States anyways, we had people like the CDC saying: no, you don't need to wear masks. And the reason wasn't a scientific one; [it was] that they didn't believe masks could be produced to the point where they would be available for medical professionals. So, this is where misunderstanding your economics --- the process by which prices create incentives for things to be produced --- led us into all the problems that you're talking about. So, there's a lot that can be said about the political process and misunderstanding markets in that case. But that's a tragedy. And it's really a tragedy if the study's right. I mean, if the study is right…
Pjesky: Yes.
Stitzel:…I mean, they're damning themselves. So, under normal circumstances I would probably bring the podcast in for a landing right here, because everything you've just said was so eloquent and laid everything out super well. But I've already teased that I have a difficult question for you. So, I'm gonna lay this question on you. And then, we're over an hour now, so after that we'll bring this in for a landing. My observation is this --- and as any good academic knows, you have your best thoughts in the shower. It occurred to me this morning, because I do a lot of spatial work. So, when I say in my work "these distance rings, these geographic rings, are mutually exclusive," I mean something specific, right? That means if I have a ring that's within a mile of somewhere, and a ring that's within two miles, and I say they're exclusive, [then] I'm not including the inner ring in the outer ring. So, you get, you know, sort of, a donut shape if you will. They say here that these are exclusive time rings. But that's a false distinction actually, right? Because the impact that's happening between 21 and 40 days is not truly exclusive of what happened inside of 1 to 20 days (meaning the estimates), especially on the first 20-day ring that they're talking about, which is the one I most believe. And so, you haven't thought about this. I'm putting you on the spot. But talk to us a little bit about the compounding problem, with the point estimates of rings that aren't actually time exclusive. Any thoughts on that?
Pjesky: Well, I mean yeah. It would be like a financial advisor telling you that your savings behavior in your 20s doesn't matter to your 30s.
Stitzel: Yeah.
Pjesky: That's --- it's, you know, ridiculous, assuming I'm understanding what they're saying…
Stitzel: Yeah.
Pjesky:…in the paper correctly. That's really, kind of, a questionable thing to say. And, you know, if this were a longer paper, [then] that would be one of the things I'd hope they would expound upon.
Stitzel: Hmm mmm.
Pjesky: If I had a, you know, lengthy correspondence with the authors of this paper, [then] it would be one of the things that I would question them on. Because of course, all right, the course of the pandemic today depends on where it was yesterday. It is by definition path dependent.
Stitzel: Yes.
Pjesky: All right. So, the more people that have it yesterday, the more people that are going to be able to get it today. So, that was really kind of an unusual thing. I'm not sure I understand what they're saying in the paper, because it almost can't be what I think it is.
Stitzel: Right.
Pjesky: If that makes sense?
Stitzel: My guest today has been Rex Pjesky. Rex, thanks for joining us on the EconBuff.
Pjesky: Thanks a lot. I enjoyed it very much.
Stitzel: Thank you for listening to this episode of the EconBuff. You can find all previous episodes on YouTube at EconBuff Podcast. You can check out our website at econbuffpodcast.wixsite.com. You can contact us at econbuffpodcast@yahoo.com.