COMPLEXITY

Caroline Buckee on Improving COVID-19 Surveillance & Response

Episode Notes

For this special mini-series covering the COVID-19 pandemic, we will bring you into conversation with the scientists studying the bigger picture of this crisis, so you can learn their cutting-edge approaches and what sense they make of our evolving global situation.

This week’s guest is Caroline Buckee, formerly an SFI Omidyar Fellow, one of MIT Tech Review’s 35 Innovators Under 35, and one of CNN’s Top 10 Thinkers — now Associate Director of the Center for Communicable Disease Dynamics at the Harvard School of Public Health. In this episode, we discuss the myriad challenges involved in monitoring and preventing the spread of epidemics like COVID-19, from the ethical concerns around high-resolution mobility data to an academic research ecosystem ill-equipped for rapid response to the uneven distribution of international science funding.

If you find the information in this program useful, please consider leaving a review at Apple Podcasts. Thank you for listening!

Further Reading:

Caroline’s Website at Harvard and Twitter Page.

Find the papers we discuss in this episode at Caroline’s Google Scholar Page.

Visit our website for more information or to support our science and communication efforts.

Join our Facebook discussion group to meet like minds and talk about each episode.

Podcast Theme Music by Mitch Mignano.

Follow us on social media:
Twitter • YouTube • Facebook • Instagram • LinkedIn

Episode Transcription

Michael

Shall we just dive in?

 

Caroline

Yup. Yup.

 

Michael

Excellent. Caroline Buckee, thank you so much for joining us on Complexity Podcast.

 

Caroline

Thank you for having me.

 

Michael

So there are two pieces that touch on issues you brought up in the SFI Flash Seminar we hosted last week that would be really worth discussing. They're two papers we'll link to in the show notes. One is a letter that you coauthored for Science, “Aggregated mobility data could help fight COVID-19.” The other is a letter that you wrote for The Lancet on improving epidemic surveillance and response, “Big data is dead, long live big data.” Quite the name!
 

Caroline

I actually wrote that before this outbreak, and subsequently modified it to include reference to COVID because it's so pertinent right now.
 

Michael

Indeed. Yeah. You know, let's just jump into it. I think the right place to start is with the state of epidemic surveillance as it is now. And I think it's worth, before we get into the specific problems that you bring up in the Science piece, talking about how we were actually doing disease surveillance at the end of 2019 and the beginning of 2020. What did that really look like? What are the sources and how are they being employed?
 

Caroline

Sure. So the first thing to say is that there's a big difference between routine surveillance for infections for which we already have diagnostic capabilities and programs and so on, versus surveillance for an emerging infection that we don't know anything about. So surveillance systems are in place for many different kinds of infectious diseases and other diseases. And those have different types of data collection methods and, you know, different diagnostics and different reporting mechanisms and so on. For an emerging infectious disease like COVID-19, what starts out happening at the beginning is just going to be a handful or cluster of weird-looking disease in a particular place. Right? So in terms of surveillance, what that looks like, and what it looked like in this context, was strange pneumonia that started to happen associated with a particular time and place. So in this case, a wet market in Wuhan. And what counts as unusual, unusual-looking pneumonia in a lot of patients, will depend on what your baseline is, and how well you can detect an uptick in unusual symptoms.

And then of course our ability to detect it is also going to depend on how much the symptoms from a new disease look like other symptoms that we already understand. So in this case, a fever and a cough, for example, are fairly nonspecific. And for surveillance systems in general, this is often an issue. For many, many diseases, we won't have a diagnostic test; we'll just have patterns of, for example, influenza-like illness. And we'll try to infer what's happening with transmission of a particular disease from patterns of fever or other types of symptoms, rather than confirmed cases that are circulating in different places over time. So the first thing to say, I think, is that the state of surveillance for an emerging disease is by definition going to rely on symptoms and being able to tell the difference between an unusual cluster of patients with particular symptoms versus a baseline.

So in global terms, that's kind of how we're going to first detect something unusual happening. Now our ability to track and monitor the spread of an epidemic depends on how well we're measuring cases, and that has a whole bunch of different features to it. So in this context, fairly quickly, it was identified that this was a coronavirus, but in other contexts, the etiology, which is what the actual underlying pathogen is that's causing a disease, may not be known initially. So that's the first thing, right? Trying to figure out what's causing the disease. And that happened fairly early for this one. We knew fairly early on that it was a coronavirus of some kind. Being able to track it then depends on how well we're able to capture cases within a surveillance system. So the whole debacle with testing really emphasizes the importance of this piece, of being able to track the outbreak, because testing has been slow to ramp up, and that's understandable for a new disease, because you need to develop a quick diagnostic test.

That means that we're not sure how many cases are occurring in different places at different times. And already that makes it quite difficult to figure out how quickly it might spread, you know, estimate the basic parameters of the disease, like the reproduction number and so on. And in the context of this outbreak, and this is true for quite a few different diseases, we also think that there's a huge number of mild cases and cases that don't have any symptoms at all. And so of course in a normal surveillance system, you're not going to capture those people, because they won't be tested, because they won't be showing up at the clinic or the hospital. So again, you have different types of biases, both from a testing capacity standpoint and from the epidemiological standpoint, where you're just not capturing a lot of cases, because people are either home sick but not going to hospital, so they're not being captured, or they may not have symptoms at all.
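A minimal sketch of the estimation problem Caroline describes here: if testing capacity is ramping up while the epidemic grows, the observed case curve grows faster than the true one, which biases early estimates of the growth rate and hence the reproduction number. All numbers below, including the serial interval, are illustrative assumptions, not COVID-19 estimates.

```python
# Simulate exponentially growing true infections, observe them through a
# testing capacity that ramps up over time, and estimate the growth rate r
# (and a rough R) from each series. Illustrative values only.
import numpy as np

days = np.arange(30)
true_cases = 10 * np.exp(0.15 * days)          # true growth rate r = 0.15/day
ascertainment = np.linspace(0.05, 0.5, 30)     # testing ramps from 5% to 50%
observed = true_cases * ascertainment

def growth_rate(series, t):
    """Fit log-linear growth: log(cases) = log(c0) + r * t."""
    r, _ = np.polyfit(t, np.log(series), 1)
    return r

serial_interval = 5.0                          # days, illustrative assumption
for name, series in [("true", true_cases), ("observed", observed)]:
    r = growth_rate(series, days)
    R = np.exp(r * serial_interval)            # rough R for a fixed generation interval
    print(f"{name:8s}  r = {r:.2f}/day   R ~ {R:.1f}")
# The ramp in testing inflates the apparent growth rate, and hence R.
```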

So again, that's why the surveillance aspect is quite tricky at the beginning, and it remains a big challenge in this epidemic. What you do with that data on how many cases there are and where is then going to depend on the kind of infectious disease it is and what kinds of interventions you have available. For an emerging infectious disease, non-pharmaceutical interventions are often our only available tools before we have treatments and vaccines, which require of course a huge scale-up of pharmaceutical capacity and randomized controlled trials to establish what's going to work. So that's why social distancing is one of the only tools we have at the beginning of a scary epidemic like this, to shut down transmission sufficiently to give us some time to figure out how to combat the disease itself.
 

Michael

Yeah, just this morning in the SFI in-house emails, Geoffrey West shared this thing that some of the listeners may have seen going around, suggesting that this novel coronavirus is like Schrödinger's virus, because you have to act both as though you have it and as though you don't have it. This is a profound uncertainty that we're all acting under here, which gets to this other piece, and we'll put a pin in that for later. But I think you brought us right up to the lip of the concern around when we really only have network interventions, when we really can't do effective contact tracing, when we really don't know who has already had it and recovered at the resolution that we would like.

 

Caroline

We do have contact tracing. In fact, places like South Korea and Singapore ran very effective contact tracing programs, and we actually wrote a paper where we showed that contact tracing and aggressive case finding, in combination with social distancing, can be very effective. So it's not that we can't do contact tracing, but we certainly don't have pharmaceutical interventions just now. So contact tracing is one of a suite of tools that will be helpful in combination with social distancing more broadly.

 

Michael

You're talking about the piece that you did with Kahn, Grad, Childs, et cetera.
 

Caroline

Yeah. Corey Peak is the first author.

 

Michael

Yes indeed. We'll link to that one. Bringing it back around to the use of mobility data and how this brings us into an ethical discussion… A lot of the conversations I've been having about COVID-19 are about how, in a sense, it didn't just infect us as individuals; by breaking all of these other global networks, supply chains and so on, it showed us the vulnerabilities in those networks. And one of the vulnerabilities that you address in both of these writings, although from different angles, is in places where we don't have modularity built into the structure. In one of those cases it's how we aggregate data. I'm thinking about Albert Kao's work on modular decision-making and how local aggregations coarse-grain things. So in this case, it's about anonymizing data at the level of individual people, but still providing useful information to people at different points in county, state, and national-level decision structures.

The obvious question is this, you know, thinking about disease surveillance brings up all kinds of issues with privacy and so on. I'd love to hear how you and your colleagues have been thinking through this.

 

Caroline

So just to back up to your first point about global connectivity and supply chains. I think this has absolutely brought home to people what epidemiologists already knew, which is that we live in a global community and, depending on the epidemiology of a disease, we are not immune to global outbreaks. I think the supply chain issue is very interesting, and one of the things we're seeing that worries me is the disruption to the humanitarian aid supply. That's going to have a huge impact on low- and middle-income countries that rely on, for example, distribution of food and other humanitarian aid, and on how they're going to be able to manage that. And it's an interesting scenario. So in Vanuatu right now, there's a tropical cyclone coming and they're going to probably need international aid. But there's a possibility that those international actors will bring COVID-19 to Vanuatu with them.

And what's more, when you have a natural disaster, everybody shelters in place and they're all crowded together, so that promotes the spread of the disease. So there's all kinds of interconnectedness that happens not just in terms of the initial outbreak and people traveling around, but also, as you point out, supply chains, and then this issue of international aid and humanitarian interventions that we need to continue. How do we think through those types of connectivity patterns? So that's one thing. We really need to think through some of the factors for low-income settings, where the reality is that elderly populations are smaller than they are in the Western world. And so in fact, from a disease standpoint, the trade-offs that they're having to make with respect to humanitarian issues, food, routine vaccination, some of those are going to be… Well, I should say, they may have to make different types of decisions about what they want to do with respect to COVID-19 because of the way their societies are set up, their demographic distributions, and their reliance on global supply of a different kind. Does that make sense?

 

Michael

Yeah.


Caroline

With respect to privacy: I think the most important thing to emphasize here is that there's a very important distinction between the apps we're seeing designed for contact tracing, which use individual-level data and are designed specifically to try and look at chains of transmission to aid contact tracing programs, and, in contrast, the type of data that we've been working with, which is aggregated to the extent that it's no longer human subjects research, for example. So we've done this work for quite a long time. And the principle behind the aggregation is that you want the lowest resolution, the coarsest spatial scale, at which you can still say something useful. And if it's the case that the data would need to be at a smaller spatial scale or higher resolution for you to be able to say anything sensible, then we don't do that.

So under the data use agreements (DUAs) and the kinds of privacy protocols that we have in place, re-identification is a big deal and we really take that seriously. So the aggregated data itself tells you something much more general than these contact tracing apps. And what it tells you is kind of generally what's happening: how far are people going when they're traveling around, how much movement is there? And that's roughly related to the contact rate within an epidemiological model. We don't really know yet exactly how those two things are linked mechanistically, but once we have better data on COVID transmission in different places coupled with specific policies being put in place, we should be able to start to disentangle what these aggregated mobility metrics mean for social distancing. And the reason that that's really important is that down the road we're going to need evidence if we're going to make decisions about how to relax social distancing on the other side of this epidemic. Without having a way to monitor social distancing interventions and understand what that will do in terms of transmission, we won't be able to relax them based on data. We'll be guessing. So again, that's going to be really key, especially if we're going to be in a scenario where we have to go into lockdown multiple times, which is one possible scenario for the future. So we really need to start measuring this in a sensible way.
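A minimal sketch of the aggregation principle described above, assuming a hypothetical table of location pings: collapse individual records into coarse region-to-region counts and suppress small cells so that rare movements cannot be traced back to individuals. The column names and the suppression threshold are illustrative, not the terms of any actual data use agreement or provider pipeline.

```python
# Collapse individual-level location pings into coarse origin-destination
# counts and drop small cells, so the released data describe aggregate
# movement only. Hypothetical schema and threshold, for illustration.
import pandas as pd

def aggregate_mobility(pings: pd.DataFrame, min_count: int = 10) -> pd.DataFrame:
    """pings: one row per device per day with columns
    'device_id', 'date', 'home_region', 'current_region'."""
    trips = (pings
             .groupby(['date', 'home_region', 'current_region'])
             .agg(n_devices=('device_id', 'nunique'))
             .reset_index())
    # Suppress any origin-destination cell below the threshold so that
    # rare movements cannot be linked back to individuals.
    return trips[trips['n_devices'] >= min_count]

# Usage: agg = aggregate_mobility(pings_df)
# The aggregated counts can then feed a metapopulation model as a rough
# proxy for contact rates, without exposing individual trajectories.
```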
 

Michael

The episode that came out just before this one was with David Krakauer, discussing the first few submissions for the SFI Transmission article series. The theme linking all of the first five articles in that series had to do with this issue of the correct resolution of the model and deciding at what point you're making the trade-off. Where are the most effective trade-offs made between an honest account of the spread of probabilities and the actionability of the knowledge itself? So that links, I think, this work on aggregating mobility data to this other piece that you wrote for The Lancet, which addresses this issue in a lot of ways. One of which is that right now epidemiology is a crisis discipline where you're working these things out in real time. And much as Rajiv Sethi mentioned in an early episode of this show, talking about the way that we run models of one another in the criminal justice context, you know, if you meet someone in a dark alley, you don't have a lot of time to make a decision about them. And that's where all of these implicit biases come out.

And there's a similar thing going on right now, when we meet the coronavirus in a dark alley. We have to figure out to what degree we're clustering these cases versus treating them as unique. And so with this piece of yours, I think it's good to start where you started, on the urgency of better surveillance systems, but you also bring in three other very crucial challenges. I'd love to hear you lay those out.
 

Caroline

Sure. So for several years now there have been discussions around the use of different kinds of data to inform surveillance and to react when epidemics happen. So you know, data from phones is one of them, the one that I know the best, but there's lots of different kinds of discussions around how corporate-owned data could be useful for forecasting. So in that article I really discussed some of the barriers, over and above the privacy concerns, which I think are very real and are being addressed currently.

I think the incentive structures are all misaligned for this to be implemented in a routine way, although I've been really surprised and pleased by the response this time. I think we've started to see these collaborative networks being built, and hopefully they will continue to be incentivized in a way that's sustainable into the future. But incentive structures, academic incentive structures, are not really great for translational work. And then corporate data sharing, there's all kinds of issues with that. And then of course for governments it's a risky game to try to do massive data sharing, and there are lots of different types of priorities that they have. So just to say that everyone's got their own agenda, and that's challenging when you're trying to build out these analytics pipelines.

The other point I made in the article was that at the moment there's very much a separation between the methodological and high-tech world in which some of these methods are being developed, and the realities for the most vulnerable communities and the populations who are actually dealing with the implementation of surveillance on the ground, and who are going to be the front line when it comes to first detecting and then responding to epidemics.

Really, I was thinking about Ebola in the article because I think that highlights it well. You know, there's a lot of scientists in high-income settings who think through problems very far away from the people who are going to have to implement them. And so in the article, I argue for a shift in the focus of intellectual and methodological work down the kind of translational pipeline and to the geographies where many of these challenges are being dealt with. So that's another thing.

And then the last thing I was talking about, which I think relates to what you mentioned before about the previous podcast… There's a sort of divide in the modelling community, I think, that reflects a tension between simple mechanistic models and very granular, detailed agent-based models, and what their utility is for different types of scenario planning versus making quick decisions during an epidemic.

And I argue in the Lancet letter that for emerging epidemics, often the simplest models are the best or the most useful, because they're transparent and they can be quickly and easily translated. I think for this outbreak, one thing that modellers always have to grapple with is whether the uncertainty in your model parameters and model structure outweighs the utility of having a very detailed agent-based model, right? So if you have a very detailed agent-based model, but you have huge amounts of uncertainty in the basic epidemiological parameters of the disease, it's not clear to me how useful that will be. Not to mention it's highly computationally intensive.

In contrast, a very simple model is limited in what it can manage to tell you, because by definition it's simple. On the other hand, you can clearly explain the major uncertainties in your parameters and in the model structure. And you can see very quickly the types of broad qualitative impacts some of those uncertainties are going to have on different types of intervention. And so I think the spectrum of model complexity may be useful for different kinds of response and different kinds of research playing into different policy decisions. Right now there's a huge amount of uncertainty in the basic parameters. And I think one of the biggest ones is how many people are asymptomatic, and by that I mean never symptomatic. It could be a substantial fraction. And that's an uncertainty in the model that's very hard to account for unless you have serological data on how many people have antibodies to the virus. Some of these other parameters have become more clear as the epidemic has progressed, but still there are big questions about that one in particular that I think are making detailed predictions difficult.

The other thing that's very uncertain is what impact these social distancing interventions are having on the contact rate. And that's something that we will hopefully start to be able to parameterize a bit more as we move further into this epidemic. But right now, again, it's quite hard to parameterize… If you imagine an agent-based model with people moving around, it's quite hard to know really what's happening now in terms of the contact rate. So linking that to a prediction or a forecast is extremely challenging. So again, I think that the simple models are pretty good for general scenario planning, getting a quick idea of what's going on, and highlighting the major uncertainties in some of the parameters. Whereas the agent-based models can obviously give you a lot more granular resolution on particular kinds of questions, but I think there's a huge amount of uncertainty associated with that right now.
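A minimal sketch of the kind of simple, transparent model Caroline contrasts with detailed agent-based models: a discrete-time SIR model in which social distancing is represented as a proportional reduction in the contact rate. The parameter values are illustrative assumptions, not estimates for COVID-19.

```python
# Discrete-time SIR model; social distancing is modeled as a proportional
# cut to the contact rate beta. Illustrative parameters only.
import numpy as np

def sir(beta, gamma=0.1, n_days=200, i0=1e-4, dt=1.0):
    """Returns the infectious fraction over time for contact rate beta."""
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for _ in range(int(n_days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return np.array(infected)

baseline = sir(beta=0.3)           # R0 = beta/gamma = 3.0
distanced = sir(beta=0.3 * 0.5)    # 50% reduction in contact rate
print(f"peak infectious fraction, baseline:   {baseline.max():.3f}")
print(f"peak infectious fraction, distancing: {distanced.max():.3f}")
```

Even at this level of simplicity, the qualitative effect of a given reduction in contact rate on the epidemic peak is easy to read off, and the handful of parameters make the major uncertainties explicit.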
 

Michael

Let's wheel this back for a moment to the first challenge that you laid out, which is the misalignment of incentives in public-private partnerships with respect to this kind of an urgent situation. I've actually been really impressed—like you said, this particular run, it's been kind of inspiring to see academics and corporations and governments working together as well as they have. But it does call into question not just policies around corporate and governmental data sharing, but also the timescale at which academic research is normally conducted. And I'm curious how you see these things shifting in an adaptation to the crisis and what kind of hopeful developments you've observed on that front.
 

Caroline

Yeah. So, the timescale issue is a big one, because obviously, if you look at the NIH grants that many of us rely on, the turnaround time… I would say that the turnaround time is long for good reasons, because we want peer review and so forth, but I think it's fairly extreme if you think about epidemics and how we respond to them. So the turnaround time is long. The incentives in academia are also misaligned in the sense that we are still incentivized, in terms of promotion, based on first and last authorship and these types of archaic metrics, which don't reflect how science works these days. If you look at the papers that are coming out on COVID-19 and you look at the people actually doing the science, these are big team efforts, right? These are consortia of modellers and virologists and clinicians, and they're working together to come up with a solution. And I think there has to be a shift both in how we allocate credit to scientists in those big teams and in how we think about funding. Single-PI-led grants and things like that, I think, are just not going to be conducive to being able to push out this kind of work. The other thing I would say is that, for example, the MIDAS Network was funding centers of modellers and other collaborators with the express purpose of developing methods that could then be deployed during an epidemic. And I think what you've seen in the U.S. response is that that network has been absolutely central, really central, to responding to this challenge and doing the science and rolling out models and everything. Those centers are about to lose funding, and I hope that this epidemic will prove to the powers that be that we actually need to invest in those kinds of centers. We need to give academics sustainable funding to do the slow methodological work, and then to have the flexibility to respond when the pandemic hits. So that's one thing, and I hope that we will be able to use this to showcase how science works now and get it funded in a more appropriate way.

As far as the corporate side, I think there's still some evidence that there are kind of competitive forces at work that aren't necessarily helpful for data sharing more broadly. We also have seen a number of corporate and other actors generating models and stepping in to be the interface with state governments on their response, and I think this pandemic shouldn't be seen as an opportunity to monetize. I feel that very strongly and I worry that given the current economic setup, we're still seeing evidence that those types of competitive and profit-driven motives are evident in this response. And I think it's really important that as a society we think about that moving forward, because that strikes me as potentially quite problematic.

 

Michael

Hmm. This seems like it ties into a question that I had for you, under the third challenge, the methodological challenge with respect to uncertainty. For me, there's something about the way this pandemic is unfolding that is distinctly information age, right? When we had Laurent Hébert-Dufresne and Sam Scarpino on the show, both of them were talking about complex contagions that involve both biological and informational, social components, in terms of how people are understanding it, making sense of it, behaving in regard to it. All of this seems somewhat structurally similar to the Thirty Years War, right? The way that a lot of the structures for how we would normally verify information are themselves going through a crisis. And we're seeing a lot of people without what we would think of as normal credentials stepping forward with, to stay with the printing press analogy, their own version of Christianity, their own version of epidemiology. And on the one hand, this is really inspiring. You're seeing people step up en masse. But in another sense, you make the case in this Lancet article that it's really dangerous, because when the barrier to entry to participating in modelling and sharing this information has been lowered like it has, there's a narrative collapse. It becomes very difficult to coordinate action. People are buying models from people who aren't really authorized or who lack the expertise. Disinformation and misinformation are rampant, and it's a tricky thing, right? Because on the one hand, we do need to be able to move fast enough to do this, but it's unclear how to move fast enough and still be able to apply the brakes when necessary, or steer.

 

Caroline

Yeah. I mean, I think the specific point I was making in the article was that you can't get away from needing solid epidemiological data and solid epidemiological analysis; for any model, ultimately, you need to know how many cases you have. There's this feeling in this world of AI and deep learning and big data that somehow we're going to be able to make up for a lack of solid epidemiological data with all these other datasets. And while I agree that if you have nothing, it's helpful to have some other sources of information to inform your estimates, you can't replace it, and you can't ultimately make any sensible statement without these key, very basic epidemiological pieces of information. So that's sort of related but a little bit tangential to what you were saying.

With respect to expertise, I think the danger here… So first of all, I think people conflate forecasting with epidemiological models and forecasting the weather. You know, with this sense that if you just have a big enough computer, it's going to be fine, right? And it's not, because epidemiology involves human behavior and feedback, and you change the situation when human behavior changes. So if you say it's going to rain tomorrow and everyone carries an umbrella, that doesn't change the fact that it's going to rain. If you say there's going to be an outbreak tomorrow and everyone stays home from work, there's no longer an outbreak, your prediction is wrong, and everybody loses trust in your model, right? So these things are not the same. And I think it's very easy to underestimate the difficulties and the amount of expertise that you might need to make a sensible model that's actually useful.

So I'm all for democratizing science, and I think that's great. The problem is when you're using that to inform policy, and how it's being communicated to the public. There you really have to be very clear, and we've seen that with this COVID-19 outbreak, where there's a lot of confusion in the media and among different levels of government about what these models really say, what they can say, and what they can't say. You know, we've had the press say, “Oh, the model said this last week, and now it's saying that,” when there's no inconsistency in the modelling framework; it's just that the model under scenario X is not the same as the model under scenario Y, for example. So I think there's a huge communication problem here that has the potential to be quite dangerous. And the rise of so-called armchair epidemiologists is part of that. People want answers, right? And so people step up.

You know, there was an article yesterday in the New York Times about the new heroes of the coronavirus outbreak, and all six of them were white men. And I do think that there are gender differences in who is stepping up to have an opinion. And not just that, but there's also a very big divide in terms of who's being portrayed in the media, who is coming out on TV and talking. So we talk about disinformation, but I think there's also a misrepresentation problem there: if you look at who the people are who are coming forward to have strong opinions based on potentially not that much expertise, it definitely has a racial and gender bias, is what I would say.
 

Michael

So that's contributing to a problem that you identify, and for which you prescribe a reallocation of resources—you know, money, research, et cetera—into those populations that are at risk: those populations that are undersurveilled, those populations that are actually the stakeholders locally, the recipients of these interventions. And this is something that touches on all three points in this piece. The misalignment in the analysis pipeline, the gap between the innovation and the implementation, and then the inherent uncertainty of it all come to a head in, “Well, it would be better if, rather than trying to make a global theory of everything for the coronavirus, we looked at local solutions and then how those fit into a broader understanding and solution.”

It would make sense if we designed these interventions in the context of where they would actually be deployed. But you know, when I had Sam Scarpino on the show, he made the point that often the populations that are most in need are the ones about which we have the least information. Right? This ties into your point that there's a misrepresentation among the heroes as well as the victims in this situation. So how could we align our incentives and reinvest when we're basically shooting blind, right? If we don't know where to put it?

 

Caroline

Yeah. Well, I would say, first of all, it's true that there has been a kind of neglect in terms of allocation of resources for science globally, right? And that has a very distinct geographic flavor. So I think a lot of these issues, not necessarily for COVID, although for COVID as well… Being able to tailor a response in a way that is not just reflective of the situation in the US and Europe, for example, but reflects the different realities around the world, different demographics, different co-morbidities, different issues for routine medical care. What you need is to invest in scientific groups in the Global South, for example. We should be building centers, supporting the excellent scientists who exist all across Africa and India and Bangladesh and so on. And we should be making sure that we fund centers of excellence and the researchers there, who are in a much better position to interface between policymakers and scientists and public health in the context that is going to be relevant for their response, in a way that will ultimately protect everyone better.

Given that we live in a global world, we need to think about adjusting resources for the scientific community so that they can respond where epidemics start. And that's going to be agnostic to many different factors, but right now the focus of expertise and money and funding for science for this kind of thing is in the US and Europe and Canada and places like that. We need to shift it. We need to recognize that we live in a globally connected world and we have systematically neglected funding in the Global South. And then we need to address that directly.
 

Michael

How do you see this working within the United States or Europe, for example? You know, the lowest quartile in income are the ones that both have less access to healthcare and also, in a weird way, a kind of privacy through their exclusion from the system that would provide data on them in the first place. So how do you imagine creating flexible and distributed teams that are able to serve the poor in that regard, even within wealthy nations?

 

Caroline

So I think the first thing is that a lot of these problems stem from economic and political injustice against particular groups of people and the poor. And so the first thing to say is that the best thing the government could do for those groups is to make sure that they're supported economically. Unemployment is just skyrocketing. We need to support those populations and make sure that they are at least economically looked after, as well as provided with excellent medical care and access. But I think this data issue is really key. So one thing that we have talked about with this aggregated mobility data is that the reason we provide very coarse-scale data is because we really want to avoid punitive targeting of particular groups of people based on their ability or inability to social distance. And with these contact tracing apps and personal data, one of the critical things is that they're not used in a way that exacerbates inequality and is punitive. So that's something that this whole field is going to have to really think hard about in any context, but particularly in the US, I think it's going to be an issue. And then we should be thinking creatively about how to re-employ a lot of the newly unemployed to help us fight this thing. Right? Let's try and put people to work to distribute different kinds of PPE or help with contact tracing protocols, while making sure that we're not putting them at excess risk. There should be creative ways that we can manage this threat and simultaneously think about the economic hardship that impoverished communities here are going to have to suffer through.
 

Michael

Right now, what do you find, for you personally, are the most interesting or most salient points that you're tracking as this continues to unfold? We'll tie a bow on it with this question: what do you think are the channels or the types of data that are most relevant to a randomly selected audience member right now?

 

Caroline

Yeah. So I think that there are two things. The first thing is that there is a race right now to develop serological tests, and that's going to be absolutely critical. That's a test that looks for antibodies to this virus; it's a marker of infection. Once we have serological tests, we're going to need to roll them out to identify healthy people who have some level of immunity. We still don't know whether people who've recovered are immune, but hopefully they are. So first of all, the serological tests are going to be critical, and they will help us estimate how many people never had symptoms and the true epidemic size, parameterize a lot of the models, think through policy scenarios, and get people back to work and back to their normal lives as quickly as possible. The second thing I think we really need to do is to monitor and measure what social distancing interventions are doing: whether they're working, and how much reduction in mobility is needed to reduce transmission by a certain amount.
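Caroline's first point implies a simple back-of-the-envelope calculation once serological surveys exist: correct the raw seroprevalence for the test's sensitivity and specificity, then scale up to the population to estimate how many people have been infected, including those who never had symptoms. This is a generic sketch using the standard Rogan-Gladen correction; the survey numbers, test characteristics, and population size are made up for illustration.

```python
# Rough sketch: estimating total infections from a serological survey,
# correcting for imperfect test sensitivity and specificity.
# All inputs below are illustrative, not real survey or assay values.

def true_prevalence(apparent_prev: float, sensitivity: float, specificity: float) -> float:
    """Rogan-Gladen estimator: corrects apparent (test-positive) prevalence
    for false positives and false negatives."""
    p = (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)
    return max(0.0, min(1.0, p))

positives, sample_size = 50, 1000           # hypothetical survey
apparent = positives / sample_size           # 5% test positive
prev = true_prevalence(apparent, sensitivity=0.90, specificity=0.99)
population = 1_000_000                       # hypothetical catchment population
print(f"estimated ever-infected: {prev * population:,.0f} "
      f"(confirmed case counts miss mild and asymptomatic infections)")
```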

Measuring that is going to be critical as we move towards a situation where we're going to relax some social distancing interventions and not others. Right? Is it okay to open schools? Is it okay to open workplaces? We're going to need to make those decisions in an evidence-based way. So again, linking some of this mobility data to actual COVID transmission is going to be critical, and for that we need testing. So like everyone, I would call for more testing, randomized testing, and good study design. Then, moving forward… I think that for me this is probably a once-in-a-lifetime—I hope once-in-a-lifetime—event. So my hopes and fears are that I hope it leads to a radically restructured society, a much more inclusive society that recognizes how interconnected we are globally and within our communities. And I think a positive outcome from all of this could be that we start to be much more community-minded and we start to think through how we provide healthcare to people in a way that reflects their needs.

And so the good outcome could be that we are more equitable on the other side of this, and that we've restructured supply chains so our dependencies are more local and so on, while not giving up the really vital international aid and cooperation that we need for our global world. My fear is that that won't happen, and that, like with other pandemics that have come and gone, we will scrape through, it will be a disaster for many people, and we will have learned nothing. So my biggest fear is that we'll end up on the other side of this with a weakened society that has suffered a lot, both economically and socially, and that we will not have taken the opportunity to restructure our society in ways that will benefit everyone. So, I don't know. I think people need to really think through what their values are and how we move forward to make political decisions that reflect those values. That's my hope.
 

Michael

Well, I think with a trumpet blast, we can call that the end of this conversation. Caroline, thank you so much for taking the time.

 

Caroline

Thanks for having me. Nice to talk to you.