
Recent events and Coronavirus model update

Foreword

I am delighted that Roger Penrose, whose lectures I attended back in the late 60s, has become a Nobel laureate. The award has come quite late (in his 80s), bearing in mind how long ago Roger, and then Stephen Hawking, were working in the field of General Relativity, and Black Hole singularities in particular. I guess that recent astronomical observations, and the LIGO detection of gravitational waves, have at last inspired confidence in those in Sweden who decide these matters.

I had the privilege, back in the day, of not only attending Roger’s lectures in London, but also the seminars by Stephen Hawking at DAMTP (Department of Applied Maths and Theoretical Physics, now the Isaac Newton Institute) in Cambridge, as well as lectures by Fred Hoyle, Paul Dirac and Martin Rees, amongst other leading lights.

Coming so late for Roger, and not at all, unfortunately, for Stephen Hawking, there is no danger of any "Nobel effect" for them – the tendency of some Nobel laureates either not to achieve much after their "enNobelment"(!), or to apply themselves, with overconfidence, to topics outside their speciality, to little effect other than a negative one on their reputation.

The remarkable thing about Roger Penrose is the breadth of his output in many areas of Mathematics over a very long career; and not only that, but the great modesty with which he carries himself. His many books illustrate the breadth of those interests.

I am delighted! If only my work below were worth a tiny fraction of his!

Coronavirus status

Many countries, including the UK, have recently been experiencing a resurgence of Covid-19 cases, although, thankfully, with a much lower death rate. This is most likely owing to the younger age range of those being infected, and to the greater experience and techniques that medical services now have in treating the symptoms. I covered the age dependency in my most recent post on September 22nd, since when there has been a much higher rate of cases, with the death rate also increasing.

Model response

I have run several iterations of my model since my last blog post, as the situation has developed. There has been a remarkable increase in Covid-19 cases in the USA, as well as in many other countries.

I have introduced several lockdown adjustment points into my UK model: firstly easing the restrictions somewhat, to reflect the return to schools and other relaxations that the Governments of the four UK home countries have introduced; and then some increases in interventions to reflect recent actions such as the "rule of six" and other related measures in the UK.

I'll just show two charts initially to reflect the current status of the model. I am sure there will be some further "hardening" of interventions (exemplified in a later chart), and so I expect the model forecast outcomes to reduce as I introduce these when they come. I have already shown, in my recent post on model sensitivities, that the forecast is VERY sensitive to changes in intervention effectiveness in the model.

The first chart, from Excel, is of the type I have used before to show the cumulative and daily reported and modelled deaths on the same chart:

Model chart showing cumulative and daily UK deaths compared to reported deaths

I have made no postulated interventions beyond October 6th in this model, but I fully expect some imminent interventions to bring down the forecast number of deaths.

The scatter in the orange dots (reported daily deaths) is caused by the regular under-reporting of deaths at weekends, followed by those deaths being added to the reports early in the following week. Hence I show a 7-day trend line (the orange line) to smooth that effect.
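For anyone wanting to reproduce that smoothing, here is a minimal sketch (not my production code, and the column name is hypothetical) in Python using pandas:

```python
# A minimal sketch of the 7-day smoothing used for the orange trend line,
# assuming a pandas Series of daily deaths indexed by date.
import pandas as pd

def seven_day_trend(daily: pd.Series) -> pd.Series:
    # A centred 7-day rolling mean irons out the weekend under-reporting
    # and the compensating early-week spikes.
    return daily.rolling(window=7, center=True, min_periods=1).mean()

# Example with hypothetical figures showing a weekend dip:
reported = pd.Series(
    [65, 70, 12, 8, 95, 80, 71],
    index=pd.date_range("2020-10-02", periods=7),
)
print(seven_day_trend(reported))
```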

The successive quantitative changes to the lockdown effectiveness are shown in the chart title, the initial UK lockdown having occurred on March 23rd.

The following chart, plotted straight from the Octave model code, shows the model versions of the lockdown and subsequent interventions in more detail, including dates. It also includes reported and modelled cases as well as deaths data, both cumulative and daily.

Chart 11 showing both cumulative and daily UK model and reported deaths and cases

This is quite a busy chart. Again we see the clustering of reported data (this time for reported cases as well as reported deaths) owing to the reporting delays at weekends.

The key feature is the sharp rise in cases, and to a lesser extent, deaths, around the time of the lockdown easing in the summer. The outcomes at the right for April 2021 will be modified (reduced), I believe, by measures yet to be taken that have already been trailed by UK Government.

The forecasts from the model are to the right of the chart, at Day 451, April 26th 2021. The figures presented there are the residual statistics at that point. In the centre of the chart are the reported cumulative and daily figures as at October 8th. The lockdown easing dates and setting percentages are listed in the centre of the chart, in date order.

Data accuracy, and Worldometers

The charts are based on the latest daily updates, including corrections made to the UK case data following the errors caused at Public Health England (PHE) by some of their staff using a legacy version of the Excel spreadsheet system. That older Excel version loses data, owing to a limit on the number of rows it can handle in a table – c. 64,000 (more precisely 2^16−1 = 65,535) instead of the millions handled by current versions of Excel.

Thus (to increase reader confidence(!)) the charts that follow were not produced via that Excel route. I am indebted to Dr. Tom Sutton for his Worldometers interrogation script, which allows me to collect current Worldometers data, run model changes quickly, and plot the results – using Python for the data interrogation, and Octave (the GNU free version of MatLab) to run the model and plot the output, fed by the UK data from the Worldometers UK page.

Tom's Python "corona-fetch" code allows me to extract any country's data rapidly from Worldometers, a source in which I have some confidence. Worldometers updated the UK data, recasting it to the correct days between September 25th and October 4th, following the UK Government's initial October 2nd announcement of the errors in its reporting.

Worldometers did this before I was able to find the corrected historic data on the UK Government’s own Coronavirus reporting page – it might not yet even be there for those previous days; the missing data first appeared only as much inflated numbers for the days on that October 2nd-4th weekend.
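Tom's script itself isn't reproduced here, but a minimal sketch of the general idea – assuming (an assumption that may break whenever the site changes) that each Worldometers country page embeds its chart data as JavaScript arrays – might look like this:

```python
# A rough sketch of the idea behind a Worldometers fetch - NOT Dr. Sutton's
# "corona-fetch" code, whose details I don't reproduce here. It assumes the
# country pages embed their chart data as JavaScript arrays like:
#   data: [0, 1, 3, ...]
import re
import requests

def fetch_series(country_page_url: str) -> list[list[float]]:
    html = requests.get(country_page_url,
                        headers={"User-Agent": "Mozilla/5.0"}).text
    series = []
    # Pull out every "data: [...]" array found in the embedded chart scripts
    for match in re.finditer(r"data:\s*\[([0-9,.\s null]*)\]", html):
        values = [float(v) if v != "null" else 0.0
                  for v in match.group(1).replace(" ", "").split(",") if v]
        series.append(values)
    return series

# e.g. fetch_series("https://www.worldometers.info/coronavirus/country/uk/")
```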

Case under-reporting

As I highlighted in my September 22nd post, I believe that reported cases are under-reported in the UK by a factor of over 8 – i.e. less than 12.5% of cases are being picked up, in my view, owing to a lack of testing, and the high proportion of asymptomatic infections, resulting in fewer requested tests.

The under-reporting of cases (defining cases as those who have ever had Covid-19) was, in effect, confirmed by the major antibody testing programme led by Imperial College London, involving over 100,000 people, which found that just under 6% of England's population – an estimated 3.4 million people – had antibodies to Covid-19, and were therefore likely to have had the virus before the end of June.
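The arithmetic behind that inference is straightforward; a back-of-envelope sketch (the population figure and the reported-cases figure are my own round numbers, not taken from the study):

```python
# Back-of-envelope check on the under-reporting factor implied by the
# Imperial antibody findings. Both input figures below are my own rough
# assumptions, used only for illustration.
england_pop = 56_000_000                 # England population, rounded
seroprevalence = 0.06                    # "just under 6%" in the study
implied_infections = england_pop * seroprevalence
print(f"Implied infections: {implied_infections/1e6:.1f} million")  # ~3.4m

reported_cases = 310_000                 # rough cumulative UK positives, end June
# Comparing England antibodies with UK-wide positives is crude, so this is
# an order-of-magnitude check only:
print(f"Under-ascertainment factor: ~{implied_infections/reported_cases:.0f}x")
```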

The USA

For interest, I ran the model for the USA at the same time, as it is so easy to source the USA Worldometers data. Only one lockdown event is shown in my model charts for the US, as I don’t have detailed data for the US on Government actions and population and individual reactions, on which I have done more work for the UK – the USA not being my principal focus.

I would expect there should be some intervention easing settings in the summer period for the USA, judging by what we have seen of the growth in the USA’s numbers of cases and deaths during that period.

Those relaxations, at both state and individual level, have, in my view, frustrated many forecasts (some made somewhat rashly, and not couched with caveats), including the one Michael Levitt made as recently as mid-July for August 25th (which I discussed in some detail in my September 2nd post); by that date both the size of the US numbers and the upward slope of deaths and cases were quite contrary to his expectation. We can see that reflected in my model's unamended figures, following the 74%-effective March 24th lockdown event, which represented the first somewhat serious reaction to the epidemic in the USA.

This is the problem, in my view, with curve-fitting (phenomenological) forecasts used on their own, as compared with mechanistic models such as mine, whose code was originally developed by Prof. Alex de Visscher at Concordia University.

All that curve-fitting does is perform a least-squares fit of a three- or four-parameter logistic curve of some kind (a Sigmoid, Richards or Gompertz curve), top-down, with no bottom-up way to reflect Government strategies and population/individual reactions. Curve-fitting can give a fast graphical interpolation of data, but isn't so suitable for extrapolating a forecast of any worthwhile duration.
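To make concrete what such a top-down fit involves, here is a minimal sketch – my illustration of the general technique, not Prof. Levitt's actual code – fitting a three-parameter Gompertz curve to a synthetic cumulative series:

```python
# Minimal illustration of phenomenological curve-fitting: a least-squares
# fit of a 3-parameter Gompertz curve to cumulative counts. My sketch of
# the general technique, with synthetic data, not Michael Levitt's method.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, n_inf, k, t0):
    # n_inf: final size; k: growth-rate parameter; t0: inflection day
    return n_inf * np.exp(-np.exp(-k * (t - t0)))

t = np.arange(60)
observed = gompertz(t, 42_000, 0.08, 25) * np.random.normal(1, 0.02, 60)

params, _ = curve_fit(gompertz, t, observed, p0=[observed[-1] * 2, 0.1, 30])
print("fitted n_inf, k, t0:", params)

# Extrapolation is then just evaluation of the fitted curve - which is
# exactly why it carries no mechanistic insight:
forecast_day_90 = gompertz(90, *params)
```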

This chart below, without the benefit of the introduction of subsequent intervention measures, shows how a forecast can begin to undershoot reality, until the underlying context can be introduced. Lockdown easing events (both formal and informal since March 24th) need to be added to allow the model to show their potential consequences for increased cases and deaths, and thus for the model to be calibrated for projections beyond the present day, October 8th.

Chart 11 showing both cumulative and daily US model and reported deaths and cases

Effect of a UK “circuit-breaker” intervention

There is current discussion of a (2-week) "circuit-breaker" partial lockdown in the UK, coinciding with schools' half-term, and the Government seems to be considering a tiered version of this, with stricter interventions in the areas with higher caseloads. The policy would differ among the four UK home countries, but all of them have interventions in mind for that half-term period, as cases are increasing in them all.

I have postulated, therefore, an exemplar increased intervention: a 10% increase in current intervention effectiveness on October 19th (although half-term dates vary somewhat across the UK), followed after 2 weeks by a partial relaxation of -5%, halving the circuit-breaker measure – so not back to the level we are at currently.
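Within the model this scenario is just a step function of intervention effectiveness over time; schematically (a sketch of the idea only, with illustrative percentage-point values, not my actual Octave code):

```python
# Schematic of the stepped intervention-effectiveness settings for the
# circuit-breaker scenario. The values are illustrative; the real model
# applies such steps inside Octave SIR-type equations.
from datetime import date

# (start date, effectiveness change in percentage points)
steps = [
    (date(2020, 3, 23), +83.5),   # initial lockdown (illustrative level)
    (date(2020, 10, 19), +10.0),  # 2-week circuit breaker
    (date(2020, 11, 2), -5.0),    # partial relaxation: half the breaker kept
]

def effectiveness(on: date) -> float:
    # Cumulative sum of all steps in force on a given date
    return sum(delta for start, delta in steps if on >= start)

print(effectiveness(date(2020, 10, 25)))  # 93.5 during the circuit breaker
print(effectiveness(date(2020, 11, 10)))  # 88.5 afterwards
```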

Here is the effect of that change on the model forecast.

Chart 11 showing the effect on cumulative and daily UK model and reported deaths and cases of a 2-week circuit-breaker measure on October 19th

As one might expect for an infection with a 7–14 day incubation period, the effect on new infections (daily cases) appears fairly quickly, while the effect on the death rate lags somewhat; but over the medium and longer term this change, just as with the original lockdown, materially reduces the severity of the modelled outcome.

I don't think any models yet have the capability to reflect very detailed interventions, local and regional as they are becoming, to deal with local outbreaks in a context where much of the country is less affected. What we have been seeing are what I have called "multiple superspreader" events, and potentially a new modelling methodology, reflecting Adam Kucharski's "k-number" concept of over-dispersion, would be needed. I covered this in more detail in my August 4th blog post.
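To illustrate what over-dispersion means in Kucharski's terms – a toy example of mine, not a fitted model – one can draw secondary case counts from a negative binomial distribution with mean R and dispersion k:

```python
# Toy illustration of over-dispersion ("k-number"): secondary cases drawn
# from a negative binomial with mean R and dispersion k. Small k means most
# transmission comes from a few superspreading events. Values are assumed.
import numpy as np

rng = np.random.default_rng(42)
R, k, n = 2.0, 0.1, 100_000

# Negative binomial parameterised by dispersion k and mean R:
# numpy's convention needs n=k and p=k/(k+R), giving mean R.
secondary = rng.negative_binomial(k, k / (k + R), size=n)

print("mean secondary cases:", secondary.mean())            # ~2 (the R number)
print("share causing zero cases:", (secondary == 0).mean())
top_10pc = np.sort(secondary)[-n // 10:].sum() / secondary.sum()
print("share of transmission from top 10% of cases:", top_10pc)
```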

Discussion

As I reported in my blog post on May 14th, if the original lockdown had come 2 weeks earlier than March 23rd, there would have been far fewer deaths – a principle supported by scientists reporting to the Parliamentary Science and Technology Select Committee on June 10th, which I covered in my blog post on June 11th. The UK Government is likely to want to avoid any delay this time around.

October 19th might well be later than they would wish, and so earlier interventions, varying locally and/or regionally are likely.

The forecast of a model depends critically not only on the model logic and its virus infectivity parameters, but also on the decisions taken about interventions, and their timing, which critically affect both the epidemic and the modelled outcomes.

A model like this offers a way to calibrate and test the effect of different changes. My model does that in a rather broad-brush way, using successive broad intervention effectiveness parameters at chosen times.

Imperial College analysis

Models used by Government advisers are more sophisticated, and as I reported last time, the Imperial College data sources and model codes are available on their website at https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/. Both Imperial College and Harvard University have published their outlook on cyclical behaviour of the pandemic; in the Imperial case, the triggering of interventions and any relaxations was modelled on varying ICU bed occupancy, but it could also be done, I suppose, through R-number thresholds (upwards and downwards) at any stage. Here is the Imperial chart; the Harvard one was similar, projected into 2022.

The potentially cyclical caseload from Covid-19, with interventions and relaxations applied as ICU bed demand changes
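A schematic of that triggering logic – my own sketch of the concept, with assumed parameter values, and certainly not Imperial's code – might look like this:

```python
# Schematic of trigger-based, cyclical interventions: a crude discrete-day
# SIR loop in which distancing switches ON when ICU demand exceeds an upper
# threshold and OFF below a lower one. All parameter values are assumptions
# for illustration only.
beta0, gamma = 0.25, 0.1            # transmission, recovery rates (per day); R0 = 2.5
icu_rate = 0.005                    # fraction of prevalent cases needing ICU (assumed)
on_trigger, off_trigger = 100, 20   # ICU cases per million (assumed thresholds)

S, I, R = 0.999, 0.001, 0.0         # fractions of the population
distancing = False
for day in range(1, 601):
    icu_per_million = I * icu_rate * 1e6
    if icu_per_million > on_trigger:
        distancing = True           # intervention triggered on
    elif icu_per_million < off_trigger:
        distancing = False          # relaxed again
    beta = beta0 * (0.3 if distancing else 1.0)   # 70%-effective intervention
    new_inf = beta * S * I
    S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I
    if day % 100 == 0:
        print(f"day {day:3d}: ICU/million {icu_per_million:6.1f}, distancing={distancing}")
```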

The Imperial computer codes are written in the R language, which I have downloaded, as part of my own research, so I look forward to looking at them and reporting later on.

I know that their models allow very detailed analysis of options such as social distancing, home isolation and/or quarantining, schools/University closure and many other possible interventions, as can be seen from the following chart which I have shown before, from the pivotal and influential March 16th Imperial College paper that preceded the first UK national lockdown on March 23rd.

It is usefully colour-coded by the authors so that the more and less effective options can be more easily discerned.

PC=school and university closure, CI=home isolation of cases, HQ=household quarantine, SD=large-scale general population social distancing, SDOL70=social distancing of those over 70 years for 4 months (a month more than other interventions)

An intriguing point in this chart (on the very last page of the paper, and referenced within it) is that the effectiveness of the three measures "CI_HQ_SD" in combination (home isolation of cases, household quarantine & large-scale general population social distancing), taken together (orange and yellow colour coding), was LESS than the effectiveness of either CI_HQ or CI_SD taken as a pair of interventions (mainly yellow and green colour coding).

The answer to my query, from Imperial, was along the following lines, indicating the care to be taken when evaluating intervention options.

It’s a dynamical phenomenon. Remember mitigation is a set of temporary measures. The best you can do, if measures are temporary, is go from the “final size” of the unmitigated epidemic to a size which just gives herd immunity.

If interventions are “too” effective during the mitigation period (like CI_HQ_SD), they reduce transmission to the extent that herd immunity isn’t reached when they are lifted, leading to a substantial second wave. Put another way, there is an optimal effectiveness of mitigation interventions which is <100%.

That is CI_HQ_SDOL70 for the range of mitigation measures looked at in the report (mainly a green shaded column in the table above).

While, for suppression, one wants the most effective set of interventions possible.

All of this is predicated on people gaining immunity, of course. If immunity isn’t relatively long-lived (>1 year), mitigation becomes an (even) worse policy option.
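In standard SIR terms (my own gloss on that reply, using textbook results rather than anything from Imperial), the herd immunity threshold and the unmitigated final size for a basic reproduction number $R_0 \approx 3.5$ are:

$$
\text{HIT} = 1 - \frac{1}{R_0} \approx 71\%,
\qquad
s_\infty = e^{R_0\,(s_\infty - 1)} \;\Rightarrow\; 1 - s_\infty \approx 97\%,
$$

where $s_\infty$ is the fraction never infected. The gap between roughly 71% and 97% is the "overshoot" that well-judged temporary mitigation aims to avoid, while still just reaching the threshold.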

This paper (and Harvard came to similar conclusions at that time, as we see in the additional chart below) introduced (to me) the potential for a cyclical, multi-phase pandemic, which I discussed in my April 22nd report of the Cambridge Conversation I attended; here is the relevant illustration from that meeting.

Imperial College and Harvard forecasts and illustrations of cyclical pandemic behaviour

In the absence of a pharmaceutical solution (e.g. a vaccine) this is all about the cyclicity of lockdown followed by easing; then the population’s and pandemic’s responses; and repeats of that loop, just what we are beginning to see at the moment.

Second opinion on the Imperial model code

Scientists at the School of Physics and Astronomy, University of Edinburgh, have used the Imperial College CovidSim code to re-run the data and check the outcomes, as reported in the British Medical Journal (BMJ) in their paper "Effect of school closures on mortality from coronavirus disease 2019: old and new predictions".

Their conclusions were broadly supportive of the validity of the modelling tool; commenting on their results, they say:

The CovidSim model would have produced a good forecast of the subsequent data if initialised with a reproduction number of about 3.5 for covid-19. The model predicted that school closures and isolation of younger people would increase the total number of deaths, albeit postponed to a second and subsequent waves. The findings of this study suggest that prompt interventions were shown to be highly effective at reducing peak demand for intensive care unit (ICU) beds but also prolong the epidemic, in some cases resulting in more deaths long term. This happens because covid-19 related mortality is highly skewed towards older age groups. In the absence of an effective vaccination programme, none of the proposed mitigation strategies in the UK would reduce the predicted total number of deaths below 200 000.

Their overall conclusion was:

It was predicted in March 2020 that in response to covid-19 a broad lockdown, as opposed to a focus on shielding the most vulnerable members of society, would reduce immediate demand for ICU beds at the cost of more deaths long term. The optimal strategy for saving lives in a covid-19 epidemic is different from that anticipated for an influenza epidemic with a different mortality age profile.

This is consistent with the table above, and with the explanation given to me by Imperial, quoted above. The lockdown can be "too good", and optimisation for the medium/long term isn't the same as short-term optimisation.

I intend to run the Imperial code myself, but I am very glad to see this second opinion. There have been many responses to that BMJ paper, so I will devote a later blog post to it.

Concluding Comments

As we see, a great deal of multidisciplinary work is proceeding in many universities and other organisations around the world. Virologists, epidemiologists, clinicians, mathematicians and many others are involved in working out solutions to the issues raised in all countries by the SARS-CoV-2 pandemic.

A vaccine must be top of the list for dealing with it; until then, the best that we, as members of the public, can do is to recognise the key indicators for staying safe, some of them mentioned above in relation to the non-pharmaceutical interventions (NPIs).

We have seen that in the spring and summer period it was possible to make progress with opening up the economy. But as the easing of interventions coincides with autumn, the return to schools and universities, and the increasing pressure to revive not just our economy but also social interactions, cases have increased. The test will continue to be how to control the spread of the virus while allowing some "normal" activities to return.

The studies I have mentioned, as well as my own work, clearly indicate the complexity, and in some respects the counter-intuitive nature, of managing the epidemic. There is much more to do.


Model update following UK revision of Covid-19 deaths reporting

Introduction

On August 12th, the UK Government revised its counting methodology and reporting of deaths from Covid-19, bringing Public Health England’s reporting into line with that from the other home countries, Wales, Northern Ireland and Scotland. I have re-calibrated and re-forecast my model to adapt to this new basis.

Reasons for the change

Previously reported daily deaths in England had set no time limit between any individual’s positive test for Covid-19, and when that person died. 

The three other home countries in the UK applied a 28-day limit for this period. It was felt that, for England, this lack of a limit on the time duration resulted in over-reporting of deaths from Covid-19. Even someone who had died in a road accident, say, would have been reported as a Covid-19 death if they had ever tested positive, and had then recovered from Covid-19, no matter how long before their death this had happened.

This adjustment to the reporting was applied retroactively in England for all reported daily deaths, which resulted in a cumulative reduction of c. 5,000 in the UK reported deaths up to August 12th.

The UK Government says that it will also report on a 60-day basis (96% of Covid-19 deaths occur within 60 days, and 88% within 28 days), and also on the original basis for comparison, but these two sets of numbers are not yet available.

On the UK Government's web page describing the data reporting for deaths, it says: "Number of deaths of people who had had a positive test result for COVID-19 and died within 28 days of the first positive test. The actual cause of death may not be COVID-19 in all cases. People who died from COVID-19 but had not tested positive are not included and people who died from COVID-19 more than 28 days after their first positive test are not included. Data from the four nations are not directly comparable as methodologies and inclusion criteria vary."

As I have said before about the excess deaths measure, compared with counting deaths attributed to Covid-19, no measure is without its issues. The phrase in the Government definition above “People who died from COVID-19 but had not tested positive are not included…” highlights such a difficulty.

Model changes

I have adapted my model to this new basis, and present the related charts below.

  • Model forecast charts for UK deaths as at August 14th, compared with reported deaths, for 84.3% lockdown effectiveness on March 23rd, modified in steps (a single -0.3% easing on May 13th, with no further changes)
  • Chart 12: comparison of cumulative & daily reported & modelled deaths to 26th April 2021, adjusted by -0.3% on May 13th

This changed reporting basis reduced the cumulative UK deaths to August 12th from 46,706 to 41,329, a reduction of 5,377.

The fit of my model was better for the new numbers, requiring only a small increase in the initial March 23rd lockdown intervention effectiveness from 83.5% to 84.3%, and a single easing reduction to 84% on May 13th, to bring the model into good calibration up to August 14th.

It does bring the model forecast for the long term plateau for deaths down to c. 41,600, and, as you can see from the charts above, this figure is reached by about September 30th 2020.

Discussion

The relationship to case numbers

You can see from the first model chart that the plateau for "Recovered" people is nearly 3 million, which implies that the number of cases is also of the order of 3 million. This startling view is supported by a recent antibody study reported by the U.K. Government here.

This major antibody testing programme, led by Imperial College London and involving over 100,000 people, found that just under 6% of England's population – an estimated 3.4 million people – had antibodies to COVID-19, and were likely to have had the virus before the end of June.

The reported numbers in the Imperial College study could seem quite surprising, therefore, given that 14 million tests have been carried out in the U.K., but with only 313,798 positive tests reported as at 12th August (and bearing in mind that some people are tested more than once).

But the study is also in line with the estimate made by Prof. Alex de Visscher, author of my original model code, that the number of cases is typically under-reported by a factor of 12.5 – i.e. that only c. 8% of cases are detected and reported – an estimate made in the early days of the Italian outbreak, at a time when "test and trace" wasn't in place anywhere.

A further sanity check on my modelled case numbers, relative to the number of forecasted deaths, would be on the observed mortality from Covid-19 where this can be assessed.

A study by a London School of Hygiene & Tropical Medicine team carried out an analysis of the Covid-19 outbreak in the closed community of the Diamond Princess cruise ship in March 2020.

Adjusting for the delay from confirmation to death, this paper estimated the case and infection fatality ratios (CFR, IFR) for COVID-19 on the Diamond Princess as 2.3% (0.75%–5.3%) and 1.2% (0.38%–2.7%) respectively.

In broad terms, my model forecast of 42,000 deaths and up to 3 million cases would be a ratio of about 1.4%, and so the relationship between the deaths and cases numbers in my charts doesn’t seem to be unreasonable.
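That cross-check is simple arithmetic; as a sketch, using only the figures quoted above:

```python
# Sanity check of modelled deaths vs modelled cases against the Diamond
# Princess infection fatality ratio (IFR), using the figures quoted above.
model_deaths = 42_000
model_cases = 3_000_000

implied_ifr = model_deaths / model_cases
print(f"Implied IFR: {implied_ifr:.1%}")          # 1.4%

dp_ifr, dp_low, dp_high = 0.012, 0.0038, 0.027    # LSHTM Diamond Princess estimate
print("Within Diamond Princess interval:", dp_low <= implied_ifr <= dp_high)
```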

Changing rates of infection

I am not sure whether the current forecast for a further decline in the death rate will remain, in the light of continuing lockdown easing measures, and the local outbreaks.

Both the Office for National Statistics (ONS) and Public Health England (PHE) reported in early July a drop in the rate of decline in Covid-19 cases per 100,000 people in England.

Figure 2: The latest exploratory modelling shows the downward trend in those testing positive for COVID-19 has now levelled off

This was at the same time as the ONS reported that excess deaths have reduced to a level at or below the average for the last five years.

The number of deaths involving COVID-19 decreased for the 10th consecutive week

PHE reports this week that the infection rate is now more pronounced for under-45s than for over-45s, a reversal of the situation earlier in the pandemic. Overall case rates, however, remain lower than before; and although the rate of decline in the case rate has slowed for over-45s, and is nearly flat now, for under-45s the infection rate has started to increase slightly.

Covid-19 cases rate of decline slows more for under-45s

The impact on the death rate might well be lower than previously, owing to the lower fatality rates for younger people compared with older people.

Herd immunity

Closely related to the testing for Covid-19 antibodies is herd immunity, a topic I covered in some detail in my blog post on June 28th. There I discussed the relative positions of the USA and Europe with regard to the spike in case numbers the USA was experiencing from the middle of June, going on to talk about the Imperial College Coronavirus modelling, led by Prof. Neil Ferguson, and their pivotal March 16th paper.

That paper was much criticised by Prof. Michael Levitt, and others, as scare-mongering for the hundreds of thousands of deaths it mentioned if no action were taken. Such criticism ignored, to some extent, the rest of what I think was a much more nuanced paper than was appreciated, exploring, as it did, the various interventions that might be taken as part of what has become known as "lockdown".

The intervention options were also quite nuanced, embracing as they did (with outcomes coded as they were in the chart below) PC=school and university closure, CI=home isolation of cases, HQ=household quarantine, SD=large-scale general population social distancing, SDOL70=social distancing of those over 70 years for 4 months (a month more than other interventions).

PC=school and university closure, CI=home isolation of cases, HQ=household quarantine, SD=large-scale general population social distancing, SDOL70=social distancing of those over 70 years for 4 months (a month more than other interventions)

I had asked the lead author of the paper why the effectiveness of the three measures "CI_HQ_SD" in combination (home isolation of cases, household quarantine & large-scale general population social distancing), taken together (orange and yellow colour coding), was LESS than the effectiveness of either CI_HQ or CI_SD taken as a pair of interventions (mainly yellow and green colour coding).

The answer was in terms of any subsequent herd immunity that might or might not be conferred, given that any interventions as part of a lockdown strategy would be temporary. What would happen when they ceased?

The issue was that if the lockdown measures were too effective, then (assuming there were any immunity to be conferred for a usefully long period) the potential for subsequent herd immunity would be reduced. If there were no worthwhile period of immunity from catching Covid-19, then yes, a full lockdown would be no worse than any other partial strategy.

Sweden

I mention all this as background to a paper published in the Journal of the Royal Society of Medicine just as I started this blog post, on August 12th. It concerns the reasons why, as the paper's authors Eric Orlowski and David Goldsmith assert, four months into the COVID-19 pandemic, Sweden's prized herd immunity is nowhere in sight.

This is a somewhat polemical paper, as Sweden is often held up as an example of how countries can succeed in combating the SARS-CoV-2 pandemic by emulating its non-lockdown approach. I have been, and remain, surprised by such claims, and this paper now helps calibrate and articulate the underlying reasons.

Compared with the UK, Sweden has done little worse, if at all, despite resisting the lockdown approach (although its demographics and lifestyle characteristics are not necessarily comparable to the UK's); but compared with its more similar nearest neighbours – Norway, Denmark and Finland – Sweden has done far worse in terms of deaths and deaths per capita.

I think that, whether for political or for other related reasons, perhaps economic ones, even some otherwise sensible scientists are advocating the Swedish approach, somehow ignoring the more valid (and negative) comparisons between Sweden and the other Scandinavian countries in favour of more flattering comparisons with countries further afield – the UK, for example.

I have tried to remain above the fray, notably on the Twittersphere, but, at least on my own blog, I want to present what I see as a balanced assessment of the evidence.

That balance, in this case, strikes me like this: if there were an argument for the Swedish approach, then a higher level of herd immunity would have been the payoff for experiencing more immediate deaths in favour of a better outcome later.

But that doesn't seem to have happened, at least in terms of outcomes from testing for antibodies, as presented in this paper. As it says: "it is clear that nowhere is the prevalence of IgG seropositivity high (the maximum being around 20%) or climbing convincingly over time. This is especially clear in Sweden, where the authorities publicly predicted 40% seroconversion in Stockholm by May 2020; the actual IgG seroprevalence was around 15%."

Concluding comments

As I said in my August 4th post, the outbreaks we are seeing in some UK localities (Leicester, Manchester, Aberdeen and many others) seem to be the outcome of individual and multiple local super-spreading events.

These are quite hard to model, requiring very fine-grained data regarding the types and extent of population interactions, and the different effects of a range of intervention measures available nationally and locally, as I mentioned above, applied in different places at different times.

The reproduction number, R (even nationally) can be increased noticeably by such localised events, because of the lower overall incidence of cases in the UK (something we have seen in some other countries too, at this phase of the pandemic).

While most people nationally aren’t directly affected by these localised outbreaks, I believe that caution – social distancing where possible, for example – is still necessary.


Phenomenology & Coronavirus – modelling and curve-fitting

Introduction

I have been wondering for a while how to characterise the difference in approaches to Coronavirus modelling of cases and deaths between "curve-fitting" equations and the SIR differential equations approach I have been using – originally developed in Alex de Visscher's paper this year, which included code and data for other countries such as Italy and Iran, and which I have adapted for the UK.

Part of my uncertainty has its roots in being a very much lapsed mathematician, and part is because, although I have used modelling tools before, and worked in some difficult areas of mathematical physics, such as General Relativity and Cosmology, epidemiology is a new application area for me, with a wealth of practitioners and research history behind it.

Fitting curves such as the Sigmoid and Gompertz curves – members of a family of curves known as logistic or Richards functions – to the Coronavirus case or death numbers, as practised notably by Prof. Michael Levitt and his Stanford University team, has had success in predicting the situation in China, and is being applied in other localities too.

Michael's team have now worked out a way of reducing the predictive aspect of the Gompertz function to a straight-line predictor of reported data, a much more efficient use of computer time than some other approaches.
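One way to see why a straight line emerges (my own algebra, consistent with the Gompertz form used in that work, rather than a quotation from it): if the cumulative count follows a Gompertz curve, then

$$
N(t) = N_\infty\, e^{-e^{-k (t - t_0)}}
\quad\Longrightarrow\quad
\ln\!\left(\ln\frac{N_\infty}{N(t)}\right) = -k\,(t - t_0),
$$

so a double logarithm of the data, plotted against time, should fall on a straight line of slope $-k$, which can be fitted and extrapolated very cheaply.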

The SIR model approach, setting up a series of related differential equations (something I am more used to in other settings) that describe postulated mechanisms and rates of virus transmission in the human population (hence "mechanistic" modelling), looks beneath the surface presentation of the epidemic's case and death numbers and time-series charts, to model the growth (or otherwise) of the epidemic based on postulated characteristics of viral transmission and behaviour.

Research literature

In researching the literature, I have become familiar with some names that crop up frequently in this area over the years.

Focusing on some familiar and frequently recurring names, rather than more recent practitioners, might lead me to fall into "The Trouble with Physics" trap: the tendency, highlighted by Lee Smolin in his book of that name, of some University professors to recruit research staff "in their own image" – people working in the mainstream – rather than outliers whose work might be seen as off-the-wall, and less worthy in some sense.

In this regard, Michael Levitt's new work in the curve-fitting approach to the Coronavirus problem might be seen by others who have been working in the field for a long time as on the periphery (despite his 2013 Nobel Prize in Chemistry, awarded for work in computational biology, and his Stanford University position as Professor of Structural Biology).

His results are in direct contrast to those of some teams working as advisers to Governments around the world, who have in some cases applied fairly severe lockdowns, mostly for periods of several months. Using his curve-fitting methods (he has used Sigmoid curves before, prior to the current Gompertz curves), he broadly forecast, very early on, a much lower incidence of the virus going forward – successfully so in the case of China.

In particular the work of the Imperial College Covid response team, and also the London School of Hygiene and Tropical Medicine have been at the forefront of advice to the UK Government.

Some Governments have taken a different approach (Sweden stands out in Europe in this regard, for several reasons).

I am keen to understand the differences, or otherwise, in such approaches.

Twitter and publishing

Michael chooses to publish his work on Twitter, owing to a glitch (at least for a time) with his Stanford University laboratory's own publishing process. There are many useful links there to his work.

My own succession of blog posts (all more narrowly focused on the UK) has been automatically published to Twitter (a setting I use in WordPress) and also, more actively, shared by me on my FaceBook page.

But I stopped using Twitter routinely a long while ago (after 8,000+ posts) because, in my view, it is a limited communication medium (despite its reach), not allowing much room for nuanced posts. It attracts extremism at worst, conspiracy theorists to some extent, and, as with a lot of published media, many people who choose, on a "confirmation bias" basis, to read only what they think they might agree with.

One has only to look at the thread of responses to Michael’s Twitter links to his forecasting results and opinions to see examples of all kinds of Twitter users: some genuinely academic and/or thoughtful; some criticising the lack of published forecasting methods, despite frequent posts, although they have now appeared as a preprint here; many advising to watch out (often in extreme terms) for “big brother” government when governments ask or require their populations to take precautions of various kinds; and others simply handclapping, because they think that the message is that this all might go away without much action on their part, some of them actively calling for resistance even to some of the most trivial precautionary requests.

Preamble

One of the recent papers I have found useful in marshalling my thoughts on methodologies is a 2016 one by Gerardo Chowell, and it finally led me to pin down the differences in principle between the SIR differential equation approach I have been using (though with a 7-compartment model, not just three compartments) and the curve-fitting approach.

I had been thinking of analogies to illustrate the differences (which I will come to later), but this 2016 Chowell paper, in particular, encapsulated the technical differences for me, and I summarise that below. The Sergio Alonso paper also covers this ground.

Categorization of modelling approaches

Gerardo Chowell's 2016 paper summarises modelling approaches as follows.

Phenomenological models

A dictionary definition – “Phenomenology is the philosophical study of observed unusual people or events as they appear without any further study or explanation.”

Chowell states that phenomenological approaches for modelling disease spread are particularly suitable when significant uncertainty clouds the epidemiology of an infectious disease, including the potential contribution of multiple transmission pathways.

In these situations, phenomenological models provide a starting point for generating early estimates of the transmission potential and generating short-term forecasts of epidemic trajectory and predictions of the final epidemic size.

Such methods include curve-fitting, as used by Michael Levitt, where an equation (represented by a curve on a time-incidence graph (say) for the virus outbreak), with sufficient degrees of freedom, is used to replicate the shape of the observed data. Sigmoid and Gompertz functions (types of logistic or Richards functions) have been used for such fitting – they produce the familiar "S"-shaped curves we see for epidemics. The starting growth rate, the intermediate phase (with its inflection point) and the slowing down of the epidemic, all represented by that S-curve, can be fitted with the equation's parametric choices (usually three or four parameters).

This chart was put up by Michael Levitt on July 8th to illustrate curve fitting methodology using the Gompertz function. See https://twitter.com/MLevitt_NP2013/status/1280926862299082754
Chart by Michael Levitt illustrating his Gompertz function curve fitting methodology

A feature that some epidemic outbreaks share is that growth of the epidemic is not fully exponential, but is “sub-exponential” for a variety of reasons, and Chowell states that:

"Previous work has shown that sub-exponential growth dynamics was a common phenomenon across a range of pathogens, as illustrated by empirical data on the first 3-5 generations of epidemics of influenza, Ebola, foot-and-mouth disease, HIV/AIDS, plague, measles and smallpox."

Choices of appropriate parameters for the fitting function can allow such sub-exponential behaviour to be reflected in the chosen function’s fit to the reported data, and it turns out that the Gompertz function is more suitable for this than the Sigmoid function, as Michael Levitt states in his recent paper.

Once a curve-fit to reported data to date is achieved, the curve can be used to make forecasts about future case numbers.

Mechanistic and statistical models

Chowell states that "several mechanisms have been put forward to explain the sub-exponential epidemic growth patterns evidenced from infectious disease outbreak data. These include spatially constrained contact structures shaped by the epidemiological characteristics of the disease (i.e., airborne vs. close contact transmission model), the rapid onset of population behavior changes, and the potential role of individual heterogeneity in susceptibility and infectivity."

He goes on to say that "although attractive to provide a quantitative description of growth profiles, the generalized growth model (described earlier) is a phenomenological approach, and hence cannot be used to evaluate which of the proposed mechanisms might be responsible for the empirical patterns."

"Explicit mechanisms can be incorporated into mathematical models for infectious disease transmission, however, and tested in a formal way. Identification and analysis of the impacts of these factors can lead ultimately to the development of more effective and targeted control strategies. Thus, although the phenomenological approaches above can tell us a lot about the nature of epidemic patterns early in an outbreak, when used in conjunction with well-posed mechanistic models, researchers can learn not only what the patterns are, but why they might be occurring."

On the Imperial College team’s planning website, they state that their forecasting models (they have several for different purposes, for just these reasons I guess) fall variously into the “Mechanistic” and “Statistical” categories, as follows.

COVID-19 planning tools
Imperial College models use a combination of mechanistic and statistical approaches.

Mechanistic model: explicitly accounts for the underlying mechanisms of disease transmission and attempts to identify the drivers of transmissibility. Relies on more assumptions about the disease dynamics.

Statistical model: does not explicitly model the mechanism of transmission. Infers trends in either transmissibility or deaths from patterns in the data. Relies on fewer assumptions about the disease dynamics.

Mechanistic models can provide nuanced insights into severity and transmission but require specification of parameters – all of which have underlying uncertainty. Statistical models typically have fewer parameters. Uncertainty is therefore easier to propagate in these models. However, they cannot then inform questions about underlying mechanisms of spread and severity.

So Imperial College's "statistical" description matches Chowell's description of a phenomenological approach more closely, although it may not involve curve-fitting per se.

The SIR modelling framework, employing differential equations to represent postulated relationships and transitions between the Susceptible, Infected and Recovered parts of the population (at its most simple), falls into the Mechanistic model category.
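For reference, the simplest three-compartment version is the standard textbook form (my model's seven compartments elaborate on this same structure):

$$
\frac{dS}{dt} = -\frac{\beta S I}{N}, \qquad
\frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I,
$$

with transmission rate $\beta$, recovery rate $\gamma$, population $N = S + I + R$, and basic reproduction number $R_0 = \beta/\gamma$.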

Chowell makes the following useful remarks about SIR style models.

The SIR model and derivatives is the framework of choice to capture population-level processes. The basic SIR model, like many other epidemiological models, begins with an assumption that individuals form a single large population and that they all mix randomly with one another. This assumption leads to early exponential growth dynamics in the absence of control interventions and susceptible depletion and greatly simplifies mathematical analysis (note, though, that other assumptions and models can also result in exponential growth).

The SIR model is often not a realistic representation of the human behavior driving an epidemic, however. Even in very large populations, individuals do not mix randomly with one another—they have more interactions with family members, friends, and coworkers than with people they do not know.

This issue becomes especially important when considering the spread of infectious diseases across a geographic space, because geographic separation inherently results in nonrandom interactions, with more frequent contact between individuals who are located near each other than between those who are further apart.

It is important to realize, however, that there are many other dimensions besides geographic space that lead to nonrandom interactions among individuals. For example, populations can be structured into age, ethnic, religious, kin, or risk groups. These dimensions are, however, aspects of some sort of space (e.g., behavioral, demographic, or social space), and they can almost always be modeled in similar fashion to geographic space."

Here we begin to see the difference I was trying to identify between the curve-fitting approach and my forecasting method. At one level, one could argue that curve-fitting and SIR-type modelling amount to the same thing – choosing parameters that make the theorised data model fit the reported data.

But, whether it produces better or worse results, or with more work rather than less, SIR modelling seeks to understand and represent the underlying virus incubation period, infectivity, transmissibility, duration and related characteristics such as recovery and immunity (for how long, or not at all) – the why and how, not just the what.

The (nonlinear) differential equations are then solved numerically (rather than analytically with exact functions) and there does have to be some fitting to the initial known data for the outbreak (i.e. the history up to the point the forecast is being done) to calibrate the model with relevant infection rates, disease duration and recovery timescales (and death rates).

This makes it look similar in some ways to choosing appropriate parameters for any function (Sigmoid, Gompertz or General Logistics function (often three or four parameters)).

But the curve-fitting approach is reproducing an observed growth pattern (one might say top-down, or focused on outputs), whereas the SIR approach is setting virological and other behavioural parameters to seek to explain the way the epidemic behaves (bottom-up, or focused on inputs).
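As a concrete sketch of that bottom-up process – a minimal three-compartment example with assumed, uncalibrated parameters, not my seven-compartment model – the equations above can be integrated numerically like this:

```python
# Minimal mechanistic (SIR) example: the inputs are virological/behavioural
# rates, and the epidemic curve is an OUTPUT of integrating the equations.
# Parameter values here are assumptions for illustration, not calibrated.
import numpy as np
from scipy.integrate import solve_ivp

N = 67_000_000                    # population (UK-sized, rounded)
beta, gamma = 0.3, 0.1            # infection and recovery rates => R0 = 3

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    return [dS, dI, -dS - dI]     # dR = gamma * I

sol = solve_ivp(sir, [0, 300], [N - 100, 100, 0], dense_output=True)
t = np.linspace(0, 300, 301)
S, I, R = sol.sol(t)
print(f"Peak prevalence: {I.max()/N:.1%} on day {t[I.argmax()]:.0f}")
print(f"Final attack rate: {R[-1]/N:.1%}")
```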

Metapopulation spatial models

Chowell makes reference to metapopulation models: formulations used for the vast majority of population-based models that consider the spatial spread of human infectious diseases, and that address important public health concerns rather than theoretical model behaviour. These are beyond my scope, but could potentially address concerns about indirect impacts of the Covid-19 pandemic.

a) Cross-coupled metapopulation models

These models, which have been used since the 1940s, do not model the process that brings individuals from different groups into contact with one another; rather, they incorporate a contact matrix that represents the strength or sum total of those contacts between groups only. This contact matrix is sometimes referred to as the WAIFW, or “who acquires infection from whom” matrix.

In the simplest cross-coupled models, the elements of this matrix represent both the influence of interactions between any two sub-populations and the risk of transmission as a consequence of those interactions; often, however, the transmission parameter is considered separately. An SIR style set of differential equations is used to model the nature, extent and rates of the interactions between sub-populations.
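Schematically (standard notation, my summary rather than Chowell's own equations), the force of infection on sub-population $i$ becomes a sum over the contact matrix:

$$
\lambda_i(t) = \sum_j \beta_{ij}\, \frac{I_j(t)}{N_j},
\qquad
\frac{dS_i}{dt} = -\lambda_i(t)\, S_i(t),
$$

where $\beta_{ij}$ is the WAIFW matrix element coupling groups $i$ and $j$.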

b) Mobility metapopulation models

These models incorporate into their structure a matrix to represent the interaction between different groups, but they are mechanistically oriented and do this by considering the actual process by which such interactions occur. Transmission of the pathogen occurs within sub-populations, but the composition of those sub-populations explicitly includes not only residents of the sub-population, but visitors from other groups.

One type of model uses a “gravity” approach for inter-population interactions, where contact rates are proportional to group size and inversely proportional to the distance between them.
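In its usual form (a standard formulation, not quoted from Chowell), the gravity coupling between sub-populations $i$ and $j$ is

$$
C_{ij} \propto \frac{N_i^{\alpha} N_j^{\beta}}{d_{ij}^{\gamma}},
$$

with population sizes $N_i$, $N_j$, distance $d_{ij}$, and exponents $\alpha$, $\beta$, $\gamma$ fitted to observed mobility data.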

Another type described by Chowell uses a “radiation” approach, which uses population data relating to home locations, and to job locations and characteristics, to theorise “travel to work” patterns, calculated using attractors that such job locations offer, influencing workers’ choices and resulting travel and contact patterns.

Transportation and mobile phone data can be used to populate such spatially oriented models. Again SIR-style differential equations are used to represent the assumptions in the model about between whom, and how the pandemic spreads.

Summary of model types

We see that there is a range of modelling methods, successively requiring more detailed data, but which seek increasingly to represent the mechanisms (hence “mechanistic” modelling) by which the virus might spread.

We can see the key difference between curve-fitting (what I called a surface level technique earlier) and the successively more complex models that seek to work from assumed underlying causations of infection spread.

An analogy (picking up on the word “surface” I have used here) might refer to explaining how waves in the sea behave. We are all aware that out at sea, wave behaviour is perceived more as a “swell”, somewhat long wavelength waves, sometimes of great height, compared with shorter, choppier wave behaviour closer to shore.

I’m not here talking about breaking waves – a whole separate theory is needed for those – René Thom‘s Catastrophe Theory – but continuous waves.

A curve-fitting approach might well find a very good fit using trigonometric sine waves to represent the wavelength and height of the surface waves, even recognising that these are affected by the depth of the ocean; but it would need an understanding of hydrodynamics, as described, for example, by Bernoulli's Equation, to represent how and why the wavelength and wave height (and speed*) change depending on the depth of the water (and some other characteristics).

(*PS remember that the water moves, pretty much, up and down, in an elliptical path for any fluid “particle”, not in the direction of travel of the observed (largely transverse) wave. The horizontal motion and speed of the wave is, in a sense, an illusion.)

Concluding comments

There is a range of modelling methods, successively requiring more detailed data, from phenomenological (statistical and curve-fitting) methods, to those which seek increasingly to represent the mechanisms (hence “mechanistic”) by which the virus might spread.

We see the difference between curve-fitting and the successively more complex models that build a model from assumed underlying interactions, and causations of infection spread between parts of the population.

I do intend to cover the mathematics of curve fitting, but wanted first to be sure that the context is clear, and how it relates to what I have done already.

Models requiring detailed data about travel patterns are beyond my scope, but it is as well to set into context what IS feasible.

Setting an understanding of curve-fitting into the context of my own modelling was a necessary first step. More will follow.

References

I have found several papers very helpful in comparing modelling methods, embracing the Gompertz (and other) curve-fitting approaches, including Michael Levitt's own recent June 30th one, which explains his methods quite clearly.

Mathematical model types – Gerardo Chowell, September 2016

The Coronavirus Chronologies – Michael Levitt, 13th March 2020

COVID-19 Virus Epidemiological Model – Alex de Visscher, Concordia University, Quebec, 22nd March 2020

Empiric model for short-time prediction of Covid-19 spreading – Sergio Alonso et al, Spain, 19th May 2020

Universality in Covid-19 spread in view of the Gompertz function – Akira Ohnishi et al, Kyoto University, 22nd June 2020

Predicting the trajectory of any Covid-19 epidemic from the best straight line – Michael Levitt et al, 30th June 2020


Some thoughts on the current UK Coronavirus position

Introduction

A couple of interesting articles on the Coronavirus pandemic came to my attention this week. A recent one in National Geographic, on June 26th, highlights a startling comparison between the USA's case history, including its recent spike in case numbers, and the equivalent European data; it refers back to an older National Geographic article, from March, by Cathleen O'Grady, which referenced a specific chart based on work from the Imperial College Covid-19 Response team.

I noticed, and was interested in, that reference, following a recent interaction I had with that team regarding their influential March 16th paper. It prompted more thought about "herd immunity" from Covid-19 in the UK.

Meanwhile, my own forecasting model is still tracking published data quite well, although over the last couple of weeks I think the published rate of deaths is slightly above other forecasts as well as my own.

The USA

The recent National Geographic article from June 26th, by Nsikan Akpan, is a review of the current situation in the USA with regard to the recent increased number of new confirmed Coronavirus cases. A remarkable chart at the start of that article immediately took my attention:

7 day average cases from the US Census Bureau chart, NY Times / National Geographic

The thrust of the article concerned recommendations on public attitudes, activities and behaviour in order to reduce the transmission of the virus. Even the case rate – cases per 100,000 people – is worse, and growing, in the USA.

7 day average cases per 100,000 people from the US Census Bureau chart, NY Times / National Geographic

A link between this dire situation and my discussion below about herd immunity is provided by a reported statement in The Times by Dr Anthony Fauci, Director of the National Institute of Allergy and Infectious Diseases, and one of the lead members of the Trump Administration’s White House Coronavirus Task Force, addressing the Covid-19 pandemic in the United States.

Dr Fauci quotation as reported by The Times newspaper, 30th June 2020

If the take-up of the vaccine were 70%, and it were 70% effective, this would result in roughly 50% herd immunity (0.7 x 0.7 = 0.49).

If the innate characteristics of the SARS-CoV-2 virus don't change (with regard to infectivity and duration), and there is no as-yet-unknown human-to-human resistance to the infection that might limit its transmission (there has been some debate about this latter point, but this blog author is not a virologist), then 50% is unlikely to be a sufficient level of population immunity.
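To make that arithmetic concrete, here is a minimal sketch in Python (the 60% threshold anticipates the herd immunity formula (1) discussed later in this post, and all figures are illustrative only):

```python
# Illustrative arithmetic only: population immunity conferred by a vaccine is
# (take-up fraction) x (vaccine efficacy), ignoring any naturally acquired immunity.
def population_immunity(take_up, efficacy):
    return take_up * efficacy

print(population_immunity(0.70, 0.70))   # 0.49, i.e. roughly 50% immunity

# Take-up needed to reach a 60% immunity threshold with a 70% effective vaccine:
print(0.60 / 0.70)                       # ~0.857, i.e. ~86% take-up required
```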

My remarks later about the relative safety of vaccination (e.g. MMR) compared with the relevant diseases themselves (Rubella, Mumps and Measles in that case) might not be supported by the anti-vaxxers in the US (one of whose leading lights is the disgraced British doctor, Andrew Wakefield).

This is just one more complication the USA will have in dealing with the Coronavirus crisis. It is one, at least, that in the UK we won’t face to anything like the same degree when the time comes.

The UK, and implications of the Imperial College modelling

That article is an interesting read, but my point here isn't really about the USA (worrying though that is); it is about a reference the article makes to some work in the UK, at Imperial College, regarding the effectiveness of various interventions that have been, or might be, made in different combinations. That work was reported in National Geographic back on March 20th, a pivotal time in the UK's battle against the virus, and in the UK's decision-making process.

This chart reminded me of some queries I had made about the much-referenced paper by Neil Ferguson and his team at Imperial College, published on March 16th, that seemed (with others, such as from the London School of Hygiene & Tropical Medicine) to have persuaded the UK Government towards a new approach in dealing with the pandemic, in mid-to-late March.

Possible intervention strategies in the fight against Coronavirus

The thrust of this National Geographic article, by Cathleen O'Grady, was that we will need “herd immunity” at some stage, even though the Imperial College paper of March 16th (and other SAGE committee advice, including from the Scientific Pandemic Influenza Group on Modelling (SPI-M)) had persuaded the Government to enforce several social distancing measures, culminating by March 23rd in the combination of measures known as the UK “lockdown”, apparently abandoning the herd immunity approach.

The UK Government said that herd immunity had never been a strategy, even though it had been mentioned several times in the Government's daily public/press briefings by Sir Patrick Vallance (UK Chief Scientific Adviser (CSA)) and Prof. Chris Whitty (UK Chief Medical Officer (CMO)), the co-chairs of SAGE.

The particular part of the 16th March Imperial College paper I had queried with them, a couple of weeks ago, was this table, usefully colour-coded (by them) to allow the relative effectiveness of the potential intervention measures, in different combinations, to be assessed visually.


PC=school and university closure, CI=home isolation of cases, HQ=household quarantine, SD=large-scale general population social distancing, SDOL70=social distancing of those over 70 years for 4 months (a month more than other interventions)

Why was it, I wondered, that in this chart (on the very last page of the paper, and referenced within it) the effectiveness of the three measures “CI_HQ_SD” in combination (home isolation of cases, household quarantine & large-scale general population social distancing) taken together (orange and yellow colour coding), was LESS than the effectiveness of either CI_HQ or CI_SD taken as a pair of interventions (mainly yellow and green colour coding)?

The explanation for this was along the following lines.

It’s a dynamical phenomenon. Remember mitigation is a set of temporary measures. The best you can do, if measures are temporary, is go from the “final size” of the unmitigated epidemic to a size which just gives herd immunity.

If interventions are “too” effective during the mitigation period (like CI_HQ_SD), they reduce transmission to the extent that herd immunity isn’t reached when they are lifted, leading to a substantial second wave. Put another way, there is an optimal effectiveness of mitigation interventions which is <100%.

That is CI_HQ_SDOL70 for the range of mitigation measures looked at in the report (mainly a green shaded column in the table above).

While, for suppression, one wants the most effective set of interventions possible.

All of this is predicated on people gaining immunity, of course. If immunity isn’t relatively long-lived (>1 year), mitigation becomes an (even) worse policy option.

Herd Immunity

The impact of very effective lockdown on immunity in subsequent phases of lockdown relaxation was something I hadn't included in my own (single-phase) modelling. My model can only (at the moment) deal with one lockdown event, with a single-figure, averaged intervention effectiveness percentage starting at that point. Prior data is used to fit the model. It has served well so far, up to the point, which we have now reached, at which lockdown relaxations need to be modelled.

But in looking ahead to modelling lockdown relaxation, and the potential for a second (or multiple) wave(s), I had still been thinking only of higher % intervention effectiveness being better, without taking into account that negative feedback on herd immunity in any subsequent, more relaxed phase, other than through the effect of the changing comparative compartment sizes in the SIR-style model differential equations.

I covered the 3-compartment SIR model in my blog post on April 8th, which links to my more technical derivation here, and more complex models (such as the Alex de Visscher 7-compartment model I use in modified form, and that I described on April 14th) that are based on this mathematical model methodology.

In that respect, the ability of the epidemic to reproduce, at a given time t, depends on the relative sizes of the infected (I) and susceptible (S, uninfected) compartments. If the R (recovered) compartment members don't return to the S compartment (which would require a SIRS model, reflecting waning immunity, and transitions from R back to the S compartment), then the ability of the virus to find new victims reduces as more people are infected. I discussed some of these variations in my post here on March 31st.
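As an aside, here is a minimal sketch of that mechanism, in Python rather than my Octave model (the population size, the R0 of 3.5 and the seed of 100 infections are illustrative assumptions, not fitted values):

```python
# Minimal 3-compartment SIR stepping (daily Euler steps; illustrative parameters).
# beta is the infection rate and gamma = 1/d the recovery rate, so R0 = beta/gamma.
N = 1_000_000               # population (assumed)
beta, gamma = 0.25, 1 / 14  # R0 = 3.5 here, purely illustrative
S, I, R = N - 100.0, 100.0, 0.0

for day in range(300):
    new_infections = beta * S * I / N  # scales with S: slows as S is depleted
    recoveries = gamma * I
    S -= new_infections
    I += new_infections - recoveries
    R += recoveries

print(f"Susceptible fraction remaining: {S / N:.2f}")
```

Without a transition from R back to S (the SIRS case), S only ever falls, which is the sense in which the virus finds it progressively harder to locate new victims.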

My method might have been to reduce the % intervention effectiveness from time to time (reflecting the partial relaxation of some lockdown measures, as Governments are now doing) and to reimpose a higher % effectiveness if and when Rt (the calculated R value at some time t into the epidemic) began to get out of control. For example, I might relax lockdown effectiveness from 90% to 70% when Rt fell below 0.7, and increase it again to 90% when Rt rose above 1.2.

This was partly owing to the way the model is structured, and partly to the lack of disaggregated data available to me for populating anything more sophisticated. Even then, the mathematics (differential equations) of the cyclical modelling was going to be a challenge.
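For what it's worth, here is a sketch of that switching logic under stated assumptions: the 0.7/1.2 thresholds and the 70%/90% effectiveness figures are the illustrative ones above; the unmitigated R0 of ~4.5, population and seed are assumed; and Rt is estimated from the simulated weekly case growth (as one would estimate it from reported data), which gives the trigger a natural lag:

```python
import math

# Toy SIR with Rt-triggered switching between 70% and 90% intervention
# effectiveness (thresholds 0.7 and 1.2 as in the example above).
N = 1_000_000
beta0, gamma = 0.32, 1 / 14        # unmitigated R0 = beta0/gamma ~= 4.5 (assumed)
S, I, R = N - 100.0, 100.0, 0.0
effectiveness = 0.90               # start in the "harder" lockdown state
history = [I]
switches = 0

for day in range(1, 366):
    beta = beta0 * (1 - effectiveness)
    new_infections = beta * S * I / N
    recoveries = gamma * I
    S -= new_infections
    I += new_infections - recoveries
    R += recoveries
    history.append(I)
    if day >= 7:
        # Estimate Rt from the simulated weekly case growth, as one would
        # from reported data; for SIR, growth rate r = gamma * (Rt - 1).
        r = math.log(history[day] / history[day - 7]) / 7
        Rt_est = 1 + r / gamma
        if effectiveness == 0.90 and Rt_est < 0.7:
            effectiveness = 0.70   # relax the lockdown
            switches += 1
        elif effectiveness == 0.70 and Rt_est > 1.2:
            effectiveness = 0.90   # tighten it again
            switches += 1

print(f"Policy switches in a year: {switches}")
```

In this toy version the policy flip-flops every couple of weeks; a less twitchy trigger, or a minimum dwell time between policy changes, would be needed in practice.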

In the Imperial College paper, which does model the potential for cyclical peaks (see below), the “trigger” that is used to switch on and off the various intervention measures doesn’t relate to Rt, but to the required ICU bed occupancy. As discussed above, the intervention effectiveness measures are a much more finely drawn range of options, with their overall effectiveness differing both individually and in different combinations. This is illustrated in the paper (a slide presented in the April 17th Cambridge Conversation I reported in my blog article on Model Refinement on April 22nd):

What is being said here is that if we assume a temporary intervention, to be followed by a relaxation in (some of) the measures, then the state of the population's immunity at the point of change is an important by-product, to be taken into account in selecting the (combination of) measures taken. The optimal intervention for the medium/long-term future isn't necessarily the highest % effectiveness measure, or combined set of measures, today.

The phrase “herd immunity” has been an ugly one, and the public and press winced somewhat (as I did) when it was first used by Sir Patrick Vallance; but it is the standard term for what is often the objective in population infection situations, and the National Geographic articles are a useful reminder of that, to me at least.

The arithmetic of herd immunity, the R number and the doubling period

I covered the relevance and derivation of the R0 reproduction number in my post on SIR (Susceptible-Infected-Recovered) models on April 8th.

In the National Geographic article by Cathleen O'Grady, a useful rule of thumb was implied regarding the relationship between the herd immunity percentage required to control the growth of the epidemic and the much-quoted R0 reproduction number, interpreted sometimes as the number of people (in the susceptible population) whom one infected person infects, on average, at a given phase of the epidemic. When Rt, its value at a given time t into the epidemic, reaches one or less, so that one person is infecting one or fewer people on average, the epidemic is regarded as having stalled and as being under control.

Herd immunity and R0

One example given was measles, which was stated to have a possible starting R0 value of 18, in which case almost everyone in the population needs to act as a buffer between an infected person and a new potential host. Thus, if the starting R0 number is to be reduced from 18 to Rt ≤ 1, measles needs a VERY high rate of herd immunity: around 17/18ths, or ~95%, of people need to be immune (non-susceptible). For measles, this is usually achieved by vaccination, not by dynamic disease growth. (Dr Fauci had mentioned the over-95% measles vaccination success rate in the US in the quotation reported above.)

Similarly, if Covid-19, as seems to be the case, has a lower starting infection rate (R0 number) than measles, nearer to between 2 and 3 (2.5, say, although this is probably less than it was in the UK during March; 3 to 4 might be nearer, given the epidemic case doubling times we were seeing at the beginning*), then the National Geographic article says that herd immunity should be achieved when around 60% of the population becomes immune to Covid-19. The required herd immunity H% is given by H% = (1 - 1/2.5) × 100% ≈ 60%.

Whatever the real Covid-19 innate infectivity, or reproduction number R0 (but assuming R0>1 so that we are in an epidemic situation), the required herd immunity H% is given by:

H% = (1 - 1/R0) × 100%   (1)

(*I had noted that a loose figure of 80% was mentioned by Prof. Chris Whitty (CMO) in an early UK daily briefing, when herd immunity was first raised, before he went on to mention 60% as more reasonable (my words). 80% herd immunity would correspond to R0 = 5 in the formula above.)
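A quick check of equation (1) against the numbers quoted above, as a minimal Python sketch (the R0 values are the ones referenced in the text):

```python
def herd_immunity_pct(R0):
    """Equation (1): required herd immunity H% = (1 - 1/R0) * 100."""
    return (1 - 1 / R0) * 100

for R0 in (18, 2.5, 5):
    print(f"R0 = {R0}: H = {herd_immunity_pct(R0):.1f}%")
# R0 = 18 : H = 94.4% (measles, the ~95% above)
# R0 = 2.5: H = 60.0% (the National Geographic Covid-19 figure)
# R0 = 5  : H = 80.0% (corresponding to the 80% 'loose talk')
```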

R0 and the Doubling time

As a reminder, I covered the topic of the cases doubling time TD here, and showed how it is related to R0 by the formula:

R0 = d × ln(2)/TD   (2)

where d is the disease duration in days.

Thus, as I said in that post, for a doubling period TD of 3 days, say, and a disease duration d of 2 weeks (14 days), we would have R0 = 14 × 0.7/3 ≈ 3.27 (taking ln(2) ≈ 0.7).

If the doubling period were 4 days, then we would have R0 = 14 × 0.7/4 = 2.45.
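As a quick check of equation (2), a minimal Python sketch using the exact ln(2) rather than the 0.7 rounding above (d = 14 days assumed, as in the text):

```python
import math

def R0_from_doubling(TD, d=14):
    """Equation (2): R0 = d * ln(2) / TD, for disease duration d days."""
    return d * math.log(2) / TD

for TD in (3, 4):
    print(f"TD = {TD} days: R0 = {R0_from_doubling(TD):.2f}")
# TD = 3 days: R0 = 3.23 (the ~3.27 above comes from rounding ln(2) to 0.7)
# TD = 4 days: R0 = 2.43 (~2.45 with ln(2) rounded to 0.7)
```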

As late as April 2nd, Matt Hancock (UK Secretary of State for Health) was saying that the doubling period was between 3 and 4 days (although 3 days and 4 days lead to quite different outcomes in an exponential growth situation), as I reported in my article on 3rd April. The Johns Hopkins comparative charts around that time were showing the UK doubling period for cases as a little under 3 days (see my March 24th article on this topic, where the following chart is shown).

In my blog post of 31st March, I reported a BBC article on the epidemic, where the doubling period for cases was shown as 3 days, but for deaths it was between 2 and 3 days (a Johns Hopkins University chart).

Doubling time and Herd Immunity

Doubling time TD(t) and the reproduction number Rt can be measured at any time t during the epidemic, and their measured values will depend on any interventions in place at the time, including various versions of social distancing. Once any social distancing reduces or stops, these measured values are likely to change (TD downwards and Rt upwards) as the virus finds it relatively easier to find victims.

Assuming no pharmacological interventions (e.g. vaccination) at such a time t, the growth of the epidemic at that point will depend on its underlying R0 and duration d (innate characteristics of the virus, if it hasn't mutated**) and on the prevailing immunity in the population: herd immunity.

(**Mutation of the virus would be a concern. See this recent paper (not peer reviewed).)

The doubling period TD(t) might, therefore, have become longer after a phase of interventions, with correspondingly Rt < R0, leading to some lockdown relaxation; but with any such interventions reduced or removed, the subsequent disease growth rate will depend on the interactions between the disease's innate infectivity, its duration in any infected person, and how many uninfected people it can find, i.e. those without herd immunity at that time.

These factors will determine the doubling time as this next phase develops. Bearing these dynamics in mind, it is interesting to see how all three of these factors, TD(t), Rt and H(t), might be related (remembering the time dependence: we might be at time t, and not necessarily at the outset of the epidemic, time zero).

Eliminating R0 between the two equations (1) and (2) above, we find:

H = 1 - TD/(d × ln(2))   (3)

So for doubling period TD = 3 days and disease duration d = 14 days, H ≈ 0.7; i.e. the required herd immunity H% is about 70% for control of the epidemic. (In this case, incidentally, remember from equation (2) that R0 = 14 × 0.7/3 ≈ 3.27.)

(Presumably this might be why Dr Fauci would settle for a 70-75% effective vaccine (the H% number), but that would assume 100% take-up, or, if less than 100%, additional immunity acquired by people who have recovered from the infection. But that acquired immunity, if it exists (I'm guessing it probably would), is of unknown duration. So many unknowns!)

For this example, with its 14-day infection duration d, let us explore the reverse implication by requiring Rt to tend to 1 (postulating, in a somewhat mathematically pathological way, that the epidemic has stalled at time t), expressing equation (2) at time t as:

TD(t) = d × ln(2)/Rt   (4)

then we see that TD(t) = 14 × ln(2) ≈ 10 days, at this time t, for Rt ≈ 1.

Thus a sufficiently long doubling period, with the necessary minimum doubling period depending on the disease duration d (14 days in this case), is equivalent to the Rt value being low enough for the growth of the epidemic to be controlled, i.e. Rt ≤ 1, so that one person infects one or fewer people on average.

Confirming this, equation (3) tells us, for the parameters in this (somewhat mathematically pathological) example, that with TD(t)=10 and d=14,

H(t) = 1 - 10/(14 × ln(2)) ≈ 1 - 1.03 ≈ 0, at this time t.

In this situation, the herd immunity H(t) required at this time t is notionally zero, as we are not in epidemic conditions (Rt ≈ 1). This is not to say that the epidemic cannot restart; it simply means that if these conditions are maintained, with Rt reducing to 1 and the doubling period correspondingly long enough (possibly achieved through temporary social distancing across whole or part of the population, which might be hard to sustain), then we are controlling the epidemic.
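Putting equations (3) and (4) together for this example, as a short Python sketch (d = 14 days, as above):

```python
import math

def herd_immunity(TD, d=14):
    """Equation (3): H = 1 - TD / (d * ln(2))."""
    return 1 - TD / (d * math.log(2))

def doubling_time(Rt, d=14):
    """Equation (4): TD(t) = d * ln(2) / Rt."""
    return d * math.log(2) / Rt

print(f"TD = 3 days : H = {herd_immunity(3):.2f}")        # ~0.69, i.e. ~70%
print(f"Rt = 1      : TD = {doubling_time(1):.1f} days")  # ~9.7, i.e. ~10 days
print(f"TD = 10 days: H = {herd_immunity(10):.2f}")       # ~-0.03, i.e. notionally zero
```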

It is when the interventions are reduced, or removed altogether that the sufficiency of % herd immunity in the population will be tested, as we saw from the Imperial College answer to my question earlier. As they say in their paper:

Once interventions are relaxed (in the example in Figure 3, from September onwards), infections begin to rise, resulting in a predicted peak epidemic later in the year. The more successful a strategy is at temporary suppression, the larger the later epidemic is predicted to be in the absence of vaccination, due to lesser build-up of herd immunity.

Herd immunity summary

Usually herd immunity is achieved through vaccination (e.g. the MMR vaccination for Rubella, Mumps and Measles). It involves less risk than the symptoms and possible side-effects of the disease itself (for some diseases at least, if not for chicken-pox, for which I can recall parents hosting chicken-pox parties to get it over and done with!)

The issue, of course, with Covid-19, is that no one yet knows whether such a vaccine can be developed, whether it would be safe for humans, whether it would work at scale, for how long it might confer immunity, and what the take-up would be.

Until a vaccine is developed, and until the duration of any Covid-19 immunity (of recovered patients) is known, this route remains unavailable.

Hence, as the National Geographic article says, there is continued focus on social distancing, as an effective part of even a somewhat relaxed lockdown, to control transmission of the virus.

Is there an uptick in the UK?

All of the above context serves as a (lengthy) introduction to why I am monitoring the published figures at the moment, as the UK has been (informally as well as formally) relaxing some aspects of its lockdown, imposed on March 23rd, with gradual changes since about the end of May, both in the public's response and in some of the Government interventions.

My own forecasting model (based on the Alex de Visscher MatLab code, with my variations, implemented in the free Octave version of the MatLab code-base) is still tracking published data quite well, although over the last couple of weeks I think the published rate of deaths is slightly above other forecasts, as well as my own.

Worldometers forecast

The Worldometers forecast is showing higher forecast deaths in the UK than when I reported before – 47,924 now vs. 43,962 when I last posted on this topic on June 11th:

Worldometers UK deaths forecast based on Current projection scenario by Oct 1, 2020
My forecasts

The equivalent forecast from my own model still stands at 44,367 deaths by September 30th, as can be seen from the charts below; but because we are still near the weekend, when the UK reported numbers are always lower owing to data collection and reporting issues, I shall wait a day or two before updating my model fit.

But having been watching this carefully for a few weeks, I do think that some unconscious public relaxation of social distancing in the fairer UK weather (in parks, at demonstrations and at beaches, as reported in the press since at least early June) might have something to do with (a) case numbers, and (b) subsequent numbers of deaths, not falling at the expected rate. Here are two of my own charts that illustrate the situation.

In the first chart, we see the reported and modelled deaths to Sunday 28th June; this chart shows clearly that since the end of May the reported deaths have begun to exceed the model prediction, which had been quite accurate (even slightly pessimistic) up to that time.

Model vs. reported deaths, to June 28th 2020
Model vs. reported deaths, linear scale, to June 28th 2020

In the next chart, I show the outlook to September 30th (comparable date to the Worldometers chart above) showing the plateau in deaths at 44,367 (cumulative curve on the log scale). In the daily plots, we can see clearly the significant scatter (largely caused by weekly variations in reporting at weekends) but with the daily deaths forecast to drop to very low numbers by the end of September.

Model vs. reported deaths, cumulative and daily, to Sep 30th 2020
Model vs. reported deaths, log scale, cumulative and daily, to Sep 30th 2020

I will update this forecast in a day or two, once this last weekend’s variations in UK reporting are corrected.