Categories
Coronavirus Covid-19 Michael Levitt

Michael Levitt’s analysis of European Covid-19 data

Introduction

I promised in an earlier blog post to present Prof. Michael Levitt’s analysis of Covid-19 data published on the EuroMOMO site for European health data over the last few years.

EuroMOMO

EuroMOMO is the European Mortality Monitoring Project, based in Denmark. Its website states that the overall objective of the original project was to design a routine public health mortality monitoring system aimed at detecting and measuring, on a real-time basis, the excess number of deaths related to influenza and other possible public health threats across participating European countries. More is available here.

The Excess Deaths measure

We have heard a lot recently about using the measure of “excess deaths” (on an age-related basis) as our own Office for National Statistics (ONS) works on establishing a more accurate measure of the impact of the Coronavirus (SARS-CoV-2) epidemic in the UK.

I think it is generally agreed that this is a better measure – a more complete one perhaps – than those currently used by the UK Government, and some others, because there is no argument about what is and what isn’t a Covid-19 death. It’s just the excess deaths over and above the seasonal, age-related numbers for the geography, country or community concerned, with the excess attributed to the novel Coronavirus SARS-CoV-2, the new kid on the block.

That attribution, though, might have its own different issues, such as the inclusion (or not) of deaths related to people’s reluctance to seek hospital help for other ailments, and other deaths arising from the indirect consequences of lockdown related interventions.

There is no disputing, however, that the UK Government figures for deaths have been incomplete from the beginning. They were updated a few weeks ago to include Care Homes on a retrospective and continuing basis (what the Government calls “all settings”), but some reporting of the ONS figures has indicated that when the Government “all settings” figure stood at 35,341, as of 19th May, the overall “excess deaths” figure might have been as high as 55,000. Look here for more detail and updates direct from the ONS.

The UK background during March 2020

The four policy stages the UK Government initially announced in early March were: Containment, Delay, Research and Mitigate, as reported here. It fairly soon became clear (after the outbreak was declared a pandemic by the WHO on March 11th) that the novel Coronavirus SARS-CoV-2 could not be contained, seeing what was happening in Italy, and case numbers growing in the UK, with deaths starting to be recorded from 10th March (at that time recorded as caused by Covid-19 only on the basis of a positive test, in hospital).

The UK Government has since denied that “herd immunity” was ever a policy, but it was mentioned several times in early March, pre-lockdown (lockdown came on March 23rd), by Government advisers Sir Patrick Vallance (Chief Scientific Adviser, CSA) and Prof. Chris Whitty (Chief Medical Officer, CMO) in the UK Government daily briefings, with even a mention of a 60% population infection proportion being needed to achieve it (while saying, at the same time, that 80% might be loose talk (my paraphrase)).

If herd immunity wasn’t a policy, it’s hard to understand why it was proactively mentioned by the CSA and CMO, at the same time as the repeated slogan Stay Home, Protect the NHS, Save Lives. This latter advice was intended to keep the outbreak within bounds that the NHS could continue to handle.

The deliberations of the SAGE Committee (Scientific Advisory Group for Emergencies) are not published, but senior advisers (including the CSA and CMO) sit on it, amongst many others (50 or so, not all scientists or medics). Given the references to herd immunity in the daily Government updates at that time, it’s hard to believe that herd immunity wasn’t at least regarded as a beneficial(?!) by-product of not requiring full lockdown at that time.

Full UK lockdown was announced on March 23rd; according to reports, this was 9 days after it had been accepted by the UK Government as inevitable (as a result of the 16th March Imperial College paper).

The Sunday Times (ST) of 24th May 2020 carried the story of how the forecasters took charge in mid-March as the UK Government allegedly dithered. The tweets of the ST’s Insight team editor, Jonathan Calvert, and of his deputy editor, George Arbuthnott, refer, as does the related Apple podcast.

Prof. Michael Levitt

Michael (a 2013 Nobel Laureate, in Chemistry, for his work in computational biology) correctly forecast in February the extent the Chinese outbreak (centred on Wuhan in Hubei province) would reach by the end of March. I first reported this in my blog post of 24th March, as his work on China, and his amazingly accurate forecast, were reported in the UK that day, which I saw in The Times newspaper.

On May 18th I reported in my blog further aspects of Michael’s outlook on the modelling by Imperial College, the London School of Hygiene and Tropical Medicine and others. He says, and I paraphrase his words, that those models caused western countries to trash their economies through the blanket measures they took, frightened into alternative action (away from what seems to have been, at least in part, a “herd immunity” policy) by the forecasts from their advisers’ models, reported in some publications as between 200,000 and 500,000 deaths.

Michael and I have been in direct touch since early May, when a mutual friend, Andrew Ennis, mentioned my Coronavirus modelling to him in his birthday wishes! We were all contemporaries at King’s College, London in 1964-67; they in Physics, and I in Mathematics.

I mentioned Michael’s work in a further, recent blog post on May 20th, covering his findings on the EuroMOMO data and contrasting them with the Cambridge Conversation of 14th May; that is when I said I would post a blog article purely on his EuroMOMO work, and this post is the delivery of that promise.

I have Michael’s permission (as do others who have received his papers) to publicise his recent EuroMOMO findings (his earlier work having been focused on China, as I have said, and then on the rest of the world).

He is a senior Professor of Structural Biology at Stanford University School of Medicine, CA.

I’m reporting, and explaining a little (where possible!), Michael’s findings just now, rather than deeply analysing them – I’m aware that he is a Nobel prize-winning data scientist, and I’m not (yet!) 😀

This blog post is therefore pretty much a recapitulation of his work, with some occasional explanatory commentary.

Michael’s EuroMOMO analysis

What follows is the content of several tweets published by Michael, at his account @MLevitt_NP2013, showing that in Europe, COVID19 is somewhat similar to the 2017/18 European Influenza epidemic, both in the total number of excess deaths and in the age ranges of those deaths.

Several other academics have also presented data that, whatever the absolute numbers, indicate that there is a VERY marked (“startling” was Prof. Sir David Spiegelhalter’s word) age dependency in the risk of dying from Covid-19. I return to that theme at the end of the post.

The EuroMOMO charts and Michael’s analysis

In summary, COVID19 Excess Deaths plateau at 153,006, about 15% more than for the 2017/18 Flu, with similar age-range counts. The following charts indicate the support for his view, including the correction of a large error Michael has spotted in one of the supporting EuroMOMO charts.

Firstly, here are the summary Excess Death Charts for all ages in 2018-20.

FIGURE 1. EuroMOMO excess death counts for calendar years 2018, 2019 & 2020

The excess deaths number for COVID19 is easily read as the difference between the cumulative counts at Week 19 (12 May ’20) and Week 8 (27 Feb ’20). The same is true of the 2018 part of the 2017/18 Influenza season. Getting the 2017 part of that season is harder. These notes are added to aid those interested in following the calculation, and hopefully to help them point out any errors.

The following EuroMOMO chart defines how excess deaths are measured.

FIGURE 2. EuroMOMO’s total and other categories of deaths

This is EuroMOMO’s Total (the solid blue line), Baseline (dashed grey line) and ‘Substantial increase’ (dashed red line) for years 2016 to the present. Green circles mark 2017/18 Flu and 2020 COVID-19. The difference between Total Deaths and Baseline Deaths is Excess Deaths.

Next, then, we see Michael’s own summary of the figures found from these earlier charts:

Table 3. Summary for 2020 COVID19 Season and 2017/18 Influenza Season.

Owing to baseline issues, we cannot estimate Age Range Mortality for the 2017 part of the Influenza season, so we base our analysis on the 2018 part, where data is available from EuroMOMO.

We see also the steep age dependency in deaths from under 65s to over 85s. I’ll present at the end of this post some new data on that aspect (it’s of personal interest too!)

Below we see EuroMOMO Excess Deaths from 2020 Week 8, now (on the 14th May) matching reported COVID Deaths @JHUSystems (Johns Hopkins University) perfectly (to better than 2%). In earlier weeks the reported deaths were lower, and Michael isn’t sure why; but the agreement allows him to do this in-depth analysis & comparison with EuroMOMO influenza data.

FIGURE 4. The weekly EuroMOMO Excess Deaths are read off their graphs by mouse-over.

The weekly reported COVID19 deaths are taken from the Johns Hopkins University Github repository. The good agreement is an encouraging sign of reliable data, but there is an unexplained delay in the EuroMOMO numbers.

Analysis of Europe’s Excess Deaths is hard: EuroMOMO provides beautiful plots, but extracting data requires hand-recorded mouse-overs on-screen*. COVID19 2020 – weeks 8-19, & Influenza 2018 – weeks 01-16 are relatively easy for all age ranges (totals 153,006 & 111,226). Getting the Dec. 2017 Influenza peak is very tricky.

(*My son, Dr Tom Sutton, has been extracting UK data from the Worldometers site for me, using a small but effective Python “scraping” script he developed. It is feasible, but much more difficult, to do this on the EuroMOMO site, owing to the vector coordinate definitions of the graphics, and the Document Object Model they use for their charts.)
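For anyone wanting to try something similar, here is a minimal sketch of the kind of table scrape that works on a Worldometers-style page. It is not Tom’s script; it assumes the page serves its main data as a static HTML table, and the column positions are assumptions located heuristically.

```python
import pandas as pd
import requests

URL = "https://www.worldometers.info/coronavirus/"

def uk_row():
    html = requests.get(URL, timeout=30).text
    tables = pd.read_html(html)         # parse every <table> on the page
    main = max(tables, key=len)         # assume the largest table is the per-country one
    country_col = main.columns[1]       # assumption: the second column holds country names
    mask = main[country_col].astype(str).str.contains("United Kingdom|UK", na=False)
    return main[mask]

if __name__ == "__main__":
    print(uk_row().T)                   # transpose: a single row is easier to read vertically
```

The EuroMOMO charts, by contrast, are drawn as vector graphics within the page’s DOM, which is why the same approach does not carry over easily.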

Figure 5. Deaths graphs from EuroMOMO allow the calculation of Excess deaths

FIGURE 5. The Excess deaths for COVID19 in 2020 and for Influenza in 2018 are easily read off the EuroMOMO graphs by hand recording four mouse-overs.

The same is done for all the different age ranges, allowing accurate determination of the age range mortalities. For COVID19, there are 174,801 minus 21,795 = 153,006 Excess Deaths. For 2018 Influenza, the difference is 111,226 minus zero = 111,226 Excess Deaths.
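The arithmetic behind those totals is simple enough to check directly from the quoted mouse-over readings; a quick sketch in Python:

```python
# Cumulative excess-death readings quoted above (hand-recorded mouse-overs).
covid_2020 = 174_801 - 21_795        # weeks 8-19 of 2020
flu_2018   = 111_226 - 0             # weeks 01-16 of 2018

print(covid_2020, flu_2018)          # 153006 111226

# Adding the 2017 part of the flu season estimated later in this post (21,972)
# gives the roughly 15% difference quoted in the summary above.
flu_2017_18 = flu_2018 + 21_972
print(round(100 * (covid_2020 / flu_2017_18 - 1)))   # 15 (%)
```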

Michael exposes an error in the EuroMOMO charts

In the following chart, it should be easy to calculate the excess again, as mouse-over of the charts on the live EuroMOMO site gives two values a week: the Actual death count & the Baseline value.

Tests on the COVID19 peak gave a total of 127,062 deaths & not 153,006. Plotting a table & superimposing the real plot showed why. Baseline values are actually ‘Substantial increase’ values!! Wrong labelling?

Figure 6. Actual death count & Baseline value

In Figure 6, Excess Deaths can also be determined from the plots of Total and Baseline Deaths with week number. Many more numbers need to be recorded but the result would be the same.

TABLE 7. The pairs of numbers recorded from EuroMOMO between weeks 08 and 19

TABLE 7. The pairs of numbers recorded from EuroMOMO between weeks 08 and 19 of 2020 allow the Excess Deaths to be determined in a different way from FIG. 5. The total Excess Deaths (127,062) should be the same as before (153,006) but it is not. Why? (Mislabelling of the EuroMOMO graph? What is “Substantial increase” anyway, and why is it there? – BRS).

FIGURE 8. Analysing what is wrong with the EuroMOMO Excess Deaths count

FIGURE 8. The lower number in TABLE 7 is in fact not the Baseline Death value (grey dashed line) but the ‘Substantial increase’ value (red dashed line). Thus the numbers in the table are not Excess Deaths (Total minus Baseline level) but Total minus ‘Substantial increase’ level. The difference is found by adding 12×1,981** to 127,062 to get 153,006. This means that the baseline is about 2,000 deaths a week below the red line. This cannot be intended and is a serious error in EuroMOMO. Michael has been looking for someone to help him contact them. (**(153,006 – 127,062)/12 = 25,944/12 = 2,162. So shouldn’t we be adding 12×2,162, Michael? – BRS)
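For what it’s worth, the per-week gap implied by the two totals quoted above can be checked in a couple of lines, which is exactly the point my footnote raises:

```python
# The per-week gap between 'Baseline' and 'Substantial increase' implied by the two totals.
total_vs_baseline    = 153_006   # Total minus Baseline (FIG. 5)
total_vs_substantial = 127_062   # Total minus 'Substantial increase' (TABLE 7)
weeks = 12                       # weeks 08 to 19 of 2020

gap_per_week = (total_vs_baseline - total_vs_substantial) / weeks
print(round(gap_per_week))       # 2162, versus the 1,981 (23,774/12) used in the text
```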

Reconciling the numbers, and age range data

Requiring the two COVID19 death counts to match means reducing the Baseline value by 23,774/12 = 1,981**. Mouse-overs of 2017 weeks 46 to 52 gave the table below. Negative Excess Deaths meant that the 2017 Influenza outbreak began in Week 49, not 46. Michael tried to get Age Range data for 2017, but the table just uses 2018 Influenza data. (**see above also – same issue. Should be 25,944/12 = 2,162? – BRS)

TABLE 9. Estimating the Excess Deaths for the 2017 part of the 2017/18 influenza season

In TABLE 9, Michael tries to estimate the Excess Deaths for the 2017 part of the 2017/18 Influenza season by recording pairs of mouse-overs for seven weeks (46 to 52) and four age ranges. Because the Total Deaths are not always higher than the ‘Substantial increase’ base level, he uses differences as a sanity check. The red numbers for weeks 46 to 48 show that the Excess Deaths are negative and that the Influenza season did not start until week 49 of 2017.

TABLE 10. We try to combine the two parts of the 2017/18 Influenza season

TABLE 10 commentary. We try to combine the two parts of the 2017/18 Influenza season. The values for 2018 are straightforward as they are determined as shown in Fig. 5. For 2017, we need to use the values in Table 9 and add the baseline correction, because the EuroMOMO mouse-overs are wrong, giving as they do the ‘Substantial increase’ value instead of the ‘Baseline’ value. We can use the same correction of 1,981** (see my prior comments on this number – BRS) deaths per week as determined for all COVID19 data, but we do not know what the correction is for other age ranges. An attempt to assume that the correction is proportional to the 2017 number of deaths in each age range gives strange age range mortalities.
Thus, we choose to use the total for 2017 (21,972) but give the age range mortalities just from the deaths in 2018, as the 2017 data is arcane, unreliable or flawed.

Michael’s concluding statement

COVID19 is similar to Influenza only in total and in age range excess mortality. Flu is a different virus, has a safe vaccine & is much less of a threat to heroic medical professionals.

Additional note on the age dependency of Covid-19 risk

In my earlier blog post, reporting the second Cambridge Conversation webinar I attended, the following slide from Prof. Sir David Spiegelhalter was one that drew a sharp distinction between the risks to people in different age ranges:

Age related increase in Covid-19 death rates

Prof. Spiegelhalter’s own Twitter account is also quite busy, and this particular chart was mentioned there, and also on his blog.

This week I was sent this NHS pre-print paper (pending peer review, as many Coronavirus research papers are) to look at the various Covid-19 risk factors and their dependencies, and to explain them. The focus of the 20-page paper is the potential for enhanced risk for people with Type-1 or Type-2 Diabetes, but Figure 2 towards the end of that paper shows the relative risk ratios for a number of other parameters too, including age range, gender, deprivation and ethnic group.

Risk ratios for different population characteristics

This chart extract, from the paper by corresponding author Prof. Jonathan Valabhji (Imperial College, London & NHS) and his colleagues, indicates a very strong age dependency in Covid-19 risk. The risk for a white woman under 40, with no deprivation factors and no diabetes, is 1% of that of the control person (a 60-69 year old white woman, also with no deprivation factors and no diabetes). A white male under 40 with otherwise similar characteristics would have a risk of 1.94% of that of the control person.

Reduction factors also apply in the two 10-year age bands 40-49 and 50-59: for a white woman (no deprivation or diabetes) in those age ranges, the risks are 11% and 36% of the control risk respectively.

At 70-79, and above 80, the risk enhancement factors owing to age are x 2.63 and x 9.14 respectively.
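Putting the ratios quoted above side by side (relative to the control group) makes the gradient very plain; this is purely illustrative, using only the figures already given:

```python
# Risk ratios quoted above, relative to the control group
# (60-69 year-old white woman, no deprivation, no diabetes = 1.0).
risk_ratio = {
    "under 40 (F)": 0.01,     # "1% of the risk"
    "under 40 (M)": 0.0194,
    "40-49 (F)":    0.11,
    "50-59 (F)":    0.36,
    "60-69 (F)":    1.00,     # control group
    "70-79":        2.63,
    "80+":          9.14,
}
for group, rr in risk_ratio.items():
    print(f"{group:<14} {rr:>6.2f}x the control-group risk")
```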

So there is some agreement between EuroMOMO, Prof. Michael Levitt, Prof. Sir David Spiegelhalter and the Prof. Jonathan Valabhji et al. paper (at least on the principle of age dependency of risk, as represented by the data, if not on the quantum): that increasing age beyond middle age is a significant indicator of enhanced risk from Covid-19.

In some other respects, Michael is at odds with forecasts made by Prof. Neil Ferguson’s Imperial College group (and, by inference, also with the London School of Hygiene and Tropical Medicine) and with the analysis of the Imperial College paper by Prof. Spiegelhalter.

I reported this in my recent blog post on May 18th concerning the Cambridge Conversation of 14th May, highlighting the contrast with Michael’s interview with Freddie Sayers of UnHerd, which is available directly on YouTube at https://youtu.be/bl-sZdfLcEk.

I recommend going to the primary evidence and watching the videos in those posts.

Categories
Coronavirus Covid-19

My model calculations for Covid-19 cases for an earlier UK lockdown

Introduction

A little while ago (14th May), I published a post entitled What if UK lockdown had been 2 weeks earlier? where I explored the possible impact of a lockdown intervention date of 9th March instead of 23rd March, the actual UK lockdown date.

That article focused more on the impact on the number of deaths in those two scenarios, rather than the number of Covid-19 cases, where the published data is not as clear, or as complete, since so few people have been tested.

That post also made the point that this wasn’t a proper forecast, because the calibration of the model for that early an intervention date would have been compromised, as there was so little historic data to which to fit the model at that point. That still applies here.

Therefore the comparisons are not valid in detail against reported data, but the comparative numbers between the two models show how a typical model, such as mine (derived from Alex de Visscher’s code as before), is so dependent on (early) input data, and, indeed, responds in a very non-linear way, given the exponential pattern of pandemic growth.

Cases

I present below the two case numbers charts for the 9th March and 23rd March lockdown dates (I had covered the death data in more detail in my previous post on this topic, but will return to that below).

In the charts for cases here, we see in each chart (in orange) the same reported data, to date (24th May), but a big difference in the model predictions for cases. For the 9th March lockdown, the model number for cases by the 23rd March is 14,800.

The equivalent model number for cases for 23rd March lockdown (i.e. modelled cases with no prior lockdown) is 45,049 cases, about 3 times as many.

The comparative reported number (the orange curve above) for 23rd March is 81,325 (based on multiplying up UK Government reported numbers (by 12.5), using Italy’s and other data concerning the proportion of real cases that might ever be tested (about 8%), as described in my Model Update post on May 8th). Reported case numbers (in other countries too, not just in the UK) underestimate the real case numbers by such a factor, because of the lack of sufficient public Coronavirus testing.

As I said in my previous article, a reasonable multiple on the public numbers for comparison might, then, be 12.5 (the inverse of 8%), which the charts above reflect in the orange graph curve.
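For clarity, this is all the scaling amounts to; the 8% tested fraction is the assumption carried over from the Model Update post, and the case count used in the example is hypothetical:

```python
# The scaling applied to the reported (orange) curves: reported UK case counts
# multiplied by 1/0.08 = 12.5, the assumed inverse of the tested fraction.
TESTED_FRACTION = 0.08               # assumption from the Model Update post
SCALE = 1 / TESTED_FRACTION          # = 12.5

def estimated_real_cases(reported_cases):
    """Scale a reported case count up to an estimate of real infections."""
    return reported_cases * SCALE

print(estimated_real_cases(10_000))  # hypothetical reported count -> 125,000 estimated
```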

Deaths

For completeness, here are the comparative charts showing the equivalent model data for deaths, for the two lockdown dates.

On the right, my live model matches the reported deaths data, using 84% lockdown intervention effectiveness, for the actual March 23rd lockdown, quite accurately. The model curve and the reported data curve are almost coincident (the reported data is the orange curve, as always).

On the left, the modelled number of deaths is lower from the time of lockdown. By 23rd March, for the 9th March lockdown, it is 108, lower than for the 23rd March lockdown (402), which of course has had no benefit from lockdown at all by that date.

These compare with the model numbers for deaths at the later date of May 13th, reported in my May 13th post, of 540 and 33,216 for March 9th and March 23rd lockdowns respectively (at virtually the same 84.1% intervention effectiveness).

As for the current date, 24th May, at 84% effectiveness, the number of deaths on the right, for the actual 23rd March lockdown data and model, is 36,660 (against the reported 36,793); for the 9th March lockdown, on the left, the model would have shown 570 deaths.

That seems a very large difference, but see it as an internal comparison of model outcomes on those two assumptions. Whatever the deficiencies of the availability of data to fit the model to an earlier lockdown, it is clear that, by an order of magnitude, the model behaviour over that 2-month period or so is crucially dependent on when that intervention (lockdown) happens.

This shows the startling (but characteristic) impact of exponential pandemic growth on the outcomes from the different lockdown dates, for outcome reporting dates of 13th May (just 51 days after March 23rd) and 24th May (62 days after March 23rd).

The model shows deaths multiplying by 5 in that 51-day period for the 9th March lockdown, but by 82 times in that period for the 23rd March lockdown. For the 62-day period (11 days later), the equivalent multiples are about 5.3 and 91 for the 9th March and 23rd March lockdowns respectively.
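As a quick sanity check, here are those multiples recomputed from the model figures quoted above (108 and 402 at 23rd March; 540 and 33,216 at 13th May; 570 and 36,660 at 24th May):

```python
# Growth multiples within each scenario, from the model figures quoted above.
deaths_23_mar = {"9 Mar lockdown": 108, "23 Mar lockdown": 402}
deaths_13_may = {"9 Mar lockdown": 540, "23 Mar lockdown": 33_216}
deaths_24_may = {"9 Mar lockdown": 570, "23 Mar lockdown": 36_660}

for scenario in deaths_23_mar:
    m51 = deaths_13_may[scenario] / deaths_23_mar[scenario]   # 51-day multiple
    m62 = deaths_24_may[scenario] / deaths_23_mar[scenario]   # 62-day multiple
    print(f"{scenario}: x{m51:.1f} over 51 days, x{m62:.1f} over 62 days")
# 9 Mar lockdown:  x5.0 over 51 days, x5.3 over 62 days
# 23 Mar lockdown: x82.6 over 51 days, x91.2 over 62 days
```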

My 9th March lockdown modelled numbers are lower than those from Professor Rowland Kao’s research group at Edinburgh, if their Scottish numbers are scaled up for the UK. Indeed, I think my absolute numbers are too low for the March 9th lockdown case. But remember, this is about model comparisons, it’s NOT an absolute forecast.

In terms of the long-term outlook (under the somewhat unrealistic assumption that 84% lockdown effectiveness continues, and the possibly more realistic assumption that no vaccine arrives), deaths plateau at 42,500 for the actual March 23rd lockdown, but would have plateaued at only 625 in my model if the lockdown had been on March 9th (as covered in my previous post).

Conclusions

For cases, the modelled March 9th lockdown long-term plateau (under similar assumptions) would have been 41,662 cases; but for the actual 23rd March lockdown, the model shows 2.8 million cases, a vastly higher number showing the effect of exponential behaviour, with only a 2-week difference in the timing of the intervention measures taken (at 84% effectiveness in both cases). That’s how vital timing is, as is the effectiveness of the measures taken, in a pandemic situation.

These long-term model outcomes reflect the observation of a likely deaths/cases ratio (1.5%) from the “captive” community on the cruise ship Diamond Princess.

But as I said earlier, these are comparisons within my model, to assess the nature and impact of an earlier lockdown, with the main focus in this post being the cases data.

It is a like-for-like comparison of modelled outcomes for two assumptions, one for the actual lockdown date, 23rd March, where the model fits reported data quite well (especially for deaths), and one for the earlier, postulated 9th March lockdown date (where the model fit must be questionable) that has been discussed so much.

Categories
6Points Cycling Strava Watopia Sand & Sequoias Zwift

6Points Mallorca Zwift training ride led by Dame Sarah Storey

Dame Sarah Storey, British Olympic cycling champion, led our 6Points Mallorca Sunday training ride today. See the live stream at YouTube at

For those that enjoyed the ride, we also highlighted the 6Points Mallorca charity JustGiving page which helps a disadvantaged children’s charity, Asdica in Mallorca.

See more about Asdica, and our other charities and sponsors, at the 6Points website. Over €66,000 has been collected through 6Points events over three years.

The ride today, a mixture of peloton, sprint and minirace riding, was over 2 laps of Watopia’s Sand and Sequoias course, about 43 kms, with the minirace from the bottom of the Titan’s Grove KoM second time around. It’s a lovely course, and the minirace is a tough one at 10kms, starting with that KoM, and even the descent after it is a little lumpy too.

We do the Fuego Flats sprint twice, and then take it again at the end of the minirace, which finishes at the arch on that same sprint section.

It was a very well attended ride today, with a great lead by Sarah at even pace, keeping it very much together, until the minirace start 10kms from the end at the bottom of the Titan’s Grove KoM.

I was, of course, taking my red beacon duties very seriously, and had a good little group around me for a good part of the event.

We had 218 booked to ride, with 171 riding and 133 finishers. Our podium included a son and father combo, the Scotts, divided by Bruch Wu, always at the pointed end of our miniraces.

Regulars and locals riding included (roughly in finishing order): Jed Scott (Draft, a very rapid 1st, well done!), Bruch Wu (a regular podium in our 6Points and GGCC events, 2nd), Hamish Scott (Jed’s dad, a regular and strong rider in our events, 3rd), Tony Romo (4th), Martin Smith (5th), Sean Ekblom (GGCC beacon and 6th), Beth McIver (CryoGen), Alex Fthenakis (GGCC), Del Chattelle (GGCC), Roger Bloom, Alastair Pell (Nightingale), Charlie Farnham (Storey racing), Twinny Styler (Storey racing), Sarah Storey (Beacon and Storey Racing(!)), Heather Mayne (GGCC Zwift race team), Niall Hughes (GGCC), Gavin Stewart, Colin Sinclair (RACC), Derek Brown (GGCC), Leroy Nahay, Andrea McDowell, Andy Cattanach (GGCC), Euan Gordon (GGCC Beacon), Gavin Johnston (GGCC and graphics designer for our stream screen), Scott Ballantyne (GGCC), Leslie Tennant (GGCC), Christine Catterson (GGCC), Brian Sutton (GGCC and red beacon) and Fleury Stoops (GGCC).

All ride results are at ZwiftPower for those registered ZP, or on Companion (but with lots of flyers) for everyone.

I DQd 6 riders on ZwiftPower for being ahead of the beacon at the minirace start.

Sarah will be leading for GGCC again on 6th June, on the 11.30am BST (10.30 UTC) GGCC Saturday morning training ride, and we look forward to that!

Categories
Cambridge Conversations Coronavirus Covid-19 Michael Levitt

Cambridge Conversation 14th May 2020, and Michael Levitt’s analysis of Euro data

I covered the May 14th Cambridge Conversation in my blog post last week, and promised to make available the YouTube link for it when uploaded. It is now on the University of Cambridge channel at:

Cambridge Conversation – COVID-19 behind the numbers – statistics, models and decision-making

In my following, and most recent, post, Another perspective on Coronavirus – Prof. Michael Levitt, I also summarised Prof. Michael Levitt’s interview with UnHerd, which presents a perspective on the Coronavirus crisis at odds with earlier forecasts and commentaries by Prof. Neil Ferguson and Prof. Sir David Spiegelhalter respectively.

Michael Levitt has a very good and consistent track record in predicting the direction of travel and extent of what I might call the Coronavirus “China Crisis”, from quite early on, and contrary to the then-current thinking about the rate of growth of Coronavirus there. Michael’s interview is at:

Michael Levitt’s interview with UnHerd

and I think it’s good to see these two perspectives together.

I will cover shortly some of Michael’s latest work on analysing comparisons presented at the website https://www.euromomo.eu/graphs-and-maps, looking at excess mortality across several years in Europe. Michael’s conclusions (which I have his permission to reproduce) are included in the document here:

where, as can be seen from the title, the Covid-19 growth profile doesn’t look very dissimilar from recent previous years’ influenza data. More on this in my next article.

As for my own modest efforts in this area, my model (based on a 7-compartment code by Prof. Alex de Visscher in Canada, with my settings and UK data) is still tracking UK data quite well, necessitating no updates at the moment. But the UK Government is under increasing pressure to include all age-related excess deaths in their daily (or weekly) updates, and this measure is mentioned in both videos above.

So I expect some changes to reported data soon: just as the UK Government has had to move to include “deaths in all settings” by including Care Home deaths in its figures, it is likely it will have to move to including the Office for National Statistics numbers too, which it has started to mention. Currently, instead of c. 35,000 deaths, these numbers show c. 55,000, although, as mentioned, the basis for inclusion is different.

These would be numbers based on a mention of Covid-19 on death certificates, not requiring a positive Covid-19 test as currently required for inclusion in UK Government numbers.

Categories
Coronavirus Covid-19 Reproductive Number

Another perspective on Coronavirus – Prof. Michael Levitt

Owing to the serendipity of a contemporary and friend of mine at King’s College London, Andrew Ennis, wishing one of HIS contemporaries in Physics, Michael Levitt, a happy birthday on 9th May, and mentioning me and my Coronavirus modelling attempts in passing, I am benefiting from another perspective on Coronavirus from Michael Levitt.

The difference is that Prof. Michael Levitt is a 2013 Nobel laureate (in Chemistry, for his work in computational biosciences)…and I’m not! I’m not a Fields Medal winner either (there is no Nobel Prize for Mathematics, the Fields Medal being an equivalently prestigious accolade for mathematicians). Michael is Professor of Structural Biology at the Stanford School of Medicine.

I did win the Drew Medal for Mathematics in my day, but that’s another (lesser) story!

Michael has turned his attention, since the beginning of 2020, to the Coronavirus pandemic, and has kindly sent me a number of references to his work, and to other recent work in the field.

I had already referred to Michael in an earlier blog post of mine, following a Times report of his amazingly accurate forecast of the limits to the epidemic in China (in which he was taking a particular interest).

Report of Michael Levitt’s forecast for the China outbreak

I felt it would be useful to report on the most recent of the links Michael sent me regarding his work, the interview given to Freddie Sayers of UnHerd at https://unherd.com/thepost/nobel-prize-winning-scientist-the-covid-19-epidemic-was-never-exponential/ reported on May 2nd. I have added some extracts from UnHerd’s coverage of this interview, but it’s better to watch the interview.

Michael’s interview with UnHerd

As UnHerd’s report says, “With a purely statistical perspective, he has been paying close attention to the Covid-19 pandemic since January, when most of us were not even aware of it. He first spoke out in early February, when through analysing the numbers of cases and deaths in Hubei province he predicted with remarkable accuracy that the epidemic in that province would top out at around 3,250 deaths.

“His observation is a simple one: that in outbreak after outbreak of this disease, a similar mathematical pattern is observable regardless of government interventions. After around a two week exponential growth of cases (and, subsequently, deaths) some kind of break kicks in, and growth starts slowing down. The curve quickly becomes ‘sub-exponential’.

UnHerd reports that he takes specific issue with the Neil Ferguson paper which, along with some others, was of huge influence on the UK Government (amongst others) in taking drastic action, moving away from a “herd immunity” approach to a lockdown approach to suppress infection transmission.

“In a footnote to a table it said, assuming exponential growth of 15% for six days. Now I had looked at China and had never seen exponential growth that wasn’t decaying rapidly.

“The explanation for this flattening that we are used to is that social distancing and lockdowns have slowed the curve, but he is unconvinced. As he put it to me, in the subsequent examples to China of South Korea, Iran and Italy, ‘the beginning of the epidemics showed a slowing down and it was very hard for me to believe that those three countries could practise social distancing as well as China.’ He believes that both some degree of prior immunity and large numbers of asymptomatic cases are important factors.

“He disagrees with Sir David Spiegelhalter’s calculations that the total is around one additional year of excess deaths, while (by adjusting to match the effects seen on the quarantined Diamond Princess cruise ship, and also in Wuhan, China) he calculates that it is more like one month of excess deaths that is needed before the virus peters out.

“He believes the much-discussed R0 is a faulty number, as it is meaningless without the time infectious alongside.” I discussed R0 and its derivation in my article about the SIR model and R0.
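The point about the infectious period can be seen from the standard SIR algebra (this is a generic illustration, not Michael’s calculation, and the R0 and infectious periods used are hypothetical): the early growth rate is roughly gamma × (R0 − 1), where gamma is the reciprocal of the infectious period, so the same R0 can imply very different epidemic speeds.

```python
# Same R0, different infectious periods -> very different early growth rates.
# Standard SIR relation: r = gamma * (R0 - 1), with gamma = 1 / infectious period.
from math import log

def doubling_time_days(R0, infectious_period_days):
    gamma = 1.0 / infectious_period_days
    r = gamma * (R0 - 1.0)            # early exponential growth rate per day
    return log(2) / r

for period in (3, 7, 14):             # hypothetical infectious periods (days)
    print(period, round(doubling_time_days(R0=2.5, infectious_period_days=period), 1))
# 3 -> 1.4 days, 7 -> 3.2 days, 14 -> 6.5 days: R0 alone does not fix the speed.
```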

Interestingly, Prof. Alex de Visscher, whose original model I have been adapting for the UK, also calibrated his thinking, in part, by considering the effect of the Coronavirus on the captive, closed community on the Diamond Princess, as I reported in my Model Update on Coronavirus on May 8th.

The UnHerd article finishes with this quote: “I think this is another foul-up on the part of the baby boomers. I am a real baby boomer — I was born in 1947, I am almost 73 years old — but I think we’ve really screwed up. We’ve caused pollution, we’ve allowed the world’s population to increase threefold in my lifetime, we’ve caused the problems of global warming and now we’ve left your generation with a real mess in order to save a relatively small number of very old people.”

I suppose, as a direct contemporary, that I should apologise too.

There’s a lot more at the UnHerd site, but better to hear it directly from Michael in the video.

Categories
Coronavirus Covid-19

Cambridge Conversations May 14th 2020 – reading data and the place of modelling

As an alumnus, I again had the opportunity today (with 3000 other people in over 70 countries) to attend the second Cambridge Conversations webinar, this time featuring Professor Sir David Spiegelhalter of Churchill College, Chair of the Winton Centre for Risk and Evidence Communication in the University of Cambridge, and Professor Mike Hulme, Professor of Human Geography and Fellow of Pembroke College, and a specialist in Climate Change.

The discussion, ‘COVID-19 behind the numbers – statistics, models and decision-making’, was moderated by Dr Alexandra Freeman, Executive Director at the Winton Centre for Risk and Evidence Communication.

The video of the 45 minute session will be available, and I will share it here in due course (it’s on a closed group at the moment, but will be on the Cambridge YouTube channel here in a few days, where the first Cambridge Conversation on Covid-19, from April, is currently available).

The presentations

Of most interest to me, given my interest in the modelling of the pandemic outbreak, was the first part of the scene-setting, by Professor Sir David Spiegelhalter, one of the world’s foremost biostatisticians, who unpicked the numbers surrounding COVID-19. He has been reported widely and recently regarding the interpretation of Covid-19 data.

He explored the reporting of cases and deaths; explained the bases on which predictions have been made; examined comparisons with the ‘normal’ risks faced by people; and investigated whether many deaths from COVID-19 could have been expected and have simply been brought forward.

He was joined by Professor Mike Hulme, whose expertise is in climate change, with particular interest in the role of model-based knowledge in strategic and policy decision-making relative to political and cultural values: a question of similar importance to COVID-19 as it is to Climate Change policies, his own area of study.

The first set of slides, by David Spiegelhalter, on the modelling aspects and the numbers coming out of the pandemic, is here:

The second part of the scene-setting, by Prof. Mike Hulme, was more about how model-based knowledge is used in decision-making and public communication around Covid-19, and the differences in wider public perceptions across countries and cultures.

Much of this part of the discussion was about the difference between the broad basis for decision making vs. the more narrow basis for any particular expert advice; and that decision makers need to take into account a far wider set of parameters than just one expert model, involving cultural, ethical and many other factors. This means that methods, conclusions and decisions don’t necessarily carry over from one country to another.

Questions and answers

There was a Q&A session after the scene-setting, moderated by Dr Alexandra Freeman, and, amazingly, a submitted question from “Brian originally of Trinity College” was chosen to be asked! My question was about how to understand and model the mutual feedback between periodic lockdown adjustments and the growth rate of the virus, but it wasn’t answered very well, if at all, combined as it was with someone else’s (reasonable) question on what data we need to take forward to help us with the pandemic, which wasn’t answered properly either.

I had the impression that Mike Hulme, in particular, was more concerned with getting his own message across, and that actually several other questions didn’t get a good answer either. Spiegelhalter, for his part, is well aware of his own fame/notoriety, and was quite amusing about it, but possibly at the expense of listening to the questions and answering them.

Both of them thought some of the other questions (e.g. one about “which modellers around the world are the best?”) had sought to draw out views about a “beauty contest” of people working in the field, which they (rightly) said wasn’t helpful, as initiatives and models in different countries, contexts and cultures were all partial, dealing with their own priorities. Hulme used the phrase “when science runs hot” a few times, in the context of all the work going on while the data was unreliable, causing its own issues.

Spiegelhalter had been (in his opinion mis-) quoted both by Boris Johnson AND the new leader of the opposition, Keir Starmer, regarding recent statements he had made about the difficulty of comparing data from different countries and cultures concerning Covid-19.

But as a statistician, he will be well aware of the phrase “lies, damn lies and statistics”, so I don’t have much sympathy for his ruefulness about having created issues for himself by being outspoken about such matters. His statements are delivered in quite an authoritative tone, and any nuances in his public pronouncements might, I should think, not be noticed.

Summary

I recommend watching the YouTube video of the presentations when available on the Cambridge YouTube channel here next week, particularly (from my own perspective) Spiegelhalter’s, which drew some good distinctions about how to read the data in this crisis, and how to think about the Coronavirus issues in different parts of the population.

He had a very good point about Population Fatality Rate (PFR) vs. Infection Fatality Rate (IFR) (the difference between the chance of catching AND dying from Covid-19 (PFR) vs. the chance of dying from it once you already have it (IFR)) and how these are conflated by the media (and others) when considering the differential effect of Covid-19 on different parts of the population. One is an overall probability, and the other is a conditional probability, and the inferences are quite different, as he exemplified and explained.
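The conflation he warns about can be made concrete in a line or two; the numbers below are purely illustrative, not Prof. Spiegelhalter’s figures:

```python
# PFR = P(catching it) * P(dying | caught it) = attack_rate * IFR.
# Illustrative values only -- not figures from the webinar.
def population_fatality_rate(attack_rate, ifr):
    return attack_rate * ifr

ifr = 0.01            # hypothetical infection fatality rate: 1% of those infected
attack_rate = 0.05    # hypothetical: 5% of the population infected so far

print(population_fatality_rate(attack_rate, ifr))   # 0.0005 -> 0.05% of the whole population
# Quoting the 1% as if it applied to the whole population conflates the two rates.
```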

There were some quite startling and clear learnings in reading the data about the relative susceptibilities of young vs. old and men vs. women, and the importance of a more complete measure of death rates, as shown by charts of the overall Excess Deaths in the population, contrasted with narrower ways of measuring the impact of the pandemic, highlighting the wider issues we face.

Both presenters wanted the whole impact of Coronavirus to be considered, not just the specific deaths from Covid-19 itself – things such as the increase in deaths from other causes, owing to the tendency of people not to want to attend hospitals; mental health; the bringing forward of deaths that might have happened in the next influenza season, if not now; and a number of other impacts.

Spiegelhalter found the lack of testing in the UK difficult to comprehend, and felt that addressing testing going forward was on the critical path to any way out of the crisis (my words).

The other main message, from Hulme, was that Governmental decision-making should be broadly based, and not driven by any particular modelling group. He didn’t reference the phrase “science-led”, as has been used so often by Government and others dealing with Coronavirus, but I imagine that he thinks that the word “science” in that phrase should be much more broadly defined (again, my interpretation of his theme).

Watch the Cambridge YouTube channel here next week and decide for yourself! At present the first Cambridge Conversation, “responding to the medical challenges of COVID-19” is available there, and summarised in my post Cambridge Conversations and model refinement.

Categories
Coronavirus Covid-19

What if UK lockdown had been 2 weeks earlier?

Introduction

There has been some debate about the timing and effectiveness of the lockdown measures adopted by the UK in its response to the Coronavirus pandemic. I’m sceptical about the sense of trying to rewrite the past, but I was intrigued as to what my model’s findings might have been. We are where we are, and most energy should be focused on the here and now, and the future; but, no doubt, there are lessons to be learned eventually.

NB Government health warning – this is pure speculative modelling, and it is more about the sensitivities in the setting of modelling parameters, and not a statement of fact! Don’t quote me!

Background

Some commentators and academics (Professor Rowland Kao at Edinburgh University tweeted this BBC coverage of his modelling work for Scotland) feel that the UK might well have applied the lockdown measures 2 weeks earlier, on the 9th March, as Italy did, instead of on the 23rd March. The UK Government states that our first cases were later than for other countries, and so too, therefore, was our lockdown. Time will tell how significant this is.

I was interested to see what my model (the Prof Alex de Visscher code, with my parameters, modifications and published data for the UK situation) might have made of this for the UK, and I have run the model for some options.

A major problem with running any model from 9th March for the UK is that the first UK deaths, specifically attributed to Covid-19, on the original basis, prior to the inclusion of Care Home figures, were on the 10th March, although, of course, there had been many more cases of infection reported, with people being admitted to hospital and ICU departments in the hundreds. I’m sure that, as discussed above, there will have been deaths associated with the pandemic, but not officially recorded as such.

My current model forecast

It’s important, first of all, to state my baseline, which is my model’s current forecasts for the pandemic in the UK on the basis of the March 23rd lockdown, which embraced working from home, social distancing, school, restaurant and pub closures and several other measures. UK lockdown did allow for excursions for exercise once a day (not allowed at that time in Spain and Italy, for example), and travel for essential medical supplies and food (possibly to help others with those aspects).

In that baseline work, current as of May 14th, these are the graphical representations of the outlook, which forecasts a little over 42,000 deaths by the late summer 2020, stabilising at about that number into the future, assuming no changes to the 84.1% lockdown intervention effectiveness, and no pharmaceutical intervention yet (either vaccine or radically more effective treatment of symptoms). The orange curve always represents the reported numbers.

My model projection is broadly in agreement with the following projection from the IHME, the Institute for Health Metrics and Evaluation, an independent global health research center at the University of Washington, which forecasts between 43,000 and 44,000 deaths in the UK by the summer.

 Institute for Health Metrics and Evaluation (IHME) University of Washington projection

The number of cases, below, is predicted at 2.8 million in the UK, assuming that only 8% of those with the virus are actually tested and diagnosed, a percentage that is informed by the ratio of deaths to real cases derived from other countries’ experience. The fit of the model, therefore, isn’t as good for cases, as the data is fuzzy. The 12.5 multiple on reported numbers mentioned on the chart takes account of that 8%.

It is likely, therefore, that we are not yet aware of anything like the real number of cases, and even death data is now felt to have been understated, currently attributed through confirmed test diagnoses only, not by a simple mention of Covid-19 on death certificates. These latter figures are now being analysed by Government. My model will be updated (as it has been recently to include Care Homes) when those figures are adjusted, as they surely will be when the data is cleansed. The UK Office for National Statistics are working on this, and have already published some numbers (but with some lag, so they are not up to date).

The UK Government does now include Care Homes in the “all settings” figures it reports, as of nearly two weeks ago, and it retroactively adjusted all historic reported numbers to take them into account, which my model does too; but a death certificate reference to Covid-19 is still not a factor in assigning deaths to Covid-19 in the official published UK daily data.

It is fair to say that policies on this vary hugely from country to country, and from region to region within countries, so it isn’t easy to see where to draw the line.

Options for parametric sensitivities for the March 9th lockdown scenario

It is hard to be sure what the effectiveness of lockdown measures might have been in early March. Some other countries, such as China, were seen to enforce very stringent measures at (and before) that time (having encountered the Coronavirus earlier) and in his own work, Alex de Visscher has used 90% effectiveness for China. This figure relates to the extent of the reduction in infection rate in the model, 90% effectiveness reducing the rate to 10% of its original value. See my blog article for a description of the model and its variables.

At the other end of the scale, the USA model was set at 71% initially, and Canada at 75%; Italy at 79.1% and Spain at 85% were some other choices. Lots of sensitivities were run around these settings, but this gives an idea of the possible range.

The best fit of my current model, as above, for the March 23rd lockdown is currently for an effectiveness of 84.1%, and it is continuing to match published data at that setting, and therefore remains the basis for my current March 23rd lockdown forecasting.

Without any changes to the underlying infection rates, I ran my 9th March UK lockdown model for three values of interv_success, the model variable for lockdown (intervention) effectiveness: the current 84.1%, and then 80% and 75%.
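For readers unfamiliar with how such a variable enters a model, here is a generic SIR-type sketch (not Alex de Visscher’s 7-compartment code) in which interv_success simply scales the transmission rate down from the lockdown day onwards; the transmission and recovery rates used are hypothetical round numbers:

```python
# Generic SIR-type sketch: interv_success scales the transmission rate from lockdown_day
# onwards, so 90% effectiveness leaves 10% of the original infection rate, as described above.
# beta, gamma, N and I0 are illustrative values only.

def run_sir(interv_success, lockdown_day, beta=0.4, gamma=0.1,
            days=400, N=66.7e6, I0=100.0):
    S, I, R = N - I0, I0, 0.0
    for day in range(days):
        b = beta * (1 - interv_success) if day >= lockdown_day else beta
        new_infections = b * S * I / N
        new_removals   = gamma * I
        S -= new_infections
        I += new_infections - new_removals
        R += new_removals
    return R                         # cumulative removed: a rough proxy for outbreak size

for eff in (0.841, 0.80, 0.75):      # the three effectiveness settings tried here
    print(eff, f"{run_sir(eff, lockdown_day=30):,.0f}")
```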

I should say that in the University of Edinburgh work mentioned above, which was for Scotland, information on commuting movements was input to their models, using Google mobility data, to try to get an understanding of how social restrictions affect spread of the virus. They found that “Not moving so far away from one’s home is one of the big impacts of what lockdown is doing”.

I haven’t been able to do such an activity analysis, but the % effectiveness in my model is a reflection of assumptions about that. My feeling is that in the UK, we were, and are, unlikely to be able to enforce lockdown as strongly as China did. In the UK, for the last 6 or 7 weeks, we have, as I mentioned, had more freedoms than, for example, Italy and Spain to go outside for exercise (running or cycling, say).

Our recent, relatively high public compliance rate with UK Government directives for lockdown measures (working from home where possible, social distancing etc) might not have been as good earlier on, when the impact of the pandemic here in the UK wasn’t confronting people so directly, or to such an extent, as two weeks later, on the 23rd March, by which time 359 people had died and published case numbers were 6,650. These numbers compare with no deaths as of 9th March (on the original basis, without inclusion of Care Homes), and just 321 cases.

I should again emphasise that the reported case numbers might well be understated by a factor of 10 to 15 (12.5 is the figure Alex has worked with), judging by the experience in other countries further down the pandemic track.

Taking all this into account, my other options for interv_success are for a lower % impact on the infection rate than works best currently, not higher – the current 84.1%, and then 80% and 75%, as mentioned above.

Results

I present the outcomes in graphical form. The first sets of graphs, for lockdown starting on 9th March, are for these three options for intervention effectiveness.

84.1% effectiveness (as per my current, well fitted model)

We see here that on the larger chart, presenting the situation up to May 13th, instead of the current reported 33,186 deaths (the orange curve), the model would have forecast 537 if lockdown had happened on March 9th, assuming 84.1% effectiveness, which I think would be too high for compliance at that time. For the forecast long term (top left), deaths would stabilise at just over 600 by the end of summer 2020, with total infections at a little over 40,000.

80% Effectiveness

This time, on the larger chart, presenting the situation up to May 13th, instead of the current 33,186 deaths, the model would forecast 717 deaths if lockdown had happened on March 9th, assuming 80% effectiveness. The longer-term outcome would be about 1,000 deaths with just under 70,000 cases, stabilising in the autumn of 2020.

75% effectiveness

Finally, on the larger chart, presenting the situation up to May 13th, instead of the current 33,186 deaths, the model would forecast 1,123 deaths by 13th May if lockdown had happened on March 9th, assuming 75% intervention effectiveness. The long-term deaths would be nearly 6,000, but still with a few deaths per day even a year later, because of the continuing lower intervention effectiveness (assumed by the model), and cases would have reached a little under 400,000 in a year’s time, with negligible growth by then.

Discussion

These results show a marked reduction in deaths forecast for May 13th by my model for the March 9th scenario lockdown, compared with the actual March 23rd lockdown results. This is only a model and will have deficiencies in a) the reduced data available for calibration of the model before March 9th (no deaths, for example) for model fitting, and b) the lack, therefore, of a firm basis for setting transmission rates at that time, and testing them.

But it does show the high dependency the later results have on early data.

There is another feature in the numbers, however, that is more noticeable in the charts for the assumed 75% lockdown effectiveness. This is clearer when we look more closely at the long-term outcome for deaths, starting with that 75% case, as above,

Longer term outlook for deaths in the 75% intervention effectiveness case

where we can see that even in April 2021 (in the absence of pharmaceutical measures, or any change to the intervention effectiveness) the deaths are still increasing (if there were no further increase, the log chart would be flat, i.e. horizontal, at that point). The 5,738 deaths at that time are not the maximum.

This outcome is even more stark if I choose 70% intervention effectiveness, and in this case I present both the linear y-axis and the log y-axis charts, since on the linear chart, the numbers are significant enough to be perceived visually. The orange curve is always the actual reported numbers, whatever the model scenario.

70% effectiveness analysis

The linear chart makes clear that by the end of April 2021, there continues to be a high rate of increase of deaths, and, equivalently, we can also see that the log chart is much further from flattening than in the 75% case, even after more than a year of lockdown, at this level (70%) of effectiveness. The modelled number of deaths by April 2021 is over 250,000, with 17 million cases.

Visually, the reported deaths look very unlikely to get anywhere near that, and in my base March 23rd lockdown model (which has a good fit with reported numbers at the moment, May 13th, the orange curve here), deaths are projected to flatten at 42,000, with cases at 2.8 million.

I totally accept that the model will have its deficiencies – the calibration phase to March 9th is short, and the infection rate parameter might not be realistic as a result.

But from a comparative model behaviour perspective, this is another example, in an exponential growth situation (as a pandemic is), of how early numbers are not a good indicator of outcomes. Small early differences make very large differences down the track. It’s a non-linear relationship, with high sensitivity of the pandemic growth rate to the % effectiveness of intervention measures (most noticeably at the lower % effectiveness).
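A one-line illustration of that non-linearity, with purely hypothetical growth rates: a modest difference in the daily growth rate compounds into a roughly twenty-fold difference over two months.

```python
from math import exp

# Purely illustrative: two epidemics with slightly different daily growth rates.
r_a, r_b = 0.20, 0.15        # hypothetical exponential growth rates per day
days = 60
print(round(exp(r_a * days) / exp(r_b * days)))   # ~20x more cases after 60 days
```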

It does seem that if the effectiveness of interventions is not high enough, while there can appear to be good early results (the number of deaths by May 13th 2020, even in the 70%, least effective model, is 1,980, as compared with 540 for the 84.1% case above (starting March 9th)), the pandemic eventually overcomes the measures.

By the end of 2020, in this 70% scenario, the modelled deaths are likely already to exceed the reported deaths, and would be growing quite fast by then, as can be seen from the charts.

This tells me that if pandemic intervention measures are to work effectively, and strategically (i.e. long term), they need not only to be early, but also at least at 80% effectiveness in terms of reducing the infection transmission rate for this Coronavirus.

Long term 80% effectiveness outlook

We can confirm this from the 80% effectiveness longer term outlook for the March 9th lockdown scenario (assuming no changes in the lockdown measures, and no available pharmaceutical measures).

Long term outlook for 80% effectiveness (starting March 9th)

At this level of intervention effectiveness (80%), the modelled deaths curve peaks at about 1,000, reaching that point by autumn 2020 as stated in the earlier section, with about 67,000 cases by that time, stabilising at 68,000 before a year’s time, as we see in the chart below. Again, the orange line is the current status of the reported cases (x12.5 as before).

Model forecast for cases, based on March 9th lockdown, compared to reported cases

Summary

This article simply addresses the theoretical, earlier lockdown scenario that has been much discussed. The modelling I have done is probably not adequate in terms of absolute numbers, but it is clear, from comparisons of my scenarios, that on similar assumptions to the current modelling for the March 23rd lockdown initiation, the number of deaths at May 13th would have been less – probably far less, as asserted by the University of Edinburgh study for Scotland.

Whether those assumptions are valid (i.e. that the intervention effectiveness would have been the same) is questionable.

Furthermore, it requires just a 10 to 15 percentage point diminution of that effectiveness to lead to a worse outcome in the long term.

I see this as an indicator that, in the absence of pharmaceutical measures (a vaccine, ideally (of lasting effect), or medicines that handle symptoms effectively, and save lives) the intervention measures have to continue to be carefully monitored, including an understanding of which are the most effective at reducing infection transmission rates.

It may well be that as the researchers at Edinburgh stated, the “stay-at-home” policy is the most effective measure, and that trips away from home should continue to be minimised even into the long term, by working from home where possible, and otherwise travelling only for medical and food purchases.

Categories
Coronavirus Covid-19 Reproductive Number

Model update for the latest UK Coronavirus numbers

Introduction and summary

This is a brief update to my UK model predictions in the light of a week’s published data regarding Covid-19 cases and deaths in all settings – hospitals, care homes and the community – rather than just hospitals and the community, as previously.

In order to get the best fit between the model and the published data, I have had to reduce the effectiveness of interventions (lockdown, social distancing, home working etc) from 85% last week (in my post immediately following the Government change of reporting basis) to 84.1% at present.

This reflects the fact that care homes, new to the numbers, seem to influence the critical R0 number upwards on average, and it might be that R0 is between 0.7 and 0.9, which is uncomfortably near to 1. It is already higher in hospitals than in the community, but the care home figures in the last week have increased R0 on average. See my post on the SIR model and the importance of R0 to review the meaning of R0.

Predicted cases are now at 2.8 million (not reflecting the published data, but an estimate of the underlying real cases) with fatalities at 42,000.

Possible model upgrades

The Government have said that they are to sample people randomly in different settings (hospital, care homes and the community), and regionally, the better to understand how the transmission rate, and its influence on the R0 reproductive number, differ in those settings, and also in different parts of the UK.

Ideally a model would forecast the pandemic growth on the basis of these individually, and then aggregate them, and I’m sure the Government advisers will be doing that. As for my model, I am adjusting overall parameters for the whole population on an average basis at this point.
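One hedged way to picture that aggregation is a weighted average of setting-specific R values; the numbers below are entirely hypothetical, and serve only to show why a high-R setting with a modest weight can pull the average upwards:

```python
# Hypothetical setting-specific R values and weights (share of transmission-relevant
# contacts) -- purely illustrative of how care homes can raise the average R.
settings = {
    #             R     weight
    "community": (0.7,  0.80),
    "hospital":  (1.1,  0.10),
    "care home": (1.3,  0.10),
}
R_avg = sum(R * w for R, w in settings.values())
print(round(R_avg, 2))    # 0.80 -- each setting shifts the average it contributes to
```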

Another model upgrade, which has already been made by academics at Imperial College and at Harvard, is to explore the cyclical behaviour of partial relaxations of the different lockdown components: modelling the response of the pandemic to these (a probable increase in growth to some extent), then a re-tightening of lockdown measures to cope with that, followed by another fall in transmission rates, and then repeating this loop into 2021 and 2022, showing a cyclical behaviour of the pandemic (excluding any pharmaceutical (e.g. vaccine and medicinal) measures). I covered this in my previous article on exit strategy.

This explains Government reluctance to promise any significant easing of lockdown in any specific timescales.

Current predictions

My UK model (based on the work of Prof. Alex de Visscher at Concordia University in Montreal for other countries) is calibrated on the most accurate published data up to the lockdown date, March 23rd, which is the data on daily deaths in the UK.

Once that fit of the model to the known data has been achieved, by adjusting the assumed transmission rates, the data for deaths after lockdown – the intervention – is matched by adjusting parameters reflecting the assumed effectiveness of the intervention measures.
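As a generic sketch of that second calibration stage (not the actual code), the effectiveness parameter can be chosen by minimising the squared error between modelled and reported cumulative deaths; the toy_model below is a hypothetical stand-in for the real 7-compartment model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def toy_model(interv_success, days, k=0.15, scale=200_000):
    """Toy cumulative-deaths curve whose plateau shrinks as effectiveness rises.
    A hypothetical stand-in for the real model, for illustration only."""
    t = np.arange(days)
    plateau = scale * (1 - interv_success)
    return plateau / (1 + np.exp(-k * (t - 40)))      # logistic ramp to the plateau

def fit_effectiveness(reported):
    """Pick the effectiveness that best matches a reported cumulative-deaths series."""
    days = len(reported)
    loss = lambda eff: np.sum((toy_model(eff, days) - reported) ** 2)
    return minimize_scalar(loss, bounds=(0.5, 0.95), method="bounded").x

# Example: recover the effectiveness used to generate a synthetic series.
synthetic = toy_model(0.841, days=80)
print(round(fit_effectiveness(synthetic), 3))          # ~0.841
```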

Data on cases is not nearly so accurate, and examples from “captive” communities indicate that the deaths-to-cases ratio runs at about 1.5% (e.g. the Diamond Princess cruise ship data).

The Italy experience also plays into this relationship between deaths and actual (as opposed to published) case numbers – it is thought that a) only a single figure percentage of people ever get tested (8% was Alex’s figure), and b) again in Italy, the death rate was probably higher than 1.5% because their health service couldn’t cope for a while, with insufficient ICU provision.

In the model, allowing for that 8%, a factor of 12.5 is applied to public total and active cases data, to reflect the likely under-reporting of case data, since there are relatively few tests.

In the model, once the fit to known data (particularly deaths to date) is made as close as possible, then the model is run over whatever timescale is desired, to look at its predictions for cases and deaths – at present a short-term forecast to June 2020, and a longer term outlook well into 2021, by when outcomes in the model have stabilised.

Model charts for deaths

The fit of the model here can be managed well, post lockdown, by adjusting the percentage effectiveness of the intervention measures, and this is currently set at 84.1%. This model predicts fatalities in the UK at 42,000. They are reported currently (8th May 2020) at 31,241.

Model charts for cases

As we can see here, the fit for cases isn’t as good, but the uncertainty in case number reporting accuracy, owing to the low level of testing, and the variable experience from other countries such as Italy, means that this is an innately less reliable basis for forecasting. The model prediction for the outcome of UK case numbers is 2.8 million.

If testing, tracking and tracing is launched effectively in the UK, then this would enable a better basis for predictions for case numbers than we currently have.

Conclusions?!

I’m certainly not at a concluding stage yet. A more complex model is probably necessary to predict the situation, once variations to the current lockdown measures begin to happen, likely over the coming month or two in the first instance.

Models are being developed and released by research groups, such as the RAMP initiative at https://epcced.github.io/ramp/

Academics from many institutions are involved, and I will take a look at the models being released to see if they address the two points I mentioned here: the variability of R0 across settings and geography, and the cyclical behaviour of the pandemic in response to lockdown variations.

At the least, perhaps, my current model might be enhanced to allow a time-dependent interv_success variable, instead of a constant lockdown effectiveness representation.
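A minimal sketch of what such a time-dependent interv_success might look like, with illustrative dates and levels only (echoing the cyclical relax-and-tighten behaviour mentioned above):

```python
# Minimal sketch of a time-dependent interv_success: lockdown effectiveness that is
# periodically relaxed and re-tightened, rather than held constant. Values are illustrative.

def interv_success(day, base=0.841, relaxed=0.70, cycle=90, relaxed_for=30):
    """Effectiveness held at `base`, dropping to `relaxed` for the last
    `relaxed_for` days of each `cycle`-day period."""
    return relaxed if (day % cycle) >= cycle - relaxed_for else base

# The model loop would then use beta * (1 - interv_success(day)) instead of a constant.
for day in (0, 40, 70, 100, 160):
    print(day, interv_success(day))
```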