Categories
Box Hill Charity Cycling Leith Hill Prudential RideLondon Strava Surrey Hills

The Surrey Hills in the 2019 Prudential RideLondon 100

This video is about the Surrey Hills part of my Prudential RideLondon 100 in 2019, taking in Leith Hill and Box Hill. It’s a “director’s cut” from my full ride post at https://www.briansutton.uk/?p=1084.

The middle part of the video shows the cycling log-jam on Leith Hill, where we had to stop seven times, losing 10 minutes, on a section of no more than 400 metres, on the less steep, earlier part of the Leith Hill Lane climb.

Someone had fallen off just at the beginning of the steeper part ahead of us. Very frustrating! Apparently he rode off without thanking anyone for the help he was given to get going again.

The Leith Hill ascent is quite narrow, and there are always some cyclists walking on all the steep parts, effectively making it even narrower, as you can see from the video.

The way to minimise such delays is to get an earlier start time, as advised by my friend Leslie Tennant, who has done the event half a dozen times. That keeps you clear of the slower riders.

But it was a great day overall, with the usual good weather, a big improvement over the previous year’s very wet weather, which I covered in my blog at https://www.briansutton.uk/?p=1108.

Here, then, is my Surrey Hills segment from the 2019 event.

The Prudential RideLondon 100 Surrey Hills, including Leith Hill and Box Hill

I have added here some screenshots of my Strava analysis for the Leith Hill segment, showing the speed, cadence and heart rate drops during those seven stops.

Time-based Strava analysis chart

First, the plot against time, which shows the speed drops very clearly, annotated as Stops 1 to 7. On the elevation profile, you can see that all of these were on the earlier part of the climb (shaded). The faller must have fallen at the point where the log-jam cleared, at the end of that shaded section (a marshal told me what had happened as I rode past that point).

Important: Note that the time-based x-axis has the effect of expanding those parts of the chart where we were ascending (compared to the distance-based x-axis version later on), because we took proportionately more time to cover a given distance during the delays (as would have been the case, to a lesser extent, at normal, slower uphill speeds anyway). Equivalently, it compresses the descending parts of the hill(s), where we cover more ground in less time. The shaded section of the chart shows this expansion effect on that (slow) part of the Leith Hill climb (behind the word “Leith”).

Strava analysis showing the 7 stops totalling 10 minutes

We see that the chart runs from about ride time 3:36:40 to 3:46:30, around 10 minutes. On the video I show that the first stop on that section was at time of day 10:47:14, and we got going again fully at 10:57:09, again about 10 minutes from beginning to end.

Distance-based Strava analysis chart

Next, the same Strava analysis, but with the graphs plotted against distance, instead of time.

As the elevation is in metres, the distance-based x-axis presents a more faithful rendition of the inclines – metres of height plotted against kilometres of distance travelled, in the usual way.

Compared with the time-based chart above, this shows up as steeper ascending parts of all hills in the profile (slow riding), and less steep downsides for the hills (fast riding), which is usual when comparing time vs. distance based Strava ride analysis charts.

You will note that the (lighter) shaded section where the stops occurred is actually very short on the distance-based graph (the light vertical line, behind the “i” in the “Leith” annotation). It looks longer in the time-based version, and apparently less steep as a result, in the darker shaded area of the time-based chart above. In reality, the steepness isn’t significantly different on that section, and it IS short.
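The expansion effect is easy to demonstrate with a toy calculation (hypothetical section speeds, not my actual Strava file): a slow section occupies a far larger share of a time-based x-axis than of a distance-based one.

```python
# Illustrative only: a synthetic ride of three sections, (distance_km, speed_kmh).
# The middle section stands in for the ~400 m Leith Hill log-jam at crawling pace.
sections = [
    (5.0, 25.0),   # approach at 25 km/h
    (0.4, 2.4),    # the congested 400 m at 2.4 km/h
    (5.0, 40.0),   # descent at 40 km/h
]

times_h = [d / v for d, v in sections]        # hours spent in each section
total_dist = sum(d for d, _ in sections)
total_time = sum(times_h)

slow_dist_share = sections[1][0] / total_dist  # share of the distance axis
slow_time_share = times_h[1] / total_time      # share of the time axis

print(f"Slow section: {slow_dist_share:.1%} of distance, {slow_time_share:.1%} of time")
# → Slow section: 3.8% of distance, 33.9% of time
```

The same 400 metres that is barely visible on the distance chart stretches to a third of the time chart.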

Strava analysis showing the 7 stops totalling <400 metres

In this chart, this same section runs from just over 88.6 kms into the ride to just under 89 kms; i.e. between 350 and 400 metres from start to finish, some of which was walking, with a little riding, between periods of standing and waiting.

The little dips in the red heart rate curve at the 7 stops show up a little more clearly* on this chart too.

I eliminated the standing/waiting parts from the video, but you can see that I was moving very slowly even when trying to ride short parts of this section. Average speed on that section was, say, 400m in 10 minutes – 2.4 kms/hour, or 1.5 mph. Even I can ride up that hill a lot faster than that!
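The arithmetic behind that average speed, for anyone checking (using my rounded figures of 400 m and 10 minutes):

```python
# Rough average speed over the congested section
distance_km = 0.4          # ~400 m
time_hours = 10 / 60       # 10 minutes
speed_kmh = distance_km / time_hours
speed_mph = speed_kmh / 1.609344   # km per mile
print(f"{speed_kmh:.1f} km/h = {speed_mph:.1f} mph")
# → 2.4 km/h = 1.5 mph
```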

*The heart rate chart dips look a little like the depressed T-waves of an ECG. I know what those look like – I was diagnosed with a depressed T-wave in a BUPA ECG test 50 years ago (for health insurance in my first private sector job).

Because of that they also stress tested me on a treadmill, and had a problem getting my heart rate up, even raising the front of the treadmill as well as speeding it up. So they also diagnosed bradycardia (slow heart rate). They found that my ECG returns to a normal pattern on exercise – phew!

Categories
Cycling Mallorca Sobremunt

A new look at Sobremunt, the hardest climb in Mallorca

This video is about the Sobremunt climb, especially near the top of the climb which is quite hard to find amongst the various agricultural estates up there – such as the Sobremunt estate itself. I added new music to it, and some more commentary and stills.

Sobremunt, bottom to top, and some exploring

Here is some mapping for the ride:

Here are some views of the profile of the Sobremunt climb, from GCN and Cycle Fiesta:

I have also added the Strava analysis for the climb segment (with the embarrassingly slow time!) from the Ma1041 junction to the top of the Strava segment (not actually the top of the climb, but where I met Niels & Peter in the video).

And finally, just the route for the climb, some landmarks, and start of the descent:-

The climb from the Ma1041, and the start of the descent past La Posada de Marquès
Categories
Coronavirus Covid-19 Michael Levitt

Coronavirus model tracking, lockdown and lessons

Introduction

This is just a brief update post to confirm that my Coronavirus model is still tracking the daily reported UK data well, and doesn’t currently need any parameter changes.

I go on to highlight some important aspects of emphasis in the Daily Downing St. Update on June 10th, as well as the response to Prof. Neil Ferguson’s comments to the Parliamentary Select Committee for Science and Technology about the impact of an earlier lockdown date, a scenario I have modelled and discussed before.

My model forecast

I show just one chart here that indicates both daily and cumulative figures for UK deaths, thankfully slowing down, and also the model forecast to the medium term, to September 30th, by when the modelled death rate is very low. The outlook in the model is still for 44,400 deaths, although no account is yet taken for reduced intervention effectiveness (from next week) as further, more substantial relaxations are made to the lockdown.

Note that the scatter of the reported daily deaths in the chart below is caused by some delays and resulting catch-up in the reporting, principally (but not only) at weekends. It doesn’t show in the cumulative curve, because the cumulative numbers are so much higher, and these daily variations are small by comparison (apart from when the cumulative numbers are lower, in late February to mid-March).

UK Daily & Cumulative deaths, model vs. Government “all settings” data

It isn’t yet clear whether the imminent lockdown easing (next week) might lead to a sequence of lockdown relaxation, infection rate increase, followed by (some) re-instituted lockdown measures, to be repeated cyclically as described by Neil Ferguson’s team in their 16th March COVID19-NPI-modelling paper, which was so influential on Government at the time (probably known to Government earlier than the paper publication date). If so, then simpler medium to long term forecasting models will have to change, my own included. For now, this is still in line with the Worldometers forecast, pictured here.

Worldometers UK Covid-19 forecast deaths for August 4th 2020

The ONS work

The Office for National Statistics (ONS) have begun to report regularly on deaths where Covid-19 is mentioned on the death certificate, and are also reporting on Excess Deaths, comparing current death rates with the seasonally expected number based on previous years. Both of these measures show larger numbers, as I covered in my June 2nd post, than the Government “all settings” numbers, that include only deaths with a positive Covid-19 test in Hospitals, the Community and Care Homes.

As I also mentioned in that post, no measures are completely without the need for interpretation. For consistency, for the time being, I remain with the Government “all settings” numbers in my model that show a similar rise and fall over the peak of the virus outbreak, but with somewhat lower daily numbers than the other measures, particularly at the peak.

The June 10th Government briefing

This briefing was given by the PM, Boris Johnson, flanked, as he was a week ago, by Sir Patrick Vallance (Chief Scientific Adviser (CSA)) and Prof. Chris Whitty (Chief Medical officer (CMO)), and again, as last week, the scientists offered much more than the politician.

In particular, the question of “regrets” came up from journalist questions, probing what the team might have done differently, in the light of the Prof. Ferguson comment earlier to the Parliamentary Science & Technology Select Committee that lives could have been saved had lockdown been a week earlier (I cover this in a section below).

At first, the shared approach of the CMO and CSA was not only that the scientific approach was always to learn the lessons from such experiences, but also that it was too early to do this, given, as the CMO emphasised very clearly last week and again this week, that we are in the middle of this crisis, and there is a very long way to go (months, and even more, as he had said last week).

The PM latched onto this, repeating that it was too soon to take the lessons (not something I agree with); and indeed, Prof. Chris Whitty came back and offered that amongst several things he might have done differently, testing was top of the list, and that without it, everyone had been working in the dark.

My opinion is that if there is a long way to go, then we had better apply those lessons that we can learn as we go along, even if, as is probably the case, it is too early to come to conclusions about all aspects. There will no doubt be an inquiry at some point in the future, but that is a long way off, and adjusting our course as we continue to address the pandemic must surely be something we should do.

Parliamentary Science & Technology Select Committee

Earlier that day, on 10th June, Prof. Neil Ferguson of Imperial College had given evidence (or at least a submission) to the Select Committee for Science & Technology, stating that lives could have been saved if lockdown had been a week earlier. He was quoted here as saying “The epidemic was doubling every three to four days before lockdown interventions were introduced. So had we introduced lockdown measures a week earlier, we would have reduced the final death toll by at least a half.

Whilst I think the measures, given what we knew about this virus then, in terms of its transmission and its lethality, were warranted, I’m second guessing at this point, certainly had we introduced them earlier we would have seen many fewer deaths.”

In that respect, therefore, it isn’t merely interesting to look at the lockdown timing issue, but, as a matter of life and death, we should seek to understand how important timing is, as well as the effectiveness of possible interventions.

Surely one of the lessons from the pandemic (if we didn’t know it before) is that for epidemics that have an exponential growth rate (even if only for a while) matters down the track are highly (and non-linearly) dependent on initial conditions and early decisions.
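That sensitivity can be sanity-checked from the doubling time Prof. Ferguson himself quoted. A back-of-envelope calculation (my numbers, not his analysis):

```python
# With the epidemic doubling every three to four days pre-lockdown,
# a week's delay lets it grow by 2^(7/doubling_days).
doubling_days = 3.5          # mid-point of the quoted "three to four days"
delay_days = 7

growth_factor = 2 ** (delay_days / doubling_days)
print(f"a {delay_days}-day delay multiplies the epidemic by ~{growth_factor:.0f}x")
# → a 7-day delay multiplies the epidemic by ~4x
```

On that simple basis, locking down a week earlier would have meant roughly a quarter of the infections at lockdown, which is consistent with the direction (if not the precise size) of the “at least a half” claim.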

With regard to that specific statement about the timing of the lockdown, I had already modelled the scenario for March 9th lockdown (two weeks earlier than the actual event on the 23rd March) and reported on that in my May 14th and May 25th posts at this blog. The precise quantum of the results is debatable, but, in my opinion, the principle isn’t.

I don’t need to rehearse all of those findings here, but it was clear, even given the limitations of my model (little data, for example, prior to March 9th upon which to calibrate the model, and the questionable % effectiveness of a postulated lockdown at that time, in terms of the public response) that my model forecast was for far fewer cases and deaths – the model said one tenth of those reported (for two weeks earlier lockdown). That is surely too small a fraction, but even part of that saving would be a big difference numerically.

This was also the nature of the findings of an Edinburgh University team, under Prof. Rowland Kao, who worked on the possible numbers for Scotland at that time, as reported by the BBC, which talked of a saving of 80% of the lives lost. Prof Kao had run simulations to see what would have happened to the spread of the virus if Scotland had locked down on 9 March, two weeks earlier.

A report of the June 10th Select Committee discussions mentioned that Prof. Kao supported Prof. Ferguson’s comments (unsurprisingly), finding the Ferguson comments “robust”, given his own team’s experience and work in the area.

Prof Simon Wood, Professor of Statistical Science at the University of Bristol, was reported as saying “I think it is too early to talk about the final death toll, particularly if we include the substantial non-COVID loss of life that has been and will be caused by the effects of lockdown. If the science behind the lockdown is correct, then the epidemic and the counter measures are not over.”

Prof. Wood also made some comments relating to some observed pre-lockdown improvements in the death rate (possibly related to voluntary self-isolation which had been advised in some circumstances) which might have reduced the virus growth rate below the pure exponential rate which may have been assumed, and so he felt that “the basis for the ‘at least a half’ figure does not seem robust”.

Prof. James Naismith, Director of the Rosalind Franklin Institute, & Professor of Structural Biology, University of Oxford, was reported as saying “Professor Ferguson has been clear that his analysis is with the benefit of hindsight. His comments are a simple statement of the facts as we now understand them.”

The lockdown timing debate

In the June 10th Government briefing, a few hours later, the PM mentioned in passing that Prof. Ferguson was on the SAGE Committee at that time, in early-mid March, as if to imply that this included him in the decision to lockdown later (March 23rd).

But, as I have also reported, in their May 23rd article, the Sunday Times Insight team produced a long investigative piece that indicated that some scientists (from both Imperial College and the London School of Hygiene and Tropical Medicine) had become worried about the lack of action, and proactively produced data and reports (as I mentioned above) that caused the Government to move towards a lockdown approach. The Government rebutted this article here.

As we have heard many times, however, advisers advise, and politicians decide; in this case, it would seem that lockdown timing WAS a political decision (taking all aspects into account, including economic impact and the wider health issues), and I don’t have evidence to support Prof. Ferguson being party to the decision (even if he was party to the advice, which is also dubious, given that his own scientific papers are very clear on the large scale of potential outcomes without NPIs (Non-Pharmaceutical Interventions)).

His forecasts would very much support a range of early and effective intervention measures to be considered, such as school and university closures, home isolation of cases, household quarantine, large-scale general population social distancing and social distancing of those over 70 years, as compared individually and in different combinations in the paper referenced above.

The forecasts in that paper, however, are regarded by Prof. Michael Levitt as in error (on the pessimistic side), basing forecasts, he says, on a wrong interpretation of the Wuhan data, causing an error by a factor of 10 or more in forecast death rates. Michael says “Thus, the Western World has been encouraged by their lack of responsibility coupled with uncontrolled media and academic errors to commit suicide for an excess burden of death of one month.”

But that Imperial College paper (and others) indicate what was in Neil Ferguson’s mind at that earlier stage. I don’t believe (but don’t know, of course) that his advice would have been to wait until a March 23rd lockdown.

Since SAGE (Scientific Advisory Group for Emergencies) proceedings are not published, it might be a long time before any of this history of the lockdown timing issue becomes clear.

Concluding comment

Now that relaxation of the lockdown is about to be enhanced, I am tracking the reported cases and deaths, and monitoring my Coronavirus model for any impact.

If there were any upwards movement in deaths and case rates, and reversal of any lockdown relaxations were to become necessary, the debate about lockdown timing will, no doubt, revive.

In that case, lessons learned from where we have been in that respect will need to be applied.

Categories
Coronavirus Covid-19 Michael Levitt Reproductive Number Uncategorized

Current Coronavirus model forecast, and next steps

Introduction

This post covers the current status of my UK Coronavirus (SARS-CoV-2) model, stating the June 2nd position, and comparing with an update on June 3rd, reworking my UK SARS-CoV-2 model with 83.5% intervention effectiveness (down from 84%), which reduces the transmission rate to 16.5% of its pre-intervention value (instead of 16%), prior to the 23rd March lockdown.

This may not seem a big change, but as I have said before, small changes early on have quite large effects later. I did this because I see some signs of growth in the reported numbers, over the last few days, which, if it continues, would be a little concerning.
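How a half-point change in effectiveness compounds can be sketched with a hypothetical calculation (the reproduction number and generation count here are illustrative assumptions, not my model’s actual parameters):

```python
# Hypothetical illustration: residual transmission compounds generation by
# generation, so 84% vs 83.5% effectiveness diverge noticeably over time.
r0 = 3.0                 # assumed pre-intervention reproduction number
generations = 20

tails = {}
for effectiveness in (0.84, 0.835):
    r_eff = r0 * (1 - effectiveness)            # reproduction number under intervention
    tails[effectiveness] = r_eff ** generations  # relative infections after 20 generations

ratio = tails[0.835] / tails[0.84]
print(f"83.5% vs 84% effectiveness: {ratio:.2f}x more residual transmission")
# → 83.5% vs 84% effectiveness: 1.85x more residual transmission
```

Even on this crude basis, a seemingly negligible parameter change nearly doubles the long tail of the outbreak.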

I sensed some urgency in the June 3rd Government update, on the part of the CMO, Chris Whitty (who spoke at much greater length than usual) and the CSA, Sir Patrick Vallance, to highlight the continuing risk, even though the UK Government is seeking to relax some parts of the lockdown.

They also mentioned more than once that the significant “R” reproductive number, although less than 1, was close to 1, and again I thought they were keen to emphasise this. The scientific and medical concern and emphasis was pretty clear.

These changes are in the context of quite a bit of debate around the science between key protagonists, and I begin with the background to the modelling and data analysis approaches.

Curve fitting and forecasting approaches

Curve-fitting approach

I have been doing more homework on Prof. Michael Levitt’s Twitter feed, where he publishes much of his latest work on Coronavirus. There’s a lot to digest (some of which I have already reported, such as his EuroMOMO work) and I see more methodology to explore, and also lots of third party input to the stream, including Twitter posts from Prof. Sir David Spiegelhalter, who also publishes on Medium.

I DO use Twitter, although a lot less nowadays than I used to (8.5k tweets over a few years, but not at such a high rate lately); much less of it is social nowadays, and more is highlighting of my https://www.briansutton.uk/ blog entries.

Core to that work are Michael’s curve fitting methods, in particular regarding the Gompertz cumulative distribution function and the Change Ratio / Sigmoid curve references that Michael describes. Other functions are also available(!), such as the Richards function.

This curve-fitting work looks at an entity’s published data regarding cases and deaths (China, the Rest of the World and other individual countries were some important entities that Michael has analysed) and attempts to fit a postulated mathematical function to the data, first to enable a good fit, and then for projections into the future to be made.
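A minimal sketch of that idea (my own illustration in Python with scipy, on synthetic data – not Michael Levitt’s actual code or data) fits a Gompertz cumulative curve to a noisy series and then projects it forward:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, b, c):
    """Gompertz cumulative curve: asymptote K, displacement b, growth rate c."""
    return K * np.exp(-b * np.exp(-c * t))

# Synthetic "reported cumulative" series for illustration only (not real data)
rng = np.random.default_rng(42)
t = np.arange(60.0)
data = gompertz(t, 40000, 8.0, 0.08) * rng.normal(1.0, 0.02, t.size)

# Fit the three parameters to the data, then project beyond it
(K, b, c), _ = curve_fit(gompertz, t, data, p0=(2 * data[-1], 5.0, 0.1))
print(f"fitted asymptote ≈ {K:.0f}; day-90 projection ≈ {gompertz(90, K, b, c):.0f}")
```

The fitted asymptote K is the analogue of a final-outcome forecast: once the curve fits the observed portion well, its extrapolation supplies the projection.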

This has worked well, most notably in Michael’s work in forecasting, in early February, the situation in China at the end of March. I reported this on March 24th when the remarkable accuracy of that forecast was reported in the press:

The Times coverage on March 24th of Michael Levitt's accurate forecast for China

Forecasting approach

Approaching the problem from a slightly different perspective, my model (based on a model developed by Prof. Alex de Visscher at Concordia University) is a forecasting model, with my own parameters and settings, and UK data, and is currently matching death rate data for the UK, on the basis of Government reported “all settings” deaths.

The model is calibrated to fit known data as closely as possible (using key parameters such as those describing virus transmission rate and incubation period), and then solves the Differential Equations describing the behaviour of the virus, to arrive at a predictive model for the future. No mathematical equation is assumed for the charts and curve shapes; their behaviour is constructed bottom-up from the known data, postulated parameters, starting conditions and differential equations.

The model solves the differential equations that represent an assumed relationship between “compartments” of people, including, but not necessarily limited to Susceptible (so far unaffected), Infected and Recovered people in the overall population.

I had previously explored such a generic SIR model (with just three such compartments), using code based on the Galbraith solution to the relevant Differential Equations. My following article on the Reproductive Number R0 was set in the context of the SIR (Susceptible-Infected-Recovered) model, but my current model is based on Alex’s 7 Compartment model, allowing for graduations of sickness and multiple compartment transition routes (although NOT with reinfection).
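For readers unfamiliar with the approach, here is a generic three-compartment SIR sketch (my own illustration with scipy, not Alex de Visscher’s 7-compartment code; the transmission and recovery rates are assumed for illustration). Note how intervention effectiveness enters simply as a multiplier on the transmission rate:

```python
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR equations on population fractions."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

beta0, gamma = 0.4, 1 / 7      # assumed transmission and recovery rates (per day)
y0 = [1 - 1e-6, 1e-6, 0.0]     # start with a tiny seed of infection

final_attack = {}
for effectiveness in (0.0, 0.84):   # no intervention vs. 84% effectiveness
    beta = beta0 * (1 - effectiveness)
    sol = solve_ivp(sir, (0, 400), y0, args=(beta, gamma), rtol=1e-6, atol=1e-9)
    final_attack[effectiveness] = sol.y[2, -1]   # recovered fraction at day 400
    print(f"effectiveness {effectiveness:.0%}: R0 = {beta / gamma:.2f}, "
          f"final attack rate {final_attack[effectiveness]:.1%}")
```

With these assumed rates, 84% effectiveness drives the reproduction number below 1 and the outbreak fizzles out, while the uncontrolled case infects most of the population; the real 7-compartment model refines this picture rather than changing its mechanism.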

SEIR models allow for an Exposed but not Infected phase, and SEIRS models add a loss of immunity to Recovered people, returning them eventually to the Susceptible compartment. There are many such options – I discussed some in one of my first articles on SIR modelling, and then later on in the derivation of the SIR model, mentioning a reference to learn more.

Although, as Michael has said, the slowing of growth of SARS-CoV-2 might be because it finds it hard to locate further victims, I should have thought that this was already described in the Differential Equations for SIR related models, and that the compartment links in the model (should) take into account the effect of, for example, social distancing (via the effectiveness % parameter in my model). I will look at this further.

The June 2nd UK reported and modelled data

Here are my model output charts up to June 2nd exactly, as of the UK Government briefing that day, and they show (apart from the last few days over the weekend) a very close fit to reported death data**. The charts are presented as a sequence of slides:

These charts all represent the same UK deaths data, but presented in slightly different ways – linear and log y-axes; cumulative and daily numbers; and to date, as well as the long term outlook. The current long term outlook of 42,550 deaths in the UK is within error limits of the Worldometers-linked forecast of 44,389, presented at https://covid19.healthdata.org/united-kingdom, but is not modelled on it.

**I suspected that my 84% effectiveness of intervention would need to be reduced a few points (c. 83.5%) to reflect a little uptick in the UK reported numbers in these charts, but I waited until midweek, to let the weekend under-reporting work through. See the update below**.

I will also be interested to see if that slight uptick we are seeing on the death rate in the linear axis charts is a consequence of an earlier increase in cases. I don’t think it will be because of the very recent and partial lockdown relaxations, as the incubation period of the SARS-CoV-2 virus means that we would not see the effects in the deaths number for a couple of weeks at the earliest.

I suppose, anecdotally, we may feel that UK public response to lockdown might itself have relaxed a little over the last two or three weeks, and might well have had an effect.

The periodic scatter of the reported daily death numbers around the model numbers is because of the regular weekend drop in numbers. Reporting is always delayed over weekends, with the shortfall typically caught up over the Monday and Tuesday – just as for 1st and 2nd June here.

A few numbers are often reported for previous days at other times too, when the data wasn’t available at the time, and so the specific daily totals typically do not comprise precisely, and only, deaths that occurred on that particular day.

The cumulative charts tend to mask these daily variations as the cumulative numbers dominate small daily differences. This applies to the following updated charts too.
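The masking effect is easy to demonstrate with made-up numbers: weekend under-reporting moves deaths between days but not out of the running total, so the daily series looks noisy while the cumulative series barely flinches.

```python
# Hypothetical week of daily deaths: true vs. reported, with the weekend
# shortfall caught up on Monday/Tuesday. The weekly totals agree.
true_daily     = [300, 300, 300, 300, 300, 300, 300]
reported_daily = [150, 100, 450, 400, 300, 300, 400]   # Sat, Sun, Mon, ...
prior_total = 35000   # illustrative cumulative deaths before this week

def cumulative(xs, start=0):
    out, total = [], start
    for x in xs:
        total += x
        out.append(total)
    return out

rel_daily = max(abs(r - t) / t for t, r in zip(true_daily, reported_daily))
cum_t = cumulative(true_daily, prior_total)
cum_r = cumulative(reported_daily, prior_total)
rel_cum = max(abs(r - t) / t for t, r in zip(cum_t, cum_r))
print(f"worst daily discrepancy {rel_daily:.0%}; worst cumulative discrepancy {rel_cum:.1%}")
# → worst daily discrepancy 67%; worst cumulative discrepancy 1.0%
```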

**June 3rd update for 83.5% intervention effectiveness

I have reworked the model for 83.5% intervention effectiveness, which reduces the transmission rate to 16.5% of its starting value, prior to 23rd March lockdown. Here is the equivalent slide set, as of 3rd June, one day later, and included in this post to make comparisons easier:

These charts reflect the June 3rd reported deaths at 39,728 and daily deaths on 3rd June of 359. The model long-term prediction is 44,397 deaths in this scenario, almost exactly the Worldometer forecast illustrated above.

We also see the June 3rd reported and modelled cumulative numbers matching, but we will have to watch the growth rate.

Concluding remarks

I’m not as concerned to model cases data as accurately, because the reported numbers are somewhat uncertain, collected as they are in different ways by four Home Countries, and by many different regions and entities in the UK, with somewhat different definitions.

My next steps, as I said, are to look at the Sigmoid and data fitting charts Michael uses, and compare the same method to my model generated charts.

*NB The UK Office for National Statistics (ONS) has been working on the Excess Deaths measure, amongst other data, including deaths where Covid-19 is mentioned on the death certificate, not requiring a positive Covid-19 test as the Government numbers do.

As of 2nd June, the Government announced 39,369 deaths in its standard “all settings” measure – Hospitals, Community AND Care homes (with a Covid-19 test diagnosis) – but the ONS are mentioning 62,000 Excess Deaths today. A little while ago, on the 19th May, the ONS figure was 55,000 Excess Deaths, compared with 35,341 for the “all settings” UK Government number. I reported that in my EuroMOMO data analysis post at https://www.briansutton.uk/?p=2302.

But none of the ways of counting deaths is without its issues. As the King’s Fund says on their website, “In addition to its direct impact on overall mortality, there are concerns that the Covid-19 pandemic may have had other adverse consequences, causing an increase in deaths from other serious conditions such as heart disease and cancer.

“This is because the number of excess deaths when compared with previous years is greater than the number of deaths attributed to Covid-19. The concerns stem, in part, from the fall in numbers of people seeking health care from GPs, accident and emergency and other health care services for other conditions.

“Some of the unexplained excess could also reflect under-recording of Covid-19 in official statistics, for example, if doctors record other causes of death such as major chronic diseases, and not Covid-19. The full impact on overall and excess mortality of Covid-19 deaths, and the wider impact of the pandemic on deaths from other conditions, will only become clearer when a longer time series of data is available.”

Categories
Coronavirus Covid-19 Michael Levitt

Michael Levitt’s analysis of European Covid-19 data

Introduction

I promised in an earlier blog post to present Prof. Michael Levitt’s analysis of Covid-19 data published on the EuroMOMO site for European health data over the last few years.

EuroMOMO

EuroMOMO is the European Mortality Monitoring Project. Based in Denmark, their website states that the overall objective of the original European Mortality Monitoring Project was to design a routine public health mortality monitoring system aimed at detecting and measuring, on a real-time basis, excess number of deaths related to influenza and other possible public health threats across participating European Countries. More is available here.

The Excess Deaths measure

We have heard a lot recently about using the measure of “excess deaths” (on an age related basis) as our own Office for National Statistics (ONS) work on establishing a more accurate measure of the impact of the Coronavirus (SARS-CoV-2) epidemic in the UK.

I think it is generally agreed that this is a better measure – a more complete one perhaps – than those currently used by the UK Government, and some others, because there is no argument about what is and what isn’t a Covid-19 death. It’s just excess deaths over and above the seasonal, age related numbers for the geography, country or community concerned, attributing the excess to the novel Coronavirus SARS-CoV-2, the new kid on the block.
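The calculation itself is as simple as the definition suggests: subtract the seasonal baseline (e.g. a five-year average for the same week) from observed all-cause deaths. With hypothetical weekly figures (not ONS data):

```python
# Hypothetical weekly all-cause deaths vs. a five-year seasonal baseline
observed = [11000, 12500, 18500, 22000, 20000]
baseline = [10500, 10400, 10600, 10300, 10200]

excess = [o - b for o, b in zip(observed, baseline)]
print("weekly excess deaths:", excess)
print("total excess:", sum(excess))
# → total excess: 32000
```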

That attribution, though, might have its own different issues, such as the inclusion (or not) of deaths related to people’s reluctance to seek hospital help for other ailments, and other deaths arising from the indirect consequences of lockdown related interventions.

There is no disputing, however, that the UK Government figures for deaths have been incomplete from the beginning; they were updated a few weeks ago to include Care Homes on a retrospective and continuing basis (what they called “all settings”) but some reporting of the ONS figures has indicated that when the Government “all settings” figure was 35,341, as of 19th May, the overall “excess deaths” figure might have been as high as 55,000. Look here for more detail and updates direct from the ONS.

The UK background during March 2020

The four policy stages the UK Government initially announced in early March were: Containment, Delay, Research and Mitigate, as reported here. It fairly soon became clear (after the outbreak was declared a pandemic on March 11th by the WHO) that the novel Coronavirus SARS-CoV-2 could not be contained, seeing what was happening in Italy, and case numbers growing in the UK, with deaths starting to be recorded on 10th March (at that time only recorded as caused by Covid-19 with a positive test, in hospital).

The UK Government have since denied that “herd immunity” had been a policy, but it was mentioned several times in early March, pre-lockdown (which was March 23rd) by Government advisers Sir Patrick Vallance (Chief Scientific Adviser, CSA) and Prof. Chris Whitty (Chief Medical Officer, CMO), in the UK Government daily briefings, with even a mention of 60% population infection proportion to achieve it (at the same time as saying that 80% might be loose talk (my paraphrase)).

If herd immunity wasn’t a policy, it’s hard to understand why it was proactively mentioned by the CSA and CMO, at the same time as the repeated slogan Stay Home, Protect the NHS, Save Lives. This latter advice was intended to keep the outbreak within bounds that the NHS could continue to handle.

The deliberations of the SAGE Committee (Scientific Advisory Group for Emergencies) are not published, but senior advisers (including the CSA and CMO) sit on it, amongst many others (50 or so, not all scientists or medics). Given the references to herd immunity in the daily Government updates at that time, it’s hard to believe that herd immunity wasn’t at least regarded as a beneficial(?!) by-product of not requiring full lockdown at that time.

Full UK lockdown was announced on March 23rd; according to reports, this was nine days after the UK Government had accepted it as inevitable (as a result of the 16th March Imperial College paper).

The Sunday Times (ST) of 24th May 2020 carried the Insight team’s story of how the forecasters took charge in mid-March as the UK Government allegedly dithered. The Tweets of the ST Insight team’s editor, Jonathan Calvert, and of his deputy editor, George Arbuthnott, refer, as does the related Apple podcast.

Prof. Michael Levitt

Michael (a Nobel Laureate in 2013 for computational biology) correctly forecast in February the eventual extent of the Chinese outbreak (centred on Wuhan, in Hubei province) by the end of March. I first reported this in my blog post of 24th March, as his work on China, and his amazingly accurate forecast, were reported here in the UK that day, which I saw in The Times newspaper.

On May 18th I reported in my blog further aspects of Michael’s outlook on the modelling by Imperial College, the London School of Hygiene and Tropical Medicine and others. That modelling, he says (and I paraphrase his words), caused western countries to trash their economies through the blanket measures they took, frightened away from what seems to have been, at least in part, a “herd-immunity” policy by their advisers’ model forecasts, reported in some publications as between 200,000 and 500,000 deaths.

Michael and I have been in direct touch since early May, when a mutual friend, Andrew Ennis, mentioned my Coronavirus modelling to him in his birthday wishes! We were all contemporaries at King’s College, London in 1964-67; they in Physics, and I in Mathematics.

I mentioned Michael’s work in a further, recent blog post on May 20th, covering his findings on the data at EuroMOMO and contrasting them with the Cambridge Conversation of 14th May. That is when I said I would post a blog article purely on his EuroMOMO work, and this post is the delivery of that promise.

I have Michael’s permission (as do others who have received his papers) to publicise his recent EuroMOMO findings (his earlier work having been focused on China, as I have said, and then on the rest of the world).

He is senior Professor in Structural Biology at Stanford University School of Medicine, CA.

I’m reporting, and explaining a little (where possible!), Michael’s findings just now, rather than deeply analysing them – I’m aware that he is a Nobel prize-winning data scientist, and I’m not (yet!) 😀

This blog post is therefore pretty much a recapitulation of his work, with some occasional explanatory commentary.

Michael’s EuroMOMO analysis

What follows is the content of several tweets published by Michael, at his account @MLevitt_NP2013, showing that in Europe, COVID19 is somewhat similar to the 2017/18 European Influenza epidemics, both in total number of excess deaths, and age ranges of these deaths.

Several other academics have also presented data that, whatever the absolute numbers, indicate that there is a VERY marked (“startling” was Prof. Sir David Spiegelhalter’s word) age dependency in the risk factors of dying from Covid-19. I return to that theme at the end of the post.

The EuroMOMO charts and Michael’s analysis

In summary, COVID19 Excess Deaths plateau at 153,006, 15% more than the 2017/18 Flu with similar age range counts. The following charts indicate the support for his view, including the correction of a large error Michael has spotted in one of the supporting EuroMOMO charts.

Firstly, here are the summary Excess Death Charts for all ages in 2018-20.

FIGURE 1. EuroMOMO excess death counts for calendar years 2018, 2019 & 2020

The excess deaths number for COVID19 is easily read as the difference between Week 19 (12 May ’20) and Week 8 (27 Feb ’20). The same is true of the 2018 part of the 2017/18 Influenza season. Getting the 2017 part of that season is harder. These notes are added to aid those interested in following the calculation, and hopefully to help them point out any errors.

The following EuroMOMO chart defines how excess deaths are measured.

FIGURE 2. EuroMOMO’s total and other categories of deaths

This is EuroMOMO’s Total (the solid blue line), Baseline (dashed grey line) and ‘Substantial increase’ (dashed red line) for years 2016 to the present. Green circles mark 2017/18 Flu and 2020 COVID-19. The difference between Total Deaths and Baseline Deaths is Excess Deaths.

Next, then, we see Michael’s own summary of the figures found from these earlier charts:

Table 3. Summary for 2020 COVID19 Season and 2017/18 Influenza Season.

Owing to baseline issues, we cannot estimate Age Range Mortality for the 2017 part of the Influenza season, so we base our analysis on the 2018 part, where data is available from EuroMOMO.

We see also the steep age dependency in deaths from under 65s to over 85s. I’ll present at the end of this post some new data on that aspect (it’s of personal interest too!)

Below we see EuroMOMO Excess Deaths from 2020 Week 8, now (on the 14th May) matching reported COVID deaths from @JHUSystems (Johns Hopkins University) perfectly (to better than 2%). In earlier weeks the reported deaths were lower, and Michael isn’t sure why; but the agreement allows him to do this in-depth analysis and comparison with EuroMOMO influenza data.

FIGURE 4. The weekly EuroMOMO Excess Deaths are read off their graphs by mouse-over.

The weekly reported COVID19 deaths are taken from the Johns Hopkins University GitHub repository. The good agreement is an encouraging sign of reliable data, but there is an unexplained delay in the EuroMOMO numbers.

Analysis of Europe’s Excess Deaths is hard: EuroMOMO provides beautiful plots, but extracting data requires hand-recorded mouse-overs on-screen*. COVID19 2020 (weeks 8-19) and Influenza 2018 (weeks 01-16) are relatively easy for all age ranges (totals 153,006 and 111,226). Getting the Dec. 2017 Influenza peak is very tricky.

(*My son, Dr Tom Sutton, has been extracting UK data from the Worldometers site for me, using a small but effective Python “scraping” script he developed. It is feasible, but much more difficult, to do this on the EuroMOMO site, owing to the vector coordinate definitions of the graphics, and Document Object Model they use for their charts.)
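Tom’s actual script isn’t reproduced here, but the general approach can be sketched with Python’s standard-library HTML parser. Everything in this fragment – the markup, the table id and the numbers – is invented for illustration; it is not Worldometers’ real page structure:

```python
from html.parser import HTMLParser

# Illustrative fragment only -- not the real Worldometers markup or data.
SAMPLE_HTML = """
<table id="main_table">
  <tr><th>Country</th><th>Cases</th><th>Deaths</th></tr>
  <tr><td>UK</td><td>261,184</td><td>36,914</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    """Collect the text of every table cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())

scraper = TableScraper()
scraper.feed(SAMPLE_HTML)
header, uk = scraper.rows[0], scraper.rows[1]
record = dict(zip(header, uk))
# Strip thousands separators before converting to int
cases = int(record["Cases"].replace(",", ""))
deaths = int(record["Deaths"].replace(",", ""))
print(cases, deaths)
```

In practice the page would be fetched with an HTTP request first; the parsing step above is the part that would need adjusting to the site’s actual Document Object Model.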

FIGURE 5. The Excess deaths for COVID19 in 2020 and for Influenza in 2018 are easily read off the EuroMOMO graphs by hand recording four mouse-overs.

The same is done for all different age ranges allowing accurate determination of the age range mortalities. For COVID19, there are 174,801 minus 21,795 = 153,006 Excess Deaths. For 2018 Influenza, the difference is 111,226 minus zero = 111,226 Excess Deaths.

Michael exposes an error in the EuroMOMO charts

In the following chart, it should be easy to calculate the number again, as mouse-over of the charts on the live EuroMOMO site gives two values per week: the Actual death count and the Baseline value.

Tests on the COVID19 peak gave a total of 127,062 deaths, not 153,006. Plotting a table and superimposing the real plot showed why: the ‘Baseline’ values are actually the ‘Substantial increase’ values!! Wrong labelling?

Figure 6. Actual death count & Baseline value

In Figure 6, Excess Deaths can also be determined from the plots of Total and Baseline Deaths with week number. Many more numbers need to be recorded but the result would be the same.

TABLE 7. The pairs of numbers recorded from EuroMOMO between weeks 08 and 19 of 2020 allow the Excess Deaths to be determined in a different way than from FIG. 5. The total Excess Deaths (127,062) should be the same as before (153,006) but it is not. Why? (Mislabelling of the EuroMOMO graph? What is “Substantial increase” anyway, and why is it there? – BRS)

FIGURE 8. Analysing what is wrong with the EuroMOMO Excess Deaths count

FIGURE 8. The lower number in TABLE 7 is in fact not the Baseline Death value (grey dashed line) but the ‘Substantial increase’ value (red dashed line). Thus the numbers in the table are not Excess Deaths (Total minus Baseline level) but Total minus ‘Substantial increase’ level. The difference is found by adding 12×1981** to 127,062 to get 153,006. This means that the baseline is about 2000 deaths a week below the red line. This cannot be intended, and is a serious error in EuroMOMO. Michael has been looking for someone to help him contact them. (**(153,006 – 127,062)/12 = 25,944/12 = 2,162. So shouldn’t we be adding 12×2,162, Michael? – BRS)
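As a quick sanity check on that correction arithmetic, using only the two totals quoted above, the per-week gap comes out at 2,162, not 1,981:

```python
# Two totals for the same COVID19 excess deaths, read off EuroMOMO
excess_from_curves = 153_006      # Total minus Baseline (FIG. 5)
excess_from_mouseovers = 127_062  # Total minus the lower mouse-over value (TABLE 7)
weeks = 12                        # number of weekly corrections applied in the text

discrepancy = excess_from_curves - excess_from_mouseovers
per_week = discrepancy / weeks
print(discrepancy, per_week)  # 25944 2162.0
```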

Reconciling the numbers, and age range data

Requiring the two COVID19 death counts to match means reducing the Baseline value by 23,774/12 = 1,981**. Mouse-overs for 2017, weeks 46 to 52, gave the table below. Negative Excess Deaths meant the 2017 Influenza began in Week 49, not 46. Michael tried to get Age Range data for 2017, but the table just uses the 2018 Influenza data. (**see above also – same issue. Should it be 25,944/12 = 2,162? – BRS)

TABLE 9. Estimating the Excess Deaths for the 2017 part of the 2017/18 influenza season

In TABLE 9, Michael tries to estimate the Excess Deaths for the 2017 part of the 2017/18 Influenza season by recording pairs of mouse-overs for seven weeks (46 to 52) and four age ranges. Because the Total Deaths are not always higher than the ‘Substantial increase’ base level, he uses differences as a sanity check. The red numbers for weeks 46 to 48 show that the Excess Deaths are negative and that the Influenza season did not start until week 49 of 2017.

TABLE 10. We try to combine the two parts of the 2017/18 Influenza season

TABLE 10 commentary. We try to combine the two parts of the 2017/18 Influenza season. The values for 2018 are straightforward, as they are determined as shown in Fig. 5. For 2017, we need to use the values in Table 9 and add the baseline correction, because the EuroMOMO mouse-overs are wrong, giving as they do the ‘Substantial increase’ value instead of the ‘Baseline’ value. We can use the same correction of 1,981** (see my prior comments on this number – BRS) deaths per week as determined for all the COVID19 data, but we do not know what the correction is for the other age ranges. An attempt to assume that the correction is proportional to the 2017 number of deaths in each age range gives strange age range mortalities. Thus, we choose to use the total for 2017 (21,972), but give the age range mortalities just from the deaths in 2018, as the 2017 data is arcane, unreliable or flawed.

Michael’s concluding statement

COVID19 is similar to Influenza only in total and in age range excess mortality. Flu is a different virus, has a safe vaccine & is much less a threat to heroic medical professionals.

Additional note on the age dependency of Covid-19 risk

In my earlier blog post, reporting the second Cambridge Conversation webinar I attended, the following slide from Prof. Sir David Spiegelhalter was one that drew the sharp distinction between the risk to people in different age ranges:

Age related increase in Covid-19 death rates

Prof. Spiegelhalter’s own Twitter account is also quite busy, and this particular chart was mentioned there, and also on his blog.

This week I was sent this NHS pre-print paper (pending peer review, as many Coronavirus research papers are) to look at the various Covid-19 risk factors and their dependencies, and to explain them. The focus of the 20-page paper is the potential for enhanced risk for people with Type-1 or Type-2 Diabetes, but the Figure 2 towards the end of that paper shows the relative risk ratios for a number of other parameters too, including age range, gender, deprivation and ethnic group.

Risk ratios for different population characteristics

This chart extract, from the paper by corresponding author Prof. Jonathan Valabhji (Imperial College, London & NHS) and his colleagues, indicates a very strong age dependency in Covid-19 risk. The risk for a white woman under 40, with no deprivation factors and no diabetes, is 1% of that for the control person (a 60-69 year old white woman, with no deprivation factors and no diabetes). A white male under 40, with otherwise similar characteristics, would have 1.94% of the control person’s risk.

Other reduction factors apply in the two 10-year age bands 40-49 and 50-59: for a white woman (no deprivation or diabetes) in those age ranges, the risk is 11% and 36% of the control risk respectively.

At 70-79, and above 80, the risk enhancement factors owing to age are x 2.63 and x 9.14 respectively.

So there is some agreement (at least on the principle of the age dependency of risk as represented by the data, if not on the quantum) between EuroMOMO, Prof. Michael Levitt, Prof. Sir David Spiegelhalter and the Prof. Jonathan Valabhji et al. paper: that increasing age beyond middle age is a significant indicator of enhanced risk from Covid-19.

In some other respects, Michael is at odds with forecasts made by Prof. Neil Ferguson’s Imperial College group (and, by inference, also with the London School of Hygiene and Tropical Medicine) and with the analysis of the Imperial College paper by Prof. Spiegelhalter.

I reported this in my recent blog post on May 18th concerning the Cambridge Conversation of 14th May, highlighting the contrast with Michael’s interview with Freddie Sayers of UnHerd, which is available directly on YouTube at https://youtu.be/bl-sZdfLcEk.

I recommend going to the primary evidence and watching the videos in those posts.

Categories
Coronavirus Covid-19

My model calculations for Covid-19 cases for an earlier UK lockdown

Introduction

A little while ago (14th May), I published a post entitled What if UK lockdown had been 2 weeks earlier? where I explored the possible impact of a lockdown intervention date of 9th March instead of 23rd March, the actual UK lockdown date.

That article focused more on the impact on the number of deaths in those two scenarios, rather than the number of Covid-19 cases, where the published data is not as clear, or as complete, since so few people have been tested.

That post also made the point that this wasn’t a proper forecast, because the calibration of the model for that early an intervention date would have been compromised, as there was so little historic data to which to fit the model at that point. That still applies here.

Therefore the comparisons are not valid in detail against reported data; but the comparative numbers between the two models show how a typical model such as mine (derived from Alex de Visscher’s code, as before) is highly dependent on (early) input data, and, indeed, responds in a very non-linear way, given the exponential pattern of pandemic growth.
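My model itself is Alex de Visscher’s 7-compartment code with my UK settings, and isn’t reproduced here; but that non-linearity can be illustrated with a much cruder SIR-type sketch in Python, in which lockdown simply cuts an assumed transmission rate by 84% on a chosen day. All the parameter values below are illustrative assumptions, not my model’s calibrated settings:

```python
def epidemic(t_lock, days=60, n=66_000_000, i0=100,
             beta0=0.5, gamma=0.1, effectiveness=0.84):
    """Crude daily-step SIR model: the transmission rate beta drops by
    `effectiveness` from day t_lock onwards. Returns cumulative cases."""
    s, i, cum = n - i0, i0, i0
    for day in range(days):
        beta = beta0 * (1 - effectiveness) if day >= t_lock else beta0
        new_inf = beta * s * i / n   # new infections this day
        s -= new_inf                 # susceptibles become infected...
        i += new_inf - gamma * i     # ...while some infected recover
        cum += new_inf
    return cum

early = epidemic(t_lock=9)   # lockdown on day 9
late = epidemic(t_lock=23)   # lockdown two weeks later
print(f"{late / early:.0f}x more cumulative cases")
```

Even this toy model shows the later lockdown producing vastly more cumulative cases, purely because of the extra two weeks of exponential growth before the intervention bites.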

Cases

I present below the two case numbers charts for the 9th March and 23rd March lockdown dates (I had covered the death data in more detail in my previous post on this topic, but will return to that below).

In the charts for cases here, we see in each chart (in orange) the same reported data, to date (24th May), but a big difference in the model predictions for cases. For the 9th March lockdown, the model number for cases by 23rd March is 14,800.

The equivalent model number for cases for 23rd March lockdown (i.e. modelled cases with no prior lockdown) is 45,049 cases, about 3 times as many.

The comparative reported number (the orange curve above) for 23rd March is 81,325 (based on multiplying up UK Government reported numbers (by 12.5), using Italy’s and other data concerning the proportion of real cases that might ever be tested (about 8%), as described in my Model Update post on May 8th). Reported case numbers (in other countries too, not just in the UK) underestimate the real case numbers by such a factor, because of the lack of sufficient public Coronavirus testing.

As I said in my previous article, a reasonable multiple on the public numbers for comparison might, then, be 12.5 (the inverse of 8%), which the charts above reflect in the orange curve.
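The adjustment is just that multiple applied to the published number. As a sketch (the reported-cases figure here is simply the one implied by the 81,325 quoted above, i.e. 81,325 / 12.5):

```python
tested_fraction = 0.08            # assumed share of real cases ever tested (Italy-based estimate)
multiplier = 1 / tested_fraction  # = 12.5, the multiple used for the orange curve
reported_23_march = 6_506         # UK-reported cases implied by 81,325 / 12.5
estimated_real = reported_23_march * multiplier
print(multiplier, estimated_real)
```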

Deaths

For completeness, here are the comparative charts showing the equivalent model data for deaths, for the two lockdown dates.

On the right, my live model, using 84% lockdown intervention effectiveness for the actual March 23rd lockdown, matches the reported deaths data quite accurately. The model curve and the reported data curve are almost coincident (the reported data is the orange curve, as always).

On the left, the modelled number of deaths is lower from the time of lockdown. By 23rd March, it is 108 for the 9th March lockdown, lower than for the lockdown on 23rd March (402), there having been, of course, no benefit yet from lockdown at all in the latter case.

These compare with the model numbers for deaths at the later date of May 13th, reported in my May 13th post, of 540 and 33,216 for March 9th and March 23rd lockdowns respectively (at virtually the same 84.1% intervention effectiveness).

As for the current date of 24th May, at 84% effectiveness, the number of deaths on the right, for the actual 23rd March lockdown data and model, is 36,660 (against the reported 36,793); for the 9th March lockdown, on the left, the model shows 570 deaths.

That seems a very large difference, but see it as an internal comparison of model outcomes on those two assumptions. Whatever the deficiencies of the availability of data to fit the model to an earlier lockdown, it is clear that, by an order of magnitude, the model behaviour over that two-month period or so is crucially dependent on when that intervention (lockdown) happens.

This shows the startling (but characteristic) impact of the exponential pandemic growth on the outcomes from the different lockdown dates, for an outcome reporting date, 13th May, just 51 days later than the March 23rd reporting date, and for an outcome reporting date, 24th May, 62 days after March 23rd.

The model shows deaths multiplying by 5 over that 51-day period for the 9th March lockdown, but by 82 times for the 23rd March lockdown. For the 62-day period (11 days later), the equivalent multiples are 5.2 and 339 for the 9th March and 23rd March lockdowns respectively.
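The 51-day multiples can be checked directly from the model numbers quoted above (a simple ratio check, nothing more):

```python
# Model deaths at 23rd March and at 13th May (51 days later)
march9_start, march9_end = 108, 540       # 9th March lockdown model
march23_start, march23_end = 402, 33_216  # 23rd March lockdown model

print(march9_end / march9_start)    # 5.0
print(march23_end / march23_start)  # just under 83
```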

My 9th March lockdown modelled numbers are lower than those from Professor Rowland Kao’s research group at Edinburgh, if their Scottish numbers are scaled up for the UK. Indeed, I think my absolute numbers are too low for the March 9th lockdown case. But remember, this is about model comparisons, it’s NOT an absolute forecast.

In terms of the long-term outlook (under the somewhat unrealistic assumption that 84% lockdown effectiveness continues, and the possibly more realistic assumption of no vaccine), deaths plateau at 42,500 for the actual March 23rd lockdown, but would have plateaued at only 625 in my model if the lockdown had been on March 9th (as covered in my previous post).

Conclusions

For cases, the modelled March 9th lockdown long-term plateau (under similar assumptions) would have been 41,662 cases; but for the actual 23rd March lockdown, the model shows 2.8 million cases – a vastly higher number, showing the effect of exponential behaviour with only a two-week difference in the timing of the intervention measures (at 84% effectiveness in both cases). That’s how vital timing is, as is the effectiveness of the measures taken, in a pandemic situation.

These long-term model outcomes reflect the observation of a likely deaths/cases ratio (1.5%) from the “captive” community on the cruise ship Diamond Princess.

But as I said earlier, these are comparisons within my model, to assess the nature and impact of an earlier lockdown, with the main focus in this post being the cases data.

It is a like-for-like comparison of modelled outcomes for two assumptions, one for the actual lockdown date, 23rd March, where the model fits reported data quite well (especially for deaths), and one for the earlier, postulated 9th March lockdown date (where the model fit must be questionable) that has been discussed so much.

Categories
6Points Cycling Strava Watopia Sand & Sequoias Zwift

6Points Mallorca Zwift training ride led by Dame Sarah Storey

Dame Sarah Storey, British Olympic cycling champion, led our 6Points Mallorca Sunday training ride today. See the live stream on YouTube.

For those that enjoyed the ride, we also highlighted the 6Points Mallorca charity JustGiving page which helps a disadvantaged children’s charity, Asdica in Mallorca.

See more about Asdica, and our other charities and sponsors, at the 6Points website. Over €66,000 has been collected through 6Points events over three years.

The ride today, a mixture of peloton, sprint and minirace riding, was over 2 laps of Watopia’s Sand and Sequoias course, about 43 kms, with the minirace from the bottom of the Titan’s Grove KoM second time around. It’s a lovely course, and the minirace is a tough one at 10kms, with that KoM to start, with even the descent after that a little lumpy too.

We do the Fuego Flats sprint twice, and then take it again at the end of the minirace, which finishes at the arch on that same sprint section.

It was a very well attended ride today, with a great lead by Sarah at even pace, keeping it very much together, until the minirace start 10kms from the end at the bottom of the Titan’s Grove KoM.

I was, of course, taking my red beacon duties very seriously, and had a good little group around me for a good part of the event.

We had 218 booked to ride, with 171 riding and 133 finishers. Our podium included a son and father combo, the Scotts, divided by Bruch Wu, always at the pointed end of our miniraces.

Regulars and locals riding included (roughly in finishing order): Jed Scott (Draft, a very rapid 1st, well done!), Bruch Wu (a regular podium in our 6Points and GGCC events, 2nd), Hamish Scott (Jed’s dad, a regular and strong rider in our events, 3rd), Tony Romo (4th), Martin Smith (5th), Sean Ekblom (GGCC beacon and 6th), Beth McIver (CryoGen), Alex Fthenakis (GGCC), Del Chattelle (GGCC), Roger Bloom, Alastair Pell (Nightingale), Charlie Farnham (Storey racing), Twinny Styler (Storey racing), Sarah Storey (Beacon and Storey Racing(!)), Heather Mayne (GGCC Zwift race team), Niall Hughes (GGCC), Gavin Stewart, Colin Sinclair (RACC), Derek Brown (GGCC), Leroy Nahay, Andrea McDowell, Andy Cattanach (GGCC), Euan Gordon (GGCC Beacon), Gavin Johnston (GGCC and graphics designer for our stream screen), Scott Ballantyne (GGCC), Leslie Tennant (GGCC), Christine Catterson (GGCC), Brian Sutton (GGCC and red beacon) and Fleury Stoops (GGCC).

All ride results are at ZwiftPower for those registered ZP, or on Companion (but with lots of flyers) for everyone.

I DQd 6 riders on ZwiftPower for being ahead of the beacon at the minirace start.

Sarah will be leading for GGCC again on 6th June, on the 11.30am BST (10.30 UTC) GGCC Saturday morning training ride, and we look forward to that!

Categories
Cambridge Conversations Coronavirus Covid-19 Michael Levitt

Cambridge Conversation 14th May 2020, and Michael Levitt’s analysis of Euro data

I covered the May 14th Cambridge Conversation in my blog post last week, and promised to make available the YouTube link for it when uploaded. It is now on the University of Cambridge channel at:

Cambridge Conversation – COVID-19 behind the numbers – statistics, models and decision-making

In my following, and most recent post, I also summarised Prof. Michael Levitt’s interview with UnHerd at my post Another perspective on Coronavirus – Prof. Michael Levitt which presents a perspective on the Coronavirus crisis which is at odds with earlier forecasts and commentaries by Prof. Neil Ferguson and Prof. Sir David Spiegelhalter respectively.

Michael Levitt has a very good and consistent track record in predicting the direction of travel, and the extent, of what I might call the Coronavirus “China Crisis”, from quite early on, and contrary to the then-current thinking about the rate of growth of Coronavirus there. Michael’s interview is at:

Michael Levitt’s interview with UnHerd

and I think it’s good to see these two perspectives together.

I will cover shortly some of Michael’s latest work on analysing comparisons presented at the website https://www.euromomo.eu/graphs-and-maps, looking at excess mortality across several years in Europe. Michael’s conclusions (which I have his permission to reproduce) are included in the document here:

where, as can be seen from the title, the Covid-19 growth profile doesn’t look very dissimilar from recent previous years’ influenza data. More on this in my next article.

As for my own modest efforts in this area, my model (based on a 7 compartment code by Prof. Alex de Visscher in Canada, with my settings and UK data) is still tracking UK data quite well, necessitating no updates at the moment. But the UK Government is under increasing pressure to include all age related excess deaths in their daily (or weekly) updates, and this measure is mentioned in both videos above.

So I expect some changes to reported data soon: just as the UK Government has had to move to include “deaths in all settings” by adding Care Home deaths to their figures, it is likely they will have to move to including the Office for National Statistics numbers too, which they have started to mention. Currently, instead of c. 35,000 deaths, these numbers show c. 55,000, although, as mentioned, the basis for inclusion is different.

These would be numbers based on a mention of Covid-19 on death certificates, not requiring a positive Covid-19 test as currently required for inclusion in UK Government numbers.

Categories
Coronavirus Covid-19 Reproductive Number

Another perspective on Coronavirus – Prof. Michael Levitt

Owing to the serendipity of a contemporary and friend of mine at King’s College London, Andrew Ennis, wishing one of HIS contemporaries in Physics, Michael Levitt, a happy birthday on 9th May, and mentioning me and my Coronavirus modelling attempts in passing, I am benefiting from another perspective on Coronavirus from Michael Levitt.

The difference is that Prof. Michael Levitt is a Nobel laureate in 2013 in computational biosciences…and I’m not! I’m not a Fields Medal winner either (there is no Nobel Prize for Mathematics, the Fields Medal being an equivalently prestigious accolade for mathematicians). Michael is Professor of Structural Biology at the Stanford School of Medicine.

I did win the Drew Medal for Mathematics in my day, but that’s another (lesser) story!

Michael has turned his attention, since the beginning of 2020, to the Coronavirus pandemic, and had kindly sent me a number of references to his work, and to his other recent work in the field.

I had already referred to Michael in an earlier blog post of mine, following a Times report of his amazingly accurate forecast of the limits to the epidemic in China (in which he was taking a particular interest).

Report of Michael Levitt’s forecast for the China outbreak

I felt it would be useful to report on the most recent of the links Michael sent me regarding his work, the interview given to Freddie Sayers of UnHerd at https://unherd.com/thepost/nobel-prize-winning-scientist-the-covid-19-epidemic-was-never-exponential/ reported on May 2nd. I have added some extracts from UnHerd’s coverage of this interview, but it’s better to watch the interview.

Michael’s interview with UnHerd

As UnHerd’s report says, “With a purely statistical perspective, he has been paying close attention to the Covid-19 pandemic since January, when most of us were not even aware of it. He first spoke out in early February, when through analysing the numbers of cases and deaths in Hubei province he predicted with remarkable accuracy that the epidemic in that province would top out at around 3,250 deaths.

“His observation is a simple one: that in outbreak after outbreak of this disease, a similar mathematical pattern is observable regardless of government interventions. After around a two week exponential growth of cases (and, subsequently, deaths) some kind of break kicks in, and growth starts slowing down. The curve quickly becomes ‘sub-exponential’.

UnHerd reports that he takes specific issue with the Neil Ferguson paper that, along with some others, was of huge influence with the UK Government (amongst others) in taking drastic action, moving away from a ‘herd immunity’ approach to a lockdown approach to suppress infection transmission.

“In a footnote to a table it said, assuming exponential growth of 15% for six days. Now I had looked at China and had never seen exponential growth that wasn’t decaying rapidly.

“The explanation for this flattening that we are used to is that social distancing and lockdowns have slowed the curve, but he is unconvinced. As he put it to me, in the subsequent examples to China of South Korea, Iran and Italy, ‘the beginning of the epidemics showed a slowing down and it was very hard for me to believe that those three countries could practise social distancing as well as China.’ He believes that both some degree of prior immunity and large numbers of asymptomatic cases are important factors.

“He disagrees with Sir David Spiegelhalter’s calculations that the total is around one additional year of excess deaths, while (by adjusting to match the effects seen on the quarantined Diamond Princess cruise ship, and also in Wuhan, China) he calculates that it is more like one month of excess deaths that is needed before the virus peters out.

“He believes the much-discussed R0 is a faulty number, as it is meaningless without the time infectious alongside.” I discussed R0 and its derivation in my article about the SIR model and R0.
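Michael’s point about the time infectious can be seen from the simple SIR model I wrote about: there, R0 is the transmission rate multiplied by the mean infectious period (R0 = β/γ, where 1/γ is the mean time infectious). A minimal illustration, with made-up parameter values:

```python
# In the SIR model, R0 = beta / gamma = beta * (mean infectious period)
beta = 0.25   # transmission rate per day (illustrative)
gamma = 0.1   # recovery rate per day, i.e. a 10-day infectious period
r0 = beta / gamma
print(r0)

# The same transmission rate with a 5-day infectious period halves R0,
# which is why R0 means little without the time infectious alongside:
r0_shorter = beta * 5
print(r0_shorter)
```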

Interestingly, Prof. Alex de Visscher, whose original model I have been adapting for the UK, also calibrated his thinking, in part, by considering the effect of the Coronavirus on the captive, closed community on the Diamond Princess, as I reported in my Model Update on Coronavirus on May 8th.

The UnHerd article finishes with this quote: “I think this is another foul-up on the part of the baby boomers. I am a real baby boomer — I was born in 1947, I am almost 73 years old — but I think we’ve really screwed up. We’ve caused pollution, we’ve allowed the world’s population to increase threefold in my lifetime, we’ve caused the problems of global warming and now we’ve left your generation with a real mess in order to save a relatively small number of very old people.”

I suppose, as a direct contemporary, that I should apologise too.

There’s a lot more at the UnHerd site, but better to hear it directly from Michael in the video.

Categories
Coronavirus Covid-19

Cambridge Conversations May 14th 2020 – reading data and the place of modelling

As an alumnus, I again had the opportunity today (with 3000 other people in over 70 countries) to attend the second Cambridge Conversations webinar, this time featuring Professor Sir David Spiegelhalter of Churchill College, Chair of the Winton Centre for Risk and Evidence Communication in the University of Cambridge, and Professor Mike Hulme, Professor of Human Geography and Fellow of Pembroke College, and a specialist in Climate Change.

The discussion, ‘COVID-19 behind the numbers – statistics, models and decision-making’, was moderated by Dr Alexandra Freeman, Executive Director at the Winton Centre for Risk and Evidence Communication.

The video of the 45 minute session will be available, and I will share it here in due course (it’s on a closed group at the moment, but will be on the Cambridge YouTube channel here in a few days, where the first Cambridge Conversation on Covid-19, from April, is currently available).

The presentations

Of most interest to me, given my interest in modelling the pandemic outbreak, was the first part of the scene-setting, by Professor Sir David Spiegelhalter, one of the world’s foremost biostatisticians, who unpicked the numbers surrounding COVID-19. He has been widely quoted recently on the interpretation of Covid-19 data.

He explored the reporting of cases and deaths; explained the bases on which predictions have been made; examined comparisons with the ‘normal’ risks faced by people; and investigated whether many deaths from COVID-19 could have been expected and have simply been brought forward.

He was joined by Professor Mike Hulme, whose expertise is in climate change, with particular interest in the role of model-based knowledge in strategic and policy decision-making relative to political and cultural values: a question of similar importance to COVID-19 as it is to Climate Change policies, his own area of study.

The first set of slides, by David Spiegelhalter, on the modelling aspects and the numbers coming out of the pandemic, is here:

The second part of the scene-setting, by Professor Mike Hulme, was more about how model-based knowledge is used in decision-making and public communication around Covid-19, and the differences in wider public perceptions across countries and cultures.

Much of this part of the discussion was about the difference between the broad basis for decision making vs. the more narrow basis for any particular expert advice; and that decision makers need to take into account a far wider set of parameters than just one expert model, involving cultural, ethical and many other factors. This means that methods, conclusions and decisions don’t necessarily carry over from one country to another.

Questions and answers

There was a Q&A session after the scene-setting, moderated by Dr Alexandra Freeman, and, amazingly, a submitted question from “Brian originally of Trinity College” was chosen to be asked! My question was about how to understand and model the mutual feedback between periodic lockdown adjustments and the growth rate of the virus. It wasn’t answered very well, if at all, combined as it was with someone else’s (reasonable) question about what data we need to gather to help us with the pandemic, which wasn’t answered properly either.

I had the impression that Mike Hulme, in particular, was more concerned with getting his own message across, and that several other questions didn’t get a good answer either. Spiegelhalter, for his part, is well aware of his own fame/notoriety, and was quite amusing about it, but possibly at the expense of listening to the questions and answering them.

Both of them thought some of the other questions (e.g. one about “which modellers around the world are the best?”) had sought to draw out views about a “beauty contest” of people working in the field, which they (rightly) said wasn’t helpful, as initiatives and models in different countries, contexts and cultures were all partial, dealing with their own priorities. Hulme used the phrase “when science runs hot” a few times, in the context of all the work going on while the data was unreliable, causing its own issues.

Spiegelhalter had been (in his opinion mis-) quoted both by Boris Johnson AND the new leader of the opposition, Keir Starmer, regarding recent statements he had made about the difficulty of comparing data from different countries and cultures concerning Covid-19.

But as a statistician, he will be well aware of the phrase “lies, damn lies and statistics”, so I don’t have much sympathy for his ruefulness about having created issues for himself by being outspoken about such matters. His statements are delivered in quite an authoritative tone, and any nuances in his public pronouncements might, I should think, not be noticed.

Summary

I recommend watching the YouTube video of the presentations when available on the Cambridge YouTube channel here next week, particularly (from my own perspective) Spiegelhalter’s, which drew some good distinctions about how to read the data in this crisis, and how to think about the Coronavirus issues in different parts of the population.

He had a very good point about Population Fatality Rate (PFR) vs. Infection Fatality Rate (IFR), (the difference between the chance of catching AND dying from Covid-19 (PFR) vs. the chance of dying from it once you already have it (IFR)) and how these are conflated by the media (and others) when considering the differential effect of Covid-19 on different parts of the population. One is an overall probability, and the other is a conditional probability, and the inferences are quite different, as he exemplified and explained.
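To make the distinction concrete (a sketch of my own, with made-up numbers rather than figures from the talk): PFR is the unconditional probability of catching and then dying from the disease, so it is the infection probability multiplied by the IFR.

```python
# PFR vs IFR, with invented illustrative numbers (not data from the talk):
#   IFR = P(die | infected)      - a conditional probability
#   PFR = P(infected AND die)    - an overall probability
#       = P(infected) * IFR

p_infected = 0.20   # assume 20% of the population catches the virus
ifr = 0.01          # assume a 1% infection fatality rate

pfr = p_infected * ifr
print(pfr)  # 0.002, i.e. 0.2% of the whole population
```

The two figures differ by a factor of five here, which is why conflating them, as the media often do, gives such misleading comparisons between population groups.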

There were some quite startling and clear lessons in reading the data about the relative susceptibilities of young vs. old and men vs. women, and about the importance of a more complete measure of death rates, as shown by charts of the overall Excess Deaths in the population, contrasted with narrower ways of measuring the impact of the pandemic, highlighting the wider issues we face.
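A sketch of how that Excess Deaths measure is typically computed (my own illustration, with invented weekly counts, not the actual chart data): observed all-cause deaths minus the baseline expected from previous years, often a five-year average for the same week.

```python
# Excess deaths = observed all-cause deaths minus the expected baseline.
# All numbers below are invented for illustration only.

observed = [12000, 14500, 18000, 21000]   # weekly all-cause death counts
baseline = [10500, 10600, 10400, 10700]   # five-year average for the same weeks

excess = [o - b for o, b in zip(observed, baseline)]
print(excess)       # per-week excess: [1500, 3900, 7600, 10300]
print(sum(excess))  # cumulative excess over the period: 23300
```

The appeal of this measure is exactly the presenters’ point: it captures the whole impact of the crisis, including deaths indirectly caused (or brought forward) by it, not just those certified as Covid-19.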

Both presenters wanted the whole impact of Coronavirus to be considered, not just the specific deaths from Covid-19 itself – things such as the increase in deaths from other causes, owing to the tendency of people not to want to attend hospitals; mental health; the bringing forward of deaths that might have happened in the next influenza season, if not now; and a number of other impacts.

Spiegelhalter found the lack of testing in the UK difficult to comprehend, and felt that addressing testing going forward was on the critical path to any way out of the crisis (my words).

The other main message, from Hulme, was that Governmental decision-making should be broadly based, and not driven by any particular modelling group. He didn’t reference the phrase “science-led”, as has been used so often by Government and others dealing with Coronavirus, but I imagine that he thinks that the word “science” in that phrase should be much more broadly defined (again, my interpretation of his theme).

Watch the Cambridge YouTube channel here next week and decide for yourself! At present the first Cambridge Conversation, “responding to the medical challenges of COVID-19” is available there, and summarised in my post Cambridge Conversations and model refinement.