I covered the May 14th Cambridge Conversation in my blog post last week, and promised to make available the YouTube link for it when uploaded. It is now on the University of Cambridge channel at:
In my most recent post, I also summarised Prof. Michael Levitt’s interview with UnHerd, in my post Another perspective on Coronavirus – Prof. Michael Levitt, which presents a perspective on the Coronavirus crisis at odds with earlier forecasts and commentaries by Prof. Neil Ferguson and Prof. Sir David Spiegelhalter respectively.
Michael Levitt has a very good and consistent track record in predicting the direction of travel and extent of what I might call the Coronavirus “China Crisis”, from quite early on, and contrary to the then current thinking about the rate of growth of Coronavirus there. Michael’s interview is at:
and I think it’s good to see these two perspectives together.
I will cover shortly some of Michael’s latest work on analysing comparisons presented at the website https://www.euromomo.eu/graphs-and-maps, looking at excess mortality across several years in Europe. Michael’s conclusions (which I have his permission to reproduce) are included in the document here:
where, as can be seen from the title, the Covid-19 growth profile doesn’t look very different from recent previous years’ influenza data. More on this in my next article.
As for my own modest efforts in this area, my model (based on a 7 compartment code by Prof. Alex de Visscher in Canada, with my settings and UK data) is still tracking UK data quite well, necessitating no updates at the moment. But the UK Government is under increasing pressure to include all age related excess deaths in their daily (or weekly) updates, and this measure is mentioned in both videos above.
So I expect some changes to reported data soon: just as the UK Government has had to move to include “deaths in all settings” by including Care Home deaths in their figures, it is likely they will have to move to including the Office for National Statistics numbers too, which they have started to mention. Currently, instead of c. 35,000 deaths, these numbers show c. 55,000, although, as mentioned, the basis for inclusion is different.
These would be numbers based on a mention of Covid-19 on death certificates, not requiring a positive Covid-19 test as currently required for inclusion in UK Government numbers.
Owing to the serendipity of a contemporary and friend of mine at King’s College London, Andrew Ennis, wishing one of HIS contemporaries in Physics, Michael Levitt, a happy birthday on 9th May, and mentioning me and my Coronavirus modelling attempts in passing, I am benefiting from another perspective on Coronavirus from Michael Levitt.
The difference is that Prof. Michael Levitt is a 2013 Nobel laureate (in Chemistry, for the development of multiscale models of complex chemical systems)…and I’m not! I’m not a Fields Medal winner either (there is no Nobel Prize for Mathematics, the Fields Medal being an equivalently prestigious accolade for mathematicians). Michael is Professor of Structural Biology at the Stanford School of Medicine.
I did win the Drew Medal for Mathematics in my day, but that’s another (lesser) story!
Michael has turned his attention, since the beginning of 2020, to the Coronavirus pandemic, and has kindly sent me a number of references to his recent work in the field.
I had already referred to Michael in an earlier blog post of mine, following a Times report of his amazingly accurate forecast of the limits to the epidemic in China (in which he was taking a particular interest).
As UnHerd’s report says, “With a purely statistical perspective, he has been paying close attention to the Covid-19 pandemic since January, when most of us were not even aware of it. He first spoke out in early February, when through analysing the numbers of cases and deaths in Hubei province he predicted with remarkable accuracy that the epidemic in that province would top out at around 3,250 deaths.
“His observation is a simple one: that in outbreak after outbreak of this disease, a similar mathematical pattern is observable regardless of government interventions. After around a two week exponential growth of cases (and, subsequently, deaths) some kind of break kicks in, and growth starts slowing down. The curve quickly becomes ‘sub-exponential’.
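The pattern Levitt describes – an initial exponential phase whose growth rate itself decays – is often captured by a Gompertz curve. Here is a purely illustrative sketch in Python (my parameter choices, not Michael’s fitted values), contrasting it with pure exponential growth:

```python
import math

def exponential(t, n0, r):
    """Pure exponential growth: the rate r never decays."""
    return n0 * math.exp(r * t)

def gompertz(t, n_max, b, c):
    """Gompertz curve: the growth rate decays exponentially over time,
    giving the 'sub-exponential' flattening towards a ceiling n_max."""
    return n_max * math.exp(-b * math.exp(-c * t))

# Illustrative parameters only - not fitted to any real data
for day in (0, 7, 14, 28, 56):
    print(day, round(exponential(day, 10, 0.15)), round(gompertz(day, 3250, 6.0, 0.08)))
```

The exponential curve keeps climbing without limit, while the Gompertz curve bends away and flattens towards its ceiling – the behaviour described in the quote above.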
UnHerd reports that he takes specific issue with the Neil Ferguson paper that, along with some others, was hugely influential with the UK Government (amongst others) in taking drastic action, moving away from a ‘herd immunity’ approach to a lockdown approach to suppress infection transmission.
“In a footnote to a table it said, assuming exponential growth of 15% for six days. Now I had looked at China and had never seen exponential growth that wasn’t decaying rapidly.
“The explanation for this flattening that we are used to is that social distancing and lockdowns have slowed the curve, but he is unconvinced. As he put it to me, in the subsequent examples to China of South Korea, Iran and Italy, ‘the beginning of the epidemics showed a slowing down and it was very hard for me to believe that those three countries could practise social distancing as well as China.’ He believes that both some degree of prior immunity and large numbers of asymptomatic cases are important factors.
“He disagrees with Sir David Spiegelhalter’s calculations that the totem is around one additional year of excess deaths, while (by adjusting to match the effects seen on the quarantined Diamond Princess cruise ship, and also in Wuhan, China) he calculates that it is more like one month of excess death that is needed before the virus peters out.
“He believes the much-discussed R0 is a faulty number, as it is meaningless without the time infectious alongside.” I discussed R0 and its derivation in my article about the SIR model and R0.
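To illustrate Michael’s point in the terms of the simple SIR model I described in that article: R0 is the transmission rate multiplied by the mean infectious period, so the same R0 value can describe epidemics with very different time dynamics. A minimal sketch, with illustrative numbers only:

```python
def r0(beta, infectious_days):
    """R0 = transmission rate x mean infectious period: the number of
    new infections each case causes while infectious."""
    return beta * infectious_days

# Two epidemics with the same R0 = 2.5 but very different speeds
# (illustrative numbers, not fitted values):
fast = r0(beta=0.5, infectious_days=5)    # intense transmission, short period
slow = r0(beta=0.25, infectious_days=10)  # weaker transmission, longer period
print(fast, slow)  # 2.5 2.5
```

Quoting R0 alone hides whether we are in the “fast” or the “slow” case, which is why the time infectious matters alongside it.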
Interestingly, Prof. Alex de Visscher, whose original model I have been adapting for the UK, also calibrated his thinking, in part, by considering the effect of the Coronavirus on the captive, closed community on the Diamond Princess, as I reported in my Model Update on Coronavirus on May 8th.
The UnHerd article finishes with this quote: “I think this is another foul-up on the part of the baby boomers. I am a real baby boomer — I was born in 1947, I am almost 73 years old — but I think we’ve really screwed up. We’ve caused pollution, we’ve allowed the world’s population to increase threefold in my lifetime, we’ve caused the problems of global warming and now we’ve left your generation with a real mess in order to save a relatively small number of very old people.”
I suppose, as a direct contemporary, that I should apologise too.
There’s a lot more at the UnHerd site, but better to hear it directly from Michael in the video.
As an alumnus, I again had the opportunity today (with 3000 other people in over 70 countries) to attend the second Cambridge Conversations webinar, this time featuring Professor Sir David Spiegelhalter of Churchill College, Chair of the Winton Centre for Risk and Evidence Communication in the University of Cambridge, and Professor Mike Hulme, Professor of Human Geography and Fellow of Pembroke College, and a specialist in Climate Change.
The discussion, ‘COVID-19 behind the numbers – statistics, models and decision-making’, was moderated by Dr Alexandra Freeman, Executive Director at the Winton Centre for Risk and Evidence Communication.
The video of the 45 minute session will be available, and I will share it here in due course (it’s on a closed group at the moment, but will be on the Cambridge YouTube channel here in a few days, where the first Cambridge Conversation on Covid-19, from April, is currently available).
Of most interest to me, since I was interested in the modelling of the pandemic outbreak, was the first part of the scene-setting, by Professor Sir David Spiegelhalter, one of the world’s foremost biostatisticians, who unpicked the numbers surrounding COVID-19. He has been reported widely and recently regarding the interpretation of Covid-19 data.
He explored the reporting of cases and deaths; explained the bases on which predictions have been made; examined comparisons with the ‘normal’ risks faced by people; and investigated whether many deaths from COVID-19 could have been expected and have simply been brought forward.
He was joined by Professor Mike Hulme, whose expertise is in climate change, with particular interest in the role of model-based knowledge in strategic and policy decision-making relative to political and cultural values: a question of similar importance to COVID-19 as it is to Climate Change policies, his own area of study.
The first set of slides, by David Spiegelhalter on the modelling aspects and the numbers coming out of the pandemic, are here:
The second part of the scene-setting, by Professor Mike Hulme, was more about how model-based knowledge is used in decision-making and public communication around Covid-19, and the differences in wider public perceptions across countries and cultures.
Much of this part of the discussion was about the difference between the broad basis for decision making vs. the more narrow basis for any particular expert advice; and that decision makers need to take into account a far wider set of parameters than just one expert model, involving cultural, ethical and many other factors. This means that methods, conclusions and decisions don’t necessarily carry over from one country to another.
Questions and answers
There was a Q&A session after the scene-setting, moderated by Dr Alexandra Freeman, and, amazingly, a submitted question from “Brian originally of Trinity College” was chosen to be asked! My question was about how to understand and model the mutual feedback between periodic lockdown adjustments and the growth rate of the virus. It wasn’t answered very well, if at all, having been combined with someone else’s (reasonable) question on what data we need to take forward to help us with the pandemic, which wasn’t answered properly either.
I had the impression that Mike Hulme, in particular, was more concerned with getting his own message across, and that actually several other questions didn’t get a good answer either. Spiegelhalter, for his part, is well aware of his own fame/notoriety, and was quite amusing about it, but possibly at the expense of listening to the questions and answering them.
Both of them thought some of the other questions (e.g. one about “which modellers around the world are the best?”) had sought to draw out views about a “beauty contest” of people working in the field, which they (rightly) said wasn’t helpful, as initiatives and models in different countries, contexts, cultures were all partial, dealing with their own priorities. Hulme used the phrase “when science runs hot” a few times, in the context of all the work going on when the data was unreliable, causing its own issues.
Spiegelhalter had been (in his opinion mis-) quoted both by Boris Johnson AND the new leader of the opposition, Keir Starmer, regarding recent statements he had made about the difficulty of comparing data from different countries and cultures concerning Covid-19.
But as a statistician, he will be well aware of the phrase “lies, damn lies and statistics”, so I don’t have much sympathy for his ruefulness about having created issues for himself by being outspoken about such matters. His statements are delivered in quite an authoritative tone, and any nuances in his public pronouncements, I should think, might not be noticed.
I recommend watching the YouTube video of the presentations when available on the Cambridge YouTube channel here next week, particularly (from my own perspective) Spiegelhalter’s, which drew some good distinctions about how to read the data in this crisis, and how to think about the Coronavirus issues in different parts of the population.
He had a very good point about Population Fatality Rate (PFR) vs. Infection Fatality Rate (IFR), (the difference between the chance of catching AND dying from Covid-19 (PFR) vs. the chance of dying from it once you already have it (IFR)) and how these are conflated by the media (and others) when considering the differential effect of Covid-19 on different parts of the population. One is an overall probability, and the other is a conditional probability, and the inferences are quite different, as he exemplified and explained.
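A small worked example of that distinction (my own illustrative numbers, not Sir David’s figures): the PFR is the IFR multiplied by the chance of being infected in the first place, so the two can differ by orders of magnitude.

```python
def pfr(attack_rate, ifr):
    """Population fatality rate: chance of catching AND dying.
    attack_rate = P(infected); ifr = P(die | infected)."""
    return attack_rate * ifr

# Illustrative numbers only - not Sir David's figures:
ifr = 0.01          # assume 1% of those infected die
attack_rate = 0.05  # assume 5% of the population gets infected
print(f"IFR = {ifr:.1%}, PFR = {pfr(attack_rate, ifr):.3%}")
```

Conflating the two – quoting the conditional IFR as if it were the overall PFR, or vice versa – is exactly the media error he was describing.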
There were some quite startling and clear learnings in reading the data about the relative susceptibilities of young vs. old and men vs. women, and the importance of a more complete measure of death rates, as shown by charts of the overall Excess Deaths in the population, contrasted with narrower ways of measuring the impact of the pandemic, highlighting the wider issues we face.
Both presenters wanted the whole impact of Coronavirus to be considered, not just the specific deaths from Covid-19 itself – things such as the increase in deaths from other causes, owing to the tendency of people not to want to attend hospitals; mental health; the bringing forward of deaths that might have happened in the next influenza season, if not now; and a number of other impacts.
Spiegelhalter found the lack of testing in the UK difficult to comprehend, and felt that addressing testing going forward was on the critical path to any way out of the crisis (my words).
The other main message, from Hulme, was that Governmental decision-making should be broadly based, and not driven by any particular modelling group. He didn’t reference the phrase “science-led”, as has been used so often by Government and others dealing with Coronavirus, but I imagine that he thinks that the word “science” in that phrase should be much more broadly defined (again, my interpretation of his theme).
There has been some debate about the timing and effectiveness of the lockdown measures adopted by the UK in its response to the Coronavirus pandemic. I’m sceptical about the sense of trying to rewrite the past, but I was intrigued as to what my model’s findings might have been. We are where we are, and most energy should be focused on the here and now, and the future; but, no doubt, there are lessons to be learned eventually.
NB Government health warning – this is pure speculative modelling, and it is more about the sensitivities in the setting of modelling parameters, and not a statement of fact! Don’t quote me!
Some commentators and academics (Professor Rowland Kao at Edinburgh University tweeted this BBC coverage of his modelling work for Scotland) feel that the UK might well have applied the lockdown measures two weeks earlier, on the 9th March, as Italy did, instead of on the 23rd March. The UK Government states that our first cases were later than for other countries, and so too, therefore, was our lockdown. Time will tell how significant this is.
I was interested to see what my model (the Prof Alex de Visscher code, with my parameters, modifications and published data for the UK situation) might have made of this for the UK, and I have run the model for some options.
A major problem with running any model from 9th March for the UK is that the first UK deaths, specifically attributed to Covid-19, on the original basis, prior to the inclusion of Care Home figures, were on the 10th March, although, of course, there had been many more cases of infection reported, with people being admitted to hospital and ICU departments in the hundreds. I’m sure that, as discussed above, there will have been deaths associated with the pandemic, but not officially recorded as such.
My current model forecast
It’s important, first of all, to state my baseline, which is my model’s current forecasts for the pandemic in the UK on the basis of the March 23rd lockdown, which embraced working from home, social distancing, school, restaurant and pub closures and several other measures. UK lockdown did allow for excursions for exercise once a day (not allowed at that time in Spain and Italy, for example), and travel for essential medical supplies and food (possibly to help others with those aspects).
In that baseline work, current as of May 14th, these are the graphical representations of the outlook, which forecasts a little over 42,000 deaths by the late summer 2020, stabilising at about that number into the future, assuming no changes to the 84.1% lockdown intervention effectiveness, and no pharmaceutical intervention yet (either vaccine or radically more effective treatment of symptoms). The orange curve always represents the reported numbers.
My model projection is broadly in agreement with the following projection from the IHME, the Institute for Health Metrics and Evaluation, an independent global health research center at the University of Washington, which forecasts between 43,000 and 44,000 deaths in the UK by the summer.
The number of cases, below, is predicted at 2.8 million in the UK, assuming that only 8% of those with the virus are actually tested and diagnosed, a percentage that is informed by the ratio of deaths to real cases derived from other countries’ experience. The fit of the model, therefore, isn’t as good for cases, as the data is fuzzy. The 12.5 multiple on reported numbers mentioned on the chart takes account of that 8%.
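The arithmetic behind that multiple is simple: if only 8% of real cases are ever diagnosed, the reported numbers must be scaled up by 1/0.08 = 12.5 to estimate the true case count. A sketch, using a hypothetical reported figure of my own choosing:

```python
detection_fraction = 0.08             # assume only 8% of real cases get diagnosed
multiplier = 1 / detection_fraction   # 12.5

reported_cases = 224_000              # hypothetical reported figure, for illustration
estimated_real_cases = reported_cases * multiplier
print(multiplier, estimated_real_cases)  # 12.5 2800000.0
```
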
It is likely, therefore, that we are not yet aware of anything like the real number of cases. Even death data is now felt to have been understated, since deaths are currently attributed to Covid-19 only on the basis of confirmed test diagnoses, not on a simple mention of Covid-19 on death certificates. These latter figures are now being analysed by Government. My model will be updated (as it has been recently to include Care Homes) when those figures are adjusted, as they surely will be once the data is cleansed. The UK Office for National Statistics are working on this, and have already published some numbers (although with some lag, so they are not up to date).
The UK Government does now include Care Homes in the “all settings” figures it reports, as of nearly two weeks ago, and it has retroactively adjusted all historic reported numbers to take those into account, which my model does too; but a death certificate reference to Covid-19 is still not a factor in assigning deaths to Covid-19 in the official published UK daily data.
It is fair to say that policies on this vary hugely from country to country, and from region to region within countries, so it isn’t easy to see where to draw the line.
Options for parametric sensitivities for the March 9th lockdown scenario
It is hard to be sure what the effectiveness of lockdown measures might have been in early March. Some other countries, such as China, were seen to enforce very stringent measures at (and before) that time (having encountered the Coronavirus earlier) and in his own work, Alex de Visscher has used 90% effectiveness for China. This figure relates to the extent of the reduction in infection rate in the model, 90% effectiveness reducing the rate to 10% of its original value. See my blog article for a description of the model and its variables.
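In other words, the interv_success parameter simply scales the model’s infection rate downwards. A minimal sketch of that scaling (the base transmission rate here is a hypothetical number of my own, not the model’s actual setting):

```python
def post_lockdown_rate(base_rate, interv_success):
    """Transmission rate after interventions: 90% effectiveness
    leaves 10% of the original rate, and so on."""
    return base_rate * (1 - interv_success)

base_beta = 0.39  # hypothetical pre-lockdown transmission rate, per day
for eff in (0.90, 0.841, 0.80, 0.75, 0.70):
    print(f"{eff:.1%} effectiveness -> rate = {post_lockdown_rate(base_beta, eff):.4f}")
```
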
At the other end of the scale, the USA model was set at 71% initially, and Canada at 75%; Italy at 79.1% and Spain at 85% were some other choices. Lots of sensitivities were run around these settings, but this gives an idea of the possible range.
The best fit of my current model, as above, for the March 23rd lockdown is currently for an effectiveness of 84.1%, and it is continuing to match published data at that setting, and therefore remains the basis for my current March 23rd lockdown forecasting.
Without any changes to the underlying infection rates, I ran my 9th March UK lockdown model for three values of interv_success, the model variable for lockdown (intervention) effectiveness; for the current 84.1%, and then for 80% and 75%.
I should say that in the University of Edinburgh work mentioned above, which was for Scotland, information on commuting movements was input to their models, using Google mobility data, to try to get an understanding of how social restrictions affect spread of the virus. They found that “Not moving so far away from one’s home is one of the big impacts of what lockdown is doing”.
I haven’t been able to do such an activity analysis, but the % effectiveness in my model is a reflection of assumptions about that. My feeling is that in the UK, we were, and are, unlikely to be able to enforce lockdown as strongly as China did. In the UK, for the last 6 or 7 weeks, we have, as I mentioned, had more freedoms than, for example, Italy and Spain to go outside for exercise (running or cycling, say).
Our recent relatively high public compliance rate with UK Government directives for lockdown measures (working from home where possible, social distancing etc) might not have been as good earlier on, when the impact of the pandemic here in the UK wasn’t confronting people so directly, or to such an extent, as two weeks later, on the 23rd March, by when 359 people had died, and published case numbers were 6,650. These numbers compare with no deaths as of 9th March (on the original basis, without inclusion of Care Homes), and just 321 cases.
I should again emphasise that the reported case numbers might well be understated by a factor of 10 to 15 (12.5 is the figure Alex has worked with), judging by the experience in other countries further down the pandemic track.
Taking all this into account, my other options for interv_success are for lower % impact on the infection rate than works best currently, not higher – the current 84.1%, and then for 80% and 75% as mentioned above.
I present the outcomes in graphical form. The first sets of graphs, for lockdown starting on 9th March, are for these three options for intervention effectiveness.
84.1% effectiveness (as per my current, well-fitted model)
We see here that on the larger chart, presenting the situation up to May 13th, instead of the currently reported 33,186 deaths (the orange curve), the model would have forecast 537 deaths if lockdown had happened on March 9th, assuming 84.1% effectiveness, which I think would be too high for compliance at that time. For the forecast long term (top left), deaths would stabilise at just over 600 by the end of summer 2020, with total infections at a little over 40,000.
80% effectiveness
This time, on the larger chart, presenting the situation up to May 13th, instead of the current 33,186 deaths, the model would forecast 717 deaths if lockdown had happened on March 9th, assuming 80% effectiveness. The longer term outcome would be about 1,000 deaths with just under 70,000 cases, stabilising in the autumn of 2020.
75% effectiveness
Finally, on the larger chart, presenting the situation up to May 13th, instead of the current 33,186 deaths, the model would forecast 1,123 deaths by 13th May if lockdown had happened on March 9th, assuming 75% intervention effectiveness. The long term deaths would be nearly 6,000, but with still a few deaths per day even a year later, because of the continuing lower intervention effectiveness (assumed by the model), and cases would have reached a little under 400,000 in a year’s time, with negligible growth by then.
These results show a marked reduction in deaths forecast for May 13th by my model for the March 9th scenario lockdown, compared with the actual March 23rd lockdown results. This is only a model, and it will have deficiencies: a) there is less data available before March 9th (no deaths, for example) for calibrating and fitting the model, and b) there is, therefore, no firm basis for setting transmission rates at that time, or for testing them.
But it does show the high dependency the later results have on early data.
There is another feature in the numbers, however, that is more noticeable in the charts for the assumed 75% lockdown effectiveness. This is clearer when we look more closely at the long term outcome for deaths, starting with that 75% case, as above,
where we can see that even in April 2021 (in the absence of pharmaceutical measures, or any change to the intervention effectiveness) the deaths are still increasing (if there were no further increase, the log chart would be flat, i.e. horizontal, at that point). The 5,738 deaths at that time are not the maximum.
This outcome is even more stark if I choose 70% intervention effectiveness, and in this case I present both the linear y-axis and the log y-axis charts, since on the linear chart, the numbers are significant enough to be perceived visually. The orange curve is always the actual reported numbers, whatever the model scenario.
70% effectiveness analysis
The linear chart makes clear that by the end of April 2021, there continues to be a high rate of increase of deaths, and, equivalently, we can also see that the log chart is much further from flattening than in the 75% case, even after more than a year of lockdown, at this level (70%) of effectiveness. The modelled number of deaths by April 2021 is over 250,000, with 17 million cases.
Visually, the reported deaths look very unlikely to get anywhere near that, and in my base March 23rd lockdown model (which has a good fit with reported numbers at the moment, May 13th, the orange curve here), deaths are projected to flatten at 42,000, with cases at 2.8 million.
I totally accept that the model will have its deficiencies – the calibration phase to March 9th is short, and the infection rate parameter might not be realistic as a result.
But from a comparative model behaviour perspective, this is another example, in an exponential growth situation (as a pandemic is) that early numbers are not a good indicator of outcomes. Small early differences make very large differences down the track. It’s a non-linear relationship, with high sensitivity of the pandemic growth rate to the % effectiveness of intervention measures (most noticeably at the lower % effectiveness).
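A toy calculation makes the compounding visible: with a constant daily growth factor g, cases after n days scale as g to the power n, so a modest difference in g produces a very large difference downstream. These growth rates are illustrative only, not model output:

```python
def cases_after(days, daily_growth, start=100):
    """Cases after n days of constant daily growth factor g: start * g**n."""
    return start * daily_growth ** days

# Illustrative rates, not fitted values:
a = cases_after(60, 1.05)  # 5% daily growth
b = cases_after(60, 1.08)  # 8% daily growth
print(round(a), round(b), round(b / a, 1))
```

After two months, the 3-percentage-point difference in the daily rate has multiplied the outcome several times over, which is exactly the sensitivity described above.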
It does seem that if the effectiveness of interventions is not high enough, then while there can appear to be good early results (the number of deaths by May 13th 2020 is 1,980 even in the 70%, least effective model, compared with 540 for the 84.1% case above, starting March 9th), the pandemic eventually overcomes the measures.
By the end of 2020, in this 70% scenario, the modelled deaths are likely already to exceed the reported deaths, and would be growing quite fast by then, as can be seen from the charts.
This tells me that if pandemic intervention measures are to work effectively, and strategically (i.e. long term), they need not only to be early, but also at least at 80% effectiveness in terms of reducing the infection transmission rate for this Coronavirus.
Long term 80% effectiveness outlook
We can confirm this from the 80% effectiveness longer term outlook for the March 9th lockdown scenario (assuming no changes in the lockdown measures, and no available pharmaceutical measures).
At this level of intervention effectiveness (80%), the modelled deaths curve peaks at about 1000, reaching that point by autumn 2020 as stated in the earlier section, with about 67,000 cases by that time, stabilising at 68,000 before a year’s time, as we see in the chart below. Again, the orange line is the current status of the reported cases (x12.5 as before).
This article simply addresses the theoretical, earlier lockdown scenario that has been much discussed. The modelling I have done is probably not adequate in terms of absolute numbers, but it is clear, from comparisons of my scenarios, that on similar assumptions to the current modelling for the March 23rd lockdown initiation, the number of deaths at May 13th would have been less – probably far less, as asserted by the University of Edinburgh study for Scotland.
Whether those assumptions are valid (i.e. that the intervention effectiveness would have been the same) is questionable.
Furthermore, it requires only a 10 to 15 percentage point reduction in that effectiveness to lead to a worse outcome in the long term.
I see this as an indicator that, in the absence of pharmaceutical measures (a vaccine, ideally (of lasting effect), or medicines that handle symptoms effectively, and save lives) the intervention measures have to continue to be carefully monitored, including an understanding of which are the most effective at reducing infection transmission rates.
It may well be that as the researchers at Edinburgh stated, the “stay-at-home” policy is the most effective measure, and that trips away from home should continue to be minimised even into the long term, by working from home where possible, and otherwise travelling only for medical and food purchases.
This is a brief update to my UK model predictions in the light of a week’s published data regarding Covid-19 cases and deaths in all settings – hospitals, care homes and the community – rather than just hospitals and the community, as previously.
In order to get the best fit between the model and the published data, I have had to reduce the effectiveness of interventions (lockdown, social distancing, home working etc) from 85% last week (in my post immediately following the Government change of reporting basis) to 84.1% at present.
This reflects the fact that care homes, new to the numbers, seem to pull the critical R0 number upwards on average, and it might be that R0 is between 0.7 and 0.9, which is uncomfortably near to 1. R0 is already higher in hospitals than in the community, but the care home figures in the last week have increased it on average. See my post on the SIR model and the importance of R0 to review the meaning of R0.
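The mechanism can be sketched as a weighted average of per-setting R0 values by each setting’s share of active cases. All the numbers below are hypothetical, purely to illustrate how a higher care-home R0 pulls up the population average:

```python
def average_r0(settings):
    """Population-average R0: per-setting R0 weighted by each
    setting's share of active cases."""
    return sum(share * r for share, r in settings)

# Hypothetical shares and per-setting R0 values, for illustration only:
community_only = [(1.0, 0.7)]
with_care_homes = [(0.75, 0.7), (0.15, 1.3), (0.10, 1.1)]  # community, care homes, hospitals
print(average_r0(community_only), round(average_r0(with_care_homes), 2))  # 0.7 0.83
```
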
Predicted cases are now at 2.8 million (not reflecting the published data, but an estimate of the underlying real cases) with fatalities at 42,000.
Possible model upgrades
The Government have said that they are to sample people randomly in different settings (hospital, care homes and the community), and regionally, better to understand how the transmission rate, and the influence on the R0 reproductive number, differs in those settings, and also in different parts of the UK.
Ideally a model would forecast the pandemic growth on the basis of these individually, and then aggregate them, and I’m sure the Government advisers will be doing that. As for my model, I am adjusting overall parameters for the whole population on an average basis at this point.
Another model upgrade which has already been made by academics at Imperial College and at Harvard is to explore the cyclical behaviour of partial relaxations of the different lockdown components, to model the response of the pandemic to these (a probable increase in growth to some extent) and then a re-tightening of lockdown measures to cope with that, followed by another fall in transmission rates; and then repeating this loop into 2021 and 2022, showing a cyclical behaviour of the pandemic (excluding any pharmaceutical (e.g. vaccine and medicinal) measures). I covered this in my previous article on exit strategy.
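That relax/re-tighten dynamic can be sketched with a toy susceptible-infected loop in which lockdown effectiveness toggles as daily infections cross thresholds. This is purely illustrative, with my own assumed thresholds and effectiveness levels, and emphatically not the Imperial or Harvard models:

```python
def simulate(days=730, beta=0.4, gamma=0.1, pop=66_000_000):
    """Toy susceptible-infected loop with threshold-triggered lockdowns.
    Returns the number of relax/re-tighten cycles over the period."""
    s, i = pop - 100.0, 100.0
    locked, waves = True, 0
    for _ in range(days):
        eff = 0.841 if locked else 0.3           # assumed effectiveness levels
        new = beta * (1 - eff) * s * i / pop     # new infections today
        s, i = s - new, i + new - gamma * i
        if locked and new < 500:                 # ease when daily cases are low
            locked = False
        elif not locked and new > 5000:          # re-tighten when they surge
            locked, waves = True, waves + 1
    return waves

print("relax/re-tighten cycles over two years:", simulate())
```

Each easing allows growth, each surge forces a re-tightening, and the loop repeats – the cyclical behaviour described above.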
This explains Government reluctance to promise any significant easing of lockdown in any specific timescales.
My UK model (based on the work of Prof. Alex de Visscher at Concordia University in Montreal for other countries) is calibrated on the most accurate published data up to the lockdown date, March 23rd, which is the data on daily deaths in the UK.
Once that fit of the model to the known data has been achieved, by adjusting the assumed transmission rates, the data for deaths after lockdown – the intervention – is matched by adjusting parameters reflecting the assumed effectiveness of the intervention measures.
Data on cases is not so accurate by a long way, and examples from “captive” communities indicate that deaths vs. cases run at about 1.5% (e.g. the Diamond Princess cruise ship data).
The Italy experience also plays into this relationship between deaths and actual (as opposed to published) case numbers – it is thought that a) only a single-figure percentage of people ever get tested (8% was Alex’s figure), and b) in Italy, the death rate was probably higher than 1.5%, because their health service couldn’t cope for a while, with insufficient ICU provision.
In the model, allowing for that 8%, a factor of 12.5 is applied to public total and active cases data, to reflect the likely under-reporting of case data, since there are relatively few tests.
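The arithmetic behind that factor of 12.5 is simply the reciprocal of the testing fraction; a minimal sketch (the 8% testing fraction is Alex's estimate, as above):

```python
# Reported case counts understate true cases by the reciprocal of the
# fraction of cases that ever get tested.
def underreporting_factor(tested_fraction):
    """Multiplier scaling reported cases up to estimated true cases."""
    return 1.0 / tested_fraction

# Alex's estimate (from the text): only about 8% of cases are ever tested
factor = underreporting_factor(0.08)  # = 12.5
```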
In the model, once the fit to known data (particularly deaths to date) is made as close as possible, then the model is run over whatever timescale is desired, to look at its predictions for cases and deaths – at present a short-term forecast to June 2020, and a longer term outlook well into 2021, by when outcomes in the model have stabilised.
Model charts for deaths
The fit of the model here can be managed well, post lockdown, by adjusting the percentage effectiveness of the intervention measures, and this is currently set at 84.1%. This model predicts fatalities in the UK at 42,000. They are currently reported (8th May 2020) at 31,241.
Model charts for cases
As we can see here, the fit for cases isn’t as good, but the uncertainty in case number reporting accuracy, owing to the low level of testing, and the variable experience from other countries such as Italy, means that this is an innately less reliable basis for forecasting. The model prediction for the outcome of UK case numbers is 2.8 million.
If testing, tracking and tracing is launched effectively in the UK, then this would enable a better basis for predictions for case numbers than we currently have.
I’m certainly not at a concluding stage yet. A more complex model is probably necessary to predict the situation, once variations to the current lockdown measures begin to happen, likely over the coming month or two in the first instance.
Academics from many institutions are involved, and I will take a look at the models being released to see if they address the two points I mentioned here: the variability of R0 across settings and geography, and the cyclical behaviour of the pandemic in response to lockdown variations.
At the least, perhaps, my current model might be enhanced to allow a time-dependent interv_success variable, instead of a constant lockdown effectiveness representation.
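A minimal way to express that enhancement might look like the following; the dates and percentages here are purely illustrative assumptions, not fitted values (my model itself is in MatLab/Octave, so this Python is only a sketch of the idea):

```python
def interv_success(day, lockdown_day=51, lockdown_effect=0.85,
                   relax_day=120, relaxed_effect=0.70):
    """Time-dependent intervention effectiveness: zero before lockdown,
    constant during full lockdown, lower after a partial relaxation.
    All dates and percentages here are illustrative, not fitted."""
    if day < lockdown_day:
        return 0.0
    if day < relax_day:
        return lockdown_effect
    return relaxed_effect

# The effective infection rate on a given day would then be
# k11 * (1 - interv_success(day)) instead of a single constant reduction.
```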
The UK Government yesterday changed the reporting basis for Coronavirus numbers, retrospectively (since 6th March 2020) adding in deaths in the Care Home and other settings, and also modifying the “Active Cases” to match, and so I have adjusted my model to match.
This historic information is more easily found on the Worldometer site; apart from current day numbers, it is harder to find the tabular data on the UK.gov site, and I guess Worldometers have a reliable web services feed from most national reporting web pages.
The increase in daily and cumulative deaths over the period contrasts with a slight reduction in daily active case numbers over the period.
Understanding the variations in epidemic parameters
With more resources, it would make sense to model different settings separately, and then combine them. If (as seems to be the case) the reproduction number R0 is below 1 for the community at large (although varying by location, environment etc.), but higher in hospitals, and higher still in Care Homes, then these scenarios would have different transmission rates in the model, different effectiveness of counter-measures, and differences in several other parameters of the model(s). Today the CSA (Sir Patrick Vallance) stated that indeed, there is to be a randomised survey of people in different places (geographically) and situations (travel, work etc.) to work out where the R-value is in different parts of the population.
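As a toy illustration of why a single average R can mask higher-R settings, here is a crude population-weighted average; the setting fractions and R values are made up purely for illustration (a rigorous combination would use a next-generation matrix, not a simple weighted mean):

```python
def population_weighted_R(settings):
    """Crude population-weighted average of setting-specific R values.
    `settings` is a list of (population_fraction, R) pairs. This is only
    to illustrate how an average can hide high-R settings; a real combined
    R would come from a next-generation matrix."""
    return sum(fraction * R for fraction, R in settings)

# Illustrative numbers only: community below 1, hospitals and Care Homes above
settings = [(0.95, 0.7), (0.03, 1.2), (0.02, 1.5)]
R_average = population_weighted_R(settings)  # below 1 despite two settings above 1
```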
But I have continued with the means at my disposal (the excellent basis for modelling in Alex de Visscher's paper that I have been using for some time).
Ultimately, as I said in my article at https://www.briansutton.uk/?p=1595, a multi-phase model will be needed (as per Imperial College and Harvard models illustrated here:-
and I am sure that it is the Imperial College version of this (by Neil Ferguson and his team) that will be to the forefront in that advice. The models look at variations in policy regarding different aspects of the lockdown interventions, and the response of the epidemic to them. This leads to the cyclicity illustrated above.
In my model, the rate of deaths is the most accurately available data, (even though the basis for reporting it has just changed) and the model fit is based on that. I have incorporated that reporting update into the model.
Up to lockdown (March 23rd in the UK, day 51), an infection transmission rate k11 (rate of infection of previously uninfected people by those in the infected compartment) and a correction factor are used to get the fit of the model as close as possible prior to the intervention date. For example, k11 can be adjusted as part of a combination of infection rates: k12 from sick (S) people, k13 from seriously sick (SS) people and k14 from recovering (B, better) people to the uninfected community (U). All of those sub-rates could be adjusted in the model, and taken together they define the overall rate of transition of people from Uninfected to Infected.
After lockdown, the various interventions – social distancing, school and large event closures, restaurant and pub closures and all the rest – are represented by an intervention effectiveness percentage, and this is modified (as an average across all those settings I mentioned before) to get the fit of the model after the lockdown measures as close as possible to the reported data, up to the current date.
I had been using an intervention effectiveness of 90% latterly, as the UK community response to the Government’s advice has been pretty good.
But with the UK Government move to include data from other settings (particularly the Care Home setting) I have had to reduce that overall percentage to 85% (having modelled several options from 80% upwards) to match the increased reported historic death rate.
It is, of course, more realistic to include all settings in the reported numbers, and in fact my model was predicting on that basis at the start. Now that we have a few more weeks of data, and all of the reported data rather than just some of it, I am more confident that my original forecast of 39,000 deaths in the UK (for this single-phase outlook) is currently a better estimate than the update I made a week or so ago (with 90% intervention effectiveness) to 29,000 deaths, in the Model Refinement article referred to above, when I was trying to fit just hospital deaths (having no other reference point at that time).
Here are the charts for 85% intervention effectiveness; two for the long term outlook, into 2021, and two up until today’s date (with yesterday’s data):
Another output would be for UK cases, and I’ll just summarise with these charts for all cases up until June 2020 (where the modelled case numbers just begin to level off in the model):-
As we can see, the fit here isn’t as good, but this also reflects the fact that the data is less certain than for deaths, and is collected in many different ways across the UK, in the four home countries, and in the conurbations, counties and councils that input to the figures. I will probably have to adjust the model again within a few days, but the outlook, long term, of the model is for 2.6 million cases of all types. We’ll see…
Outlook beyond the Lockdown – again
I’m modest about my forecasts, but the methodology shows me the kind of advice the Government will be getting. The behavioural “science” part of the advice (not in the model) taking the public “tiredness” into account, was the reason for starting partial lockdown later, wasn’t it?
It would be more of the same if we pause the wrong aspects of lockdown too early for these reasons. Somehow the public have to “get” the rate of infection point into their heads, and that you can be infecting people before you have symptoms yourself. The presentation of the R number in today's Government update might help that awareness. My article on R0 refers.
Neil Ferguson of Imperial College was publishing papers at least as far back as 2006 on the mitigation of flu epidemics by such lockdown means, modelling very similar non-pharmaceutical methods of controlling infection rates – social distancing, school closures, no public meetings and all the rest. Here is the 2006 paper, just one of 188 publications over the years by Ferguson and his team. https://www.nature.com/articles/nature04795
All countries would have been aware of this from the thinking around MERS, SARS and other outbreaks. We have a LOT of prepared models to fall back on.
As other commentators have said, Neil Ferguson has HUGE influence with the UK Government. I’m not sure how quickly UK scientists themselves were off the mark (as well as Government). We have moved from “herd immunity” and “flattening the curve” as mantras, to controlling the rate of infection by the measures we currently have in place, the type of lockdown that other countries were already using (in Europe, Italy did that two weeks before we did, although Government is saying that we did it earlier in the life of the epidemic here in the UK).
But what is clear from the public pronouncements is that Governments are now VERY aware of the issue of further peaks in the epidemic, and I would be very surprised to see rapid or significant change in the lockdown (it already allows some freedoms here in the UK, not there in some other countries, for example exercise outings once a day). I wouldn’t welcome being even more socially distanced than others, as a fit 70+ year-old person, through the policy of shielding, but if it has to be done, so be it.
In my post a few days ago at https://www.briansutton.uk/?p=1536, I mentioned that the webinar I reported there was to be released on YouTube this week, and it is now available.
It is VERY much worth 40 minutes of your time to take a look.
The potential cyclical behaviour described there is one that, as we see, the Imperial group under Neil Ferguson have been looking at; and the Harvard outlook seems similar, well into 2021, and as far as 2022 in the Harvard forecasting period.
It seems likely, (as Imperial have been advising the UK Government) from the recent daily updates, that Government is preparing us for this, and have been setting the conversation about any reduction in the lockdown clearly in the context of avoiding a further peak (or more peaks) in infection rates.
Alex de Visscher's model, which I have been working with (I described it at https://www.briansutton.uk/?p=1455), and other similar “single-phase” models, would need some kind of iterative loop to model such a situation, something entirely feasible, although requiring many assumptions about re-infection rates, the nature and timings of any interventions Government might choose to relax and/or add, and so on. It's a much more uncertain modelling task than even the work I've been doing so far.
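That iterative loop could be sketched like this; every number here is a placeholder, and a crude geometric growth factor stands in for the full compartment model, purely to show the repeated-peak mechanism:

```python
def cyclical_lockdown(days=600, cases=1000.0,
                      daily_growth_open=1.05, daily_growth_locked=0.97,
                      lock_above=50_000, release_below=5_000):
    """Crude sketch of the lockdown/relaxation loop: impose lockdown when
    active cases pass a threshold, release it when they fall back below a
    lower one. A geometric daily growth factor stands in for the full
    compartment model; all numbers are placeholders, not fitted values."""
    locked = False
    history = []
    for _ in range(days):
        if not locked and cases > lock_above:
            locked = True           # re-tighten measures
        elif locked and cases < release_below:
            locked = False          # partial relaxation
        cases *= daily_growth_locked if locked else daily_growth_open
        history.append(locked)
    return history

# Count lockdown onsets: repeated cycles appear, as in the Imperial/Harvard charts
history = cyclical_lockdown()
onsets = sum(1 for prev, cur in zip([False] + history, history) if cur and not prev)
```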
Inasmuch as I have been looking at the fit of my current UK model to the daily data (see https://www.briansutton.uk/?p=1571), it was clear that the model was approaching a critical period over the next couple of weeks: UK death data, and to some extent the daily number of cases was levelling out, such that I could see that my model (predicting c. 39,000 deaths) was ahead of what might be likely to happen.
The modelled pandemic growth rates depend on many data, and two parameters – the time it takes for a person to infect another (i.e. the initial rate of infection per day, k11=0.39 in the current model), and also the intervention effectiveness parameter (how well the social distancing is working at reducing that infection rate per day), interv_success, set at 85% in the current model – are pertinent to the shape of the growth from the intervention date.
Since the lockdown measures are critical to that infection rate outcome as we go forward, yesterday I decided to run the model with interv_success set at 90%, and indeed it did bring down the predicted death outcome to c. 29,000. Here are those new charts, one with linear axes, and the other with logarithmic (base 10) y-axis:
These charts match the reported deaths data so far a little better; but for case numbers (a harder and more uncertain number to measure, and to model) with 90% intervention effectiveness, the reported data begins to exceed the model prediction, as we see below. This effect was enhanced by the weekend's lower UK reported daily figures (April 18th-20th), but the April 21st figure showed another increase. It's probably still a little too early to be axiomatic about this parameter, and the later section on model sensitivity explains why.
The two original charts for cases, with the 85% effectiveness, had looked likely to be a better match, going forward, as we seem to be entering a levelling-off period:
We can see that the model is quite sensitive to this assumption of intervention effectiveness. All data assumptions at an early stage in the exponential growth of pandemics are similarly sensitive, owing to the way that day-on-day growth rates work. The timing of the start of interventions (March 23rd in the case of the UK) and also, as we see below, the % effectiveness assumption both have a big effect.
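The sensitivity to timing follows directly from exponential growth: with a 4-day doubling time, each week of delay before intervention multiplies the case base by about 3.4. A small sketch of that arithmetic:

```python
def growth_multiplier(delay_days, doubling_time=4.0):
    """Factor by which case numbers grow during a delay before intervention,
    assuming exponential growth with the given doubling time."""
    return 2 ** (delay_days / doubling_time)

one_week = growth_multiplier(7)   # roughly 3.4x more cases per week of delay
```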
This highlights the need to be cautious about adjusting lockdown measures (either in real life or in the models), until we can be sure what the likely impact of the “intervention effectiveness” percentage will be, and how this affects the outlook. Here we see two base-case sensitivity charts from Alex’s original paper at https://arxiv.org/pdf/2003.08824.pdf
These are generic logarithmic charts from his base model. On the left, we see the number of deaths after 60 days (bright red), 150 days (dark red) and 300 days (black) versus % effectiveness of intervention, for the base case of the (then) typical (outside China) doubling time of 4 days, with intervention on day 30 (in the UK the lockdown was on Day 51, March 23rd).
On the right, we see the number of deaths after 60 days (bright red), 150 days (dark red) and 300 days (black) versus the time of intervention, for the base case, doubling time 4 days, and intervention effectiveness 70 %.
Since the UK intervention on March 23rd, I feel that the intervention effectiveness has been quite good overall, so 85% and 90% don’t seem unreasonable options for the UK model.
For our UK intervention timing, the jury is out as to whether this was too late, or if an earlier intervention might have seen poorer compliance (effectiveness). The Government made the case (based on behavioural science, I believe) that the UK public would tire of lockdown, and therefore wanted to save lockdown for when they felt it would really be needed, to have its maximum effect nearer to when any peak in cases might have swamped the NHS.
The point in this section is to illustrate the sensitivity of outcomes to earlier choices of actions, modelled by such mathematical models; such models being the basis of some of the scientific advice to the UK Government (and other Governments).
Alex had run his generic model (population 100m, with 100 infected, 10 sick and 1 seriously sick on day 1) for a couple of scenarios, in slow, medium and fast growth modes: one, that represented a “herd immunity approach” (which turns out not to be effective unless there is already a pharmaceutical treatment to reduce the number of infective people); and two, a “flattening the curve” approach, which modelled reducing the infectivity, but NOT controlling the disease (i.e. R0 still well above 1), which delays the peak, but still swamps the healthcare system, and allows very large numbers (about 2/3 of the base model numbers) of deaths.
This must have been the kind of advice the UK Government were getting in mid-March, because these approaches were demoted as the principal mechanisms in the UK (no more talk of herd immunity), in favour of the lockdown intervention on March 23rd.
What next after lockdown?
The following charts (from Alex’s paper) show the outcomes, in this generic case, for an intervention (in the form of a lockdown) around day 30 of the epidemic.
Starting from the base case, it is assumed that drastic social distancing measures are taken on day 30 that reduce R0 to below 1. It is assumed that the value of k11 (the starting infectivity) is reduced by 70 % (i.e., from 0.25 per day to 0.075 per day, equivalent to R0 decreasing from 2.82 to 0.85).
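The arithmetic there is just a proportional scaling of the infection rate (and hence of R0) by one minus the intervention effectiveness:

```python
# Intervention scales the base infection rate, and hence R0, by (1 - effectiveness)
def apply_intervention(rate, effectiveness):
    """Reduced rate after an intervention of the given fractional effectiveness."""
    return rate * (1.0 - effectiveness)

k11_after = apply_intervention(0.25, 0.70)  # 0.075 per day, as in the paper
R0_after = apply_intervention(2.82, 0.70)   # about 0.85, below the critical value 1
```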
On the left chart, we see: Uninfected (black), all Infected (blue) and Deceased (red) people versus time. Base case, doubling time 4 days, intervention on day 30 with 70 % effectiveness.
On the right chart, we see Incubating (blue), Sick (yellow), Seriously Sick (brown), Recovering (green) and Deceased (red) people versus time. Base case, doubling time 4 days, intervention on day 30 with 70 % effectiveness.
Conclusions for lockdown
What is clear from these charts is that the decline of an epidemic is MUCH slower than its growth.
This has important repercussions for any public health policy aiming to save lives. Even seven months into this generic modelled intervention, the number of infected is comparable to the number of infected two and a half weeks before the intervention.
On the basis of these simulations, terminating the intervention would immediately relaunch the epidemic. The intervention would need to be maintained until the population can be vaccinated on a large scale.
This difficulty is at the heart of the debate right now about how and when UK (or any) lockdown might be eased.
This debate is also conducted in the light of the Imperial and Harvard modelling concerning the cyclical behaviour of the epidemic in response to periodic and partial relaxations of the lockdown.
This is where the two parts of the discussion in this blog post come together.
Having explored Prof. Alex de Visscher's published MatLab code for a week or two now, with UK data, even I am surprised at how well the model is matching up to published UK data so far (on April 18th 2020).
I'll just present the charts produced by my version of the model for now. One chart is with linear scales, and the other with a log y-axis to base 10 (each main gridline is 10x the previous one):
We can see that the upcoming acid test will be as the case numbers (and associated death rates) begin to fall, thanks to the social distancing, working from home and other measures people are being asked to take (successfully, in general, at the moment).
The new consideration (new in public, at least) is the question of whether there will be a vaccine, how well it might work, for how long immunity might last, and whether there will be re-infection, and at what rate. If there is no pharmaceutical resolution, or only a partial one, these single phase infection models will need to develop, and, as we saw in my last post at The potential cyclical future pandemic, a cyclical repetition of a lockdown/epidemic loop might look the most likely outcome for a while, until the vaccine point is resolved.
Meanwhile, this single phase model flattens out at a little over 39,000 deaths. I hope for the sake of everyone that the model turns out to be pessimistic.
As an alumnus, I had the opportunity today (with 2000 other people in over 60 countries) to attend a Cambridge University webinar on the current pandemic crisis. It was moderated by Chris Smith, a consultant medical virologist and lecturer at Cambridge (and a presenter on https://www.thenakedscientists.com/ where more about this event will be found).
The experts presenting were Dr Nick Matheson, a Principal Investigator in Therapeutic Immunology, and Honorary Consultant at Addenbrookes Hospital, and Prof. Ken Smith, Professor in Medicine at Cambridge, and Director of Studies for Clinical Medicine at Pembroke College.
The video of the 45 minute session is available, and I will share it here in due course (it’s only on a closed group at the moment, but will be on the Cambridge YouTube channel https://www.youtube.com/channel/UCfcLirZA16uMdL1sMsfNxng next week) but of most interest to me, since I was interested in the modelling of the pandemic outbreak, was the following slide put up by Nick Matheson, depicting research by Neil Ferguson’s group at Imperial College, and also by the research group at Harvard University School of Public Health. Here it is:
It seems to me that as the Neil Ferguson group is involved with this (and the Harvard slide seems to corroborate something similar) it is likely that the UK Government is receiving the same material.
I am generally in support of not forecasting or detailing any imminent reduction in lockdown measures, and I can see from this why the Government (led by the science) might be pretty reluctant too.
You will know from my previous posts on this topic why the R0 number is so important (the Reproductive Number, representing the number of people one person might infect at a particular time). Sir Patrick Vallance (CSA) has mentioned several times that there is evidence that R0 is now below 1 (on the 16th April he actually said it was between 0.5 and 1) for the general population (although probably not so in care homes and hospitals).
He and Prof. Chris Whitty (CMO) (an epidemiologist) have also been saying (and so have Government, of course, on their advice) that they don’t want to reduce restrictions yet, in order to avoid a second peak of cases.
But from the Imperial College and Harvard research (mathematical modelling, presumably, on the basis of “non-pharmaceutical” likely actions that might be taken from time to time to control the outbreak) it would seem that we are in for multiple peaks (although lower than the first we have experienced) going well into 2021, and into 2022 in the case of the Harvard chart.
I was one of a very large number of people putting in a written question during the webinar, and mine was regarding the assumptions here (in the modelling) about the likelihood (or otherwise) of a vaccine on the one hand, and repeat infections on the other.
It might be that any vaccine might NOT confer long lasting immunity. It might even be that, like Dengue Fever, there will be NO vaccine (this was stated verbally in the Q&A by Prof. Ken Smith).
All very sobering. And I haven’t been drinking. Here is the full slide set:
In my research on appropriate (available and feasible (for me)) modelling tools for epidemics, I discovered this paper by Prof. Alex de Visscher (to be called Alex hereafter). He has been incredibly helpful.
Thanks to Alex including the MatLab code (a maths package I have used before) in the paper, and also a detailed description of how the model works, it qualifies as both available and feasible. There is also a GNU (public and free) version of MatLab (called Octave), mostly equivalent in functionality for these purposes, which makes it practical to use the mathematical tools.
In my communications with Alex, he has previously helped me with working out the relationship between the R0 Reproductive Number (now being quoted every time on Government presentations) and the doubling period for the case number of disease outbreak. I published my work on this at:
since mathematical expressions are too cumbersome here on WordPress.
I would still say the Government is being a little coy about these numbers, saying that R0 is below 1 (a crucial dividing point between epidemic increase at R0>1 and epidemic slow down and dying out for R0<1). They now say (April 14th) it’s less than one outside hospitals and care homes, but not inside these places, as if (as Alex says) Covid-19 is not lethal, except for those people who die from it.
At least the Government have now stopped using graphs with the misleading appearance of y-axis log charts.
The following charts (unlike the original Government ones) make it pretty clear what kind of chart is being shown (log or linear), and also illustrate a) how different an impression might be given to a member of the public looking at a TV screen if the labelling isn't there or is obscurely referenced, and b) how the log charts to some extent mask the perception of steep growth in the numbers up until late March.
This chart from 14th April shows the Government doing a little better over recent days in presenting a chart that visually represents the situation more clearly. Maybe the lower growth makes them more comfortable showing the data on linear axes.
Alex de Visscher model description
I'll try to summarise Alex's approach in the model I have begun to use to work with the UK numbers. (Alex had focused on other countries: the US and Canada nearer to home, plus China, Italy, Spain and Iran, with more developed outbreak numbers since the epidemic took hold.)
I have previously described the SIR (Susceptible, Infected and Recovered) 3-compartment model on this blog at https://www.briansutton.uk/?p=1296, but in this new model we have as many as 7 compartments: Uninfected (U), Infected (I), Sick (S), Seriously Sick (SS), Deceased (D), Better but not recovered (B) and Recovered (R). They are represented by this diagram:
Firstly, an overall infection rate r1 (shown in the diagram above) can be calculated as a weighted sum of four sub-rates:
r1 = (k11I + k12S + k13SS + k14B)U/P
where the infection rates k11, k12, k13 and k14 are set separately for transmission from the I, S, SS and B compartments respectively to healthy people, with the overall rate proportional to the fraction of uninfected people in the total population, U/P.
All other transitions between compartments are assumed to be first order processes, each with its own rate constant, with time t measured in days:
dU/dt = -r1 is the rate of movement from Uninfected to Infected; and similarly, with inflows added and outflows subtracted, for the other compartments:
Transition rates between compartments:
r1 = (k11I + k12S + k13SS + k14B)U/P, for U to I transitions
r2 = k2I, for I to S transitions
r3 = k3S, for S to SS transitions
r4 = k4SS, for SS to D transitions
r5 = k5S, for S to B transitions
r6 = k6SS, for SS to B transitions
r7 = k7B, for B to R transitions
giving the compartment balances:
dU/dt = -r1
dI/dt = r1 - r2
dS/dt = r2 - r3 - r5
dSS/dt = r3 - r4 - r6
dB/dt = r5 + r6 - r7
dD/dt = r4
dR/dt = r7
Table 1: The possible transitions and their rates between compartments
Some simplifying assumptions are made: k12 = k11/2, k13 = k11/3 and k14 = k11/4; in other words, scaling the infection rates to I from the other compartments S, SS and B, respectively, to the principal rate from U to I. (Subsequently k14 was set to zero, i.e. 0*k11/4, to compensate for long contagious times once the SS median had been changed to 10 days from 3.5 days. See below.)
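Putting the compartments, rates and scaling assumptions above together, here is a minimal numerical sketch (in Python, rather than the MatLab of the paper). The k2 incubation-stage rate is my own assumption, since its value isn't quoted here, and crude Euler stepping stands in for MatLab's ODE45:

```python
from math import log

def derivatives(y, k):
    """Right-hand sides of the 7-compartment model described above."""
    U, I, S, SS, B, D, R = y
    P = U + I + S + SS + B + D + R
    r1 = (k['k11']*I + k['k12']*S + k['k13']*SS + k['k14']*B) * U / P
    r2 = k['k2'] * I      # I -> S
    r3 = k['k3'] * S      # S -> SS
    r4 = k['k4'] * SS     # SS -> D
    r5 = k['k5'] * S      # S -> B
    r6 = k['k6'] * SS     # SS -> B
    r7 = k['k7'] * B      # B -> R
    return (-r1, r1 - r2, r2 - r3 - r5, r3 - r4 - r6,
            r5 + r6 - r7, r4, r7)

def run_model(days=300, dt=0.05, k11=0.25):
    k = dict(k11=k11, k12=k11/2, k13=k11/3, k14=k11/4,  # scaling assumptions
             k2=log(2)/5,                  # assumed incubation rate (not quoted)
             k5=log(2)/3.5, k6=log(2)/10, k7=log(2)/10)
    k['k3'] = k['k5'] * 0.1 / 0.9          # 10% of sick become seriously sick
    k['k4'] = k['k6'] * 0.15 / 0.85        # 15% of hospitalised do not survive
    # Alex's generic starting point: 100m people, 100 I, 10 S, 1 SS on day 1
    y = [100e6 - 111, 100.0, 10.0, 1.0, 0.0, 0.0, 0.0]
    for _ in range(int(days / dt)):        # crude Euler stepping (ODE45 in the paper)
        d = derivatives(y, k)
        y = [yi + dt * di for yi, di in zip(y, d)]
    return y  # final U, I, S, SS, B, D, R
```

A useful sanity check on any implementation is that the seven compartments always sum to the starting population, since every outflow from one compartment is an inflow to another.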
So we can recognise that the bones of the previous SIR model are here, but there is compartment refinement to allow an individual’s status to be more precisely defined, along with the related transitions.
The remaining parameters that can be set in the model are:
The ratio k3/k5 is based on the assumption that 10% of the sick (S) become seriously sick (SS), whereas 90% get better without developing serious symptoms; less than the 80:20 proportion observed normally, because many with mild symptoms go unreported.
The k4/k6 ratio is set assuming a 15% non-survival rate for hospitalised cases.
k5 and k7 were set in the original model assuming a median of 3.5 days in each of the S and B stages before moving to the Recovered (R) stage (i.e. a median of 7 days to recover from mild symptoms). k7 has now been set to a median of 10 days (implying a 13.5 day median for recovery from mild symptoms), and to compensate for the resulting long contagious times, k14 = 0*k11/4 has been chosen, as stated above.
k6 is based on an SS stage patient remaining in that stage for 10 days before transitioning to B. Because the other pathway is to the Deceased D stage, the median is actually less, 8.5 days. This would be consistent with a mean duration of 12.26 days for getting better (B), which has been observed clinically.
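Each of those rate constants follows from its median residence time by first-order kinetics, k = ln 2 / t_median; and the quoted 12.26-day mean is just the 8.5-day median divided by ln 2:

```python
from math import log

def rate_from_median(median_days):
    """First-order rate constant giving the stated median residence time."""
    return log(2) / median_days

k5 = rate_from_median(3.5)     # S -> B: 3.5-day median in the S stage
k6 = rate_from_median(10.0)    # SS stage: 10 days before transitioning
k7 = rate_from_median(10.0)    # B -> R: updated 10-day median
mean_B = 8.5 / log(2)          # about 12.26 days, the clinically observed mean
```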
Implementation of the model and further parameter setting
I have been operating the model with some UK data using Octave, but Alex was using the equivalent (and original licensed software) MatLab. He used the Ordinary Differential Equation Solver module in that software, ODE45, which I have used before for solving the astronomical Restricted Three Body Problem, so I know it quite well.
Just as the various published graphs and data from Governments, Johns Hopkins University and Worldometer start the simulations from a non-zero case baseline (since the lead-in time is quite long), so does this model. The initial parameters for the UK model so far are chosen as:
Thus the doubling time I called TD there is given by the equation:
TD = loge2 / loge(C30/C29)
where Cd is the reported number of cases on a given day d. Known cases are assumed to be 5% of infected cases I, 1/3 of sick cases S, 90% of seriously sick SS, 12% of recovering R and 90% of deceased cases D (not all are known but we assume, for example, that reporting is more accurate for SS and D). The calculated doubling time in the model is not sensitive to the choice of these fractions.
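As a quick worked example of that relation (the case counts below are invented purely to exercise the formula):

```python
from math import log

def doubling_time(cases_today, cases_yesterday):
    """TD = ln 2 / ln(C_d / C_(d-1)): doubling time from one day's growth ratio."""
    return log(2) / log(cases_today / cases_yesterday)

TD = doubling_time(2000, 1000)   # cases doubled in a single day, so TD = 1.0
```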
The critical factor R0, the Reproduction Number, can be calculated by a separate program module, for one infected patient, where the number of newly infected starting with that case can be calculated over time.
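One way to approximate what such a module computes, without re-running the full simulation, is a next-generation style sum: the expected time one patient spends in each contagious compartment, weighted by that compartment's infection rate. This is my own sketch, not Alex's module; it re-uses the scaling assumptions above, and the k2 value is again my assumption:

```python
from math import log

def R0_estimate(k11=0.25, k2=log(2)/5, k5=log(2)/3.5,
                k6=log(2)/10, k7=log(2)/10):
    """Approximate R0 as expected residence time in each contagious stage,
    weighted by its infection rate. An approximation only; k2 is assumed."""
    k12, k13, k14 = k11/2, k11/3, 0.0      # k14 set to zero, as in the text
    k3 = k5 * 0.1 / 0.9                    # 10% of sick become seriously sick
    k4 = k6 * 0.15 / 0.85                  # 15% of hospitalised do not survive
    t_I = 1 / k2                           # expected days in I
    t_S = 1 / (k3 + k5)                    # expected days in S
    p_SS = k3 / (k3 + k5)                  # chance of becoming seriously sick
    t_SS = p_SS / (k4 + k6)                # expected days in SS (averaged)
    p_B = (1 - p_SS) + p_SS * k6 / (k4 + k6)   # chance of reaching B at all
    t_B = p_B / k7                         # expected days in B (averaged)
    return k11*t_I + k12*t_S + k13*t_SS + k14*t_B

R0 = R0_estimate()   # comes out above 2 with these assumed rates
```

With k14 at zero, R0 scales in direct proportion to k11, which is why scaling k11 down by the intervention effectiveness scales R0 down in the same ratio.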
Solving the equations
The model, with the many interdependent parameter choices that can be made, solves the Differential Equations described earlier, using the initial conditions mentioned in Table 3. Alex’s paper describes runs of the model for different assumptions around interventions (and non-interventions), and he has presented some outcomes of National models (for example for China) on his YouTube channel at:
and Alex’s introduction to the model on that channel, for both China, and the world outside China which describes the operation of the model, and the post processing fitting to real data charts can be seen at
Alex’s early April update on Italy is at:
I’m at an early stage of working on a UK model, using Alex’s model and techniques, and the most recent UK data is presented at the UK Government site at:
using the link to the total data that has been presented to date on that site. Unfortunately, each day I look, the presentation format changes (there used to be a spreadsheet which was the most convenient) and now the cumulative case count for the UK has increased suddenly by 3000 on top of today’s count. Death numbers are still in step. It isn’t hard to see why journalists might be frustrated with reporting by the UK Government.
I have, as a first cut, run the numbers through the model to get an initial feel as to outcomes. The charts produced are not yet annotated, but I can see the shape of the predictions, and I will just include one here before I go on to refine the inputs, the assumptions, the model parameters and the presentation.
In this chart, days are along the x-axis, and the y-axis in a log scale to base 10 (equal intervals are a multiple of 10 of the previous y-gridline) of numbers of individuals.
The red curve represents active cases, the purple curve the number of deaths D, and the yellow curve the number of seriously sick patients, SS. This chart is the simpler of the two that the model produces, but serves to illustrate what is going on.
I have not yet made any attempt to match the curves against actual UK outcomes for the first 50 days (say) as Alex has done for the other countries he has worked on.
The input choices for this version for the UK are:
interv_time = 52; % intervention time: Day 51 = March 23 – 1 = March 22
interv_success = 0.85; % fractional reduction of k11 during intervention
k11 = 0.39; % infection rate (per day)
k5 = log(2)/3.5;
k6 = log(2)/10;
k7 = log(2)/10;
fSS = 0.1; % fraction seriously sick
fCR = 0.04; % fraction critical
fmort = 0.015; % mortality rate
correction = 0.0012;
P = 67.8e6; % total population
The correction number is set to adjust for the correct starting number of cases. On Alex’s advice I set k11 = 0.39 which seems to provide a good initial match to death data.
In terms of numerical outputs, this is just a model, and hasn’t even been tuned, let alone fine tuned. The charts above reflect the output numbers in the model, which were:
doubling_time = -7511748644 (not relevant here, for a declining epidemic); final_deaths = 39,319, reached after a little over 100 days; total_infections = 2,621,289, peaking at about 60 days
and I'm still making sense of these and working out some possible adjustments. Firstly I want to match with real data up to date, so there is some spreadsheet and plotting work to do.
Now the model has been run, there can be a combinatorial number of changes that could be made to get the model to fit the first 50 days’ data.
The work has only just started, but I feel that I have a worthwhile tool to begin to parallel what the Government advisers might be doing to prepare their advice to Government. Watch this space!