Current Coronavirus model forecast, and next steps

Introduction

This post covers the current status of my UK Coronavirus (SARS-CoV-2) model, stating the June 2nd position and comparing it with a June 3rd update, in which I reworked the model with 83.5% intervention effectiveness (down from 84%). This reduces the transmission rate to 16.5% of its starting value (instead of 16%), relative to the period before the 23rd March lockdown.

This may not seem a big change, but as I have said before, small changes early on have quite large effects later. I did this because I see some signs of growth in the reported numbers over the last few days, which, if it continues, would be a little concerning.

I sensed some urgency in the June 3rd Government update, on the part of the CMO, Chris Whitty (who spoke at much greater length than usual) and the CSA, Sir Patrick Vallance, to highlight the continuing risk, even though the UK Government is seeking to relax some parts of the lockdown.

They also mentioned more than once that the significant “R” reproductive number, although less than 1, was close to 1, and again I thought they were keen to emphasise this. The scientific and medical concern and emphasis was pretty clear.

These changes are in the context of quite a bit of debate around the science and the forecasts between key protagonists, and I begin with the background to the modelling and data analysis approaches.

Curve fitting and forecasting approaches

Curve-fitting approach

I have been doing more homework on Prof. Michael Levitt’s Twitter feed, where he publishes much of his latest work on Coronavirus. There’s a lot to digest (some of which I have already reported, such as his EuroMOMO work) and I see more methodology to explore, including third party input to the stream, such as Twitter posts from Prof. Sir David Spiegelhalter, who also publishes on Medium.

I DO use Twitter, although a lot less nowadays than I used to (8.5k tweets over a few years, but not at such a high rate lately); much less of it is social nowadays, and more is highlighting of my https://www.briansutton.uk/ blog entries.

Core to that work are Michael’s curve-fitting methods, in particular the Gompertz cumulative distribution function and the Change Ratio / Sigmoid curve references that Michael describes. Other functions are also available(!), such as the Richards function.

This curve-fitting work takes an entity’s published data on cases and deaths (China, the Rest of the World and other individual countries are some important entities that Michael has analysed) and attempts to fit a postulated mathematical function to the data, first to achieve a good fit, and then to make projections into the future.

This has worked well, most notably in Michael’s work in forecasting, in early February, the situation in China at the end of March. I reported this on March 24th when the remarkable accuracy of that forecast was reported in the press.

The Times coverage on March 24th of Michael Levitt's accurate forecast for China

This curve-fitting method, incidentally, is a technique used in the process of “training” a computer program to solve problems it hasn’t previously encountered – a process known as “machine learning”, a field of study within Artificial Intelligence.

Forecasting approach

By contrast, my model (based on a model developed by Prof. Alex de Visscher at Concordia University) is a forecasting model, based upon differential equations describing the infective behaviour of the virus, with my own parameters and settings, and UK data; it is currently matching UK deaths data on the basis of the Government’s published “all settings” deaths.

The model is calibrated to fit known data as closely as possible, adjusting key parameters in the equations such as those describing the SARS-CoV-2 virus transmission rate, incubation period and lethality. A set of differential equations is built describing the rate of transitions of people from uninfected, through infected, either to death or to recovery, to arrive at a predictive model for the future.

No mathematical equation is assumed for the curve shapes of charts of the published data; the model behaviour is built bottom-up from the data, postulated parameters, starting conditions and differential equations over time.

The model solves those differential equations representing such assumed relationships (through the infective behaviour of the virus) between “compartments” of people, including, but not necessarily limited to Susceptible (so far unaffected), Infected and Recovered people in the overall population.

I had previously explored such a generic SIR model (with just three such compartments), using code based on the Galbraith solution to the relevant differential equations. My following article on the Reproductive number R0 was set in the context of the SIR (Susceptible-Infected-Recovered) model, but my current model is based on Alex’s 7 Compartment model, allowing for graduations of sickness and multiple compartment transition routes (although NOT with reinfection).

There are many variations and extensions of the SIR model, such as SEIR models, allowing for an Exposed but not Infected phase, and SEIRS models, adding a loss of immunity to Recovered people, returning them eventually to the Susceptible compartment. I discussed some of these options in one of my first articles on SIR modelling, and then later on in the derivation of the SIR model, mentioning a reference to learn more.

Although, as Michael has said, the slowing of growth of SARS-CoV-2 might be because it finds it hard to locate further victims, I should have thought that this was already described in the differential equations for SIR-related models, and that the compartment links in the model (should) take into account the effect of, for example, social distancing (via the effectiveness % parameter in my model). I will look at this further.
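As a minimal illustration of how such a compartment model is built from differential equations, here is a generic three-compartment SIR sketch (not Alex’s 7-compartment model; the parameter values are illustrative, not my fitted UK settings):

```python
def simulate_sir(N=1000, I0=1, beta=0.3, gamma=0.1, days=160, dt=0.1):
    """Euler integration of the basic SIR equations:

    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    """
    S, I, R = N - I0, float(I0), 0.0
    history = [(0.0, S, I, R)]
    for step in range(1, int(days / dt) + 1):
        new_infections = beta * S * I / N * dt  # S -> I this time step
        new_recoveries = gamma * I * dt         # I -> R this time step
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        history.append((step * dt, S, I, R))
    return history
```

With these numbers R0 = beta/gamma = 3, and the run shows the familiar rise, peak and decline of the Infected compartment; a 7-compartment model follows the same pattern, just with more graduations of sickness and more transition routes.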

The June 2nd UK reported and modelled data

Here are my model output charts exactly up to June 2nd, as of the UK Government briefing that day; they show (apart from the last few days over the weekend) a very close fit to reported death data**. The charts are presented as a sequence of slides:

These charts all represent the same UK deaths data, but presented in slightly different ways – linear and log y-axes; cumulative and daily numbers; and to date, as well as the long term outlook. The current long term outlook of 42,550 deaths in the UK is within error limits of the Worldometers-linked forecast of 44,389, presented at https://covid19.healthdata.org/united-kingdom, but is not modelled on it.

**I suspected that my 84% effectiveness of intervention would need to be reduced a few points (c. 83.5%) to reflect a little uptick in the UK reported numbers in these charts, but I waited until midweek, to let the weekend under-reporting work through. See the update below**.

I will also be interested to see if that slight uptick we are seeing on the death rate in the linear axis charts is a consequence of an earlier increase in cases. I don’t think it will be because of the very recent and partial lockdown relaxations, as the incubation period of the SARS-CoV-2 virus means that we would not see the effects in the deaths number for a couple of weeks at the earliest.

I suppose, anecdotally, we may feel that UK public response to lockdown might itself have relaxed a little over the last two or three weeks, and might well have had an effect.

The periodic scatter of the reported daily death numbers around the model numbers is because of the regular weekend drop in reporting. Reporting is always delayed over weekends, with the backlog caught up over the Monday and Tuesday, typically – just as for 1st and 2nd June here.

A few deaths from previous days are often reported at other times too, when the data wasn’t available at the time, so the specific daily totals typically do not comprise precisely, and only, deaths that occurred on that particular day.

The cumulative charts tend to mask these daily variations as the cumulative numbers dominate small daily differences. This applies to the following updated charts too.

June 3rd update for 83.5% intervention effectiveness

I have reworked the model for 83.5% intervention effectiveness, which reduces the transmission rate to 16.5% of its starting value, prior to 23rd March lockdown. Here is the equivalent slide set, as of 3rd June, one day later, and included in this post to make comparisons easier:

These charts reflect the June 3rd reported deaths at 39,728 and daily deaths on 3rd June of 359. The model long-term prediction is 44,397 deaths in this scenario, almost exactly the Worldometer forecast illustrated above.

We also see the June 3rd reported and modelled cumulative numbers matching, but we will have to watch the growth rate.

Concluding remarks

I’m not as concerned to model cases data as accurately, because the reported numbers are somewhat uncertain, collected as they are in different ways by the four Home Countries, and by many different regions and entities in the UK, with somewhat different definitions.

My next steps, as I said, are to look at the Sigmoid and data fitting charts Michael uses, and compare the same method to my model generated charts.

*NB The UK Office for National Statistics (ONS) has been working on the Excess Deaths measure, amongst other data, including deaths where Covid-19 is mentioned on the death certificate, not requiring a positive Covid-19 test as the Government numbers do.

As of 2nd June, the Government announced 39,369 deaths in its standard “all settings” measure – Hospitals, Community AND Care Homes (with a Covid-19 test diagnosis) – but the ONS is mentioning 62,000 Excess Deaths today. A little while ago, on the 19th May, the ONS figure was 55,000 Excess Deaths, compared with 35,341 for the “all settings” UK Government number. I reported that at https://www.briansutton.uk/?p=2302 in my EuroMOMO data analysis post.

But none of the ways of counting deaths is without its issues. As the King’s Fund says on their website, “In addition to its direct impact on overall mortality, there are concerns that the Covid-19 pandemic may have had other adverse consequences, causing an increase in deaths from other serious conditions such as heart disease and cancer.

“This is because the number of excess deaths when compared with previous years is greater than the number of deaths attributed to Covid-19. The concerns stem, in part, from the fall in numbers of people seeking health care from GPs, accident and emergency and other health care services for other conditions.

“Some of the unexplained excess could also reflect under-recording of Covid-19 in official statistics, for example, if doctors record other causes of death such as major chronic diseases, and not Covid-19. The full impact on overall and excess mortality of Covid-19 deaths, and the wider impact of the pandemic on deaths from other conditions, will only become clearer when a longer time series of data is available.”

Coronavirus modelling work reported by the BBC

This article by the BBC’s Rachel Schraer explores the modelling for the progression of the Coronavirus Covid-19. In the article we see some graphs showing epidemic growth rates, and in particular this one showing infection rate dependency on how many one individual infects in a given period.

https://www.bbc.co.uk/news/health-52056111

The BBC chart showing infection rate dependency on how many an individual infects

This chart led me to look into more sophisticated modelling tools than just the spreadsheet I already mentioned in my previous article on Coronavirus modelling; this is a very specialist area, and I’m working hard to model it more fully.

My spreadsheet model is a simple power-law model; it allows you to enter a couple of your own parameters (the number of days out, and the doubling period for cases in days) to see the outcome; see it at:

https://docs.google.com/spreadsheets/d/1kE_pNRlVaFBeY5DxknPgeK5wmXNeBuyslizpvJmoQDY/edit?usp=sharing

It lists, as a table, case outcomes after a given number of days since 100 cases (up to 30 – but you can enter your own forecast number of days and doubling period), given how many days it is assumed cases take to double. It’s just a simple application of a power law, and is only an analysis of output numbers, not a full model. It explains potential growth on various doubling assumptions. It appears, for example, in the following Johns Hopkins chart (this time for deaths, but it’s a similar model for cases), which presents the UK and Italy prognoses lying between doubling every two days and every three days since the index day at 10 deaths:

An example of a log scale y-axis chart for deaths on two doubling rate assumptions

Any predicted outcomes are VERY dependent on that doubling rate assumption, as my spreadsheet showed – in terms of cases, after 30 days since 100 cases, a doubling every 2 days would lead to about 3 million cases, but a doubling every 3 days leads to 100 thousand cases. This is an example of the non-linearity of the modelling – a 50% improvement in the case doubling period leads to a roughly 30-fold improvement in the predicted number of cases after 30 days.
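The spreadsheet’s arithmetic is a one-line power law; this is my reconstruction of it (the function name is mine, for illustration), assuming a base of 100 cases on day zero:

```python
def cases_after(days, doubling_period, base=100):
    # Power-law growth: starting from `base` cases on day zero,
    # the count doubles every `doubling_period` days.
    return base * 2 ** (days / doubling_period)
```

cases_after(30, 2) gives about 3.3 million, while cases_after(30, 3) gives 102,400 – the roughly 30-fold (strictly 32-fold) difference noted above.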

Reproducing the infection rate growth numbers in the BBC article above, relating the resultant number of cases after 30 days (say) to the average number of people an individual infects (the so-called R0 number, the Basic Reproduction Number), requires a deeper modelling technique. For an R0 explanation, see https://en.m.wikipedia.org/wiki/Basic_reproduction_number

I was interested, seeing the BBC infection rate chart, and its implications, to understand how precisely the number of people an individual is assumed to infect (on average) is related to the “doubling” rate assumptions we can make in the spreadsheet analysis.

I’ve been looking at SIR modelling – Susceptible-Infected-Recovered modelling – in a simple form, to get an idea of how it works. There are quite a few references on the topic, going back a long way. A very useful paper I have been consulting is from Stanford University in 2007 (https://web.stanford.edu/~jhj1/teachingdocs/Jones-on-R0.pdf), and some of the basis for that basic modelling goes back to Kermack and McKendrick in 1927.

Usefully, I have found some Python code in the Gillespie reference below that codifies a basic model. It uses the Gillespie algorithm – which derives from work done in 1976 – as a solution technique for the basic equations, which, although somewhat simple first-order differential equations, are non-linear and therefore difficult to solve analytically. It is basically a Monte-Carlo probabilistic iterative time-stepping method, well suited to computers (of the type I used to play with, for a quite different purpose, for the MoD in the 1970s).

https://en.m.wikipedia.org/wiki/Gillespie_algorithm

My trial model (built to become familiar with the way the model behaves) is based on the Python code, and I found that with a small total population (N) of 350, and generic parameters for infection rate (α) and recovery rate (β), there is slow growth in cases for a long time (relatively) and then a sharp increase (at about t=0.1 time units in the chart below) leading to a peak at about t=0.3, when recovery starts to happen; the population returns to health at about t=10. The very slow initial growth (from ONE index case) is why I show the x-axis with a log scale.
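The Gillespie direct method can be sketched quite briefly; this is not the Python code from the reference, but my own minimal reconstruction for a simple SIR model, with one index case, and illustrative rates rather than the generic parameters behind the chart:

```python
import random

def gillespie_sir(N=350, I0=1, alpha=0.3, beta=0.1, seed=42):
    # Stochastic SIR via the Gillespie direct method: two event channels,
    # infection (S,I -> S-1,I+1) and recovery (I -> I-1, R+1).
    random.seed(seed)
    S, I, R = N - I0, I0, 0
    t = 0.0
    history = [(t, S, I, R)]
    while I > 0:
        a_inf = alpha * S * I / N   # propensity of an infection event
        a_rec = beta * I            # propensity of a recovery event
        a_tot = a_inf + a_rec
        t += random.expovariate(a_tot)  # exponential wait to the next event
        # choose which event fires, proportional to its propensity
        if random.random() < a_inf / a_tot:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        history.append((t, S, I, R))
    return history
```

Because the waiting times and event choices are random draws, each run differs; fixing the seed makes a run reproducible, and averaging many runs approaches the deterministic SIR curves.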

This very slow growth from ONE case is, I guess, why most charts begin with the first 100 cases (or, in the case of deaths, 10) so that the chart saves horizontal axis space by suppressing the long lead-in period.

My next task is to put some real numbers into a model like this, and to work it through for a LARGE population, and, for comparison, to run it from time zero at 100 cases (which might avoid the long lead time in this current generic model).

I expect to find that I could then use a linear x-axis time scale, but that I would have to present the chart with a log y-scale for cases, as the model would need to represent the exponential growth we have seen for Coronavirus.

Charting my initial model, showing a test SIR model output, using generic input parameters

More sophisticated models also include birth-death adjustments (a demographic model) in the work, but as the life-cycle being assessed for the Covid-19 virus is much shorter (hopefully!) than the demographic cycle, this is ignored to start with.

Another feature that might be included for some important infections, where there is a significant incubation period during which individuals have been infected but are not yet infectious themselves, is an “Exposed” compartment. During this period the individual is in compartment E (for Exposed), prior to entering compartment I (for Infected), turning the SIR model into a SEIR model.

Another version of the model takes into consideration the exposed or latent period of the disease, but supposes that an infection does not leave any immunity after recovery, so that individuals who have recovered return to being susceptible, moving back into the S(t) (Susceptible as a function of time) compartment of the model; this model is therefore called the SEIS model. For a description of these models, and more, see https://en.m.wikipedia.org/wiki/Compartmental_models_in_epidemiology
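A minimal sketch of the SEIR variant described above, using simple Euler integration with illustrative (not fitted) parameters; sigma, the rate of leaving the Exposed compartment, is the reciprocal of the incubation period:

```python
def simulate_seir(N=1000, E0=1, beta=0.3, sigma=0.2, gamma=0.1,
                  days=200, dt=0.1):
    # SEIR: Susceptible -> Exposed (infected, not yet infectious)
    #       -> Infected -> Recovered.
    S, E, I, R = N - E0, float(E0), 0.0, 0.0
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / N * dt  # S -> E
        new_infectious = sigma * E * dt      # E -> I, after the latent period
        new_recovered = gamma * I * dt       # I -> R
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
    return S, E, I, R
```

The SEIS variant would instead route individuals leaving I back into S, with no R compartment, so the disease can circulate indefinitely.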

So we see that this is a far more complicated issue than at first sight. It is why, I think, Sir Patrick Vallance, the Chief Scientific Adviser, today began to talk about the R0 figure (a dimensionless number, a ratio) relating to the average number of people that one individual might infect.

My feeling was that we are far from a value of R0 that would lead to the end of the epidemic being in sight; if we in the UK are tracking a doubling of cases every 3 days (as we have been), then this might be nearer to an R0 of 2.5 than anywhere near ONE. If R0 drops below 1, then the epidemic would eventually die out, which he mentioned; above 1, and it continues to grow. As I said, I think we are far from an R0<1 situation.
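The link between doubling period and R0 can be roughed out under simple SIR assumptions: the early exponential growth rate is r = ln(2)/Td, and r = γ(R0 − 1), where γ is the reciprocal of the infectious period. The 6.5-day infectious period below is my illustrative assumption, not an official figure:

```python
import math

def r0_from_doubling(doubling_days, infectious_period_days=6.5):
    # SIR early-growth approximation: r = gamma * (R0 - 1),
    # with gamma = 1 / infectious period and r = ln(2) / doubling time,
    # giving R0 = 1 + r * infectious_period.
    # The 6.5-day default infectious period is an assumed, illustrative value.
    r = math.log(2) / doubling_days
    return 1 + r * infectious_period_days
```

A 3-day doubling period then gives R0 of about 2.5, consistent with the figure above; only a much longer doubling period brings R0 down towards 1.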

The amount by which R0 exceeds the value 1 might not seem to have such a great effect on the numbers of cases we are seeing at these early stages of an epidemic, but as the days wear on, the effects are VERY (i.e. exponentially) noticeable, and this is why the charts often have y-axis scales that are logarithmic, because otherwise they couldn’t easily be displayed.

In a linear y-axis chart, we run out of y-axis space quite quickly for exponential functions; to see all the data at the later time values, we have to compress the chart vertically so much that it is then hard to see the earlier, lower numbers. We see this in the chart below, which has a lot of “growing” to do. Note the dotted line, which is the predicted line for doubling of cases every 3 days (which we in the UK have been tracking):

Data presented with linear x and y axes

It has therefore become more usual to present the data differently, with a log scale for the y-axis, where, for example, the sample dotted “doubling” lines are straight lines, not steeply growing exponential curves (in the chart below, two dotted guidelines are shown for deaths, one for 2 day doubling, and one for 3 days); the shorter the doubling period, the steeper the straight line on such a chart:

Data presented with a log y-axis – each y-gridline is set at a factor of 10x the previous one
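The straight-line behaviour on a log axis can be checked numerically: each doubling period adds the same fixed amount, log10(2) ≈ 0.301, to the plotted height. A small sketch, assuming cases doubling every 3 days from a base of 100:

```python
import math

# Cases doubling every 3 days from a base of 100: on a log10 y-axis,
# each 3-day interval raises the plotted value by the same constant,
# log10(2), which is what makes the "doubling" guideline a straight line.
days = range(0, 31, 3)
logs = [math.log10(100 * 2 ** (d / 3)) for d in days]
steps = [b - a for a, b in zip(logs, logs[1:])]
```

A shorter doubling period just makes that constant step occur more often per unit time, which is why its guideline is steeper.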

In the 30th March Vallance presentation on TV, the growth curves on the last couple of log charts shown (cases and deaths, respectively) had a SLOPE that was DEcreasing slowly, not INcreasing (exponentially) rapidly (as the raw numbers actually are), although for a mathematician (or a data visualiser) this is a valid way to present such data.

The visual effect of choosing such a log scale for the y-axis would have been explained in more detail in an academic lecture theatre (as I have tried to do here), and I think it is useful to point this out; it would be a worthwhile clarification in the Government updates.

A final point, made in both the 29th and 30th March daily TV presentations, is that actions taken today will not have a tangible effect until a few days (maybe a week to two weeks) later; the outputs lag the inputs because of the lead times involved in infection rates, and in the effect of counter-measures on their reduction. What we see tomorrow doesn’t relate to today’s actions, it depends more on actions taken a week or more ago.

From recent charts, shown in Government updates, it does seem that what was done a week ago (self-isolation, social distancing and reduction in opportunities for people to meet other than in their household units) is beginning to have a visible effect on travel patterns, but any moves in the infection charts, if at all, are rather small so far.

Coronavirus – possible trajectories

I guess the UK line in the Johns Hopkins chart, reported earlier, might well flatten at some point soon, as some other countries’ lines have.


But if we continue at 3 days for doubling of cases, according to my spreadsheet experiment, we will see over 1m cases after 40 days. See:
https://docs.google.com/spreadsheets/d/1kE_pNRlVaFBeY5DxknPgeK5wmXNeBuyslizpvJmoQDY/edit?usp=sharing
and the example outputs attached for 3, 5 and 7 day doubling.

A million cases by 40 days if we continue on 3 day doubling of cases


If we had experienced (through the social distancing and other precautionary measures) and continue to experience a doubling period of 5 days (not on the chart but a possible input to my spreadsheet), it would lead to 25,000 cases after 40 days.

25600 cases at 5 day doubling since case 100


If we had managed to experience 7 days for doubling of cases (as Japan and Singapore seem to have done), then we would have seen 5000 cases at 40 days (but that’s where we are already, so too late for that outcome).

Not a feasible outcome for the UK, as we are already at 5000 cases or more


So the outcomes are VERY sensitively dependent on the doubling period, which in turn is VERY dependent on the average number of people each carrier infects.
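Those three scenario figures can be checked with the same power-law arithmetic the spreadsheet uses; a sketch (the function name is mine, for illustration):

```python
def cases_at_40_days(doubling_days, base=100):
    # `base` cases on day zero (the 100-case index day), doubling every
    # `doubling_days` days, evaluated 40 days later.
    return base * 2 ** (40 / doubling_days)
```

3-day doubling gives just over a million cases, 5-day doubling gives 25,600, and 7-day doubling about 5,250 – matching the three scenarios above.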


I haven’t modelled that part yet, but, again, assumptions apart, the doubling period would be an outcome of that number, together with how long cases last (before death or recovery) and whether re-infection is possible, likely or frequent. It all gets a bit more difficult to be predictive, rather than mathematically expressing known data.


On a more positive note, there is a report today of the statistical work of Michael Levitt (a proper data scientist!), who predicted on February 21st, with uncanny accuracy, the March 23rd situation in China (an improvement compared with the then-gloomy other forecasts). See the article attached.

Michael Levitt article from The Times 24th March 2020

Coronavirus – forecasting numbers

A few people might have seen the Johns Hopkins University Medical School chart on Covid-19 infection rates in different countries. This particular chart (they have produced many different outputs, some of them interactive world incidence models – see https://coronavirus.jhu.edu/map.html for more) usefully compares various national growth rates with straight lines representing different periods over which the number of cases might double – 1 day, 2 days, 3 days and 7 days. It’s a kind of log chart to base 2.

Johns Hopkins University national trends, log base 2 chart

I’ve been beginning to simulate the outcome from 2 input data items:

your chosen number of days (x) since the outbreak (defined at 100 cases on day zero to give a base of calculation); and

your chosen rate of growth of cases, expressed as an assumed number of days for doubling the case numbers (z);

to give the output, the number of cases (y) on day x.

This spreadsheet allows you, in the last columns, to enter x and z in order to see the outcome, y.

Of course this is only an output model; it knows nothing about the veracity of the assumptions – but the numbers (y) get VERY large for small doubling periods (z).

Try it. Only change the x and z numbers, please.

https://docs.google.com/spreadsheets/d/1kE_pNRlVaFBeY5DxknPgeK5wmXNeBuyslizpvJmoQDY/edit?usp=sharing

Coronavirus modelling – GLEAMviz15

Here’s the kind of stuff that the Covid-19 modellers will be doing. https://www.nature.com/articles/srep46076.pdf I have downloaded GleamViz, http://www.gleamviz.org/simulator/client/, and it is quite complicated to set up (I used to have a little Windows app called Wildfire that just needed a few numbers to get a pictorial progression of life/recovery/death from the disease, depending on infectivity, time taken to kill, etc). GLEAMviz15 is a proper tool* that needs a lot of base data to be defined. I’m thinking about it! PS – Some of the GleamViz team seem to be based in Turin.
*see my comment to this post.


Such modelling reminds me of event-based simulation models I used to develop for the MoD back in the 70s, using purpose-designed programming languages such as Simscript (Fortran-like) and Simula (Algol-like), all based around what today might be called object-oriented programming (OOP), where small modules of code represent micro-events that occur, having inputs that trigger them from previous (or influencing) events, and outputs that trigger subsequent (or influenced) events, with all inputs and outputs coded with a probability distribution of occurrence. In the MoD case this was about – erm, what am I allowed to say! – reliability and failure rates in aircraft, refuelling tankers and cruise missiles (even then). I recall that in my opening program statement, I named it “PlayGod”. I did ask for a copy of it years later (it’s all in very large dusty decks of Hollerith cards somewhere – I did overlap with paper tape, but I was a forward-looking person!) but they refused. Obviously far too valuable to the nation (even if I wasn’t, judging by my salary).


With regard to the publication of the modelling tools, I’ll leave aside the data part of it… there is a lot, and much of it will be fuzzy, and I’m sure it is very different for every country’s population, depending on where they are with the disease, what measures they have taken over a period, and whether, for example, they had “super-spreaders” early on, as some countries did.


Back in the early seventies, a European macro-economic project led to the publication of a book, The Limits to Growth. The context is described in https://en.m.wikipedia.org/wiki/The_Limits_to_Growth
Not long after, our Government released the macro-economic model software they were using to explore these aspects of the performance and the future of our economy in a worldwide scenario.
I worked at London University’s commercial unit for a while, and we mounted this freely available Government model, and offered it to all-comers.


The Economist was one of our clients for this, and their chief economics journalist, Peter Riddell, was someone we met several times in connection with it. He was very interested in modelling different approaches (to matters such as exchange rates and money availability (monetary and fiscal policy) and their impact on economic development in a constrained environment) and reporting on them, as a comparison with the Government’s own modelling, policies and strategies.


This was all at a time when the Economy was perceived to be at risk (as it is now from this pandemic), and inflation and exchange rates, monetary and fiscal policy (for example) were very much seen as determinants of our welfare as a nation, and for all other nations, including the emerging European Community, who originally sponsored this work.


We seem to be in a very similar situation with regard to the modelling of this Covid-19 pandemic, and I don’t see why the software being used (at least by the UK Government’s analysts (probably academics, I think Sir Patrick Vallance said in his answers to the Select Committee this morning)) could not be released so that we might get a better understanding of the links assumed between actions and outcomes, impact of assumptions made, and the impact of actual and potential measures (and their feedback loops, positive or negative) taken in response to the outbreak.
Apparently the Government IS going to release its modelling, and I would certainly be interested to see it – its parameters, assumptions, its logic and its variety of outcomes and dependencies on assumptions. The possible outcomes are probably EXACTLY why the Government hasn’t released it yet.

New Year skiing at Breckenridge 2003/4/5

This video is about skiing in Breckenridge during our vacation at New Year 2003-4.

We had been skiing in Breckenridge since the early 90s. Once our boys Harry & Tom discovered firstly Whistler, and then Breckenridge, we couldn’t persuade them to go back to skiing in Europe as we had for many years!

On our first trip there we met Dutch van Andel, our ski instructor, and spent many happy days out with him – a most effective way of taking ski lessons over a day as a family, instead of individually or in large groups – a happy medium!

Dutch and his sons, Hayden and Gerhard, became firm friends of ours, and more recently, when Dutch had moved back to the Netherlands and we had resumed skiing in Europe, we had a great ski trip together in Bormio (a feature video yet to come!)

Meanwhile, here is a video taken by the ski school on one of our days out with Dutch, when he took us to the Nastar slalom course as part of our – erm – training!

Here is another video taken by the ski school on one of our days out with Dutch, at New Year 2005, with a colleague, Mike, from the ski school, following us and catching the magic moments!

Courmayeur in January 2008

In January 2008, we spent a week skiing in Courmayeur, a smaller Italian resort, following Zermatt the year before and St Anton the year after. Courmayeur lost nothing by comparison, and as you will see below, we enjoyed the wonderful snow conditions and varied terrain, including off-piste and tree skiing, enormously.

The first video is about a day at Courmayeur, 15th Jan 2008, when fresh overnight snow left several runs in beautiful condition for deep snow skiing. Here are Harry and Tom, and their poor old cameraman, trying their hand at several of them.

Lunch at Maison Vieille was a real treat, the best of the mountain restaurants, and we went there a few times during the week. Amazingly there was no queuing for lunch, although it was busy enough, and we couldn’t wait to go back.

This is not the largest resort we have been to by any means, but in our time there, in January, there were so many excellent runs in wonderful snow conditions, and good weather nearly all of the time. We would go back there like a shot – but there are so many European resorts we want to revisit after so many years skiing in Breckenridge, Colorado.

While the boys were young, they wouldn’t let us ski anywhere else owing to the familiar food and language in the US, and of course we had wonderful times with our instructor, Dutch van Andel, and his two boys, Hayden and Gerhard, who all became firm friends, and hosted us for Christmases as well! Dutch joined us in Bormio, in 2014, on one of our more recent European expeditions. More of that in another post…

Early off-piste in fresh snow in Courmayeur

A second day of equally lovely snow conditions! Harry and I were the early starters, but we soon called Tom up the mountain to enjoy the wonderful snow conditions.

The next day, Harry and I get out early, and call Tom up to the spectacular conditions

The next day, 17th January, we went off-piste skiing again in Courmayeur, when more fresh overnight snow left several runs in beautiful condition for deep snow skiing. We decided to use the Mont Blanc guides, to help us find some new runs and some tree skiing. It was quite difficult here and there, as you will see!

All of us, Ann, Harry and Tom and I spent the day with Enrico Bonino, of Mont Blanc Guides (and an experienced climber) and it was quite an education!

Some more difficult off-piste skiing, with Enrico Bonino, including in the trees

My 2018 Prudential RideLondon 100

This video is about my 2018 Prudential RideLondon 100, riding for Marie Curie (as I did again a year later in 2019) in memory of my mother-in-law Laura, and in thanks for the help Marie Curie gave her and us during her final illness. I’ll be supporting Marie Curie again this year, 2020, even though I have my own entry.

It was pretty poor weather, much worse than usual: it started to rain at or near the start and didn’t stop until around Dorking, and it was also pretty cold as a result, so much so that I skipped a couple of stops, having shivered at the two where I did stop. I also had a puncture on a faster part of the course at Putney Heath, and another after I finished. The Continental guys at the finish were great, and together we found a hidden tiny speck of glass, neither visible nor tangible from outside or inside the tyre, so all was well for my ride home, with no repeat of the puncture!

Overall, though, it was a great experience, and my first attempt at the Prudential; together with the Saturday Freeride (also shown in the pictures), the whole weekend was the usual fulfilling, enjoyable and well-organised experience, especially with my riding buddy, Leslie, who has done the Prudential many times, and whom I joined up with at the end.

I look forward to riding with Marie Curie again at the 2020 edition on August 16th.

My 2018 Prudential RideLondon 100

Here is a short extract video, just covering the section between Leith Hill and Box Hill

The Leith Hill and Box Hill climbs in the (very wet) 2018 Prudential RideLondon 100

6Points Ibiza Oct 6th 2019 – full version

This video is about the 6Points cycling trip to Ibiza on 6th Oct 2019. It’s a more detailed record of our visit than my previous shorter video, and includes GoPro clips from my bike, as well as photos taken along the way.

Our ride comprised 140 kms or so of cycling, with about 1700m of climbing, and all of us on the ride agreed that it was another great day out, following our 6Points ride in Formentera the previous day.

Ibiza is the third largest of the Balearic Islands, an autonomous community of Spain. Its largest settlements are Ibiza Town (Catalan: Vila d’Eivissa, or simply Vila), Santa Eulària des Riu, and Sant Antoni de Portmany. Its highest point, called Sa Talaiassa (or Sa Talaia), is 475 metres (1,558 feet) above sea level; we visited Sant Josep de sa Talaia, as our highest point early in the ride.

Ibiza has become well known for its association with nightlife, electronic dance music, and for the summer club scene, all of which attract large numbers of tourists drawn to that type of holiday. David Guetta, whose music provides the background to this video, has DJ’d in Ibiza, back in the party days!

By visiting the north, east, south and western compass points of the island, as well as the highest point, and the (sea level!) beach at Portinatx, we saw most of the island from several viewpoints.

The steepest part of the ride, the climb out of Portinatx, after lunch, was memorable!

I’m already looking forward to my next visit.

Our 6Points Ibiza ride in October 2019
Philip, Bryan, Anja, Alan, Dalia, Simon, Brian, Nick, (Alex and Joules) – the 6Points crew!

A shorter video of the same ride, but with no GoPro footage

6Points Formentera, Oct 2019 – full version

This video is about the 6Points cycling trip to Formentera on 5th Oct 2019. It’s a more detailed record of our visit than my previous shorter video, and includes GoPro clips from my bike, as well as photos taken along the way.

Although it was only 70kms or so of cycling, and not very hilly, all of us on the ride agreed that it was a great day out. By visiting the north, east, south and western compass points of the island, as well as the highest point, and dipping our bikes in the sea to recognise the lowest altitude of our ride, we saw most of the island from several viewpoints.

I discovered, while putting the video together, that at least two of the lighthouses at compass point locations, Far de La Mola in the east, and Far de Barbaria in the south, have some interesting claims to fame, as well as Formentera itself having something of a “hippie” reputation. I knew that Ibiza is regarded by some as a “clubbing” island (video in preparation!) but I didn’t know of Formentera’s background.

Next to the Far de La Mola lighthouse is a 1978 monument in honour of the writer Jules Verne, born in 1828, commemorating the mention he makes of the lighthouse in his book “Hector Servadac (travels and adventures through the solar system)”.

The lighthouse at Cap de Barbaria, the southernmost point of Formentera, is the setting for the film “Lucía y el sexo” (Sex and Lucía) by director Julio Medem.

Our trip to the westernmost point at Can Marroig took us through the Ses Salines natural reserve, land acquired from the owners of the farm and properties there, once home to vineyards set up after the disruption of French vineyards caused by the phylloxera plague. And back in the day, the original hippie crowd, such as Bob Dylan and Janis Joplin, came here in the ’60s.

I’m already looking forward to my next visit.

Our 6Points cycling trip to Formentera, October 5th 2019