Below was recently emailed out to the European forecasting hub mailing list by @sbfnk:
Thanks as always for your continued contributions to the European COVID-19 Forecast Hub. We are making a few updates to the platform and project. In particular, we are putting a bit more of a focus than before on COVID-19 hospitalisations. The changes and their motivation are detailed below. We are hoping that the outlined changes in data sources and methodology will encourage teams to submit forecasts of hospital admissions.
Data on hospital admissions are now sourced from Our World in Data (OWID), which collates these from multiple sources including the ECDC where available. For some countries, these data are “right truncated”, that is, they are reported with a delay, causing the most recent data to be an underestimate that is likely to be revised in the future. Making forecasts for these data streams therefore includes an element of “nowcasting”, i.e. correcting for reporting delays.
We are therefore making two data sets available in the OWID data directory in the hub. One of them (named “truth_OWID-Incident Hospitalizations.csv”) includes these data and can be used by models that correct for right truncation; the other (named “truncated_OWID-Incident Hospitalizations.csv”) only contains data not expected to be substantially revised in the future, in line with the case/death data we provide. More detail on these two and the additional data sets that we collate can be found in the data-truth directory on GitHub. Any existing models forecasting hospitalisations need to be updated to use these files instead of the ones previously in the ECDC directory.
For any evaluation of forecast performance, we are treating data as final after 28 days - any revisions that occur more than 28 days after a given date will be ignored. The data files linked above reflect this and also label each data point as either “final” (will not be revised further), “near final” (data from the last 28 days that might be revised but, given past experience, is unlikely to be) or “expecting revisions” (likely to still be revised given past experience). This means that models that correct for right truncation should only do so for delays of at most 28 days. These changes also mean that it is now possible to make forecasts of past dates (e.g. “-1 week forecasts”), i.e. predictions of data that lie in the recent past but are still expected to be revised within 28 days.
We hope that these changes will motivate more teams to submit forecasts of hospital admissions to the hub. On the one hand, we are hoping that a reliable data stream covering many countries in the hub will facilitate modelling efforts. On the other hand, we think there are interesting research questions related to the project, such as whether correcting for delays improves forecast performance, which we are hoping to explore in future hub publications. Lastly, the focus on hospitalisations reflects their public health relevance, and the resulting forecasts will continue to inform ECDC recommendations. There will be an opportunity to discuss these changes in a future Hub modellers’ meeting.
On a last point, some of you may have seen that Johns Hopkins University is discontinuing its data collection. We are reviewing these data streams and will communicate an update in the future. In the meantime, we would ask teams that submit forecasts of cases and deaths to continue to do so.
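For anyone wanting to poke at the new files, something along these lines should pull them in - note that the repository path and the name of the status column are assumptions on my part and may need adjusting against the data-truth directory:

```r
library(readr)
library(dplyr)

# base raw URL for the OWID truth data in the hub repo (assumed path)
hub_raw <- paste0(
  "https://raw.githubusercontent.com/covid19-forecast-hub-europe/",
  "covid19-forecast-hub-europe/main/data-truth/OWID/"
)

# full data, including right-truncated recent points (for models that nowcast)
full <- read_csv(paste0(hub_raw, "truth_OWID-Incident%20Hospitalizations.csv"))

# pre-truncated data, in line with the case/death truth data
truncated <- read_csv(
  paste0(hub_raw, "truncated_OWID-Incident%20Hospitalizations.csv")
)

# assuming a column (here called `status`) carries the "final" /
# "near final" / "expecting revisions" labels
full |>
  count(status)

# points that may still change and therefore need nowcasting
to_nowcast <- full |>
  filter(status == "expecting revisions")
```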
This seems like an obvious testing ground for some epinowcast models, as well as other related methods (like @johannes' baseline model: Include a simple reference model). Would anyone be interested in collaborating on setting some of these up? If there is enough interest we could even test out different complexities of approach (i.e. all the new renewal model features based on @adrianlison's work).
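As a very rough illustration of the simple end of that spectrum, here is a multiplicative baseline in the spirit of the reference model (not the exact method used in the German hub). It assumes a cumulative reporting triangle as input:

```r
# Multiplicative baseline nowcast sketch. `rt` is assumed to be a matrix of
# cumulative counts with one row per reference date (oldest first) and one
# column per reporting delay 0, 1, ..., max_delay; unobserved cells can be NA.
baseline_nowcast <- function(rt, max_delay = 28, n_train = 60) {
  D <- max_delay
  n <- nrow(rt)

  # fully observed reference dates (older than max_delay) are used to estimate
  # the average fraction of the final count reported by each delay
  complete <- rt[(n - D - n_train + 1):(n - D), , drop = FALSE]
  frac_reported <- colMeans(complete / complete[, D + 1])

  # scale up the latest available cumulative count for each reference date
  # by the inverse of its estimated reporting fraction
  nowcast <- numeric(n)
  for (i in seq_len(n)) {
    d <- min(n - i, D)              # delay observed so far for this date
    nowcast[i] <- rt[i, d + 1] / frac_reported[d + 1]
  }
  nowcast
}
```

Something this simple would also give us a floor to compare the more complex epinowcast models against.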
We would currently need to restrict ourselves to nowcasts rather than augmenting forecasts, but potentially this could provide some additional motivation to add the features required to change that.
Does anyone have suggestions for other methods that they might like to implement?
A while ago I put quite some time into setting up a {targets} workflow for the Germany nowcasting hub (GitHub - epiforecasts/eval-germany-sp-nowcasting: Evaluating Semi-Parametric Nowcasts of COVID-19 Hospital Admissions in Germany). This might potentially be a good place to start, though the epinowcast parts require a fair bit of revision due to interface and model changes.
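A stripped-down version of that pipeline could look something like the sketch below - the helper functions are placeholders rather than anything from the existing repo:

```r
# _targets.R (sketch): target names and the helper functions
# download_truth(), nowcast_baseline(), run_epinowcast() and score_nowcasts()
# are placeholders, not functions from eval-germany-sp-nowcasting.
library(targets)
tar_option_set(packages = c("readr", "dplyr", "epinowcast", "scoringutils"))

list(
  tar_target(truth, download_truth()),                # OWID hospitalisation truth data
  tar_target(baseline, nowcast_baseline(truth)),      # simple reference nowcast
  tar_target(epinowcast_fit, run_epinowcast(truth)),  # fuller delay-correction model
  tar_target(scores, score_nowcasts(baseline, epinowcast_fit, truth))
)
```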
Probably the first thing to do is to set up a call to chat through what this would entail and what people would like to do.