Community Seminar 2023-02-01 - Johannes Bracher - Collaborative nowcasting of COVID-19 hospitalization incidences in Germany

On Wednesday we have our first community real-time analysis seminar at 2pm UK time (see Community seminars for real-time infectious disease modelling for some background)!

@johannes will be presenting his and @dwolffram's in-progress evaluation of the first six months of the Germany nowcasting hub collaboration for about 20-30 minutes, and then we will have a Q&A and a more general discussion.

I’m really excited to hear more on this, as from what I have seen the evaluation really manages to identify some interesting trends (though sadly not that I am the most brilliant nowcaster alive).


Real-time surveillance data are a crucial element in the response to infectious disease outbreaks. However, their interpretation is often hampered by delays occurring at various stages of data collection and reporting. These bias the most recent values downward, thus obscuring current trends. Statistical nowcasting techniques can be employed to correct these biases, allowing for accurate characterization of recent developments and thus enhancing situational awareness. In this talk, we present a pre-registered real-time assessment of seven nowcasting approaches, applied by independent research teams to German 7-day hospitalization incidences. Due to their unusual definition, in which hospitalization counts are aggregated by the date of the positive test rather than the date of admission, German hospitalization incidences are particularly affected by delays and can take several weeks or months to fully stabilize. For this pre-registered study, all methods were applied from 22 November 2021 to 29 April 2022, each day issuing probabilistic nowcasts for the current and 28 preceding days. Nowcasts were collected in the form of quantiles in a public repository and displayed in a dashboard. Moreover, a mean and a median ensemble nowcast were generated. We find that overall the compared methods were able to remove a large part of the biases introduced by delays. Most participating teams underestimated the importance of very long delays, though, resulting in nowcasts with a slight downward bias. Also, the accompanying uncertainty intervals were too narrow for almost all methods. Averaged over all nowcast horizons, the best performance was achieved by a model using case incidences as a covariate and taking into account longer delays than the other approaches. For the most recent days, which are often considered the most relevant in practice, a mean ensemble of the submitted nowcasts performed best.
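Since the abstract mentions that nowcasts were collected as quantiles and combined into mean and median ensembles, here is a minimal sketch of how such quantile-wise ensembling works. All numbers and team labels are made up for illustration; the actual hub pipeline is not shown here.

```python
import numpy as np

# Hypothetical submissions: each team reports the same set of quantile
# levels for one target date; rows are teams, columns are quantile levels.
quantile_levels = [0.025, 0.25, 0.5, 0.75, 0.975]
submissions = np.array([
    [120, 180, 210, 250, 330],  # team A (made-up values)
    [100, 170, 200, 240, 310],  # team B
    [140, 190, 220, 270, 360],  # team C
])

# Quantile-wise mean ensemble: average each quantile level across teams.
mean_ensemble = submissions.mean(axis=0)

# Quantile-wise median ensemble: take the median at each quantile level.
median_ensemble = np.median(submissions, axis=0)
```

Combining quantile by quantile (rather than mixing the predictive distributions) is the standard approach when submissions arrive in quantile format, and it keeps the resulting ensemble in the same format as the individual nowcasts.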

Where to find the seminar

You can sign up for our event calendar here:

Or just join the event itself directly here:


  • We will be recording the talk but not the Q and A afterwards
  • If you would like to present your work as part of this series please reach out.
  • If you would like to help run these meetings please reach out.

@johannes et al - thanks for the talk, re-posting my Qs from the Zoom chat:

  • Is there a documented argument for this 7-day hospitalization definition somewhere convenient?
  • Related: is there a statement somewhere of the control objective (e.g. non-violation of some hospital capacity indicator) associated with the hub? Or was there nothing in particular for that?
  • Also re hospitalization reporting: presumably this definition also has some “COVID-related” framing. E.g. if I break my arm after “recently” testing positive, how is that counted?
  • For forecast hub enabling: is there movement towards a “provide us a standardized Docker container plus money” model to prevent the “it got unplugged” problem? Basically, describe the forecast contours for inputs / outputs / environment, and then centralize the IT elements of making it go for the hub. The answer in the meeting was, roughly, “no, not enough capacity, but it would obviously be great”. Follow-up: would it be practical to describe in terms of time / cost / quality how “great” it would be? I get that you (and everyone else) are under-resourced, but it would be useful to describe to a funder / government what they are missing out on with the current approach.
  • Regarding far-past errors / coverage failures: totally believable in raw absolute numbers, but do we think that misses due to e.g. 100-day-delayed updates actually matter relative to the point of this exercise (which is presumably something like near-term policy decisions about hospitalization capacity)? Are there ways to adjust the scoring (e.g. something akin to discounting / decay)?
  • For the proper scoring work: what are the prospects for translating that framing into practical measures (e.g. like what gets done in optimal control problems, cost-benefit functions) that map more easily into policy-maker head space (e.g. budget impacts)? Roughly, turning a score into a meaningful forecast benefit / penalty in monetary terms?
  • What sort of synthetic validation of the forecasting models / ensembling / etc. have you attempted? It seems plausible that this could concretely attribute e.g. data-stream problems.
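To make the discounting / decay idea in the scoring question above concrete, here is a hypothetical sketch: down-weight per-horizon scores by how far in the past the target day lies, e.g. with an exponential half-life. The half-life value and the placeholder scores are assumptions for illustration, not part of the hub's actual evaluation.

```python
import numpy as np

# Horizons: 0 = the nowcast date itself, up to 28 days back.
horizons = np.arange(0, 29)

# Placeholder per-horizon scores (e.g. weighted interval scores);
# here they simply grow with horizon for illustration.
scores = 1.0 + 0.1 * horizons

# Hypothetical discounting: the policy relevance of a target day
# halves every 7 days it lies in the past.
half_life = 7.0
weights = 0.5 ** (horizons / half_life)

# Discounted average score: recent horizons dominate.
discounted_mean = (weights * scores).sum() / weights.sum()
```

Because recent (low-score) horizons get the most weight here, the discounted average sits below the plain mean; any decay scheme like this would of course need to be pre-specified to keep the evaluation fair.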

Really nice to see everyone and thanks @johannes for a great talk!

Here is a schedule for this series: Seminar Schedule - Google Sheets

As I said, if you are interested in speaking, or there is someone you would be interested in hearing, please reach out.

@FelixGuenther has very kindly offered to help organise so we will be rotating chairs.