Assessing how much nowcasts are informed by observations

When plotting our nowcasts, we probably want to distinguish between estimates based on complete data (everything beyond the maximum delay), estimates based on partial data (the actual nowcast), and (once we have forecasting ability in epinowcast) forecasts. This could be done in a similar vein to EpiNow2, see e.g. here.
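As a sketch of what this could look like, the snippet below assigns an estimate type by date and plots each type separately; the input data frame and its columns (`date`, `median`, `lower`, `upper`) are illustrative and not the epinowcast output format.

```r
library(ggplot2)

# Classify each date relative to the present and the maximum delay, then
# plot with a separate fill per estimate type (all names illustrative).
plot_estimate_types <- function(nowcast_summary, present, max_delay) {
  nowcast_summary$type <- ifelse(
    nowcast_summary$date <= present - max_delay, "estimate (complete data)",
    ifelse(nowcast_summary$date <= present, "nowcast (partial data)", "forecast")
  )
  ggplot(nowcast_summary, aes(x = date)) +
    geom_ribbon(aes(ymin = lower, ymax = upper, fill = type), alpha = 0.4) +
    geom_line(aes(y = median)) +
    labs(x = "Date of event", y = "Cases", fill = NULL) +
    theme_bw()
}
```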

Such a distinction is especially important for the forecast part, because beyond the present (and conditional on our nowcast up to the present) our further estimates depend only on the expectation model and not on observed data, which is important to be aware of. However, this may also apply to nowcasts for days shortly before the present: if only a few cases have a short delay, and we are uncertain about the reporting delay distribution, then the nowcast for days shortly before the present may also be primarily guided by the expectation model. It is then, in large part, also a forecast. Being able to identify such situations and to better understand how much nowcasts at a certain delay are informed by actual observations vs. guided by the expectation model could be important.

One way to go about this would be to produce forecasts using the expectation model starting from a date before the present (e.g. from before the maximum delay, from last week, and so on) and to compare these with the actual nowcasts. If both are very similar for days close to the present, then the additional observations do not provide much information beyond our expectation model. This can either mean that the expectation model is just great, or that there is no value in nowcasting so close to the present and that estimates close to the present should rather be epistemically treated as forecasts…
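A minimal sketch of this comparison workflow, assuming `obs` is a data frame of daily counts with a `reference_date` column (all object and column names here are illustrative, not part of the epinowcast API):

```r
# Illustrative data: daily counts by date of event.
set.seed(1)
present <- as.Date("2021-06-30")
max_delay <- 14
obs <- data.frame(
  reference_date = seq(present - 60, present, by = "day"),
  confirm = rpois(61, lambda = 50)
)

# Data available to a pure expectation-model forecast made at the cutoff
# (here: the maximum delay before the present).
cutoff <- present - max_delay
obs_truncated <- obs[obs$reference_date <= cutoff, ]

# 1. Fit the full nowcasting model (delays + expectation model) to `obs`.
# 2. Fit the expectation model to `obs_truncated` only and forecast forward
#    to `present` (requires the forecasting functionality discussed below).
# 3. Compare the two posteriors for dates in (cutoff, present]; close
#    agreement suggests the recent partial reports add little information.
```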

If we want to use forecasts from points in time after the maximum delay, we need the forecasting ability in epinowcast. If we want to use forecasts from before/at the maximum delay, it would be practical/efficient to have a version/mode of the epinowcast model that works on complete data, i.e. without explicitly estimating reporting delays.

As a side note, the comparison described above can also be seen the other way round, i.e. as a real-time evaluation of the forecasting performance of the expectation model. The only difference is that no ground truth is available yet due to right truncation, so we use the nowcast as the best ground truth we have.

What we still need to discuss further is how to actually compare the nowcast and the forecast beyond plotting them together, e.g. using probabilistic scoring rules, measures of divergence between distributions, etc.
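One concrete option, in line with the side note above about using the nowcast as the best available ground truth: score the expectation-model forecast against the nowcast with the CRPS, e.g. via scoringRules::crps_sample(). The sample matrices below are simulated purely for illustration.

```r
library(scoringRules)

# Illustrative posterior draws (rows = MCMC samples, columns = dates near
# the present); neither matrix is real epinowcast output.
set.seed(1)
nowcast_samples <- matrix(rpois(1000 * 7, lambda = 50), nrow = 1000)
forecast_samples <- matrix(rpois(1000 * 7, lambda = 55), nrow = 1000)

# Use the nowcast median as pseudo-truth and score the forecast draws.
pseudo_truth <- apply(nowcast_samples, 2, median)
crps_sample(
  y = pseudo_truth,
  dat = t(forecast_samples) # crps_sample() expects rows = observations
)
```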

I think these are all great points. Another option for visualising the increasing reliance on forecasts vs. observations would be continuous shading akin to what was used in the original EpiNow package, shown e.g. in "Early analysis of the Australian COVID-19 epidemic" (eLife, fig.) or https://www.medrxiv.org/content/10.1101/2020.05.04.20090639v1.full-text (fig.).
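A rough ggplot2 sketch of such continuous shading, fading the nowcast ribbon by the proportion of data assumed to be already reported for each date; `df` and its columns (`date`, `lower`, `upper`, `median`, `prop_reported`) are illustrative, not epinowcast output:

```r
library(ggplot2)

# Illustrative data: the ribbon fades as less of the data is reported.
df <- data.frame(
  date = seq(as.Date("2021-06-01"), as.Date("2021-06-30"), by = "day"),
  median = 50, lower = 40, upper = 60,
  prop_reported = seq(1, 0.1, length.out = 30)
)

ggplot(df, aes(x = date)) +
  geom_rect(
    aes(
      xmin = date - 0.5, xmax = date + 0.5,
      ymin = lower, ymax = upper, alpha = prop_reported
    ),
    fill = "steelblue"
  ) +
  geom_line(aes(y = median)) +
  scale_alpha_identity() + # prop_reported is already on [0, 1]
  theme_bw()
```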

A broader question would be whether it might make sense to move plotting functionality into a separate package that could then be used by other nowcasting packages, potentially based on a common data class. See this related issue in EpiNow2.

For comparing nowcasts/forecasts I agree it would be good to use probabilistic scoring rules alongside their divergence equivalents (as described e.g. in section 2.2 of Krüger et al., "Predictive Inference Based on Markov Chain Monte Carlo Output", arXiv:1608.06802).
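For the divergence side, a sample-based estimate of the divergence associated with the CRPS (the Cramér distance) could look like the sketch below, if I am reading the paper right; the draws are simulated for illustration.

```r
# Cramér distance between F and G, estimated from samples using
#   d(F, G) = E|X - Y| - 0.5 * E|X - X'| - 0.5 * E|Y - Y'|
# with X, X' ~ F and Y, Y' ~ G independent. `x` and `y` could be posterior
# draws for one date from the nowcast and the expectation-model forecast.
crps_divergence <- function(x, y) {
  mean(abs(outer(x, y, "-"))) -
    0.5 * mean(abs(outer(x, x, "-"))) -
    0.5 * mean(abs(outer(y, y, "-")))
}

# Illustrative usage: two similar but shifted distributions.
set.seed(1)
crps_divergence(rnorm(500, 50, 5), rnorm(500, 55, 5))
```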

Good points @adrianlison, and this is something we definitely need to think more about (though, as you point out, actual implementation is somewhat limited by needing forecasting functionality).

I think the approach we took in EpiNow2 is quite effective and people generally seem to find it helpful (as it is quite clear), but I possibly prefer the shading approach we took with the original package (that @sbfnk flags), as it is less heuristic.

> This can either mean that the expectation model is just great, or that there is no value in nowcasting so close to the present and that estimates close to the present should rather be epistemically treated as forecasts…

This is something we really need to look at. In general there are lots of interesting questions about the interaction between the expectation model and the reporting delay model that we are really only just scratching the surface of.

I agree with @sbfnk that at some point post-processing (and also pre-processing) could and should be moved to another package to ease development headaches and increase modularity. I'm not sure that time is now, though, as we need to iron out more functionality and build out prototypes regularly. Adding too much infrastructure now could lead to locking in technical debt for the future.

The counterargument is that doing so might unlock more resources and make it easier for others to contribute, as the current model infrastructure is quite intimidatingly complex and may make it harder to engage with other, easier-to-understand areas of the code base.