Minimum viable model for generation time estimation

@samabbott thanks for pointing me to this.

This proposal looks fantastic.

My only comment is about the simulation study.

When an epidemic grows exponentially, we expect forward serial intervals to be long because infectors will tend to have shorter (backward) incubation periods than infectees. So even in the absence of truncation, fitting a convolution model that assumes identical incubation period distributions for infector and infectee could introduce some bias. It seems like accounting for truncation alone won't be sufficient to remove this dynamical effect…
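To make the mechanism concrete, here is a minimal numpy sketch. The lognormal incubation and gamma generation-interval parameters, and the growth rate `r`, are illustrative assumptions, not values from the proposal. During exponential growth, the backward incubation periods of a cohort of infectors with a common symptom-onset date are tilted by `exp(-r*c)`, which shortens them and lengthens the forward serial interval relative to the generation interval:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 0.1        # assumed epidemic growth rate per day
n = 200_000

# illustrative intrinsic distributions (made up for this sketch)
def incubation(size):   # lognormal, mean ~ 6.2 days
    return rng.lognormal(mean=1.7, sigma=0.5, size=size)

def generation(size):   # gamma, mean 5 days
    return rng.gamma(shape=2.5, scale=2.0, size=size)

# backward incubation periods of infectors whose symptom onset falls on a
# fixed calendar date during exponential growth: density proportional to
# f(c) * exp(-r * c), sampled here by importance resampling of intrinsic draws
c_intrinsic = incubation(n)
w = np.exp(-r * c_intrinsic)
c_backward = rng.choice(c_intrinsic, size=n, p=w / w.sum())

# forward serial interval: S = -C_infector + G + C_infectee
s_forward = -c_backward + generation(n) + incubation(n)

print(f"intrinsic incubation mean : {c_intrinsic.mean():.2f}")
print(f"backward incubation mean  : {c_backward.mean():.2f}")  # shorter
print(f"forward serial mean       : {s_forward.mean():.2f}")   # exceeds the
                                                               # generation
                                                               # interval mean
```

This is just the dynamical effect in isolation, with no truncation applied at all, which is why I don't think truncation adjustment on its own can absorb it.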

On the other hand, if we group serial intervals based on the exposure dates of the infectors, then I think the dynamical effect goes away, and accounting for truncation would be sufficient…
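A quick sanity check of that claim (again with made-up lognormal/gamma intrinsic distributions): for a cohort of infectors sharing one infection date, both the infector's and the infectee's incubation periods are intrinsic draws, so the dynamical bias cancels in expectation and the mean serial interval matches the mean generation interval:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# illustrative intrinsic distributions (made up for this sketch)
c_infector = rng.lognormal(1.7, 0.5, n)  # intrinsic incubation, infector
c_infectee = rng.lognormal(1.7, 0.5, n)  # intrinsic incubation, infectee
g = rng.gamma(2.5, 2.0, n)               # intrinsic generation interval

# cohorting by a common infector infection date: no exponential-growth
# reweighting of the infector's incubation period, so S = -C1 + G + C2
# has the same mean as G
s = -c_infector + g + c_infectee
print(f"generation mean: {g.mean():.2f}, serial mean: {s.mean():.2f}")
```

Here the only remaining issue is truncation, which now runs from infection (rather than symptom onset) to the observation time.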

I’m having trouble articulating the problem in more detail because these two scenarios are essentially re-orderings of the same data, yet they somehow give different answers/intuitions (but I think it’s an important distinction that needs to be made and addressed in practice)…

I suspect the difference lies in the amount of truncation that needs to be accounted for. In the first case, the backward incubation period distributions are not truncated but are dynamically biased, and so the truncation window runs from symptom onset to the observation time. In the second scenario, there is no dynamical effect, but the truncation window runs from infection to the observation time. This difference suggests that, depending on how the amount of truncation is implemented in the estimation framework, the dynamical effect on incubation periods could unintentionally bias the estimates of the generation interval. I think it might be important to think about this more carefully…