Thoughts on a Ten Simple Rules paper

Hello to everyone,

After the seminar last week, I was chatting with @samabbott about writing a Ten Simple Rules paper: “Ten simple rules for generating nowcasts for decision-making”. The idea is to summarize our knowledge and experience of generating and communicating nowcasts for decision-making: how this process can be facilitated, and what the main things are that help nowcasts, of any kind, be used as a tool for decision-making.

Let me know your thoughts on this and, if we have a quorum, we can start drafting a paper in the Ten Simple Rules format from PLOS Computational Biology.

Best,
Rafa

3 Likes

Thanks for posting this @rafalpx!

I really like this idea and I think we could collectively come up with something really useful.

Would we want to limit this to nowcasting in the sense of right-truncated count correction, or would we want to generalise beyond this? Generalising could open a whole can of worms, so perhaps we want to keep it focussed?
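
(For anyone newer to this, a minimal, illustrative sketch of what I mean by right-truncated count correction. The delay probabilities and counts below are made up, and a real nowcast would propagate uncertainty rather than applying a point correction.)

```python
import numpy as np

# Illustrative only: assumed probability that a case occurring on a given day
# has been reported within k days (a hypothetical reporting-delay CDF).
delay_cdf = np.array([0.3, 0.6, 0.8, 0.9, 0.95, 1.0])

# Counts reported so far for the last six days (most recent day last).
# Recent days are right-truncated: many of their reports haven't arrived yet.
reported = np.array([120, 115, 110, 90, 60, 25])

# Naive multiplicative correction: divide each day's count by the expected
# proportion of its reports that should have arrived by today.
days_since_event = np.arange(len(reported) - 1, -1, -1)  # 5, 4, ..., 0
prop_reported = delay_cdf[np.minimum(days_since_event, len(delay_cdf) - 1)]
nowcast = reported / prop_reported

print(np.round(nowcast))  # rough estimate of eventual counts per day
```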

2 Likes

I think keep it focused? I’ve recently been contemplating more abstract versions of the general problem, and agree with @samabbott that it quickly gets … esoteric.

Also, I suspect some of the 10 simple rules are going to be about having concrete prediction / control targets, understanding your data, and the like - the sort of rules that push back on heading in very theoretical directions.

1 Like

Very interesting! I’m new to the forum but happy to help in any way if useful.

1 Like

Same here. I am very new to this forum, but I would be happy to at least be involved in the discussion.

2 Likes

I think it is better to keep it simple; I would limit it to nowcasting, probably over short time horizons.
I’m wondering whether it is worth splitting the use between outbreak/epidemic/pandemic situations and endemic/surveillance/monitoring settings, looking at how these situations call for different communication strategies and which kinds of nowcasting setups are helpful in each.

2 Likes

I’ll give other people here a bit more time to see this and circulate it among colleagues; maybe by the end of next week we can start summarizing the bullet points. Feel free to suggest any here as well.

1 Like

Sounds like a good idea to me. I like the idea of splitting into those two categories since, as you say, it helps identify different strategies (as well as, potentially, the similarities we can work on in peacetime for outbreaks).

1 Like

Some incomplete thoughts about potential rules:

  • Should have something about the decision (maybe multiple somethings?). I can imagine some decisions hinge more on “how high will this get” vs “how quickly will X rise” vs “group-specific outcomes” vs … - basically, understand what the decision needs => understand what to actually target with hind/now/fore-casting
  • Related: need to carefully translate what the decision needs into what the tool can actually do. E.g. “how high will this get?” isn’t really something to do with nowcasting (aside from the extent to which it gives you a proper view of what’s happening now => what will happen in the future) - questions like that need to be turned into “how high might X get within two weeks, with Y% certainty?”
  • Probably a few notes about data: what’s available, what the entries actually mean, having a plan to get the nowcasting-related information (e.g. when there may be multiple versions of data for the same day - see the small sketch after this list)
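
As a concrete (entirely made-up) example of that last point, this is roughly what keeping multiple versions of the same day's data can look like: successive snapshots pivoted into a reporting triangle, from which a delay distribution could later be estimated.

```python
import pandas as pd

# Hypothetical snapshots of the same surveillance series, saved on successive
# report dates. Each row: (event_date, report_date, cumulative count known then).
snapshots = pd.DataFrame(
    [
        ("2024-06-01", "2024-06-01", 40),
        ("2024-06-01", "2024-06-02", 90),
        ("2024-06-01", "2024-06-03", 118),
        ("2024-06-02", "2024-06-02", 35),
        ("2024-06-02", "2024-06-03", 84),
        ("2024-06-03", "2024-06-03", 30),
    ],
    columns=["event_date", "report_date", "cumulative_count"],
)

# Pivot into a 'reporting triangle': rows are event dates, columns are report
# dates; the lower-right corner is empty because those reports haven't arrived.
triangle = snapshots.pivot(
    index="event_date", columns="report_date", values="cumulative_count"
)
print(triangle)
```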
2 Likes

The second point seems to me like a first or second rule: translating nowcasting outputs into decision-making. We may have too few decision-makers here, and we might need some contact with them to draft this idea.

In summary, I like your points. I’ll try to draft something this weekend and will post it here for contributions.

1 Like

Looking forward to seeing a draft.

Re us not having much representation on the decision-making side here, that’s one of the problems we ought to point the rule(s) at addressing, yeah?

Basically, generating forecasts that aren’t able to answer the pertinent questions (or worse, look like they are relevant but aren’t) is a critical failure. The best way to ensure things are pertinent is to engage with the consumers, who are presumably (in our case) public health policy-making / deciding types.

1 Like

Here is the link to the first draft of the paper:

DRAFT

I couldn’t come up with 10 rules, but we have 8 so far. Anyone with the link can edit and write in the document.

Have a nice week!

1 Like

Maybe it is a good idea to leave rules 9 and 10 for more ethical questions, e.g. what can be done if the nowcast points to events that do not materialise, or if decision-makers go against the direction the nowcast is pointing to, and how to handle these types of situation.

1 Like

This is a call for anyone interested in helping with the draft: it is taking good shape, so it would be very helpful to get ideas and feedback from more people here.

The draft link is here

I’ll look at this today - sorry for the delay! Looking forward to it.

1 Like

Coming a bit late to the discussion, but I like what you’re trying to do here! Some big-picture comments:

  • I think you’ll want to be clearer on definitions. Currently, it’s not really clear if you’re focusing on just nowcasting (as in the title) or a combination of hindcasting, nowcasting, and forecasting. I would recommend including them all, as they are rarely done in isolation for real-world uses. Similarly, you refer a lot to decision-making, but the questions you highlight are (appropriately) more focused on situational awareness and less about informing specific actions. Both are critical for decision-makers, but I think placing the focus on situational awareness more explicitly might be better framing
  • I think it would be useful to reorder the rules to a more natural flow. Perhaps try making them more chronological - eg as the process unfolds, from the actual technical work of making and validating the estimates, to how the results are presented / communicated, to how to interact with stakeholders, and ending with sunsetting.
  • As I read it, the target audience of the article is currently the modellers. It might be a useful exercise to think about what the rules would be if the target audience were the decision-makers instead, then ask whether there are any counterpart rules for the modellers that were missed by just thinking about it from the modellers’ perspective to begin with?

And a few notes on the rules themselves:

  • Rules 1 and 5 seem like they could be combined into something like ‘Use clear, consistent language when communicating results’
  • Rules 3 and 4 could also be combined into something like ‘Ensure the metrics being estimated provide meaningful situational awareness for decision-makers’ or possibly ‘Ensure the metrics being estimated provide meaningful situational awareness in a format useful to stakeholders’ (Rule 7 is part of how you do this)
  • I think it would be useful to include a rule that discusses presentation of uncertainty explicitly
  • Rule 2 is key and I wonder if it would make sense to include specific examples of caveats that would apply to most methods - such as holiday effects (ie changes in reporting delays etc around major holidays); see the small sketch after this list
  • Maybe include something about ongoing/real-time assessment of performance, either as part of Rule 9 or separately?
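
To make the holiday point concrete, here is a tiny, entirely hypothetical illustration of checking whether reporting delays differ around public holidays (the dates and delays are invented):

```python
import pandas as pd

# Hypothetical line list with onset and report dates; the goal is only to show
# that reporting delays can be summarised separately around public holidays.
cases = pd.DataFrame({
    "onset":  pd.to_datetime(["2023-12-20", "2023-12-21", "2023-12-22",
                              "2023-12-23", "2024-01-02", "2024-01-03"]),
    "report": pd.to_datetime(["2023-12-22", "2023-12-24", "2023-12-29",
                              "2023-12-30", "2024-01-04", "2024-01-05"]),
})
holidays = pd.to_datetime(["2023-12-25", "2023-12-26", "2024-01-01"])

cases["delay_days"] = (cases["report"] - cases["onset"]).dt.days
# Flag cases whose reporting window overlaps a holiday (a crude heuristic).
cases["holiday_affected"] = cases.apply(
    lambda r: any(r["onset"] <= h <= r["report"] for h in holidays), axis=1
)

# Mean delay with and without holiday overlap; a real analysis would feed a
# holiday indicator into the delay model rather than a simple group mean.
print(cases.groupby("holiday_affected")["delay_days"].mean())
```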

Looking forward to seeing how this evolves!

2 Likes

Thank you very much for all the comments! They are really enlightening and helpful.

1 Like
  • I think you’ll want to be clearer on definitions. Currently, it’s not really clear if you’re focusing on just nowcasting (as in the title) or a combination of hindcasting, nowcasting, and forecasting. I would recommend including them all, as they are rarely done in isolation for real-world uses. Similarly, you refer a lot to decision-making, but the questions you highlight are (appropriately) more focused on situational awareness and less about informing specific actions. Both are critical for decision-makers, but I think placing the focus on situational awareness more explicitly might be better framing

@jrcpulliam we could either expand the scope or narrow it to just the correction of right-truncated counts. As many of the considerations are similar, it likely makes sense to widen rather than narrow the scope. Agreed that focussing on situational awareness is sensible, and perhaps we need to finesse the title etc. to reflect this.

making them more chronological - eg as the process unfolds, from the actual technical work of making and validating the estimates, to how the results are presented / communicated, to how to interact with stakeholders, and ending with sunsetting.

:heart: this idea.

It might be a useful exercise to think about what the rules would be if the target audience were the decision-makers instead, then ask whether there are any counterpart rules for the modellers that were missed by just thinking about it from the modellers’ perspective to begin with?

:fire:

Agree on all the suggestions for the rules themselves. Nice :thought_balloon:

1 Like

I think we have some consensus that @jrcpulliam’s comments are pretty good - can you please add them to the DRAFT?

Use whichever mode you want, editing or suggesting; I’ll keep track of the differences and integrate them into the draft.

1 Like

@rafalpx I added some comments and content (including trying to provide some definitions for projections/forecasts/nowcasts in the introduction). Feel free to delete, edit, or keep. Happy to help contribute to the discussion as well. Really awesome work!

2 Likes