The overview of all the submissions is now up: Research Software Maintenance Fund: full application funding panels completed | Software Sustainability Institute
Reading this, it looks like they somewhat regret the process. In general, the grants submitted scored very highly: an average of 4.8, i.e. "very good, should be funded" (they made ad-hoc changes to their scale to add more "good" categories after submission).
They ended up funding 13 grants from 143 submissions. (I made a little calculator to look at the value added by grants, see Grant application break-even analysis calculator · GitHub; I think this grant was just above break-even, i.e. just beneficial at an ecosystem level.)
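For context, the calculator just does some very rough ecosystem-level arithmetic along these lines (a minimal sketch; the award size, effort and day-rate figures are placeholders I've made up for illustration, not the numbers in the gist - only the 143/13 counts come from the call itself):

```python
# Rough ecosystem-level break-even arithmetic for a grant call.
# Placeholder assumptions: average award, days per application, day rate.

n_applications = 143        # full applications submitted
n_funded = 13               # grants actually funded
award_value = 100_000       # assumed average award - placeholder
days_per_application = 10   # assumed effort per full application - placeholder
day_rate = 400              # assumed cost of a day of applicant time - placeholder

total_funded_value = n_funded * award_value
total_application_cost = n_applications * days_per_application * day_rate

ratio = total_funded_value / total_application_cost
print(f"Value funded:        {total_funded_value:,}")
print(f"Ecosystem time cost: {total_application_cost:,}")
print(f"Value / cost ratio:  {ratio:.2f} ({'above' if ratio > 1 else 'below'} break-even)")
```

The conclusion obviously depends entirely on what you assume for award size and application effort, which is why the gist lets you plug in your own numbers.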
In their wrap-up they had:
For this first round, we wanted to be deliberately broad, to understand the types of activity that the community felt needed funding.
I think this is a really disappointing approach from them and something I would like grant funders not to do. A full grant application shouldn't be part of your market research, as it is such a large time sink for everyone in the ecosystem other than you, especially in poorly funded areas like OS software.
How did we do?
We scored 4/6, so "very good" on their scale, but well below average and not close to being funded. There wasn't much specific feedback on why, and what there is doesn't really seem to align with the scores. The main takeaway is to already have the staff in place.
I think my main learning point from applying for this and the CZ essential software grants is that there may not be much point applying for these generalist software grants unless infectious diseases are really in the zeitgeist. That is somewhat problematic, in that there aren't domain-specific options of any kind that I am aware of. However, given the feedback, I think this application effectively had no chance regardless of content.
That being said, I am aware the people at Imperial applied for this as well, so perhaps they did better. If anyone knows, it would be interesting to compare notes.
Detailed reviewer comments:
Loses a point for limited quantification of maintenance outcomes (e.g., explicit release cadence, CI/coverage targets) and limited articulation of how deprecation/back-compat will be communicated and enforced.
There was no clear outline of success metrics.
Weaknesses: There was no real evidence of contribution to the wider software ecosystem.
Weaknesses: Being unable to recruit a suitable RSE is a significant risk.
We got our lowest score in this category and this was the only negative listed.
Marks off for currently informal governance (future state is promised but not yet defined), and for lack of concrete policies upfront (e.g., deprecation windows, versioning/release policy, minimum CI coverage %, bus-factor reduction plans).
Weaknesses: Establishing formal governance structures is not insignificant, as highlighted by being given its own work package. However, the number of work packages does concern me.
Overall comments
Reviewer 1:
Strengths: High-impact, widely used tools; thoughtful plan to reduce duplication and improve interfaces; credible team with prior ecosystem successes; strong community-building and EDIA orientation; metrics-aware management approach.
Weaknesses: Success hinges on hiring a specialist RSE and securing community uptake of new governance; several process details (release/deprecation policy, explicit coverage targets, concrete cross-project agreements) are not yet locked in; some outcomes would benefit from clearer, measurable acceptance criteria given the sizeable
Reviewer 2:
The objectives for the software, including creating a community around it, seem ideal. However, I feel it has included more goals than are achievable within the timescale.
Reviewer 3:
Overall this is a robust proposal and one that is needed so that this type of software can evolve and assist researchers in an area that is in constant change.