Redefining development funding
Opinion
14 Nov 2023
In 2021, we designed the Systems Innovation Learning Partnership (SILP) Experimentation Fund, a new funding programme to support experiments in systems innovation. Our ambition was to reimagine traditional funding approaches by shifting the focus towards distributive power, participatory decision-making, and, above all, learning.
Asking different questions
Coming from an EU-funding background, I applied my previous experience to the design of our new fund – creating a 25-page call document replete with the usual fare of ‘in-scope activities’, eligibility, assessment, and selection criteria. The result, however, was rigid and technical, and it lacked the essence of our core mission.
Questions loomed large: What qualified as ‘eligible’ activities in experimental systems change? How could we identify ‘desirable’ outcomes when dealing with the unknown? And, most notably, how could we define ‘impact’ when the focus was on the learning process itself?
The document had become too complex, and I found myself at an impasse. After discussions with colleagues, we drafted a lean, two-page document that contained only the essential information required to apply.
On reflection, it was clear that I had been asking the wrong questions. It wasn’t about forcing an experimental approach into an existing framework; it was about crafting a framework that allowed for experimentation and the innovative outcomes we wanted to see. I soon realised that funding for learning required rethinking the foundations of grantmaking.
The microgrant
At the fund’s inception, our priority was to make the application process itself valuable. With this in mind, we introduced microgrants for longlisted experiments. Initially designed to ease the financial burden on grantseekers as they developed their ideas, the concept evolved radically.
Microgrants came with no specific conditions attached; grantees were free to use them in whatever ways best suited their needs – from co-creating approaches to securing contributions from key stakeholders. We consciously refrained from imposing stringent output requirements, redirecting our focus from ‘value for money’ to the inherent value the microgrants brought to applicants and their ideas.
Significantly, we didn’t make it mandatory for microgrant recipients to apply to the second stage. This flexibility served as a down-payment on the learning-as-outcome approach and created space for exploration, which is vital for systems innovation. Furthermore, it underscored our role as partners in creating opportunities, rather than merely financial enablers and outcome auditors.
Crafting the perfect portfolio
After receiving the second-stage applications, we began building the ‘learning portfolio’. Unlike conventional portfolios tailored to specific systemic challenges, ours focused on individual and collective interventions that would help us learn about the nature of systems and systems change – especially where they demonstrated synergies in context, approach, challenge, actors, and more.
We took a multi-faceted approach to building the portfolio. We began by viewing the 3-minute videos submitted by longlisted applicants, supplemented by feedback from community grantmakers – an international group of systems innovation and development practitioners who worked with us to co-design and deliver the assessment and portfolio selection process. Next, we asked each community grantmaker to build their own portfolio, with an emphasis on novelty and, in particular, diversity – of learning questions, geography, context, and stakeholders.
We combined these individual portfolios into an initial ‘consensus portfolio’, based on the most-selected projects up to the maximum budget available. To refine it, we divided the community grantmakers into two groups, each tasked with a two-round review and iteration process. Round 1 focused on improving the portfolio by adding experiments, in line with the selection criteria. Round 2 then involved removing experiments, as the portfolio had inevitably surpassed the budget envelope, while retaining the Round 1 enhancements as far as possible.
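In mechanical terms, that first consensus step resembles a vote count followed by a greedy budget cut-off. The sketch below is purely illustrative – the function name, the per-experiment budgets, and the tie-breaking behaviour are our assumptions for the example, not a description of the tooling we actually used:

```python
from collections import Counter

def consensus_portfolio(individual_portfolios, budgets, max_budget):
    """Illustrative sketch of the initial consensus step.

    individual_portfolios: list of sets of experiment IDs,
        one set per community grantmaker
    budgets: dict mapping each experiment ID to its requested budget
    max_budget: the total budget envelope available
    """
    # Count how many grantmakers selected each experiment.
    votes = Counter()
    for portfolio in individual_portfolios:
        votes.update(portfolio)

    selected, spent = [], 0
    # Take the most-selected experiments first, skipping any that
    # would push the running total past the budget envelope.
    # (Whether to skip-and-continue, as here, or stop at the first
    # over-budget experiment is an assumption on our part.)
    for experiment, _count in votes.most_common():
        cost = budgets[experiment]
        if spent + cost <= max_budget:
            selected.append(experiment)
            spent += cost
    return selected, spent
```

Of course, the human rounds that followed did the real work: a purely mechanical cut-off like this cannot weigh diversity or synergy, which is precisely why the two-round review existed.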
Understandably, ‘improving’ the portfolio proved a difficult challenge. How do you weigh one geographic change against another, especially when both increase diversity? How do you value one learning synergy over another? What does ‘synergy’ really mean – complementarity through difference, or through similarity? And when removing experiments from the portfolio, is it even possible to retain the idea of improvement? We learned a lot from these sessions and, as a group, we continue to reflect on whether we adhered to the initial selection criteria.
Concluding reflections
It’s important to acknowledge that other funders have pursued co-creation, starter funds, ideation, and participatory grantmaking, often on a grander scale. Nevertheless, what makes the SILP Experimentation Fund radical is the combination of these elements in our approach.
From its inception to the launch of our cohort, our programme had funding-for-learning at its heart, alongside the distribution of power and decision-making through participation. We were as much a part of the experiment as the final cohort, embracing exploration, collaboration, community engagement, and capability building.
While we believe in the uniqueness of our approach, we welcome other perspectives. If you would like to redefine development funding in collaboration with the Systems Innovation Learning Partnership, please get in touch.