
Open-Source Readiness

KEY RECOMMENDATION: Evaluate teams against organizational open-source 'readiness models' to guide capacity development.

KEY RECOMMENDATION: Identify small-scale, lower-risk investments for skill-building experiments. These can often be aimed at places where your lack of open-source investment is causing problems (e.g. lack of re-usability).

KEY RECOMMENDATION: Be sure to include areas of 'adjacent value' when assessing the overall value of your DPG. These can be multipliers to your project's valuation.

EXAMPLE: The GeoNode project, initiated by the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR), realized significant but difficult-to-quantify value through the vibrant commercial community that grew around the project. In time, these contributors became so active and financially committed to the project's continued development that GFDRR was able to scale back its investment. This dynamic outcome is hard to capture in most models.

Open source is all about execution. Reaping the benefits of open-source investment requires completing a series of difficult steps ranging from designing an initial strategy to building an appropriate community to leveraging the resulting dynamics for strategic advantage, whether that be scaling to broad adoption quickly or orchestrating participants to focus on areas of needed innovation.

This module presents a somewhat abstract but experientially grounded view of organizational readiness for open source, including models to consider referencing and practical advice for moving your team along the path to masterful open-source execution. The Adoptability module delves more deeply into the specific open-source capabilities you'll need to effectively adopt or build an open-source product. Note that the term 'team' here also includes external vendors.

FUTURE WORK: What have DPG creators learned about building local vendor and labor capacity and readiness? What are typical readiness and capacity growth patterns for organizations working with open data and AI models, and how might this apply to agencies building DPGs?

Most government teams are not yet prepared to travel this capability path and reap those rewards at scale. They lag in their ability to execute, largely because they don't have enough experience, particularly as compared to private industry. But governments can learn quickly from what's been tried and tested by others as they've built organizational capacity to use open source to its full potential.

As an agency starts to climb the learning curve, it can be helpful to assess current abilities as a form of organizational readiness and to describe the journey to mastery as one of gaining capability. Practitioners often describe this as climbing the "readiness ladder". To do this, we can locate current capabilities in a readiness model. Such models clarify where an agency is in its journey to mastery and suggest next steps and likely results. Teams often use these models to identify areas for potential growth as well as potential pitfalls. They are most useful for those early in their open-source progress, and this module focuses on those beginning stages.

It is useful for an agency to consider its maturity on an organization-wide basis, but creation of an open-source DPG is usually executed by teams, and it is the capabilities of specific teams that matter most. This module therefore takes individual teams as the starting point for analysis, because even agency-wide open-source skills training will succeed or fail mostly at the level of specific teams working on specific projects. Teams obviously work within an organization that shapes their operational environment, however, and at some point the broader organizational view matters. When and under what conditions this occurs is highly case dependent.

There are several readiness models that go by various names. We share a number of them here because it is often worth considering more than one when examining an organization or a particular team.

Of the published models, we find the first one -- written by Microsoft Senior Director Jeff McAffer (now Senior Director at GitHub) -- particularly useful. It's simple but includes strategic components, accounts for realistic failure modes, and recognizes that open-source readiness will be unevenly distributed in any agency large enough to have multiple teams. We also like how it describes the phases of capability growth almost as generalized 'mental models' at the organization or team level, in a way that's both abstract and pragmatic. We've found that stepping back every so often to reflect using such shared language and concepts helps teams stay grounded in the big picture, rather than focusing exclusively on the details.

KEY RECOMMENDATION: Engage the team in a frank discussion about your open-source readiness and locate the team and the organization on McAffer's engagement model. What might this level mean for opportunities for growth, and what might it mean for potential problems? Identify three key steps the people in that discussion can take to improve execution capabilities at the team level and keep checking in as the team develops.

On many teams, initial open-source capabilities might be nascent. Team members might not have had significant (or perhaps even any) experience using open-source strategies to create value. The team likely works in an environment where FOSS investment is rare, and many do not see much reason to change that. That lack of knowledge might translate in some quarters into hostility toward FOSS. People will say "It can never work here" even as open source slowly seeps into more and more of the technology around them.

At this stage, open-source strategies will be difficult to execute. Internal political risks might be high. Policies needed to engage open source productively might be missing. Staff might not know how to begin working with external open-source contributors. Many people might lack even a basic understanding of what it means to do open-source work. Efforts to work in an open-source mode suffer from an increased risk of failure, and they might fail in ways that reinforce the belief that open source is not worth further consideration.

Many factors might move a team past this early stage and on its way toward tolerating open source, but movement usually comes from external pressure, changing environments, and staff additions and turnover. As conditions around an agency begin to change and the costs and risks of refusing to engage start to rise, pressure to engage with open source will increase.

Those costs might include the pain of maintaining internal forks of external open-source projects -- or, more commonly, the risks of failing to maintain all those forks. Similarly, the benefits of making minor open-source investments start to become clear, if only because other teams begin to reap those benefits and your team gains good internal examples to follow.

However any specific team begins to adapt, climbing the learning curve is often an exercise in experimentation. Most don't think of open-source investment in those terms. Instead, they cast initial open-source forays as making small concessions to necessity. Some see these experiments as seizing unique, non-repeatable advantages. Most don't think of those small, initial projects as the future direction of the team. More teams should consider the possibility, though. Sometimes, explicitly labeling such experiments as learning exercises and skill-building allows a team to maximize the value of those experimental investments. It prioritizes reflective analysis and learning. It gives permission to fail. Those can be useful to teams seeking adaptability. When considering McAffer's model, we might relabel his "tolerance" phase as "experimentation".

KEY RECOMMENDATION: Look for the places where your lack of open-source investment is causing problems that can be addressed by small-scale, low-risk experiments. Start there, being sure to label these experiments as skill-building exercises. Think about how you can engage your larger team or organization as you develop, making the learning relevant to their own needs and goals and bringing them along as allies.

Experiments come in many forms, but the most common first experiment is using some outside open-source code and engaging the open-source project. That might involve filing bug reports, offering a contribution, or merely participating in project mailing lists and forums. These are all relatively low-risk, low-investment ways to begin connecting a team to outside FOSS projects. If your team's future plans include larger-scale open-source engagement, building skills through these kinds of small-scale experiments can be very productive.

The experimentation phase is usually a skill-building and knowledge-gaining phase because it exercises the skills that cause a team to shift from merely tolerating open source to trying to harness it. Having those skills throughout a team provides the vision that starts to shift attitudes at more than just an individual level.

The problem that arises, especially as multiple teams start to embrace FOSS, is that they lack the infrastructure to succeed at it across the entire organization. They are missing policies, auditing, skills, culture, and experience. This is a pivotal, risky moment. A large number of teams will still be in the initial phases. Efforts to move internal culture toward open source will be perceived by some as a pointless shift toward the latest buzzword. Despite your best attempts to share learning across the organization, experimental skills will still be unevenly distributed internally. Many new open-source projects will fail, and this will convince some that all open source is destined to fail. The proverbial 'trough of disillusionment' will seem unbridgeable. In some agencies, some might even sabotage open-source projects for policy or political reasons.

McAffer sees this phase as one of hype, and perhaps that's because it's also when an agency embraces open source without quite being ready to execute. Agencies in this phase tend to engage FOSS in shallow, unsophisticated ways simply because they don't yet have the experience to make better strategic use of open-source opportunities. The way to move past this stage is not to reduce the hype (though that might help) but rather to increase readiness.

When an agency reaches this stage, it will have multiple teams eager to do open source and multiple teams still wary of the change. Managers at this stage will need guidance on using FOSS as a strategic component and managing teams with increasingly external deliverables. Developers will need technical infrastructure, easy-to-follow licensing policies, and permission to engage externally. Perhaps more importantly, they will need to develop new habits of working in the open and sharing even early, rough versions of their work. Beyond just technical teams, Human Resources will need hiring and compensation guidance as both skills and performance evaluation criteria shift. Building internal systems that provide all of those pieces is how agencies gain proficiency. Adding skills, process, and policy is how that happens, and it requires management approval and resources. Agencies that fail to provide this support from fairly high in the organization tend to level off at this level of readiness.

RECOMMENDATION: If you've been tracking your team's growth in execution capabilities and bringing others along on your learning journey, you should have a good shared understanding of what skills, processes, and policies you need to execute successfully, along with proof points of the value such investments will bring. This will help you present a strong recommendation to your organization's leadership and help you move from the Hype to Proficient stage in McAffer's readiness model. (Remember that these skills can come from external vendors as well as your internal team. Both approaches help you build a deeper talent pool).

Another consideration for agencies climbing the readiness ladder is that the need for skilled open-source practitioners will almost always exceed the supply. Demand for open-source skills is growing so fast that the world does not have enough experienced open-source strategists and developers to keep up. A number of companies and universities have addressed this problem by centralizing much of their open-source expertise in an Open Source Programs Office (OSPO). The OSPO's job is to use that collected expertise to improve open-source readiness and efficacy across the entire organization, often managing common issues like license compliance, developing an open-source culture, ensuring high-quality code releases, and advising on software tools. The Linux Foundation provides a good overview of the role and value of OSPOs. The TODO Group is a collection of private-industry OSPOs that also provides open guidance and resources.

We are not aware of many agencies that have an OSPO, but governments should consider focusing open-source readiness efforts on one agency or department that can then help others. In the United States, the federal government's General Services Administration (GSA) did this with 18F, which, after reaching relatively high levels of open-source capability and knowledge, developed guidance for other agencies approaching open source. GSA's Code.gov has created a toolkit that gives agencies guidance around creating and maintaining federal source code inventories and open-source repositories.

RECOMMENDATION: At the organizational level, Open Source Program Offices can help teams work more effectively and efficiently in open source. As you move up the readiness ladder, consider if such a centralized operational function would help the organization to execute better.

A final note on understanding and communicating the value or cost-effectiveness of open source. Gaining a more sophisticated understanding of this value - and having that understanding more broadly shared across an organization and its supporters - is both a prerequisite to and a marker of moving along these readiness models. Moreover, organizational decision makers and funders will want to know the cost/benefit ratio or the return on their open-source investment and might also want to compare the approach to alternatives.

Ultimately, the value of your DPG will depend upon how well you've met your goals: a qualitative and quantitative assessment of the DPG's positive impact on people's lives. How you'll first estimate and later actually measure this social impact is one of the first items to define in your project -- this obviously goes hand-in-hand with goal setting -- and the certainty of your measurements should improve with time.

There are tools out there that can help you make some cost and benefit determinations, but none capture the full value of an open-source approach -- especially specific to your context -- and it's likely that you'll need to modify some combination of these tools to your needs.

FUTURE WORK: Can we identify a common but customizable approach to DPG valuation that takes into account how open approaches -- for content, data, and software -- create areas of new value and models for how these could have future multiplier effects?

The World Bank's Open Data for Resilience Initiative & GeoNode: A Case Study in Institutional Investments in Open Source does a good job of measuring key open-source-related intangibles. Although it's hard to distill the approach taken in the paper into a formal framework, it can be described as paying attention to areas of adjacent value. This approach aligns naturally with thinking of an open-source project as part of an ecosystem -- or a set of ecosystems, from contribution through adoption -- which will likely be the case for most DPGs that aim for broad social impact.

Looking only at the cost of writing code themselves versus the value received from sharing costs with partners, it's conservatively estimated that the GeoNode project brought around a 200% return on investment (as of 2017). But the real value of open-source development to the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR), the leading organization behind GeoNode, was the self-sustaining open-source GeoNode community that emerged and the benefits that community continues to deliver. For example, a consortium of U.S. government agencies quickly picked up development work on the core of GeoNode -- valued at over $1 million USD -- which permitted GFDRR to tune its resource investments to features it specifically needed. Companies began providing commercial support, helping to grow further investment in GeoNode. A growing user base made it easier to identify and prioritize areas for improvement. These are several of many aspects of GeoNode's community and ecosystem that prove there's greater value in open source than what's captured in existing valuation frameworks.
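The return-on-investment arithmetic above is simple but worth making explicit. The sketch below uses hypothetical placeholder figures (not GFDRR's actual numbers) to show how a narrow "code written by partners versus code we paid for ourselves" ROI figure like GeoNode's 200% can be computed:

```python
def open_source_roi(internal_cost: float, external_contributions: float) -> float:
    """Narrow open-source ROI as a percentage.

    ROI = (benefit - cost) / cost, where the benefit counted is the value
    of code and maintenance received from outside contributors and the
    cost is what the organization spent itself.
    """
    if internal_cost <= 0:
        raise ValueError("internal_cost must be positive")
    return 100.0 * (external_contributions - internal_cost) / internal_cost

# Hypothetical example: an agency spends $500k on core development and
# receives $1.5M worth of contributed development from its ecosystem.
roi = open_source_roi(internal_cost=500_000, external_contributions=1_500_000)
print(f"{roi:.0f}% return")  # prints "200% return"
```

As the GeoNode case shows, this kind of calculation deliberately excludes the adjacent value (commercial support, a self-sustaining community, prioritization signals) that often dominates the real return.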

However you ultimately decide to approach valuing your DPG, start from a holistic view of your project and map out the potential benefits and related costs of open source, being realistic about what you can actually measure and prioritizing what will have the most cost and/or the most benefit per the goals of your project. Not everything is relevant.

Some valuation frameworks and tools we recommend reviewing and adapting include:

  • The Digital Impact Alliance publishes a great overview of five different ways of valuing the impact of ICT investments, A Valuing Impact Toolkit for ICT Investment, that includes guidance on which might be most fitting, depending on factors such as available resources, skills and data -- important considerations for any organization but perhaps particularly for those in low-resource environments.

  • To get much deeper into the cost assessment side of software, USAID has published the Software Global Goods Valuation Framework (with an accompanying spreadsheet) to assess cumulative development costs through an analysis of both retrospective and ongoing costs. Although aimed at digital health products, it fits other application areas as well. Perhaps most interesting to this model is its inclusion of a method for analyzing the cost of code, called the Constructive Cost Model (COCOMO 81).

  • For an example of how the World Bank measured the direct fiscal (not social) impact of Estonia's X-road project (a post facto measure), see Estonian e-Government Ecosystem: Foundation, Applications, Outcomes, which extrapolates a narrow view of fiscal value from the number of queries made to the system.

  • The Grameen Foundation published an in-depth financial ROI framework to help microfinance institutions (MFIs) forecast and analyze the benefits of adopting the DPG Mifos, the open-source platform for microfinance. It highlights that the decision to adopt and deploy Mifos cannot be based only on the ROI analysis, noting the importance of understanding non-financial intangibles, like creating a better foundation for future innovation. The framework doesn't include these intangibles, which can have different prioritization and value across implementations. Despite that limitation -- and the fact that the framework is unique to Mifos and MFIs and thus isn't an exact model for other projects to follow -- it's a well-reasoned approach to a broader view of benefits (categorized as increased revenues and decreased costs specific to how MFIs function) and costs (categorized as project expenses, like data migration and staff time) that's worth reviewing. It's a thoughtfully bounded approach, specific to its context and audience. For those looking to create a DPG with the goal of broad adoption across different locales with slightly different contexts, it's also worth studying how the Grameen Foundation created this tool to help MFIs more effectively adopt their DPG.
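The Constructive Cost Model mentioned in the USAID framework above has a simple "Basic" form that is easy to try out. The sketch below implements Basic COCOMO 81 with Boehm's published coefficients; treat the result as a rough order-of-magnitude estimate, since the model predates modern development practices:

```python
# Basic COCOMO 81: effort (person-months) = a * KLOC^b,
# schedule (months) = c * effort^d.
# Coefficients are Boehm's published Basic-COCOMO values per project class.
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),  # small teams, familiar problems
    "semi-detached": (3.0, 1.12, 2.5, 0.35),  # mixed experience, mixed constraints
    "embedded":      (3.6, 1.20, 2.5, 0.32),  # tight hardware/operational constraints
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, schedule in months) for a project
    of `kloc` thousand delivered source lines of code."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

# Example: estimate a 50 KLOC project developed under organic conditions.
effort, months = basic_cocomo(50, "organic")
print(f"~{effort:.0f} person-months over ~{months:.0f} calendar months")
```

The USAID valuation framework pairs this kind of cost-of-code estimate with retrospective and ongoing cost data; the code model supplies only the development-effort piece of that picture.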

Again, none of the above models quantify and incorporate the value of the intangible benefits of open source very well, although their costs are generally captured.

KEY RECOMMENDATION: Be sure to include areas of 'adjacent value' when assessing the overall value of your DPG. These can be multipliers to your project's valuation.

All of the above describes a path from the very beginning toward eventual mastery of open source. Our discussion focused on readiness in terms of skills and capabilities, but the truth is that doing open source well is more of a cultural shift than anything else. Organizations using open source fluently quickly find that the open approach is just their default process. That culture shift is what lies at the top of the readiness learning curve.