The SAI Uncertainty Database is Reflective's attempt to develop a dynamic, public, scientifically-grounded assessment of the key technical uncertainties in SAI. We intend for this database to help:
- Researchers select projects that answer the most decision-relevant questions.
- Funders build portfolios that systematically reduce high-leverage uncertainties.
- Policymakers see where the science is strong, where it's limited, and what it would take to improve it.
- Provide an initial foundation for a transparent, prioritized, stage-gated SAI research roadmap.
Quantifying uncertainties
Drawing on team expertise, literature reviews, and expert consultations, we developed the database with three main steps:
- Define the space of technical uncertainties: We identified the main technical uncertainties that must be understood for informed decisions about SAI and split them up into four categories:
- Aerosol Evolution
- Climate Response
- Earth System Response
- Engineering
Note: When defining individual entries, there is a trade-off between granularity and keeping the list useful and manageable; we've tried to make entries granular enough to be actionable without creating too many uncertainties to track.
- Describe and score each uncertainty: We populated the following fields for each entry (a schematic sketch of this structure follows this list):
- Metric: A specific, measurable "what-if" scenario, such as a specific amount of ozone loss or change in AMOC strength. The metric must be quantitative to allow an estimate of how likely it is and why it matters for decisions.
- How "tight" or "loose" one makes the metric affects both the likelihood and the consequences. There is no intrinsic right answer, but we try to aim for a quantification so that where possible either the level of uncertainty or decision relevance is "medium" (see below).
- In some instances, a single metric is used to represent a broader class of uncertainties (e.g., AMOC strength representing ocean circulation).
- Level of Uncertainty: How likely are we to be wrong about this uncertainty? More specifically, how likely is it that our current best estimate of the specific quantity representing this uncertainty is wrong by the amount quantified in the metric?
- Low: 0 - 10%
- Medium: 10 - 50%
- High: > 50%
- Decision Relevance: How much would decision-making on SAI be impacted if we are wrong about the uncertainty?
| Level | Non-Engineering Definition | Engineering Definition |
| --- | --- | --- |
| Low | Changes efficacy or one specific impact of SAI by <20% from the current central estimate, and there is no plausible case where this uncertainty changes the overall cost-benefit enough to impact its importance to informed decision-making. | No impact on the feasible deployment timeline. |
| Medium | Could change efficacy by >20% or materially impact side effect(s), but is very unlikely to alter the overall cost-benefit enough to materially impact its importance to informed decision-making. | An impact on when one could deploy of ≤5 years, as it relates to the scenario assumptions below. |
| High | There is a plausible case in which the outcome of this uncertainty alone substantially changes the overall cost-benefit or risk-risk assessment of SAI so as to materially impact its importance to informed decision-making (or the existence of the uncertainty would make informed decisions difficult). | An impact on when one could deploy of >5 years, as it relates to the scenario assumptions below. |
- Resolvability scale: At what point in the following pathway could this uncertainty plausibly be materially reduced?
- In silico
- Small-scale testing: 10-100 t SO2
- Large-scale testing: 100 kt SO2 (in a season); note that this is still too small to affect climate
- Discernible surface climate impact: > 1 Mt/year of SO2 (~0.1°C global cooling)
- Long-term sustained deployment: ≥ 0.5°C for ≥ 20 years
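To make the entry structure concrete, here is a minimal schematic sketch in Python. All names, types, and example values are our own hypothetical rendering for illustration; they are not the database's actual implementation, and the example values are not our assessed values.

```python
from dataclasses import dataclass
from enum import Enum

class UncertaintyLevel(Enum):
    LOW = "0-10%"      # chance the current best estimate is off by the metric amount
    MEDIUM = "10-50%"
    HIGH = ">50%"

class DecisionRelevance(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class ResolvabilityStage(Enum):
    IN_SILICO = 1
    SMALL_SCALE_TESTING = 2         # 10-100 t SO2
    LARGE_SCALE_TESTING = 3         # ~100 kt SO2 in a season
    DISCERNIBLE_SURFACE_IMPACT = 4  # >1 Mt/yr SO2, ~0.1°C global cooling
    SUSTAINED_DEPLOYMENT = 5        # ≥0.5°C for ≥20 years

@dataclass
class UncertaintyEntry:
    name: str
    category: str                      # e.g. "Climate Response"
    metric: str                        # quantitative "what-if" scenario
    uncertainty: UncertaintyLevel      # chance of being wrong by the metric amount
    decision_relevance: DecisionRelevance
    resolvable_at: ResolvabilityStage  # earliest stage of material reduction

# Purely illustrative example; the values are placeholders, not our assessment.
example = UncertaintyEntry(
    name="Ocean circulation (AMOC) response",
    category="Climate Response",
    metric="AMOC strength change beyond some threshold under 0.5°C of SAI cooling",
    uncertainty=UncertaintyLevel.MEDIUM,
    decision_relevance=DecisionRelevance.HIGH,
    resolvable_at=ResolvabilityStage.SUSTAINED_DEPLOYMENT,
)
```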
Scenario dependence
The decision-relevance of some uncertainties depends on the deployment scenario envisioned. To constrain this initial effort, we focused on a "well-managed, moderate" scenario, while acknowledging that scenario selection is itself an important uncertainty. Where the specification of a non-engineering uncertainty depends on the scenario, we assume the following:
- Hemispherically balanced deployment with injection in the subtropics at ~21km.
- Deployment would be gradually ramped up, and 0.5°C of cooling would occur at least a decade into deployment.
- Where relevant, we anchor our assessment of uncertainty and consequences on a deployment that cools by 0.5°C (by then we will know much more than now, and can re-evaluate uncertainties and whether to continue to increase cooling in light of what we know then).
- Injection of a gaseous precursor to sulfate (i.e. SO2 or H2S).
For many of the "engineering" related uncertainties, the degree of uncertainty depends on the assumed deployment timeline. We believe that it is likely for a hypothetical deployment to start with high-latitude, low-altitude deployment. For the engineering uncertainties, we assume a scenario with:
- A start date of 2035 for deployment sufficient for discernible surface climate impact (as defined above), corresponding to ~0.1°C of cooling, or roughly 1 Tg SO2/yr, as a threshold.
- Initial deployment at high latitude with modified existing aircraft. This means deployment at 13-15 km, at ~50-70°N and ~50-70°S, in each hemisphere's spring and early summer (e.g., MAMJ in the Northern Hemisphere, though the details here don't matter).
- While one could start at high latitudes with existing aircraft, within 5-10 years the deployment would transition to new aircraft at higher altitudes. In our scenario, there is never sustained global cooling of 0.5°C or more using high-latitude-only deployment, so for the purposes of this assessment, uncertainty in climate and earth-system response is based solely on the subtropical case. The high-latitude/low-altitude case is only relevant in this scenario for engineering uncertainties.
The purpose of defining this scenario is not to imply that it is preferable or more likely to occur than others, but simply to enable quantification and comparability of uncertainties.
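For readers who prefer the assumptions at a glance, the sketch below encodes the scenario as a simple configuration. All field names are our own hypothetical shorthand, and the cooling helper uses only the rough ~0.1°C per 1 Tg SO2/yr equivalence quoted above; the real response is nonlinear and itself uncertain.

```python
# Hypothetical, schematic encoding of the "well-managed, moderate" scenario
# described above. Field names and structure are illustrative only.
SCENARIO = {
    "climate_response_case": {                  # used for non-engineering uncertainties
        "injection_region": "subtropics",
        "injection_altitude_km": 21,
        "hemispherically_balanced": True,
        "anchor_cooling_C": 0.5,                # assessments anchored at 0.5°C of cooling
        "min_years_to_anchor_cooling": 10,      # reached at least a decade into deployment
        "injected_species": ("SO2", "H2S"),     # gaseous sulfate precursors
    },
    "engineering_case": {                       # used for engineering uncertainties
        "start_year": 2035,
        "initial_latitude_band_deg": (50, 70),  # both hemispheres, spring/early summer
        "initial_altitude_km": (13, 15),        # modified existing aircraft
        "years_until_new_aircraft": (5, 10),    # transition to higher-altitude aircraft
    },
}

def rough_cooling_C(injection_Tg_SO2_per_yr: float) -> float:
    """Crude linear estimate using the ~0.1°C per Tg SO2/yr equivalence
    quoted above; not a substitute for a climate model."""
    return 0.1 * injection_Tg_SO2_per_yr

print(rough_cooling_C(1.0))  # ~0.1°C: the 'discernible surface impact' threshold
```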
Notes and other assumptions
- In principle, the resolution of an uncertainty could produce either a negative or a positive result (i.e. SAI could turn out better than expected rather than worse). Here, we choose metrics that emphasize the potential downside in each case, so that an unfavorable resolution of a 'high' decision-relevance uncertainty would always reduce the benefit of SAI.
- We explicitly do not consider uncertainties and risks arising from the societal and geopolitical dimensions (e.g. "moral hazard"). Some of these could be added in the future, though it is likely that there would be substantial disagreement on the probabilities of these. This is an important task, and we encourage groups with more expertise in these areas to conduct similar efforts analyzing these non-technical uncertainties.
- Our approach is global in scale. For example, the response of the West African Monsoon to SAI is not explicitly named as an uncertainty, but instead is included under uncertainty in tropical circulation.
- We generally prioritize "direct" effects (comparing SAI impacts against a background warming scenario) over "residual" effects (comparing against an earlier climate state) to highlight novel risks caused by SAI.
- In various places we refer to the multi-model mean or range of some quantity (e.g. aerosol size distribution). This should be understood to refer to the set of modern global climate models that perform well enough to be useful in assessing the process in question; judging which models contribute is therefore itself part of the subjective assessment of the degree of uncertainty. The value in question (e.g. a mean across these models) may not (yet) be available in the literature. Our planned work towards v2 of the database includes calculating some of these values.
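A toy numerical illustration of the direct-versus-residual distinction, with invented numbers purely for exposition:

```python
# Invented temperature anomalies (°C relative to an earlier climate state),
# for illustration only.
t_earlier_state = 0.0   # earlier climate baseline
t_background = 3.0      # warming scenario without SAI
t_with_sai = 2.5        # same scenario with SAI deployed

direct_effect = t_with_sai - t_background       # -0.5°C: the change SAI itself causes
residual_effect = t_with_sai - t_earlier_state  # +2.5°C: departure from the earlier state
```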
Consultative process
The first version (v1) of the database is the product of several rounds of internal deliberation and external feedback solicitation. Following the initial public release, we will continue to develop and update the database with input from the community. The process that produced v1 was as follows:
- The structure, process, and initial content for the database were iterated on by the Reflective team, its academic advisors, and selected external SAI researchers.
- Targeted outreach to SAI researchers generated feedback on the content and placement of specific uncertainties.
- Based on these rounds of feedback, Reflective produced a v0 database, which was shared with the SRM research community via mailing lists and Slack channels, and by directly emailing relevant experts, with an open feedback process.
- After incorporating this feedback, the database was presented to Reflective's Scientific Advisory Board (SAB), and then refined into v1, the first public release.
Management policies
Process for Incorporating Feedback:
Anyone may submit feedback on this database using the link at the bottom right of the website (for general feedback) or at the top right of each uncertainty page (for comments associated with that specific uncertainty). The following guidelines govern how feedback is reviewed and incorporated:
- The Reflective team will review all feedback on a rolling basis, but will make a decision on incorporating feedback within one month of receiving it.
- Reflective will push any changes to the live database (from feedback or otherwise) once a month, and will document those changes publicly in a changelog. If no changes are made in a given month, the log will not be updated.
- Reflective will always provide justification in the changelog for the changes made, sometimes including references to specific pieces of feedback.
- Contributors will be asked when they submit feedback whether or not they would like to be cited in the changelog.
- Reflective may reach out to feedback contributors to learn more about their input.
Feedback Criteria:
- Feedback may be submitted on all parts of the database. However, please note that we are primarily seeking feedback on content (uncertainty and consequence levels, metrics, additional uncertainties, etc.) rather than structure (methodology, scenario assumptions, etc.).
- Reflective has a strong preference for feedback accompanied by supporting references, and is significantly more likely to incorporate such feedback.
- Reflective's decisions on whether or not to incorporate feedback will be based on the available literature, potential discussions with contributors, and internal team expertise.
Ensuring an Up-to-Date Database:
Reflective will assess new research on a rolling basis to determine whether updates are merited. Any updates made as a result of new literature will be documented monthly, along with changes made in response to community feedback.
Impacts view
In the future, we will link our uncertainties to a set of impacts. In some cases, the uncertainty directly maps onto a relevant impact. For example, uncertainty in the monsoon response to SAI directly maps onto the potential impact of "monsoon disruption". In other cases, a physical uncertainty may affect multiple impacts, or a given impact (e.g. sea-level rise) may be influenced by multiple uncertainties.
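One hypothetical way to picture this many-to-many relationship in code (all uncertainty and impact names below are illustrative placeholders, not database content):

```python
# Illustrative many-to-many mapping from physical uncertainties to impacts.
UNCERTAINTY_TO_IMPACTS: dict[str, list[str]] = {
    "monsoon response": ["monsoon disruption"],   # direct one-to-one mapping
    "ice sheet response": ["sea-level rise"],     # one impact...
    "ocean heat uptake": ["sea-level rise"],      # ...influenced by several drivers
    "stratospheric ozone chemistry": ["ozone loss", "surface UV change"],
}

# Invert to obtain the impacts view: which uncertainties bear on each impact?
IMPACT_TO_UNCERTAINTIES: dict[str, list[str]] = {}
for uncertainty, impacts in UNCERTAINTY_TO_IMPACTS.items():
    for impact in impacts:
        IMPACT_TO_UNCERTAINTIES.setdefault(impact, []).append(uncertainty)
```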
Our motivation for including both the "uncertainties" framing and the "impacts" framing is that they are aimed at different audiences:
- The uncertainties view is primarily aimed at scientists: it aims to prioritize among the near-term uncertainties and to identify the physical, chemical, and/or engineering drivers of each uncertainty.
- In contrast, the impacts view is primarily aimed at policymakers and the public, and aims to communicate:
- An acknowledgment of the impacts and risks about which policymakers and the public are concerned.
- How our scientific uncertainties relate to our understanding of these risks – i.e. do we need to learn more before we can confidently state that each item is or is not a risk?
Research roadmap
This database will be used to build a transparent, prioritized, and stage-gated SAI research roadmap to make required research digestible for funders and policymakers. We are currently undertaking user research to determine what aspects and features will be most useful to include in this project.
If you have any comments on this work, please submit them through the feedback link at the bottom right of this page.