A version of this article was originally published on June 1, 2015 on Greentech Media.
By Eric Gimon and Sonia Aggarwal
One of the most striking features of today’s energy economy is the pace and scale at which new technologies are transforming an industry accustomed to a much slower rate of evolution. New fossil fuel drilling techniques, drastically cheaper renewable generation, rapidly improving battery storage and power electronics, and the rise of information technology are changing, or will soon change, many of the fundamental features of our energy economy. Policymakers used to operating, planning, regulating, and legislating in an environment where capital deployment happens over the course of years and assets hold their value over decades need to adapt their practices to account for accelerating change and disruptive feedback loops.
To understand how to manage these rapid changes, policymakers rely heavily on quantitative analysis. And quantitative analysis, particularly forward-looking projections, relies heavily on underlying assumptions. The quality and breadth of these assumptions is thus crucial to making good decisions.
Cost assumptions, in particular, can be major drivers of study conclusions. Informed policy-making means study assumptions (especially deployment and cost figures) need to be scrutinized in each instance, and updated when necessary. Policymakers, as well as other stakeholders, should think critically about best practices for study inputs, and consider investing more in data gathering where necessary.
Unfortunately, deployment and cost numbers for new technologies are systematically underestimated, undermining smart policy choices.
Case in point: solar cost projections
To be more specific, let’s focus on one technology, solar photovoltaics (PV), which is currently disrupting the electricity sector. While technology costs are admittedly hard to forecast, especially for earlier-stage technologies that are still undergoing a great deal of innovation, empirical evidence shows that current methods for estimating costs are not serving us well. This is especially important when policy decisions rely on outdated inputs.
The economic fundamentals of solar PV are governed by the interplay between rapid deployment and fast annual decreases in total installed cost per unit. Rapid deployment decreases costs through “learning by doing” (also described by learning curves or experience curves). In turn, lower costs make solar PV more economically attractive and drive further deployment, creating a feedback loop.
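The learning-curve relationship described above can be written down directly: cost falls by a fixed fraction (the learning rate) with every doubling of cumulative deployment. The sketch below uses purely hypothetical numbers (a 20 percent learning rate and illustrative starting cost and capacity), not figures from any study cited here:

```python
import math

def learning_curve_cost(initial_cost, initial_capacity, new_capacity, learning_rate):
    """Unit cost after cumulative capacity grows from initial_capacity to
    new_capacity, where each doubling of capacity cuts cost by learning_rate."""
    doublings = math.log2(new_capacity / initial_capacity)
    return initial_cost * (1 - learning_rate) ** doublings

# Hypothetical: $4.00/W at 10 GW cumulative, 20% learning rate.
# Growing to 40 GW is two doublings: 4.00 * 0.8^2 = $2.56/W.
print(round(learning_curve_cost(4.00, 10, 40, 0.20), 2))
```

The feedback loop the article describes comes from running this relationship repeatedly: each cost decline spurs deployment, and each increment of deployment feeds back into further cost declines.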
Failure to account for improvements in the cost and capabilities of new technologies often follows a typical sequence: first, a drawn-out stakeholder consultation and study process results in time lags in cost figures. Next, a desire to lend credibility to the study by using “conservative” assumptions, alongside the need for consensus, further encourages pessimistic estimates of functionality, cost, and deployment rates. The real kicker comes when exponential increases in deployment (as have occurred for solar and wind generation) far outpace predictions.
Three current examples
Three instances of cost and deployment rates affecting study outcomes from top institutions illustrate this point: the Energy Information Administration’s just-released Annual Energy Outlook 2015, a 2014 re-evaluation of scenario costs for the National Renewable Energy Laboratory’s (NREL) 2012 “Renewable Electricity Futures Study,” and a detailed 2014 capital cost estimator report for the Western Electricity Coordinating Council.
A particularly egregious case of time lag can be seen in the Energy Information Administration’s (EIA) Annual Energy Outlook (AEO) projections for peak solar capacity. Previous outlooks reveal a consistent pattern of underestimation (see figure below). EIA projections directly inform national policymakers, so if EIA fails to use current data in its estimates, it becomes harder for policymakers, businesses, and other stakeholders to make intelligent decisions about the future, or to appreciate how best to manage change that is happening on the ground. For example, the North American Electric Reliability Corporation (NERC) worried that the 281 TWh of annual renewable electricity generation in 2020 called for by EPA’s proposed Clean Power Plan (CPP) would unduly affect reliability, when in fact this is exactly the amount of renewable energy already generated in 2014 with no ill effects. Because NERC’s November 2014 report relied on outdated AEO 2013 data, policymakers were unable to properly weigh the facts on the ground.
In NREL’s “Renewable Electricity Futures Study” (REF), a focus on consensus and technical feasibility drove conservative cost and deployment projections. Starting from 2010 cost data, the REF concluded in 2012 that an 80 percent renewable energy future was technically feasible and would result in moderate cost increases under conservative technology improvement assumptions. A 2014 follow-up study, just two years later, examined new 2050 cost-reduction scenarios from the same 2010 baseline, comparing the cost impacts of the 2012 study’s most aggressive Incremental Technology Improvement (ITI) scenario for 2050 with a new Advanced Technology Improvement (ATI) scenario. Remarkably, the 2012 ITI scenario’s 2050 cost estimates were already achieved in 2014 for solar (GTM Research/SEIA U.S. Solar Market Insight®) and wind (LBNL). What’s more, the new 2014 cost numbers are much more in line with the 2020 cost estimates of the updated ATI scenario. And that ATI scenario, which now also appears to rest on conservative cost estimates, reaches 80 percent renewables in 2050 at almost no increased cost relative to business as usual.
This conclusion, that 80 percent renewable energy in 2050 is not only technically feasible but also economically neutral compared to a business-as-usual case, represents a significant departure from the original REF conclusions, with important policy implications. Given that solar and wind deployment is accelerating, and that the study’s ATI scenario freezes cost reductions after 2020 and maintains conservative assumptions about improvements to grid management, it is plausible that an American grid powered by 80 percent renewables could be cheaper than business as usual by 2050.
A third case, the March 2014 “Capital Cost Review of Power Generation Technologies” prepared for the Western Electricity Coordinating Council’s (WECC) 10-year and 20-year planning processes, also misses the mark on solar cost declines (despite solid methodology) because it fails to recognize solar’s exponential deployment pattern. Deployment rates and learning rates (the cost decrease with every doubling of market size) are the two main ingredients for estimating future cost reductions. WECC applies learning rates consistent with historical data, but uses deployment rates that are much too low. One way to see this is that WECC estimates 2015 global solar installations at around 200 GW-dc, while current projections are roughly 15 percent higher. It is therefore unsurprising that while GTM/SEIA’s Q4 2014 report found residential PV costs had already dropped 10 percent during 2014, to $3.48/Watt-dc, the WECC review doesn’t anticipate these cost declines until 2020. Conservative growth estimates compound over longer time horizons and severely skew planning exercises. Policymakers are left constantly trying to catch up to the facts on the ground, unable to manage change effectively or plan properly for the future.
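The compounding effect of an underestimated deployment rate can be seen by running the same doubling arithmetic with a linear deployment path versus an exponential one over ten years. All figures below are hypothetical, chosen only to show the shape of the error, not taken from the WECC review:

```python
import math

def projected_cost(base_cost, base_cap, cap, learning_rate=0.2):
    # Cost falls by learning_rate with each doubling of cumulative capacity.
    return base_cost * (1 - learning_rate) ** math.log2(cap / base_cap)

base_cost, base_cap = 3.50, 100.0  # hypothetical $/W and GW cumulative

# Conservative (linear) vs. exponential (~30%/yr) deployment over 10 years.
linear_cap = base_cap + 10 * 10          # +10 GW/yr -> 200 GW (one doubling)
exponential_cap = base_cap * 1.30 ** 10  # ~1,379 GW (several doublings)

print(round(projected_cost(base_cost, base_cap, linear_cap), 2))       # 2.80
print(round(projected_cost(base_cost, base_cap, exponential_cap), 2))
```

With the same learning rate, the linear path yields only one doubling and a modest cost decline, while the exponential path yields several doublings and roughly twice the cost reduction. This is why an analysis can get the learning rate right and still badly overstate future costs.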
By using two- or three-year-old numbers for solar and other disruptive technologies, policymakers are driving blind: we overestimate the cost of deploying new technologies and underestimate their likely deployment. This leaves one foot in the past and one foot in the future, impairing our ability to take full advantage of important new technologies available today.
How can policymakers sort through the data?
We shouldn’t shoot the messengers for using outdated cost and deployment numbers. It’s hard to stay on top of the rapidly changing clean energy landscape, and the admirable urge to use numbers vetted by a stakeholder process tends to introduce even more delay. There may also be good reasons to be conservative; for example, analyses may be more widely accepted when their cost estimates are conservative.
On the other hand, smart policy is threatened by unrealistic cost numbers. Here are some suggestions for enabling a better informed policy process:
(1) When considering cost or deployment projections for rapidly evolving technologies, insist that the most recent publicly available data be used, or at least analyzed as a scenario. Hold consultants accountable for presenting numbers representative of current real-world conditions. Use multiple sources to triangulate.
(2) Request that studies test a wider range of cost assumptions to understand sensitivities and ensure policy isn’t being set based on out-of-date information. Use meaningful variations in exponential growth rates to bracket uncertainty and to guard against the natural human tendency toward linear extrapolation.
(3) Build continuous improvement into policies and/or use an iterative approach – don’t lock yourself into policy that presumes specific future prices, technologies, or capabilities.
In a fast changing energy landscape there is no excuse for flying blind.
Thanks to Shayle Kann, Jim Baak, and Michael O’Boyle for their input on this piece. The authors are responsible for its final content.