Analyses from diverse stakeholders show DOE’s proposed rule is off-base

On September 29, the Department of Energy released a Notice of Proposed Rulemaking (NOPR) that would bail out unprofitable coal and nuclear plants.  The notice argues generators with fuel on-hand are necessary for reliability and resilience, though these claims are demonstrably false.  Nevertheless, FERC has acted on DOE’s proposal and seeks comments by October 23 to determine whether DOE’s proposal becomes a FERC regulation.  FERC’s set of questions for stakeholders can be found here.

We created America’s Power Plan to deliver helpful analysis to policymakers to support the energy transition to a clean, affordable, resilient grid.  Though two of the three sitting FERC commissioners (Powelson & LaFleur) have publicly indicated they do not support the NOPR approach, this is no guarantee their decision will defend well-functioning wholesale markets and incorporate the best evidence that a renewable energy future is resilient and affordable.  It will be important to build a robust record that supports the analysis from DOE’s own report on reliability and resilience published in July—markets are more than adequately supporting reliability, and customers should benefit from lower costs as clean energy comes in to undercut other resources.  A forthcoming report in October from APP experts Robbie Orvis and Sonia Aggarwal will focus on solutions already underway in wholesale markets to value flexibility, a key ingredient of resilience.

To help our readers who want to get involved in this debate gather the best arguments out there, here are some resources from the experts of America’s Power Plan and others.

Resources from APP Experts:

The Department of Energy’s Notice of Proposed Rulemaking (NOPR) to FERC, directing the Commission to issue new tariff rules that reward certain (coal and nuclear) resources for so-called “resilience” benefits, fails to demonstrate how it would improve resilience while threatening to upend the very markets it purports to protect.

The nearly unprecedented NOPR requires FERC to establish a tariff and “recovery of costs and a return on equity” for plants that have “a 90-day fuel supply on site,” which they argue would enable the plants “to operate during an emergency, extreme weather conditions, or a natural or man-made disaster.” According to the NOPR, “compensable costs shall include, but not be limited to, operating and fuel expenses, costs of capital and debt, and a fair return on equity and investment.”

When old, established industries are threatened by new, better technologies, they often go running to Washington for special protections. It is an old practice, generally taxing the common good for private interests. Unfortunately, the U.S. Department of Energy has set a new record for gall in this practice in a fairly stunning move that would impose a new tax on electricity consumers and roil America’s power markets for years to come.

Here’s the story: Renewable energy — especially wind and solar — has plummeted in price. Today a new wind farm, for example, is often cheaper than just the operating costs of an old coal power plant. Cheap natural gas creates additional price threats to existing coal or nuclear. And these favorable economics for renewables and gas don’t even count the public benefits they create through clean air, reduced greenhouse gas emissions and avoided fuel price spikes. . . .

What analysts are saying

ICF forecasts DOE’s proposal could cost ratepayers between $800 million and $3.8 billion annually through 2030, and reduce development of new natural gas-fired capacity by 20-40 gigawatts.


Rhodium Group says only 0.0007% of nationwide power disruptions over the past five years were due to fuel supply problems, and the vast majority were the result of severe weather damaging transmission and distribution.

 

What stakeholders are saying

From the Media

Analysis of the DOE Report

Emerging Lessons on Performance-based Regulation from the United Kingdom

A version of this article was published on Greentech Media on October 6, 2017

 

Many U.S. states are considering moving from cost of service regulation for utilities toward a regulatory structure that incents efficient fleet turnover, incorporates clean energy and other cost-effective technologies, and stimulates smarter build-or-buy decisions.  These conversations are motivated by aging infrastructure, new customer energy use patterns, innovative competition from third-party service providers, the need for flexibility to accommodate carbon-free variable generation, a recognition of utilities’ unique role in the electric system, and a commensurate desire to ensure they remain financially viable.

Performance-based regulation (PBR) has emerged as a promising potential solution to these challenges for many public utility commissions (PUCs).  Some, like Ohio, Minnesota, and Missouri, have initiated informal discussions or official workshop series on the topic. Others, like Pennsylvania and Michigan, have commissioned or directly conducted research.  And still more, like Rhode Island, Illinois, and New York, have already taken concrete steps in this direction.

A consensus is forming around the need to improve utility incentives, and the potential for performance-based regulation to meet that need is widely discussed. But regulators interested in PBR are searching for real-life examples where it has worked well.  Luckily, the United Kingdom began moving in this direction a few years ago.  Though the U.K. differs from many U.S. states in its public policy priorities, regulatory capacity, and philosophy, its experience with RIIO (described below) already provides lessons relevant to the U.S.

RIIO – performance-based regulation at work

First, some context: the U.K.’s Office of Gas and Electricity Markets (Ofgem) regulates 14 electric companies and four gas companies, akin to a public utility commission in the U.S.  More than 25 years ago, the U.K. market was restructured by splitting generation and distribution businesses, creating a centralized generation market, and decoupling distribution utility revenues from sales.  In 2010, after a year of gathering stakeholder comments, Ofgem made another set of major changes. These reforms were designed to keep costs low for customers, seeking “better value for money,” and encouraging innovation among remaining monopoly utilities in gas distribution, electricity transmission, and electricity distribution.  Ofgem’s changes “sought to put consumers at the heart of network companies’ plans for the future and encourage longer-term thinking, greater innovation and more efficient delivery.”

The new regulatory structure comprised a multi-year rate plan with a revenue cap plus performance incentives.  The program, affectionately called RIIO (Revenue set to deliver strong Incentives, Innovation, and Outputs; or Revenue = Incentives + Innovation + Outputs), went into effect just over four years ago, and contains several important design features:

  • RIIO extends the time between financial reviews to eight years, with a review after four (the first part of this four-year review is happening now, and inspired this update).
  • RIIO combines utility capital expenditures and operational expenditures into one capped bucket of allowable revenue (a “revenue cap”), and enables a rate of return on the whole (a structure they call “totex” to indicate both capex and opex). This design intends to do two things:
    • First, the revenue cap provides financial incentives for utilities to spend prudently, as they have an opportunity to keep (at least some of) whatever costs they save as profit.
    • Second, the totex reduces the capital bias that can arise from traditional cost of service regulation (which allows a rate of return for capital expenditures but treats operational expenditures as a pass-through). When utilities can only make money from investments, they will systematically choose capital solutions over operational solutions that may be more cost effective—one of the key insights from RIIO for U.S. regulators (see the sketch after this list).
  • RIIO highlights six important goals or “outcomes”, for which it defines quantitative metrics and sets specific targets. RIIO’s outcomes will likely sound quite familiar to U.S. regulators: safety, environment, customer satisfaction, connections, social obligations, and reliability/availability.
  • Beyond the financial incentives created by the revenue cap (discussed above), RIIO adds financial incentives and penalties for each of the outcome categories. These outcome-based performance incentives sum to about 200-250 basis points of incentives for excellent performance or a similar magnitude of penalties for poor performance.  This design feature is intended to motivate utilities to innovate to deliver what customers want out of the utility system.
  • RIIO tracks these outcomes and others via a standardized scorecard, making it easier for stakeholders to follow which goals utilities are meeting or exceeding, and where they may be falling short.
  • RIIO also held aside a pot of funding for innovative projects from R&D through pilots, to kick-start the intended shift in utility culture. In order to be eligible for these funds, utilities must agree to share lessons and ideas generated by the research.
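To make the capital-bias point concrete, here is a minimal sketch comparing utility earnings under the two structures. The rate of return, sharing factor, and project costs are illustrative assumptions, not Ofgem’s actual RIIO parameters.

```python
# Illustrative comparison of utility earnings under traditional cost-of-service
# regulation versus a RIIO-style totex revenue cap. All figures are hypothetical
# assumptions chosen for clarity, not actual Ofgem parameters.

RATE_OF_RETURN = 0.07      # allowed return on rate-based spending (assumption)
SHARING_FACTOR = 0.5       # share of totex underspend the utility keeps (assumption)

capital_fix = 10.0         # $M capital solution (e.g., new substation equipment)
operational_fix = 6.0      # $M operational solution (e.g., demand response contract)

# Traditional cost of service: return is earned on capex only; opex is a pass-through.
earnings_capex = capital_fix * RATE_OF_RETURN        # 0.70 $M
earnings_opex = 0.0                                  # no return on the cheaper fix
print(f"Cost of service: capex earns {earnings_capex:.2f} $M, opex earns {earnings_opex:.2f} $M")
# The utility profits only by choosing the $10M option -> capital bias.

# Totex revenue cap: the allowance is set (say, at the capex cost) and the utility
# keeps a share of any underspend, whether the fix is capex or opex.
allowance = capital_fix
earnings_totex = (allowance - operational_fix) * SHARING_FACTOR  # 2.00 $M
print(f"Totex cap: choosing the cheaper fix earns {earnings_totex:.2f} $M")
# The cheaper operational solution is now the more profitable one.
```

Under the cost-of-service rules, the utility earns nothing by choosing the cheaper operational fix; under the totex cap it earns more that way, which is the incentive reversal the bullet above describes.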

Many of these elements of PBR are under discussion in states around the U.S., but RIIO combines all of them into one holistic change to utility regulation.

Lessons from across the pond

With that context, let’s dig into the lessons from RIIO’s mid-term review.  Ofgem recently released an open letter that begins the discussion of those lessons, based on preliminary evaluation work.

Most important, experience so far supports the notion that revenue caps with totex and carefully calibrated outcome-based performance incentives can drive innovation, stabilize or improve utility profitability, and focus utility attention on the outcomes customers most want.  For example, in the first performance year, many distribution utilities beat forecasts for customer bills, exceeded most of their performance targets, and achieved returns on equity averaging just over nine percent – 300 basis points more than their estimated six percent cost of equity.  Beyond performance numbers, anecdotal evidence also suggests that utilities have shifted their focus toward performance under RIIO.  There is no indication that Ofgem and the U.K. utilities will move away from this regulatory structure after testing it over the last four years.

Lesson for U.S. regulators: It’s worth exploring whether revenue caps, outcome-based performance incentives, and perhaps totex are right for your state.  Make sure the combined financial impact of performance incentives is carefully calibrated and just large enough to capture utility management attention.

RIIO’s detailed design also provides important lessons for U.S. regulators looking to move toward rewarding utilities based on performance.  Most of the emerging lessons relate to the difficulty of getting long-term projections right, and the need for automatic calibration along the way.

First, setting the right revenue cap is very challenging in a world of growing uncertainties. For example, will efficiency flatten demand or will electrification kick-start demand growth?  External factors can cut both ways, but in the U.K., Ofgem notes that “forecasts for real price effects in setting allowances…appear in some instances to have resulted in gains for the companies.”  Thankfully, Ofgem designed the cap to share gains or losses between utilities and customers, but still, over the last few years, it is possible the utilities earned more than efficient business practices would have yielded alone under better-calibrated revenue caps.

Lesson for U.S. regulators: Pay attention to important normalization factors (for external factors like GDP, inflation, population changes, or electrification rates) and build in transparent off-ramps and correction factors (for external factors like storms) from the beginning.  Look for ways to share value fairly between utilities and customers.

Second, Ofgem’s mid-term review identified the eight-year length of the multi-year revenue cap as a key source of uncertainty.  Of course, the tension here is between setting targets too far into an uncertain future versus creating a long enough runway for utilities to innovate and deliver desired outcomes.  Ofgem points to rapidly changing technologies and competitive forces on the distribution side, urging a review of the length of the period “given the potential scale of future uncertainty facing network companies.”

Lesson for U.S. regulators: Create programs that last less than eight years, or build in predefined points for review and adjustment before the eight-year mark.

Third, though some U.K. utilities are paying penalties for underperformance on outcomes, most are successfully earning incentives for performing well.  That the incentives bind in both directions is healthy, but more utilities are performing (and earning) well than poorly on their outcomes.  This may indicate more ambitious performance targets could have been warranted to better share benefits between utilities and customers.

Lesson for U.S. regulators: Information asymmetry is likely to tilt targets in the utilities’ favor.  It may be worth commissioning independent studies of achievable performance to assess whether proposed targets are sufficiently ambitious. It is less risky for customers to start with small financial incentives and work up, rather than over-incent utilities and then have to squeeze incentives down to the right level.

Fourth, even though Ofgem worked with utilities and stakeholders to define outcome metrics carefully at the program’s start, a couple of instances of ambiguity still arose during the performance period.

Lesson for U.S. regulators: Invest the time up front, before a performance-based program begins, to define outcome-based performance measures clearly and quantitatively.

More lessons will arise as Ofgem continues its midterm review of RIIO, but we hope that these first, emerging lessons will be useful to U.S. regulators considering performance incentive program design questions today.

Trending Topics – Getting the most out of vehicle electrification

A version of this article appeared on Greentech Media on August 23, 2017

By Mike O’Boyle

Electric vehicles (EVs) are on the path to becoming mainstream, thanks to strong policy support and rapid lithium-ion battery cost declines.  BNEF projects 40 percent of new U.S. car sales will be electric in 2030, with EVs cost-competitive without subsidies around 2025.  That’s an extra 24 terawatt-hours (TWh) of new flexible demand – roughly half a percent of today’s U.S. consumption – added to America’s power system annually in just over a decade, a regulatory blink of an eye.  Depending on when EVs charge, that translates to 3-6 gigawatts (GW) of flexible demand-response capacity added each year – roughly half of today’s total demand response capacity in PJM Interconnection.
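A quick back-of-envelope check of those figures. The fleet size, mileage, and efficiency inputs below are rough assumptions for illustration, not BNEF’s actual model.

```python
# Back-of-envelope check on the EV demand figures above. The fleet, mileage,
# and efficiency inputs are illustrative assumptions, not BNEF's model.

new_ev_sales = 0.40 * 17e6        # 40% of ~17M annual U.S. light-duty sales
miles_per_year = 12_000           # average annual miles per vehicle (assumption)
kwh_per_mile = 0.30               # typical EV efficiency (assumption)

added_twh = new_ev_sales * miles_per_year * kwh_per_mile / 1e9
print(f"New demand per model year: {added_twh:.0f} TWh")   # ~24 TWh

us_annual_twh = 4_000             # rough U.S. annual consumption
print(f"Share of U.S. demand: {added_twh / us_annual_twh:.1%}")  # ~0.6%

# Capacity depends on when that energy is drawn: spread the daily charging
# energy over a 12-hour window versus a 24-hour window.
gwh_per_day = added_twh * 1000 / 365
for window_hours in (24, 12):
    print(f"{window_hours}h charging window -> {gwh_per_day / window_hours:.1f} GW")
```

Spreading the energy over all 24 hours yields roughly 3 GW; concentrating it in a 12-hour window yields roughly 6 GW, which is where the 3-6 GW range comes from.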

Electric utilities will play a major role supporting transportation electrification, and as electricity providers, will benefit from additional sales and infrastructure required to meet new demand.  An ICCT report found a statistically significant link between grid-connected EV infrastructure and vehicle electrification, and a Brattle Group report showed electricity demand from a fully electrified transportation fleet in 2050 dwarfs potential lost sales from distributed solar generation by a factor of five.  So whether or not utilities are allowed to own and rate base charging infrastructure, massive investment opportunities are coming down the road.  But if utility shareholders receive new earnings opportunities through EVs, what value will customers get in return?

Last year America’s Power Plan published a five-step framework for getting the most out of grid modernization, to ensure customers get the value promised from grid modernization investment programs.  Electrification is one subset of these efforts, and a similar approach (adding market development as a precursor) can help regulators prepare for immense market changes.  Getting the most out of vehicle electrification requires supporting market development, conducting integrated distribution planning, defining goals, setting metrics, creating targets, and exploring changes to utility financial incentives.

Step 1a – Supporting market development

Before developing a comprehensive EV evaluation framework, utilities will have to experiment and innovate.  In the short term, before EV adoption ramps up, regulators should support innovative grid-edge applications through pilots and an initial round of EV infrastructure (rate-based or not) that lays the groundwork for EVs to become grid resources.  To turn EVs into reliable demand response and storage resources, these applications still need work to operate reliably, including communication protocols, standards, and consistent operational practices.  New rate designs will also have to be tested and developed.

PUCs haven’t yet developed robust frameworks for assessing the prudency of utility charging infrastructure investment, so initial approval of a closely watched first round of experimental investments can encourage innovation and inform regulation.  Commissions may consider allowing utilities to provide incentives to help customers electrify in this early phase, then pare incentives back in the future under a more comprehensive approach as the scale and scope of EV infrastructure grows and the industry becomes more mature.  Rocky Mountain Institute’s report, Pathways for Innovation, provides a useful roadmap from experimentation to deployment.

Step 1b – Integrated distribution planning – EV edition

Integrated distribution planning (IDP) determines the hosting capacity and potential benefits of distribution system resources under different utility control scenarios – a prerequisite to optimize distributed energy resource deployment alongside conventional supply-side resources. IDP is heating up with new proceedings in Maryland, New Hampshire, New York, and Minnesota (see the 50 States of Grid Modernization for the complete list), joining early adopter states like Hawaii and California.

Among other valuable results, IDP generates the data utilities need to understand where and when EV charging can provide the greatest benefit to all customers.  One key element is location; IDP helps identify uncongested circuits with the smallest incremental cost of adding charging capacity.  On congested circuits, EV chargers that would otherwise add to congestion can reduce their system-wide impact if customers receive incentives to charge during periods of low demand.  In addition, IDP allows utilities to:

  • Plan for various rates of EV adoption
  • Understand the benefits of smart versus regular chargers
  • Plan for different combinations of autonomous vehicles, public EV fleets, and individual customers

Of course, these efforts should be coordinated with municipal and state transportation agencies that will likely play primary roles in vehicle electrification, including route planning, congestion, and clustering of public-facing chargers.

Finally, IDP provides visibility into the economics and viability of EVs as system resources for managing wind and solar variability.  Rather than build new natural gas peakers, system operators can rely on smart chargers that respond to their control signals to help manage peaks by delaying charging.
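As a toy illustration of the kind of screening IDP enables, the sketch below ranks feeders by their headroom for new charging load. The circuit data are hypothetical; a real hosting-capacity study would use hourly load shapes and power-flow analysis.

```python
# A minimal sketch of the kind of circuit screening integrated distribution
# planning enables: rank feeders by headroom for new EV charging load.
# The circuit data below are hypothetical.

circuits = [
    # (name, rated capacity MW, peak load MW)
    ("Feeder-A", 12.0, 11.5),
    ("Feeder-B", 10.0, 6.2),
    ("Feeder-C", 8.0, 7.9),
]

def headroom_mw(rating: float, peak: float) -> float:
    """Remaining capacity at system peak; a real study would use hourly data."""
    return rating - peak

ranked = sorted(circuits, key=lambda c: headroom_mw(c[1], c[2]), reverse=True)
for name, rating, peak in ranked:
    print(f"{name}: {headroom_mw(rating, peak):.1f} MW of headroom for charging")
# Feeder-B is the cheapest place to add chargers; Feeder-A and Feeder-C would
# need managed charging or upgrades to host significant new load.
```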

Step 2 – Define the goals of a vehicle electrification program

The second step starts by asking what regulators, on behalf of customers, hope to achieve by allowing utility investments in EV deployment, and what role the utility should play.  Traditional goals of affordable, reliable, safe power aren’t going anywhere, and EVs should help achieve them.  But other goals, such as facilitating customer charging, improving local air quality, and decarbonizing the power sector, are newer, and they directly shape decisions about EV infrastructure and demand management.

An obvious primary goal of EV deployment should be increasing service convenience and quality for a growing EV customer base while increasing the number of EVs on the road.  Serving customer demand for EVs, including in disadvantaged communities, means facilitating new smart-charger roll-out and demand management systems that help customers charge rapidly, in many locations, as cheaply as possible.  Though investment is required, time-varying rates and demand response payments can help EVs enhance affordability, improve existing infrastructure efficiency, and enable autonomous EV charging and aggregation as flexible resources.

Local air quality is another common goal of vehicle electrification, which will likely benefit low-income communities that tend to have worse air quality than average.  Because the utility plays a significant role supporting EV deployment, some benefits to local air quality can be attributed to their performance in promoting EV adoption.

EVs not only decarbonize the transportation sector, they also help decarbonize the power sector.  Vehicle electrification has great potential to facilitate integrating local and bulk-system renewable energy resources, i.e. adding flexibility by shifting demand from one hour of the day to another, or providing short-term frequency response.  Shifting is a key strategy for integrating variable renewables from Teaching the Duck to Fly.  If vehicle manufacturers and customers can agree on rules for discharging, this flexibility potential will nearly double.

Step 3 – Metrics of a successful vehicle electrification program

Metrics should focus on outcomes reflecting policymaker goals – if it is a state goal, electrification itself should be measured and publicly reported by the utility, in terms of energy (kWh), customers (vehicles/customer), electric vehicle miles traveled (eVMT), and peak-coincident charging (kW).  These four metrics help customers understand progress in meeting transportation electrification goals.  Regulators can also consider comparing overall spending on charging infrastructure with electrification metrics, giving a sense of grid spending per unit of electrified transportation.

Often vehicle electrification outcomes are subsets of a greater goal, such as clean energy, affordability, or reliability.  System metrics for grid modernization or clean energy can subsume vehicle integration metrics. Because new vehicles necessarily increase demand, utility performance in key areas like peak demand management (% MW reduction), efficiency (kWh/customer), carbon emissions (CO2/MWh), and other air pollution must account for “beneficial electrification,” while maintaining high standards for limiting the impact of EV adoption on those outcomes.  For example, when New York’s Consolidated Edison recently adopted an outcome-oriented efficiency metric, kWh per customer, it normalized for vehicle and appliance electrification by adding the new customer load to the target.
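A minimal sketch of that normalization logic, using hypothetical numbers rather than Con Edison’s actual figures:

```python
# Sketch of normalizing a kWh-per-customer efficiency metric for beneficial
# electrification, in the spirit of the Con Edison example above. All numbers
# are hypothetical.

baseline_kwh_per_customer = 7_000     # target before electrification adjustment
customers = 1_000_000
new_ev_load_kwh = 150e6               # verified new EV charging load this year

# Without normalization, EV growth looks like an efficiency failure.
raw_target = baseline_kwh_per_customer
# With normalization, the verified electrified load is added to the target,
# so the utility is still held to efficiency gains on everything else.
adjusted_target = baseline_kwh_per_customer + new_ev_load_kwh / customers

actual = 7_100                        # observed kWh per customer
print(f"Raw target: {raw_target} kWh -> utility misses by {actual - raw_target} kWh")
print(f"Adjusted target: {adjusted_target:.0f} kWh -> utility beats it by {adjusted_target - actual:.0f} kWh")
```

The adjustment keeps the efficiency standard intact while ensuring the utility is not penalized for load growth the state actually wants.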

Step 4 – Create an open process to set targets

Once metrics are selected, reasonable targets can help guide utility planning. A transparent target-setting process should include plenty of time for stakeholder review and comment, and targets should be set far enough into the future to accommodate investment and program timelines. Regulators should consider the unique context of each region or utility, and place targets within a range that represents a stretch, but not an unreasonable one.

Pilots can be helpful where the potential for utilities to optimize EV charging via rates or demand response is unknown. For example, a recent BMW-Pacific Gas & Electric pilot program successfully demonstrated that EVs can serve as reliable and flexible grid assets, giving regulators a sense of what is possible.

Target setting is one part art and one part science, raising the importance of a transparent and predictable process for calibrating targets based on real-world performance. Laying out the target revision process ahead of time is critical to lowering utility investment risk.

Step 5 – Consider linking utility returns to performance

Smart Grid Hype & Reality notes that today’s “investor-owned utility rewards are based on processes (investment), not outcomes (performance).”  To ensure utilities are properly motivated to deliver new power sector outcomes, regulators that are unsatisfied with the results of measurement and target setting should consider linking utility compensation to performance.

Many different resources explore options for reorienting utility compensation around performance, including Synapse’s Handbook on Utility Performance Incentive Mechanisms, America’s Power Plan’s Cost and Value Series Parts One and Two, RAP’s report for the Michigan PUC, and Ceres and Peter Kind’s Pathway to a 21st Century Utility.  Many of these concepts are in the proving phase in the U.K., are being implemented in New York and Massachusetts, and are being explored in “utility of the future” proceedings in Illinois, Ohio, Minnesota, Oregon, and Hawaii.

For EVs in particular, two methods could be helpful: a conditional rate of return on charging infrastructure based on performance (where utility ownership is allowed), and overall performance incentive mechanisms.  Utility commissions have found, and will undoubtedly continue to find, it prudent for utilities to build, own, maintain, and operate charging infrastructure, particularly on public property, in low-income areas, and for large businesses and parking structures.  In such cases, the key metrics outlined above could be linked via basis-point adjustments to utilities’ return on investment on those rate-based assets.  Regulators could also set a revenue cap on charging infrastructure, with incentives to achieve electrification targets while spending below budget.

Performance incentives are essentially cash bonuses that increase utility returns if specific targets are met, with penalties when the utility falls short.  For example, a utility could be rewarded for reducing peak demand (MW) below the target set by regulators by turning off EV chargers when needed.
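Here is one way such a mechanism might be structured. The rate base, base ROE, targets, and 50-basis-point band are all assumptions for illustration, and the earnings line is simplified by treating the whole rate base as equity.

```python
# Sketch of a symmetric performance incentive: the utility's allowed return on
# its charging-infrastructure rate base moves up or down with performance
# against a peak-reduction target. All parameters are hypothetical.

rate_base = 200e6          # $ of EV charging assets in rate base
base_roe = 0.095           # allowed return on equity
max_adjustment_bps = 50    # +/- 50 basis points at the performance bounds

def adjusted_return(target_mw: float, achieved_mw: float) -> float:
    """Scale the ROE adjustment linearly with performance vs. target, capped at +/-1x."""
    performance = min(max(achieved_mw / target_mw - 1.0, -1.0), 1.0)
    return base_roe + performance * max_adjustment_bps / 10_000

for achieved in (8.0, 10.0, 12.0):   # MW of peak reduction vs. a 10 MW target
    roe = adjusted_return(10.0, achieved)
    # Simplification: treats the entire rate base as equity earning the ROE.
    print(f"{achieved:.0f} MW achieved -> ROE {roe:.2%}, earnings ${rate_base * roe:,.0f}")
```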

Electrification presents a massive opportunity for utilities to invest productive capital into the distribution system.  Reorienting utility investment around outcomes can help customers get commensurate value in return.

++ Thanks to Phil Jones, Chris Nelder, and Nic Lutsey for their input on this piece.  The author is responsible for its final content.

Trending Topics – Energy efficiency’s existential crisis is also an opportunity

A version of this article was originally published on July 25, 2017 on Greentech Media.

By Matt Golden

Just about every plan to achieve a clean energy low carbon future includes a large helping of energy efficiency. But while it’s true that efficiency is generally much cheaper than generation, energy efficiency as we know it faces an existential challenge.

The rate at which we’re deploying efficiency is simply not keeping pace with utility and grid needs. But even if we were able to achieve scale, in the current construct, it’s unclear how we would pay for the massive investment required.

Fortunately, there is another way. We now have the data, market, and financing in place to procure energy savings to solve time- and location-specific grid problems. Bundling projects into portfolios turns efficiency into an investor and procurement-friendly product that has manageable and predictable yields.

By treating efficiency as a genuine distributed energy resource (DER), we can stop relying on ratepayer charges and programs and instead unleash private markets to deploy and fund energy efficiency projects in the same way we do solar, wind and other energy resources — through long-term contracts that create cash flows which can be financed like grid infrastructure through project finance rather than consumer credit.

Efficiency’s existential dilemma

While many of our current efforts are focused on overcoming barriers to demand, the elephant in the room is that if we get efficiency on track toward real scale, current ratepayer-funded programs will simply run out of money.

According to a recent blog post by ACEEE, combined efficiency investments across every sector of the economy (not just buildings) range from about $60 to $115 billion a year in the United States. A conservative estimate from a 2009 McKinsey report puts the price tag of upfront efficiency investment at $520 billion by 2020.

By comparison, current efficiency program spending hovers around $8 billion a year nationally, supporting a program-driven market, including private capital, of approximately $16 billion. It’s a big number, but compared to the capital investment needed to achieve the potential of energy efficiency in America’s buildings, which will be counted in the trillions, it’s a drop in the bucket.

Rethinking efficiency in order to engage markets

The grid is undergoing a transformation from central generation to clean distributed sources of power such as solar and wind. This has resulted in new challenges as we integrate intermittent renewables, often at the grid edge. The imbalance between California’s daytime solar supply and evening demand (the “duck curve”) is contributing to regular periods of negative pricing and driving the need for time- and location-responsive distributed energy resources (DER) such as storage, EV charging, and demand response. As DER markets emerge, it is clear that there are no silver bullets to solve this problem, and that current resources are both costly and in short supply compared to the scale of the challenge.

Energy efficiency represents the largest and least expensive of these potential resources, but has largely been left out of the conversation. This is because traditional energy efficiency is based on monthly average savings and therefore can’t solve for grid issues that vary by location and time.

However, as smart meter interval data becomes available in an increasing number of states, and portfolios of efficiency projects and data are aggregated, we will have the ability to calculate savings on portfolios of energy efficiency projects in terms of both time and location. This analysis creates resource curves (time and locational savings load shapes) that can be used to design efficiency portfolios that reliably deliver “negawatts” where and when they are most needed, rather than simply average reductions in consumption for a given month.
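A simplified sketch of that calculation, building an hour-of-day savings shape from interval data. The meter data here are synthetic, and a real resource curve would also be differentiated by location and season.

```python
# Sketch of building a simple "resource curve" from smart-meter interval data:
# average the metered savings of a project by hour of day, so the value of the
# savings can be matched to when the grid needs them. The data are synthetic.

from collections import defaultdict
from datetime import datetime, timedelta

# Synthetic hourly savings (kWh) for one project over two days: an efficiency
# measure that saves most during the evening peak.
start = datetime(2017, 7, 1)
intervals = [(start + timedelta(hours=h),
              2.0 if 17 <= (start + timedelta(hours=h)).hour <= 21 else 0.5)
             for h in range(48)]

by_hour = defaultdict(list)
for timestamp, saved_kwh in intervals:
    by_hour[timestamp.hour].append(saved_kwh)

resource_curve = {hour: sum(vals) / len(vals) for hour, vals in sorted(by_hour.items())}
peak_hours = {h: kwh for h, kwh in resource_curve.items() if 17 <= h <= 21}
print(f"Average savings 5-9 PM: {sum(peak_hours.values()) / len(peak_hours):.1f} kWh/h")
print(f"Average savings all hours: {sum(resource_curve.values()) / 24:.1f} kWh/h")
# A portfolio whose curve concentrates savings in peak hours is worth more
# to the grid than one that only reduces average monthly consumption.
```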

Rather than paying in advance through rebates for traditional energy efficiency that doesn’t differentiate between peaks or valleys in demand, utilities will be able to procure savings based on when and where they are happening. By breaking down “energy efficiency” into classes of projects that deliver more valuable resource curves, we can make savings worth more when they have the biggest impact, giving market players the tools and incentives they need to optimize their offerings to deliver the most valuable results to the grid and the best deal to customers.

The existential question

With utilities and wholesale market procurement providing a long-term and scalable buyer, the next question is: how do we finance the massive upfront investment required to achieve the energy efficiency potential locked up in America’s existing building stock?

By making efficiency work like other capacity resources, we solve for two of the outstanding existential problems that have stood between energy efficiency and its potential: how to bring efficiency to bear as a real solution for modern day grid issues such as intermittent generation, and how to attract the private investment required to get us there.

Rather than paying rebates upfront and measuring monthly outcomes years later — resulting in prescriptive programs and costly regulations — utilities can use standard open-source methods and calculations, such as CalTRACK and the OpenEEmeter, to establish markets in which a wide range of businesses can enter into mid- or long-term contracts similar to supply-side PPAs. Under these savings purchase agreements (SPAs), providers are paid for performance, based on normalized metered savings, for the value of how they shift load over time.
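To illustrate the settlement logic, the sketch below prices metered savings by when they occur. This is a schematic, not the actual CalTRACK or OpenEEmeter interface, and both series are hypothetical.

```python
# Sketch of a pay-for-performance settlement under a savings purchase
# agreement: metered savings are priced by when they occur. This is a
# schematic illustration, not the CalTRACK or OpenEEmeter API.

# Hourly normalized metered savings (kWh) and avoided-cost prices ($/kWh)
# for a single day; both series are hypothetical.
savings_kwh = [0.5] * 17 + [2.0] * 5 + [0.5] * 2          # evening-weighted savings
price_per_kwh = [0.03] * 17 + [0.15] * 5 + [0.03] * 2      # peak pricing 5-9 PM

payment = sum(s * p for s, p in zip(savings_kwh, price_per_kwh))
flat_payment = sum(savings_kwh) * 0.05                      # flat-rate alternative

print(f"Time-differentiated payment: ${payment:.2f}/day")
print(f"Flat-rate payment:           ${flat_payment:.2f}/day")
# The same kWh earn more when they land on peak hours, which is exactly the
# signal aggregators need to design peak-targeting efficiency portfolios.
```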

A new pay-for-performance arrangement would flip the way we pay for energy efficiency on its head. Whereas today energy efficiency investments are financed by consumers either out of pocket or based on their credit or the value of their asset, we can instead use project finance in the same way we pay for power plants and other distributed resources — by paying for performance over time and financing the resulting cash flow. Rather than relying on individual consumers to subsidize the public benefits of efficiency, the costs would be spread across all ratepayers and would be rate-based like other utility investments.

The solutions we need are available today

While it’s true that energy efficiency on individual buildings can be all over the map, at the portfolio level it makes for a remarkably stable investment. The transition from attempting to be right all the time to instead accepting quantifiable risks and managing performance through portfolios marks a transition from engineering to finance.

To put it another way, while guaranteeing outcomes to a single customer is exceptionally hard and costly (and has diminishing returns), a portfolio of projects will perform with consistent results, providing purchasers with high confidence in performance and yielding consistent returns for investors. Combined with investment grade insurance products to backstop the performance of bundled portfolios of efficiency projects, financing cash flows of efficiency portfolios works exactly like other grid infrastructure investments.
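A small simulation makes the portfolio point concrete. The 40 percent project-level variability is an assumption for illustration; the mechanism, not the specific numbers, is the point.

```python
# Why portfolios make efficiency investable: individual project savings are
# noisy, but the average over a bundle is predictable. The spread of the
# portfolio mean shrinks roughly as 1/sqrt(n). Numbers are illustrative.

import random
import statistics

random.seed(42)

def project_realization() -> float:
    """One project's realized savings as a fraction of predicted (mean 1.0, sd 0.4)."""
    return random.gauss(1.0, 0.4)

for n_projects in (1, 25, 400):
    # Simulate many portfolios of size n and look at the spread of their averages.
    portfolio_means = [
        statistics.mean(project_realization() for _ in range(n_projects))
        for _ in range(2000)
    ]
    print(f"n={n_projects:>3}: portfolio realization sd = {statistics.pstdev(portfolio_means):.3f}")
# The sd falls from ~0.40 for a single project toward ~0.02 for 400 projects,
# which is what lets investors and utilities treat the bundle like
# conventional grid infrastructure.
```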

Efficiency aggregators compete to enter into savings purchase agreements to deliver demand reductions to utilities when and where they need them. Utilities pay for these savings as they are delivered through procurement. Aggregators can then insure and finance these cash flows and compete to deliver products that both resonate with customers and are optimized to maximize the grid value. So long as efficiency is cheaper than the marginal costs of alternatives such as generation, storage, or transmission and distribution investments, it is a good deal for ratepayers.

Paying for performance in practice

This approach may sound far-fetched and futuristic — but it isn’t.

Everything needed to quantify the impact of energy efficiency resource curves, engage private capital, and manage performance risk is ready to go. The only thing left is for regulators and utilities to establish open and competitive markets to give investors and business model innovators a place to play.

In response to California’s AB 802 and SB 350, which require pilots in normalized metered efficiency and pay-for-performance, PG&E recently selected winning bidders for its first pay-for-performance pilot, in which aggregators will be paid based on metered performance over time rather than through customer rebates and time-and-materials payments to implementers. The pilot also represents a first step toward using efficiency to help close the 4,000 GWh gap created by the planned shutdown of the Diablo Canyon nuclear plant in 2025.

Pay-for-performance efficiency isn’t just limited to California. Similar efforts are getting underway in New York, Massachusetts, Illinois, Oregon, Texas and Washington. However, many of these pilots are still extremely small scale, unnecessarily complex, and entangled in webs of outdated regulations — we are stuck in purgatory between current regulations, designed to manage programs that pay in advance, and future markets, where aligning incentives means regulators can focus on sending the right price signal and preventing abuse and gaming.

Steps to reach scale

The transition from programs to markets is not a one step process. It requires a series of investments in data and a cultural shift from regulators and utilities toward adopting financial principles of portfolio management.  This transition will take time and data, so it’s critical that we get the ball rolling immediately:

  1. Utilities should implement open source metering of energy efficiency performance in order to optimize program implementation and make savings and resource curve data open and transparent.
  2. Utilities should create pay-for-performance pilots next to existing programs, allowing third party aggregators to innovate and compete based on outcomes.
  3. Regulators should allow utilities to recover costs so long as the utility cost of metered efficiency is lower than the marginal cost of alternative resources.
  4. Regulators and utilities should move efficiency resource curves into all resource procurements alongside other distributed resources.

Given the problems faced by the changing grid, and the market and financial barriers to scale inherent in the current approach to energy efficiency, it is urgent that we start aggressively standing up markets that value energy efficiency resource curves through pay-for-performance, to unlock private investment and market innovation.

The good news is that solutions exist to overcome efficiency’s existential challenges and deliver the investment needed to achieve the vast potential of energy efficiency. The sooner we pivot the better — there is no time to waste.

Trending Topics – Mind the “storage” gap: how much flexibility do we need in a high renewables future?

A version of this article was originally published on June 22nd, 2017 on Greentech Media.

By Brendan Pierpont

Imagine for a moment that we have built enough wind and solar power plants to supply 100 percent of the electricity a region like California or Germany consumes in a year. Sure, the wind and sun aren’t always available, so this system would need flexible resources that can fill in the gaps. But with continuing rapid cost declines of wind, solar, and batteries, it’s possible that very ambitious renewable energy targets can be met at a cost that is competitive with fossil fuels.

Every region has a different climate and demand profile. Taking California or Germany as an example, and assuming no interconnections with neighboring regions, up to 80 percent of the variable renewable power produced could be used in the hour it is generated with the right mix of wind and solar – in other words, 80 percent of supply could be coincident with demand. Still, a reliable grid needs fast-responding resources to satisfy the remaining 20 percent of demand; filling this gap is one of the principal flexibility challenges of a low-carbon grid. But what will that flexibility cost?
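The coincidence figure comes from an hour-by-hour comparison of generation and load. Below is a schematic version of the calculation, with synthetic sinusoidal profiles standing in for real renewable and demand data.

```python
# How a supply-demand "coincidence" figure is computed: in each hour the
# usable renewable energy is min(generation, demand); the remainder must be
# curtailed or stored. The profiles below are synthetic stand-ins for real
# wind/solar and load data.

import math

hours = range(24)
demand = [100 + 20 * math.sin((h - 18) / 24 * 2 * math.pi) for h in hours]   # evening-peaking load
solar = [170 * math.sin((h - 6) / 12 * math.pi) if 6 <= h <= 18 else 0.0
         for h in hours]                                                     # daytime generation

total_renewables = sum(solar)
used = sum(min(s, d) for s, d in zip(solar, demand))

print(f"Coincident share of renewable output: {used / total_renewables:.0%}")
print(f"Demand met directly by renewables:    {used / sum(demand):.0%}")
# The gap between total demand and the second figure is what flexible
# resources - storage, demand shifting, dispatchable plants - must fill.
```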

The answer is surprising – by 2030 an 80 percent renewable energy system including needed flexibility could cost roughly the same as one relying solely on natural gas. As Climate Policy Initiative demonstrated in our recent report Flexibility: the path to low-carbon, low cost electricity grids, if prices for renewable generation and battery storage continue to fall in line with forecasts, meeting demand in each hour of a year with 80 percent of electricity coming from wind and solar could cost as little as $70 per megawatt-hour (MWh) – even when accounting for required short-term reserves, flexibility, and backup generation. Of course, this analysis makes some simplifying assumptions; it represents the new-build cost of generation and flexibility to meet demand in every hour of a year using historical wind, solar and demand profiles from Germany, and it doesn’t factor in transmission connectivity or model the constraints of existing baseload power plants in detail. But it also leaves out the significant potential for cheaper flexibility from regional interconnections, existing hydroelectricity, and the demand side.

Still, this analysis helps us understand what kinds of flexibility we will need and what it will cost. The promise of a low-cost grid based on wind and solar is so compelling, it’s worth digging into what we’d need to do to realize this vision.

What is flexibility, anyway?

A power system has a wide variety of flexibility needs – with time scales ranging from seconds to seasons – and a range of different technology options can be used to meet those needs, depending on the time scale.

On very short time frames from seconds to minutes, fast-responding resources are needed to keep the grid in balance and compensate for uncertain renewables and demand forecasts. These needs should grow only modestly as shares of renewables climb to high levels, and they could be accommodated cheaply using existing hydro generation (where it exists), or even smart solar and wind power plants. Fast-responding demand response or energy storage would also be good choices, particularly after storage costs decline further as projected.

Solar and wind output can change rapidly on a predictable, hourly basis as well, requiring flexible resources that can quickly pick up the slack. One feature of California’s now-infamous “duck curve” is the need for fast-ramping resources to meet the evening decline in solar production. California has devised innovative market mechanisms to ensure flexible gas and hydro generators are available to meet these ramping needs.

On a daily basis, the profile of renewables production doesn’t neatly match demand, requiring resources that can store or shift energy, or otherwise fill in the gaps across the day. Today, daily imbalances are met primarily by dispatching fossil fuel fired power plants. But a number of solutions are gaining momentum, such as automatically shifting when consumers use energy and building large batteries.

At even longer time frames, there can be multi-day and seasonal mismatches between when renewable energy is produced and consumed. The need for long-term, multi-day energy shifting – exemplified by several windless, cloudy winter days with high electric heating demand – is perhaps the biggest challenge to complete decarbonization of the power grid, because batteries are ill-suited to seasonal shifting needs. In fact, using lithium ion batteries for seasonal storage, cycling once per year, would cost tens of thousands of dollars for each MWh shifted.
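The arithmetic behind that claim is simple: a battery’s capital cost must be recovered over its lifetime cycles. The cost and life inputs below are rough assumptions for illustration.

```python
# The arithmetic behind "tens of thousands of dollars per MWh" for seasonal
# battery storage: capital cost amortized over very few cycles. Inputs are
# rough assumptions.

capital_cost_per_kwh = 300.0    # $ per kWh of installed lithium-ion capacity
calendar_life_years = 10        # battery life limited by calendar aging

for cycles_per_year in (365, 1):            # daily cycling vs. seasonal cycling
    lifetime_cycles = cycles_per_year * calendar_life_years
    cost_per_mwh = capital_cost_per_kwh * 1000 / lifetime_cycles
    print(f"{cycles_per_year} cycle(s)/year -> ${cost_per_mwh:,.0f} per MWh shifted")
# Daily cycling: ~$82/MWh. One seasonal cycle per year: $30,000/MWh.
# The three-orders-of-magnitude gap is why batteries fit daily shifting
# but not seasonal shifting.
```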

Graphic: Technology fit with flexibility needs

The challenge of power grid decarbonization hinges on this ability to store or shift energy. But how much energy would the power grid really need to shift, and over how long?

Solar drives daily storage needs

A power system that relies primarily on solar would have abundant power in the middle of each day, and scarcity during the night. Trying to exclusively power the grid with solar, with no ability to store or shift energy, would mean more than half of demand would go unmet.

Many technologies are well-suited to shifting energy within a day. Today, the grid accommodates solar’s daily profile by dispatching hydro and thermal power plants to meet changing demand, but in the future, lithium-ion and flow batteries promise multiple hours of storage and shifting capability. Thermal energy can be stored in buildings, shifting when electricity is used for heating or cooling. And as electric vehicles become more widespread, ubiquitous charging infrastructure, electricity pricing and automated charging could shift when drivers charge their vehicles.

But are the daily energy storage and demand-shifting solutions emerging today going to be enough? Well, it depends.

In California, demand is highest during the summer, when solar production is at its peak. If California could store and shift solar energy to any time in each day, solar could meet nearly 90 percent of California’s electricity demand. Only 10 percent of energy demand would go unmet by solar because of multi-day and seasonal storage gaps.

In Germany, however, demand is highest during the winter months, driven in part by electric heating demand. So storing and shifting solar energy within each day would still leave 30 percent of energy demand unmet. In other words, the long-term storage gap for solar in Germany is three times larger than in California.

Wind drives storage needs of up to a week

Wind, on the other hand, is a better match with demand hour-by-hour, with 70-80 percent of wind production coincident with demand in California or Germany. And compared with solar, daily storage has little value for wind. Shifting energy within the day would only improve wind’s match to demand by a few percentage points. For wind, the biggest gains come from shifting energy by up to a week. In both California and Germany, the ability to shift energy by up to a week could allow nearly 90 percent of energy demand to be met with wind.

Beyond a week, seasonal storage needs depend on regional demand and renewable resource profiles and, critically, what mix of renewable resources the region has installed. A system incorporating both wind and solar can have lower storage needs than a system based predominantly on one resource or the other. In Germany, a mix of 70 percent wind and 30 percent solar could meet 90 percent of demand on a daily basis, reducing the need for longer-term storage. In California, solar is already a pretty good fit for seasonal energy needs, but the addition of around 10 percent of electricity from wind could slightly lower both daily and seasonal storage needs.
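A schematic version of the storage-gap calculation shows how unmet demand falls as the shifting window grows. The daily production and demand figures below are synthetic stand-ins for real wind and load data.

```python
# Schematic of the storage-gap calculation: let renewable energy be freely
# shifted within a window (a day, then a week) and measure how much demand
# still goes unmet. Daily totals are synthetic: a volatile first week and a
# calmer, slightly short second week.

production = [150, 60, 160, 70, 140, 65, 155, 90, 95, 85, 100, 90, 95, 105]
demand = [100] * 14

def unmet_share(window_days: int) -> float:
    """Unmet demand when energy balances freely within each window."""
    unmet = 0.0
    for start in range(0, len(demand), window_days):
        p = sum(production[start:start + window_days])
        d = sum(demand[start:start + window_days])
        unmet += max(0.0, d - p)    # surplus cannot leave its window
    return unmet / sum(demand)

for window in (1, 7, 14):
    print(f"{window:>2}-day shifting -> {unmet_share(window):.0%} of demand unmet")
# Here daily shifting leaves ~11% unmet, weekly shifting ~3%, and two-week
# shifting 0%: longer shifting horizons close the gap, which is the pattern
# the wind discussion above describes.
```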

Graphic: Storage gap for 100 percent wind or 100 percent solar in California and Germany

Graphic: Storage gap for a wind and solar mix that minimizes long-term storage needs in California and Germany

But far fewer technology options allow for long-term energy shifting. Consumers can’t go for a week without heat, cooling, or charging vehicles, and many long-term storage technologies like hydrogen are still too costly and inefficient for widespread use. The default option for long-term storage is a familiar one – rely on fuel-burning power plants that provide flexibility to today’s power systems. Finding cheap, reliable and carbon-free ways to shift energy for periods longer than a week may be the key decarbonization challenge.

So how should we approach the seasonal storage gap?

Policymakers and planners have several strategies they can use to bridge the storage gap:

  1. Target a mix of renewable resources that minimizes long-term storage needs. Procuring the right mix of resources can be the easiest way to reduce the seasonal storage gap.
  2. Connect with neighboring regions to trade surpluses and shortfalls of energy. Northern Europe and the Western U.S. are taking steps to better integrate regional grids, although getting neighboring states and countries to cooperate can be challenging.
  3. Make use of existing hydropower. Regions with abundant hydroelectricity may already have enough existing flexibility to completely satisfy seasonal storage needs. But electricity and ecological needs don’t always align, and drought years could spell trouble for grid reliability.
  4. Make industrial demand seasonal. Paying the fixed capital and labor costs of an electric arc furnace for several months of the year while a steel foundry lies idle may in fact be cheaper than building the storage or generation needed to meet that demand carbon-free year-round. But this solution would require a careful balancing act between maintaining industrial competitiveness, complying with trade agreements, and ensuring job stability for workers.
  5. Develop long-term storage technologies to shift energy across weeks and months. Turning renewable electricity into hydrogen or synthetic natural gas can enable longer-term and larger-scale storage, and can be used directly for transportation, heating and industry. But so far, these conversion technologies are inefficient and expensive.
  6. Develop flexible, dispatchable carbon-free power plants to cover shortfall periods. A recent survey of decarbonized grid models suggested that nuclear and carbon capture and storage may be needed to completely decarbonize the grid. But market models and technologies will need to evolve for these resources to operate flexibly and profitably.

Transitioning to a low-carbon grid

A low carbon grid is the lynchpin of any serious plan to avoid the dangerous impacts of climate change. And with solar, wind, and energy storage costs dropping year over year, the vision of a low-cost, flexible grid driven by renewable energy seems tantalizingly within reach. But if we are going to fully decarbonize the grid, the long-term storage gap is one of the biggest challenges that lies ahead. We already have many of the technologies and tools we need for this shift, but our electricity policies and markets need to evolve for a new generation of technologies with different cost and risk profiles. If we start laying the groundwork today, we’ll be ready to keep pace with the rapid transition ahead.

++

Brendan Pierpont is a Consultant with the Energy Finance team at Climate Policy Initiative

Trending Topics – Secretary Perry, We Have Some Questions Too

A version of this article was originally published on April 24th, 2017 on Greentech Media.

By Mike O’Boyle

In April, DOE Secretary Rick Perry issued a memorandum to his staff asking some pointed questions about the future of the electric grid as coal is retired off the system, including:

  • “Whether wholesale energy and capacity markets are adequately compensating attributes such as on-site fuel supply and other factors that strengthen grid resilience and, if not, the extent to which this could affect grid reliability and resilience in the future; and
  • The extent to which continued regulatory burdens, as well as mandates and tax and subsidy policies, are responsible for forcing the premature retirement of baseload power plants”

Given the rapid change facing the electricity system, these questions may seem reasonable, but they reflect an outdated world view. The DOE’s publication of this memorandum presents an opportunity to uncover many of these outdated assumptions and understand the drivers behind the unstoppable transition from coal to other technologies. By taking each premise in turn and providing evidence-based analysis, we can see that the projected demise of coal will result in a cleaner, cheaper, and more reliable energy system.

Premise 1: “baseload power is necessary to a well-functioning grid”

To understand whether this is true, some definitional work is needed. Baseload generation’s purpose is to meet the base load or demand, which Edison Electric Institute defines as “the minimum load over a given period of time” in its Glossary of Electric Industry Terms. The same glossary defines baseload generation as “Those generating facilities within a utility system that are operated to the greatest extent possible to maximize system mechanical and thermal efficiency and minimize system operating costs . . . designed for nearly continuous operation at or near full capacity to provide all or part of the base load.” In other words, baseload plants are those whose efficiency is highest when run at a designed level of power, usually maximum output, and deviations from this level of power reduce efficiency and increase costs. Baseload generation is an economic construct, not a reliability paradigm.

A system with baseload thermal generators as its backbone comes with some reliability pros and cons. For example, baseload power usually has heavy generators with spinning inertia, which gives conventional generators time to respond with more power when a large generator or transmission line unexpectedly fails. But we now know how to get such responses much more quickly from customer loads, newer inverter-based resources like wind, storage and solar, and gas-fired resources.

As the Rocky Mountain Institute’s Amory Lovins details in a recent piece for Forbes, fuel storage may appear to provide some protection against a failure of gas supplies or weather events, but stored fuel has its own set of problems and failure modes. The 2014 Polar Vortex rendered 8 of 11 GW of gas-fired generators in New England unable to operate. Coal’s supply is at serious risk because it depends on vulnerable rail transport: over 40 percent of U.S. coal comes from a narrow rail corridor out of Wyoming’s Powder River Basin. Extreme cold can also render on-site coal unusable, as happened during the Southwestern blackout of February 2011 that shut off power to tens of millions of customers. Nuclear plants can be impaired or forced offline by unseasonably hot weather, when cooling water becomes too warm and plants must power down for safety and to prevent mechanical damage. So in fact, baseload units, even with fuel stored onsite, are sensitive to weather and many other failure events.

Lovins also points out that coal and nuclear baseload generators are unable to operate continuously, despite perceptions to the contrary. On average, coal-fired stations suffer unexpected “forced outages” 6-10 percent of the time, and nuclear plants experience forced outages 1-2 percent of the time, plus 6-7 percent scheduled downtime for refueling and planned maintenance. On the flip side, solar and wind are 98-99% available when their fuel (the sun and wind) is available, and the ability to predict the weather is improving all the time. The reliability risks from fossil fuels are collectively managed today, mostly by paying to keep reserve generation running to respond when they unexpectedly fail, but this creates the need for redundancies and costs in the grid comparable to those that cover the uncertainty of weather forecasts for wind and solar power.

Premise 2: “[The] diminishing diversity of our nation’s electric generation mix… could [undermine] baseload power and grid resilience.”

The U.S. electricity mix is seeing a trend of increasing diversity, rather than decreasing diversity. Until recently, the notion that supporting coal generation would improve diversity was nonsensical; coal was the dominant and largest source of U.S. electricity for decades, so an argument for more diversity would be an argument for reducing the use of coal. Today, coal and natural gas produce roughly equal shares of U.S. generation, while nuclear and hydro (hidden in the renewables bucket below) are projected to continue their near-constant supporting roles. Add in the rapid growth of wind, solar, and biomass generation, and fuel diversity has actually increased dramatically since 2001.

Source: Energy Information Administration, Annual Energy Outlook 2017

Today, coal is declining, with over 90 GW of the more than 250 GW fleet projected to retire under business-as-usual conditions by 2030, but it will remain a meaningful player in the marketplace for at least the next decade according to baseline Energy Information Administration (EIA) projections. In the long-term, however, whether reducing coal generation impacts fuel diversity and resilience depends more on what replaces it than whether the coal remains.

A portfolio of generation options with different characteristics insulates consumers from price risk and availability risk. Keeping some coal-fired generation online would help in that regard, particularly if its environmental costs are not considered. If retiring coal and nuclear are replaced mostly by natural gas, we would see a decline in fuel diversity, and that could potentially increase risk due to the characteristics of the natural gas supply. The same would be true, for example, if we myopically rely on solar as the only technology to decarbonize the grid. Studies of the optimal mix of resources in California to meet the 50 percent Renewable Portfolio Standard by E3 and NREL each found that geographic and technology diversity of the renewable resources will substantially reduce the cost of compliance compared to the high-solar and in-state-only cases.

But under current projections out to 2030, we are only going to see greater fuel diversity, not less, as natural gas, demand-side resources, and utility-scale renewables take the place of retiring coal. This should increase the resilience and security of the system, particularly if this change is accompanied by more investment in transmission, storage, and demand-side management.

Premise 3: “[Renewable] subsidies create acute and chronic problems for maintaining adequate baseload generation and have impacted reliable generators of all types.”

Beyond the question of what “adequate baseload generation” actually means, it is undoubtedly true that coal and nuclear baseload units are suffering financially in both vertically integrated and restructured markets. The recent FERC technical conference was a forum for generators and wholesale market operators to vent their frustration with what they see as the inadequacy of markets to provide generators with sufficient revenue. But in reality, this financial pain is the effect of oversupply and intense competition. For example, despite low capacity prices in the PJM Interconnection, five GW of new natural gas capacity cleared in the most recent auction. Coupled with stagnant demand, something has to give – and inefficient coal plants are the most expensive and least flexible generators, which are simply not needed in this competitive landscape.

While competitive pressure from cheap gas, inexpensive renewables, and declining demand is undermining the financial viability of baseload plants, we are far from a crisis of reliability and resilience. Consider the reserve margins and reference levels in the major markets:

[Table: reserve margins vs. reference reserve levels in the major markets]

Each market is oversupplied; in the case of PJM and SPP, the condition is drastic – they have double the excess capacity that they need to meet stringent federal reliability criteria. One panelist at the May 1-2 FERC technical conference captured the dynamic: “PJM with reserve margins of 22%, I think of Yogi Berra, my favorite economist. We’ve got so much capacity we’re going to run out.” A well-functioning market would allow uncompetitive generators to retire amid steep competition and declining prices, given the oversupply conditions. To blame coal’s suffering on policies supporting clean energy denies the root cause – coal-fired generation is dying on economics alone.
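To see what “oversupplied” means in numbers, here is a minimal sketch of the reserve-margin arithmetic; the installed capacity, peak demand, and target below are illustrative placeholders, not actual market data.

```python
# Reserve margin: installed capacity in excess of expected peak demand,
# expressed as a fraction of peak. All figures are illustrative placeholders.

def reserve_margin(installed_capacity_gw, peak_demand_gw):
    return (installed_capacity_gw - peak_demand_gw) / peak_demand_gw

margin = reserve_margin(195.0, 150.0)   # hypothetical market -> 0.30 (30%)
target = 0.15                           # illustrative reliability requirement

print(f"Reserve margin {margin:.0%} vs. target {target:.0%}: "
      f"roughly double the required margin")
```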

Several premises that underlie the need for the forthcoming DOE study are false. Chief among them is a singular focus on baseload generation, particularly coal-fired power plants, as necessary for maintaining reliability. Baseload is an economic characteristic, not a reliability service. Replacing expensive, environmentally unsustainable coal is not a matter of ensuring adequate baseload – the key will be quantifying the reliability services that are needed and ensuring that replacement generators can provide them. But hitting the panic button, which is what Rick Perry’s memorandum appears to do, completely misses the picture; rather than losing diversity, our electricity mix is rapidly diversifying, and our markets are vastly oversupplied today. Any attempt to conclude otherwise will simply create unjustified roadblocks for new renewable generation, which is crucial to an affordable, reliable, and clean electricity system.

Trending Topics – How a Cold March Day in Texas Exposed the Value of Flexibility, and What Markets Can Learn

A version of this article was originally published on April 24th, 2017 on Greentech Media.

By Eric Gimon

As the sun rose over Dallas on Monday, March 3rd, 2014, the temperature read 15°F. Across the state, Texans turned on their heaters at full blast as they prepared to head to work for the day. Meanwhile, at the operations center for Texas’ electricity system (the Electric Reliability Council of Texas, or “ERCOT”), operators saw the price of electricity skyrocket: around 8 AM, prices jumped to nearly $5,000/MWh, more than 100 times the average price of electricity.

Though the unusually cold weather pushed demand for electricity well above historical levels for the season, the market behaved as intended. Many power plant owners, knowing their capacity is typically not needed at this time of year, had taken their plants offline for maintenance. So when unusually high demand met relatively low supply on March 3rd, prices skyrocketed, demonstrating the fundamentals of supply and demand. Power plants that were available and able to turn on quickly – to be flexible – to meet the spike in demand were rewarded handsomely.

As the renewables transition continues apace, flexibility will become increasingly important. Policy-makers and investors will need to watch carefully how flexibility is paid for.

In a market design like the “energy only” market in Texas, price spikes are a normal and important part of the market’s functioning, properly reflecting the marginal cost of electricity at that specific time, assuming no market manipulation. They provide an indication of how much and what types of resources are needed.

When spikes happen at predictable times of system needs, like during the summer when high temperatures cause increased electricity demand for air conditioning, they provide a good investment signal for peak capacity. When they happen at unusual times like the March 3, 2014 event, they provide a crucial signal to both buyers and sellers in the wholesale market of a need for investment in more flexible resources that can make themselves available at times of stress, on either the generation side or on the demand side. Too many of these unusual “bellwether” events indicate a system short on flexibility, while too few signal a system that is oversupplied (or lucky).

Bellwether events

Because flexible resources allow grid operators to respond rapidly to large changes in supply or demand, the frequency and magnitude of bellwether events are indicative of the need for flexible resources. In 2014, a handful of these bellwether events provided about 20-25 percent of net revenues for one typical Texas combined-cycle plant, indicating that the market was willing to pay for flexible resources. But in 2015 and 2016, due to plant owners keeping their units online more often, the addition of new capacity, and milder weather, the same plant garnered hardly any net revenue from bellwether events, indicating that there was no longer any need for extra flexibility.
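One way to quantify a plant’s reliance on these events is to tally the share of its net energy revenue earned during extreme-price hours, as in the minimal sketch below; the price threshold, marginal cost, and price series are invented for illustration.

```python
# Sketch: share of a plant's net energy revenue earned in "bellwether"
# (extreme-price) hours. Threshold, marginal cost, and prices are invented.

def bellwether_share(prices, marginal_cost=30.0, threshold=300.0):
    """prices: hourly $/MWh; the plant is assumed to run whenever
    the price exceeds its marginal cost."""
    total = spike = 0.0
    for p in prices:
        margin = max(p - marginal_cost, 0.0)  # net revenue per MWh if running
        total += margin
        if p >= threshold:
            spike += margin
    return spike / total if total else 0.0

# A mostly flat year plus ten hours of extreme prices:
prices = [45.0] * 8000 + [25.0] * 750 + [3000.0] * 10
print(f"{bellwether_share(prices):.0%} of net revenue from bellwether hours")
```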

How is this relevant for policy-makers and investors? Because of the variable nature of renewable resources, which create greater swings in the supply mix over smaller time-scales, the number of bellwether events is likely to grow as more and more cheap wind and solar power enters the Texas market. With over 5,000 MW of solar projected to come online by 2021, Texas is likely to need more flexibility in the coming years.

As the market continues rewarding flexibility, the resource mix could change substantially, which would impact the market in other ways. For example, if more combined cycle gas plants come online, they can help address flexibility needs, but will compete with baseload generators much of the year, which could increase downward pressure on already low wholesale prices. Alternatively, new flexible generation resources, like fast-start simple-cycle gas turbines, natural gas-fired diesel engines, demand-side resources, or storage could also respond to bellwether events. They would not change the market during ordinary times because they will be deployed only in times of stress when they capture the most value. To achieve the most cost-effective solution, market operators must allow resources of all types and sizes to participate by ensuring as transparent, accessible, stable, and technology-neutral an energy market as possible.

To a great extent, the investment signal for flexible resources is well handled in “energy-only” markets like ERCOT. As the need for flexible resources grows, there will be an increasing number of bellwether events; resource developers are likely to respond by building new resources that can capture this value on the spot market and through bilateral contracts with utilities. However, not all markets are structured to reward flexibility the way energy-only markets do.

Different kinds of markets

Most electricity markets in the U.S. have additional payments or requirements outside of the energy market aimed at ensuring reliability. These payments, often administered through a “forward capacity market,” are meant to improve the economics of investing in new power plants and maintaining existing ones to ensure there is sufficient capacity available during times of peak demand. But forward capacity markets have traditionally focused on ensuring there is enough capacity to meet the peak level of demand over the course of the year, without giving much thought to the relative flexibility of that capacity.

The March 3rd cold spell in Texas provides a valuable lesson on why looking only at annual peak demand, rather than the need for flexible resources throughout the course of the year, can be problematic. When using capacity markets to ensure long-term reliability, it is not exactly clear what capacity to pay for when procuring “reliability” ahead of time. Reliability means different things at different times, and under different resource mixes. If forward capacity markets strictly reward market participants for meeting system peak demand, as they have traditionally done, market operators may not necessarily be rewarding the type of flexible capacity needed in bellwether events.

In theory, energy-only markets like ERCOT and markets with forward capacity markets should be equally efficient at providing an economical and reliable grid, compensating resources for new flexibility at roughly similar value. However, forward capacity markets tend to divert revenues from the energy market, and in so doing dilute the strength of the energy market’s signal to value resource flexibility. These out-of-market mechanisms have traditionally failed to consider the relative flexibility of capacity resources.

In energy-only markets like ERCOT, after new or upgraded system resources enter the market they capture the value of the reliability they provide when the system is stressed and prices spike, or by contracting forward with wholesale buyers to provide mutually beneficial risk management. If a resource cannot respond efficiently to short-term volatility, it will miss out on the associated opportunities and will be unable to offer wholesale buyers the risk-management services they need.

With a forward capacity market, the principal way to manage bellwether events is to supplement revenues by rewarding resources ahead of time for being available during prescribed periods. If a resource fails to produce when called on, it is usually penalized, either through foregone payments or directly through a penalty administered by the market operator. In both cases, a system resource that fails to make itself available during periods of system stress, like a bellwether event, is taking a big gamble: it could miss out on a significant amount of revenue, in some cases a large fraction of its annual revenue. Resources that can respond quickly during such events avoid the costs incurred by less flexible resources, which must operate unprofitably for hours or even days before and after the events to be sure they’re available when most needed.

If capacity markets expand their scope from anticipating peak supply needs to ensuring year-round reliability indiscriminately, they run the risk of significantly overpaying for reliability. Paying all types of resources to be available at all times, as opposed to paying just those resources that can more surgically be available in times of system stress, means buying a lot of extra reliability when it is unneeded. Furthermore, overly broad definitions of a capacity product may leave surgical flexibility providers unable to make a profit, even though they could provide a lot of reliability value. Think, for example, of a demand response provider being asked to be available every day of the year for up to eight hours, when the real need is to participate in a handful of bellwether events for three or four hours.

One fix involves tweaking the capacity market design to cover a broader definition of system needs, e.g. as in Hitting the Mark on Missing Money. Another involves creating a more iterative Staircase Capabilities Market design.

Learning across markets

In order to meet the flexibility challenge of shifting the future resource mix to cleaner, cheaper sources, markets with and without capacity markets must learn from each other. If an energy-only market sees a proliferation of very expensive bellwether events, such that net revenues from these events far exceed the capacity payments seen in other markets, its regulators and ratepayers should ask why more resources aren’t becoming available to meet the underlying need for investment in flexible supply- and demand-side resources. Conversely, if capacity markets are paying out relatively larger sums than energy-only markets pay out through bellwether events, they should question their framework for compensating resources to ensure reliability. In any case, to achieve least-cost reliability in a clean energy future, all markets should be inclusive of all possible technologies – including demand-side options – that mitigate the impacts of bellwether events like that cold morning in Texas.

Trending Topics – Getting the Most out of Grid Modernization

States and utilities around the country are considering new utility investments in modernizing the grid. To name a few, California utilities have proposed multi-billion dollar grid modernization upgrades through their distribution resource plan proposals, while Eversource has rolled out a $400 million grid modernization plan in Massachusetts, and Washington, D.C. is directing $25 million of its Pepco-Exelon settlement toward pilot and demonstration projects as a part of a larger proceeding. Today, half of customers already have advanced metering, and that number is climbing rapidly.

Getting the regulation of grid modernization right is certainly a trending topic – Ohio’s PUC Chairman just announced PUCO would be undertaking an investigation to develop “innovative regulations and forward-thinking policies” to guide grid modernization investment. As utilities come to the table for grid modernization funds in many states, regulators and stakeholders have an opportunity to plan now to get ahead of the process and generate the most benefits from those investments.

APP experts Sonia Aggarwal and Mike O’Boyle have laid out five steps utility regulators can take to ensure customers reap the benefits promised by a modern grid. A condensed version of this approach is laid out below; it complements a full whitepaper published in ElectricityPolicy.com, copies of which are available via email.

Step 1: Conduct an integrated assessment of the distribution and transmission systems

Good practice for grid modernization programs starts with “integrated distribution planning” (IDP), a practice in which demand-side and distribution-level investments are considered in conjunction with bulk-system resources to achieve an optimized, integrated electricity system. This includes understanding the potential contribution from distributed energy resources, including a general assessment (and ideally a locational assessment) of a cost-effective portfolio of resources. Without a clear assessment of how distribution-level resources can provide value to the grid as a whole, utilities will struggle to unlock the full potential of grid modernization to provide environmental, reliability, and savings benefits.

At the same time, IDP can produce the data regulators and stakeholders will need to measure current system performance and set rational targets for grid modernization performance (steps 3-5 below). Smart customer-facing rate design, DER procurement, and technology deployment can then be targeted more precisely to improve overall environmental and economic performance.

Step 2: Define the goals of a grid modernization program

Different regions may identify different goals for grid modernization programs; the key point is spending time early in the program to ensure stakeholders are on the same page about the full set of goals. Minnesota’s e21 initiative is a good example of this, and Ohio’s PowerForward initiative may also produce its own flavor of stakeholder agreement on system goals. Recognizing that goals will differ by state and stakeholder, we focus on the three we consider most important to grid modernization: (1) affordability, (2) reliability and resilience, and (3) environmental performance.

Step 3: Choose metrics for each goal

Focusing on outcome-oriented metrics capable of tracking performance of the full grid modernization investment portfolio allows the utility more flexibility to find the least-cost approach to deliver outcomes, and should reduce the administrative burden on regulators reviewing utility investments over time. For example, the utility has choices about whether to focus on Volt-VAR optimization programs, improve integrated system planning, improve customer response through automation or enrollment in time-of-use rates, or undertake any number of other measures. Over the course of a grid modernization investment program, the utility is likely to learn which of these avenues or combinations thereof is most effective. And as long as outcomes are being achieved for each goal, regulators can have confidence customers are getting the full value of utility investment plans.

At the same time, policymakers should consider how much control the utility has over performance for each metric. The degree of control most often lies on a spectrum. For example, customer behavior that is outside the utility’s control can impact overall bills. However, utilities can take actions to shape customer behavior to some degree. The utility should be encouraged to weigh those kinds of actions against more traditional investments.

Finally, a measurable and verifiable outcome, free from manipulative influence, is crucial. Regulators should seek maximum transparency and replicability when stakeholders check utility performance against reporting. Regulators should also beware of measuring performance in ways that rely on subjective interpretation or contain loopholes to hide bad performance.

The measurable outcome most directly linked to affordability is the average customer bill, but it turns out to be a challenging metric for grid modernization: bills depend on many factors (several outside the utility’s control) and don’t capture the full picture of value that grid modernization can create for customers over time. Instead, regulators can look to peak demand reduction or system load factor to determine the affordability and value of grid modernization investments, since a modern grid should be managed dynamically and thus should become less “peaky.” Such outcome-oriented metrics, measured perhaps as system averages or per feeder, can focus investments and activities on minimizing overall costs. Absolute peak reductions could be normalized to account for economic growth or weather anomalies.
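To make the load-factor metric concrete, here is a minimal sketch computing it from hourly load data; both load profiles are invented for illustration.

```python
# Sketch: system load factor = average load / peak load. A flatter,
# less "peaky" system scores closer to 1. Load profiles are invented.

def load_factor(hourly_load_mw):
    return sum(hourly_load_mw) / len(hourly_load_mw) / max(hourly_load_mw)

before = [900, 950, 1000, 1400, 1800, 1200]    # peaky profile
after  = [1000, 1050, 1100, 1300, 1400, 1150]  # flatter profile

print(f"before: {load_factor(before):.2f}, after: {load_factor(after):.2f}")
# -> before: 0.67, after: 0.83
```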

Well-executed grid modernization efforts also improve reliability and resilience. Utilities can improve situational awareness and allow for islanding or other approaches that stop cascading outages. Measuring real-world reductions in outage frequency (SAIFI) and duration (SAIDI) against a baseline year is not a radical idea, and it is a good place to start. Normalization can help account for abnormal weather as well: a three-year rolling average is a common way to reduce the impact of outlier years. Metrics for Energy Efficiency: Options and Adjustment Mechanisms provides greater detail on different approaches to weather and economic development normalization.
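The rolling-average normalization mentioned above is straightforward to compute; here is a minimal sketch using invented SAIDI values.

```python
# Sketch: smooth annual SAIDI (outage minutes per customer) with a
# three-year rolling average to damp outlier storm years. Values invented.

saidi_by_year = {2012: 130, 2013: 118, 2014: 240,  # 2014: major storm year
                 2015: 112, 2016: 105}

years = sorted(saidi_by_year)
for y in years[2:]:
    window = [saidi_by_year[y - 2], saidi_by_year[y - 1], saidi_by_year[y]]
    print(f"{y}: raw={saidi_by_year[y]}, 3-yr avg={sum(window) / 3:.0f}")
```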

Grid modernization can further unlock the value of clean distributed energy resources like rooftop solar, community solar, demand response, and customer-sited storage. These technologies can help reduce pollution by generating carbon-free electricity or shifting energy demand patterns to better align with solar and wind availability. Valuable DERs add flexibility to the system, enabling higher shares of clean electricity.

Step 4: Create an open process to set targets

After selecting metrics, regulators must set reasonable targets. Through a transparent stakeholder process, stakeholders can place the targets within a range that represents a reasonable stretch. This is ultimately more an art than a science, making it important to establish a transparent, predictable process for adjusting and calibrating the targets based on real-world performance data. Laying out the process for calibrating and revising targets ahead of time will be critical to keeping investment risk low for utilities.

Some regions, such as New York, have decided to set targets in individual utility rate cases. Other regions, such as Ontario, have set them through a central process based on benchmarks. How they are implemented will depend on commission resources and existing processes for reviewing utility performance.

Step 5: Consider tying utility revenue to performance

The financial structure of a grid modernization program can impact its chances of success, as well as its overall affordability. Below we suggest some structural ideas to ensure that customers share in the program’s economic benefits.

Option 1: Conditional rate of return
Utility regulators may consider conditioning the total allowed rate of return for the full portfolio of grid modernization investments on achieving net benefits for customers, or on performance against targets. To weigh these options, policymakers should first consider the utility’s investment incentives under the current revenue model before altering the rate of return to align those incentives with the outcomes regulators seek from grid modernization.

For example, smart meters provide a wealth of potential benefits to customers, some of which fit with the existing utility business model, and others that don’t. Smart meters can automate meter reading (an operational expense traditionally passed through to customers without a profit opportunity for the utility), but may also send dynamic price signals to customers to manage distribution system peak and avoid physical infrastructure upgrades (a capital expense, and thus a traditional opportunity for utility profit).

In this example, a conditional return on smart meters may link the returns on equity normally allowed under traditional regulation to achievement of peak demand reduction, or a combination of targets. The “precondition” approach would require the utility to demonstrate achievement of these goals before earning the full authorized return for shareholders. A scaling approach would increase the return as performance on outcomes improves.
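To make the two approaches concrete, here is a minimal sketch of “precondition” versus “scaling” returns; the ROE levels and the peak-reduction target are hypothetical, not drawn from any actual proceeding.

```python
# Sketch: two ways to condition allowed return on equity (ROE) on performance.
# ROE levels and the peak-reduction target below are hypothetical.

BASE_ROE = 0.07   # hypothetical floor return
FULL_ROE = 0.10   # hypothetical full authorized return
TARGET = 0.05     # hypothetical target: 5% peak demand reduction

def precondition_roe(peak_reduction):
    """Full return only if the target is met; otherwise the floor."""
    return FULL_ROE if peak_reduction >= TARGET else BASE_ROE

def scaled_roe(peak_reduction):
    """Return scales linearly with performance, capped at the full ROE."""
    fraction = min(peak_reduction / TARGET, 1.0)
    return BASE_ROE + fraction * (FULL_ROE - BASE_ROE)

# A utility achieving 3% peak reduction against the 5% target:
print(f"{precondition_roe(0.03):.3f} {scaled_roe(0.03):.3f}")  # 0.070 0.088
```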

Option 2: Budget cap with shared savings
A “budget cap” describes a pre-approved total level of expenditures not to be exceeded for grid modernization efforts over a particular period, with a mechanism for sharing the budget’s savings between the utility shareholders and customers. This would provide revenue certainty for utilities to invest in grid modernization, but also incent program managers to look for operational savings opportunities as long as certain quantitative outcomes can be met.

The metrics identified in this paper provide a starting point for the kinds of outcomes that can be evaluated. Grid modernization investment plans in California, Massachusetts, and Illinois provide some examples of overall investment levels in distribution grid infrastructure to consider as potential sources for benchmarking other programs. And the U.K. RIIO model combines the revenue cap and conditional rate of return models – allowing utilities to capture operational savings and reap extra returns for good performance on outcome-oriented metrics.

Conclusion

Grid modernization represents a monumental opportunity to achieve cleaner, more affordable, resilient electricity service. It is worth taking time at the beginning of a grid modernization effort to carefully consider what specific outcomes these investments should target, and how utility and third-party investments can contribute to an optimized, integrated grid. Regulators in specific regions can benefit from determining which outcomes are most important to them, developing quantitative metrics associated with those outcomes, and beginning to compensate utilities based on their performance against those targets.

++++

This work was also covered and heavily quoted by Greentech Media and UtilityDive.

Trending Topics – A Survivor’s Guide to the Debate over Existing Nuclear Plants

A version of this article was originally published on March 6th, 2017 on Greentech Media.

By Eric Gimon

In a recent op-ed, U.S. Senators Alexander and Whitehouse made a concerted plea for support of nuclear power. Their plea comes as wholesale electricity prices sit at historic lows and the fate of the existing fleet of nuclear plants, many up for re-licensing in the near future, is called into question as their underlying economics are threatened. Policymakers face tough choices about whether and how to intervene to save these plants. Nuclear plants consistently in the red in competitive wholesale markets have driven some observers to call for re-regulation and the abandonment of a free-market approach. What’s a reasonable policymaker to do when weighing nuclear power against the overall need for cheap, clean, and reliable power?

With an aging nuclear fleet, policymakers will inevitably face decisions about how long to support existing plants and how to avoid capacity shortfalls when shutting them down at the ends of their lives. Which of these options drives a cheaper, cleaner, and more reliable electric system will vary based on context. Policymakers should consider that context rather than axiomatically saying, “we must save nuclear at all costs,” or, “we must get rid of it all immediately.”

Nuclear considerations

At first, understanding nuclear power seems straightforward, as the basic facts are simple. The U.S. has a reliable fleet of nuclear reactors, the largest in the world. U.S. nuclear plants provide about 800 terawatt-hours (TWh) of carbon-free, reliable power, representing 20 percent of all domestic electricity generation. Meanwhile, construction of new U.S. plants has ground to a virtual halt due to the large and rising costs and financial risks of building new units, driven especially by construction delays.

Going beyond these basic facts reveals challenges: What about potentially dangerous nuclear waste? Could plants be cheaper with less onerous rules and passive safety designs? And so forth.

The first step to wading through the morass of facts and opinions is to take a hard-nosed, pragmatic approach to any decision regarding existing plants, with a healthy dose of skepticism. Are existing nuclear reactors clean? They have a waste issue, but they emit no air pollutants. This significantly reduces externalities in an otherwise dirty grid, as we will see below. They are also very reliable, generating at 90-95 percent of their capacity annually. The real sticking point is: are they cheap?

Nuclear power plants tend to have high fixed annual costs, and so they need to run as much as possible to spread these costs over a maximum number of megawatt-hours (MWh). This makes nuclear plants inflexible, meaning they cannot rapidly ramp their output, for economic reasons if not technical ones.

In a large market this inflexibility is not much of an issue: nuclear generators are price takers and get paid the average annual locational wholesale electricity price. In this case, the MWh are strictly commodities; nuclear power is cheap if its costs fall below competing generators’. However, in a more constrained market – for example, one with lots of zero-fuel-cost variable generation like wind and solar – nuclear power only competes as a commodity up to the minimum level of net load (load minus zero-cost generation). Any more nuclear generation would raise overall system costs.
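Here is a minimal sketch of the net-load arithmetic with invented hourly figures; the minimum of the net-load series sets the ceiling below which an always-on plant can compete as a pure commodity.

```python
# Sketch: net load = load minus zero-fuel-cost (wind + solar) generation.
# An inflexible plant competes as a commodity only up to the minimum net load.
# Hourly values are invented for illustration.

load  = [20, 18, 17, 19, 24, 26]   # GW
wind  = [ 6,  8,  9,  5,  3,  2]   # GW
solar = [ 0,  0,  2,  6,  7,  1]   # GW

net_load = [l - w - s for l, w, s in zip(load, wind, solar)]
print(net_load)        # [14, 10, 6, 8, 14, 23]
print(min(net_load))   # 6 GW: ceiling for always-on baseload
```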

A pragmatic approach: two case studies

Because of such complexities, there is no simple answer for how much support to provide for endangered nuclear plants. Beware arguments to save existing nuclear at all costs, or to get rid of it all immediately. Instead, it will pay to examine each plant on a case-by-case basis, in the context of the available alternatives. Two recent cases with opposite outcomes but equally sound rationales in California and Illinois serve as models for how policymakers can look pragmatically at other upcoming nuclear cases. At the risk of trivializing other important factors and for simplicity’s sake, we examine just two key technical characteristics in addition to generation costs for thinking about the costs and benefits of retiring or propping up an existing nuclear plant: emissions and grid flexibility.

First, California: On June 21, 2016, Pacific Gas and Electric Company (PG&E) and a number of parties struck a settlement entailing the retirement of the Diablo Canyon nuclear plant. A report from M.J. Bradley and Associates captures most of the rationale for this decision. Due to improved energy efficiency, distributed generation, and load defection through direct wholesale purchases and community choice aggregators, PG&E anticipates that its total generation needs for 2030 will decline significantly on an absolute basis. After taking into account California’s 50 percent renewable portfolio standard (RPS) and the existing hydropower fleet, this leaves room for somewhere between 16 and 24 TWh of remaining annual generation, including valuable flexible gas generators, to meet PG&E’s demand. In turn, this puts the squeeze on the annual 16-18 TWh of baseload generation from Diablo Canyon.

According to the report, running that plant near its maximum capacity would force renewables to curtail their output to make room for the inflexible nuclear energy. Instead, PG&E projects it can cover any shortfall from the retirement of Diablo by purchasing only an incremental 4 TWh of clean resources (energy efficiency, renewables, demand response and storage) – 25 percent of Diablo’s full output. Because flexibility is at a premium for balancing variable generation in California, a strictly baseload generation profile loses a lot of value.

These flexibility constraints mean that legislation or regulatory action to sustain Diablo Canyon wouldn’t necessarily lead to any incremental emissions reductions under present assumptions. It is possible that if California replaced its 50 percent RPS with a broader carbon regime that included nuclear power, and targeted equivalent or lower emissions in 2030, then re-licensing Diablo Canyon might be part of a lower cost portfolio, but this is far from guaranteed.

Additionally, if California were better integrated into a regional grid, this larger grid could more easily accommodate Diablo Canyon’s inflexibility. If PG&E could competitively sell off excess nuclear generation to its neighbors, it could use the remaining power to replace its own dirtier gas generators and lower emissions. At that point it might be worth tipping the scales towards relicensing.

To sum up, because Diablo Canyon generates somewhat expensive power into a shrinking power pool where policy and economics are creating demand for more flexible generation, it doesn’t seem worthwhile to try to save it.

Meanwhile, in Illinois, the Exelon Corporation owns a fleet of nuclear generating stations connected to two of the largest power markets in the world, Midcontinent Independent System Operator (MISO) and PJM Interconnection. These very large systems can more easily absorb the inflexible baseload coming from the round-the-clock nuclear generation, apart from some local issues with transmission bottlenecks. Still, two of the plants in this fleet, the Quad Cities pair of reactors and the Clinton reactor, were slated for retirement due to their inability to compete in challenging wholesale market conditions.

On December 1, 2016, the Illinois legislature passed the extensive and complex Future Energy Jobs bill, which included a rescue package for the Clinton and Quad Cities plants. At roughly $10/MWh, the package covers about one-third of the plants’ production costs and is collected through a 1-2 percent surcharge on customers’ electric rates. As these plants generate close to 24 TWh annually, this works out to a $240 million annual customer-funded rescue package. Since system flexibility is not really at issue for these plants, was the charge worth it as far as emissions are concerned?

Given the general overcapacity in Midwest electricity generation that drives low wholesale prices, existing generators could likely replace the lost nuclear generation. According to the U.S. Environmental Protection Agency’s regional emissions factors, replacement power from MISO and PJM would emit roughly 1,500-1,600 lbs of CO2e greenhouse gases (GHG) per MWh. Using a social cost of carbon of $36/metric ton, keeping Clinton and Quad Cities online avoids approximately $24-26/MWh in GHG-related damages, well in excess of the $10/MWh support they will be receiving.
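For readers who want to check the arithmetic, the sketch below reproduces it from the figures cited above; the only added ingredient is the pounds-to-metric-tons conversion.

```python
# Check the Illinois arithmetic using the figures cited above.

LBS_PER_METRIC_TON = 2204.62

annual_twh = 24   # combined Clinton + Quad Cities output, TWh/year
support = 10      # rescue package, $/MWh
scc = 36          # social cost of carbon, $/metric ton

# Package cost: 24 TWh/year = 24 million MWh/year, at $10/MWh.
cost_millions = annual_twh * 1e6 * support / 1e6
print(f"Package cost: ${cost_millions:.0f} million/year")   # -> $240 million

# Avoided GHG damages from replacement power at 1,500-1,600 lbs CO2e/MWh.
for lbs_per_mwh in (1500, 1600):
    tons = lbs_per_mwh / LBS_PER_METRIC_TON
    print(f"{lbs_per_mwh} lbs/MWh -> ${tons * scc:.0f}/MWh in GHG damages")
# -> roughly $24-26/MWh, versus $10/MWh of support
```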

Of course, these GHG social benefits don’t all directly help Illinois citizens, as carbon dioxide is a global pollutant. But eliminating local air pollution certainly does. Further combining regional EPA emissions factors with various EPA estimates of the health costs of pollution-related illness and death, retiring the plants would also forfeit $48-$129/MWh in mortality and morbidity reduction benefits! (Of course, this figure depends on how one values the 100-400 lives saved per year.) On an air pollution basis alone, the Illinois nuclear support is a good deal for its citizens.

Compared to California, the Illinois case is much simpler. The nuclear facilities sit in much larger markets that can absorb their inflexibility, and though their generation costs slightly more than prevailing prices, they offer emissions benefits (as well as local jobs benefits) that significantly outweigh the extra income these two plants require to stay open.

What does this all mean for your state?

Looking at the California and Illinois examples, it seems entirely appropriate for policymakers to intervene in the case of existing nuclear plants, either to accelerate retirement or to offer an economic lifeline. The cost/benefit equation for these large plants brings in many factors like emissions, system needs, security, and jobs impacts to name a few, so political deals and fixes like the recent New York $7.6 billion nuclear rescue package are to be expected. Nevertheless, if policymakers want to make pragmatic choices they might examine the following questions quantitatively:

  • What are the climate and health impacts of closing a given nuclear plant? What resources would replace it if retired? How quickly would replacements be needed?
  • What contribution is a nuclear plant making to the reliability and efficiency of running the grid? In an age where flexibility is increasingly at a premium due to low-cost variable resources, can the system efficiently absorb this inflexible baseload?
  • Even though an existing resource may seem cheaper, sometimes building a new power plant is more economical. How much will it cost to prop up an existing plant? Is there an equitable mechanism for doing so that properly accounts for the full balance of costs and benefits of an intervention?

Only after a pragmatic, hard-nosed analysis should a decision on support for existing or new nuclear plants be made. And if the decision is to support a plant in a wholesale market region, care must be taken to ensure that the mechanism for support does not undermine the integrity of the market, or run afoul of federal law. Hopefully, we will maintain the most valuable plants currently at the mercy of an over-capacity grid. Meanwhile, climate advocates should take heart that not every nuclear retirement represents a step back on the mitigation front.

Trending Topics – Wind and solar are our cheapest electricity generation sources. Now what do we do?

A version of this article was originally published on January 26th, 2017 on Greentech Media.

By Mike O’Boyle

For years, many debates on the future of the electricity system centered on getting the balance right between higher costs and lower environmental impacts. But the economics of the renewable energy transition are rapidly shifting. In financial advisory and asset management firm Lazard’s 10th annual report on the levelized cost of energy (LCOE) for different electricity-generating technologies, renewables are now the cheapest available sources of electricity (other than efficiency) even without subsidies, a trend confirmed by similar analyses of wind and solar costs from Lawrence Berkeley National Laboratory. With subsidies, in some places new wind is even cheaper than the short-term marginal costs of existing fossil-fueled plants, raising new questions about whether these plants should be retired early.

It’s looking like we may not have to choose between affordability and environmental impact – a cleaner, cheaper grid may be within reach. Lazard’s analysis flips the question of clean versus cost on its head; for many parts of the country in 2017, we’ll be asking: how much can we save by accelerating the renewable energy transition?

What does Levelized Cost of Energy mean?

Lazard uses LCOE analysis to identify how much each unit of electricity (measured in megawatt-hours, or MWh) costs to generate over the lifetime of power plants. LCOE represents every cost component – capital and financing expenditures to build, operations & maintenance, and fuel costs to run – spread out over the total megawatt-hours generated during the power plant’s lifetime.

Because different plants have different operating characteristics and cost components, LCOE allows us to fairly compare different technologies. Think of it as being able to evenly compare apples to oranges.
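In discounted form, the calculation looks like the sketch below; the plant parameters are placeholders chosen to land near Lazard’s reported wind range, not Lazard’s actual inputs.

```python
# Sketch: levelized cost of energy (LCOE) — discounted lifetime costs
# divided by discounted lifetime generation. Inputs are placeholders.

def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    """Return $/MWh. capex is paid in year 0; opex and output recur yearly."""
    disc_costs = capex
    disc_mwh = 0.0
    for t in range(1, years + 1):
        df = (1 + discount_rate) ** -t
        disc_costs += annual_opex * df
        disc_mwh += annual_mwh * df
    return disc_costs / disc_mwh

# Hypothetical 100 MW wind farm: $130M capex, $3M/yr O&M,
# 35% capacity factor (~306,600 MWh/yr), 25-year life, 8% discount rate.
print(f"${lcoe(130e6, 3e6, 306_600, 25, 0.08):.0f}/MWh")  # ~ $50/MWh
```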

How wind and solar are winning the day

According to Lazard, today’s wind costs are one-third what they were in 2009, falling from $140/MWh to $47/MWh in just seven years:

[Chart: wind cost declines]

Lazard’s cost estimates are consonant with reported wind contract prices, with several contracts coming in at the low $20s and high teens per MWh.

However, these numbers don’t tell the whole story. Though wind power is the cheapest it’s ever been, wind costs were well below their 2009 peak in the early-to-mid 2000s, mostly due to low labor and commodity prices. Wind costs also vary widely depending on regional wind resources – from the low $30s per MWh in the Great Plains and Texas to the mid-$60s in California and the Northeast – and integration costs are small, in the range of $2-7/MWh.

Utility-scale solar’s cost declines have been even more dramatic, falling 85 percent since 2009 to today’s range of $46-61/MWh:

[Chart: solar cost declines]

Lazard’s solar costs are confirmed by a compilation of 2014-2015 solar prices from Lawrence Berkeley National Lab, with several contracts coming in well below $40/MWh, taking advantage of the 30 percent investment tax credit. As with wind, sunlight availability varies by region. For example, very little solar gets built in the Northeast, where a solar plant generates only 68 percent of what it would in the Southwest and California at similar cost.

As old plants retire, utilities should compare wind and solar with the cheapest form of new conventional fuel-fired generation today – natural gas-fired combined cycle power plants, with an LCOE range of $48-78/MWh:

[Chart: levelized cost of energy 2016]

The case for wind and solar as the grid’s cheapest resources becomes even clearer when federal subsidies are considered: Tax credits drive renewable energy’s costs down to $17-47/MWh for wind and $37-49/MWh for solar.

The all-in price of wind is not only cheaper than building new natural gas plants in most of the country; new wind also beats some fossil fuel power plants on their marginal cost (i.e., costs for operating, maintaining, and fueling) alone. In other words, in a significant number of places it’s now cheaper to build new wind energy than to simply continue running an existing coal or nuclear plant, and all-in costs for solar are not far behind:

[Chart: all-in renewable energy costs vs. fossil fuel marginal costs]

This precipitous drop in wind and solar prices means utilities and their regulators need to keep up on the latest numbers, or else they’ll be driving blind when deciding investments in new infrastructure and whether existing plants should be retired.

How can this impact overall system costs?

Like any generation technology, wind and solar impact the dynamics of the whole system, including what resources and infrastructure are needed as complements. But claims about the integration costs of variable resources like wind and solar are often overstated. A recent study from the National Oceanic and Atmospheric Administration (NOAA) used high-quality, granular hourly weather data to find complementary wind and solar resources and considered what would happen if they were linked with a national high voltage direct current (HVDC) transmission backbone. They found that 80 percent zero-carbon generation, including over 50 percent wind and solar, could provide reliable service at lower cost than today’s power mix without increasing hydro or battery storage capacity.

It turns out Lazard’s cost numbers already approach the lowest cost assumptions from the NOAA study, meaning NOAA’s most generous assumptions about cost declines by 2030 are nearly a reality in 2017. For example, in its “low renewable high gas cost” scenario, NOAA used $1.19/watt for solar’s capital plus O&M costs. In Lazard’s recent report, solar capital plus O&M costs have already fallen to $1.44-$1.59/watt in 2016. This highlights the need for models, and the policymakers who use them, to keep cost data up to date. Failure to do so could lock in hundreds of billions of dollars of investment in uneconomic natural gas infrastructure.

The costs of integrating high shares of wind and solar become much higher without the HVDC backbone, as resource-poor regions can’t access the windiest and sunniest (and thus cheapest) places to generate power. Costly, drawn-out processes for siting transmission and power plants, and for allocating the costs of multi-state transmission lines, further drive up project costs and stifle investment. Policymakers can turn to America’s Power Plan for recommendations on streamlining the siting process and limiting local impacts, to create policy that reduces siting costs for renewables in the U.S.

How should policymakers and utilities use these new numbers?

Even with these new numbers, more natural gas plants are being built every week, expensive coal is not retiring as fast as economics would dictate, and the transition to renewable energy isn’t happening fast enough to prevent the worst effects of climate change.

Two misconceptions limit renewable energy deployment even in the face of wind and solar’s drastic cost declines: 1) misguided alarmism about the reliability of renewables, and 2) misconceptions of the cost of running the grid with more renewables.

Managing America’s grid with variable renewables also requires rethinking how we operate and plan our electricity systems to provide reliable service, and many utilities and wholesale markets have been slow to adapt. Grid operators sometimes claim we need to back up solar and wind with an equal ratio of thermal generators or storage for “when the sun doesn’t shine and the wind doesn’t blow,” but the NOAA study shows additional transmission and better regional coordination can do the job at much lower cost. National Renewable Energy Laboratory (NREL) analysis corroborates NOAA’s findings, showing we could quadruple the amount of wind and solar on the grid today without reliability issues.

Besides reliability, many argue wind and solar come with insurmountable integration costs, i.e., backup generation and transmission lines to connect remote locations to the grid. These costs are real, but estimates of wind integration costs generally fall within a modest $2-7/MWh range even at high penetrations:

[Chart: wind integration costs. Source: U.S. Dept. of Energy, 2015 Wind Technologies Market Report]

This variation in integration cost studies reflects the difficulty of attributing integration costs to any one technology across a big grid, where a diverse mix of power generation linked by robust transmission naturally smooths variability – much like an index fund versus a single volatile stock, for example.

Conventional thermal generation like gas, coal, and nuclear power also requires new transmission, fuel supply and storage, and large backup reserves. These can also be counted as “integration costs,” even before accounting for the health and climate costs of carbon dioxide and other pollution, which can exceed $25/MWh for natural gas and $60/MWh for coal.

A new paradigm

Transitioning our electricity sector away from fossil fuels is no longer just an environmental imperative; it’s an economic one. Free markets now favor solar and wind – look no further than gas-rich Texas for evidence. Texas has more than three times the wind capacity of any other state, and its solar capacity is expected to grow 400 percent by 2022.

Outdated policies leave us unprepared to take full advantage of the rapid cost declines we’re seeing in the wind and solar industry. Failure to adapt to rapidly changing cost numbers will result in uneconomic investments that lock in emissions. The time has come to adopt a paradigm where wind and solar form the backbone of our electricity grid.