A year-end update on electricity policy from the field

A version of this article was published in Greentech Media on December 28, 2017.

By Sonia Aggarwal

The electricity sector’s competitive dynamic completely flipped in 2017. Not only is it cheaper to build new wind and solar than new coal (and often new natural gas), but in growing swaths of the country it is now cheaper to build new wind (and sometimes solar) than to keep running existing coal plants. The implications are profound.

Utilities in Missouri, Wisconsin, Minnesota, and other states have proposed early shutdowns of coal facilities, pointing to real customer affordability benefits from switching to cleaner power sources. Once-profitable merchant coal plants in Texas and Massachusetts retired or announced retirement, unable to compete with cheap natural gas and renewables. Promising work on transition support for coal communities is gaining momentum. And despite federal-level attempts to roll back policies that protect public health from energy-related pollution, only twelve states are at risk of missing their Clean Power Plan targets. The electricity market has shifted, and the policies and institutions that govern it need to do more work to catch up.

When we started America’s Power Plan five years ago, we hypothesized that the electricity sector’s institutional and market structures were not set up to support an affordable, reliable, majority-clean energy system that adequately supported utility customers through the profound grid transformation underway. We were, of course, not the only ones – great thinking was going on nationwide about which policy and institutional changes might be required to support the new system, so we teamed up with around 200 of America’s electricity policy experts to think through a comprehensive plan for the next phase of this system evolution. Our top recommendations included refocusing the utility business on performance rather than capital spending, promoting wholesale markets that expose the value of flexibility, supporting proactive transmission policy, and setting smart rate design that charges customers a fair price for grid services they consume while paying them a fair price for those they provide.

Many of these changes are underway in leader states and regions, and we are hopeful progress will continue. For specific insights, we asked experts in the field, many of whom have been involved in America’s Power Plan from the beginning, to comment on changes they are witnessing in the field and what they are hopeful about for 2018. We focused on five topics: the implications of the changes in relative costs of electricity technologies, new utility models, wholesale power markets, transmission policy, and customer rate design. Here’s what we heard from some of our nation’s brightest minds.

What was the most surprising implication of the changes in relative costs of electricity sources in 2017? 

“For the U.S., I view 2017 as the year in which conventional wisdom has come around to the idea that switching to clean energy could save utility customers money. Specifically, we have seen clean energy costs fall to the point that the total cost of electricity from new clean energy (with federal subsidies) in much of the country could actually be lower than the cost of burning additional fuel in operating coal plants.” – Uday Varadarajan, Principal, Climate Policy Initiative

What are you most hopeful for in terms of anticipated price dynamics in 2018?

“We are adding lots of clean power to a world with flat or declining demand, and the new power has zero marginal cost, so low wholesale prices dominate. Many of the ancillary services will also be cheap for some time to come, simply because of surplus supply. These trends should last a few years, to the benefit of customers and the environment.” – Hal Harvey, CEO, Energy Innovation

What do you think was the most exciting development in terms of utility business models in 2017?

“2017 gave us signs battery storage will become economically compelling for utilities and their customers.  We also saw hints of the value of grid-edge data analytics, and a new debate on defining grid resilience. Channeled through the concept of “grid modernization,” these developments may offer utilities a significant investment strategy for environmental and financial sustainability, while also driving program innovation.  If utilities can leverage these transformational elements with an emerging responsiveness to customers – particularly commercial and industrial customers – 2018 may be the year power sector uncertainty turns to clarity and confidence.  In 2017, we also saw the start of a new interest in the electrification of transportation, which can offer both a new grid management resource and a significant revenue growth opportunity for the power sector.  If that interest turns into investment and deployment of smart charging infrastructure, that confidence and clarity will become flat-out excitement and enthusiasm.” –Tanuj Deora, Chief Content Officer, Smart Electric Power Alliance

What are you most hopeful for in terms of new utility models in 2018?

“Looking to 2018, I’m hopeful for the continued proliferation of innovative financial mechanisms that can create real win-wins for everyone, including the utility’s shareholders. Some I have my eyes on: capital recycling mechanisms that enable utilities to retire coal and invest in renewables and piloted incentive approaches in NY and California that reward the utility for pursuing DER alternatives to conventional infrastructure.” –Virginia Lacy, Principal, Rocky Mountain Institute

What do you think was the most exciting development in electricity markets in 2017?

“Ameren Missouri announced a massive increase in wind and solar plans from 120 MW previously to 750 MW based on falling costs and large customer demand.  This illustrates that even in a context with no federal or state climate policy and no direct consumer market access, supply and demand pressures will result in renewable energy growth.”  – Rob Gramlich, President, Grid Strategies LLC

What are you most hopeful for in terms of anticipated changes to wholesale markets in 2018?

“For 2018 and forward I certainly hope stakeholders will continue to trust and allow (wholesale) markets to work and to evolve. Short-term, interventions for specific outcomes may be tempting, but long-term, well-designed markets are more efficient at providing reliability at the lowest possible cost.” – Ake Almgren, vice-chair of the board for PJM Interconnection

What do you think was the most promising development in transmission in 2017? 

“In the view of Americans for a Clean Energy Grid, the most exciting development in transmission in 2017 was the announcement of American Electric Power Corporation’s intent to build a dedicated 300-mile 765 kV transmission line to deliver energy from the “Wind Catcher” project, 800 state-of-the-art wind turbines in the Oklahoma panhandle with two gigawatts of capacity, into the Tulsa region and from there into markets in Oklahoma, Texas, and Louisiana.  The largest project of its type, this announcement underlines the attractive combined economics of renewable energy and high-voltage transmission, competitive even in the fossil fuel industry’s principal domestic production region, as well as the leadership of a major traditional electric utility committing to delivering clean energy through an expanded, modernized, and integrated high-voltage grid as part of its strategy going forward.” – John Jimison, Executive Director, Americans for a Clean Energy Grid, Inc.

What are you most hopeful for in terms of progress on transmission siting in 2018?

“In 2018 we will be looking for more utilities to respond directly to stakeholder concerns during the siting process. There are a lot of great stories out there of utilities adapting proposed projects to better meet the needs and preferences of local communities. But a few bad actors remain. As we look for ways to avoid the protests and prolonged court battles of recent years, it will be up to utilities and key stakeholders to forge the compromise that makes big projects possible.”  – Johnathan Hladik, policy program director, Center for Rural Affairs

What do you think was the most exciting development in electricity rate design in 2017? 

“Many states rejected increases in fixed charges, and embraced time-varying pricing of some sort.  This helps put electricity pricing in the same realm as pricing for unregulated commodities like groceries, gasoline, and hotel rooms, where prices vary as costs change and demand varies, but consumers pay only for what we use. Utilities as far-flung as Honolulu and the UK introduced “sunshine” rates, lowest during the middle of the day when solar production is highest, reflecting the fact that renewable energy is quickly becoming the economical choice for electricity supply.”  – Jim Lazar, senior advisor at RAP

What are you most hopeful for in terms of anticipated changes to electricity rate design in 2018?

“I look forward to building on work to use anonymized customer usage data to research rate design questions. Combining this data with other publicly available data will help us look at how changes in rate design will impact specific groups of customers, such as economically disadvantaged customers or customers with particular types of housing needs. I am also looking forward to exploring the value of resources like community solar and optimizing rate designs to ensure the value is captured and appropriately shared among utility customers.” – Kristin Munsch, Deputy Director of Illinois Citizens Utility Board

***

These insights cover many important power sector trends underway, and shed light on priorities for the coming year.  It is clear the electricity sector is in a period of rapid transformation.  Get in touch and tell us what you’re most interested in hearing about next year!

Trending Topics – Resilience in a clean energy future

A version of this article appeared on Greentech Media on November 29, 2017.

By Mike O’Boyle

Resilience may be the most trending topic in today’s electricity sector.  The Department of Energy’s (DOE) report on the impacts of baseload retirements and its subsequent Notice of Proposed Rulemaking (NOPR) to subsidize baseload units for the resilience they allegedly provide the U.S. power system raised the question not only of whether 90 days of fuel onsite improves resilience (two experts from America’s Power Plan say no), but more fundamentally: what is resilience, and how can it be measured?

Answers are conspicuously absent from DOE’s analyses and attempted rulemaking, but DOE is not alone – the questions FERC posed to stakeholders responding to the DOE NOPR reflect the same uncertainty.

Despite the certainty expressed by DOE, stakeholder comments have confirmed the electricity system lacks an agreed-upon definition or metrics for resilience as a concept separate from reliability.  Furthermore, it’s unclear that either requires action from FERC – the North American Electric Reliability Corporation (NERC) already ably regulates reliability and resilience of the bulk system.  Still, bulk and distribution system regulators are receptive to calls for a more resilient grid in the face of increasingly intense weather events, greater economic reliance on continuous electricity service, a more variable and distributed generation fleet, and greater threats of cyber and physical attack.

In fact, resilience is increasingly a focus for state-level utility stakeholders, particularly in the context of grid modernization.  At the 2017 NARUC Annual Meeting, three hours of subcommittee meetings discussed grid resilience, and a general session, ominously titled “Mother Nature, the Ultimate Disruptor,” addressed efforts to improve resilience across critical infrastructure including the grid. So taking stock of what we know, and what we don’t, about resilience is useful before approving large-scale investments or payments to enhance grid resilience that may exacerbate the problem.

How resilience differs from reliability

Reliability and resilience are intertwined and often conflated, making reliability a good place to start. NERC, which has FERC-delegated authority under the Energy Policy Act of 2005 to create and enforce reliability standards for electric utilities and grid operators, defines reliability as a combination of sufficient resources to meet demand (adequacy) and the ability to withstand disturbances (security).  To hold reliability authorities accountable, NERC monitors the ability of reliability coordinators to respond to generation or transmission outages. For example, NERC penalizes excessive deviations from system frequency and voltage, two leading indicators that system operators may have inadequate resources to respond quickly to unforeseen supply and demand imbalances.

A common, accepted measure of adequacy is the percentage of capacity in excess of projected or historical peak demand for that system, although precise adequacy standards differ between reliability regions, subject to NERC approval. Adequacy also includes essential reliability services like frequency and voltage support, and will increasingly require a focus on flexibility as more wind and solar come online.
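The adequacy arithmetic described above is simple in its basic form. The sketch below illustrates a reserve margin calculation; the numbers and function name are invented for illustration and do not come from any reliability region’s actual standard:

```python
def reserve_margin(installed_capacity_mw: float, peak_demand_mw: float) -> float:
    """Reserve margin: capacity in excess of peak demand, as a percent of peak."""
    return 100.0 * (installed_capacity_mw - peak_demand_mw) / peak_demand_mw

# Hypothetical system: 115 GW of capacity against a 100 GW projected peak.
margin = reserve_margin(115_000, 100_000)
print(f"Reserve margin: {margin:.1f}%")  # 15.0%
```

Actual adequacy targets are set probabilistically (for example, around a loss-of-load expectation), so the required margin varies between reliability regions.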

Security is harder to measure, as it reflects a preparedness to endure uncertain external forces.  Modeling and thought exercises help, but the impacts of low-probability, high-impact events remain difficult to predict until they occur.  NERC is on the case, promulgating cyber security, emergency preparedness and operations, and physical security standards to ensure grid operators and utilities are prepared for attacks or blackouts.

The impacts of inadequate resources or security against anything from hurricanes to squirrels to cyberattacks can be measured in terms of outages.


Reliability is generally measured in terms of the system average duration and frequency of outages (SAIDI and SAIFI), with different permutations based on whether the system average or customer average is more important to the reliability regulator.
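In simplified form, these indices reduce to straightforward arithmetic over a year’s outage records. The outage data and function below are hypothetical, for illustration only:

```python
def saidi_saifi(outages, customers_served):
    """SAIDI: average outage minutes per customer served.
    SAIFI: average number of interruptions per customer served.
    `outages` is a list of (customers_affected, duration_minutes) tuples."""
    customer_minutes = sum(n * dur for n, dur in outages)
    customer_interruptions = sum(n for n, _ in outages)
    return customer_minutes / customers_served, customer_interruptions / customers_served

# Hypothetical year: three outages on a 50,000-customer system.
outages = [(10_000, 90), (2_000, 240), (500, 30)]
saidi, saifi = saidi_saifi(outages, 50_000)
print(f"SAIDI = {saidi:.1f} min, SAIFI = {saifi:.2f} interruptions")
```

Customer-centric variants (such as CAIDI, the average duration per interrupted customer) divide by customers affected instead of customers served.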

As a more expansive concept than reliability, resilience encompasses consequences to the electricity system and other critical infrastructure from high-impact external events whose likelihood was historically low, but is now increasing.  Reliability metrics like SAIDI and SAIFI generally make exceptions for extreme weather events when measuring utility performance – whereas resilience is often articulated as a grid attribute that improves response to such events.

Source: National Academies of Sciences, Engineering, and Medicine, “Enhancing the Resilience of the Nation’s Electricity System,” (2017)

The DOE-supported Grid Modernization Laboratory Consortium (GMLC) explores the concept of reliability metrics for “critical customers” as resilience indicators.  A resilient grid may go down for some time, but preserving or prioritizing restoration for critical customers like hospitals, water and sanitation systems, first responders, communications towers, and food storage is important.

Resilience is also additive to reliability around system recovery.  Resilience recognizes that low-probability, high-impact events will inevitably cause outages – the key is investing in infrastructure that reduces the duration, cost, and impact of outages on critical services.  New technologies that increase system awareness and automation, particularly on the transmission and distribution system, allow downed circuits to be rapidly islanded before failures cascade into other parts of the distribution system, power to be rerouted while reliability issues are isolated, and the grid to be hardened against new threats.

Despite new research into resilience, reliability regulators (particularly NERC) already perform most of the work needed to ensure the grid, particularly the bulk power system, is resilient.  The ways in which resilience can be additive as a concept are few but important, and fall to specific applications like severe weather events, continued service for critical infrastructure, and improved recovery through grid awareness.  Attempts to improve resilience through rulemaking that focuses on fuel security or resource adequacy miss the point – these elements of service have been improving steadily for years and can be procured in a technology-neutral way, with a capable NERC at the helm.

Resilience through the energy transition

Economic forces and policy priorities are driving a transition to a cleaner, more variable power mix. Meanwhile, customers are becoming more participatory.  Each of these transitions affects resilience in positive and negative ways.

We know that damage to the distribution system caused the vast majority of outages over the last five years.  Conceptually, the availability of fuel to power plants could also be a cause, but data show it is a very unlikely one – only 0.0007 percent of outages were caused by fuel security issues.

Still, the transition away from fuel-based power to higher shares of renewable energy may affect bulk power system reliability and resilience in both positive and negative ways.

For human-caused events such as cyber or physical attacks, renewable energy removes significant fuel supply risk.  Coal relies heavily on rail for delivery, which is subject to physical attacks, since roughly 40 percent of U.S. coal comes out of Wyoming’s Powder River Basin, nearly all via the 103-mile Joint Line rail corridor.  Nuclear plant destruction during operations could be potentially catastrophic.  The natural gas delivery system is vulnerable to cyber and physical attacks, though some delivery will continue if one line is disrupted.  Converting to renewables avoids these fuel security issues; however, cost-effective integration of a high share of utility-scale renewables depends on increasing transmission system capacity to deliver energy where it is needed and balance out geographic variability. Taking down one or two lines could disrupt the system’s ability to balance, either on a regional or interconnection-wide basis, hampering reliability until threats were addressed.

Natural events, particularly weather-related events, must also be considered.  Hydroelectric generation is vulnerable to drought for periods of months, while cloud cover from intense storms and hurricanes threatens solar availability for days.  Extreme winds may force curtailment of a portion of the wind fleet for short periods of time.  As we saw during the 2014 Polar Vortex, coal piles on hand can freeze, and co-dependence on natural gas for heating and generation during extreme cold can threaten resource availability.  Prolonged heat waves can leave nuclear plants unusable if cooling water is too hot.

With respect to outage recovery, combining inverter-based storage and generation may be more effective than baseload at performing a black start, since spinning masses would not need to be synchronized, though we lack practical examples of restarting with very low spinning mass.  As Amory Lovins recently wrote, nuclear plant performance restarting after the 2003 Northeast Blackout was abysmal – it took weeks to get them back online to full capacity.

The other element of transition is distribution system resilience as grids increasingly rely on distributed, small-scale devices to provide services that complement centralized, utility-scale generation and contribute to a smarter, more connected, and more automated distribution system.  Connected devices are helpful in identifying and isolating threats on the grid while preventing cascading failures and improving restoration, but may open up the system to more widespread cyberattacks.  Local generation can add resilience to natural events – the Borrego Springs microgrid pilot in SDG&E’s territory allows a remote community to disconnect from the larger grid and maintain critical services during wildfire and high wind seasons, which threaten a critical transmission line from generation closer to load centers on the coast.

Lessons for policymakers

Resilience centers on withstanding and recovering from high-impact events.  Policymakers can largely trust the existing reliability apparatus to cover resilience related to the bulk power grid.  In particular, NERC already provides standards for cyber security, and NERC’s Essential Reliability Services Working Group is working to quantify the services needed to maintain and improve reliability and resilience.

Still, NERC covers only the bulk electricity system. Restoring the distribution grid implicates other infrastructure, like gasoline supplies and roads for delivery trucks, while critical service providers in turn rely on electricity service.  Disaster preparedness is something utilities and their regulators take seriously – but creating a cross-agency planning process could help improve and align agencies’ responses to threats.

Instead of duplicating NERC’s efforts, state policymakers can focus on grid modernization to deliver a resilient and flexible last mile of customer delivery.  Knowing what they’re paying for is crucial to adopting cost-effective resilience assessments that balance cost and disaster preparedness.  To ensure cost-effective resilience, policymakers should develop resilience metrics for the distribution system tied to measurable outcomes, starting from Resilience Analysis Process work already performed by Sandia National Labs (SNL).  SNL’s seven-step process develops and routinely updates resilience metrics in light of new modeling and actual system disruptions.

Source: Sandia National Labs

“Getting the Most out of Grid Modernization” is a five-step framework from America’s Power Plan to help policymakers turn metrics into action and hold utilities accountable for delivering resilience and other customer value.

All of this takes place in the context of a dramatic energy transition toward more connected distributed devices and variable, fuel-free generators.  Once resilience attributes and metrics that go beyond reliability are developed, they can begin figuring into assessments of the cost and reliability of future high-renewables systems.  Where benefits are identified, they should be incorporated into plans; where gaps are found, utilities and other market makers should identify technology-neutral system attributes, such as flexibility, to shore up resilience.

Trending Topics – Wholesale markets need reform, but flexibility, not resilience, is the key

A version of this article was posted on Greentech Media on October 31, 2017.

By Eric Gimon

U.S. electricity markets face scrutiny over revenue problems and reliability concerns as greater amounts of renewable energy and distributed resources come online, particularly after the Department of Energy’s (DOE) Notice of Proposed Rulemaking, but coal and nuclear subsidies to boost “resilience” miss the main challenge facing wholesale markets – the need for grid flexibility.

Flexibility, not fuel on hand, is at the core of what it means for a grid to be reliable or resilient.  Yet markets and regulators have been particularly slow to evolve where restructured wholesale markets rule and the “invisible hand” of the market is meant to provide secure and efficient real-time balancing of supply and demand, resource adequacy, and long-term cost recovery for system resources.  Markets are powerful tools for finding least-cost resources to meet physical grid needs, but they tend to favor incumbent generation over variable resources and flexible demand- and supply-side resources, against the near- and long-term interests of consumers.

To reach a clean, resilient, affordable future, markets must evolve to value flexible resources – the key to reducing integration costs for variable resources.  Two recent reports from America’s Power Plan (APP) outline how markets can evolve in the short- and long-term to cost-effectively integrate ever higher amounts of variable renewable generation like wind and solar.

Flexibility is the coin of the realm

Utility-scale and distributed renewable energy resources are on a tear.  A recent Lawrence Berkeley National Lab report found utility-scale solar total installation costs have dropped 80 percent since 2010, and residential solar systems have fallen 60 percent over the same time.  DOE’s SunShot goal of $1/watt utility-scale solar has been met three years ahead of schedule, and new wind power is coming in below $20 per megawatt-hour (MWh), cheaper than running many coal plants.   But while economics and environmental goals drive increasing renewable generation, variable resources challenge our existing frameworks for grid management and investments.

One key resource need stands out for both the near-term and long-term evolution of our electricity grids and wholesale markets: flexibility.  Flexibility broadly means the grid’s ability to adjust generation dispatch, reconfigure transmission and distribution systems, and modulate demand to accommodate predictable and unpredictable imbalances between supply and demand.  Flexibility is an aggregate quality of networked grids, combining both the technical capabilities of all connected devices and the system’s ability to efficiently coordinate them.

Current trends have increased the need for flexibility and the opportunity to make more of it available to grid operators.  New variable generation like solar and wind are one of the biggest drivers of need for more flexibility, but aging infrastructure and inflexible power plants, outdated utility business models, and our society’s increasing dependence on reliable electric service also demand a more flexible grid.

Opportunities to unlock flexibility are everywhere: new and more flexible gas plants, storage deployed at all scales, power electronics to regulate the output of wind and solar as well as transmission and distribution assets, and a constellation of connected devices ready to consume electricity more intelligently.  Expanding the “balancing area” geography over which supply and demand are balanced helps too.

From NREL’s The Value of Energy Storage for Grid Applications

Restructured wholesale electricity markets, which dominate America’s electricity landscape but work best by avoiding specific technology mandates, need to find new and improved ways to surface the value of flexibility and allow current and future market participants to provide it at least cost.

Getting more flexibility today

A new research paper from APP experts Robbie Orvis and Sonia Aggarwal, “A Roadmap for Finding Flexibility in Wholesale Markets,” highlights best practices for market design and operations in a high-renewables future, focusing on ways policymakers can unlock more cheap flexibility.

The paper identifies main challenges to integrating renewables in wholesale markets: managing predictable and unpredictable variation on the bulk system, and doing the same with distributed renewables.  For utility-scale renewables, predictable variability means knowing when a wind front or windy season is coming, or when the sun is rising or setting, with associated net load ramps.  Unpredictable variability comes from sudden weather changes, like unexpected multi-day lulls.  For distributed renewable energy resources like rooftop solar or demand response, unpredictability is a function of exogenous factors like the weather, but also reflects how opaque the distribution system is to bulk system operators.  Distributed assets like PV, efficiency, or demand response can behave predictably, but grid operators need to have more data about where DERs are on the system, what kind of DERs they are, and how they are programmed.
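The predictable side of this variability can be made concrete with a small net-load calculation: subtracting wind and solar output from demand reveals the ramps the rest of the fleet must cover. The hourly figures below are invented for illustration, not drawn from any actual system:

```python
# Hypothetical hourly demand and renewable output, in GW, for six sample hours.
demand = [30, 28, 33, 38, 40, 36]
solar  = [0, 6, 10, 4, 0, 0]
wind   = [5, 4, 3, 3, 4, 6]

# Net load is what dispatchable resources and demand flexibility must serve.
net_load = [d - s - w for d, s, w in zip(demand, solar, wind)]

# The largest hour-to-hour upward ramp is a rough measure of flexibility need:
# here, the evening hours when the sun sets while demand rises.
max_ramp = max(b - a for a, b in zip(net_load, net_load[1:]))
print(net_load, max_ramp)  # [25, 18, 20, 31, 36, 30] 11
```

Even this toy example shows the characteristic pattern: midday net load dips as solar peaks, then ramps steeply into the evening.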

Flexibility is the key ingredient to manage each of these challenges, empowering wholesale markets with the ability to automatically adapt to variations in net-load in cost-efficient and reliable ways.  Best practices from across U.S. wholesale markets illuminate the near-term path forward:

  • Fix market rules to unlock flexibility of existing resources

In one example, system operators can create a net generation product for distributed resources that enables aggregators to participate via fleets, making the size threshold as small as possible.  NYISO’s Behind-the-Meter net generation resource allows behind the meter storage to participate in wholesale electricity markets, including being dispatched beyond the meter.

  • Create and modify products to harness the flexibility of existing resources and incent new flexible resources

Higher scarcity pricing and reserve adders are one of many ways to do this.  ERCOT’s high scarcity price and Operating Reserve Demand Curve adder creates additional value for flexible units during times of system stress.  Where necessary, system operators can create new products for flexibility or products that reward flexible resources, even if just for a limited number of years.  Finally, system operators can pay for reliability services that are of increasing importance but are currently uncompensated, like frequency response.
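The logic of a scarcity-price adder can be sketched in a few lines: as available reserves shrink toward a minimum level, the adder climbs toward the value of lost load, rewarding resources that can flex at exactly those moments. The decay function, VOLL value, and thresholds below are toy assumptions, not ERCOT’s actual ORDC parameters:

```python
import math

def ordc_adder(reserves_mw, voll=9_000.0, min_reserves_mw=2_000.0, scale_mw=1_000.0):
    """Toy operating-reserve demand curve adder ($/MWh).

    At or below the minimum reserve level, price the full value of lost
    load (VOLL); above it, scale VOLL by a stand-in loss-of-load
    probability that decays as surplus reserves grow."""
    if reserves_mw <= min_reserves_mw:
        return voll
    lolp = math.exp(-(reserves_mw - min_reserves_mw) / scale_mw)
    return voll * lolp

for reserves in (2_000, 3_000, 5_000):
    print(f"{reserves} MW of reserves -> adder ${ordc_adder(reserves):,.2f}/MWh")
```

The real ORDC derives its loss-of-load probabilities from historical reserve statistics, but the qualitative effect is the same: flexible units earn the most when the system is tightest.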

A clean, high renewables future is within sight. Policymakers need only look at the best practices of their colleagues around the country to understand how to manage the transition as it happens.

Paying for flexibility and other resources in the future

Over the long term, however, more significant structural changes are likely required to integrate low-cost renewables and manage the major resource base transition animating wholesale markets.  Wholesale markets will need to reach a stable end-state where they can successfully manage real-time dispatch, resource adequacy (especially flexibility) and long-term cost recovery for clean resources.

A second APP research paper, “On Market Designs for a Future with a High Penetration of Variable Renewable Generation,” offers two possible paths for wholesale electricity markets to manage three key challenges in a high renewables future:

  • How will the market pay for the long-term provision of electricity when marginal costs are zero much of the time?
  • How do grid operators make operational decisions of which zero-cost assets to dispatch in times of surplus?
  • What will be the roles of distributed resources, especially the controllable ones? What price signals will they follow and how will they be dispatched?

The first path is an evolved version of today’s markets, which becomes increasingly dependent for its viability on flexibility from storage and the demand side’s ability to shift consumption.  The second path splits the market into a long-term “firm” market, which covers most consumer needs, and a “residual market,” which operates much like today’s spot markets but trades in both withdrawals and injections of electricity against the “firm” market deliveries.

These paths address the challenges above in diverse ways but they also overlap by leaning on long-term contracting, a natural way to align with the investment needs of capital-heavy fuel-light assets.  They also both avoid capacity remuneration mechanisms commonly seen today – in other words, capacity markets.

In addition, the two general paths above for the evolution of wholesale market design in a future with a high penetration of variable renewable energy and distributed resources reveal several important themes:

  • Alignment: Markets must be aligned with the physical and financial realities of the underlying system for market design to be viable.
  • Optimization: Markets need the tools to optimize both near-term dispatch and long-term investment in grid assets.
  • Risk Management: Markets must be able to add value by managing risk – shifting risk from one set of parties to another (e.g., from customers to generators) and reducing risk through pooling (lowering costs as well).  Any future market design needs to provide this function.

Conclusion

The transition from today’s legacy grid into a low-cost, low-carbon engine for our future economy is an especially pronounced challenge for restructured wholesale electricity markets because they function through a combination of direct regulatory interventions and dynamic market forces.  The two new APP research papers focus on the start and finish of this transition.  They identify the changes needed to enable and manage meaningful increments of renewable generation today, and provide a vision for how the future grid might function, consistent with engineering and financial realities.

But another important part of overcoming this challenge is managing a timely transition to a cleaner grid in which many fossil-fueled assets will need to retire before the end of their useful lives, and in which the grid may sometimes be long or short on some of the resources it needs.  Lessons from today and a vision for the future help us on the path, but much work remains to be done.

Analyses from diverse stakeholders show DOE’s proposed rule is off-base

On September 29, the Department of Energy released a Notice of Proposed Rulemaking (NOPR) that would bail out unprofitable coal and nuclear plants.  The notice argues that generators with fuel on-hand are necessary for reliability and resilience, though these claims are demonstrably false.  Nevertheless, FERC has acted on DOE’s proposal and seeks comments by October 23 to determine whether it becomes a FERC regulation.  FERC’s set of questions for stakeholders can be found here.

We created America’s Power Plan to deliver helpful analysis to policymakers to support the energy transition to a clean, affordable, resilient grid.  Though two of the three sitting FERC commissioners (Powelson and LaFleur) have publicly indicated they do not support the NOPR approach, this is no guarantee their decision will defend well-functioning wholesale markets and incorporate the best evidence that a renewable energy future is resilient and affordable.  It will be important to build a robust record that supports analysis from the DOE’s own report on reliability and resilience published in July—markets are more-than-adequately supporting reliability, and customers should benefit from lower costs as clean energy comes in to undercut other resources.  A forthcoming report in October from APP experts Robbie Orvis and Sonia Aggarwal will focus on solutions already underway in wholesale markets to value flexibility, a key ingredient of resilience.

To help our readers who want to get involved in this debate gather the best arguments out there, here are some resources from the experts of America’s Power Plan and others.

Resources from APP Experts:

The Department of Energy’s Notice of Proposed Rulemaking (NOPR) to FERC, directing the Commission to issue new tariff rules that reward certain (coal and nuclear) resources for so-called “resilience” benefits, fails to demonstrate how it will improve resilience while threatening to upend the very markets it purports to protect.

The nearly unprecedented NOPR requires FERC to establish a tariff and “recovery of costs and a return on equity” for plants that have “a 90-day fuel supply on site,” which they argue would enable the plants “to operate during an emergency, extreme weather conditions, or a natural or man-made disaster.” According to the NOPR, “compensable costs shall include, but not be limited to, operating and fuel expenses, costs of capital and debt, and a fair return on equity and investment.”

When old, established industries are threatened by new, better technologies, they often go running to Washington for special protections. It is an old practice, generally taxing the common good for private interests. Unfortunately, the U.S. Department of Energy has set a new record for gall in this practice in a fairly stunning move that would impose a new tax on electricity consumers and roil America’s power markets for years to come.

Here’s the story: Renewable energy — especially wind and solar — has plummeted in price. Today a new wind farm, for example, is often cheaper than just the operating costs of an old coal power plant. Cheap natural gas creates additional price threats to existing coal or nuclear. And these favorable economics for renewables and gas don’t even count the public benefits they create through clean air, reduced greenhouse gas emissions and avoided fuel price spikes. . . . Click to read more

What analysts are saying

ICF forecasts DOE’s proposal could cost ratepayers between $800 million and $3.8 billion annually through 2030, and reduce development of new natural gas-fired capacity by 20-40 gigawatts.


Rhodium Group says only 0.0007 percent of nationwide power disruptions over the past five years were due to fuel supply problems; the vast majority were the result of severe weather damaging transmission and distribution systems.

What stakeholders are saying

From the Media

Analysis of the DOE Report

Trending Topics – Emerging Lessons on Performance-based Regulation from the United Kingdom

A version of this article was published on Greentech Media on October 6, 2017

By Sonia Aggarwal

Many U.S. states are considering moving from cost of service regulation for utilities toward a regulatory structure that incents efficient fleet turnover, incorporates clean energy and other cost-effective technologies, and stimulates smarter build-or-buy decisions.  These conversations are motivated by aging infrastructure, new customer energy use patterns, innovative competition from third-party service providers, the need for flexibility to accommodate carbon-free variable generation, a recognition of utilities’ unique role in the electric system, and a commensurate desire to ensure they remain financially viable.

Performance-based regulation (PBR) has emerged as a promising potential solution to these challenges for many public utility commissions (PUCs).  Some, like Ohio, Minnesota, and Missouri, have initiated informal discussions or official workshop series on the topic. Others, like Pennsylvania and Michigan, have commissioned or directly conducted research.  And still more, like Rhode Island, Illinois, and New York, have already taken concrete steps in this direction.

A consensus is emerging on the need to improve utility incentives, and the potential for performance-based regulation to meet the task is widely discussed. But regulators interested in PBR are searching for real-life examples where it has worked well.  Luckily, the United Kingdom began moving in this direction a few years ago.  Though the U.K. differs from many U.S. states in its public policy priorities, regulatory capacity, and philosophy, its RIIO program already provides U.S.-relevant lessons.

RIIO – performance-based regulation at work

First, some context: the U.K.’s Office of Gas and Electricity Markets (Ofgem) regulates 14 electric companies and four gas companies, akin to a public utility commission in the U.S.  More than 25 years ago, the U.K. market was restructured by splitting generation and distribution businesses, creating a centralized generation market, and decoupling distribution utility revenues from sales.  In 2010, after a year of gathering stakeholder comments, Ofgem made another set of major changes. These reforms were designed to keep costs low for customers, seeking “better value for money,” and encouraging innovation among remaining monopoly utilities in gas distribution, electricity transmission, and electricity distribution.  Ofgem’s changes “sought to put consumers at the heart of network companies’ plans for the future and encourage longer-term thinking, greater innovation and more efficient delivery.”

The new regulatory structure comprised a multi-year rate plan with a revenue cap plus performance incentives.  The program, affectionately called RIIO (Revenue set to deliver strong Incentives, Innovation, and Outputs; or Revenue = Incentives + Innovation + Outputs), went into effect just over four years ago, and contains several important design features:

  • RIIO extends the time between financial reviews to eight years, with a review after four (the first part of this four-year review is happening now, and inspired this update).
  • RIIO combines utility capital expenditures and operational expenditures into one capped bucket of allowable revenue (a “revenue cap”), and enables a rate of return on the whole (a structure they call “totex” to indicate both capex and opex). This design intends to do two things:
    • First, the revenue cap provides financial incentives for utilities to spend prudently, as they have an opportunity to keep (at least some of) whatever costs they save as profit.
    • Second, the totex reduces the capital bias that can arise from traditional cost of service regulation (which allows a rate of return for capital expenditures but treats operational expenditures as a pass-through). When utilities can only make money from investments, they will systematically choose capital solutions over operational solutions that may be more cost effective—one of the key insights from RIIO for U.S. regulators.
  • RIIO highlights six important goals or “outcomes”, for which it defines quantitative metrics and sets specific targets. RIIO’s outcomes will likely sound quite familiar to U.S. regulators: safety, environment, customer satisfaction, connections, social obligations, and reliability/availability.
  • Beyond the financial incentives created by the revenue cap (discussed above), RIIO adds financial incentives and penalties for each of the outcome categories. These outcome-based performance incentives sum to about 200-250 basis points of incentives for excellent performance or a similar magnitude of penalties for poor performance.  This design feature is intended to motivate utilities to innovate to deliver what customers want out of the utility system.
  • RIIO tracks these outcomes and others via a standardized scorecard, making it easier for stakeholders to follow which goals utilities are meeting or exceeding, and where they may be falling short.
  • RIIO also held aside a pot of funding for innovative projects from R&D through pilots, to kick-start the intended shift in utility culture. In order to be eligible for these funds, utilities must agree to share lessons and ideas generated by the research.
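
As a rough illustration of how the revenue-cap and incentive pieces fit together, the sketch below uses invented numbers (RIIO’s actual parameters differ and are set per utility) to show outcome incentives adjusting a base allowed return within a bounded band:

```python
# Hypothetical numbers, not Ofgem's actual parameters: a sketch of how
# outcome-based incentives could adjust an allowed return on equity,
# bounded to echo RIIO's roughly 200-250 basis point exposure.

def allowed_return(base_roe_bps, outcome_scores, cap_bps=250):
    """Adjust a base return on equity (in basis points) by performance.

    outcome_scores: dict of outcome -> earned incentive (+) or
    penalty (-) in basis points; total exposure capped at +/- cap_bps.
    """
    adjustment = sum(outcome_scores.values())
    adjustment = max(-cap_bps, min(cap_bps, adjustment))
    return base_roe_bps + adjustment

scores = {"reliability": 80, "customer_satisfaction": 40, "environment": -20}
print(allowed_return(600, scores) / 100)  # 7.0 (% return on equity)
```

Strong performance across outcomes lifts the return above the base; poor performance pushes it below, which is the mechanism meant to focus utility attention on what customers want.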

Many of these elements of PBR are under discussion in states around the U.S., but RIIO combines all of them into one holistic change to utility regulation.

Lessons from across the pond

With that context, let’s dig into the lessons from RIIO’s mid-term review.  Ofgem recently released an open letter that begins the discussion about these lessons, based on preliminary evaluation work.

Most important, experience so far supports the notion that revenue caps with totex and carefully calibrated outcome-based performance incentives can drive innovation, stabilize or improve utility profitability, and focus utility attention on the outcomes customers most want.  For example, in the first performance year, many distribution utilities beat forecasts for customer bills, exceeded most of their performance targets, and achieved returns on equity averaging just over nine percent – 300 basis points more than their estimated six percent cost of equity.  Beyond performance numbers, anecdotal evidence also suggests that utilities have shifted their focus toward performance under RIIO.  There is no indication that Ofgem and the U.K. utilities will move away from this regulatory structure after testing it over the last four years.

Lesson for U.S. regulators: It’s worth exploring whether revenue caps, outcome-based performance incentives, and perhaps totex are right for your state.  Make sure the combined financial impact of performance incentives is carefully crafted and just large enough to capture utility management attention.

RIIO’s detailed design also provides important lessons for U.S. regulators looking to move toward rewarding utilities based on performance.  Most of the emerging lessons relate to the difficulty of getting long-term projections right, and the need for automatic calibration along the way.

First, setting the right revenue cap is very challenging in a world of growing uncertainties. For example, will efficiency flatten demand or will electrification kick-start demand growth?  External factors can cut both ways, but in the U.K., Ofgem notes that “forecasts for real price effects in setting allowances…appear in some instances to have resulted in gains for the companies.”  Thankfully, Ofgem designed the cap to share gains or losses between utilities and customers, but still, over the last few years, it is possible the utilities earned more than efficient business practices would have yielded alone under better-calibrated revenue caps.

Lesson for U.S. regulators: Pay attention to important normalization factors (for external factors like GDP, inflation, population changes, or electrification rates) and build in transparent off-ramps and correction factors (for external factors like storms) from the beginning.  Look for ways to share value fairly between utilities and customers.

Second, Ofgem’s mid-term review identified the eight-year length of the multi-year revenue cap as a key source of uncertainty.  Of course, the tension here is between setting targets too far into an uncertain future versus creating a long enough runway for utilities to innovate and deliver desired outcomes.  Ofgem points to rapidly changing technologies and competitive forces on the distribution side, urging a review of the length of the period “given the potential scale of future uncertainty facing network companies.”

Lesson for U.S. regulators: Create programs that last less than eight years, or suggest predefined points for review and adjustment within less than eight years.

Third, though some U.K. utilities are paying penalties for underperformance on outcomes, most are successfully earning incentives for performing well.  Some mix of penalty payments and incentive earnings is to be expected, but it is worth noting that more utilities are performing (and earning) well than poorly on their outcomes.  This may indicate more ambitious performance targets could have been warranted to better share benefits between utilities and customers.

Lesson for U.S. regulators: Information asymmetry is likely to tilt in favor of looser targets for utilities.  It may be worth conducting independent studies of potential to assess whether proposed targets are sufficiently tight. It is less risky for customers to start with small financial incentives and work up, rather than over-incent utilities and then have to squeeze incentives down to the right level.

Fourth, even though Ofgem worked with utilities and stakeholders to define outcome metrics carefully at the program’s start, a couple of instances of ambiguity still arose in the performance period.

Lesson for U.S. regulators: Invest the time up front, before a performance-based program begins, to define outcome-based performance measures clearly and quantitatively.

More lessons will arise as Ofgem continues its midterm review of RIIO, but we hope that these first, emerging lessons will be useful to U.S. regulators considering performance incentive program design questions today.

Trending Topics – Getting the most out of vehicle electrification

A version of this article appeared on Greentech Media on August 23, 2017

By Mike O’Boyle

Electric vehicles (EVs) are on the path to becoming mainstream, thanks to strong policy support and rapid lithium-ion battery cost declines.  BNEF projects 40 percent of new U.S. car sales will be electric in 2030, with EVs cost-competitive without subsidies around 2025.  That’s an extra 24 terawatt-hours (TWh) of new flexible demand – roughly half a percent of today’s U.S. consumption – added to America’s power system annually in just over a decade, a regulatory blink of an eye.  Depending on when EVs charge, that translates to 3-6 gigawatts (GW) of flexible demand-response capacity added each year – roughly half of today’s total demand response capacity in PJM Interconnection.
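
The capacity figure depends heavily on the assumed charging window. A quick back-of-envelope check (assumptions: 24 TWh per year, charging spread uniformly within a daily window) shows how annual energy maps to gigawatts:

```python
# Back-of-envelope check of how ~24 TWh of annual EV demand translates
# into GW of charging capacity under assumed daily charging windows.

def avg_charging_gw(annual_twh, charging_hours_per_day):
    """Average draw in GW if the annual energy is delivered only during
    the assumed daily charging window (uniform charging assumed)."""
    daily_gwh = annual_twh * 1000 / 365  # TWh/yr -> GWh/day
    return daily_gwh / charging_hours_per_day  # GWh over h hours -> GW

for hours in (24, 12, 8):
    print(hours, round(avg_charging_gw(24, hours), 1))
# 24 h/day -> ~2.7 GW; 12 h/day -> ~5.5 GW; 8 h/day -> ~8.2 GW
```

Tighter charging windows concentrate the same energy into more capacity, which is why the flexible-capacity estimate spans a range rather than a single number.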

Electric utilities will play a major role supporting transportation electrification, and as electricity providers, they will benefit from the additional sales and infrastructure required to meet new demand.  An ICCT report found a statistically significant link between grid-connected EV infrastructure and vehicle electrification, and a Brattle Group report showed electricity demand from a fully electrified transportation fleet in 2050 dwarfs potential lost sales from distributed solar generation by a factor of five.  So whether or not utilities are allowed to own and rate-base charging infrastructure, massive investment opportunities are coming down the road.  But if utility shareholders receive new earnings opportunities through EVs, what value will customers get in return?

Last year America’s Power Plan published a five-step framework for ensuring customers get the value promised from grid modernization investment programs.  Electrification is one subset of these efforts, and a similar approach (adding market development as a precursor) can help regulators prepare for immense market changes.  Getting the most out of vehicle electrification requires supporting market development, conducting integrated distribution planning, defining goals, setting metrics, defining targets, and exploring changes to utility financial incentives.

Step 1a – Supporting market development

Before developing a comprehensive EV evaluation framework, utilities will have to experiment and innovate.  In the short term, before EVs ramp up, regulators should support innovative grid-edge applications through pilots and an initial round of EV infrastructure (rate-based or not), laying the groundwork for EVs to become grid resources.  To turn EVs into reliable demand response and storage resources, these applications need to be made operational in a reliable way, including communication protocols, standards, and consistent operational practices.  New rate designs will also have to be tested and developed.

PUCs haven’t yet developed robust frameworks for assessing the prudency of utility charging infrastructure investment, so initial approval of a closely watched first round of experimental investments can encourage innovation and inform regulation.  Commissions may consider allowing utilities to provide incentives to help customers electrify in this early phase, then pare incentives back in the future under a more comprehensive approach as the scale and scope of EV infrastructure grows and the industry becomes more mature.  Rocky Mountain Institute’s report, Pathways for Innovation, provides a useful roadmap from experimentation to deployment.

Step 1b – Integrated distribution planning – EV edition

Integrated distribution planning (IDP) determines the hosting capacity and potential benefits of distribution system resources under different utility control scenarios – a prerequisite to optimize distributed energy resource deployment alongside conventional supply-side resources. IDP is heating up with new proceedings in Maryland, New Hampshire, New York, and Minnesota (see the 50 States of Grid Modernization for the complete list), joining early adopter states like Hawaii and California.

Among other valuable results, IDP generates the data utilities need to understand where and when EV charging can provide the greatest benefit for all customers.  One key element is location; IDP helps identify uncongested circuits with the smallest incremental cost of adding charging capacity.  On congested circuits, EV chargers that would otherwise add to congestion can reduce their system-wide impact if customers receive incentives to charge during periods of low demand.  In addition, IDP allows utilities to:

  • Plan for various rates of EV adoption
  • Understand the benefits of smart versus regular chargers
  • Plan for different combinations of autonomous vehicles, public EV fleets, and individual customers

Of course, these efforts should be coordinated with municipal and state transportation agencies that will likely play primary roles in vehicle electrification, including route planning, congestion, and clustering of public-facing chargers.

Finally, IDP provides visibility into the economics and viability of EVs as system resources for managing wind and solar variability.  Rather than building new natural gas peakers, grid operators can use smart chargers capable of responding to their control to help manage peaks by delaying charging.

Step 2 – Define the goals of a vehicle electrification program

The second step starts by asking what regulators, on behalf of customers, hope to achieve by allowing utility investments in EV deployment, and what role the utility should play.  Traditional goals of affordable, reliable, safe power aren’t going anywhere, and EVs should help achieve them.  But other goals – facilitating customer charging, improving local air quality, and decarbonizing the power sector – are newer, and they directly shape EV infrastructure and demand management.

An obvious principal goal of EV deployment should be increasing service convenience and quality for a growing EV customer base while increasing the number of EVs on the road.  Serving customer demand for EVs, including in disadvantaged communities, means facilitating new smart-charger roll-out and demand management systems that help customers charge rapidly, in many locations, as cheaply as possible.  Though investment is required, time-varying rates and demand response payments can help EVs enhance affordability, improve existing infrastructure efficiency, and enable autonomous EV charging and aggregation as flexible resources.

Local air quality is another common goal of vehicle electrification, which will likely benefit low-income communities that tend to have worse air quality than average.  Because the utility plays a significant role supporting EV deployment, some local air quality benefits can be attributed to its performance in promoting EV adoption.

EVs not only decarbonize the transportation sector, they also help decarbonize the power sector.  Vehicle electrification has great potential to facilitate integrating local and bulk-system renewable energy resources, e.g., adding flexibility by shifting demand from one hour of the day to another, or providing short-term frequency response.  Shifting is a key strategy for integrating variable renewables from Teaching the Duck to Fly.  If vehicle manufacturers and customers can agree on rules for discharging, this flexibility potential will nearly double.

Step 3 – Metrics of a successful vehicle electrification program

Metrics should focus on outcomes reflecting policymaker goals – if it is a state goal, electrification itself should be measured and publicly reported by the utility, in terms of energy (kWh), customers (vehicles/customer), electric vehicle miles traveled (eVMT), and peak-coincident charging (kW).  These four metrics help customers understand progress in meeting transportation electrification goals.  Regulators can also consider comparing overall spending on charging infrastructure with electrification metrics, giving a sense of grid spending per unit of electrified transportation.

Often vehicle electrification outcomes are subsets of a greater goal, e.g., clean energy, affordability, or reliability.  System metrics for grid modernization or clean energy can subsume vehicle integration metrics. Because new vehicles necessarily increase demand, utility performance in key areas like peak demand management (% MW reduction), efficiency (kWh/customer), carbon emissions (CO2/MWh), or other air pollution must account for “beneficial electrification,” while maintaining high standards for reducing the impacts of EV adoption on those outcomes.  For example, when New York’s Consolidated Edison recently adopted an outcome-oriented efficiency metric, kWh per customer, it normalized for vehicle and appliance electrification by adding the new customer load into the target.
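
A minimal sketch of the kind of normalization described here (the figures are invented, and Con Edison’s actual method differs in detail): subtract identified electrification load before comparing usage per customer against an efficiency target.

```python
# Illustrative normalization: credit "beneficial electrification" load
# back out of an efficiency metric measured in kWh per customer, so new
# EV charging does not register as an efficiency failure.

def normalized_usage(metered_kwh, electrification_kwh, customers):
    """kWh per customer with new electrification load removed.

    metered_kwh:         total metered consumption for the class
    electrification_kwh: portion attributable to new EV/appliance load
    customers:           number of customers in the class
    """
    return (metered_kwh - electrification_kwh) / customers

# 10,500 kWh/customer metered, of which 700 kWh/customer is new EV load:
print(normalized_usage(10_500_000, 700_000, 1000))  # 9800.0
```

Without the adjustment, the utility would appear to miss its efficiency target simply because customers electrified, which is the opposite of the intended signal.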

Step 4 – Create an open process to set targets

Once metrics are selected, reasonable targets can help guide utility planning. A transparent target-setting process should include plenty of time for stakeholder review and comment, and targets should be set far enough into the future to accommodate investment and program timelines. Regulators should consider the unique context of each region or utility, and place targets within a range that represents a stretch, but not an unreasonable one.

Pilots can be helpful where the potential for utilities to optimize EV charging via rates or demand response is unknown. For example, a recent BMW-Pacific Gas & Electric pilot program successfully demonstrated that EVs can serve as reliable and flexible grid assets, giving regulators a sense of what is possible.

Target setting is one part art and one part science, raising the importance of a transparent and predictable process for calibrating targets based on real-world performance. Laying out the target revision process ahead of time is critical to lowering utility investment risk.

Step 5 – Consider linking utility returns to performance

Smart Grid Hype & Reality notes that today’s “investor-owned utility rewards are based on processes (investment), not outcomes (performance).”  To ensure utilities are properly motivated to deliver new power sector outcomes, regulators that are unsatisfied with the results of measurement and target setting should consider linking utility compensation to performance.

Many different resources explore options for reorienting utility compensation around performance, including Synapse’s Handbook on Utility Performance Incentive Mechanisms, America’s Power Plan’s Cost and Value Series Parts One and Two, RAP’s report for the Michigan PUC, and Ceres and Peter Kind’s Pathway to a 21st Century Utility.  Many of these concepts are in the proving phase in the U.K., are being implemented in New York and Massachusetts, and are being explored in “utility of the future” proceedings in Illinois, Ohio, Minnesota, Oregon, and Hawaii.

For EVs in particular, two methods could be helpful: a conditional rate of return on charging infrastructure based on performance (where utility ownership is allowed), and overall performance incentive mechanisms.  Utility commissions have found, and will undoubtedly continue to find, it prudent for utilities to build, own, maintain, and operate charging infrastructure, particularly on public property, in low-income areas, and for large businesses and parking structures.  In such cases, the key metrics outlined above could be linked via basis point adjustments to utilities’ return on investment in those rate-based assets.  Regulators could also set a revenue cap on charging infrastructure, with incentives to achieve electrification targets while spending below budget.

Performance incentives are essentially cash bonuses that increase utility returns when specific targets are met, with penalties when the utility falls short.  For example, a utility could be rewarded for reducing peak demand (MW) below the target set by regulators by turning off EV chargers when needed.

Electrification presents a massive opportunity for utilities to invest productive capital into the distribution system.  Reorienting utility investment around outcomes can help customers get commensurate value in return.

++ Thanks to Phil Jones, Chris Nelder, and Nic Lutsey for their input on this piece.  The author is responsible for its final content.

Trending Topics – Energy efficiency’s existential crisis is also an opportunity

A version of this article was originally published on July 25, 2017 on Greentech Media.

By Matt Golden

Just about every plan to achieve a clean energy low carbon future includes a large helping of energy efficiency. But while it’s true that efficiency is generally much cheaper than generation, energy efficiency as we know it faces an existential challenge.

The rate at which we’re deploying efficiency is simply not keeping pace with utility and grid needs. But even if we were able to achieve scale, in the current construct, it’s unclear how we would pay for the massive investment required.

Fortunately, there is another way. We now have the data, market, and financing in place to procure energy savings to solve time- and location-specific grid problems. Bundling projects into portfolios turns efficiency into an investor and procurement-friendly product that has manageable and predictable yields.

By treating efficiency as a genuine distributed energy resource (DER) we can stop relying on ratepayer charges and programs and instead unleash private markets and project finance to deploy and fund energy efficiency projects in the same way we do solar, wind and other energy resources — through long term contracts, creating cash flows that can be financed like grid infrastructure through project finance rather than consumer credit.

Efficiency’s existential dilemma

While many of our current efforts are focused on overcoming barriers to demand, the elephant in the room is that if we get efficiency on the rails toward real scale, current ratepayer-funded programs will simply run out of money.

According to a recent blog post by ACEEE, combined efficiency investments across every sector of the economy (not just buildings) range from about $60 to $115 billion a year in the United States. A conservative estimate from a 2009 McKinsey report puts the price tag of upfront efficiency investment at $520 billion by 2020.

By comparison, current efficiency program spending hovers around $8 billion a year nationally, resulting in a total program market, including private capital, of approximately $16 billion.  It’s a big number, but compared to the capital investment needed to achieve the potential of energy efficiency in America’s buildings – which will be counted in the trillions – it’s a drop in the bucket.

Rethinking efficiency in order to engage markets

The grid is undergoing a transformation from central generation to clean distributed sources of power such as solar and wind. This has resulted in new challenges as we integrate intermittent renewables, often at the grid edge. The imbalance between California’s daytime solar supply and evening demand (the “duck curve”) is contributing to regular periods of negative pricing and driving the need for time- and location-responsive distributed energy resources (DER) such as storage, EV charging, and demand response. As DER markets emerge, it is clear that there are no silver bullets to solve this problem, and that current resources are both costly and in short supply compared to the scale of the challenge.

Energy efficiency represents the largest and least expensive of these potential resources, but has largely been left out of the conversation. This is because traditional energy efficiency is based on monthly average savings and therefore can’t solve for grid issues that vary by location and time.

However, as smart meter interval data becomes available in an increasing number of states, and portfolios of efficiency projects and data are aggregated, we will have the ability to calculate savings on portfolios of energy efficiency projects in terms of both time and location. This analysis creates resource curves (time and locational savings load shapes) that can be used to design efficiency portfolios that reliably deliver “negawatts” where and when they are most needed, rather than simply average reductions in consumption for a given month.
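
As a sketch of the aggregation step (synthetic data, not CalTRACK’s actual methodology), hourly savings can be computed as modeled baseline minus metered consumption and summed across a portfolio to form a resource curve:

```python
# Minimal sketch (synthetic data) of turning interval data into a
# portfolio "resource curve": hourly savings = modeled baseline minus
# metered consumption, summed across all projects in the portfolio.

def portfolio_resource_curve(baselines, metered):
    """baselines, metered: per-project lists of hourly kWh values
    (same length per project). Returns the portfolio's hourly savings
    load shape."""
    hours = len(baselines[0])
    curve = [0.0] * hours
    for base, actual in zip(baselines, metered):
        for h in range(hours):
            curve[h] += base[h] - actual[h]
    return curve

# Two projects over a 3-hour window: savings concentrate in the last
# (e.g., evening peak) hour, when they are most valuable to the grid.
baselines = [[10, 12, 20], [8, 9, 15]]
metered = [[10, 11, 16], [8, 8, 12]]
print(portfolio_resource_curve(baselines, metered))  # [0.0, 2.0, 7.0]
```

Individual projects are noisy, but summing across a portfolio produces the stable, time-resolved shape a grid planner can treat as a resource.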

Rather than paying in advance through rebates for traditional energy efficiency that doesn’t differentiate between peaks or valleys in demand, utilities will be able to procure savings based on when and where they happen. By breaking down “energy efficiency” into classes of projects that deliver more valuable resource curves, we can make savings worth more when they have the biggest impact, giving market players the tools and incentives they need to optimize their offerings to deliver the most valuable results to the grid and the best deal to customers.

The existential question

With utilities and wholesale market procurement providing a long-term and scalable buyer, the next question is: how do we finance the massive upfront investment required to achieve the energy efficiency potential locked up in America’s existing building stock?

By making efficiency work like other capacity resources, we solve two existential problems that have stood between energy efficiency and its potential: how to bring efficiency to bear as a real solution for modern grid issues such as intermittent generation, and how to attract the private investment required to get us there.

Rather than paying rebates upfront and measuring monthly outcomes years later, which results in prescriptive programs and costly regulation, utilities can use standard open-source methods and calculations such as CalTRACK and the OpenEEmeter to establish markets in which a wide range of businesses enter into mid- or long-term contracts, similar to supply-side PPAs. Under these savings purchase agreements (SPAs), providers are paid for performance, based on normalized metered savings, for the value of how they shift load over time.
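A simplified sketch of how such a settlement might work follows. The `metered_savings` and `settle` helpers are illustrative inventions, and the linear temperature model stands in for the far more careful baseline-fitting rules that CalTRACK actually specifies.

```python
def metered_savings(baseline_model, observed_kwh, temps):
    """Savings = counterfactual baseline prediction minus observed use.

    baseline_model: callable mapping outdoor temperature -> expected kWh,
    fit on pre-retrofit data. A simple linear model is used here purely
    for illustration; CalTRACK defines the real fitting procedure.
    """
    predicted = [baseline_model(t) for t in temps]
    return sum(p - o for p, o in zip(predicted, observed_kwh))

def settle(savings_kwh, price_per_kwh):
    """Pay-for-performance payment under a hypothetical savings
    purchase agreement: metered savings times a contracted price."""
    return max(savings_kwh, 0.0) * price_per_kwh
```

The key shift is that the payment depends only on metered outcomes, not on which measures were installed, so regulators can focus on the price signal rather than prescribing projects.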

A new pay-for-performance arrangement would flip the way we pay for energy efficiency on its head. Whereas today energy efficiency investments are financed by consumers either out of pocket or based on their credit or the value of their asset, we can instead use project finance in the same way we pay for power plants and other distributed resources — by paying for performance over time and financing the resulting cash flow. Rather than relying on individual consumers to subsidize the public benefits of efficiency, the costs would be spread across all ratepayers and would be rate-based like other utility investments.

The solutions we need are available today

While it’s true that energy efficiency on individual buildings can be all over the map, at the portfolio level it makes for a remarkably stable investment. The transition from attempting to be right all the time to instead accepting quantifiable risks and managing performance through portfolios marks a transition from engineering to finance.

To put it another way, while guaranteeing outcomes to a single customer is exceptionally hard and costly (and has diminishing returns), a portfolio of projects will perform with consistent results, providing purchasers with high confidence in performance and yielding consistent returns for investors. Combined with investment grade insurance products to backstop the performance of bundled portfolios of efficiency projects, financing cash flows of efficiency portfolios works exactly like other grid infrastructure investments.
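The portfolio effect described above can be illustrated with a quick simulation. The 0.9 mean and 0.4 standard deviation for individual project realization rates (actual divided by predicted savings) are arbitrary assumptions for the sketch, not measured values.

```python
import random
import statistics

def portfolio_stdev(n_projects, mean=0.9, sd=0.4, trials=2000, seed=1):
    """Standard deviation of the average realization rate across a
    bundle of n_projects, estimated over many simulated portfolios.

    Individual projects are noisy (illustrative sd=0.4), but the
    portfolio average tightens roughly as 1/sqrt(n_projects).
    """
    rng = random.Random(seed)
    averages = []
    for _ in range(trials):
        draws = [rng.gauss(mean, sd) for _ in range(n_projects)]
        averages.append(sum(draws) / n_projects)
    return statistics.stdev(averages)
```

Under these assumptions a 100-project bundle is roughly ten times more predictable than any single project, which is why portfolio-level performance can be insured and financed like other infrastructure.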

Efficiency aggregators compete to enter into savings purchase agreements to deliver demand reductions to utilities when and where they need them. Utilities pay for these savings as they are delivered through procurement. Aggregators can then insure and finance these cash flows and compete to deliver products that both resonate with customers and are optimized to maximize the grid value. So long as efficiency is cheaper than the marginal costs of alternatives such as generation, storage, or transmission and distribution investments, it is a good deal for ratepayers.

Paying for performance in practice

While this approach may sound far-fetched and futuristic, it isn’t.

Everything needed to quantify the impact of energy efficiency resource curves, engage private capital, and manage performance risk is ready to go. The only thing left is for regulators and utilities to establish open and competitive markets to give investors and business model innovators a place to play.

In response to California laws AB 802 and SB 350, which require pilots in normalized metered efficiency and pay-for-performance, PG&E recently selected winning bidders for its first pay-for-performance pilot, in which aggregators will be paid based on metered performance over time, rather than through customer rebates and time-and-materials payments to implementers. The pilot also represents a first step toward using efficiency to help close the 4,000 GWh energy gap created by the planned shutdown of the Diablo Canyon nuclear plant in 2025.

Pay-for-performance efficiency isn’t just limited to California. Similar efforts are getting underway in New York, Massachusetts, Illinois, Oregon, Texas, and Washington. However, many of these pilots are still extremely small in scale, and they remain unnecessarily complex and entangled in webs of outdated regulations. We are stuck in a purgatory between current regulations, designed to manage programs that pay in advance, and future markets in which aligned incentives let regulators focus on sending the right price signal and preventing abuse and gaming.

Steps to reach scale

The transition from programs to markets is not a one-step process. It requires a series of investments in data and a cultural shift from regulators and utilities toward adopting financial principles of portfolio management. This transition will take time and data, so it’s critical that we get the ball rolling immediately:

  1. Utilities should implement open source metering of energy efficiency performance in order to optimize program implementation and make savings and resource curve data open and transparent.
  2. Utilities should create pay-for-performance pilots next to existing programs, allowing third party aggregators to innovate and compete based on outcomes.
  3. Regulators should allow utilities to recover costs so long as the utility cost of metered efficiency is lower than the marginal cost of alternative resources.
  4. Regulators and utilities should move efficiency resource curves into all resource procurements alongside other distributed resources.

Given the problems faced by the changing grid, and the market and financial barriers to scale inherent in the current approach to energy efficiency, it is urgent that we start aggressively standing up markets that value energy efficiency resource curves through pay-for-performance, to unlock private investment and market innovation.

The good news is that solutions exist to overcome efficiency’s existential challenges and deliver the investment needed to achieve the vast potential of energy efficiency. The sooner we pivot the better — there is no time to waste.

Trending Topics – Mind the “storage” gap: how much flexibility do we need in a high renewables future?

A version of this article was originally published on June 22nd, 2017 on Greentech Media.

By Brendan Pierpont

Imagine for a moment that we have built enough wind and solar power plants to supply 100 percent of the electricity a region like California or Germany consumes in a year. Sure, the wind and sun aren’t always available, so this system would need flexible resources that can fill in the gaps. But with continuing rapid cost declines of wind, solar, and batteries, it’s possible that very ambitious renewable energy targets can be met at a cost that is competitive with fossil fuels.

Every region has a different climate and demand profile. Taking California or Germany as an example, and assuming no interconnections with neighboring regions, up to 80 percent of the variable renewable power produced could be used in the hour it is generated with the right mix of wind and solar – in other words, 80 percent of supply could be coincident with demand. Still, a reliable grid needs fast-responding resources to satisfy the remaining 20 percent of demand; filling this gap is one of the principal flexibility challenges of a low-carbon grid. But what will that flexibility cost?

The answer is surprising – by 2030 an 80 percent renewable energy system including needed flexibility could cost roughly the same as one relying solely on natural gas. As Climate Policy Initiative demonstrated in our recent report Flexibility: the path to low-carbon, low cost electricity grids, if prices for renewable generation and battery storage continue to fall in line with forecasts, meeting demand in each hour of a year with 80 percent of electricity coming from wind and solar could cost as little as $70 per megawatt-hour (MWh) – even when accounting for required short-term reserves, flexibility, and backup generation. Of course, this analysis makes some simplifying assumptions; it represents the new-build cost of generation and flexibility to meet demand in every hour of a year using historical wind, solar and demand profiles from Germany, and it doesn’t factor in transmission connectivity or model the constraints of existing baseload power plants in detail. But it also leaves out the significant potential for cheaper flexibility from regional interconnections, existing hydroelectricity, and the demand side.

Still, this analysis helps us understand what kinds of flexibility we will need and what it will cost. The promise of a low-cost grid based on wind and solar is so compelling, it’s worth digging into what we’d need to do to realize this vision.

What is flexibility, anyway?

A power system has a wide variety of flexibility needs – with time scales ranging from seconds to seasons – and a range of different technology options can be used to meet those needs, depending on the time scale.

On very short time frames from seconds to minutes, fast-responding resources are needed to keep the grid in balance and compensate for uncertain renewables and demand forecasts. These needs should grow only modestly as shares of renewables climb to high levels, and they could be accommodated cheaply using existing hydro generation (where it exists), or even smart solar and wind power plants. Fast-responding demand response or energy storage would also be good choices, particularly after storage costs decline further as projected.

Solar and wind output can change rapidly on a predictable, hourly basis as well, requiring flexible resources that can quickly pick up the slack. One feature of California’s now-infamous “duck curve” is the need for fast-ramping resources to meet the evening decline in solar production. California has devised innovative market mechanisms to ensure flexible gas and hydro generators are available to meet these ramping needs.

On a daily basis, the profile of renewables production doesn’t neatly match demand, requiring resources that can store or shift energy, or otherwise fill in the gaps across the day. Today, daily imbalances are met primarily by dispatching fossil fuel fired power plants. But a number of solutions are gaining momentum, such as automatically shifting when consumers use energy and building large batteries.

At even longer time frames, there can be multi-day and seasonal mismatches between when renewable energy is produced and consumed. The need for long-term, multi-day energy shifting – exemplified by several windless, cloudy winter days with high electric heating demand – is perhaps the biggest challenge to complete decarbonization of the power grid, because batteries are ill-suited to seasonal shifting needs. In fact, using lithium ion batteries for seasonal storage, cycling once per year, would cost tens of thousands of dollars for each MWh shifted.
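The arithmetic behind that "tens of thousands of dollars" figure is straightforward. With illustrative assumptions of $300/kWh battery capital cost and a 10-year life, a battery cycled once per year works out to $30,000 per MWh shifted; the same battery cycled daily comes in under $100.

```python
def cost_per_mwh_shifted(capex_per_kwh, cycles_per_year, life_years):
    """Rough levelized capital cost of each MWh a battery shifts.

    Ignores round-trip efficiency losses, degradation, and financing
    costs; all inputs are illustrative assumptions, not vendor figures.
    """
    capex_per_mwh = capex_per_kwh * 1000.0  # $/kWh -> $/MWh of capacity
    total_cycles = cycles_per_year * life_years
    return capex_per_mwh / total_cycles
```

The comparison makes the point in the text concrete: battery economics depend almost entirely on how often the asset cycles, which is why lithium-ion is a poor fit for seasonal storage.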

Graphic: Technology fit with flexibility needs

The challenge of power grid decarbonization hinges on this ability to store or shift energy. But how much energy would the power grid really need to shift, and over how long?

Solar drives daily storage needs

A power system that relies primarily on solar would have abundant power in the middle of each day, and scarcity during the night. Trying to exclusively power the grid with solar, with no ability to store or shift energy, would mean more than half of demand would go unmet.
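That unmet share can be estimated directly from hourly supply and demand profiles. The sketch below uses a toy four-hour profile in place of a real year of data; it simply serves each hour's demand with that hour's generation and curtails any surplus.

```python
def unmet_fraction(supply, demand):
    """Fraction of total demand that goes unmet when generation can only
    serve load in the hour it is produced (no storage or shifting).

    supply and demand are equal-length per-hour MWh profiles; surplus
    output in any hour is curtailed. Toy illustration of the storage-gap
    calculation, not the model behind the figures in the text.
    """
    served = sum(min(s, d) for s, d in zip(supply, demand))
    return 1.0 - served / sum(demand)
```

With a midday-peaked "solar" profile against flat demand, half or more of demand goes unmet, matching the intuition above.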

Many technologies are well-suited to shifting energy within a day. Today, grids accommodate solar’s daily profile by dispatching hydro and thermal power plants to meet changing demand, but in the future, lithium-ion and flow batteries promise multiple hours of storage and shifting capability. Thermal energy can be stored in buildings, shifting when electricity is used for heating or cooling. And as electric vehicles become more widespread, ubiquitous charging infrastructure, electricity pricing, and automated charging could shift when drivers charge their vehicles.

But are the daily energy storage and demand-shifting solutions emerging today going to be enough? Well, it depends.

In California, demand is highest during the summer, when solar production is at its peak. If California could store and shift solar energy to any time in each day, solar could meet nearly 90 percent of California’s electricity demand. Only 10 percent of energy demand would go unmet by solar because of multi-day and seasonal storage gaps.

In Germany, however, demand is highest during the winter months, driven in part by electric heating demand. So storing and shifting solar energy within each day would still leave 30 percent of energy demand unmet. In other words, the long-term storage gap for solar in Germany is three times larger than in California.
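Extending the no-storage calculation, allowing energy to be shifted freely within each day, but not across days, yields the long-term storage gap described for California and Germany. Again, this is a toy sketch with made-up two-hour "days," not the actual analysis behind those percentages.

```python
def unmet_after_daily_shift(supply, demand, hours_per_day=24):
    """Fraction of total demand unmet when energy can be shifted freely
    within each day but not carried over between days.

    supply and demand are equal-length per-hour MWh profiles. Surplus
    within a day covers that day's deficit; surplus days are curtailed.
    Illustrative only.
    """
    unmet = 0.0
    for i in range(0, len(demand), hours_per_day):
        day_demand = sum(demand[i:i + hours_per_day])
        day_supply = sum(supply[i:i + hours_per_day])
        unmet += max(day_demand - day_supply, 0.0)
    return unmet / sum(demand)
```

A surplus "sunny" day cannot rescue a deficit day that follows it, which is exactly the multi-day and seasonal gap the article describes.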

Wind drives storage needs of up to a week

Wind, on the other hand, is a better match with demand hour-by-hour, with 70-80 percent of wind production coincident with demand in California or Germany. And compared with solar, daily storage has little value for wind: shifting energy within the day would improve wind’s match to demand by only a few percentage points. For wind, the biggest gains come from shifting energy by up to a week. In both California and Germany, the ability to shift energy by up to a week could allow nearly 90 percent of energy demand to be met with wind.

Beyond a week, seasonal storage needs depend on regional demand and renewable resource profiles and, critically, on what mix of renewable resources the region has installed. A system incorporating both wind and solar can have lower storage needs than a system based predominantly on one resource or the other. In Germany, a mix of 70 percent wind and 30 percent solar could meet 90 percent of demand on a daily basis, reducing the need for longer-term storage. In California, solar is already a pretty good fit for seasonal energy needs, but the addition of around 10 percent of electricity from wind could slightly lower both daily and seasonal storage needs.

Graphic: Storage gap for 100 percent wind or 100 percent solar in California and Germany

Graphic: Storage gap for a wind and solar mix that minimizes long-term storage needs in California and Germany

But far fewer technology options allow for long-term energy shifting. Consumers can’t go for a week without heat, cooling, or charging vehicles, and many long-term storage technologies like hydrogen are still too costly and inefficient for widespread use. The default option for long-term storage is a familiar one – rely on fuel-burning power plants that provide flexibility to today’s power systems. Finding cheap, reliable and carbon-free ways to shift energy for periods longer than a week may be the key decarbonization challenge.

So how should we approach the seasonal storage gap?

Policymakers and planners have several strategies they can use to bridge the storage gap:

  1. Target a mix of renewable resources that minimizes long-term storage needs. Procuring the right mix of resources can be the easiest way to reduce the seasonal storage gap.
  2. Connect with neighboring regions to trade surpluses and shortfalls of energy. Northern Europe and the Western U.S. are taking steps to better integrate regional grids, although getting neighboring states and countries to cooperate can be challenging.
  3. Make use of existing hydropower. Regions with abundant hydroelectricity may already have enough existing flexibility to completely satisfy seasonal storage needs. But electricity and ecological needs don’t always align, and drought years could spell trouble for grid reliability.
  4. Make industrial demand seasonal. Paying the fixed capital and labor costs of an electric arc furnace for several months of the year while a steel foundry lays idle may in fact be cheaper than building the storage or generation needed to meet that demand carbon-free year-round. But this solution would require a careful balancing act between maintaining industrial competitiveness, complying with trade agreements, and ensuring job stability for workers.
  5. Develop long-term storage technologies to shift energy across weeks and months. Turning renewable electricity into hydrogen or synthetic natural gas can enable longer-term and larger-scale storage, and can be used directly for transportation, heating and industry. But so far, these conversion technologies are inefficient and expensive.
  6. Develop flexible, dispatchable carbon-free power plants to cover shortfall periods. A recent survey of decarbonized grid models suggested that nuclear and carbon capture and storage may be needed to completely decarbonize the grid. But market models and technologies will need to evolve for these resources to operate flexibly and profitably.

Transitioning to a low-carbon grid

A low carbon grid is the lynchpin of any serious plan to avoid the dangerous impacts of climate change. And with solar, wind, and energy storage costs dropping year over year, the vision of a low-cost, flexible grid driven by renewable energy seems tantalizingly within reach. But if we are going to fully decarbonize the grid, the long-term storage gap is one of the biggest challenges that lies ahead. We already have many of the technologies and tools we need for this shift, but our electricity policies and markets need to evolve for a new generation of technologies with different cost and risk profiles. If we start laying the groundwork today, we’ll be ready to keep pace with the rapid transition ahead.

++

Brendan Pierpont is a Consultant with the Energy Finance team at Climate Policy Initiative

Trending Topics – Secretary Perry, We Have Some Questions Too

A version of this article was originally published on April 24th, 2017 on Greentech Media.

By Mike O’Boyle

In April, DOE Secretary Rick Perry issued a memorandum to his staff asking some pointed questions about the future of the electric grid as coal is retired off the system, including:

  • “Whether wholesale energy and capacity markets are adequately compensating attributes such as on-site fuel supply and other factors that strengthen grid resilience and, if not, the extent to which this could affect grid reliability and resilience in the future; and
  • The extent to which continued regulatory burdens, as well as mandates and tax and subsidy policies, are responsible for forcing the premature retirement of baseload power plants”

Given the rapid change facing the electricity system, these questions may seem reasonable, but they reflect an outdated world view. The DOE’s publication of this memorandum presents an opportunity to uncover many of these outdated assumptions and understand the drivers behind the unstoppable transition from coal to other technologies. By taking each premise in turn and providing evidence-based analysis, we can see that the projected demise of coal will result in a cleaner, cheaper, and more reliable energy system.

Premise 1: “baseload power is necessary to a well-functioning grid”

To understand whether this is true, some definitional work is needed. Baseload generation’s purpose is to meet the base load, or minimum demand, which the Edison Electric Institute defines as “the minimum load over a given period of time” in its Glossary of Electric Industry Terms. The same glossary defines baseload generation as “Those generating facilities within a utility system that are operated to the greatest extent possible to maximize system mechanical and thermal efficiency and minimize system operating costs . . . designed for nearly continuous operation at or near full capacity to provide all or part of the base load.” In other words, baseload plants are those whose efficiency is highest when run at a designed level of power, usually maximum output; deviations from that level reduce efficiency and increase costs. Baseload generation is an economic construct, not a reliability paradigm.

A system with baseload thermal generators as its backbone comes with some reliability pros and cons. For example, baseload power usually has heavy generators with spinning inertia, which gives conventional generators time to respond with more power when a large generator or transmission line unexpectedly fails. But we now know how to get such responses much more quickly from customer loads, newer inverter-based resources like wind, storage and solar, and gas-fired resources.

As the Rocky Mountain Institute’s Amory Lovins details in a recent piece for Forbes, fuel storage may appear to provide some protection against a failure of gas supplies or weather events, but stored fuel has its own set of problems and failure modes. The 2014 Polar Vortex rendered 8 of 11 GW of gas-fired generators in New England unable to operate. Coal supplies face serious transport risks because they depend on rail, with over 40 percent of U.S. coal coming from a narrow rail corridor out of Wyoming’s Powder River Basin. Extreme cold can also render on-site coal unusable, as happened during the Southwestern blackout of February 2011 that shut off power to tens of millions of customers. Nuclear plants can be impaired or forced offline by unseasonably hot weather, when cooling water becomes too warm and plants must shut down for safety and to prevent mechanical damage. So in fact, baseload units, even with fuel stored onsite, are sensitive to weather and many other failure events.

Lovins also points out that coal and nuclear baseload generators are unable to operate continuously, despite perceptions to the contrary. On average, coal-fired stations suffer unexpected “forced outages” 6-10 percent of the time, and nuclear plants experience forced outages 1-2 percent of the time, plus 6-7 percent scheduled downtime for refueling and planned maintenance. On the flip side, solar and wind are 98-99% available when their fuel (the sun and wind) is available, and the ability to predict the weather is improving all the time. The reliability risks from fossil fuels are collectively managed today, mostly by paying to keep reserve generation running to respond when they unexpectedly fail, but this creates the need for redundancies and costs in the grid comparable to those that cover the uncertainty of weather forecasts for wind and solar power.

Premise 2: “[The] diminishing diversity of our nation’s electric generation mix… could [undermine] baseload power and grid resilience.”

The U.S. electricity mix is growing more diverse, not less. Until recently, the notion that supporting coal generation would improve diversity was nonsensical; coal was the dominant source of U.S. electricity for decades, so an argument for more diversity would have been an argument for reducing the use of coal. Today, coal and natural gas produce roughly equal shares of U.S. generation, while nuclear and hydro (hidden in the renewables bucket below) are projected to continue their near-constant supporting roles. With the rise of renewables, driven particularly by growth in wind, solar, and biomass generation, fuel diversity has actually increased dramatically since 2001.

Source: Energy Information Administration, Annual Energy Outlook 2017

Today, coal is declining, with over 90 GW of the more than 250 GW fleet projected to retire under business-as-usual conditions by 2030, but it will remain a meaningful player in the marketplace for at least the next decade according to baseline Energy Information Administration (EIA) projections. In the long-term, however, whether reducing coal generation impacts fuel diversity and resilience depends more on what replaces it than whether the coal remains.

A portfolio of generation options with different characteristics insulates consumers from price risk and availability risk. Keeping some coal-fired generation online would help in that regard, particularly if its environmental costs are not considered. If retiring coal and nuclear are replaced mostly by natural gas, we would see a decline in fuel diversity, and that could potentially increase risk due to the characteristics of the natural gas supply. The same would be true, for example, if we myopically rely on solar as the only technology to decarbonize the grid. Studies of the optimal mix of resources in California to meet the 50 percent Renewable Portfolio Standard by E3 and NREL each found that geographic and technology diversity of the renewable resources will substantially reduce the cost of compliance compared to the high-solar and in-state-only cases.

But under current projections out to 2030, we are only going to see greater fuel diversity, not less, as natural gas, demand-side resources, and utility-scale renewables take the place of retiring coal. This should increase the resilience and security of the system, particularly if this change is accompanied by more investment in transmission, storage, and demand-side management.

Premise 3: “[Renewable] subsidies create acute and chronic problems for maintaining adequate baseload generation and have impacted reliable generators of all types.”

Beyond the question of what “adequate baseload generation” actually means, it is undoubtedly true that coal and nuclear baseload units are suffering financially in both vertically integrated and restructured markets. The recent FERC technical conference was a forum for generators and wholesale market operators to vent their frustration with what they see as the inadequacy of markets to provide generators with sufficient revenue. But in reality, this financial pain is the effect of oversupply and intense competition. For example, despite low capacity prices in the PJM Interconnection, five GW of new natural gas capacity cleared in the most recent auction. Coupled with stagnant demand, something has to give, and inefficient coal plants, among the most expensive and least flexible generators, are simply not needed in this competitive landscape.

While competitive pressure from cheap gas, inexpensive renewables, and declining demand is undermining the financial viability of baseload plants, we are far from a crisis of reliability and resilience. Consider the reserve margins and reference levels in the major markets:

Graphic: Reserve margins and reference levels in the major markets

Each market is oversupplied; in the case of PJM and SPP, the condition is drastic – they have double the excess capacity that they need to meet stringent federal reliability criteria. One panelist at the May 1-2 FERC technical conference captured the dynamic: “PJM with reserve margins of 22%, I think of Yogi Berra, my favorite economist. We’ve got so much capacity we’re going to run out.” A well-functioning market would allow uncompetitive generators to retire amid steep competition and declining prices, given the oversupply conditions. To blame coal’s suffering on policies supporting clean energy denies the root cause – coal-fired generation is dying on economics alone.
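The reserve-margin arithmetic underpinning this point is simple. In the sketch below, the 122 GW of capacity, 100 GW peak, and 16 percent target are illustrative round numbers, not figures from any specific market.

```python
def reserve_margin(installed_mw, peak_mw):
    """Reserve margin as a fraction of peak demand:
    capacity in excess of peak, divided by peak."""
    return (installed_mw - peak_mw) / peak_mw

def excess_vs_target(installed_mw, peak_mw, target_margin):
    """Capacity (MW) beyond what the target reserve margin requires.
    Positive values indicate oversupply relative to the reference level."""
    required = peak_mw * (1.0 + target_margin)
    return installed_mw - required
```

A market carrying a 22 percent margin against a 16 percent reference level is holding real excess capacity, which is the oversupply dynamic the panelist was joking about.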

Several premises that underlie the need for the forthcoming DOE study are false. Chief among them is a singular focus on baseload generation, particularly coal-fired power plants, as necessary for maintaining reliability. Baseload generation is an economic characteristic, not a reliability concern. Replacing expensive, environmentally unsustainable coal is not a matter of ensuring adequate baseload; the key will be quantifying the reliability services that are needed and ensuring that the replacement generators can provide them. But hitting the panic button, which is what Rick Perry’s memorandum appears to do, misses the picture entirely; rather than losing diversity, our electricity mix is rapidly diversifying, and our markets are vastly oversupplied with energy sources today. Any attempt to conclude otherwise will simply create unjustified roadblocks for new renewable generation, which is crucial to an affordable, reliable, and clean electricity system.

Trending Topics – How a Cold March Day in Texas Exposed the Value of Flexibility, and What Markets Can Learn

A version of this article was originally published on April 24th, 2017 on Greentech Media.

By Eric Gimon

As the sun rose over Dallas on Monday, March 3rd, 2014, the temperature read 15°F. Across the state, Texans turned on their heaters at full blast as they prepared to head to work for the day. Meanwhile, at the operations center for Texas’ electricity system (Electric Reliability Council of Texas, or “ERCOT”), operators saw the price of electricity skyrocket: around 8 AM prices jumped to nearly $5000/MWh, more than 100 times the average price of electricity.

Though the unusually cold weather pushed demand for electricity well above historical levels, the market behaved as intended. Many power plant owners, knowing that their capacity is typically not needed at that time of year, had taken their plants offline for maintenance. So when unusually high demand arrived on March 3rd against relatively low supply, prices skyrocketed, demonstrating the fundamentals of supply and demand. Power plants that were available and flexible enough to turn on quickly to meet the spike in demand were rewarded handsomely.

As the renewables transition continues apace, flexibility will become increasingly important. Policy-makers and investors will need to watch carefully how flexibility is paid for.

 
In a market design like the “energy only” market in Texas, price spikes are a normal and important part of the market’s functioning, properly reflecting the marginal cost of electricity at that specific time, assuming no market manipulation. They provide an indication of how much and what types of resources are needed.

When spikes happen at predictable times of system needs, like during the summer when high temperatures cause increased electricity demand for air conditioning, they provide a good investment signal for peak capacity. When they happen at unusual times like the March 3, 2014 event, they provide a crucial signal to both buyers and sellers in the wholesale market of a need for investment in more flexible resources that can make themselves available at times of stress, on either the generation side or on the demand side. Too many of these unusual “bellwether” events indicate a system short on flexibility, while too few signal a system that is oversupplied (or lucky).

Bellwether events

Because flexible resources allow grid operators to respond rapidly to large changes in supply or demand, the frequency and magnitude of bellwether events are indicative of the need for flexible resources. In 2014, a handful of these bellwether events provided about 20-25 percent of net revenues for one typical Texas combined-cycle plant, indicating that the market was willing to pay for flexible resources. But in 2015 and 2016, due to plant owners keeping their units online more often, the addition of new capacity, and milder weather, the same plant garnered hardly any net revenue from bellwether events, indicating that there was no longer any need for extra flexibility.
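The revenue concentration described here can be computed from hourly prices and plant output. In the sketch below, the $1,000/MWh spike threshold, the prices, and the marginal cost are all illustrative assumptions, not values from the Texas analysis.

```python
def spike_revenue_share(prices, output_mwh, marginal_cost, threshold=1000.0):
    """Share of a plant's net energy revenue earned during price spikes.

    prices: hourly market prices ($/MWh); output_mwh: hourly generation;
    marginal_cost: the plant's running cost ($/MWh). Hours with prices
    at or above `threshold` are treated as bellwether events; the
    threshold is an arbitrary illustrative cutoff.
    """
    total = 0.0
    spike = 0.0
    for price, quantity in zip(prices, output_mwh):
        net = max(price - marginal_cost, 0.0) * quantity
        total += net
        if price >= threshold:
            spike += net
    return spike / total if total else 0.0
```

Even one extreme hour can dominate a plant's annual net revenue, which is why the frequency of bellwether events is such a sensitive signal of the market's appetite for flexibility.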

How is this relevant for policy-makers and investors? Because of the variable nature of renewable resources, which create greater swings in the supply mix over smaller time-scales, the number of bellwether events is likely to grow as more and more cheap wind and solar power enters the Texas market. With over 5,000 MW of solar projected to come online by 2021, Texas is likely to need more flexibility in the coming years.

As the market continues rewarding flexibility, the resource mix could change substantially, which would impact the market in other ways. For example, if more combined-cycle gas plants come online, they can help address flexibility needs, but they will also compete with baseload generators for much of the year, which could increase downward pressure on already-low wholesale prices. Alternatively, new flexible resources, like fast-start simple-cycle gas turbines, natural gas-fired reciprocating engines, demand-side resources, or storage, could respond to bellwether events without changing the market during ordinary times, because they would be deployed only in times of stress, when they capture the most value. To achieve the most cost-effective solution, market operators must allow resources of all types and sizes to participate by ensuring as transparent, accessible, stable, and technology-neutral an energy market as possible.

To a great extent, the investment signal for flexible resources is well handled in "energy-only" markets like ERCOT. As the need for flexible resources grows, there will be an increasing number of bellwether events; resource developers are likely to respond by building new resources that can capture this value on the spot market and through bilateral contracts with utilities. However, not all markets are structured to reward flexibility in the same way.

Different kinds of markets

Most electricity markets in the U.S. have additional payments or requirements outside of the energy market aimed at ensuring reliability. These payments, often administered through a “forward capacity market,” are meant to improve the economics of investing in new power plants and maintaining existing ones to ensure there is sufficient capacity available during times of peak demand. But forward capacity markets have traditionally focused on ensuring there is enough capacity to meet the peak level of demand over the course of the year, without giving much thought to the relative flexibility of that capacity.

The March 3rd cold spell in Texas provides a valuable lesson on why looking only at annual peak demand, rather than the need for flexible resources throughout the course of the year, can be problematic. When using capacity markets to ensure long-term reliability, it is not exactly clear what capacity to pay for when procuring “reliability” ahead of time. Reliability means different things at different times, and under different resource mixes. If forward capacity markets strictly reward market participants for meeting system peak demand, as they have traditionally done, market operators may not necessarily be rewarding the type of flexible capacity needed in bellwether events.

In theory, energy-only markets like ERCOT in Texas and markets with forward capacity markets should be equally efficient at providing an economical and reliable grid, compensating investment in new flexibility at roughly similar value. In practice, however, forward capacity markets tend to divert revenues from the energy market, diluting the energy market's price signal for resource flexibility. And these out-of-market mechanisms have traditionally failed to consider the relative flexibility of capacity resources.

In energy-only markets like ERCOT, after new or upgraded system resources enter the market they capture the value of the reliability they provide when the system is stressed and prices spike, or by contracting forward with wholesale buyers to provide mutually beneficial risk management. If a resource cannot respond efficiently to short-term volatility, it will miss out on the associated opportunities and will be unable to offer wholesale buyers the risk-management services they need.

With a forward capacity market, the principal way to manage bellwether events is to supplement revenues by rewarding resources ahead of time for being available during prescribed periods. If a resource fails to produce when called on, it is usually penalized, either through foregone payments or through a penalty administered directly by the market operator. In both cases, a system resource that fails to make itself available during periods of system stress, like a bellwether event, is gambling away a significant amount of revenue, in some cases a large fraction of its annual revenue. Resources that can respond quickly during such events avoid the costs incurred by less flexible resources, which must operate unprofitably for hours or even days before and after the events to be sure they are available when most needed.

If capacity markets indiscriminately expand their scope from anticipating peak supply needs to ensuring year-round reliability, they risk significantly overpaying for reliability. Paying all types of resources to be available at all times, rather than paying just those resources that can be available surgically in times of system stress, means buying a lot of extra reliability when it is not needed. Furthermore, an overly broad definition of the capacity product may leave surgical flexibility providers unable to make a profit, even though they could provide substantial reliability value. Think, for example, of a demand response provider asked to be available every day of the year for up to eight hours, when the real need is to participate in a handful of bellwether events for three or four hours each.
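The gap between those two availability requirements is stark. The sketch below uses the hypothetical figures from the example above (not actual market rules for any capacity product) to compare the hours a demand response provider would have to stand ready under a broad capacity product versus a targeted flexibility product:

```python
# Illustrative only: hypothetical availability requirements, not the
# rules of any actual capacity market. Compares hours on call under a
# broad capacity product vs. a targeted bellwether-event product.

BROAD_DAYS = 365          # available every day of the year (assumed)
BROAD_HOURS_PER_DAY = 8   # for up to eight hours a day (assumed)

EVENTS_PER_YEAR = 5       # a handful of bellwether events (assumed)
HOURS_PER_EVENT = 4       # three or four hours each (assumed)

broad_hours = BROAD_DAYS * BROAD_HOURS_PER_DAY
targeted_hours = EVENTS_PER_YEAR * HOURS_PER_EVENT

print(f"Broad product:    {broad_hours} hours/year on call")
print(f"Targeted product: {targeted_hours} hours/year on call")
print(f"Ratio: {broad_hours / targeted_hours:.0f}x")
```

Under these assumptions the broad product demands roughly two orders of magnitude more standby hours than the system actually needs, which is the sense in which an overly broad capacity definition buys "a lot of extra reliability" while squeezing out surgical providers.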

One fix involves adjusting capacity market design to cover a broader definition of system needs, as proposed in Hitting the Mark on Missing Money. Another involves creating a more iterative Staircase Capabilities Market design.

Learning across markets

To meet the flexibility challenge of shifting the future resource mix to cleaner, cheaper sources, markets with and without capacity constructs must learn from each other. If an energy-only market sees a proliferation of very expensive bellwether events, such that net revenues from those events far exceed the capacity payments seen in other markets, its regulators and ratepayers should ask why more resources are not becoming available to meet the underlying need for investment in flexible supply- and demand-side resources. Conversely, if capacity markets are paying out relatively larger sums than energy-only markets pay out through bellwether events, they should question their framework for compensating resources to ensure reliability. In any case, to achieve least-cost reliability in a clean energy future, all markets should be open to all possible technologies, including demand-side options, that can mitigate the impacts of bellwether events like that cold morning in Texas.