Trending Topics – Resilience in a clean energy future

A version of this article appeared on Greentech Media on November 29, 2017.

By Mike O’Boyle

Resilience may be the most trending topic in today’s electricity sector.  The Department of Energy’s (DOE) report on the impacts of baseload retirements, and its subsequent Notice of Proposed Rulemaking (NOPR) to subsidize baseload units for the resilience they allegedly provide the U.S. power system, raised the question not only of whether 90 days of onsite fuel improves resilience (two experts from America’s Power Plan say no) – but more fundamentally: what is resilience, and how can it be measured?

Answers are conspicuously absent from DOE’s analyses and attempted rulemaking, but DOE is not alone – FERC’s own questions to stakeholders responding to the NOPR turned on the same basic unknowns: what resilience is, and how it can be measured.

Despite the certainty expressed by DOE, stakeholder comments have confirmed that the electricity system lacks an agreed-upon definition of, or metrics for, resilience as a concept separate from reliability.  Furthermore, it’s unclear that either requires action from FERC – the North American Electric Reliability Corporation (NERC) already ably regulates the reliability and resilience of the bulk system.  Still, bulk and distribution system regulators are receptive to calls for a more resilient grid in the face of increasingly intense weather events, greater economic reliance on continuous electricity service, a more variable and distributed generation fleet, and greater threats of cyber and physical attack.

In fact, resilience is increasingly a focus for state-level utility stakeholders, particularly in the context of grid modernization.  At the 2017 NARUC Annual Meeting, three hours of subcommittee meetings discussed grid resilience, and a general session, ominously titled “Mother Nature, the Ultimate Disruptor,” addressed efforts to improve resilience across critical infrastructure, including the grid.  Taking stock of what we know, and what we don’t, about resilience is therefore useful before approving large-scale investments or payments in the name of resilience that may actually exacerbate the problem.

How resilience differs from reliability

Reliability and resilience are intertwined and often conflated, making reliability a good place to start. NERC, which has FERC-delegated authority under the Energy Policy Act of 2005 to create and enforce reliability standards for electric utilities and grid operators, defines reliability as a combination of sufficient resources to meet demand (adequacy) and the ability to withstand disturbances (security).  To hold reliability authorities accountable, NERC monitors the ability of reliability coordinators to respond to generation or transmission outages. For example, NERC penalizes excessive deviations from system frequency and voltage, two leading indicators that system operators may have inadequate resources to respond quickly to unforeseen supply and demand imbalances.

A common, accepted measure of adequacy is the percentage of capacity in excess of projected or historical peak demand for that system – the planning reserve margin – although precise adequacy standards differ between reliability regions, subject to NERC approval. Adequacy also includes essential reliability services like frequency and voltage support, and will increasingly require a focus on flexibility as more wind and solar come online.
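
For concreteness, the arithmetic behind that measure is simple; here is a minimal sketch with purely hypothetical numbers:

```python
# Planning reserve margin: capacity held in excess of expected peak demand.
# All numbers below are hypothetical, for illustration only.
installed_capacity_mw = 115_000  # total accredited capacity on the system
peak_demand_mw = 100_000         # projected (or historical) peak demand

reserve_margin = (installed_capacity_mw - peak_demand_mw) / peak_demand_mw
print(f"Reserve margin: {reserve_margin:.1%}")  # -> Reserve margin: 15.0%
```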

Security is harder to measure, as it reflects a preparedness to endure uncertain external forces.  Modeling and thought exercises help, but the impacts of low-probability, high-impact events remain difficult to predict until they occur.  NERC is on the case, promulgating cyber security, emergency preparedness and operations, and physical security standards to ensure grid operators and utilities are prepared for attacks or blackouts.

The impacts of inadequate resources or security against anything from hurricanes to squirrels to cyberattacks can be measured in terms of outages.

Reliability is generally measured in terms of the system average duration and frequency of outages (SAIDI and SAIFI), with different permutations based on whether the system average or customer average is more important to the reliability regulator.
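
For readers who want the mechanics, here is a minimal sketch of the standard SAIDI and SAIFI calculations (per IEEE Standard 1366), using invented outage records:

```python
# Hypothetical outage records for one year: (customers_interrupted, minutes).
outages = [
    (1_200, 90),    # e.g., a feeder fault
    (300, 45),      # e.g., an equipment failure
    (5_000, 180),   # e.g., a storm-related outage
]
customers_served = 50_000

# SAIFI: average number of interruptions per customer served.
saifi = sum(n for n, _ in outages) / customers_served

# SAIDI: average minutes of interruption per customer served.
saidi = sum(n * minutes for n, minutes in outages) / customers_served

print(f"SAIFI: {saifi:.3f} interruptions per customer")  # 0.130
print(f"SAIDI: {saidi:.1f} minutes per customer")        # 20.4
```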

As a more expansive concept than reliability, resilience encompasses consequences to the electricity system and other critical infrastructure from high-impact external events whose likelihood was historically low, but is now increasing.  Reliability metrics like SAIDI and SAIFI generally make exceptions for extreme weather events when measuring utility performance – whereas resilience is often articulated as a grid attribute that improves response to such events.

Source: National Academies of Sciences, Engineering, and Medicine, “Enhancing the Resilience of the Nation’s Electricity System,” (2017)

The DOE-supported Grid Modernization Laboratory Consortium (GMLC) explores the concept of reliability metrics for “critical customers” as resilience indicators.  A resilient grid may still go down for some time, but it preserves service – or prioritizes restoration – for critical customers like hospitals, water and sanitation systems, first responders, communications towers, and food storage.
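
To make the idea concrete, here is one way such an indicator might look – purely a sketch with invented data, not the GMLC’s actual formulation – measuring the load-weighted share of service preserved during an event, computed separately for critical customers:

```python
# Sketch of a critical-customer resilience indicator (invented data; not the
# GMLC's actual metric). Each record: (load_mw, hours_out, event_hours, critical).
customers = [
    (5.0, 0, 48, True),     # hospital: kept online for the whole event
    (2.0, 6, 48, True),     # water treatment: restored within six hours
    (30.0, 30, 48, False),  # residential feeders: restored much later
]

def served_fraction(records):
    """Share of load-hours actually served over the course of the event."""
    total = sum(load * hours for load, _, hours, _ in records)
    served = sum(load * (hours - out) for load, out, hours, _ in records)
    return served / total

critical_only = [c for c in customers if c[3]]
print(f"Critical load served: {served_fraction(critical_only):.1%}")  # 96.4%
print(f"All load served:      {served_fraction(customers):.1%}")      # 48.6%
```

In this toy example the grid performs poorly on conventional terms but preserves most critical service – exactly the distinction resilience metrics are meant to capture.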

Resilience also adds to reliability in the area of system recovery.  It recognizes that low-probability, high-impact events will inevitably cause outages – the key is investing in infrastructure that reduces the duration, cost, and impact on critical services of the outages that do occur.  New technologies that increase system awareness and automation, particularly on the transmission and distribution systems, allow downed circuits to be rapidly islanded before failures cascade into other parts of the distribution system, power to be rerouted while reliability issues are isolated, and the grid to be hardened against new threats.

Despite new research into resilience, reliability regulators (particularly NERC) already perform most of the work needed to ensure the grid, particularly the bulk power system, is resilient.  The ways in which resilience can be additive as a concept are few but important, and fall into specific applications like severe weather events, continued service for critical infrastructure, and improved recovery through grid awareness.  Attempts to improve resilience through rulemaking that focuses on fuel security or resource adequacy miss the point – these elements of service have been improving steadily for years and can be procured in a technology-neutral way, with an able-bodied NERC at the helm.

Resilience through the energy transition

Economic forces and policy priorities are driving a transition to a cleaner, more variable power mix. Meanwhile, customers are becoming more participatory.  Each of these transitions affects resilience in positive and negative ways.

We know that damage to the distribution system caused the vast majority of outages over the last five years.  Conceptually, the availability of fuel to power plants could also be a cause, but data show it is a very unlikely cause of outages – only 0.0007% of outages were caused by fuel security issues.

Still, the transition away from fuel-based power to higher shares of renewable energy may affect bulk power system reliability and resilience in both positive and negative ways.

For human-caused events such as cyber or physical attacks, renewable energy removes significant fuel supply risk.  Coal relies heavily on rail delivery, which is vulnerable to physical attack: roughly 40 percent of U.S. coal comes out of Wyoming’s Powder River Basin, nearly all of it via the 103-mile Joint Line rail corridor.  Destruction of a nuclear plant during operations could be catastrophic.  The natural gas delivery system is vulnerable to cyber and physical attacks, though some delivery will continue if one line is disrupted.  Converting to renewables avoids these fuel security issues; however, cost-effective integration of a high share of utility-scale renewables depends on increasing transmission system capacity to deliver energy where it is needed and balance out geographic variability. Taking down one or two lines could disrupt the system’s ability to balance, either on a regional or interconnection-wide basis, hampering reliability until threats were addressed.

Natural events, particularly weather-related events, must also be considered.  Hydroelectric generation is vulnerable to drought for periods of months, while cloud cover from intense storms and hurricanes threatens solar availability for days.  Extreme winds may force curtailment of a portion of the wind fleet for short periods of time.  As we saw during the 2014 Polar Vortex, on-hand coal piles can freeze, and co-dependence on natural gas for heating and generation during extreme cold can threaten resource availability.  Prolonged heat waves can leave nuclear plants unusable if cooling water is too hot.

With respect to outage recovery, combining inverter-based storage and generation may be more effective than baseload at performing a black start, since spinning masses would not need to be synchronized, though we lack practical examples of restarting a system with very low spinning mass.  As Amory Lovins recently wrote, the performance of nuclear plants restarting after the 2003 Northeast Blackout was abysmal – it took weeks to bring them back online to full capacity.

The other element of the transition is distribution system resilience, as grids increasingly rely on distributed, small-scale devices to provide services that complement centralized, utility-scale generation and contribute to a smarter, more connected, and more automated distribution system.  Connected devices are helpful in identifying and isolating threats on the grid while preventing cascading failures and improving restoration, but they may open up the system to more widespread cyberattacks.  Local generation can add resilience to natural events – the Borrego Springs microgrid pilot in SDG&E’s territory allows a remote community to disconnect from the larger grid and maintain critical services during wildfire and high-wind seasons, which threaten the critical transmission line connecting it to generation closer to load centers on the coast.

Lessons for policymakers

Resilience centers on withstanding and recovering from high-impact events.  Policymakers can largely trust the existing reliability apparatus to cover resilience related to the bulk power grid.  In particular, NERC already provides standards for cyber security, and NERC’s Essential Reliability Services Working Group is working to quantify the services needed to maintain and improve reliability and resilience.

Still, NERC covers only the bulk electricity system. Restoring the distribution grid depends on other infrastructure, like gasoline and roads for delivery trucks, while critical service providers in turn rely on electricity service.  Disaster preparedness is something utilities and their regulators take seriously – but creating a cross-agency planning process could help improve and align agencies’ responses to threats.

Instead of duplicating NERC’s efforts, state policymakers can focus on grid modernization to deliver a resilient and flexible last mile of customer delivery.  Knowing what they’re paying for is crucial to balancing cost against disaster preparedness.  To that end, policymakers should develop resilience metrics for the distribution system tied to measurable outcomes, starting from the Resilience Analysis Process work already performed by Sandia National Labs (SNL).  SNL’s seven-step process develops and routinely updates resilience metrics in light of new modeling and actual system disruptions.

Source: Sandia National Labs

“Getting the most out of grid modernization” is a five-step framework from America’s Power Plan to help policymakers turn metrics into action and hold utilities accountable for delivering resilience and other customer value.

All of this takes place in the context of a dramatic energy transition toward more connected distributed devices and variable, fuel-free generators.  Once resilience attributes and metrics that go beyond reliability are developed, they can begin figuring into assessments of the cost and reliability of future high-renewables systems.  Where benefits are identified, they should be incorporated into plans; where gaps are found, utilities and other market makers should identify technology-neutral system attributes, such as flexibility, to shore up resilience.

Skin in the Game: New case studies illuminate best practices for DER ownership & operation

This post was originally published on The Energy Collective on September 23, 2015.

This month the experts of America’s Power Plan released a new resource for policymakers and electric utility stakeholders as a part of our work under the Solar Electric Power Association’s (SEPA) 51st State Challenge.  The new report begins to answer the question – “Who should own and operate distributed energy resources?”  The report examines a series of case studies on different ownership models for distributed energy resources (DERs) with system optimization as the metric for success.  It turns out that there are many options for who can own and operate DERs—and any of them can work, as long as the revenue streams (and revenue delivery mechanisms) are designed or adjusted appropriately.

The new paper functions as an addendum to America’s Power Plan’s original 51st State submission, An Adaptive Approach to Promote System Optimization, by Michael O’Boyle.  This concept paper, which was selected as a top concept to feature at SEPA’s summit in April, described fundamental principles of rate design and market structure that can help states better support technological innovation and grid efficiency.  Recognizing that approaches will vary across the nation, the paper advocates for a reorientation of regulation toward the outcomes consumers want most from the electricity system: affordable, reliable, environmentally clean electricity service.

Who Should Own and Operate Distributed Energy Resources? applies these principles to three different models of distributed energy resource ownership and operation: utility-owned and operated DERs, third party-operated DERs, and customer-operated DERs. The paper identifies strengths and weaknesses in each approach, with an eye toward recommending complementary policies to consider depending on which ownership and operation structure a region chooses.  This table categorizes the case studies included in this report:

Case studies by ownership and operation model

Each of these case studies draws on experience with different approaches to tackling the same problem: how to take advantage of cost-effective distributed technologies that have run into outdated regulatory models.  Any of these ownership and operation models can improve system optimization so long as the value proposition to each actor aligns with the public interest.

Utility owned and operated DERs

The utility-owned and operated model may be able to demonstrate and accelerate early deployment of new technologies, as in the case of the California Solar PV Program (SPVP) and Borrego Springs Microgrid.  However, when utility-owned DERs play in competitive markets, they may either crowd out competitors or result in inferior performance.

One way to support emerging technologies and avoid cost overruns in a utility-owned structure is to tie program-related utility revenue to specific, quantitative performance outcomes.  This minimizes the investment risk for customers.  Another way is to allow utilities to pilot DERs—in parallel with third-party alternatives—again with clear metrics for performance.  Regulators can reevaluate the programs periodically, as was the case with California’s Solar PV Program and Arizona’s rooftop solar programs.

One emerging example of this kind of parallel piloting is Central Hudson’s May 1 proposal to own and operate community solar alongside SolarCity as part of a demonstration project under New York’s Reforming the Energy Vision initiative.  This pilot program will allow comparison of a third-party community solar model with a utility-owned model.  SDG&E also recently proposed two sister pilot programs under its distributed resource plan that will compare utility-owned and -operated storage with a customer-centric storage model that includes a dynamic rate.  Each will help to define the evolving boundaries of the “natural monopoly” and of competition on the distribution system.

Third-Party and Customer-Operated DERs

Third-party- and customer-operated DERs have been effective at maximizing the revenue available to them, but they could perform better still if the available revenue streams were better aligned with the public interest, and if third parties could access system data from utilities.  For example, dynamic rates that better track the value of demand response could increase the savings and responsiveness of customers participating in NV Energy’s EcoFactor program, a demand response and home energy management program that uses an internet-connected smart thermostat to interact with the customer’s air conditioner.

Customer-centric programs could also improve if customers can get paid for the full value DERs provide to the bulk and distribution systems, including external value (such as environmental benefit).  ComEd’s Residential Real-Time Pricing (RRTP) program, for example, could evolve to include a varying distribution charge that compensates customers for on-site generation on top of avoided energy costs.  Value of Solar Tariffs in Minnesota and Austin Energy are examples of how this can work in practice for a specific resource (distributed solar), but the dynamic interaction between the bulk system and distribution optimization still requires refinement.

Technology can enable consumers to better optimize their behavior against more complex rate designs.  When automation technologies (like programmable communicating thermostats) are paired with rates designed to promote optimal deployment of customer DERs, as in the Sacramento Municipal Utility District (SMUD) Multifamily Summer Solutions (MSS) Study, customer demand response (and parallel savings) improve dramatically:

SMUD Pilot results

Source: ACEEE 2014

+++

The original principles articulated in An Adaptive Approach to Promote System Optimization can help guide experimentation with all of these models of DER ownership and operation.  Fair valuation of DERs is key to ensuring they can compete with centralized generation to meet system needs at least cost.  Likewise, integration of new technologies through any model should be iterative, to minimize the risk to utility customers.  To the extent possible, new rate designs should be coupled with enabling technologies and third-party data access so that complementary technologies can help customers optimize their bills and maximize system benefits.  No one model will fit every state or utility, but with the right framework and complementary policies, each model can support an optimized grid system.

America’s Power Plan Featured at SEPA’s 51st State Summit (UPDATE)

America’s Power Plan’s submission to SEPA’s 51st State Challenge was featured at the 51st State Summit in San Diego, CA on April 27.  An “Innovation Review Panel”  selected three papers to be featured at the Summit, including “An Adaptive Approach to Promote System Optimization,” by Michael O’Boyle and the experts of America’s Power Plan.  The 51st State Challenge called for utility stakeholders to take a “fresh” look at electricity policy in a state with no preexisting regulations or market structure.  Submissions came from utilities, clean energy companies, policy consultants, advocates, and independent researchers.

The paper examines fundamental principles of rate design and market structure that can be adapted to fit any political climate, resource mix, and technology evolution.  The proliferation of new resources behind the meter has subjected the traditional distribution utility to unprecedented competition, challenging the notion that it remains a “natural monopoly.”  But there is still a role for a central system manager to coordinate and optimize the system and a single owner for the system of poles and wires, although they need not be the same entity.  Given the pace of technological innovation and the myriad solutions to improve grid efficiency, regulation in the 51st State must constantly adapt to support an ever-improving, competitive electricity system.  Recognizing that answers depend on the idiosyncrasies of any state, the paper advocates for a reorientation of regulation toward the outcomes consumers want most from the electricity system: affordable, reliable, environmentally clean electricity service.

The paper consolidates the recommendations into four principles of rate design and market structure:

51st State Principles Table

The other two featured papers largely converged with the fundamental principles articulated in the America’s Power Plan submission.  One, The “Sharing Utility”: Enabling & Rewarding Utility Performance, envisioned a utility that transitions into a platform for users and power producers to share resources in a way that maximizes customer benefits and minimizes costs.  It went further than the other papers in examining the incremental steps, including the stakeholder process, needed to enable a transition to a highly distributed future that seamlessly integrates new technologies at the grid edge.  The other featured paper, The 51st State of Welhuton: Market Structures for a Smarter, More Efficient Grid, examined an end state in which individual users’ interactions with the grid are fully automated by an “energy box” software/hardware package.  The paper envisions a future in which the distribution system is operated, but not owned, by an independent distribution system operator (IDSO) that acts much like RTOs do on the bulk system today.  In Welhuton, the IDSO implements many of the fundamental principles of the Adaptive Approach through a marketplace that sends price signals to individual customers in real time.

With adaptation as a central policy goal, Energy Innovation and America’s Power Plan hope that the 51st State can be a model for technology and innovation that helps all states find ways to promote an optimized electricity system.

Trending Topics in Electricity Today—Do Pay-for-Performance Capacity Markets Deliver the Outcomes We Need?

A version of this article was originally published on April 28, 2015 on Greentech Media.

By Mike Hogan, Michael O’Boyle, and Sonia Aggarwal

Competitive wholesale power markets are meant to sustain needed investment based on market participants hedging risks in response to transparent pricing in the energy and ancillary services markets (“the energy market”).  In practice it has been challenging to ensure that market prices fully reflect actual market conditions. This has led to concerns that some of the money and risk exposure needed to drive investment is “missing” from the energy markets.  Some market operators have responded by introducing “capacity markets,” which are intended to bridge the gap between revenues available from energy markets and the all-in cost of desired capacity.  Capacity markets offer commitments, still short-term relative to most investment timescales, to make fixed payments for the right to call on the resource when needed.  In so doing, they “levelize” a portion of expected revenues that would otherwise have been volatile and difficult to predict.  They also transfer some of the role of determining both the amount and type of investment needed from the market to a central administrator.

The amount of capacity a system needs in a given period is a function of the maximum expected demand, and capacity markets have traditionally been designed on that basis.  But customer expectations about reliability require that these resources perform not just when demand is at its highest but also under other extreme conditions.  That was not the case in the Northeast and Mid-Atlantic during the 2013/2014 Polar Vortex, when reliability was placed at risk because a great deal of committed capacity failed to show up.  In large part, this resource flakiness was caused by weather-related plant outages coupled with fuel delivery problems – failures that the existing capacity markets largely do not address.  System operators have proposed revisions to existing markets to drive improvements in the resource mix, in hopes of better reliability.  But it remains to be seen which, if any, of these reforms will keep up with the needs of a system in transition while promoting affordable, clean electricity.

Pay-for-Performance

System operators in regions affected by the Polar Vortex – PJM, NYISO, and ISO-NE – have each proposed market reforms to address resource performance.  While NYISO, which has a capacity market, has concentrated on improvements in energy market pricing, PJM and ISO-NE have concentrated on revising their capacity markets, adding “pay-for-performance” mechanisms that increase capacity payments for resources that perform during all peak and emergency hours, rather than just the annual peak, and penalize the resources that fail to show up.  After these changes, the risk of non-performance will fall more heavily on capacity resources and less on system operators and consumers.

The Federal Energy Regulatory Commission (FERC) approved ISO-NE’s capacity market revision in 2014 and is currently considering PJM’s proposal.  ISO-NE split its capacity payments into two parts: an initial payment followed by a performance payment or penalty.  As before, the marginal offer sets the clearing price for the base capacity at the time of the auction, and higher offers are rejected.  The difference is that when these resources enter the system three years later, their total payment is adjusted for performance via an additional payment or penalty.

The performance payment or penalty is a function of how well a resource actually performs during emergency, summer-, and winter-peak conditions (“scarcity events”) relative to its original capacity offer.  The penalties paid by under-performing resources cover the higher costs paid to over-performing resources to maintain system balance. The table below (from ICF) shows the numbers for ISO-NE:

ISO-NE pay-for-performance parameters (Source: ICF)
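
The settlement logic is easier to see in simplified form.  The sketch below illustrates the two-part structure – a base payment plus a performance adjustment keyed to output during scarcity events – using hypothetical parameter values rather than ISO-NE’s filed numbers (the “balancing ratio” here stands in for the system’s expected call on each obligation):

```python
# Simplified two-part pay-for-performance settlement for one month.
# All parameter values are hypothetical, not ISO-NE's filed numbers.
cso_mw = 100           # capacity supply obligation cleared in the auction
base_price = 7.0       # base capacity payment, $/kW-month (illustrative)
perf_rate = 2_000      # performance rate, $/MWh of deviation (illustrative)
balancing_ratio = 0.8  # share of each obligation the system needs in the event

# Part 1: the fixed payment set by the auction clearing price.
base_payment = cso_mw * 1_000 * base_price  # $700,000

# Part 2: during a 5-hour scarcity event the resource delivers 60 MW against
# an expected 100 MW * 0.8 = 80 MW, so it owes a penalty on the shortfall.
actual_mw, event_hours = 60, 5
deviation_mwh = (actual_mw - cso_mw * balancing_ratio) * event_hours  # -100 MWh
adjustment = perf_rate * deviation_mwh  # -$200,000 (a penalty here)

print(f"Total for the month: ${base_payment + adjustment:,.0f}")  # $500,000
```

An over-performing resource would see the same calculation run in its favor, funded by the penalties of under-performers.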

How Will This Change the Resource Mix?

Under the old market structure, resource owners offered capacity into the market based on the difference between all-in costs and expected revenues from energy and ancillary service markets, with the risk of “normal” operating problems borne largely by consumers.  Under the new structure, generators must account for a substantially higher risk of penalties for non-performance during scarcity events, which will themselves grow more frequent and less predictable as more variable generation is added to the system and as “extreme” weather events become more commonplace in a changing climate.  Additionally, resources not directly involved in the capacity market (either because they do not offer their resource into the market or because their offers are too high to be selected) can still be rewarded for providing electricity during scarcity events.

As a result, the new structure becomes an unattractive prospect for resources that are seasonal or at risk during scarcity events.  At the same time, resources that can expect to be available year-round and in extreme conditions get a shot in the arm under the “pay for performance” structure, with a renewed incentive to lock down their fuel supplies, add dual-fuel capabilities, and protect plant operations from extreme weather events like deep freezes or drought.  ICF predicts this will raise capacity prices for ISO-NE but ultimately drive down wholesale energy prices and increase overall system efficiency and reliability.

While it seems certain that these changes will improve resource availability during scarcity events, it is less clear whether they will deliver greater system flexibility since there is no explicit reward for responding quickly (rather than simply being up and running in advance).  The Analysis Group concluded that the most significant response in ISO-NE would be to add dual-fuel capability to existing gas plants, which would do little to increase the flexibility of the system.  In fact, driving down wholesale energy prices (by replacing them with fixed capacity payments) reduces incentives for flexible resources—particularly demand response and energy storage—whose values rely heavily on short-term price volatility.

PJM’s Pending Proposal

The new PJM Capacity Performance proposal adopts a similar framework to the one used in ISO-NE but introduces “resource coupling” to help level the playing field for all resources.  “Resource coupling” in the capacity market allows more seasonal or variable resources like some forms of demand response, variable generation and energy efficiency to “couple” their offers with one or more resources that complement their generation profiles.  For example, wind turbines (which often produce more in the winter and at night) can combine with solar plants (which produce more in the summer), energy storage, or demand response to comprise a single offer into the capacity market in PJM.
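
To see why coupling helps, consider a hedged sketch: the firm capacity a portfolio can offer is bounded by its worst expected output across the seasons in which it must perform, and complementary profiles raise that floor.  The numbers below are invented for illustration:

```python
# Invented seasonal expected-output profiles, in MW. Wind here is winter-heavy,
# solar is summer-heavy; the numbers are purely illustrative.
wind = {"summer": 20, "winter": 60}
solar = {"summer": 50, "winter": 10}

def firm_offer(*profiles):
    """Capacity deliverable in every season = the worst-season combined output."""
    seasons = profiles[0].keys()
    return min(sum(p[s] for p in profiles) for s in seasons)

separate = firm_offer(wind) + firm_offer(solar)  # 20 + 10 = 30 MW
coupled = firm_offer(wind, solar)                # min(70, 70) = 70 MW
print(f"Offered separately: {separate} MW; coupled offer: {coupled} MW")
```

Offered separately, each resource can only commit its worst-season output; coupled, the complementary profiles support a much larger joint offer.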

Some public interest organizations in the PJM proceeding would nevertheless prefer to see FERC reject the proposal, asserting in their comments to FERC that the deck may be unfairly stacked against renewable resources, and that this is a broad, costly solution to a relatively small problem.  The ability of seasonal or variable resources to couple their offers mitigates their inherent disadvantage to some extent, but the benefits of coupling are dampened by the restrictiveness of combining small sets of resources rather than taking advantage of the diversity of the system’s full resource portfolio.

If the system’s primary unmet need is dependable capacity, the proposed capacity market reforms may well do the trick.  A broader challenge remains, however: these markets have long-term flexibility needs as well, and even these revised administrative capacity mechanisms may prove too rigid to adapt efficiently to the coming system evolution.  Placing greater emphasis on improved energy market price formation, such as the NYISO proposals approved in early 2015, may be an alternative that addresses the wider set of challenges more efficiently.

Implications for Grid Flexibility and Resource Adequacy across the Country

It’s difficult to say what impact these capacity market reforms will have on the rest of the country.  By no means are pay-for-performance capacity markets the only way to ensure resource adequacy, and they do not directly favor a significantly more flexible resource mix.  Energy and capacity markets can—and should—be reformed to drive efficient investment in more flexible, reliable resources.  For example, Mike Hogan, author of Aligning Power Markets to Deliver Value, described in a recent paper how we could value flexibility in energy and ancillary service markets by more fully pricing scarcity and further opening markets to non-traditional providers.  NYISO focused its reforms on improvements in energy market pricing, and even in PJM and ISO-NE the capacity market reforms have been accompanied by multiple proposals to improve shortage pricing in energy markets.  The Electric Reliability Council of Texas (ERCOT) has proposed reforms to its energy and ancillary service markets to value the properties that flexible resources can provide.  Outside of restructured market areas, planning will continue to play an important role in ensuring adequate system flexibility.

There are, of course, tradeoffs between different market solutions.  Capacity market approaches may be simpler to administer, but they focus on meeting peak demand when we know we also need more system-wide flexibility.  Likewise, allowing energy market prices to fluctuate more freely or refining ancillary service market products may support flexible resources but may prove too complex or too politically unpalatable.  Energy decision-makers will want to watch closely to see which of the responses to the Polar Vortex – or which other approaches we’ve yet to see proposed – best facilitates the transition to an affordable, reliable, clean electricity system.

However it’s supported, it is clear that a more flexible, resilient resource mix is needed.  As weather patterns change and variable resources become a greater share of our electricity supply, an efficient market should deliver flexible resources to complement low-marginal-cost energy from variable resources at the lowest possible cost.

+++

Thanks to George Katsigiannakis, Jennie Chen, and Eric Gimon for their input on this piece. The authors are responsible for its final content.