ERCOT’s summer peak demand forecast: new investment, generator profits, no blackouts

A version of this article is available on Greentech Media

By Eric Gimon

Wholesale electricity markets are in the news, as oversupplied markets drive prices down and force early retirements of coal and nuclear units.  Companies like FirstEnergy have petitioned the federal government for salvation through regulatory changes and new market rules to drive up prices and restore profit margins – but the picture is different in Texas.

The Electric Reliability Council of Texas’ (ERCOT) “energy-only” market model exposes the value of flexible resources without capacity markets, testing market design in a high-renewables future.

Texas’ market model is working so far: market forces are accelerating the transition from dirty, expensive plants to cleaner, cheaper resources including variable renewables, demand response, and batteries.  Avoiding capacity markets has saved ERCOT customers billions and the system has remained reliable.

But recent coal retirements and increased load forecasts are putting ERCOT’s energy-only market model to the test.  ERCOT’s reserve margin, a key measure of resource adequacy, is expected to be significantly below its target level this summer, prompting fears of electricity service disruptions.  Observers of the capacity market debate will be watching closely, as capacity subsidies, the most common alternative for ensuring reliability, have been criticized for slowing the economic transition away from uneconomic coal and nuclear while suppressing price signals for more flexible units that complement cheaper, cleaner energy resources.

To a naïve observer, the energy-only market structure’s test will be whether ERCOT can avoid shortfalls, i.e. a loss-of-load event, this summer.  But no level of investment or reserve margin can entirely eliminate all risk or protect the grid from ever falling short.  The true test of ERCOT’s market design is whether strong investment signals, i.e. higher prices, spur investment that drives the system back from an acceptable level of risk to a more desirable one.  Fortunately, ERCOT looks capable of meeting this subtler test, and it should stay the course to avoid expensive capacity markets.

Will a disruption happen? Putting ERCOT’s planning reserve margin in context

A December 2017 planning report prompted concerns that an energy-only market might not ensure adequate reliability, pegging ERCOT’s summer 2018 expected planning reserve margin (PRM) at 9.3 percent.  ERCOT’s PRM has since increased to 11 percent, meaning expected generation fleet capacity exceeds expected summer peak load (net of emergency load management tools, including certain types of demand response) by 11 percent.
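As a rough illustration of the arithmetic behind this metric, the PRM calculation can be sketched as follows.  All of the megawatt figures here are hypothetical placeholders chosen to land near an 11 percent margin, not ERCOT’s actual forecast numbers:

```python
# Illustrative sketch of a planning reserve margin (PRM) calculation.
# The capacity and load figures are assumptions for illustration only.

def planning_reserve_margin(firm_capacity_mw, peak_load_mw, load_mgmt_mw):
    """PRM = (capacity - net peak) / net peak, where net peak is
    expected peak load minus emergency load management tools."""
    net_peak = peak_load_mw - load_mgmt_mw
    return (firm_capacity_mw - net_peak) / net_peak

prm = planning_reserve_margin(
    firm_capacity_mw=78_000,   # assumed total fleet capacity
    peak_load_mw=72_000,       # assumed forecast summer peak
    load_mgmt_mw=1_700,        # assumed emergency load management
)
print(f"PRM = {prm:.1%}")      # prints "PRM = 11.0%"
```

A higher forecast peak or a plant retirement shrinks the numerator directly, which is why retirements and load growth together pulled the margin below target.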

This is below ERCOT’s 13.75 percent minimum target reserve margin, which reflects ERCOT’s desired risk threshold in line with a “one event in ten years” resource adequacy standard.  However, this summer’s dip below the target PRM doesn’t mean Texas is taking unacceptable system-wide service disruption risks.

Because plenty of uncertainty exists about exactly what level of reserve margin corresponds to a given level of system risk, the target PRM is not a magic number, as a recent Brattle and Astrapé study shows.  In 2014, the Public Utilities Commission of Texas (PUCT) asked the two firms to estimate ERCOT’s economically-optimal PRM to inform its ongoing review of market design for resource adequacy.  The PUCT wanted to know if ERCOT’s energy-only market design could deliver its desired reliability level.  Brattle’s top-line results were eye-opening, with values for possible target reserve margins all over the place:

Source: Samuel Newell et al., Estimating the Economically Optimal Reserve Margin in ERCOT, The Brattle Group & Astrape Consulting, prepared for the Public Utilities Commission of Texas (2014)

The Economically-Optimal Reserve Margin (EORM) is the reserve margin that emerges from a least-cost system, and it is already below this summer’s forecasted 11 percent PRM.  According to Brattle’s modelling, at that PRM one can expect a “loss of load event” (LOLE), defined as a system deficit triggering rotating outages, slightly less than once every two years (0.44 annual probability of a LOLE – Table 13).  This is significantly worse than ERCOT’s desired reliability standard of one event in ten years (0.1 LOLE), which Brattle’s base case indicates requires a 14.1 percent PRM to achieve.

The report also includes a wide range of results (12.6 percent-16.1 percent) for a reliability standard of 0.1 LOLE in sensitivity cases.  This wide range reflects how sensitive an estimate of the desirable PRM is to model assumptions, especially assumptions about the frequency of extreme events.  The PRM simply cannot measure reliability risk with a precision better than a couple of percentage points.

Whatever happens in Texas this summer, the probabilities in this study probably won’t correspond exactly to the level of reliability risk ERCOT is running at an 11 percent PRM.

Is the risk of system reliability as bad as it seems?

Should ERCOT be headed towards an undesirable level of reliability this summer, the natural follow-up question is: How close to the Sun will Texas’ grid fly?  In other words, does an 11 percent PRM equal at least an acceptable level of system risk if new investment will push it back up in coming years?

One fact should lead policymakers to conclude that Texas is still facing an acceptable level of system risk this summer: compared to metrics from other jurisdictions, 11 percent seems like an adequate PRM.

The one-in-ten-years resource adequacy standard is a historical construct adopted by the electric power industry, and grid operators can interpret it differently.  For example, “one event in ten years” could be thought of as one day or 24 hours in ten years, i.e. 2.4 loss-of-load-hours (LOLH) per year.  According to figure ES-1, ERCOT only needs a 9.1 percent PRM to achieve this LOLH standard, so it would be compliant this summer.  International jurisdictions often use a 0.001 percent expected unserved energy (EUE) standard – the report lists 9.6 percent as the minimum PRM needed to achieve that metric.  Hence, plenty of evidence shows 11 percent could be considered an acceptable PRM.

Furthermore, reliability under all three of these metrics is more stringent than what customers experience due to distribution-related outages.  Suppose this summer ERCOT has an unusually high peak load and reserves dip too low.  After a progressive series of steps that allow system operators to add generation from other grids and enlist large customers who are paid to voluntarily curtail during emergencies, ERCOT and the PUCT will ask the public to conserve electricity.  Once all avenues are played out, ERCOT can institute rotating outages to preserve the entire grid’s integrity – no system-wide blackout would occur.

Rotating outages have only happened three times in ERCOT’s history, and even then customers experienced relatively little disruption compared to what they are already used to from problems on the distribution grid.

An earlier 2012 Brattle report explains that even at the lower 2.4 LOLH reliability standard, the possibility of rotating outages means customers can expect in a given year to be without power for “only three minutes per customer; this compares to an average of a few hundred minutes per customer per year from distribution outages.”  The slight possibility of more system-deficit issues is a blip compared to much more common distribution outages, and likely won’t meaningfully degrade customer service.

Finally, it is also quite possible that the risk of a reliability event this summer is smaller than Brattle’s roughly 50 percent probability because of strong economic incentives in ERCOT’s market design.  Energy-only markets reward economic self-interest (greed) in a socially productive way when prices spike exactly at times of maximum system stress.  Much higher summer prices when reserves run short mean that price-responsive customers are more likely to reduce their load to take advantage of money-saving opportunities.  On the supply side, combustion turbines and old gas steam plants, modeled as having a 19-20 percent chance of outage, are already gearing up to run during peak demand.  The PRM itself has crept up from 9.3 to 11 percent over six months, partly through market response to forecasted higher prices.

The real debate: how can energy-only markets adequately mitigate reliability risk?

So, even if a single loss-of-load event happens in ERCOT this summer, such an occurrence would not give particularly precise information on whether system risk was at a 1-in-3-year or a 1-in-10-year level.  While this summer’s planning reserve margin is one indicator of system reliability risk, understanding the exact connection requires detailed modelling and depends on key assumptions.  Because LOLEs happen against a background of much more frequent distribution outages, customers aren’t likely to detect any material changes to service reliability compared to prior years with higher PRMs.

Instead of focusing on PRM, regulators and interested observers should examine other metrics to evaluate ERCOT’s near-term health.  This summer, they should watch how frequently the grid is calling on all its available resources or facing higher demand than expected.  More or less frequent periods of stress correspond to a higher or lower risk of a loss-of-load event and are important data for modelers evaluating system reliability risk.  Regulators should look at actual resource performance during periods of stress to see if market participants are responding to the immediate performance incentives inherent in an energy-only market design.  Most importantly, regulators should pay attention to the economic signals for increased market participation that the low PRM is sending through forward and real-time markets.

By choosing an energy-only market, Texas regulators have accepted that prices will be too low some years to stimulate significant new investment, but that this market will also foster new investment as prices spike when resources become short.  The big questions become: as the PRM moves up and down over various business cycles, where does it average out?  And does this average correspond to an acceptable level of reliability?

The PRM resulting from market forces is called the market equilibrium reserve margin (typically a bit higher than the economically optimal reserve margin).  This equilibrium matters to policymakers because if it is lower than the benchmark PRM they think is needed to reach their desired reliability level, then the energy-only market design will not achieve their reliability goals.  Policymakers must then make further tweaks to the energy-only markets to increase generator revenues during times of stress, like changes to the operating reserve demand curve or, as a last resort, drastic measures like a capacity market.

According to Brattle’s 2014 report, ERCOT needs a capacity market because its equilibrium is around 11.5 percent PRM, well short of its 13.75 percent target and the 14.1 percent PRM that achieves a 1-in-10 LOLE.  Brattle estimated long-term incremental costs of a capacity market to customers at $400 million/year, a one percent bill premium.  Even so, the PUCT continued trusting the market and Texas customers have benefited greatly given the high PRMs over the last few years.

Texas should let the market work as designed

Despite the PRM dropping below benchmark, facts on the ground argue Texas should stay the course.  Apart from the shortcomings and inefficiencies of a capacity market, the Brattle modelling may be underestimating the market response to a low PRM, meaning the energy-only market should provide sufficient financial incentives to keep the system at an acceptable level of risk.

Forward prices (ICE) are very high for this summer ($120/MWh and $220/MWh average 7am-10pm for July and August as of May 10th).  Based on such prices, combined-cycle gas (CCGT) plants in the market could easily make revenues net of short-term expenses of more than 200-250 percent of the level necessary to recover annualized long-term capital expenses – a significant incentive to invest in new gas resources.  Furthermore, the marginal new build may not be a CCGT or gas peaker plant, with their associated build-time lag.  Resources like wind, batteries, solar, reciprocating engines, and demand response can – and will – come online a lot quicker in this pricing environment.
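To see why forward prices like these are such a strong investment signal, here is a back-of-the-envelope sketch of a CCGT’s energy margin over a July-August peak block.  The heat rate, gas price, hours, and carrying cost are all assumed illustrative values, not actual market or ERCOT data:

```python
# Rough, illustrative net-revenue check for a combined-cycle gas plant (CCGT)
# against high summer forward prices. All inputs are assumptions.

def net_energy_margin(power_price, gas_price, heat_rate, hours):
    """Gross energy margin in $/kW over a block of hours:
    (power price - fuel cost per MWh), clamped at zero, times hours."""
    fuel_cost = gas_price * heat_rate          # $/MWh of output
    return max(power_price - fuel_cost, 0.0) * hours / 1000.0

hours = 62 * 15                                # assumed July+Aug 7am-10pm block
margin_per_kw = net_energy_margin(
    power_price=170.0,                         # midpoint of $120 and $220/MWh
    gas_price=3.0,                             # $/MMBtu, assumed
    heat_rate=7.0,                             # MMBtu/MWh, assumed
    hours=hours,
)
annualized_capex = 90.0                        # $/kW-yr, assumed carrying cost
print(f"Two-month margin: ${margin_per_kw:.0f}/kW vs ${annualized_capex:.0f}/kW-yr capex")
```

Under these assumed inputs, two peak months alone recover a large share of a year’s carrying cost, which is the kind of arithmetic that pulls new resources into the market.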

Instead of revisiting fixes like capacity markets, Texas policymakers should hang tight and give markets a chance to show their stuff while focusing on continual improvement to energy-market efficiency, all at an arguably acceptable level of risk.  For example, they could find ways to improve the availability of price-responsive demand (which lowers costs and the need for reserves), or promote policies that strengthen the finances of new clean resources by improving the credit-worthiness of load counterparties (like retailers) that use them to hedge exposure to soaring prices like those expected this summer.

By doubling down on its faith in markets, Texas can continue to be a great example of a market-driven transition to a cleaner, cheaper, and more reliable grid.

+++

Thanks to Lenae Shirley and Rob Gramlich for their feedback on this piece.  The article and any errors therein are solely the responsibility of America’s Power Plan.

The power market fix we’ve been waiting for . . .

Last week, the Federal Energy Regulatory Commission (FERC) held a two-day technical conference to discuss distributed energy resources in wholesale markets. This rich discussion stemmed from FERC’s recent adoption of Order 841, which directs regional markets to come up with rules for energy storage to participate in markets, but stops short of supporting other distributed energy resources in markets (presentations are available online).  But could it be that last week’s discussion missed the best possible solution?

Guest author Mark Ahlstrom provides a creative solution that just might be the power market fix we’ve all been waiting for.  We are having so much trouble making the markets work for the wave of new grid technologies upon us – but maybe that is because we set up the markets with a limited view of future options.

We trade energy, capacity, and ancillary services that are based on the most basic capabilities of our oldest generators. What if, instead, we started with the most capable universal resource we can imagine, one that can provide, absorb and store energy at will, and then define real resources by how they differ from this idealized resource? All of a sudden, other kinds of resources fit much more easily into one model, whether they are storage plants, conventional generators, renewable generators, demand response resources, DERs, or unique combinations and aggregations that we haven’t even thought of yet.

Below, Mark Ahlstrom, president of the Energy Systems Integration Group, lays out how this concept could enable markets to support a broader suite of affordable, reliable and clean technologies that are already upon us, and support the grid’s continued modernization.


A version of this article appeared in Greentech Media on May 1, 2018

The power market fix we’ve been waiting for . . .

By Mark Ahlstrom

Wholesale electricity markets are opening up to new resources, stretching the limits of how power markets optimize and dispatch resources on the grid. Demand response, storage, renewables, and distributed generation don’t look like conventional generators, and so their properties and capabilities don’t map well onto the existing system, which was designed for fuel-fired generators.

New characteristics must be added to create a new “participation model” and market software must be modified for almost every new technology, requiring a complicated and slow process before the new technology can fully participate and contribute its valuable capabilities to the system.

This is a barrier to innovation – if we have to patch the existing framework every time a new resource comes along that doesn’t fit the classic fuel-fired generator mold, it can take years to develop and implement a new participation model for those resources. FERC’s axiom of non-discriminatory access encourages the entry of new resources, yet the system itself creates an uphill climb for innovators to enter the market.

A new FERC order, while ostensibly addressing energy storage resources, actually provides an important opportunity to create a universal data model for all grid resources. The traditional way of looking at generators and loads is no longer sufficient. We should take advantage of this opportunity.

Storage as a universal participation model

FERC Order 841 requires electricity markets to create an energy storage participation model that allows a resource to:

“Provide all capacity, energy, and ancillary services that the resource is technically capable of providing in the RTO/ISO markets, …be dispatched and set the wholesale market clearing price as both a wholesale seller and wholesale buyer, …and account for the physical and operational characteristics through bidding parameters or other means.”

If done correctly, this participation model should become the “universal participation model” we need for all resources – generators, loads, and even some transmission. Without it, our markets and systems will become increasingly convoluted and unable to deal effectively with the resources that we are already building today, and even more so with what we will come up with next.

Even today, solar-plus-storage plants, aggregated virtual power plants, or even gas-plus-storage plants face uncertainty about how to offer into the market in a way that makes all of their capabilities available to the market.  So what good is the next great idea if the market system can’t use it?

First, what is a participation model?  The RTOs/ISOs have general tariff provisions that apply to all market participants. In addition, the RTOs/ISOs create tariff provisions for specific types of resources when those resources have unique physical and operational characteristics or other attributes that warrant distinctive treatment. These distinct tariff provisions created for a particular type of resource are participation models.

To reduce barriers to innovation, we need to flip this on its head. FERC’s definition suggests that participation models are created as an exception to the general tariff case, and the general case is essentially assumed to be a conventional generation resource, such as a traditional thermal power plant. But in the case of electric storage resources such as battery storage systems, the resource is more capable than the general case, albeit with some differences.

For example, from the grid operator’s viewpoint, a conventional thermal generator is either online (and injecting some limited range of power) or it is offline, and the decision to get it online must be made well in advance. New resources like wind and solar can be started or ramped almost at will, and battery storage takes the next step of being both a generator and a load resource with tremendous flexibility.

It would be more logical to start with the most general and idealized conceptual resource as the general case, then turn off or adjust the parameters of this idealized model; otherwise we are constantly working to extend a hodgepodge of tacked-on exceptions. The resulting implementation delays are inherently discriminatory to new innovation, and RTOs are constantly operating in catch-up mode, with ongoing prodding from new entrants and from FERC.

In fact, if RTOs do their job as FERC is requesting, all other generation and load resources can be represented within the generalized storage participation model required by Order 841. The storage participation model, perhaps with slight “idealization” in its design, becomes the universal model that can be used, simply by changing appropriate parameters, for all other resources.

Getting to a universal data model

Several major descriptor categories are needed in this universal data model, allowing a resource to represent its various capacities, ranges, rates, limits, operating constraints, and operating interactions. For example, the capacity of a storage resource may be its nameplate value as either a generator or a load, and it may be able to vary real power continuously and rapidly through the full range, but it may be energy limited. A more traditional resource may have a limited operating range, perhaps with discontinuities across its range of possible power values, but it may be able to sustain a longer duration.

Today’s bidding parameters for conventional generators identify operating constraints such as ramp rates and minimum times for starting, running, restarting, and so forth. The universal model must thoughtfully generalize these operating interactions to describe the capabilities for providing reliability services and ancillary services, whether or not the resource must currently be providing or absorbing real energy in order to provide the services, and the speed of such services under normal and emergency circumstances. Some of this information is static, while other parts are dynamic based on current conditions. But as a whole, the data model and associated price curves would fully represent the resource’s offer to the market in a way that is truly comparable and objective across all resources.
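As a purely illustrative sketch, the descriptor categories above might translate into a record like the following.  The field names and example parameter values are hypothetical stand-ins, not FERC or RTO terminology:

```python
# A minimal sketch of what a "universal participation model" record might
# look like. Every field name here is a hypothetical illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UniversalResource:
    name: str
    max_injection_mw: float           # generator-side capacity
    max_withdrawal_mw: float          # load-side capacity (0 for pure generators)
    ramp_rate_mw_per_min: float
    min_run_time_hr: float = 0.0
    start_lead_time_hr: float = 0.0
    energy_limit_mwh: Optional[float] = None  # None = effectively unlimited duration

# A battery is close to the idealized case: symmetric and fast, but energy-limited.
battery = UniversalResource("battery", 100, 100, 100, energy_limit_mwh=400)

# A thermal plant is the same model with parameters "turned down":
# no withdrawal, slow ramps, long lead times, no energy limit.
coal = UniversalResource("coal", 600, 0, 5, min_run_time_hr=8, start_lead_time_hr=12)

# An HVDC terminal looks like a battery with effectively infinite duration.
hvdc = UniversalResource("hvdc_terminal", 500, 500, 500)
```

The point of the sketch is that every resource type fits one schema; what varies is only which parameters are binding.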

Designing the universal data model will not be easy. It’s complicated, and it will require our smartest and most experienced data architects to get it right. Each category contains dozens of descriptors and parameters, some of them quite complex – but it is paramount and possible.

Anyone with a software background knows why this is so important: Get the data model wrong and software has limited capabilities and it is an expensive nightmare to code and maintain. But get the data model right, and you have tremendous capabilities with flexible and elegant code. Data definitions literally change the way that we solve problems. The data model affects what you can do and how you think about it.

What do we get for adopting a universal data model?

A well-designed universal data model would improve our market systems and commitment/dispatch systems in the long run. If all resources are viewed as subsets of a flexible, idealized model rather than as tacked-on additions to a less flexible model, the next generations of our systems can be more elegant and more capable. It should also improve our ability to think about solutions to our power system needs, because it becomes easier to see how our available building blocks can fit together.

For example,

  • We will realize that we have hundreds of ways to respond to ramps and maintain system balance and reserves.
  • When we need faster frequency response, we can identify all resources that can provide it and understand any interactions.
  • For planning, because we can see both the current capabilities for all the resources and conceivably also know which of them have capabilities that can be adjusted, our planning process can be smarter about using what we have or building something new.

Finally, this universal model could work for more than loads and generators and storage; it could also work for high-voltage DC (HVDC) transmission. The terminal of an HVDC system is an energy source behind a DC-to-AC converter, just like a large battery storage system. HVDC could participate in the markets using parameters in the universal data model that are like those of a battery storage system, but one that is capable of providing a very long, and potentially infinite, sustained duration.

This creates interesting possibilities for business models and interconnections across market seams. For example, interregional trading could dramatically reduce the costs and risks associated with integrating large amounts of variable renewable energy, and innovative combinations of HVDC transmission with other technologies could participate through the universal model rather than as conventional transmission.

The time has come to rethink our design, so let’s start with the most capable and idealized participation model that we can imagine, and then represent all real-world resources as subsets of that idealized case. A flexible battery storage resource is arguably the closest example to that idealized resource that we have today. Other generators and loads can be represented by varying the parameters in this data model.

This model’s elegance could change the way that we think about our resources, and it would allow us to operate our markets in a technology-neutral, economic, and nondiscriminatory way. Done right, it would largely eliminate the need for new programming (or new FERC orders like Order 841) when emerging technologies or novel combinations of technologies emerge.

FERC is requiring the RTOs to develop a new model for electric storage resources, but we should take advantage of this chance to rethink how we look at all resources. This is a rare opportunity. Let’s use it.

++

Mark Ahlstrom is President of the Energy Systems Integration Group, a non-profit engineering, resources and education association that holistically serves the energy industry.

Time to refine how we talk about wholesale markets

A version of this article was published on Greentech Media on February 12, 2018

By Robbie Orvis & Mike O’Boyle

Let’s face it – conversations about electricity markets can be . . . opaque.  Jargon and acronyms abound, and anyone interested in market policy faces a steep learning curve to simply understand the current system, let alone identify what needs to change.  As a result, existing power plant owners not only shape the policy, they shape the conversation.

Three often-used market terms – price suppression, capacity payments, and price spikes – that contain hidden biases against good market design and clean energy have become accepted into the market vernacular.  It’s time to re-examine these terms, refine them, and reframe the conversation about designing markets for a clean, affordable, reliable electricity future.

Price suppression, better known as lower costs

Price suppression often refers to the impact of renewable energy on energy market prices; more specifically, a situation in which it’s claimed that competitive market prices “don’t retain and justly compensate the resources it needs and [don’t] attract new competitively-compensated resources.” But to understand the meaning and implications of “price suppression”, it’s important to understand how energy prices are set, what causes them to rise and fall, and what this means for relevant stakeholders.

Competitive energy markets are built on principles of supply and demand. Power plants “offer” their generation to market operators at a certain price – their production cost. Load-serving entities, often retail utilities, tell the market operator how much power they want to buy and the price range they’re willing to pay for it. The market operator, which is also responsible for operating the physical transmission system, then uses software to find the least cost set of resources to provide sufficient electricity.  The Federal Energy Regulatory Commission’s (FERC) energy markets primer is a great resource to learn more about these markets.

The final price paid by retail utilities is the price of the most expensive generator that was chosen by the software to meet demand.  This calculation varies by location due to physical limitations such as transmission capacity constraints and available generation.  So even if the market chooses to run a nuclear power plant with operating costs of $20 per megawatt-hour (MWh), if a more expensive natural gas plant must run at a marginal cost of $30/MWh to meet total demand, the market price of electricity paid to all selected generators is $30/MWh. To remain profitable, many power plants depend on a significant difference between their production costs and the price of energy (in economics terms, “rent”).

New resources can affect this price by changing which unit is marginal, or in other words, making a cheaper unit the marginal one. For example, if a new wind power plant with operating costs of $0/MWh comes online and eliminates the need for the gas plant in the example above, the marginal price drops to $20/MWh because the nuclear plant is the next most expensive plant needed. Because generators get paid the market price, all selected generators now get paid less.  Some pejoratively call that “suppressing” the price.

But in the above example, the fact that the new facility is a wind plant is irrelevant. Instead, what matters is that the new facility has lower operating costs than the marginal unit, causing prices for everyone at that location to drop.  What’s causing lower prices is that a new resource that can produce electricity more economically, such as a renewable power plant, is displacing a more expensive generator and dropping the price of the marginal cost resource.  A new high-efficiency gas plant might have exactly the same result.
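The merit-order mechanics described above can be sketched in a few lines.  The plant capacities and the demand level are invented for illustration; the marginal costs are the ones from the example:

```python
# Minimal merit-order sketch: dispatch cheapest offers first, and the last
# (marginal) unit needed to meet demand sets the clearing price for everyone.
# Plant sizes and demand are illustrative.

def clearing_price(offers, demand_mw):
    """offers: list of (marginal cost $/MWh, capacity MW) tuples."""
    price = 0.0
    remaining = demand_mw
    for cost, capacity in sorted(offers):
        if remaining <= 0:
            break
        price = cost                 # this unit is (so far) the marginal one
        remaining -= capacity
    return price

offers = [(20.0, 1000), (30.0, 500)]           # nuclear, gas
print(clearing_price(offers, 1200))            # gas is marginal -> 30.0

offers_with_wind = [(0.0, 300)] + offers       # new zero-cost wind enters
print(clearing_price(offers_with_wind, 1200))  # nuclear is now marginal -> 20.0
```

The second call shows the "price suppression" effect: the wind plant's entry changes nothing about the nuclear plant's cost, yet every cleared generator now earns $20/MWh instead of $30/MWh.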

Furthermore, falling prices aren’t even a bad thing! “Price suppression” is synonymous with “lower costs,” so when wholesale prices remain low, customers should pay less for electricity.

Falling prices may also be the much needed market signal for expensive, inefficient power plants to retire.  Today’s markets are oversupplied with flat or falling demand, and continue to see new resources enter the market, so falling prices are sending the appropriate market signal.

When unneeded and uneconomic power plants leave the market, cleaner and cheaper plants take their place and lower costs for customers – a fundamental component of electricity market rebalancing.

Capacity payment? More like capacity subsidy.

Four of the seven organized wholesale markets for electricity in the U.S. – PJM, ISO-NE, MISO, and NYISO – include market products for capacity. Capacity markets build on the basic energy market structure by also paying power plants for the amount of capacity they provide to the system, in addition to the amount of electricity they generate.  In general, capacity is defined as the amount of power a resource can provide during times of system stress, particularly when demand is unusually high.


Source: FERC

Generators submit offers to the grid operator for the capacity market just like in energy markets; but unlike the energy markets that purchase the energy needed for the coming hours and the next day, the capacity market pays for a promise to be available to provide energy in future years. The grid operator then selects the lowest-cost set of resources able to meet an administratively-determined level for capacity in the future.  Again, the highest-cost unit that is needed to meet that level of capacity sets the price.

Energy markets and capacity markets work in tandem. As energy prices drop, a generator that is unable to reduce costs will need additional revenue from the capacity market to stay solvent. Together, energy and capacity markets are intended to obtain the right amount of capacity to maintain reliability; no more, no less. For most grid operators, the goal is to ensure that there will be enough generation available to satisfy the peak demand for the year, plus a buffer of roughly 15 percent in case of an emergency.

Unfortunately, many capacity markets have been reworked to encourage far too much capacity to enter and stay in the market – much of it in response to existing power plant owners’ complaints of “price suppression” in energy markets.  Consider PJM, the nation’s largest wholesale market operator. In its latest auction, PJM’s capacity market cleared 165 gigawatts (GW) of capacity, resulting in a 23.9 percent reserve margin. In other words, PJM has 8.9 GW more capacity than is required, yet still pays – subsidizes – these generators to stay in the market.

Customers ultimately foot the bill for capacity payments, so the more capacity that clears an auction, the more customers pay. When the capacity market clears far above the amount of capacity truly needed, it is subsidizing unneeded capacity.

Capacity markets have evolved for multiple reasons, but most notably because other market inefficiencies and regulations artificially cap how much generators would earn in a completely competitive energy-only market.  For example, all U.S. energy markets have price caps (see the next section). Though prices rarely reach these cap levels, price caps prevent prices from rising to reflect the true value of electricity at that time.  Ultimately, that can lead to a less efficient marketplace and significant lost revenue for generators that provide electricity when it is most needed.  Capacity markets are essentially a way to make up for these market inefficiencies, but as discussed above, they have created yet another set of inefficiencies.

Price spikes are just market signals

Wholesale electricity prices sometimes rise to reflect system stress, as prices would in any market when demand increases relative to current supply.  As demand rises, increasingly expensive resources are needed in the market, raising the price paid to all generators who are able to run.  Spikes typically occur when demand is highest, but they also happen when supply is constrained – either because generators are unavailable due to maintenance or outages, or because transmission constraints force buyers to purchase energy from more expensive resources.

Many regulators try to avoid price spikes in energy markets, usually through a combination of capacity markets and price caps.  Price spikes are perceived as either a malfunction of markets (“how could energy possibly be worth $5,000/MWh or more?”) or an undesirable and preventable event that shocks utilities and their customers.  For example, the Los Angeles Times recently noted that market regulators’ mission was to “[protect] consumers from extreme fluctuations in energy prices when demand spikes.”

But what matters is not the highest instantaneous price but the total energy cost paid by consumers.  Utilities, as energy market buyers, already insulate customers from price spikes through long-term bilateral contracts, while customer-facing rates typically reflect the average cost of energy rather than the real-time price.  Price spikes (in the absence of market manipulation) should be viewed as market signals to invest in new resources, particularly more flexible and efficient ones.

FERC agrees price spikes can send the right signal.  Before 2016, most regional transmission organizations (RTOs) had price caps at or around $1,000/MWh, but in 2016, FERC ruled that these price caps were resulting in unjust and unreasonable rates.  FERC reasoned that when demand is straining the available supply or during extreme weather such as the Polar Vortex, some generators that were forced to operate to maintain reliability would lose money with a price cap of $1,000/MWh.  As a result, FERC raised the price cap to $2,000/MWh in all FERC-regulated markets.

The exception here is ERCOT, Texas’ single-state grid operator and wholesale energy market, whose price cap in 2016 was nine times higher than other markets at $9,000/MWh, and which is not regulated by FERC because it is not interconnected with the rest of the U.S. grid.  Even with the opportunity for higher price spikes, ERCOT’s pricing structure has had the effect of lowering overall costs.  For example, although the day-ahead price climbed above $500/MWh on more days in 2015 than in 2014, average wholesale prices were $13/MWh lower in 2015.  This suggests price spikes should not be viewed negatively in isolation, but rather require a look at the bigger picture.
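The reason spikes and falling average prices can coexist is simple arithmetic: a handful of extreme hours barely moves a time-weighted annual average when the vast majority of hours are cheap. The numbers below are purely hypothetical, chosen only to illustrate the scale.

```python
# Toy illustration with hypothetical numbers: a few $5,000/MWh scarcity
# hours add only a few dollars to the annual average price.

def average_price(hourly_prices):
    return sum(hourly_prices) / len(hourly_prices)

hours_in_year = 8760
spike_hours = 10                      # assume ten scarcity-priced hours
spike_price = 5000.0                  # $/MWh during those hours
normal_price = 25.0                   # $/MWh for all other hours

prices = [spike_price] * spike_hours + \
         [normal_price] * (hours_in_year - spike_hours)
avg = average_price(prices)           # roughly $31/MWh for the year
```

Ten hours at $5,000/MWh raise the annual average by under $6/MWh in this toy example, so cheaper prices in the other 8,750 hours can easily pull the average down even as spikes grow taller.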

APP’s Eric Gimon recently documented how a real-time Texas price spike to over $4,000/MWh rewarded flexible resources for keeping the system in balance, even when the system was not near peak demand.  Flexibility will become more valuable as the share of variable renewables increases, so Texas showed how allowing prices to spike can provide value to flexible resources while allowing the average cost of power to decline as more cost-effective renewables come online.

New terms for a new future

To support the transition to a clean, affordable, reliable grid, markets will have to evolve.  Wind and solar are now the cheapest resources in many of these markets, but market design and underlying terminology favors conventional fuel-based generation.  Before we can change market design to enable an affordable and reliable clean energy transition, we have to change the conversation.