The Cost of Outages Is a Policy Issue

Based on my work with Canadian and Australian utilities, the cost of outages is first a policy issue – not a regulatory one, not an operational one. Arguments based on the cost of outages may resonate with policy makers, including Smart City stakeholders, because of public pressure or the impact on the economy at large. However, these arguments do not resonate with regulatory agencies (which follow policies) nor with utilities (which do not carry customer outage costs in their financial statements). Individual users may or may not know their specific costs related to outages, but broad outage cost assessments will not affect them.

While utility customers are the ones bearing the cost of outages, multiple surveys have shown that customers are not willing to pay more for more reliable power. Even in individual cases where a utility proposed to split specific reliability improvement costs with industrial users, the customers declined, even though the associated payback period was much shorter than what they would require for other purchasing decisions. Essentially, customers are telling policy makers and regulators that they already pay enough and that reliability is simply expected. Public opinion, regardless of the actual costs incurred, is a powerful tool for disgruntled customers, who can vote policy makers in or out of office. Public opinion may therefore incite policy makers to act, requiring utilities to invest in reliability improvement.

That being said, customers incur real costs when an interruption occurs, but accurately capturing these costs is elusive – the ICE (Interruption Cost Estimate) calculator is the best developed attempt at estimating overall economic costs. Policy makers, as stewards of the economy, can be sensitive to the economic cost argument when reliability improvement costs are seen through the lens of industrial policy, which may lead to subsidies to improve reliability.

Regulatory agencies follow policies. Traditionally, the rates that utilities charge are based on the cost of generating, transmitting and distributing electricity. In return for their obligation to serve customers in an exclusive service territory, utilities are allowed a guaranteed rate of return on their capital expenditures. Reliability is attained tacitly through conservative engineering and maintenance practices. However, policy and regulatory changes over the last 20 years or so have put tremendous pressure on utilities to reduce their costs, and many have gone through, or are still going through, massive downsizing. As a direct consequence, reliability has suffered on some systems. When reliability incentives or penalties are used, reliability targets are typically based on historical values, not on the economic costs of outages.

Utilities would like to invest more to improve reliability. These investments would add to the asset base upon which investors get a guaranteed return. However, regulators may not let utilities spend on reliability improvement because of the impact on rates, unless policy requires them to.

Since outage costs may resonate with policy makers, they make a worthwhile argument for Smart City initiatives. Cities cannot function without electricity. It moves subways and trains. It cools, heats and lights our homes and businesses. It pumps our water and keeps fresh the food we eat. And it powers the technologies that are the foundation of a Smart City. By implementing smart grid technologies such as microgrids and distribution automation, electric utilities play a key role in making cities both resilient and sustainable. Yet, many electric utilities do not partner with mayors to work on cities’ resiliency and sustainability challenges. Policy makers could then use outage cost arguments when working with their utilities on reliability improvement initiatives.

 

GTM Squared Report

I just finished reading the annual survey of utilities prepared by GTM Squared (http://www.greentechmedia.com/squared/read/annual-survey-report-2016-the-future-of-global-electricity-systems). I found it a useful reference for understanding the challenges faced by utilities worldwide, and I thought I would share some interesting highlights:

  • 3/4 of utilities say that regulatory hurdles are the greatest challenge they face today. The preference is for market-based reforms, as well as clear interconnection/net metering rules – in other words, mechanisms that deal with and assign value to Distributed Energy Resources (DER). Note that DER (such as distributed generators and storage) will play an increasing role for utilities worldwide.
  • Half of respondents see consumers at the forefront of the industry’s evolution. However, it is surprising that utilities in the same survey do not put a greater priority on customer engagement.
  • On storage, respondents see an increasing emphasis on actual projects, and less on the physics and technology of storage. DER vendors now offer better systems intelligence and grid integration to companies focused on building a next-generation power grid (more sustainable and more resilient). Energy storage is now living up to the hype, having seen record installations in 2015.

Utility-Scale Solar Report

I finally got around to reading the US Department of Energy report on utility-scale solar energy (https://emp.lbl.gov/sites/all/files/lbnl-1000917.pdf) published a couple of months ago. Here are my highlights:

  • The installation trend is compelling. Installed capacity is now 30,000 MW – about 30 times more than 5 years ago.
  • Installation costs are falling – by more than 50% since the 2007-2009 period, the lowest-priced projects being around $2/W (AC).
  • The capacity factor has now improved to 27.5% (a short calculation sketch follows this list). The main drivers of variation are, in order of importance: the strength of the solar resource at the project site; whether the array is mounted at a fixed tilt or on a tracking mechanism; the inverter loading ratio; and the type of PV modules used.
  • Power purchase agreement (PPA) prices have fallen. Utility-scale solar PPAs are now as low as $40/MWh. At these low levels – which appear to be robust, given the strong response to recent utility solicitations – PV compares favorably to just the fuel costs (i.e., ignoring fixed capital costs) of natural gas-fired generation, and can therefore potentially serve as a “fuel saver” alongside existing gas-fired generation (and can also provide a hedge against possible future increases in fuel prices).
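As a reference for the capacity factor figure above, here is a minimal sketch of how the metric is computed. The plant rating and annual energy are hypothetical numbers chosen only to land near the 27.5% fleet-wide figure cited in the report.

```python
# Minimal sketch: computing a solar plant's AC capacity factor.
# The nameplate rating and annual energy below are hypothetical.

HOURS_PER_YEAR = 8760

def capacity_factor(annual_energy_mwh: float, nameplate_mw_ac: float) -> float:
    """Capacity factor = actual annual energy / energy the plant would
    produce running at its AC nameplate rating every hour of the year."""
    return annual_energy_mwh / (nameplate_mw_ac * HOURS_PER_YEAR)

if __name__ == "__main__":
    nameplate_mw_ac = 100.0          # hypothetical 100 MW (AC) plant
    annual_energy_mwh = 241_000.0    # hypothetical metered output for one year

    cf = capacity_factor(annual_energy_mwh, nameplate_mw_ac)
    print(f"Capacity factor: {cf:.1%}")   # ~27.5%
```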

Evolution of Energy Generation and Distribution in Canada’s Smart Power Grid – Innovation 360 Conference Panel

On September 29, I was asked to participate on a panel titled “Evolution of Energy Generation and Distribution in Canada’s Smart Power Grid” at the Innovation 360 conference in Gatineau, Québec (http://innovation360.ca). Here is the essence of what I contributed.

By definition, in an electricity network, electricity generation must equal energy consumption plus losses. This must hold at every point in time, or protection systems will shed load or trip generators.

There are 4 ways to balance load and generation (a toy sketch follows the list):

1) Traditionally, dispatchable generators that can easily ramp up or down were tasked with following the load. Big hydro plants and natural gas generators are particularly good at this. However, we are running out of big hydro opportunities, and natural gas plants are a source of greenhouse gas emissions, contributing to global warming.

2) Another way to balance load and generation is to interconnect with neighboring networks that may not have the same load profile. Today, all of North America is interconnected in some way. However, building transmission lines is a lengthy process that typically faces major local opposition. As a result, most transmission lines run at capacity during peaks, weakening the bulk transmission system, as the Northeast blackout of 2003 demonstrated.

3) In the last couple of decades, we have started to control load, such as turning off air conditioning units by pager or getting large industrial customers like smelters to go offline for a couple of hours during a major peak. Time-of-use or market pricing are also attempts to have loads better follow available generation capacity. However, much of the conservation focus thus far has been on energy efficiency, not peak load reduction.

4) Very recently, energy storage has been getting attention. Traditionally, the only utility-scale storage technology available was pumped storage, like the Sir Adam Beck plant in Niagara, but few such plants are possible, and they are not very efficient. Going forward, batteries, whether utility-scale or distributed, will grow, although for now utility-scale batteries are MW-class, when hundreds of MW or even GW are needed.
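To tie the four levers together, here is a toy sketch that stacks them against a generation shortfall. The capacities, the fixed ordering and the whole merit-order idea are invented for illustration only and bear no resemblance to an actual dispatch system.

```python
# Toy illustration of the four balancing levers described above.
# Capacities and the order in which levers are used are invented;
# real system operators use market-based dispatch.

LEVERS = [
    ("dispatchable generation ramp", 500.0),  # MW of hydro/gas ramping room
    ("imports from neighbors",       300.0),  # MW of free tie-line capacity
    ("demand response",              150.0),  # MW of controllable load
    ("storage discharge",             50.0),  # MW of battery capability
]

def balance(imbalance_mw: float):
    """Allocate a generation shortfall (positive imbalance) across the
    levers in order; return the allocation and any unserved remainder."""
    allocation, remaining = [], imbalance_mw
    for name, capacity_mw in LEVERS:
        used = min(remaining, capacity_mw)
        allocation.append((name, used))
        remaining -= used
        if remaining <= 0:
            break
    return allocation, max(remaining, 0.0)

if __name__ == "__main__":
    shortfall = 700.0  # MW by which load exceeds scheduled generation
    allocation, unserved = balance(shortfall)
    for name, mw in allocation:
        print(f"{name:32s} {mw:6.1f} MW")
    print(f"{'unserved (load shedding)':32s} {unserved:6.1f} MW")
```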

Balancing load and generation is also becoming more and more difficult. On one hand, consumption is getting peakier, partly due to side effects of some energy-saving programs, like turning down thermostats at night in the winter and turning them back up in the early morning, just in time for the morning peak. On the other hand, wind and solar generators are replacing fossil generators, adding unpredictability to generation and taking away controllability, thus requiring even more balancing resources.

Integrating renewables into the grid does not only cause balancing problems. It also creates voltage management and protection problems. Those are solvable, but significant, engineering problems that require expensive upgrades to the electricity grid.

Load and generation balancing, voltage management and grid protection all add costs that are ultimately borne by subscribers. It therefore quickly becomes a political issue.

As a society, we have been subsidizing fossil fuels. Clearly, going forward, we will need to invest heavily in the grid if we want to limit the consequences of global warming for our children and grandchildren.

Using Analytics to Assess Station Thermal Risks Caused by Reverse Power Flow

With sufficient Distributed Generation (DG – embedded generation in Europe) installed on the feeders of a substation, reverse power flow may occur at the station when load is low. This is especially the case when large generators (such as wind farms) are connected on express feeders dedicated to their use.

Substations have been designed, rated and operated as step-down substations, with power flowing from the higher system voltage to the lower system voltage. Some substations also have dual-secondary-winding transformers that do not allow for reverse power flow conditions, as unequal reverse flow in the two secondary windings would cause overheating and potential failure of the transformer.

Utilities limit DG capacity downstream of a station to avoid excessive reverse flow and to prevent overheating of substation transformers. For example, Hydro One requires that generation not exceed 60% of the maximum MVA rating of a single transformer plus the minimum station load.
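As a worked illustration of that type of limit, the sketch below applies a 60%-of-transformer-rating-plus-minimum-load formula to hypothetical station figures. The numbers are made up, and the formula is simply my reading of the requirement quoted above.

```python
# Illustrative reading of a reverse-flow DG limit of the form
# "generation shall not exceed 60% of the maximum MVA rating of a
# single transformer plus minimum station load".
# Station figures below are hypothetical.

def dg_limit_mva(transformer_rating_mva: float,
                 min_station_load_mva: float,
                 fraction: float = 0.60) -> float:
    """Maximum aggregate DG allowed at the station under this rule."""
    return fraction * transformer_rating_mva + min_station_load_mva

if __name__ == "__main__":
    transformer_rating_mva = 50.0   # hypothetical single-transformer rating
    min_station_load_mva = 8.0      # hypothetical minimum station load

    limit = dg_limit_mva(transformer_rating_mva, min_station_load_mva)
    print(f"Allowable DG at the station: {limit:.1f} MVA")  # 38.0 MVA
```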

The (worst-case) engineering assumption is that maximum generation coincides with lowest load at a station. Is that the case? Some years ago, I ran a Monte Carlo simulation of load and wind generation, based on theoretical distributions of both, but doubling the generation normally allowed. It found that generation would be excessive… less than 2% of the time (and not by much, and at a time when load is low and so are prices). Using actual smart meter data, it is now possible to know what is actually going on and better assess the risks. For solar generation in hot climates, load and generation are positively correlated, since air conditioning load peaks with the sun – in other words, maximum generation does not happen at times of minimum load.
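For readers curious what such a simulation could look like, here is a minimal Monte Carlo sketch in the same spirit. The load and wind distributions and the limits are invented placeholders, not the ones I used at the time.

```python
# Minimal Monte Carlo sketch: how often does wind generation exceed
# station load enough to create excessive reverse flow? The load and
# wind distributions below are invented placeholders for illustration.

import random

random.seed(42)

N = 100_000                      # simulated hours
STATION_PEAK_MW = 40.0
WIND_CAPACITY_MW = 30.0          # deliberately generous DG capacity
REVERSE_FLOW_LIMIT_MW = 10.0     # hypothetical allowable reverse flow

exceedances = 0
for _ in range(N):
    # Load: roughly 30-100% of peak (triangular, mode near 60%).
    load = STATION_PEAK_MW * random.triangular(0.3, 1.0, 0.6)
    # Wind: crude behaviour - often low, occasionally near rated output.
    wind = WIND_CAPACITY_MW * (random.random() ** 3)
    if wind - load > REVERSE_FLOW_LIMIT_MW:
        exceedances += 1

print(f"Hours with excessive reverse flow: {100 * exceedances / N:.2f}%")
```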

Even better: correlating with forecast weather data could assess whether reverse flow might become excessive in a few hours, and require large DGs to go offline ahead of a problem (and this would not happen frequently). While I have not seen such an application, it is clearly in the realm of possibilities.

Deep analytics, used as a planning tool or in support of operations, enables safe integration of more distributed generation by managing the thermal limits of station transformers operating in reverse flow.

Tutorial: Key Players in the Energy Markets: Rivalry in the Middle

The players described in the previous post have vastly different characteristics. The most striking difference is the level of rivalry.

[Figure: relative level of rivalry among market players]

Distributors operate in a defined territory, often corresponding to a city, a state or a province, where they are the sole provider – thankfully, as there would otherwise be multiple lines of poles along roads. Given this monopoly, distributors are subject to price regulation, meaning that the price they charge for the use of their infrastructure (poles, conductors, cables, transformers, switches, etc.) is set, typically equal to their costs plus an allowed return on their investment. This is done by filing tariffs that are approved by the regulatory body following a rate hearing.
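To illustrate what “costs plus an allowed return” can look like in practice, here is a simplified sketch of a regulated revenue requirement. The figures and cost categories are hypothetical, and real rate cases include many details (taxes, working capital, deferral accounts) ignored here.

```python
# Simplified sketch of a regulated distributor's revenue requirement:
# operating costs, plus depreciation, plus an allowed return on the
# rate base (net value of poles, conductors, transformers, etc.).
# All figures are hypothetical.

def revenue_requirement(opex_m, depreciation_m, rate_base_m, allowed_return):
    """Annual revenue the regulator lets the distributor collect ($M)."""
    return opex_m + depreciation_m + allowed_return * rate_base_m

if __name__ == "__main__":
    opex_m = 120.0          # hypothetical annual operating costs ($M)
    depreciation_m = 45.0   # hypothetical annual depreciation ($M)
    rate_base_m = 900.0     # hypothetical net asset base ($M)
    allowed_return = 0.08   # hypothetical allowed rate of return

    rr = revenue_requirement(opex_m, depreciation_m, rate_base_m, allowed_return)
    print(f"Revenue requirement: ${rr:.0f}M per year")  # $237M
```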

Retail is often a competitive industry, as there is no structural barrier to having multiple players. However, some distributors are also given the retail monopoly over their territory. Others may provide retail services in competition with other retailers. In those cases, the distributor-owned retailer is also regulated and has to seek approval of its rates, while other retailers typically do not, although they may have to file their rate plans.

It is possible to have multiple transmission companies operating in the same territory, each owning one or a few transmission lines. However, because those transmission lines are not perfect substitutes (they do not necessarily have the same end points in the network) and because transmission capacity is scarce, electricity transmitters typically have regulated rates, although they may compete for new construction.

System operators are also monopolies over their territory, and they have to maintain their independence; they are often government- or industry-owned. Their costs are recharged to the customer base, directly or indirectly.

Large generators are in a competitive business, competing in an open market, although distributed generators, which are much smaller, usually benefit from rates set by a regulator or a government.

Tutorial: Key Players in the Energy Markets

I will be speaking at an investor conference later this year, and I will also be training some people internally at my employer. The topics will touch on the structure of the electricity industry, and I am preparing some material for it.

The industry can be quite complex in some jurisdictions. I boiled the complexity down to just this:

[Diagram: key players in the electricity market]

Traditional large-scale generators own and maintain coal, natural gas, nuclear, hydro, wind and solar plants connected to transmission lines. Those are large plants – typically hundreds of megawatts.

Transmitters own and maintain transmission lines – the large steel towers seen going from large generators to cities. Those typically run at 120,000 volts and more, up to over 1,000,000 volts in some cases.

Distributors own and maintain the local infrastructure of poles and conduits going to customer sites. Those typically run at 1,200 to 70,000 volts, usually stepped down to 600, 480, 240 or 120 volts for connection to customers.

Most customers are connected to distributors, although some large industrial facilities (such as aluminum smelters) are directly connected to transmission lines.

While customers are connected to distributors, they purchase electricity either from an independent retailer or from the retail arm of a distributor.

With customers installing distributed generation on their premises, they can also sell power back to the market, often through aggregators.

Retailers buy electricity from generators in an energy market – like a stock exchange, but for electricity.

By definition, the energy produced at any instant must be equal to the energy taken by customers, plus a small percentage of losses in transmission and distribution. (We are starting to see large-scale storage operators, which may act as either consumer or generator, depending on whether they are charging or releasing electricity into the network.) This critical balance is maintained by the system operator, which directs generators to produce more or less to match the load; in some cases, the system operator will also direct distributors to shed load (customers) if generation or transmission is insufficient to meet demand.

The next post will deal with energy and money flows in the market.

Covered Conductors Vs. Single-Phase Reclosers

A utility client told me that they were trying out covered conductors on a feeder in a forested area. This was the first time this large utility had tried covered conductors. The objective is to reduce the impact of tree contacts and falling branches that blow fuses and therefore result in permanent outages for customers. In this context, the great length of the feeders and the high system voltage (25 kV) make coordinating reclosers and fuses difficult.

Covered conductors have a thin insulating covering – not rated for the full phase voltage, but sufficient to reduce the risk of flashovers and fires when a tree branch falls between phases, when a tree makes momentary contact with a conductor, or when an animal jumps onto it. Covered conductors also allow utilities to use tighter spacing between conductors.

While covered conductors help with tree contacts, they also have a number of operational disadvantages:

  • High impedance faults with a downed conductor are more likely, leading to public safety issues, especially since the conductor may not show arcing and may not look as if it is energized.
  • Covered conductors are more susceptible to burndowns caused by fault arcing. The covering prevents the arc from motoring along the wire under magnetic forces, concentrating heat damage. Repair time and cost increase significantly.
  • Covered wires have a larger diameter and are heavier, increasing loading, especially with freezing ice and high wind, which increases the likelihood of mechanical damage (including broken poles and cross arms), again leading to high repair times and costs.
  • Covered conductors have somewhat lower ampacity at high temperature (worsened by the black covering that absorbs more heat from the sun), with more limited short-circuit capability. High temperature also degrades the insulation. This results in more design and planning constraints that may increase construction costs.
  • Water can accumulate between the insulation and the wire at the low point of a span, causing premature corrosion that weakens the conductor and can lead to failure.
  • Covered conductors must be installed differently than bare ones. For instance, using conducting insulator ties can lead to partial discharges and radio interference.
  • Finally, cost is an obvious issue – replacing conductors on existing lines is extremely expensive, possibly as much as $100k per km.

These issues got me thinking about how I could provide a better alternative. Replacing fuses with single-phase reclosers appears to be an interesting (if unlikely) alternative to covered conductors. Cutout-mounted single-phase reclosers can easily be installed in existing cutouts to protect lateral circuits. Those circuits are then protected against tree contacts without the disadvantages of covered conductors. Coordination with upstream mainline reclosers is eased by making the single-phase recloser faster than the mainline recloser (see the coordination sketch below). Cost is clearly lower than re-conductoring.
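To illustrate the coordination idea (not any specific product’s curves), here is a rough sketch using the standard IEEE inverse-time overcurrent equation with hypothetical pickup and time-dial settings. In practice, coordination is done against manufacturer time-current curves and includes reclosing sequences, which this sketch ignores.

```python
# Rough time-current coordination sketch: a lateral (cutout-mounted)
# recloser should clear a downstream fault before the upstream mainline
# recloser responds. Uses the IEEE inverse-time form
#   t = TDS * (A / (M**p - 1) + B),  where M = fault current / pickup.
# All pickup and time-dial settings here are hypothetical.

def trip_time(fault_a, pickup_a, tds, A, B, p):
    m = fault_a / pickup_a
    return tds * (A / (m ** p - 1.0) + B)

# Hypothetical settings using very-inverse constants (A=19.61, B=0.491, p=2).
LATERAL  = dict(pickup_a=100.0, tds=0.5, A=19.61, B=0.491, p=2.0)
MAINLINE = dict(pickup_a=400.0, tds=1.5, A=19.61, B=0.491, p=2.0)

if __name__ == "__main__":
    for fault_a in (800.0, 1500.0, 3000.0):
        t_lat = trip_time(fault_a, **LATERAL)
        t_main = trip_time(fault_a, **MAINLINE)
        margin = t_main - t_lat
        status = "coordinated" if margin > 0.2 else "check margin"
        print(f"{fault_a:6.0f} A: lateral {t_lat:5.2f} s, "
              f"mainline {t_main:5.2f} s, margin {margin:5.2f} s -> {status}")
```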

Full disclosure: I am employed by S&C, and S&C makes a cutout-mounted recloser.

Pseudo-Realtime Voltage Regulation to Increase DG Penetration

Closed-loop voltage control in distribution networks has traditionally relied on Potential Transformers (PTs) on feeders communicating with a control algorithm that sends settings to voltage regulators and capacitor banks. More recently, Faraday devices have been used instead of PTs, being less expensive to purchase and install.

What about smart meters with voltage measurement capability? Some smart meters measure voltage at the service point, which accounts for the voltage drop in secondary feeders and transformers. There are also far more meters than PTs or Faraday sensors, providing greater coverage. But there is a problem: smart meter networks have long internal latency – it may take minutes for voltage readings to get back to a control center. This renders smart meters unusable in a traditional real-time control loop.

However, analytics could make use of delayed smart meter data, combining it with other data such as weather and historical records, to provide pseudo real-time feedback.
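As a thought experiment, the sketch below blends delayed meter voltages with a simple weather signal into a pseudo real-time estimate. The model, coefficients and readings are invented to illustrate the idea; this is not an actual control algorithm.

```python
# Thought-experiment sketch: estimate present feeder voltage from smart
# meter readings that arrive with several minutes of latency, nudged by
# a weather (solar irradiance) signal. Model and coefficients are
# invented for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class MeterReading:
    age_min: float     # how old the reading is when it arrives (minutes)
    voltage_pu: float  # voltage at the service point, per unit

def estimate_voltage(readings: List[MeterReading],
                     irradiance_change_pu: float,
                     dv_per_irradiance_pu: float = 0.02) -> float:
    """Weight recent readings more heavily, then correct for the change
    in irradiance since those readings were taken (hypothetical
    sensitivity of 0.02 pu voltage per 1.0 pu irradiance change)."""
    weights = [1.0 / (1.0 + r.age_min) for r in readings]
    baseline = sum(w * r.voltage_pu for w, r in zip(weights, readings)) / sum(weights)
    return baseline + dv_per_irradiance_pu * irradiance_change_pu

if __name__ == "__main__":
    delayed = [MeterReading(age_min=4.0, voltage_pu=1.031),
               MeterReading(age_min=7.0, voltage_pu=1.028),
               MeterReading(age_min=9.0, voltage_pu=1.026)]
    # Cloud cover has cleared since the readings were taken: +0.4 pu irradiance.
    v_now = estimate_voltage(delayed, irradiance_change_pu=0.4)
    print(f"Pseudo real-time voltage estimate: {v_now:.3f} pu")
```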

This could prove particularly effective with high levels of penetration of Distributed Generation (DG) that is affected by weather, such as solar and wind. Where a traditional voltage control system relying on real-time feedback could be overwhelmed or misled by the variability of renewable generation, a control system relying on deep analytics of smart meter and weather data could be more effective at maintaining distribution grid stability.

Using Analytics to Assess Islanding Risks of Distributed Generators

One of the most critical situations with Distributed Generators (DG – embedded generators in Europe) is that an interrupter on a distribution feeder may trip to isolate a circuit section while the DGs continue supplying the load on that section, creating an “island”. When load closely matches generation in the island, it may be sustained for some time, posing safety hazards – this is known to have caused fatalities.

Distributed generators have various passive or active anti-islanding mechanisms that open a breaker at the point of connection when an islanding condition is detected. However, the islanding detection techniques used in small DGs (such as residential photovoltaic generators) are far from perfect – without expensive circuitry, they may not immediately detect an island when generation and load are closely matched. Therefore, some utilities require that the load on any feeder section (i.e., between interrupters) always be greater than the generation, ensuring that an island cannot sustain itself. This means that the total distributed generation capacity on a feeder section must be significantly less than the minimum aggregated load on that section. The problem is compounded by the fact that the engineers assessing DG connection requests usually do not know the actual load and generation per line section – estimates need to be made.

In the end, the allowable distributed generation on a line section can be a pretty small number – in Ontario, Hydro One requires that total generation not exceed 7% of the annual line section peak load – meaning that few customers are allowed to have generators.
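As a quick worked example of how small that number can be (the section peak and the per-customer system size below are hypothetical):

```python
# Quick worked example of a 7%-of-section-peak DG limit.
# The section peak load and the per-rooftop system size are hypothetical.

SECTION_PEAK_KW = 2000.0          # hypothetical line section peak load
LIMIT_FRACTION = 0.07             # the 7% rule cited above

allowed_dg_kw = LIMIT_FRACTION * SECTION_PEAK_KW
print(f"Allowable DG on this section: {allowed_dg_kw:.0f} kW")  # 140 kW
# At an assumed ~10 kW per residential rooftop system, that is roughly
# 14 customers on the whole section.
```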

Applying analytics to smart meter data can better assess how much distributed generation can safely be connected to a line section. For instance, minimum load may never be coincident with maximum generation – e.g., in hot climates, minimum load occurs at night, when there is no solar generation. Analytics can look into past load and generation records to determine how much generation can be connected without getting into a potential islanding condition (a minimal sketch follows). Safe generation levels may be many times higher than what the previous conservative, worst-case-that-never-happens engineering guidelines allowed.
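To make this concrete, here is a minimal sketch of the kind of check I have in mind: scan historical interval data for a line section and find the worst coincident generation-to-load ratio. The interval data and the margin threshold are hypothetical, and a real assessment would use years of data and account for future DG additions.

```python
# Minimal sketch: use historical interval data for a line section to
# check how close aggregate DG output ever comes to aggregate load.
# The interval data and the safety margin below are hypothetical.

def worst_generation_to_load_ratio(load_kw, generation_kw):
    """Highest coincident generation/load ratio over the history.
    Values near or above 1.0 flag intervals where an island could
    sustain itself."""
    return max(g / l for l, g in zip(load_kw, generation_kw) if l > 0)

if __name__ == "__main__":
    # Hypothetical hourly aggregates for one feeder section (kW).
    load_kw       = [850, 620, 540, 610, 900, 1200, 1400, 1100]
    generation_kw = [  0,   0, 150, 380, 520,  460,  300,   40]

    ratio = worst_generation_to_load_ratio(load_kw, generation_kw)
    print(f"Worst coincident generation/load ratio: {ratio:.2f}")
    if ratio < 0.8:   # hypothetical safety margin
        print("History suggests headroom to connect more DG on this section.")
    else:
        print("Little headroom: generation already approaches section load.")
```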