Strategic Electricity Inter-ties Committee of the House of Commons

On October 25, I appeared before the Standing Committee on Natural Resources to discuss strategic electricity inter-ties. The Standing Committee on Natural Resources studies bills, government activities and expenditures, and issues related to Canada’s energy, forest, minerals and metals, and earth sciences sectors.

The idea behind strategic electricity inter-ties is to improve power exchanges between provinces by increasing tie capacity with new transmission lines. The brief that I wrote and my testimony argued that energy storage may be a better alternative in light of the long time frame needed to build new transmission lines (15-20 years is typical), the current state of the art in storage, and the expected growth in performance and decline in cost of the technology.

It was my first experience appearing before such a committee, and I liked it. I was impressed by the questions that the members of Parliament asked. They also seemed to like my arguments, as many came to me afterward to thank me.

Here is the brief that was presented: S&C Brief Standing Committee RNNR 20171024.

Telecom as a Model, not a Service, to Electric Utilities

On September 27, 2017, I presented at the Utilities Technology Council of Canada. I have attached the presentation, and here is the abstract.

Abstract: The telecom industry has seen tremendous changes, replacing in just a few short years the Plain Old Telephone System that took over a century to build with the Internet and cellular networks. Since telecom and electric utilities have a lot in common, like linear assets, large customer base and territory, and technology-driven culture, what can we learn from the transformation of telecom to better manage the ongoing technological changes in electric utilities?

A pillar of the Canadian economy is undergoing a profound transformation

Now is a time of innovation in the electric industry, like no other since Thomas Edison.

Now is the time when wealth can be created as we use our resources and our brains to ensure a resilient and sustainable energy future for all.

Potential wealth creation stems from the fundamental changes occurring in the electricity sector:

  • Globally, electricity and heat production are the largest contributors to greenhouse gas (GHG) emissions. Canada is blessed with abundant carbon-free hydroelectric generation, but our energy sector as a whole is a major emitter of climate-changing GHG.
  • In response, major investments have been made across the world in designing and implementing renewable sources, such as wind and solar, and energy storage. The price of those sources is decreasing at double-digit rates per year, and they are becoming increasingly competitive with traditional sources.
  • Wind and solar generation are not only becoming cost effective, but doing so at a much smaller scale than traditional generation. Distributed generation is being installed deep in the electrical grid, at its edges or even behind the meter. The traditional, centralized grid designed by Edison is being transformed into a digital grid of microgrids integrated with local energy resources.
  • The new, distributed and digitally enabled electrical grid is more resilient because it relies on multiple, alternate energy sources and paths. This makes it better able to withstand the extreme weather events that, unfortunately, are becoming more frequent with climate change.
  • Residential and industrial customers benefit from improved reliability as they become increasingly dependent on electricity to power modern life, in smart communities and with the advent of electric transportation.

Innovation and wealth creation opportunities are everywhere in this context. Technical innovation is what drives the decreasing costs of renewable sources for energy users. Vendors need to invent new commercial solutions to balance the new distributed grid and ensure that customers stay powered up. Increasing energy efficiency means that we can do more with less. Utilities and entrepreneurs adopt new business models to better serve customer segments. In particular, utilities, previously defined by their geographic territories, are morphing into energy service providers, often competing with offerings from new entrants, or even competing with each other like never before, driving cost down for Canadian consumers and businesses. The digitalization of the electrical grid creates large quantities of data that new software applications can leverage to increase efficiency and create commercial opportunities. Canadian customers, now with the power of choice, can no longer be taken for granted and demand more.

What is even more dramatic is that the changes affecting the electric industry are shaking a pillar of the Canadian economy. The electric industry touches every home and business in Canada and reliable power is an essential ingredient for the competitiveness of our economy. Electric power generation, transmission and distribution utilities contribute almost $30 billion to the Canadian economy, with electrical equipment manufacturers contributing another $4 billion. This industry employs over 100,000 Canadians, but the Conference Board has estimated that 156,000 workers will be needed to carry out the renewal of Canada’s electricity infrastructure. Canada’s net exports of electricity and electrical products amount to billions of dollars every year. The Canadian electricity system is in need of massive infrastructure renewal. The Conference Board of Canada estimates that by 2030, close to $350 billion in new investment will be required just to maintain existing electricity capacity, with most of Canada’s non-hydro assets needing renewal or replacement by 2050. The importance of the electric industry scales up the potential of wealth creation, but also underlines the perils that we are facing: should the Canadian electric industry fail to renew itself for the challenges of the 21st century, the entire economy of Canada would suffer, with foreign service providers taking control and energy exports dwindling.

In conclusion, accelerating the transformation of the Canadian electric industry is essential. In an industry traditionally defined by centralized generation and rigid geographic boundaries between utilities, new linkages need to occur: utilities and customers, vendors and entrepreneurs, cities and businesses, ensuring that all see the opportunities that didn’t exist before and have the support they need to get their ideas to market quickly. The transformation of the electric industry will ensure that Canadians benefit from the billions of dollars to be invested in the electricity system. The structure of the industry will emerge transformed, with Canadian-owned service providers offering novel energy solutions, backed by a web of hardware, software, and professional service vendors. This will increase the opportunities for Canadians to export their energy, their expertise, and the fruit of their labor.

The Sun for a Penny

I recently presented at the Canadian Electricity Association (CEA) on the future of the industry. What would happen to the power industry if the cost to generate solar electricity reached 1¢/kWh? What could be the impact of a carbon tax? What are the business opportunities arising from the need for reliable power? While electric utilities have seen tremendous transitions during the 125-year history of the CEA, the current rate of development is unprecedented. To paraphrase a famous quote by Wayne Gretzky, utilities need to “skate to where the puck is going to be, not where it has been.” This presentation tried to provide power utilities with some insights into the future direction of the puck! See the presentation here: The Sun for a Penny 20170225a

Impact of Regulatory Regimes on Executive Behavior

Few outside the executive suite of utilities really appreciate how the regulatory regime affects executive behavior. As understanding behavior is key to selling, I am sharing my thoughts below, applicable mainly to North American utilities.

Problem Statement for Executives of Investor-Owned Utilities 

Given their monopoly over a defined territory, North American Investor-Owned Utilities (IOU) are subjected to price regulation by the state or the province, meaning that a regulator (such as a public service commission, a public utility commission or an energy board) sets the price they charge for the use of their infrastructure (poles, conductors, cables, transformers, switches, etc.).

Most North American IOUs are under rate-of-return regulation, or a variation of it. With rate-of-return regulation, the regulator sets prices so that utilities are compensated for their costs (operating costs, depreciation on assets, and taxes) and allowed a fair return on their investment. This is done by filing tariffs that are approved by the regulator following a rate hearing.

Utility executives are paid to maximize shareholder returns. Since utility shareholders are rewarded by a fair rate of return on a base of assets, executives create shareholder value by justifying more assets to the regulator while lowering the risk profile that shareholders perceive in future earnings. However, the regulator only allows new asset expenditures if they are prudent and if society benefits. A capital expenditure is prudent if the costs are reasonable at the time they are incurred, given the circumstances and what is known or knowable at that time. Society benefits if the expenditures minimize the required revenue paid by ratepayers, have a positive impact on the economy (such as improved reliability), improve customer service (such as fewer complaints), reduce societal risks (such as those caused by major weather events or those linked to information security), or achieve government policies and meet regulations (such as renewable generation targets). By consistently addressing regulatory concerns, utility executives ensure that the utility will be compensated through rates, with predictable earnings and a minimal risk profile as perceived by investors. Conversely, when a utility fails to show that it is making prudent decisions or that society benefits, the regulator may disallow investments from the rate base. In such a case, shareholders bear the shortfall through reduced earnings and share value.

For utility executives, the fundamental objective is to select investment projects that minimize required revenue (a regulatory term defined as operating expenses + depreciation + taxes + return on assets) while being prudent and maximizing societal benefits (to ensure approval). These projects increase the regulated base of assets while minimizing the shareholder risk profiles. This is why utility executives are generally willing to trade lower operating expenses (which is the only other controllable element in the definition of required revenue) for higher capital expenditures. It is also why they are seeking ways to lower operating expenses through subcontracting or outsourcing, as it frees revenue to justify additional capital expenditures. This is often expressed as a rule of thumb, such as “we are OK with $10 of capital to save $1 of operating expenses”, although regulatory approval is always required.
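The required-revenue arithmetic above can be sketched in a few lines. All figures, the 10% allowed return, and the 40-year depreciation schedule are illustrative assumptions, not any particular regulator's numbers:

```python
# Sketch of required revenue = opex + depreciation + taxes + return on assets.
# Every number below is hypothetical, for illustration only.

def required_revenue(opex, depreciation, taxes, asset_base, allowed_return=0.10):
    """Regulatory required revenue, in $M, with an assumed 10% allowed return."""
    return opex + depreciation + taxes + asset_base * allowed_return

# Base case: $100M regulated asset base, $20M operating expenses.
base = required_revenue(opex=20.0, depreciation=2.5, taxes=3.0, asset_base=100.0)

# "Trade $1 of opex for $10 of capital": opex falls by 1, assets rise by 10
# (depreciated straight-line over an assumed 40 years).
traded = required_revenue(opex=19.0, depreciation=2.5 + 10.0 / 40, taxes=3.0,
                          asset_base=110.0)

# Ratepayers see nearly the same required revenue ($35.5M vs $35.75M),
# but shareholders now earn a return on $10M more of assets.
shareholder_return_base = 100.0 * 0.10    # $10.0M
shareholder_return_traded = 110.0 * 0.10  # $11.0M
```

This is why the "$10 of capital to save $1 of operating expenses" rule of thumb can make sense for shareholders while leaving rates roughly unchanged.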

Expressions such as “equipment failures hurt the bottom line” make little sense for a utility executive: if an old piece of equipment fails and is replaced by a new one, that is actually good, as the old one is written off (the loss being recovered from ratepayers) and a new asset is added to the base (on which shareholders will get a return). Similarly, the expression “reducing operating expenses improves your bottom line” is not strictly true – such a reduction eventually accrues to ratepayers, not shareholders, and often just offsets other increases. However, it can be true in a sense if the reduced operating expenses are the result of capital expenditures that increase the asset base and, hence, the return paid to shareholders. Taken to the extreme, utility executives should want to replace all (non-executive) workers (i.e., operating expenses) with robots (i.e., capital assets).

This leads to a number of factors that utility executives ponder when deciding on new investment projects. They will be inclined to support an investment project before their regulator if it results in a combination of the following factors, arguably ordered from the strongest down:

  • Meeting governmental obligations:
    • Meeting statutory obligations, such as workers’ health and safety regulations and CIP V5 cybersecurity standards.
    • Meeting policy obligations, such as integrating renewable sources in the distribution network, energy conservation programs, removal of PCB or oil filled equipment, and reduction of greenhouse gas emissions.
    • Prudency, which determines if the costs are reasonable with what is known at the time of filing.
  • Lower rate impact:
    • Lower operating expenses, such as avoiding overtime truck rolls.
    • Lower energy costs for rate payers, such as if technical losses are diminished.
    • Stretched service life or reduced maintenance costs of existing assets, such as by limiting stress on station transformers installed 50 years ago and approaching end-of-life.
    • Lowering carbon taxes.
  • Reduced societal risks:
    • Greater resiliency during major events, such as looping distribution feeders and underground construction.
    • Better public safety, such as avoiding forest fire.
  • Positive impact on the economy:
    • Reducing sustained or momentary outage costs.
    • Three-phasing of rural lines to better serve C&I customers.
  • Improved quality of service:
    • Improved customer service metrics, such as fewer customer complaints from flickers.
    • Fairness among customers, such as improving reliability experienced by customers in rural areas to approach that of urban areas.

Each utility operates in its own regulatory and societal environment, so the relative importance of these factors varies between utilities. In particular, some price-cap regulation is starting to appear in North America. With price-cap regulation, prices are set from a starting point and then adjusted according to an economic price index (such as the CPI), minus some expected productivity improvement, plus or minus incentives. However, few states and provinces have moved to price-cap regulation for electric utilities. Also, given that the starting point of price-cap regulation is rate-of-return regulation, and that unforeseen events may cause utilities to petition regulators for additional capital spending, the difference in executive behavior between the two regimes may not be as large as one might think. Still, with utilities under price-cap regulation, it is better to talk about total cost of ownership than about capital spending. Some utilities also have quality-of-service incentives that increase the importance of reliability indices.
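As a rough sketch, the price-cap adjustment described above ("index minus expected productivity, plus or minus incentives") looks like this; the tariff, inflation, X-factor, and incentive values are all made up for illustration:

```python
# Minimal sketch of a "CPI - X" price-cap adjustment. Illustrative numbers only.

def price_cap_adjust(current_price, cpi, x_factor, incentives=0.0):
    """New price = current price * (1 + CPI - X), plus/minus incentives
    (expressed here in the same cents-per-kWh units as the price)."""
    return current_price * (1.0 + cpi - x_factor) + incentives

# Example: a 10 c/kWh tariff, 2% inflation, 1% expected productivity gain,
# plus a hypothetical 0.05 c/kWh quality-of-service reward.
new_price = price_cap_adjust(10.0, cpi=0.02, x_factor=0.01, incentives=0.05)
# -> 10.0 * 1.01 + 0.05 = 10.15 c/kWh
```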

Problem Statement for Executives of Customer-Owned Utilities 

Customer-Owned Utilities (COU), essentially cooperatives and municipal utilities, are often regulated by their local government (such as a city council), just like other city services such as water and waste disposal. They typically have a shorter feedback loop with customers than IOUs. Contrary to executives of investor-owned utilities, executives of customer-owned utilities do not have an incentive to maximize their base of assets, so tradeoffs may favor more operating expenses, especially since COUs are seen as good employers in their communities. Investment decisions weigh societal benefits and risks more heavily, with emphasis on customer service and quality of service. It is therefore important to adjust the language, as insisting on capital investments alone does not make sense for customer-owned utilities.

Large Canadian provincial utilities and municipal utilities across North America are publicly owned, like traditional COUs, but often pay dividends to their owners. Their behavior normally falls somewhere between the IOU and COU extremes, especially if most rate increases can be shifted to generators.

The New Grid Needs to Be a Lot More Complicated

The Old Grid used to be relatively simple, with generation following load:

Old Grid

It is now a lot more complicated:

New Grid

The grid is transforming and getting more complicated:

  • We are decommissioning fossil plants to reduce GHG emissions and nuclear plants because of safety concerns.
  • There are only so many rivers, so building new hydro plants is not a sufficient solution.
  • We are therefore replacing fossil and nuclear baseload plants with renewables that are intermittent.
  • To compound the problem of balancing the grid, loads are also becoming peakier, with reduced load factors. Interestingly, many energy conservation initiatives actually increase power peaks.
  • To connect the new renewable generation, we need to build more transmission. The transmission network also allows operators to spread generation and load over more customers – geographic spread helps smooth out both generation and load.
  • Building new transmission lines faces local opposition and takes a decade. The only other alternatives to balance the grid are storage … and demand management.
  • Another issue is that we are far more dependent on the grid than we used to be. With electric cars, an outage during the night may mean that you can’t go to work in the morning. So we see more and more attention to resiliency, with faster distribution restoration using networked distribution feeders as well as microgrids for critical loads during sustained outages.
  • Renewable generation and storage can be distributed more effectively into the distribution network, although small-scale generation and storage are much more expensive than community-scale generation and storage.
  • With distributed generation, distributed storage and a networked distribution grid, energy flow on the distribution grid becomes two-way. This requires additional investments in the distribution grid and renewed attention to electrical protection (remember the screwdriver).
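The "peakier loads, reduced load factor" point above can be made concrete. Load factor is average load divided by peak load; the two made-up 24-hour profiles below show how trimming energy use without trimming the peak lowers the load factor:

```python
# Load factor = average load / peak load. Both profiles are invented
# 24-hour sequences (MW) purely for illustration.

def load_factor(hourly_load_mw):
    """Average load divided by peak load over the given hourly profile."""
    return sum(hourly_load_mw) / (len(hourly_load_mw) * max(hourly_load_mw))

# A flat-ish profile vs. one where conservation trims overnight energy use
# but leaves a sharp morning peak intact: less energy, lower load factor.
flat = [90, 95, 100, 95, 90, 95] * 4
peaky = [60, 60, 60, 120, 80, 70] * 4
# load_factor(peaky) = 0.625, well below load_factor(flat) ≈ 0.94
```

A peakier system needs more balancing resources even though it consumes less energy overall.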

All of this costs money and forces the utilities to adopt new technologies at a pace that has not been seen in a hundred years. The new technology is expensive, and renewable generation, combined with the cost of storage, increases energy costs. There is increasing attention to reduction of operating costs and optimization of assets.

“Resilient Power for Sustainable Cities” Presentation at the Canadian Electricity Association

I presented this to senior managers of Canadian utilities attending the 24 February Distribution Council of the Canadian Electricity Association. It can be found on SlideShare at


The cost of disasters has been increasing exponentially since the 1970s – and cities are the most affected, which is not surprising since cities produce 80% of the world’s gross domestic product (GDP). The majority of disasters are related to climate events, and cities are part of the root cause: they generate 75% of our greenhouse gas (GHG) emissions. Mayors, acting locally on a short feedback loop, see these challenges daily – it is about their constituents getting sick, having clean water, being warm or cool, holding productive jobs, commuting efficiently, surviving disasters. They see that a smart city needs, first and foremost, to be both resilient, to face increasing disasters, and sustainable, to reduce its environmental impact and improve quality of life – while remaining financially affordable.

Cities can’t function without electricity. It moves subways and trains. It cools, heats and lights our homes and businesses. It pumps our water and keeps fresh the food we eat. And it powers the technologies that are the foundation of a smart city. By implementing smart grid technologies such as microgrids and distribution automation, electric utilities play a key role in making cities both resilient and sustainable. Yet, many electric utilities do not partner with mayors to work on cities’ resiliency and sustainability challenges. A better approach is to see city policy makers as major stakeholders and a driving force in modernizing the grid.

Have you talked to your mayor(s) lately?

The Cost of Outages Is a Policy Issue

Based on my work with Canadian and Australian utilities, the cost of outages is first a policy issue – not a regulatory one, not an operational one. Arguments based on the cost of outages may resonate with policy makers, including Smart City stakeholders, because of public pressure or impact on the economy at large. However, these arguments do not resonate with regulatory agents (who follow policies) nor with utilities (who do not have customer outage costs in their financial statements). Individual users may or may not know their specific costs related to outages, but broad outage cost assessments will not affect them.

While utility customers are the ones bearing the cost of outages, multiple surveys have shown that customers are not willing to pay more for more reliable power. Even in individual cases, where a utility proposed to split specific reliability improvement costs with industrial users, customers declined even though the associated payback period was much shorter than they would require for other purchasing decisions. Essentially, customers are telling policy makers and regulators that they pay enough and that reliability is simply expected. Public opinion, regardless of the actual costs incurred, is a powerful tool for disgruntled customers, who can vote policy makers in or out of office. Public opinion may incite policy makers to act, requiring utilities to invest in reliability improvements.

This being said, customers incur real costs when an interruption occurs, but accurately capturing these costs is elusive – the ICE calculator is the best-developed attempt at estimating overall economic costs. Policy makers, as stewards of the economy, can be sensitive to the economic cost argument when reliability improvement costs are seen through the lens of an industrial policy, which may lead to subsidies to improve reliability.

Regulatory agencies follow policies. Traditionally, the rates that utilities charge are based on the cost of generating, transmitting and distributing electricity. In return for their obligation to serve customers in an exclusive service territory, utilities are allowed a guaranteed rate of return on their capital expenditures. Reliability is attained tacitly through conservative engineering and maintenance practices. However, policy and regulatory changes over the last 20 years or so have put tremendous pressure on utilities to reduce their costs, and many have gone through, or are still going through, massive downsizing. As a direct consequence, reliability has suffered on some systems. Where reliability incentives or penalties are used, reliability targets are typically based on historical values, not on the economic costs of outages.

Utilities would like to invest more to improve reliability. These investments would add to the asset base upon which investors get a guaranteed return. However, regulators may not let utilities spend on reliability improvement because of the impact on rates – unless policy requires them to.

Since outage costs may resonate with policy makers, it is a worthwhile argument for Smart City initiatives. Cities cannot function without electricity. It moves subways and trains. It cools, heats and lights our homes and businesses. It pumps our water and keeps fresh the food we eat. And it powers the technologies that are the foundation of a Smart City. By implementing smart grid technologies such as microgrids and distribution automation, electric utilities play a key role in making cities both resilient and sustainable. Yet, many electric utilities do not partner with mayors to work on cities’ resiliency and sustainability challenges. Policy makers could then use outage cost arguments when working with their utilities on reliability improvement initiatives.


GTM Squared Report

I just finished reading the annual survey of utilities prepared by GTM Squared. I found it a useful reference for understanding the challenges faced by utilities worldwide, and I want to share some interesting highlights:

  • Three quarters of utilities say that regulatory hurdles are the greatest challenge they face today. The preference is to develop market-based reforms, as well as clear interconnection and net metering rules – in other words, mechanisms that assign value to Distributed Energy Resources (DER). Note that DER (such as distributed generators and storage) will play an increasing role in utilities worldwide.
  • Half of respondents see the consumers at the forefront of the industry’s evolution. However, it is surprising that utilities in the same survey do not put a greater priority on customer engagement.
  • On storage, respondents see an increasing emphasis toward actual projects, and less on the physics and technology of storage. DER vendors now offer better systems intelligence and grid integration to companies focused on building a next-generation power grid (more sustainable and more resilient). Energy storage is now living up to the hype, having seen record installations in 2015.

Utility-Scale Solar Report

I finally got around to reading the US Department of Energy report on utility-scale solar energy, published a couple of months ago. Here are my highlights:

  • The installation trend is compelling. Installed capacity is now 30,000 MW – about 30 times more than 5 years ago.
  • Installation costs are falling – by more than 50% since the 2007-2009 period, the lowest-priced projects being around $2/W (AC).
  • The average capacity factor has improved to 27.5%. The main factors explaining variations between projects are, in order of importance: the strength of the solar resource at the project site; whether the array is mounted at a fixed tilt or on a tracking mechanism; the inverter loading ratio; and the type of PV modules used.
  • Power purchase agreement prices have fallen. Utility-scale solar PPAs are now as low as $40/MWh. At these low levels – which appear to be robust, given the strong response to recent utility solicitations – PV compares favorably to just the fuel costs (i.e., ignoring fixed capital costs) of natural gas-fired generation, and can therefore potentially serve as a “fuel saver” alongside existing gas-fired generation (and can also provide a hedge against possible future increases in fuel prices).
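Two of the figures above can be sanity-checked with back-of-envelope arithmetic. The heat rate and gas price in the second part are my own illustrative assumptions, not values taken from the DOE report:

```python
# Back-of-envelope checks on the capacity factor and "fuel saver" points.

HOURS_PER_YEAR = 8760

def capacity_factor(annual_mwh, capacity_mw):
    """Fraction of the year's maximum possible output actually produced."""
    return annual_mwh / (capacity_mw * HOURS_PER_YEAR)

# A hypothetical 100 MW (AC) plant producing 240,900 MWh/year
# has a 27.5% capacity factor.
cf = capacity_factor(240_900, 100)

# "Fuel saver" comparison: gas fuel cost per MWh = heat rate * gas price.
# Assuming a 10,000 Btu/kWh heat rate (10 MMBtu/MWh) and $4/MMBtu gas:
gas_fuel_cost_per_mwh = 10.0 * 4.0   # $40/MWh of fuel alone
solar_ppa_per_mwh = 40.0             # low end cited above
# At these assumed values, the solar PPA matches the gas plant's fuel
# cost alone, before counting any of the gas plant's fixed capital costs.
```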

Evolution of Energy Generation and Distribution in Canada’s Smart Power Grid – Innovation 360 Conference Panel

On September 29, I was asked to participate on a panel titled “Evolution of Energy Generation and Distribution in Canada’s Smart Power Grid” at the Innovation 360 conference in Gatineau, Québec. Here is the essence of what I contributed.

By definition, in an electricity network, energy consumption plus losses equal electricity generation. This must be true at any point in time, or protection systems will shed load or trip generators.

There are 4 ways to balance load and generation:

1) Traditionally, dispatchable generators that can easily ramp up or down were tasked with following the load. Big hydro plants and natural gas generators are particularly good at this. However, we are running out of big hydro opportunities, and natural gas generators are a source of greenhouse gas emissions, contributing to global warming.

2) Another way to balance load and generation is to interconnect with neighboring networks that may not have the same load profile. Today, all of North America is interconnected in some way. However, building transmission lines is a lengthy process that typically faces major local opposition. As a result, most transmission lines run at capacity during peaks, weakening the bulk transmission system, as the Northeast blackout of 2003 demonstrated.

3) In the last couple of decades, we have started to control load, like turning off air conditioning units by pager or getting large industrial loads like smelters to go offline for a couple of hours during a major peak. Time-of-use or market pricing are also attempts to have loads better follow available generation capacity. However, much of the conservation focus thus far has been on energy efficiency, not peak load reduction.

4) Very recently, energy storage has been getting attention. Traditionally, the only utility-scale storage technology available was pumped storage, like the Sir Adam Beck plant in Niagara, but few such plants are possible, and they are not efficient. Going forward, batteries, whether utility-scale or distributed, will grow, although for now utility-scale batteries are MW-class when hundreds of megawatts or even gigawatts are needed.

Balancing load and generation is also becoming more and more difficult. On one hand, consumption is getting peakier, partly due to side effects of some energy saving programs, like turning down thermostats at night in the winter, and then turning them back up in early morning, just in time for the morning peak. On the other hand, wind and solar generators are replacing fossil generators, adding unpredictability to generation and taking away controllability, thus requiring even more balancing resources.

Integrating renewables into the grid does not only cause balancing problems. It also creates voltage management and protection problems. Those are solvable, but significant, engineering problems that require expensive upgrades to the electricity grid.

Load and generation balancing, voltage management and grid protection add costs that are ultimately borne by subscribers. It therefore quickly becomes a political issue.

As a society, we have been subsidizing fossil fuels. Clearly, going forward, we will need to greatly invest in the grid if we want to limit the predicaments of global warming for our children and grand-children.

Using Analytics to Assess Station Thermal Risks Caused by Reverse Power Flow

With sufficient Distributed Generation (DG – embedded generation in Europe) installed on the feeders of a substation, reverse power flow may occur at the station when load is low. This is especially the case when large generators (such as wind farms) are connected on express feeders dedicated to their use.

Substations have been designed, rated and operated as step-down substations, with power flowing from the higher system voltage to the lower system voltage. Some substations also have dual-secondary-winding transformers that do not allow for reverse power flow conditions, as unequal reverse flow in the two secondary windings would cause overheating and potential failure of the transformer.

Utilities limit DG capacity downstream of a station to avoid excessive reverse-flow and to prevent overheating of substation transformers. For example, Hydro One requires that generation shall not exceed 60% of the maximum MVA rating of a single transformer plus minimum station load.
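The Hydro One rule quoted above translates into a one-line calculation; the transformer rating and minimum station load below are hypothetical values for illustration:

```python
# Sketch of the Hydro One-style limit: DG downstream of a station may not
# exceed 60% of a single transformer's maximum MVA rating plus the minimum
# station load. The example figures are made up.

def dg_limit_mva(transformer_max_mva, min_station_load_mva, fraction=0.60):
    """Maximum DG capacity allowed downstream of the station, in MVA."""
    return fraction * transformer_max_mva + min_station_load_mva

# e.g. a 50 MVA transformer and a 10 MVA minimum load allow up to 40 MVA of DG.
limit = dg_limit_mva(transformer_max_mva=50.0, min_station_load_mva=10.0)
```

The minimum-load term reflects that load present at the station absorbs some of the generation before any power flows back through the transformer.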

The (worst-case) engineering assumption is that maximum generation coincides with lowest load at a station. Is that the case? Some years ago, I ran a Monte-Carlo simulation of load and wind generation, based on theoretical distributions of both, but doubling the generation normally allowed. It found that generation would be excessive… less than 2% of the time (and not by much, and at a time when load is low and so are prices). Using actual smart meter data, it is now possible to know what is actually going on and better assess risks. For solar generation in hot climates, there is a negative correlation between load and generation – in other words, maximum generation does not happen at times of minimum load.
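As a toy version of that Monte Carlo experiment, the sketch below draws station load and wind output from made-up distributions and counts how often reverse flow would exceed an assumed transformer limit. None of these numbers come from the original study:

```python
# Toy Monte Carlo: how often does wind output minus station load exceed
# the allowed reverse-flow limit? All distributions and limits are
# illustrative assumptions, not data from any real station.

import random

random.seed(42)

TRANSFORMER_LIMIT_MVA = 40.0   # assumed allowed reverse-flow limit
WIND_CAPACITY_MVA = 60.0       # installed DG, deliberately above the static rule

def simulate(n_hours=100_000):
    violations = 0
    for _ in range(n_hours):
        # Station load: roughly normal around 25 MVA, floored at 5 MVA.
        load = max(5.0, random.gauss(25.0, 8.0))
        # Wind output: right-skewed fraction of capacity (often low, rarely high).
        wind = WIND_CAPACITY_MVA * random.betavariate(0.9, 2.5)
        if wind - load > TRANSFORMER_LIMIT_MVA:
            violations += 1
    return violations / n_hours

violation_rate = simulate()
# With these assumed distributions, the violation rate comes out well
# under 2%, echoing the kind of result described in the text.
```

Because high wind output and low load rarely coincide, the joint probability of a violation is far smaller than the worst-case assumption suggests.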

Even better: correlating with forecast weather data can assess whether reverse flow could be excessive in a few hours, and require large DGs to go off-line ahead of a problem (and this would not happen frequently). While I have not seen such an application, it is clearly in the realm of possibilities.

Deep analytics, used as a planning tool or in support of operations, enables safe integration of more distributed generation by managing the thermal limits of station transformers operating in reverse flow.

Tutorial: Key Players in the Energy Markets: Rivalry in the Middle

The players described in the previous post have vastly different characteristics. The most striking difference is the level of rivalry.


Distributors operate in a defined territory, often corresponding to a city, a state or a province, where they are the sole provider – thankfully, as there would otherwise be multiple lines of poles along roads. Given this monopoly, distributors are subject to price regulation, meaning that the price they charge for the use of their infrastructure (poles, conductors, cables, transformers, switches, etc.) is set, typically equal to their costs plus an allowed return on their investment. This is done by filing tariffs that are approved by the regulatory body following a rate hearing.

Retail is often a competitive industry, as there is no structural barrier to having multiple players. However, some distributors are also given the retail monopoly over their territory. Some may also provide retail services in competition with other retailers. In those cases, the distributor-owned retailer is also regulated and has to seek approval of its rates, but other retailers typically do not, although they may have to file their rate plans.

It is possible to have multiple transmission companies operating in the same territory, each owning one or a few transmission lines. However, because those transmission lines are not perfect substitutes (they do not necessarily have the same end-points in the network) and because transmission capacity is scarce, transmitters typically have regulated rates, although they may compete for new construction.

System operators are, in effect, monopolies over a territory and must maintain independence from the other players; they are often government- or industry-owned. Their costs are charged back to the customer base, directly or indirectly.

Large generators are in a competitive business, competing in an open market, although distributed generators, which are much smaller, usually benefit from rates set by a regulator or a government.

Tutorial: Key Players in the Energy Markets

I will be speaking at an investor conference later this year, and I will also be training some people internally at my employer. The topics touch on electricity industry structure, and I am preparing some material for them.

The industry can be quite complex in some jurisdictions. I boiled the complexity down to just this:


Traditional large-scale generators own and maintain coal, natural gas, nuclear, hydro, wind and solar plants connected to transmission lines. Those are large plants – typically hundreds of megawatts.

Transmitters own and maintain transmission lines – the large steel towers seen going from large generators to cities. Those typically run at 120,000 volts and more, up to over 1,000,000 volts in some cases.

Distributors own and maintain the local infrastructure of poles and conduits going to customer sites. Those typically run at 1,200 to 70,000 volts, usually stepped down to 600, 480, 240 or 120 volts for connection to customers.

Most customers are connected to distributors, although some large industrial facilities (such as aluminum smelters) are directly connected to transmission lines.

While customers are connected to distributors, they purchase electricity from an independent retailer or from the retail arm of a distributor.

Customers installing distributed generation on their premises can sell power back to the market, often through aggregators.

Retailers buy electricity from generators in an energy market – like a stock exchange, but for electricity.

By definition, the energy produced at any instant must be equal to the energy taken by customers, accounting for a small percentage of losses in transmission and distribution. (We are starting to see large-scale storage operators, which may act as both consumer and generator, depending on whether they are charging or releasing electricity into the network.) This critical balance is maintained by the system operator, which directs generators to produce more or less to match load; in some cases, the system operator will also direct distributors to shed load (customers) if generation or transmission is insufficient to meet demand.

The next post will deal with energy and money flows in the market.

Covered Conductors Vs. Single-Phase Reclosers

A utility client told me that they were trying out covered conductors on a feeder in a forested area. This was the first time that this large utility tried covered conductors. The objective is to reduce the impact of tree contacts and falling branches that blow fuses and therefore result in permanent outages for customers. In this context, the great length of feeders and the high system voltage (25 kV) make coordinating reclosers and fuses difficult.

Covered conductors have a thin insulation covering – not rated for the full phase voltage, but sufficient to reduce the risks of flashovers and fire when a tree branch falls between phases, when a tree makes momentary contact with a conductor, or when an animal jumps to it. Covered conductors also allow utilities to use tighter spacing between conductors.

While covered conductors help with tree contacts, they also have a number of operational disadvantages:

  • High impedance faults with a downed conductor are more likely, leading to public safety issues, especially since the conductor may not show arcing and may not look as if it is energized.
  • Covered conductors are more susceptible to burndowns caused by fault arcing. Covering prevents the arc from motoring with magnetic forces along the wire, concentrating heat damage. Repair time and cost increase significantly.
  • Covered wires have a larger diameter and are heavier, increasing loading, especially with freezing ice and high wind, which increases the likelihood of mechanical damage (including broken poles and crossarms), again leading to high repair times and costs.
  • Covered conductors have somewhat lower ampacity at high temperature (worsened by the black color that absorbs more heat from the sun), with more limited short-circuit capability. High temperature also degrades the insulation. This results in more design and planning constraints that may increase construction costs.
  • Water can accumulate between insulation and wire at the low point of a span, causing premature corrosion that weakens the conductor and can lead to failure.
  • Covered conductors must be installed differently than bare ones. For instance, using a conducting insulator tie can lead to partial discharges and radio interference.
  • Finally, cost is an obvious issue – replacing conductors on existing lines is extremely expensive, possibly as much as $100k per km.

These issues got me thinking about how I could provide a better alternative. Replacing fuses with single-phase reclosers appears to be an interesting (if unlikely) alternative to covered conductors. Cutout-mounted single-phase reclosers can easily be installed in existing cutouts to protect lateral circuits. Those circuits are then protected against tree contacts without the disadvantages of covered conductors. Coordination with upstream mainline reclosers is eased by making the single-phase recloser faster than the mainline recloser. Cost is clearly lower than re-conductoring.

Full disclosure: I am employed by S&C, and S&C makes a cutout-mounted recloser.

Pseudo-Realtime Voltage Regulation to Increase DG Penetration

Closed-loop voltage control in distribution networks traditionally relied on Potential Transformers (PT) on feeders communicating with a control algorithm sending setting signals to voltage regulators and capacitor banks. More recently, Faraday devices have been used instead of PTs, being less expensive to purchase and to install.

What about smart meters with voltage measurement capability? Some smart meters measure voltage at the service point, which accounts for voltage drop in secondary feeders and transformers. There are also far more meters than PTs or Faraday sensors, providing greater coverage. But there is a problem: smart meter networks have long internal latency – it may take minutes for voltage signals to get back to a control center. This renders smart meters unusable in a traditional real-time control loop.

However, analytics could make use of delayed smart meter data, combining it with other data such as weather and historical records, to provide pseudo real-time feedback.

This could prove particularly effective with high levels of weather-dependent Distributed Generation (DG) penetration, such as solar and wind. Where a traditional voltage control system relying on real-time feedback could be overwhelmed or misled by the variability of renewable generation, a control system relying on deep analytics of smart meter and weather data could be more effective in maintaining distribution grid stability.
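One minimal form of such pseudo real-time feedback is to correct a stale meter reading for what the weather has done since the reading was taken. The sketch below assumes a feeder with solar DG, where rising irradiance tends to raise local voltage; the function name and the volts-per-irradiance coefficient are made-up illustrations – a real system would fit such a coefficient from historical meter and weather data:

```python
def pseudo_realtime_voltage(stale_meter_v, irradiance_now, irradiance_at_reading,
                            volts_per_kw_m2=0.8):
    """Correct a delayed smart-meter voltage reading (volts) for the change
    in solar irradiance (kW/m^2) since the reading was taken. The linear
    coefficient is an illustrative assumption, not a measured value."""
    return stale_meter_v + volts_per_kw_m2 * (irradiance_now - irradiance_at_reading)

# Reading taken minutes ago at 0.6 kW/m^2; irradiance has since risen to 0.9,
# so the estimate of the present voltage is slightly above the stale reading.
estimate = pseudo_realtime_voltage(242.0, 0.9, 0.6)
print(f"{estimate:.2f} V")
```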

Using Analytics to Assess Islanding Risks of Distributed Generators

One of the most critical situations with Distributed Generators (DG – embedded generators in Europe) is that an interrupter on a distribution feeder may trip to isolate a circuit section while the DGs continue supplying the load on that section, creating an “island”. When load closely matches generation in the island, it may be sustained for some time, posing safety hazards – this is known to have caused deaths.

Distributed generators have various passive or active anti-islanding mechanisms that open a breaker at the point of connection when an islanding condition is detected. However, islanding detection techniques used in small DGs (such as residential photovoltaic generators) are far from perfect – without expensive circuitry, they may not always immediately detect an island when generation and load are closely matched. Therefore, some utilities require that load on any feeder section (i.e., between interrupters) always be greater than generation, ensuring that an island cannot sustain itself. This means that the total distributed generation capacity on a feeder section must be significantly less than the minimum aggregated load on that section. The problem is compounded by the fact that engineers assessing DG connection requests usually do not know actual load and generation per line section – estimates need to be made.

In the end, allowable distributed generation on a line section can be a pretty small number – in Ontario, Hydro One requires that total generation must not exceed 7% of the annual line section peak load – meaning that few customers are allowed to have generators.

Applying analytics on smart meter data can better assess how much distributed generation can safely be connected to a line section. For instance, minimum load may never be correlated with maximum generation – e.g., in hot climates, minimum load occurs at night, when there is no solar generation. Analytics can look into past load and generation records to determine how much generation can be connected without getting into potential islanding condition. Safe generation levels may be many times more than the previous conservative worst-case-that-never-happens engineering guidelines allowed.
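The gap between the worst-case screen and the data-driven view can be shown with a toy example. The profiles below are synthetic, standing in for the smart meter history a real analysis would use: load bottoms out at night while solar generation peaks mid-day, the hot-climate pattern described above:

```python
import math

# Synthetic hourly profiles for one line section over a year.
hours = range(24 * 365)
load_mw = [3.0 + 1.5 * math.sin(2 * math.pi * ((h % 24) - 6) / 24) for h in hours]
solar_mw = [max(0.0, 2.0 * math.sin(math.pi * ((h % 24) - 6) / 12)) for h in hours]

# Worst-case screen: minimum load vs maximum generation, as if coincident.
screen_margin = min(load_mw) - max(solar_mw)
# Data-driven view: worst *coincident* net load actually observed.
coincident_margin = min(l - g for l, g in zip(load_mw, solar_mw))

print(f"worst-case screen margin:   {screen_margin:+.2f} MW")
print(f"coincident margin from data: {coincident_margin:+.2f} MW")
```

With these numbers the worst-case screen rejects the connection (negative margin), while the coincident history shows load always exceeding generation by a comfortable margin – the section never approaches an islanding-prone balance.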

Better DG Connection Assessment by Validating Phase Mapping and Tap Settings with Utilities Analytics

Distributed generators (DG – embedded generators in Europe) can cause voltage excursions outside the allowable range and can exacerbate phase imbalance, increasing losses (especially on North American networks). Utilities set engineering rules to try to mitigate those effects, for example by limiting how much generation can be connected per feeder section.

Unfortunately, meter-to-transformer-to-phase (MTP) mapping (MPT in Europe) is notoriously inaccurate, meaning that engineers do not know the distribution of single-phase DGs on a feeder – with DGs often clustered on single-phase laterals, DG dispersal across phases may be far from even. Similarly, distribution transformer tap positions are generally unknown, but often set high because under-voltage was the traditional problem – with DGs, over-voltage can become the issue. This forces engineers to take an overly cautious approach when assessing DG connections or face the risk of network problems later.

In the past, validating MTP mapping and distribution tap settings required extensive fieldwork to track each triplex to a transformer, to track the transformer to a phase, and to visually check tap settings with a bucket truck. Now, analytic applications can correlate voltage levels over time to identify which transformer and phase each meter belongs to, and identify transformers where the tap setting is too high or too low. The analytical engine can also correlate service point street addresses and longitude/latitude coordinates with those of the transformer. The correlations are statistical, but, with enough historical data, the accuracy is equal to or better than a visual survey, at a much-reduced cost.
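The core of the phase-identification step is simple: a meter's voltage time series tracks the voltage profile of the phase it is connected to. A minimal sketch, using synthetic data and plain Pearson correlation (real products use more sophisticated statistics, and the data here is made up):

```python
import math
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def likely_phase(meter_volts, phase_refs):
    """Assign a meter to the phase whose substation voltage profile its
    own readings track most closely."""
    return max(phase_refs, key=lambda p: pearson(meter_volts, phase_refs[p]))

# Synthetic check: three uncorrelated phase voltage profiles, and one meter
# whose readings are the phase-B profile plus measurement noise.
random.seed(7)
refs = {p: [240.0 + random.gauss(0, 2) for _ in range(500)] for p in "ABC"}
meter = [v + random.gauss(0, 0.5) for v in refs["B"]]
print(likely_phase(meter, refs))  # → B
```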

With reliable phase and tap information, engineers can now assess DG connection requests with greater confidence that the voltage stability of the grid will be maintained.

Reducing Reliance on Individuals in Field Regions

In a previous post, I said that consolidation reduces costs. But it does more: consolidation eases implementation of systems that reduce dependency on the particular knowledge and experience of key individuals. This is particularly clear in two areas:

  • Work Scheduling and Dispatching. Advanced schedulers, such as ClickSoftware, may automatically dispatch field crews based on skillset, equipment and availability, without relying on dispatchers’ particular knowledge and experience, especially for unplanned (emergency) work. With reduced human intervention, dispatchers become supervisors of the overall process, focusing on difficult situations that the system cannot process effectively by itself. In addition to more efficient truck rolls, the number of dispatchers and schedulers (now consolidated) can be reduced.
  • Customer Relationship Management (CRM). Large utilities may have sophisticated Customer Information Systems (CIS) for millions of residential and small commercial and industrial accounts, but there is often no system to manage the hundreds of large commercial, industrial and institutional (CI&I) customers. These therefore remain the purview of local staff who own the customer contacts. The lack of rigour in regard to customer contact is probably a contributor to the low CI&I customer satisfaction often observed. It would not make sense to implement a large system for a few customers, but a light CRM can be cost-effective and relatively fast to implement.

Full disclosure: My father worked for 25 years as a utility dispatcher. He is long dead now, but I am sure that he would be amazed to see the tools that dispatchers at modern utilities may have now.

Reducing Overhead by Consolidating Field Regions

Large utilities have multiple regions in their territory, with each region having multiple field depots. This structure leads to a great amount of duplication and overlap of responsibilities, as key business functions such as work planning, work scheduling, project management and customer relationships are duplicated across regions. It also causes deviations and a lack of uniformity in the way work is executed across regions and depots.

There is a clear trend in the industry to consolidate regions and depots, flattening the organisation. Talking to utility managers who have gone through consolidation of field regions, I concluded that one can expect a 20% reduction in overhead from a 2:1 consolidation – and this can be compounded, i.e. a 4:1 consolidation leads to almost 40% overhead reduction.
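The compounding works like successive discounts rather than simple addition, which is why 4:1 gives a bit less than double the 2:1 figure:

```python
def overhead_reduction(halvings, per_step=0.20):
    """Overall overhead reduction after repeated 2:1 consolidations,
    each cutting roughly 20% of the remaining overhead (the figure
    quoted above; compounding, not additive)."""
    return 1 - (1 - per_step) ** halvings

# A 4:1 consolidation is two successive 2:1 consolidations:
print(f"{overhead_reduction(2):.0%}")  # → 36%
```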

Why was this not done earlier? Implementation of Enterprise Resource Planning (ERP) systems, which forces standardization of processes, is one key driver. Furthermore, an ERP can effect consolidation without requiring centralization of roles – consolidation without centralization meets less organizational resistance from middle management than pure centralization.