Monthly Archives: March 2015

Pseudo-Realtime Voltage Regulation to Increase DG Penetration

Closed-loop voltage control in distribution networks has traditionally relied on Potential Transformers (PTs) on feeders communicating with a control algorithm that sends setting signals to voltage regulators and capacitor banks. More recently, Faraday devices have been used instead of PTs, as they are less expensive to purchase and install.

What about smart meters with voltage measurement capability? Some smart meters measure voltage at the service point, which accounts for voltage drop in secondary feeders and transformers. There are also far more meters than PTs or Faraday sensors, providing greater coverage. But there is a problem: smart meter networks have long internal latency – it may take minutes for voltage signals to get back to a control center. This renders smart meters unusable in a traditional real-time control loop.

However, analytics could make use of delayed smart meter data, combining it with other data such as weather and historical records, to provide pseudo real-time feedback.

This could prove particularly effective with high levels of penetration of Distributed Generation (DG) that is affected by weather, such as solar and wind. Where a traditional voltage control system relying on real-time feedback could be overwhelmed or misled by the variability of renewable generation, a control system relying on deep analytics of smart meter and weather data could be more effective in maintaining distribution grid stability.
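To make the idea concrete, here is a minimal sketch of how a minutes-old meter reading might be corrected with current weather data. The function name, sensitivity coefficient, and all values are illustrative assumptions, not an actual utility control algorithm:

```python
# Sketch: estimate present feeder voltage from a delayed smart-meter
# reading plus a weather-driven correction for the change in solar
# output since the reading was taken. All names and coefficients are
# illustrative assumptions.

def estimate_voltage_now(delayed_voltage_pu, delayed_by_min,
                         irradiance_then, irradiance_now,
                         sensitivity_pu_per_kw_m2=0.05):
    """Adjust a minutes-old per-unit voltage reading for the change
    in solar irradiance (kW/m^2) since the reading was taken.

    sensitivity_pu_per_kw_m2 is an assumed per-unit voltage rise per
    kW/m^2 of local PV output (illustrative value only).
    """
    correction = sensitivity_pu_per_kw_m2 * (irradiance_now - irradiance_then)
    return delayed_voltage_pu + correction

# Example: the meter reported 1.02 pu five minutes ago; cloud cover has
# since cut irradiance from 0.9 to 0.5 kW/m^2, so PV output and the
# local voltage both drop.
v_now = estimate_voltage_now(1.02, 5, 0.9, 0.5)
print(round(v_now, 3))  # 1.0
```

A real implementation would fit the sensitivity from historical meter and weather data rather than assuming a constant, but the structure is the same: stale measurement plus model-based correction.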

Using Analytics to Assess Islanding Risks of Distributed Generators

One of the most critical situations with Distributed Generators (DG – embedded generators in Europe) is that an interrupter on a distribution feeder may trip to isolate a circuit section while the DGs continue supplying the load on that section, creating an “island”. When load closely matches generation in the island, the island may be sustained for some time, posing safety hazards – this is known to have caused deaths.

Distributed generators have various passive or active anti-islanding mechanisms that open a breaker at the point of connection when an islanding condition is detected. However, islanding detection techniques used in small DGs (such as residential photovoltaic generators) are far from perfect – without expensive circuitry, they may not always immediately detect an island when generation and load are closely matched. Therefore, some utilities require that load on any feeder section (i.e., between interrupters) always be greater than generation, ensuring that an island cannot sustain itself. This means that the total distributed generation capacity on a feeder section must be significantly less than the minimum aggregated load on that section. The problem is compounded by the fact that the engineers assessing DG connection requests usually do not know the actual load and generation per line section – estimates must be made.

In the end, the allowable distributed generation on a line section can be a pretty small number – in Ontario, Hydro One requires that total generation not exceed 7% of the annual line section peak load – meaning that few customers are allowed to have generators.

Applying analytics to smart meter data can better assess how much distributed generation can safely be connected to a line section. For instance, minimum load may never coincide with maximum generation – e.g., in hot climates, minimum load occurs at night, when there is no solar generation. Analytics can look into past load and generation records to determine how much generation can be connected without getting into a potential islanding condition. Safe generation levels may be many times higher than the previous conservative, worst-case-that-never-happens engineering guidelines allowed.
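To make the contrast concrete, the sketch below compares the 7%-of-peak rule with a limit derived from coincident historical load and per-unit PV output on a section. The data, the safety margin, and the function names are illustrative assumptions, not an actual utility's method:

```python
# Sketch: conservative 7%-of-peak rule vs. an analytics-based PV limit
# computed from coincident load and generation history. Illustrative
# data and margin.

def rule_of_thumb_limit(load_history):
    """Hydro One-style rule: generation must not exceed 7% of peak load."""
    return 0.07 * max(load_history)

def analytics_limit(load_history, pv_profile, margin=0.8):
    """Largest added PV capacity (kW) such that load still exceeds
    generation in every historical interval, with a safety margin.

    pv_profile holds per-unit PV output (0..1) per interval, so that
    capacity * pv_profile[i] is the PV generation in interval i.
    """
    # The binding interval has the smallest load-to-output ratio; any
    # capacity below that ratio keeps load > generation everywhere.
    worst = min(load / out for load, out in zip(load_history, pv_profile)
                if out > 0)
    return margin * worst

# Interval snapshots (kW) for a section: load dips at night, when PV
# output is zero, so the night-time minimum load never binds a PV limit.
load = [400, 350, 600, 900, 800, 500]   # kW per interval
pv   = [0.0, 0.0, 0.6, 1.0, 0.9, 0.2]   # per-unit PV output

print(round(rule_of_thumb_limit(load), 1))   # 63.0 kW (7% rule)
print(round(analytics_limit(load, pv), 1))   # 711.1 kW (analytics-based)
```

In this toy example the analytics-based limit is over ten times the rule-of-thumb number, because the low-load intervals all occur when PV output is zero – exactly the correlation effect described above.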

Better DG Connection Assessment by Validating Phase Mapping and Tap Settings with Utilities Analytics

Distributed generators (DG – embedded generators in Europe) can cause voltage excursions outside the allowable range and can exacerbate phase imbalance, increasing losses (especially on North American networks). Utilities set engineering rules to try to mitigate those effects, for example by limiting how much generation can be connected per feeder section.

Unfortunately, meter-to-transformer-to-phase (MTP) mapping (MPT in Europe) is notoriously inaccurate, meaning that engineers do not know the distribution of single-phase DGs on a feeder – with DGs often clustered on single-phase laterals, DG dispersal across phases may be far from even. Similarly, distribution transformer tap positions are generally unknown, but are often set high because under-voltage was the traditional problem – with DGs, over-voltage can become the issue. This forces engineers to take an overly cautious approach when assessing DG connections or face the risk of network problems later.

In the past, validating MTP mapping and distribution tap settings required extensive fieldwork to track each triplex to a transformer, track the transformer to a phase, and visually check tap settings from a bucket truck. Now, analytic applications can correlate voltage levels over time to identify which transformer and phase each meter belongs to, and to flag transformers where the tap setting is too high or too low. The analytical engine can also correlate service point street addresses and longitude/latitude coordinates with those of the transformer. The correlations are statistical, but, with enough historical data, the accuracy is equal to or better than a visual survey, at a much-reduced cost.

With reliable phase and tap information, engineers can assess DG connection requests with greater confidence that the voltage stability of the grid will be maintained.

Reducing Reliance on Individuals in Field Regions

In a previous post, I said that consolidation reduces costs. But it does more: consolidation eases the implementation of systems that reduce dependency on the particular knowledge and experience of key individuals. This is particularly clear in two areas:

  • Work Scheduling and Dispatching. Advanced schedulers, such as ClickSoftware, can automatically dispatch field crews based on skill set, equipment and availability, without relying on dispatchers’ particular knowledge and experience, especially for unplanned (emergency) work. With less need for human intervention, dispatchers become supervisors of the overall process, focusing on difficult situations that the system cannot handle effectively by itself. In addition to more efficient truck rolls, the number of dispatchers and schedulers (now consolidated) can be reduced.
  • Customer Relationship Management (CRM). Large utilities may have sophisticated Customer Information Systems (CIS) for millions of residential and small commercial and industrial accounts, but there is often no system to manage the hundreds of large commercial, industrial and institutional (CI&I) customers. These therefore remain the purview of local resources who own the customer contacts. This lack of rigour around customer contacts is probably a contributor to the low CI&I customer satisfaction often observed. It would not make sense to implement a large system for a few hundred customers, but a light CRM can be cost effective and relatively fast to implement.

Full disclosure: My father worked for 25 years as a utility dispatcher. He is long dead now, but I am sure that he would be amazed to see the tools that dispatchers at modern utilities may have now.

Reducing Overhead by Consolidating Field Regions

Large utilities have multiple regions in their territory, each region having multiple field depots. This structure leads to a great deal of duplication and overlap of responsibilities, as key business functions such as work planning, work scheduling, project management and customer relationships are duplicated across regions. It also causes deviations and a lack of uniformity in how work is executed across regions and depots.

There is a clear trend in the industry to consolidate regions and depots, flattening the organisation. Talking to utility managers who have gone through consolidation of field regions, I concluded that one can expect a 20% reduction in overhead from a 2:1 consolidation – and this compounds, i.e. a 4:1 consolidation leads to almost 40% overhead reduction.
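The compounding arithmetic can be sketched as follows, assuming each 2:1 consolidation step removes 20% of the remaining overhead:

```python
import math

# Sketch: overhead remaining after repeated 2:1 consolidations, each
# cutting 20% of what is left. A 4:1 consolidation is two 2:1 steps.

def overhead_after(consolidation_ratio, reduction_per_step=0.20):
    """Fraction of original overhead left after repeated 2:1 steps."""
    steps = math.log2(consolidation_ratio)
    return (1 - reduction_per_step) ** steps

print(round(1 - overhead_after(2), 2))  # 0.2  -> 20% reduction
print(round(1 - overhead_after(4), 2))  # 0.36 -> almost 40%
```

An 8:1 consolidation would, by the same logic, approach a 49% reduction – though in practice the per-step savings presumably diminish as the organisation flattens.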

Why was this not done earlier? The implementation of Enterprise Resource Planning (ERP) systems, which forces standardization of processes, is one key enabler. Furthermore, an ERP can effect consolidation without requiring centralization of roles – and consolidation without centralization meets less organizational resistance from middle management than pure centralization.