IEEE/PES Distribution Automation Tutorial 2007/2008
This section provides the sections of the IEEE/PES Distribution Automation Tutorial given in Tampa (2007) and Pittsburgh (2008). Each section has an attached PowerPoint presentation. Some sections also have supporting text.
Section 1 - Distribution Automation Fundamentals
The text below supports the above presentation.
Deregulation and restructuring of the electric utility business are forcing utilities to turn their attention toward providing better supply reliability and quality to customers at the distribution level. Many utilities are contemplating offering performance-based rates to their customers; they would be willing to pay compensation to customers if performance falls below a minimum level. Such actions will allow utilities to brace for the upcoming competition from other parties interested in supplying power to customers. Although higher reliability and quality are the goals of the utilities, they would like to accomplish them while optimizing resources. Another goal for a utility should be improvement of system efficiency by reducing system losses. Distribution Automation (DA) provides options for real-time computation, communication, and control of distribution systems, and thus provides opportunities for meeting the above-mentioned goals. The concept of distribution automation first came into existence in the 1970s, and since then its evolution has been dictated by the level of sophistication of available monitoring, control, and communication technologies, and by the performance and economic factors associated with the available equipment. The evolution of Supervisory Control and Data Acquisition (SCADA) systems, which have been in use for monitoring generation and transmission systems, has also helped progress in the field of distribution automation. Although distribution systems are a significant part of power systems and progress in computer and communication technology has made distribution automation possible [2-7], advances in distribution control technology have lagged considerably behind advances in generation and transmission control.
Progress in distribution automation has been relatively slow due to the reluctance of utilities to spend money on automation, since many utilities have found it difficult to justify automation purely on cost-benefit numbers. However, distribution automation provides many intangible benefits, which should be given consideration when deciding whether to implement it. Unbundling of electric services in the future is likely to make distribution automation more attractive because distribution companies might be operating as independent entities. Automation allows utilities to implement flexible control of distribution systems, which can be used to enhance the efficiency, reliability, and quality of electric service. Flexible control also results in more effective utilization and life extension of the existing distribution system infrastructure.
In general, the functions that can be automated in distribution systems can be classified into two categories: monitoring functions and control functions. Monitoring functions are those needed to record (1) meter readings at different locations in the system, (2) the system status at different locations, and (3) abnormal-condition events. The data monitored at the system level are useful not only for day-to-day operation but also for system planning. Supervisory Control and Data Acquisition (SCADA) systems perform some of these monitoring functions. The control functions are related to switching operations, such as switching a capacitor or reconfiguring feeders. In addition, system protection can also be a part of overall distribution automation schemes. Some customer-related functions, such as remote load control, automated meter reading (AMR), and remote connect/disconnect, may also be considered distribution automation functions. However, AMR has itself evolved significantly as a separate area.
The functions mentioned above are performed in a relatively slow time frame (minutes to hours), and the devices used for them are not designed to endure frequent switching. Recently, several new devices have been developed which allow rapid control. Application of distribution-level power electronic devices such as the Static Condenser (STATCON) for distribution system control has already been demonstrated. These devices are continuously controlled and respond in real time to system changes. Coordination of a STATCON with a Load-Tap-Changer (LTC) and mechanically switched capacitors reduces fluctuations in system voltage, improving the quality of service.
Electric power quality has become an increasingly problematic area in power distribution systems. Power quality may be defined as "the measurement, analysis, and improvement of bus voltage, usually a load bus voltage, to maintain that voltage to be a sinusoid at rated voltage and frequency". A direct correlation exists between the lack of electric power quality delivered to the customer and the number of complaints received from the customer. As a result, EPRI has directed substantial research efforts into the development of advanced technologies to improve the performance of utility distribution systems. This technology, called custom power, seeks to integrate modern power electronics-based controllers such as the solid-state breaker (SSB), the STATCON, and the Dynamic Voltage Restorer (DVR) with distribution automation and integrated utility communications to deliver a high grade of electric power quality to the end user. Although extremely useful, custom power devices have been used in distribution systems only on a limited basis. Detailed study of these devices and their applications is a separate subject by itself and is beyond the scope of this book.
Demonstration of the feasibility of distribution automation through various pilot projects increased the interest of the technical community in this field. Some of the early pilot projects include the Athens Automation and Control Experiment sponsored by the US Department of Energy, and Electric Power Research Institute (EPRI) sponsored projects at Texas Utilities and Carolina Power & Light. A list of other projects, based on an IEEE survey, is available in a report prepared by the author. The number of manufacturers offering distribution automation equipment increased substantially in the 1990s. Until the early 1990s, reliability of equipment was a major concern. The equipment available now is more reliable and robust than the older generation. However, several issues remain obstacles to widespread implementation of distribution automation, including the cost of equipment, the absence of hardware and software standards, and the availability of application software. Several organizations have been active in promoting open systems and in forming standards for hardware and software relating to distribution automation. Significant amongst these is the work of EPRI in forming and promoting the Utility Communication Architecture (UCA). Standardization will allow users of distribution automation systems to mix and match components from different manufacturers, and also to port software from one platform to another.
Implementation of distribution automation requires careful thinking and planning. As discussed in a presentation, utilities can adopt either the "top-down" approach or the "bottom-up" approach. The top-down approach is revolutionary: a large-scale, fully integrated automation system is installed to automate most or all of the functions performed by various individual devices in the distribution system. The bottom-up approach is evolutionary in the sense that automation devices to perform only a particular function are installed, or only a small part of the system is automated; other functions and other parts of the system are automated gradually.
The top-down approach is expensive and requires major modifications to utility operation; thus, it is suitable for only a few utilities. The bottom-up strategy is more suitable for the majority of utilities. This approach allows utilities to adjust to changes at a more measured pace and to install automated systems for the most immediate needs. However, the most difficult task for a utility contemplating distribution automation is to identify the functions to be automated [3,14]. The needs of every utility depend on its geographic location, operating philosophy, and financial situation. Therefore, careful screening of all the possible control functions is imperative before implementing any of them.
Relationship of DA to SCADA and AM/FM
SCADA systems have been in use in transmission and subtransmission systems for many years, and hence the technology associated with them has become quite mature. The application of SCADA systems in distribution systems is relatively recent. An increased interest in distribution automation has led to increased use of SCADA systems in distribution systems. In fact, many functions performed by SCADA systems, particularly data acquisition, are an integral part of distribution automation. System data is essential for distribution automation because without data, control decisions cannot be made. However, SCADA systems differ from distribution automation systems mainly from the control point of view. In SCADA systems the control is supervisory: an operator looks at the data and decides whether to take control action. In distribution automation systems, most decisions are made by the computer, and the corresponding control actions are performed in real time with very little intervention by the operator.
Except for the control part, SCADA systems are very similar to distribution automation systems. Thus it is natural to think that distribution automation should have been developed on a SCADA platform. However, that has not been the case. Distribution automation grew independently of SCADA, mainly because the communication needs of early distribution automation systems differed from those of the SCADA systems existing at that time. Load control and remote meter reading have always been parts of distribution automation systems; therefore, the communication systems needed for distribution automation required communication between individual customers and the control station. Moreover, SCADA technology was itself not very mature at that time. Thus, developers of distribution automation systems used different software platforms and different languages for their systems. This meant that utilities interested in distribution automation had to learn a new operating environment. Distribution automation system manufacturers have realized this problem and have formed alliances with SCADA manufacturers to integrate the two systems. Moreover, many SCADA manufacturers have entered the distribution automation market, and therefore more integration between the two systems has been noticeable in recent years.
Almost parallel to this development, a development has taken place in the automated mapping and facilities management (AM/FM) arena. The advent of high-powered graphics computers has accelerated progress in this field. Most of the development in the AM/FM area has been in land management, and the pipeline industry has also been making use of this technology. More recently, power companies have started using AM/FM technology. In an AM/FM system, the electrical service maps are superimposed on geographical maps. With the help of these maps and the database associated with them, utilities can manage their distribution facilities more efficiently. Some of the common functions performed by AM/FM systems are distribution system design, facility mapping, right-of-way/permit tracking, facilities inventory, and system and equipment maintenance. Most of these functions do not have the real-time character that is an important ingredient of distribution automation. However, some functions for which the real-time character is important can be performed using AM/FM systems, including outage analysis and system restoration. In the event of an outage, the calls from customers are displayed on the system maps. Then, from the outage pattern, possible causes of the outage are determined. The maps are then used to direct crews to perform switching operations, or switches can be operated remotely.
AM/FM systems are generally very data- and graphics-intensive. In addition to the distribution system data, they also need geographical mapping data. Some of the system data is also used by the distribution automation system. Thus, to make efficient use of the system databases, they can be shared by the different systems. To make such data sharing feasible, the AM/FM system and the distribution automation system can be connected via a computer network. Yet another approach is to fully integrate the AM/FM and distribution automation systems with a server and several workstations. The main drawback of this approach is that it radically changes the operation of the company: it cuts across the operations, planning, billing, and facilities management departments. All these departments will require coordination in operating such a system and will have to learn to use the same operating environment for their different tasks.
Integration of DA, SCADA and AM/FM
The integration of SCADA and distribution automation systems appears to be inevitable. However, full integration of AM/FM with distribution automation/SCADA faces many uncertainties. Computational power and technology have matured enough to allow full integration, but the business practices and needs of the utilities may prevent it. Moreover, AM/FM is a very mature area by itself, and therefore it is likely to maintain a separate identity within utility operations. One can find examples of utilities using either AM/FM systems or distribution automation systems. Those using AM/FM systems have very little distribution automation. Similarly, those using distribution automation systems have no or limited mapping facilities. The choice of one system or the other is based on the importance the utility places on different functions.
An integrated Distribution Management System would require a communications infrastructure to communicate with individual customer locations and control points in the distribution system on one side, and with the Energy Management System on the other. Generally, such a communications infrastructure is a hybrid system utilizing different communication media for different parts. Some of the earlier distribution automation systems used telephone for communication between the control center and the substation, while communication from the substation to the customers and control points was based on power-line carrier or radio. Power-line carrier was an obvious choice because a link to all the points of communication was already available; it was only a question of installing the right equipment. However, power-line carrier based communications suffered from heavy attenuation in certain parts of the distribution system, and its popularity has gradually decreased over time. A related technology, which uses power lines and is based on shifting the zero-crossing of the current waveform, is still being used successfully. Earlier radio systems also had problems because of limited range and their inability to send signals across obstacles such as tall buildings. The development of packet radio technology and the availability of the 900 MHz spectrum to electric utilities have made radio a very popular communication medium. Currently available radio systems can communicate with points in a large area very reliably.
Developments in fiber-optic technology have made it a viable communication medium for certain applications, and its use in distribution systems has been increasing steadily. Cellular telephone technology is also becoming popular for communication in distribution systems. Satellites have been tried as a communications medium by some utilities, but they are currently not widely used. For other communication issues, see the article by Block.
Distribution Automation Functions
Distribution automation functions can in general be divided into two main categories: customer-level functions and system-level functions. The customer-level functions are those which require installation of some device with communication capability at the customer's premises. These include load control, remote meter reading, time-of-use rates, and remote connect/disconnect. The system-level functions are those which relate to system operations. The control and communication devices for these functions are installed at different locations in the system, such as substations and feeders. These functions include fault detection and service restoration, feeder reconfiguration, voltage/var control, etc. In addition to system-operation functions, digital protection of substations and feeders is considered part of distribution automation in some situations.
Many people prefer to subdivide the system-level functions into two groups: substation-related functions and feeder-related functions. In fact, some people consider the domain of distribution automation to include only feeder-level functions, with substation-level functions covered by a separate field called substation automation. Although most of the focus in this chapter will be on feeder-level functions, such a division of functions has not been adopted here; each selected function may be applicable to both substations and feeders. In some situations, the functions at the substation and feeder levels may be performed in a coordinated fashion; for example, the switching of capacitors on the feeders may be coordinated with the switching of capacitors at the substation. However, details of functions related only to substation automation are not included in this chapter. A list of the functions considered follows:
- Outage location and service restoration
- Feeder reconfiguration and transformer balancing
- Extension of transformer lifetimes
- Recloser/breaker monitoring and control
- Capacitor switching for voltage/var control
- Voltage control using regulators
- Substation transformer load-tap-changer (LTC) control
- Distribution system monitoring
These functions can be split into subfunctions, as has been done in the EPRI report; for the sake of simplicity we have decided not to take that approach. Since many functions depend on each other, we had to compromise between two conflicting objectives to finalize this list for determining the cost/benefit associated with these functions. If all the functions which depend on each other are merged into one function, then the user has very little choice. On the other hand, if too much choice is given, then programming becomes difficult and the use of the program also becomes very difficult. Salient features of each of the selected functions are discussed in the sections below. Under each function heading, the manual procedure for that function as well as the automated procedure using distribution automation systems is discussed, along with the benefits associated with automating that function. The material presented here is meant to provide a general overview of these functions; specific methods for implementing some of them are discussed in other chapters.
Outage Location and Service Restoration
A distribution system, particularly an overhead one, is susceptible to various types of faults. In the event of a permanent fault, the protective devices are expected to operate and isolate the faulted section. However, if the fault is of a high-impedance type, the protective devices may not operate to isolate the faulted section, and locating the fault becomes more difficult. In both cases, some customers experience a power interruption. Since no information on the status of various devices in the distribution system is available to the distribution system dispatchers, there is no direct way to find out about outages. Thus, the dispatchers depend on telephone calls from customers, or on a sudden change in power flow at a metered location upstream in the system, to learn of outages. Customers' calls provide only an approximate location of the outage. Moreover, in the case of a major storm, outages can be widespread and difficult to locate. Once the approximate location of an outage is known, line crews are dispatched to drive along the lines to look for damage. After the damaged area is located, it has to be isolated from the rest of the system if the fuse protecting that line has not operated. This is done by first opening the substation breaker and then manually operating the switches or removing the fuses. Coordination between the line crews and the dispatchers is maintained via portable radio to perform this task properly. The next step is to restore power to those parts of the system which are undamaged but have lost power because of problems elsewhere in the system. Power to these parts may be provided from alternate routes. The dispatchers determine such possible routes and ask the line crews to operate the isolators. Most of the isolators cannot be operated under load; therefore, the substation breaker is opened before operating the isolators. Since the whole process is done manually, it takes a long time.
Automation of this function requires installation of remotely controlled sectionalizers on the feeders and installation of sensors on the feeders and/or at customer locations to detect interruption of service. One popular approach is based on gathering outage information from customers via telephone or radio communication. Whenever a sustained interruption takes place, either the affected sensors automatically send the information to the central computer or the calls received from the customers are logged into a database. Many utilities have already implemented some level of automation in handling calls related to power interruptions. Such activity has been called "Trouble Call Analysis" in the utility literature.
The location of the outage is determined by escalating data from the customer level to the substation level. To aid operators in outage location, some utilities also automatically plot the calls received from customers on the system map; from these maps, inferences can be drawn about possible fault locations. More advanced techniques require installation of sensors on the feeders in addition to the customer ends. One such approach requires recording the time of service interruption; the data are processed using a statistical technique to determine outage locations. Once the location is known, the faulted section is isolated from the rest of the system with the help of remotely controlled sectionalizers if the protective devices have not already isolated the faulted part. Subsequently, the switching needed to restore power to unfaulted parts of the system can be accomplished remotely. Moreover, since the location of the outage is known, the crew is sent to the precise location instead of being asked to search a general area. Thus, the whole process of outage location and service restoration can be accomplished more efficiently, by fewer people, in much less time.
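The escalation of customer calls up the feeder hierarchy can be sketched as a simple roll-up: find the most downstream protective device shared by all reporting customers. The customer names, device names, and path data below are purely illustrative assumptions, not drawn from any actual trouble-call system:

```python
# Hypothetical sketch of trouble-call escalation: outage calls are rolled
# up the feeder hierarchy to infer the most likely opened device.
# Each customer is served through a chain of devices, listed from the
# substation breaker outward (all names here are invented).
FEEDER_PATHS = {
    "cust_1": ["breaker_1", "recloser_A", "fuse_A1"],
    "cust_2": ["breaker_1", "recloser_A", "fuse_A2"],
    "cust_3": ["breaker_1", "recloser_B", "fuse_B1"],
}

def infer_outage_device(calls):
    """Return the most downstream device common to all reporting customers."""
    paths = [FEEDER_PATHS[c] for c in calls]
    common = None
    for path in paths:
        common = set(path) if common is None else common & set(path)
    # Pick the common device farthest from the substation.
    return max(common, key=lambda d: paths[0].index(d))

print(infer_outage_device(["cust_1", "cust_2"]))  # both behind recloser_A
print(infer_outage_device(["cust_1", "cust_3"]))  # only breaker_1 in common
```

A real trouble-call system would also weight the statistical likelihood of each device having operated, since not every affected customer calls in.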
Feeder Reconfiguration and Transformer Balancing
The load in a distribution system varies by hour, by day, and by season. For every load level, the system has an optimal configuration of feeders. Traditionally, optimality has been defined in terms of minimum losses, but restructuring of the utility industry has made service reliability a more important criterion for system operation. Hence, optimality can be defined either in terms of maximum reliability or as a weighted combination of reliability and losses. Moreover, total transformer losses can be minimized if the substation transformers are loaded in proportion to their capacity instead of loading some transformers very heavily and others very lightly. In a manual system, reconfiguration is done on a seasonal basis, at most a few times a year. Since such reconfiguration may require several manual switching operations, it is not feasible to do it more frequently.
Reconfiguration of the system for reliability and loss reduction can be accomplished in an automated mode using the same sectionalizers that are used for fault isolation and service restoration; the only additional requirement is application software. Since the operation of the sectionalizers is controlled remotely, system reconfiguration can be done as frequently as the dispatcher desires. From a practical point of view, however, reconfiguration once every few hours would be sufficient; the additional benefit of more frequent reconfiguration would be minimal.
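As a sketch of the loss-minimization idea, the brute-force example below assigns transferable load sections to one of two feeders and keeps the configuration with the lowest approximate I²R losses. The loads, lumped feeder resistances, and two-feeder layout are invented for illustration; practical reconfiguration software uses full load-flow calculations and must respect radiality and capacity constraints:

```python
# Illustrative brute-force feeder reconfiguration: try each feasible
# assignment of transferable loads and keep the lowest-loss one.
# All numbers below are made-up example data.
from itertools import product

LOADS = [120.0, 80.0]               # load-section currents in amps
R_FEEDER = {"A": 0.15, "B": 0.25}   # lumped feeder resistances in ohms

def losses(assignment):
    """I^2*R losses when each load is assigned to feeder 'A' or 'B'."""
    total = 0.0
    for feeder in "AB":
        current = sum(l for l, f in zip(LOADS, assignment) if f == feeder)
        total += current ** 2 * R_FEEDER[feeder]
    return total

best = min(product("AB", repeat=len(LOADS)), key=losses)
print(best, round(losses(best), 1))  # splitting the loads beats lumping them
```

Even this toy case shows why reconfiguration matters: putting both loads on the lower-resistance feeder would roughly double the losses compared with the best split.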
Transformer Life Extension
Substation transformers normally operate at loads lower than their capacity. However, during emergencies, such as the failure of another transformer, they can be operated at loads higher than the rated capacity. Such overloading can be sustained only for a limited time without jeopardizing the life of the transformer: the higher the overloading, the shorter the time allowed. In a manual process, the dispatcher has to rely on trial and error to reach a proper level of loading. The dispatcher would close the switch to supply additional load with the expectation that the total load would be less than a certain value. But if the load after switching happens to be higher than expected, the dispatcher would have to open the switch, drop a few feeders, and then close the switch again. The process would have to be repeated until the load is at a desired level. This switching of load on and off can stress the transformer significantly and thus reduce its total life. Using an automated procedure, this task can be performed without stressing the transformer.
Automation of this function requires equipment for monitoring the transformer, including oil and winding temperatures. Equipment for monitoring the health of the transformer based on dissolved gas analysis is also available. Data and measurements from the feeders connected to the transformer are needed too. The oil and winding temperatures determine the level of overloading possible under the given loading conditions. The feeders can then be selected such that there is a balance between the desired loading and the loads of the feeders. Thus, overloading of the transformers can be controlled precisely without too many unwanted switching operations; stress on the transformers can be avoided and life extension achieved.
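A minimal sketch of the overload-time relationship: the allowable overload duration shrinks as loading rises above nameplate. The table values below are invented placeholders; actual limits come from the transformer's thermal model and its measured oil and winding temperatures, as described above:

```python
# Hedged sketch of overload-duration estimation. The (per-unit loading,
# allowable hours) pairs are illustrative only, not from a loading guide.
OVERLOAD_TABLE = [(1.0, float("inf")), (1.2, 8.0), (1.4, 2.0), (1.6, 0.5)]

def allowable_hours(loading_pu):
    """Linearly interpolate the allowable overload time for a loading."""
    if loading_pu <= 1.0:
        return float("inf")   # at or below nameplate: no time limit
    for (l0, t0), (l1, t1) in zip(OVERLOAD_TABLE, OVERLOAD_TABLE[1:]):
        if loading_pu <= l1:
            if t0 == float("inf"):
                return t1     # conservatively use the next table point
            frac = (loading_pu - l0) / (l1 - l0)
            return t0 + frac * (t1 - t0)
    return 0.0                # beyond the table: no sustained overload

print(allowable_hours(1.3))   # midway between the 8 h and 2 h entries
```

An automated scheme would evaluate such a limit continuously from live temperature data and pick up feeders so that the resulting loading stays within the allowed duration, avoiding the trial-and-error switching of the manual process.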
Recloser/Breaker Monitoring and Control
In the manual mode, no remote monitoring and control is available for the breakers and reclosers. The relay settings and recloser timings can be changed only by going to the location of the equipment; in the case of pole-mounted reclosers, changing settings is extremely time consuming. Further, since no monitoring is available, the recloser and breaker contacts are refurbished at fixed intervals whether it is necessary or not. This maintenance frequency is usually based on the duty level the recloser or breaker is expected to perform. Generally, the maintenance interval is estimated conservatively (i.e., refurbishments are made, on average, sooner than necessary). Hence, in many cases the contacts are serviced before it is necessary.
The advantages of automating this function are many. In an automated scenario, first, the relay settings and recloser timings can be set remotely. This allows better control of the system whenever the system configuration changes, and the labor needed to reset the relay and recloser timings is saved because the settings can be changed remotely instead of at the equipment location. Second, monitoring the energy interrupted by the recloser or breaker can provide a precise estimate of the health of the contacts. Using this information, refurbishing of the contacts can be scheduled whenever necessary. Hence, servicing the recloser and breaker contacts too early or too late can be avoided.
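The contact-health idea can be sketched by accumulating the interrupted duty across operations and flagging the device for refurbishment once a wear threshold is crossed. The I²t duty measure, the threshold, and the interruption records below are illustrative assumptions, not values from any manufacturer:

```python
# Illustrative condition-based maintenance sketch: accumulate interrupted
# duty (here a simple sum of I^2 * t per operation) and flag the contacts
# for service once a wear threshold is crossed. Numbers are invented.
WEAR_LIMIT = 5.0e8   # accumulated A^2*s duty before contacts need service

def accumulated_duty(interruptions):
    """interruptions: list of (fault_current_amps, clearing_time_s)."""
    return sum(i ** 2 * t for i, t in interruptions)

def needs_refurbishment(interruptions, limit=WEAR_LIMIT):
    return accumulated_duty(interruptions) >= limit

history = [(4000.0, 0.1), (6500.0, 0.08), (12000.0, 0.15)]
print(round(accumulated_duty(history)))  # duty accumulated so far
print(needs_refurbishment(history))      # well below the example limit
```

The point of such monitoring is exactly what the text describes: maintenance is scheduled by measured duty rather than by a conservative fixed interval.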
Capacitor Switching for Voltage and Reactive Power Control
Capacitors are used in distribution systems for voltage and reactive power support. The capacitors are placed at strategic locations to improve overall system operation, and can be of fixed or switched type. The switched capacitors switch on or off upon receipt of a signal from a controlling device attached to the capacitor. This control device may be a timer, a temperature-sensitive relay, a voltage-sensitive relay, a current-sensitive relay, a reactive-power-sensitive relay, or a combination of the above. Timers are set based on an assumed load curve; however, on a given day the load may not follow the assumed curve, and a timer does not discriminate between a working day and a holiday. A temperature-sensitive device is set based on the assumption that the load is high when the temperature is high, because air conditioning demand goes up during hot weather. This type of control does not work very well because there is a lag of a few hours between the outside temperature and the air conditioning load, due to the thermal inertia of the houses. The other types of controls also have problems, which are discussed in the available literature.
To alleviate some of the problems associated with the above-mentioned schemes for controlling capacitors, many companies have introduced microprocessor-based controllers. In some cases, these controllers include a facility for communication with the central station. They can be programmed to use a combination of several factors to switch the capacitors, and they perform significantly better than conventional controllers. However, they do not provide the optimal capacitor configuration: a major drawback of these controllers is that they respond only to the local conditions at the location of the capacitor, without taking into account the impact of switching a capacitor on other parts of the system.
An optimal capacitor configuration can be obtained by implementing a higher level of automation in which the switching of all the capacitors is coordinated under different load conditions. In such a scheme, meters are needed at different locations to measure real and reactive power, voltage, and current. The metered data and the status of the capacitors are sent to the central computer via communication lines. The computer then determines the optimal switching configuration of the capacitors for the measured system conditions; under the optimal configuration, system losses are kept at a minimum. Since the system has real-time measurement capabilities, the switching configuration can be changed as frequently as desired. From a practical point of view, it is not desirable to switch capacitors too frequently, to prevent failure of switches or capacitors and due to power quality concerns. In the future, however, power electronics-based schemes will be available for control of reactive power, which will eliminate the above-mentioned concerns associated with mechanical switching of capacitors.
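With only a handful of switched banks, the central computer's search can be sketched as a brute-force scan of all on/off combinations, keeping the one that best cancels the measured reactive demand. The bank sizes and demand figure below are invented; a real scheme would optimize losses and voltage across the whole measured system rather than a single mismatch figure:

```python
# Illustrative centrally coordinated capacitor switching: enumerate every
# on/off combination of the switched banks and keep the one that best
# cancels the measured feeder reactive demand. Values are made up.
from itertools import product

BANK_KVAR = [300, 600, 600]   # switched capacitor bank sizes in kvar

def best_configuration(q_demand_kvar):
    """Return the on/off statuses minimizing the remaining kvar mismatch."""
    def mismatch(status):
        supplied = sum(b for b, on in zip(BANK_KVAR, status) if on)
        return abs(q_demand_kvar - supplied)
    return min(product([False, True], repeat=len(BANK_KVAR)), key=mismatch)

print(best_configuration(850))  # 300 + 600 kvar comes closest to 850 kvar
```

For realistic numbers of banks the exhaustive scan becomes too expensive, which is why practical var-dispatch programs use optimization techniques rather than enumeration.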
Regulator Operation for Voltage Control
Voltage regulators are used in distribution systems for finer control of voltage, particularly on long distribution lines where voltage drops are high. These regulators are set to maintain voltage within a specified band: when the voltage falls below the low setting, the tap on the regulator moves to increase the number of turns on the output side, and when the voltage rises above the high setting, the tap moves to reduce the number of turns on the output side. The regulators are set to regulate the voltage at a specified point on the downstream side. Since actual measurements are not available, the line impedance from the regulator to the regulated point and the current measured at the regulator are used to estimate the voltage at the regulated point. This method is called line drop compensation in the literature.
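The line drop compensation estimate described above can be sketched as follows, using scalar magnitudes and an assumed power factor. The voltage, impedance, and load-current values are illustrative, and a real implementation would work with complex phasors:

```python
# Hedged sketch of line drop compensation: the regulator estimates the
# voltage at a downstream regulated point from its own measurements and
# the known line impedance. Magnitudes only; example values are invented.
import math

def estimated_downstream_voltage(v_reg, i_line, r_line, x_line, pf=0.9):
    """Approximate |V| at the regulated point: V - I*(R*cos(phi) + X*sin(phi))."""
    sin_phi = math.sqrt(1.0 - pf ** 2)
    return v_reg - i_line * (r_line * pf + x_line * sin_phi)

# 7.2 kV at the regulator, 150 A load, 1.0 + j2.0 ohms to the regulated point.
print(round(estimated_downstream_voltage(7200.0, 150.0, 1.0, 2.0), 1))
```

The regulator then compares this estimate, rather than its own terminal voltage, against the band settings when deciding whether to move the tap.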
It is quite obvious that if distribution automation is implemented, the actual voltage at the regulated point can be metered and used in the control of regulators. Moreover, operation of regulators can be coordinated with capacitor switching to reduce losses and to obtain a better voltage profile on the feeders under different load conditions. A better voltage profile on the feeders will result in fewer low-voltage complaints from customers.
Remote control of regulators also provides an advantage during emergencies. Since load is directly correlated with voltage, load can be reduced during emergencies by lowering the voltage, overriding the normal operation of the regulator. Such control is not available without automation.
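The load relief obtainable from voltage reduction can be illustrated with a common exponential load model. The exponent here is an assumption; measured feeders typically fall between constant-power (n = 0) and constant-impedance (n = 2) behavior:

```python
# Illustrative exponential load model: P = P0 * (V / V0) ** n. The exponent n
# is an assumption; real feeders fall between constant power (n = 0) and
# constant impedance (n = 2).
def load_at_voltage(p_nominal_mw, v_pu, v_nominal_pu=1.0, n=1.5):
    """Power drawn by a voltage-sensitive load at per-unit voltage v_pu."""
    return p_nominal_mw * (v_pu / v_nominal_pu) ** n
```

For a 10 MW feeder, a 5% voltage reduction with n = 1.5 yields about 9.26 MW, roughly 7.4% load relief, which is the emergency effect described above.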
Transformer LTC Control
The substation transformers have load tap changer (LTC) controls, which change the tap position in response to load. Since higher load results in a higher voltage drop, the tap moves to a higher position to maintain the voltage at the proper level on the feeders. Similarly, under low load conditions, the tap moves to a lower setting to compensate for the rise in voltage due to the smaller voltage drop on the line. If the substation has two transformers operated in parallel, the LTC control devices on the two transformers coordinate with each other to maintain the same output voltage and prevent circulating current in the transformers. Existing control devices work quite well; therefore, there is little benefit from further automating this function using remote control. However, since the new control devices are digital, they may need less maintenance, and diagnosis of malfunctioning devices will be easier. A major advantage of remote control of the LTC becomes realizable under emergency situations. As mentioned in the previous section, load can be reduced by reducing the voltage. Hence, using remote control, the tap on the transformer can be moved to a lower setting under emergency conditions to alleviate load. Visits to the substation, which would otherwise be necessary to manually override the control and set the tap to a lower value, are thus avoided.
Distribution System Monitoring
The purpose of distribution system monitoring is very similar to that of SCADA in the traditional sense. Monitoring is necessary to acquire data for many of the distribution functions. Some of these functions require real-time data from the system to make control decisions. Real-time data is also useful in providing information to operators on abnormal system conditions in the form of alarms. In addition to the real-time data, system data can be gathered and archived for later use. Such data can then be used for forecasting and planning purposes. As defined in the EPRI report, there are three components of distribution system monitoring, namely, data monitoring, data logging, and analog data freeze.
The main purpose of data monitoring is to maintain system databases for alarms, the user interface, and logging. Thus, under abnormal system conditions, alarms can be annunciated to alert the operator to those conditions. In addition to the alarm, operators are provided with the relevant data, which they can use to take corrective actions.
The main purpose of data logging is to prepare printed reports of system operating conditions or events for future use. The data types that can be logged using this function are varied, for example: alarms and their summaries; periodic logs, such as the off-normal summary, substation bus voltage log, and tagged and out-of-service equipment; on-demand logs, such as present values of variables, limits, settings, and status; logs of operator control actions; one-line diagrams of substations and real-time feeder configurations; fault reports; and sequence-of-events logs.
The analog data freeze function gives a "snapshot" of the quantities of interest. This function can be set to start capturing pertinent data based on threshold values of certain variables. Thus, the system conditions prior to a disturbance can be obtained. This information can be used by the operators to restore the system to its original conditions following a disturbance. The operators can also freeze the data during normal operating conditions. Such data can be used by the operators to study the system and for planning purposes.
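The freeze behavior described above can be sketched as a rolling buffer that locks in the pre-disturbance window when a variable crosses its threshold. The window size and threshold below are illustrative:

```python
from collections import deque

# Sketch of the analog data freeze function: a rolling window of time-tagged
# samples is frozen when a monitored variable crosses its threshold, preserving
# pre-disturbance conditions for later operator analysis.
class DataFreeze:
    def __init__(self, window=10, threshold=1.05):
        self.buffer = deque(maxlen=window)  # rolling pre-disturbance window
        self.threshold = threshold
        self.snapshot = None

    def sample(self, t, value):
        self.buffer.append((t, value))
        if self.snapshot is None and abs(value) > self.threshold:
            # freeze: later samples no longer alter the captured snapshot
            self.snapshot = list(self.buffer)
```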
Advanced Distribution Automation
Although we have seen some progress in DA implementation, business uncertainties due to deregulation and restructuring of the power industry slowed implementation of distribution automation over the past 15 years. Now there is renewed interest in distribution automation due to the emergence of new technologies, specifically new measuring devices and sensors, more powerful and refined communication equipment, highly advanced computing equipment, advanced power electronics equipment, and new control and protection ideas. Efficiency improvement was the main driver of distribution automation in its initial stages. Now, distribution automation has to address enhancements in efficiency as well as in the reliability and quality of power distribution. Today the utilities are more concerned about improving reliability, due to the implementation of performance-based rates, and about improving power quality, due to its impact on sensitive loads. Hence, new tools to quantify the benefits of distribution automation are needed. These tools should be able to include the functionality of new devices and the benefits they provide. Specific tools that need attention for implementation of advanced distribution automation (ADA) include tools for cost/benefit evaluation, system analysis, and reliability evaluation. The following issues must be considered while developing these tools.
Features of Automation Devices
The automation devices available now have significantly higher capability than those of the past. The same device can perform multiple tasks. For example, a device that is used to control equipment in the field can also gather system data and provide a protection function. Additionally, these devices can have local intelligence, which can be used to filter data or to make local decisions. Since the devices have multiple capabilities, cost/benefit analysis of automation functions becomes more complex. In the past, separate devices were used for each function, so the cost of a specific device was allocated to that function. The new tools must consider different automation functions in an integrated manner for cost/benefit evaluation.
Value of Higher Reliability and Quality to Customers
Different customers need different levels of reliability and quality, and some are willing to pay for them. ADA can be implemented to provide higher reliability and quality. Some utilities have implemented power quality parks to attract customers with sensitive needs to locate in these parks. However, before these parks are built, utilities need to know their customer base, that is, how many potential customers there are and how much they are willing to pay for the premium service. Some form of direct survey of the customer base is needed to obtain this information. It is easier for the utility to provide premium service if all the customers that need such service are located in a physically contiguous area. It is much more difficult and expensive to provide premium service to a customer served from a feeder where nobody else needs it.
Probabilistic Nature of Failure Rates
In addition to enabling premium service, ADA can help improve the service reliability of the system. Different parts of distribution systems experience different levels of failures due to exposure to different elements. For example, feeders exposed to a higher number of trees have a greater likelihood of failures. Similarly, wind, lightning, and animals can cause failures. Although higher exposure to these elements increases the probability of failures, the actual failures occur quite randomly. Mapping the impact of various elements on the failure rates of distribution feeders is a difficult task. However, such mapping will allow identification of feeders with a higher probability of failure. Once these feeders are identified, they can be targeted for improvements, including application of ADA.
ADA requires faster decisions and thus real-time analysis of distribution systems. A distribution state estimator is an example of the analysis tools needed for ADA. The input data for analysis include the system topology, parameters of different components in the system, status of switches and breakers, and measured data from various points in the system. Since more data can be measured, the analysis becomes more complex. The tools should be able to use these data effectively.
Distributed vs. Central Computational Intelligence
Since large amounts of data are expected in ADA implementation, computational burden can become very large. Also, there is a possibility of communication bottlenecks due to transmission of large quantities of data. Distributed computational intelligence must be utilized to avoid such bottlenecks. The system can have several such devices located at strategic locations. Data from specific parts of the system is directed to the selected devices. These devices process the information to make local decisions. Each device also communicates important information to other distributed devices as well as the central computer. The central computer receives data from each distributed device and then processes the data to make global decisions for the system. An appropriate balance between local and central computational intelligence is needed for an efficient ADA scheme.
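The local/central split might be sketched as follows; the class names, the overload rule, and the summary fields are all illustrative assumptions:

```python
# Sketch of distributed vs. central intelligence: a local controller makes its
# own decisions and forwards only a compact summary to the central computer,
# which aggregates summaries for global decisions. All names are illustrative.
class LocalController:
    def __init__(self, name, overload_limit=400.0):
        self.name = name
        self.overload_limit = overload_limit
        self.readings = []

    def process(self, amps):
        """Make a local decision and return a summary for the central computer."""
        self.readings.append(amps)
        action = "trip" if amps > self.overload_limit else "none"
        return {"device": self.name, "peak": max(self.readings), "action": action}

class CentralComputer:
    def __init__(self):
        self.summaries = {}

    def receive(self, summary):
        self.summaries[summary["device"]] = summary

    def system_peak(self):
        """Global decision input: highest reported loading across devices."""
        return max(s["peak"] for s in self.summaries.values())
```

The design point is that raw samples stay local, avoiding the communication bottleneck; only decision-relevant summaries travel to the center.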
Distribution Sensor Requirements
Advances in sensor technology are making new sensors available for deployment in distribution systems. These sensors can provide information that was not available in the past. If the cost of sensors is low, large quantities can be placed at critical locations in the system. The information available from these sensors can be used to implement new applications.
Asset management is an important aspect of a utility's operation in the present business environment. ADA can enhance asset management for utilities. The new sensors can monitor the condition of equipment, which can be used to schedule maintenance. Real-time analysis based on measured data provides real-time loading of equipment. This information can be used to manage loading of equipment efficiently and thus enhance utilization of the assets.
Advanced Communication and Internet Technology
Communication technology has seen rapid advances in recent years. Better and more effective communication equipment utilizing fiber optics, satellites, and radio is available. In addition, the Internet is available for web-based applications. ADA schemes should utilize the new communication media and the Internet. The communication media should be appropriate for the intended application.
1. K. Clinard and John Redmon, Editors, Distribution Management Tutorial, IEEE PES Winter Meeting, Tampa, FL, February 1998.
2. A. Pahwa and J.K. Shultis, Assessment of the Present Status of Distribution Automation, Report No. 238, Engineering Experiment Station, Kansas State University, Manhattan, KS, March 1992.
3. D. Bassett, K. Clinard, J. Grainger, S. Purucker, and D. Ward, Tutorial Course: Distribution Automation, IEEE Publication 88EH0280-8-PWR.
4. T. Moore, "Automating the Distribution Network," EPRI Journal, September 1984, pp. 22-28.
5. T. Moore, J.B. Bunch, Guidelines for Evaluating Distribution Automation, EPRI Report EL-3728, November 1984.
6. T. Kendrew, "Automated Distribution," EPRI Journal, January/February 1990, pp.46-48.
7. J.B. Bunch, Guidelines for Evaluating Distribution Automation, EPRI Report EL-3728, November 1984.
8. J.S. Paserba, N.W. Miller, S.T. Naumann, M.G. Lauby, and F.P. Sener, "Coordination of a Distribution Level Continuously Controlled Compensation Device with Existing Substation Equipment for Long Term Var Management," Paper No. 93 SM 437-4 PWRD, IEEE PES Summer Meeting, Vancouver, Canada, July 1993.
9. G.T. Heydt, Electric Power Quality, Stars in a Circle Publications, West Lafayette, IN, 1991.
10. J. Douglas, "Power Quality Solutions," IEEE Power Engineering Review, v. 14, no. 3, March 1994.
11. P.A. Gnadt and J.S. Lawer, Editors, Automating Electric Utility Distribution System: The Athens Automation and Control Experiment, Prentice-Hall Advanced Reference Series, Prentice-Hall, Upper Saddle River, NJ, 1990.
12. Proceedings: Transmission and Distribution Automation Systems, EPRI Report EL-6762, March 1990.
13. E.A. Udren and J.R. Benckenstein, "Protective Relaying in Integrated Distribution Substation Control Systems," Presentation for Panel Session on Integration of Demand-Side Management and Distribution Automation, IEEE Power Engineering Society Winter Meeting, Atlanta, Georgia, February 1990.
14. E.H. Davis, S.T. Grusky, and F.P. Sioshansi, "Automating the Distribution System: An Intermediary View for Electric Utilities," Public Utilities Fortnightly, January 19, 1989, pp. 22-27.
15. C.D. Leibrandt and R.A. Rhodes, "Integration of SCADA and AM/FM Systems," T&D Automation EXPO’91, March 1991.
16. D. Block, "Utility Automation Technology," Electric Power Industry Outlook and Atlas 1997 to 2001, PennWell Books, Tulsa, OK, 1996.
17. P.D. Rodrigo, A. Pahwa, and J.E. Boyer, "Location of Outages in Distribution Systems Based on Hypotheses Testing," IEEE Transactions on Power Delivery, January 1996, pp. 546-551.
18. B.W. Coughlan, D.L. Lubkeman, and J. Sutton, "Improved Control of Capacitor Bank Switching to Minimize Distribution System Losses," Proceedings of the Twenty-Second Annual North American Power Symposium, October 1990, pp. 336-345.
19. J.K. Shultis and A. Pahwa, Economic Models for Cost/Benefit Analysis of Eight Distribution Automation Functions, Report No. 234, Engineering Experiment Station, Kansas State University, Manhattan, KS, June 1992.
20. H.L. Willis, Power Distribution Planning Reference Book, Marcel Dekker, Inc., New York, NY, 1997.
Section 2 - Demand Side Management
Section 3 - Communication System Characteristics
The text below supports the above presentations.
Modern electric power systems have been dubbed "the largest machine made by mankind" because they are both physically large – literally thousands of miles in dimension – and operate in precise synchronism. In North America, for example, the entire West Coast, everything east of the Rocky Mountains, and the State of Texas operate as three autonomous interconnected "machines". The task of keeping such a large machine functioning without breaking itself apart is not trivial. The fact that power systems work as reliably as they do is a tribute to the level of sophistication that is built into them. Communication systems play a vital role in power system operation.
The choice of a communications system for Distribution Automation (DA) is driven by the business functions it must support. These functions are as diverse as customer meter reading, customer load control, power and service quality monitoring, feeder status monitoring, feeder switch control and monitoring, supervisory monitoring and control of feeder automation systems (SCADA functionality), and provision of peer-to-peer communication for feeder automation systems. Each of these functional requirements drives a corresponding communication requirement and, in turn, the selection of the appropriate communication technology for any particular utility application.
This chapter of the tutorial starts out with a brief review of the history of supervisory control, followed by an examination of the process by which business requirements work to drive communication requirements and ultimately the selection of communication technologies. It concludes by examining the characteristics of some of the popular and emerging communication technologies which are available in the marketplace.
Supervisory Control and Data Acquisition (SCADA) Historical Perspective
Electric power systems as we know them began developing in the early 20th century. Initially generating plants were associated only with local loads that typically consisted of lighting and electric transportation. If anything in the system failed – generating plant, power lines, or connections – the lights would quite literally be "out". Customers had not yet learned to depend on electricity being nearly 100% reliable, so outages, whether routine or emergency, were taken as a matter of course.
As reliance on electric power grew, so did the need to find ways to improve reliability. Generating stations and power lines were interconnected to provide redundancy, and higher voltages were used for longer-distance transportation of electricity. Points where power lines came together or where voltages were transformed came to be known as "substations". A "Distribution System" used "Feeders" to connect substations to the customer loads. Substations and feeders often employed protective devices so that system failures could be isolated and faults would not bring down the entire system. Operating personnel were often stationed at substations so that they could monitor and quickly respond to any problems that might arise. They would communicate with central system dispatchers by any means available – often by telephone – to keep them apprised of the condition of the system. Such "manned" substations were normative throughout the first half of the 20th century.
As the demands for reliable electric power became greater and as labor became a more significant part of the cost of providing electric power, technologies known as "Supervisory Control and Data Acquisition", or SCADA for short, were developed which would allow remote monitoring and even control of key system parameters. SCADA systems began to reduce and even eliminate the need for personnel to be on-hand at substations.
Early SCADA systems provided remote indication and control of substation parameters using technology borrowed from automatic telephone switching systems. As early as 1932, Automatic Electric was advertising "Remote-Control" products based on its successful line of "Strowger" telephone switching apparatus (see Figure 1). Another example (used as late as the 1960s) was an early Westinghouse REDAC system that used telephone-type electromechanical relay equipment at both ends of a conventional "twisted-pair" telephone circuit. Data rates on these early systems were slow: data was sent in the same manner as rotary-dial telephone commands, at ten bits per second, so only a limited amount of information could be passed using this technology.
Early SCADA systems were built on the notion of replicating remote controls, lamps, and analog indications at the functional equivalent of pushbuttons, often placed on a mapboard for easy operator interface. The SCADA masters simply replicated, point-for-point, control circuits connected to the remote, or slave, unit.
During the same timeframe as SCADA systems were developing, a second technology – remote teleprinting, or "Teletype" – was coming of age, and by the 1960s had gone through several generations of development. The invention of a second device – the "modem" (MOdulator / DEModulator) allowed digital information to be sent over wire pairs which had been engineered to only carry the electronic equivalent of human voice communication. With the introduction of digital electronics it was possible to use faster data streams to provide remote indication and control of system parameters. This marriage of Teletype technology with digital electronics gave birth to "Remote Terminal Units" (RTU’s) which were typically built with discrete solid-state electronics and which could provide remote indication and control of both discrete events and analog voltage and current quantities.
Beginning also in the late 1960s and early 1970s technology leaders began exploring the use of small computers (minicomputers at that time) in substations to provide advanced functional and communication capability. But early application of computers in electric substations met with industry resistance because of perceived and real reliability issues.
The introduction of the microprocessor with the Intel 4004 in 1971 (see http://www.intel4004.com for a fascinating history) opened the door for increasing sophistication in RTU design that is still continuing today. Traditional point-oriented RTU’s that reported discrete events and analog quantities could be built in a fraction of the physical size required by previous discrete designs. More intelligence could be introduced into the device to increase its functionality. For the first time RTU’s could be built which reported quantities in engineering units rather than as raw binary values. One early design developed at Northern States Power Company in 1972 used the Intel 4004 as the basis for a "Standardized Environmental Data Acquisition and Retrieval (SEDAR)" system which collected, logged, and reported environmental information in engineering units using only 4 kilobytes of program memory and 512 nibbles (half-bytes) of data memory.
While the microprocessor offered the potential for greatly increased functionality at lower cost, the industry also demanded very high reliability and a long service life measured in decades, which were difficult to achieve with early devices. Thus the industry was slow to accept the use of microprocessor technology in mission-critical applications. By the late 1970s and early 1980s, integrated microprocessor-based devices were introduced which came to be known as "Intelligent Electronic Devices", or IED's.
Early IED’s simply replicated the functionality of their predecessors – remotely reporting and controlling contact closures and analog quantities using proprietary communication protocols. Increasingly, IED’s are being used also to convert data into engineering unit values in the field and to participate in field-based local control algorithms. Many IED’s are being built with programmable logic controller (PLC) capability and, indeed, PLC’s are being used as RTU’s and IED’s to the point that the distinction between these different types of smart field devices is rapidly blurring.
Early SCADA communication protocols were usually proprietary in nature and were also often kept secret from the industry. A trend beginning in the mid-1980s has been to minimize the number of proprietary communication practices and to drive field practices toward open, standards-based specifications. Two noteworthy pieces of work in this respect are the International Electrotechnical Commission (IEC) 60870-5 family of standards and the IEC 61850 standard. The IEC 60870-5 work represents the pinnacle of the traditional point-list-oriented SCADA protocols, while the IEC 61850 standard is the first of an emerging approach to networkable, object-oriented SCADA protocols based on work started in the mid-1980s by the Electric Power Research Institute which became known as the Utility Communication Architecture (UCA).
Communications System Functional Requirements
Design of any communication system should always be preceded by a formal determination of the business and corresponding technical requirements that drive the design. Such a formal statement is known as a "Functional Requirements Specification". Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform.
In the case of Distribution SCADA it will contain such information as system status points to be monitored, desired control points, analog quantities to be monitored, and identification of customer metering and control points. It will also include identification of acceptable delays between when an event happens and when it is reported, required precision for analog quantities, and acceptable reliability levels. The functional requirements analysis will also include a determination of the number of remote points to be monitored and controlled. It should also include identification of all communication stakeholders. These might include (for Distribution Automation) the control center, the customer billing office, and technical support and planning personnel. It may also include stakeholders as diverse as the customer himself if services such as Internet-accessible meter reading and power quality information are to be offered.
The functional requirements analysis should also include a formal recognition of the physical, electrical, communications, and security environment in which the communications is expected to operate. Considerations here include recognizing the possible (likely) existence of electromagnetic interference from nearby power systems, identifying available communications facilities, identifying functionally the locations between which communication is expected to take place, and identifying communication security threats which might be presented to the system.
It is sometimes difficult to identify all of the items to be included in the functional requirements, and a technique which has been found useful in the industry is to construct a number of example "use cases" which detail particular individual sets of requirements. Aggregate use cases can form a basis for a more formal collection of requirements.
After the functional requirements have been articulated, the corresponding architectural design for the communication system can be set forth. Communication requirements include those elements which must be included in order to meet the functional requirements.
Some elements of the communication requirements include:
- Identification of communication traffic flows – source/destination/quantity
- Overall system topology – e.g., star, mesh
- Identification of end system locations
- Device/Processor Capabilities
- Communication Session/Dialog Characteristics
- Device Addressing schemes
- Communication Network Traffic Characteristics
- Performance Requirements
- Timing Issues
- Application Service Requirements
- Application Data Formats
- Operational Requirements (Directory, Security, and Management of the network)
- Quantification of electromagnetic interference withstand requirements
Distribution Automation Communication Requirements
Distribution Automation communication requirements are driven by business functional requirements which may include, but are not limited to, the following:
- Feeder status monitoring
- Feeder voltage quality monitoring
- Reactive power monitoring (capacitor bank monitoring)
- Managing reactive compensation (capacitor bank switching)
- Feeder switch control
- Feeder sectionalizer and recloser control
- Supervisory control of feeder fault isolation schemes
- Provide communication channels for fault isolation schemes
- Monitor customer power quality
- Read customer meters for total usage
- Read customer time-of-use usage
- Control end-use loads according to predetermined schedules
- Control end-use loads according to system conditions, such as peak load periods
As discussed above, as the desired functional requirements are identified, they should be translated to a corresponding set of communication requirements which include identifying the needed communication paths, the required volume of data, acceptable delays and error rates, and any cost constraints. It may be useful for this phase of the analysis to compile "use cases" and to show data flows in pictorial form. Only after this analysis is complete can suitable communication systems be identified.
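One way to make this translation concrete is to record, for each business function, the derived communication attributes. Every figure below is an illustrative assumption for the sketch, not a standard or recommended value:

```python
# Illustrative mapping from DA business functions to derived communication
# requirements. Paths, sizes, intervals, and delay bounds are all assumptions.
REQUIREMENTS = {
    "feeder_status_monitoring": {
        "path": "field device -> control center",
        "bytes_per_report": 64,
        "report_interval_s": 2,
        "max_delay_s": 2.0,
    },
    "customer_meter_reading": {
        "path": "meter -> billing office",
        "bytes_per_report": 256,
        "report_interval_s": 86400,   # daily read
        "max_delay_s": 3600.0,
    },
}

def required_throughput(function_name, device_count):
    """Aggregate steady-state throughput (bytes/s) needed for one function."""
    r = REQUIREMENTS[function_name]
    return device_count * r["bytes_per_report"] / r["report_interval_s"]
```

Summing such figures over all functions sharing a channel gives a first-cut data-volume requirement against which candidate communication technologies can be screened.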
The paragraphs below discuss several data communication systems which can be used for various utility applications. In order to show the breadth of communication systems which are offered for utility use and also to allow this reference to be applied as technologies mature and their economic applications broaden, this list deliberately includes some technologies which are not presently considered suitable for distribution automation application.
Electric utilities use a combination of analog and digital communications systems for their operations, consisting of power line carrier, radio, microwave, leased phone lines, satellite systems, and fiber optics. Each of these systems has characteristics that make it well-suited to particular applications. The advantages and disadvantages of each are briefly summarized below:
- Power line communications is a popular choice for distribution automation because it provides communications wherever the power lines are located. Disadvantages include the fact that communication will be disrupted by disturbances in the distribution line, and switching of the distribution line will cause communication routing to change.
- Microwave radio systems have been traditionally applied only in point-to-point station or substation communications with wide bandwidths. But emerging commercial products may make them useful in smaller applications in the distribution environment. Microwave is useful for general communications for all types of applications.
- Radio systems provide narrower bandwidths but are nonetheless useful for mobile applications or communication to locations difficult to reach otherwise.
- Satellite systems likewise are effective for reaching difficult to access locations, but are not good where the long delay is a problem. They also tend to be costly.
- Leased phone lines are very effective where a solid link is needed to a site served by standard telephone service. They tend to be expensive in the long term, so are usually not the best solution where many channels are required.
- Fiber optic systems are a newer option. They are expensive to install and provision, but are expected to be very cost-effective in the long term. They have the advantage of using existing rights-of-way and delivering communications directly between points of use. In addition, they have the very high bandwidth needed for modern data communications.
- Spread Spectrum Radio is a new option which can provide affordable solutions using unlicensed services. Advances in this field are appearing rapidly and they should be examined closely to determine their usability to satisfy relaying requirements.
- Common-carrier communication provided by cellphone carriers using IP-based messages is an emerging service which may prove very attractive for utility use.
- Hybrids of two or more of the above technologies can provide optimal service for selected DA functions.
Components of a SCADA System
Traditional SCADA systems grew up with the notion of a SCADA "master" and a SCADA "slave" or "remote". The implicit topology was that of a "star" or "spoke and hub", with the master in charge. In the historical context, the "master" was a hardwired device with the functional equivalent of indicator lamps and pushbuttons (see Figure 2).
Modern SCADA systems employ a computerized SCADA Master in which the remote information is either displayed on an operator’s computer terminal or made available to a larger "Energy Management System" through networked connections. RTU’s are either hardwired to digital, analog, and control points or frequently act as a "sub-master" or "data concentrator" in which connections to other intelligent devices are made using communication links. Most interfaces in these systems are proprietary, although in recent years standards-based communication protocols to the remote terminal units have become popular. In these systems if other stakeholders such as engineers or system planners need access to the RTU for configuration or diagnostic information, separate, often ad-hoc, provision is usually made using technologies such as dial-up telephone circuits.
With the introduction of networkable communication protocols, typified by the IEC 61850 series of standards, it is now possible to simultaneously support communication with multiple clients located at multiple remote locations. Figure 3 shows how such a network might look. This configuration will support clients located at multiple sites simultaneously accessing substation or feeder devices for applications as diverse as SCADA, device administration, system fault analysis, metering, and system load studies.
SCADA systems as traditionally conceived report only real-time information, but interfaces built according to standards IEC 61968 and IEC 61970 will allow integration of both control center and enterprise information systems as shown in Figure 3. A feature which may be included in a modern SCADA system is that of an historian which time-tags each change of state of selected status parameters or each change (beyond a chosen deadband) of analog parameters and then stores this information in an efficient data store which can be used to rebuild the system state at any selected time for system performance analyses.
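The historian's deadband logic described above can be sketched in a few lines. This is a simplified illustration only: the class, point values, and list-based log are hypothetical, and a production historian would use an efficient time-series store rather than a Python list.

```python
# Sketch of a historian's deadband filter: an analog point is archived only
# when it moves beyond a chosen deadband from the last stored value, with a
# time tag, so the system state at any instant can be rebuilt from the log.

class Historian:
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.log = []                      # (timestamp, value) pairs
        self.last = None

    def record(self, value: float, t: float):
        # Store only the first sample and samples beyond the deadband.
        if self.last is None or abs(value - self.last) > self.deadband:
            self.log.append((t, value))
            self.last = value

h = Historian(deadband=0.5)
for t, v in [(0, 120.0), (1, 120.2), (2, 121.0), (3, 121.3)]:
    h.record(v, t)
print(h.log)   # only the initial value and changes beyond 0.5 are stored
```

Status (digital) points would be logged on every change of state, with no deadband.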
The Structure of a SCADA Communication Protocol
The fundamental task of a SCADA communications protocol is to transport a "payload" of information (both digital and analog) from the field to the control center and to allow remote control in the field of selected operating parameters from the control center. Other functions that are required but usually not included in traditional SCADA protocols include the ability to access and download detailed event files and the ability to remotely access devices for administrative purposes. These functions are sometimes provided using ancillary dial-up telephone-based communication channels. Newer, networkable, communication practices such as IEC 61850 make provision for all of the above functionality and more using a single wide area network connection to the remote device.
From a communications perspective, all communication protocols have at their core a "payload" of information that is to be transported. That payload is then wrapped in either a simple addressing and error detection envelope and sent over a communication channel (traditional protocols) or is wrapped in additional layers of application layer and networking protocols which allow transport over wide area networks (routable object-oriented protocols like IEC 61850).
In order to help bring clarity to the several parts of protocol functionality, in 1984 the International Organization for Standardization (ISO) issued Standard ISO/IEC 7498, entitled "Reference Model of Open Systems Interconnection" or, simply, the "OSI Reference Model". The model was updated in 1994; the current reference is "ISO/IEC 7498-1:1994", available from "http://www.iso.org".
The OSI Reference Model breaks the communication task into seven logical pieces as shown in Figure 4. All communication links have a data source (application layer 7 information) and a physical path (layer 1). Most links also have a data link layer (layer 2) to provide message integrity protection. More sophisticated, networkable, protocols add one or more of layers 3-6 to allow packets to be routed through a network and to provide networking, session management, and sometimes data format conversion services. Security can be applied at layers 1 or 2 if networking is not required, but when messages are routed through a network it must be applied at or above the network layer (3), and it is often applied at the application layer (7). Note that the OSI Reference Model is not, in and of itself, a communication standard. It is simply a useful model showing the functionality that might be included in a coordinated set of communication standards.
Also note that Figure 4 as drawn shows a superimposed "hourglass". The hourglass represents the fact that it is possible to transport the same information over multiple physical layers – radio, fiber, twisted pair, etc – and that it is possible to use a multiplicity of application layers for different functions. In the middle – the networking – layers, interoperability over a common network can be achieved if all applications agree on common networking protocols. For example, the growing common use of the Internet protocols TCP/IP represents a worldwide agreement to use common networking practices (common middle layers) to route messages of multiple types (application layer) over multiple physical media (physical layer – twisted pair, Ethernet, fiber, radio) in order to achieve interoperability over a common network (the Internet).
Figure 5 shows how device information is encapsulated (starting at the top of the diagram) in each of the lower layers in order to finally form the data packet at the data link layer which is sent over the physical medium. The encapsulating packet – the header and trailer and each layer’s payload – provide the added functionality at each level of the model, including routing information and message integrity protection. Typically the overhead requirements added by these wrappers are small compared with the size of the device information being transported. Figure 6 shows how a message can travel through multiple intermediate systems when networking protocols are used.
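The encapsulation process just described can be sketched in a few lines. This is a simplified illustration only: the layer "headers" here are placeholder strings, whereas a real protocol stack carries binary addresses, sequence numbers, and check codes.

```python
# Sketch of OSI-style encapsulation: each layer wraps the payload handed
# down from the layer above with its own header (and, at the data link
# layer, a trailer), adding routing and integrity information on the way.

def encapsulate(device_info: bytes) -> bytes:
    app = b"APP|" + device_info            # application layer (7): service wrapper
    net = b"NET|dst=ctrl-center|" + app    # network layer (3): routing address
    frame = b"LNK|" + net + b"|CRC"        # data link layer (2): integrity envelope
    return frame                           # handed to the physical layer (1)

packet = encapsulate(b"breaker_52A=OPEN")
print(packet)
```

At each intermediate system in Figure 6, only the outer layers are unwrapped and re-applied; the device information itself passes through untouched.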
Traditional SCADA protocols, including all of the proprietary legacy protocols, DNP, and IEC 60870-5-101, use layers 1, 2, and 7 of the reference model in order to minimize overheads imposed by the intermediate layers. IEC 60870-5-104 and recent work being done with DNP add networking and transport information (layers 3 and 4) so that these protocols can be routed over a wide-area network. IEC 61850 is built using a "profile" of other standards at each of the reference model layers so that it is applicable to a variety of physical media (lower layers), is routable (middle layers) and provides mature application layer services based on ISO 9506 – the Manufacturing Message Specification – MMS.
Communication Protocols: Past, Present, and Future
As noted in the section on SCADA history, early SCADA protocols were built on electromechanical telephone switching technology. Signaling was usually done using pulsed direct-current signals at a data rate on the order of ten pulses per second. Control and status points were indexed using assigned positions in the pulse train. Analog information was sent using "current loops", which provide constant current independent of circuit impedance and can therefore communicate over large distances (thousands of feet) without loss of signal quality. Communications security was assured by means of repetition of commands or such mechanisms as "arm" and "execute" for control.
With the advent of digital communications (still pre-computer), higher data rates were possible. Analog values could be sent in digital form using analog-to-digital converters, and errors could be detected using parity bits and block checksums. Control and status points were assigned positions in the data blocks, which needed to be synchronized between the remote and master devices. Changes of status were detected by means of repetitive "scans" of remote devices, with the "scan rate" being a critical system design factor. Communications integrity was assured by the use of more sophisticated block check codes, including the "cyclic redundancy check" (CRC), which could detect both single- and multiple-bit errors in communications. Control integrity was ensured by the use of end-to-end "select-check-operate" procedures. Each manufacturer (and sometimes user) of these early SCADA systems would typically define its own communication protocol, and the industry became known for the large number of competing practices.
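A cyclic redundancy check treats the message bits as a polynomial and divides them by a fixed generator polynomial; the remainder becomes the check field appended to the frame. A minimal sketch follows; the reflected polynomial 0xA001 and 0xFFFF initial value (the choice used by Modbus and many SCADA link layers) are assumed here for illustration.

```python
def crc16(data: bytes) -> int:
    """CRC-16 with reflected polynomial 0xA001 and initial value 0xFFFF.
    Unlike a single parity bit, it detects all single-bit errors and the
    vast majority of multiple-bit errors."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001   # divide by the generator polynomial
            else:
                crc >>= 1
    return crc

frame = b"\x01\x03\x00\x00\x00\x0A"                 # an illustrative data block
good = crc16(frame)
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip a single bit
assert crc16(corrupted) != good                     # the error is detected
```

The receiver recomputes the CRC over the received block and discards (or requests retransmission of) any frame whose check field does not match.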
Computer-based SCADA master stations, followed by microprocessor-based remote terminal units, continued the traditions set by the early systems of using points-list-based representations of control and status information. Newer, still-proprietary, communication protocols became increasingly sophisticated in the types of control and status information which could be passed. The notion of "report by exception" was introduced in which a remote terminal could report "no change" in response to a master station poll, thus conserving communication resources and reducing average poll times.
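Report by exception can be sketched as a simple poll handler. The class, point names, and the "NO CHANGE" reply below are illustrative only, not any particular vendor's protocol.

```python
# Sketch of "report by exception": the master polls each RTU in turn, and
# the RTU replies with data only for points that changed since its last
# report; otherwise it sends a short "no change" reply, conserving the
# communication channel and reducing average poll times.

class Rtu:
    def __init__(self):
        self.points = {}       # current point values
        self.reported = {}     # values as last reported to the master

    def poll(self):
        changed = {k: v for k, v in self.points.items()
                   if self.reported.get(k) != v}
        self.reported.update(changed)
        return changed or "NO CHANGE"

rtu = Rtu()
rtu.points["breaker_52A"] = "CLOSED"
print(rtu.poll())   # first poll reports the new value
print(rtu.poll())   # nothing has changed: short "no change" reply
```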
By the early 1980s the electric utility industry faced the marketplace confusion brought on by roughly 100 competing proprietary SCADA protocols and their variants. With the rising understanding of the value of building on open practices, a number of groups began the task of bringing standard practices to bear on utility SCADA. As shown in Figure 7, a number of different groups are often involved in the process of reaching consensus on standard practices. The process reads from the bottom to the top, with the "International Standards" level the most sought-after and also often the most difficult to achieve. Often the process starts with practices which have been found useful in the marketplace but which are, at least initially, defined and controlled by a particular vendor or, sometimes, end user. The list of vendor-specific SCADA protocols is long and usually references particular vendors. One such list (from a vendor’s list of supported protocols) reads like a "who’s who" of SCADA protocols and includes:
Conitel, CDC Type 1 and Type II, Harris 5000, Modicon MODBus, PG&E 2179, PMS-91, QUICS IV, SES-92, TeleGyr 8979, PSE Quad 4 Meter, Cooper 2179, JEM 1, Quantum Qdip, Schweitzer Relay Protocol (221, 251, 351), and Transdata Mark V Meter
Groups at the Institute of Electrical and Electronics Engineers (IEEE), the International Electrotechnical Commission (IEC), and the Electric Power Research Institute (EPRI) all started in the mid-1980s to look at the problem of the proliferation of SCADA protocols. IEC Technical Committee 57 (IEC TC57) working group 3 (WG 3) began work on its "60870" series of telecontrol standards. Groups within the IEEE Substations and Relay Committees began examining the need for consensus on SCADA protocols. And EPRI began a project which became known as the "Utility Communications Architecture" in an effort to specify an enterprise-wide, networkable, communications architecture which would serve business applications, control centers, power plants, substations, distribution systems, transmission systems, and metering systems.
This section discusses several data communication systems that can be used for utility applications, including SCADA communications, and reviews their merits in light of the considerations discussed above. In order to show the breadth of communication systems offered for utility use, and to allow this reference to be applied as technologies mature and their economic applications broaden, the list deliberately includes some technologies which are not presently considered suitable for distribution automation application.
ARDIS (Advanced Radio Data Information Service)
ARDIS was originally developed jointly by Motorola and IBM in the early 1980s for IBM customer service engineers and is owned by Motorola. Service is now available to subscribers throughout the U.S. with an estimated 65,000 users, mostly using the network in vertical market applications. Many of these users are IBM customer engineers.
ARDIS is optimized for short message applications which are relatively insensitive to transmission delay. ARDIS uses connection-oriented protocols that are well-suited for host/terminal applications. With typical response times exceeding four seconds, interactive sessions generally are not practical over ARDIS. As a radio-based service, ARDIS can be expected to be immune to most of the EMC issues associated with substations. It provides either 4800 bps or 19.2 kbps service using a 25 kHz channel in the 800 MHz band.
Cellular Telephone Data Services
Several different common-carrier services that are associated with cellphone technologies are being offered in the marketplace. Space here permits only cursory mention of the several technologies and their general characteristics.
"Advanced Mobile Phone System" or AMPS is the analog mobile phone system standard introduced in the Americas during the early 1980s. Though analog is no longer considered advanced at all, the relatively seamless cellular switching technology AMPS introduced was what made the original mobile radiotelephone practical, and it was considered quite advanced at the time.
Cellular Digital Packet Data (CDPD) is a digital service which can be provided as an adjunct to existing conventional AMPS 800-MHz analog cellular telephone systems. It is available in many major markets but often unavailable in rural areas. CDPD systems use the same frequencies and have the same coverage as analog cellular phones. CDPD provides IP-based packet data service at 19.2 kbps and has been available for a number of years. Service pricing on a use basis has made it prohibitively costly for polling applications, although recent pricing decreases have put a cap in the range of $50 per month for unlimited service. As a radio-based common carrier service, it is immune to most EMC issues introduced by substations. CDPD is nearing the end of its commercial lifecycle and will be decommissioned in the relatively near future by major carriers.
AMPS includes a supervisory data channel used to provide service and connection management to cellphone users. This data channel has been adapted and offered for sale to the electric utility community to support low-data-requirement SCADA functions for simple utility RTU and remote monitoring applications. See the discussion of SMS below for more information on control channel signaling.
Although AMPS represents the only "universal" standard practice in the North American cellphone industry, it does use technology now regarded as obsolete and is scheduled for retirement in the not-too-distant future.
New applications should consider the use of other common-carrier digital systems such as Personal Communications Service (PCS), TDMA (Time Division Multiple Access), GSM (Global System for Mobile Communications), or Code Division Multiple Access (CDMA). A third generation of cellphone technology is currently under development, using new technologies called "wideband", including EDGE, W-CDMA, CDMA2000, and W-TDMA. The marketplace competition among these technologies can be expected to be lively. While these technologies can be expected to play a dominant role in the future of wireless communications, because of the rapidly changing marketplace it remains unclear what the long-term availability or pricing of any particular one of these technologies will be.
Digital microwave systems are licensed systems operating in several bands ranging from 900 MHz to 38 GHz. They have wide bandwidths ranging up to 40 MHz per channel and are designed to interface directly to wired and fiber data channels such as ATM, Ethernet, SONET, and T1 derived from high-speed networking and telephony practice.
As a licensed radio system, the FCC (Federal Communications Commission) allocates available frequencies to users in order to avoid interference. Application of these systems requires path analysis to avoid obstructions and interconnection of multiple repeater stations to cover long routes. Each link requires a line-of-sight path.
Digital microwave systems can provide support for large numbers of both data and voice circuits. This capacity can be provided either as multiples of DS3 (1 DS3 = 672 voice circuits) or DS1 (1 DS1 = 24 voice circuits) signals, where each voice circuit is equivalent to 64 kbps of data, or (increasingly) as ATM or 100 Mbps Fast Ethernet with direct RJ-45, Category 5 cable connections. They can also link directly into fiber optic networks using SONET/SDH.
Digital microwave is costly for individual substation installations but might be considered as a high performance medium for establishing a backbone communications infrastructure that can meet the utility’s operational needs.
See also the discussion of "Spread Spectrum Radio and Wireless LANs" for future directions.
Fiber optic cables offer both high bandwidth and inherent immunity from electromagnetic interference. Data rates as high as gigabits per second can be carried over the fiber.
The fiber cable is made up of varying numbers of either single- or multi-mode fibers, with a strength member in the center of the cable and additional outer layers to provide support and protection against physical damage to the cable during installation and to protect against effects of the elements over long periods of time. The fiber cable is connected to terminal equipment that allows slower speed data streams to be combined and then transmitted over the optical cable as a high-speed data stream. Fiber cables can be connected in intersecting rings to provide self-healing capabilities to protect against equipment damage or failure.
Two types of cables are commonly used by utility companies: OPGW (Optical Power Ground Wire which replaces a transmission line’s shield wire) and ADSS (All Dielectric Self-Supporting). ADSS is not as strong as OPGW but enjoys complete immunity to electromagnetic hazards, so it can be attached directly to phase conductors.
Although it is very costly to build an infrastructure, fiber networks are highly resistant to undetected physical intrusion associated with the security concerns outlined above. Some of the infrastructure costs can be recovered by joint ventures with or bandwidth sales to communication common carriers. Optical fiber networks can provide a robust communications backbone for meeting a utility’s present and future needs.
Cable television systems distribute signals to residences primarily using one-way coaxial cable which is built using an "inverted tree" topology to serve large numbers of customers over a common cable, using (analog) intermediate amplifiers to maintain signal level. This design is adequate for one-way television signals but does not provide the reverse channel required for data services. Cable systems are being upgraded to provide Internet service by converting the coaxial cables to provide two-way communications and adding cable modems to serve customers. The resulting communication data rate is usually asymmetrical, in which a larger bandwidth is assigned downstream (toward the user), with a much smaller bandwidth for upstream service.
Typically the system is built with fiber optic cables providing the high-speed connection to cable head-ends. Since coaxial cables are easier to tap and to splice, they are preferred for delivery of the signals to the end user. The highest quality, but also most costly, service would be provided by running the fiber cable directly to the end user. Because of the high cost of fiber, variations on this theme employ Fiber To The Node (FTTN) (neighborhood fiber), Fiber To The Curb (FTTC), and Fiber To The Home (FTTH).
Because of the difficulty in creating undetected taps in either a coaxial line or a fiber optic cable, these systems are resistant to many security threats. However, the fact that they typically provide Internet services makes them vulnerable to many of the cyber attacks discussed above, and appropriate security measures should be taken to ensure integrity of service if this service is chosen for utility applications.
Multiple Address (MAS) Radio is popular due to its flexibility, reliability, and small size. A MAS radio link consists of a master transceiver (transmitter/receiver) and multiple remote transceivers operating on paired transmit/receive frequencies in the 900 MHz band. The master radio is often arranged to transmit continuously, with remote transmitters coming up to respond to a poll request. Units are typically polled in a "round-robin" fashion, although some work has been done to demonstrate the use of MAS radios in a contention-based network to support asynchronous remote device transmissions.
The frequency pairs used by MAS must be licensed by the FCC and can be reused elsewhere in the system given enough space diversity (physical separation). Master station throughput is limited by radio carrier stabilization times, and data rates are limited to a maximum of 9.6 kbps. The maximum radius of operation without a special repeater is approximately 15 km, so multiple master radios will be required to cover a large service territory.
MAS radio is a popular communication medium and has been used widely by utilities for SCADA (supervisory control and data acquisition) systems and DA (distribution automation) systems.
MAS radio is susceptible to many of the security threats discussed above, including Denial of Service (radio jamming), Spoof, Replay, and Eavesdropping. In addition, the licensed frequencies used by these systems are published and easily available in the public domain. For this reason it is important that systems using MAS radio be protected against intrusion using the techniques discussed above.
Mobile Computing infrastructure
Systems and personal devices which allow "on the go" communications, often including Internet access, are rapidly emerging in the marketplace. These systems offer opportunities to provide communications for IP-based utility applications, often with easy setup and low service costs. New wireless technologies can be expected to provide data rates in excess of 100 kilobits per second. Applications built on these technologies should include security protection at the network layer or above, similar to that required of other networked communication systems. For additional discussion of these emerging technologies, refer also to the discussion on "Spread Spectrum Radio and Wireless LANs".
Mobile radio systems operating in the VHF, UHF, and 800 MHz radio bands have sometimes been pressed into shared data service along with their primary voice applications. Such use is problematic because the systems are designed for analog (voice) rather than digital (data) applications and because they are shared with voice users. It is difficult to license new applications on these channels, and their use for digital applications should be discouraged. The emerging "mobile computing" technologies are much more attractive for these applications.
Mobitex Packet Radio
Mobitex is an open, international standard for wireless data communications developed by Ericsson. It provides remote access for data and two-way messaging for mobile computing applications.
The technology is similar to that used in ARDIS and cellular telephone systems. Like mobile telephone systems, the Mobitex networks are based on radio cells. Base stations allocate digital channels to active terminals within limited geographic areas. Data is divided into small packets that can be transmitted individually and as traffic permits. Setup time is eliminated and network connections are instantaneous. Packet switching makes more efficient use of channel capacity. Area and main exchanges handle switching and routing in the network, thereby providing transparent and seamless roaming within the U.S. A modest data rate of 8 or 16 kbps makes it useful for small amounts of data or control but not for large file transfers. Service is offered to large portions of the U.S. population (primarily in the East), but rural service may be lacking. As part of a public network, applications should employ end-to-end application-layer security.
Paging systems have been used very effectively for certain utility applications which typically require only one-way command operation. Paging networks are built using carefully engineered sets of system controllers, transmitters, and data links designed to make sure the system has optimal coverage and response while minimizing interference. Some systems use satellite channels to provide wide-area coverage. Most paging systems use simulcast techniques and multiple transmitters to give continuous coverage over their service areas. Typical systems provide publicly accessible interfaces using dial-up, modem, and/or Internet access. The over-the-air protocol is the POCSAG (the British Post Office Code Standardisation Advisory Group) standard operating in the 900 MHz band. Most systems are one-way (outbound), but a few also offer inbound messaging services. Systems have large capacities but are subject to intolerable delays when overloaded. Service cost is typically very low, making this system very attractive for certain applications.
As part of a public network, application layer security to protect from masquerading attacks is appropriate. A coordinated denial-of-service attack may be possible but is unlikely to occur in the types of applications for which this system is suited.
Power Line Carrier
Power Line Carrier (PLC) systems operating on narrow channels between 30 and 500 kHz are frequently used for high voltage line protective relaying applications. Messages are typically simple one-bit commands, using either amplitude- or frequency-shift keying, which tell the other end of a dedicated link to trip or to inhibit the tripping of a protective circuit breaker.
Other PLC systems have been developed for specialized distribution feeder applications such as remote meter reading and distribution automation. Early in the development of PLC systems, it was observed that signals below approximately 10 kHz would propagate on typical distribution feeders, with the primary impediments coming from shunt power factor correction capacitors and from series impedances of distribution transformers. These two components work together as a low-pass filter to make it difficult to transmit higher frequency signals. In addition, signaling at lower frequencies approaching the power line frequency is difficult because of harmonic interference from the fundamental power line itself. One successful system uses Frequency Shift Keying (FSK) signals in the 10 kHz range to provide communications for distribution automation.
Two systems, the Two Way Automatic Communications System (TWACS), and the Turtle, use communications based on modification of the 60 Hz waveform itself. Both systems use disturbances of the voltage waveform for outbound communication and of the current waveform for inbound communication. The primary difference between the two systems is that TWACS uses relatively higher power and data rates of 60 bits per second, while the Turtle system uses extremely narrow bandwidth signaling – on the order of 1/1000 bit per second – and massively parallel communications in which each remote device has its own logical channel. The TWACS system is used for both automatic meter reading and distribution automation, while the Turtle system is used mostly for meter reading.
With the proper equipment, both of these systems would be subject to both eavesdropping and masquerading types of security threats, so security measures are appropriate. With the limited data rates of these systems, only simple encryption techniques using secret keys are appropriate.
Recent and much-publicized work has been conducted to develop high speed data services which claim to deliver data rates as high (in one case) as a gigabit per second. Known in the industry as BPL (for Broadband over Power Lines), these systems use spread spectrum techniques to deliver data rates previously unattainable. But fundamental physical constraints make it unlikely that successful data rates will be delivered much above several megabits per second. A technical issue with these technologies is the fact that they occupy the same spectrum as do licensed users in the High Frequency space (3-30 MHz) and can cause interference to, as well as receive interference from, these other services. FCC Part 15 rules require any unlicensed users (such as BPL) to not interfere with existing uses and to vacate the frequency if found to be interfering, so utility application of this technology should only be done with caution.
PLC systems are exposed to public access, so encryption techniques are appropriate to protect any sensitive information or control communications.
Satellite systems which offer high-speed data service have been deployed in two different forms, broadly categorized by their orbits.
Hughes built the first Geosynchronous Orbit (GEO) communications satellite in the early 1960s under a NASA contract to demonstrate the viability of such satellites operating in an earth orbit 22,300 miles (35,900 km) above the ground. The attractive feature of these satellites is that they appear fixed in the sky and therefore do not require costly tracking antennas. Such satellites are commonly used today to distribute radio and television programming and are useful for certain data applications.
Because of the large distances to the satellite, GEO systems require relatively large parabolic antennas in order to keep satellite transponder power levels to a manageable level. Because of the distances involved, each trip from earth to satellite and back requires ¼ second of time. Some satellite configurations require all data to pass through an earth station on each hop to or from the end user, thereby doubling this time before a packet is delivered to the end device. If a communications protocol is used which requires link-layer acknowledgements for each packet (typical of most legacy SCADA protocols), this can add as much as one second to each poll/response cycle. This can be unacceptably large and have a significant impact on system throughput, so careful protocol matching is appropriate if a GEO satellite link is being considered. This long delay characteristic also makes GEO satellites undesirable for two-way telephone links.
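The delay figures above follow directly from the geometry, using the altitude given earlier and the speed of light:

```python
# Round-trip propagation delay over a GEO satellite hop.
ALTITUDE_MI = 22_300        # GEO altitude given above, in miles
C_MI_PER_S = 186_282        # speed of light in miles per second

one_hop = 2 * ALTITUDE_MI / C_MI_PER_S        # earth -> satellite -> earth
print(f"one hop: {one_hop:.3f} s")            # roughly the 1/4 second cited above

# If data passes through an earth station, each one-way delivery takes two
# hops; a poll/response cycle with link-layer acknowledgements then sees
# four hops of propagation delay before any processing time is added:
poll_response = 4 * one_hop
print(f"poll/response minimum: {poll_response:.2f} s")   # approaching one second
```

This is why careful protocol matching (minimizing per-packet acknowledgements) is important on GEO links.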
A second satellite technology which is gaining popularity is the "low earth orbit" (LEO) satellite. LEOs operate at much lower altitudes of 500 to 2000 kilometers. Because of the lower altitude, the satellites are in constant motion (think of a swarm of bees), so a fixed highly directional antenna cannot be used. Compensating for this is the fact that the smaller distances require lower power levels, so if there are a sufficient number of satellites in orbit and their operation is properly coordinated, LEOs can provide ubiquitous high-speed data or quality voice service anywhere on the face of the earth. LEO systems can be quickly deployed using relatively small earth stations. There are a number of competing service providers offering several varieties of LEO service: "Little LEOs" for data only; "Big LEOs" for voice plus limited data; and "Broadband LEOs" for high-speed data plus voice. Search "LEO satellite" on the Internet for more information.
All satellite systems are subject to eavesdropping, so the use of appropriate security measures is indicated to avoid loss of confidential information.
Short Message System (SMS)
SMS (also known as "text messaging") uses the forward and reverse control channels (FOCC and RECC, respectively) of cell phone systems to provide two-way communication service for very short telemetry messages. The FOCC and RECC are the facilities normally used to authorize and set up cellphone calls. Since the messages are short and the channel is unused during a voice call, there is surplus bandwidth available in all existing analog cell phone systems which can be used for this service. SMS systems send information in short bursts of 10 bits in the forward (outbound) direction and 32 bits in the reverse (inbound) direction, making them well-suited for control and status messaging from simple Remote Terminal Units (RTUs). Message integrity is enhanced through the use of 3-out-of-5 voting algorithms. A number of companies offer packaged products and services which can be very economic for simple status and control functions. Utility interface to the system is provided using various Internet, telephone, and pager services. Search the web for "SMS telemetry" for more information.
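The 3-out-of-5 voting mentioned above can be sketched as a per-bit majority over five received copies of a burst; the message width and values below are illustrative only.

```python
# Each short message is sent five times; each bit position is decided by
# majority vote, so up to two corrupted copies cannot flip the result.

def majority_vote(copies: list, width: int) -> int:
    result = 0
    for bit in range(width):
        ones = sum((c >> bit) & 1 for c in copies)
        if ones >= 3:              # 3 or more of the 5 copies say '1'
            result |= 1 << bit
    return result

sent = 0b1011010010                # a 10-bit outbound burst
received = [sent, sent ^ 0b1, sent, sent ^ 0b100000, sent]  # two corrupted copies
assert majority_vote(received, 10) == sent   # original message recovered
```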
Spread Spectrum Radio and Wireless LANs
New radio technologies are being developed as successors to traditional MAS and microwave radio systems which can operate unlicensed in the 900 MHz, 2.4 GHz, and 5.6 GHz bands or licensed in other nearby bands. These systems typically use one of several variants of spread spectrum technology and offer robust, high-speed point-to-point or point-to-multipoint service. Interfaces can be provided ranging from 19.2 kbps RS232 to Ethernet. Line-of-sight distances ranging from 1 to 20 miles are possible, depending on antenna and frequency band choices and transmitter power. Higher-powered devices require operation in licensed bands.
This technology has been successfully used both for communication within the substation fence as well as communication over larger distances between the enterprise and the substation or between substations. An example of communication within the substation is adding new functionality, such as transformer condition monitoring, to an existing substation. An internal substation radio connection can make such installations very cost-effective while at the same time providing immunity to electromagnetic interference which might otherwise arise from the high electric and magnetic fields which are found in a substation environment.
As contrasted to traditional radio systems, spread spectrum radio transmits information spread over a band of frequencies, either by switching carriers in a pseudo-random sequence (frequency hopping spread spectrum – FHSS) or by modulating the data with a pseudo-random spreading code (direct sequence spread spectrum – DSSS). Other closely related but distinct modulation techniques include Orthogonal Frequency Division Multiplexing (OFDM), which sends data in parallel over a number of subchannels. The objective in all of these systems is to allow multiple systems to operate concurrently without interference and with maximum information security. The existence of multiple systems in proximity to each other increases the apparent noise background but is not immediately fatal to successful communications. Knowledge of the frequency hopping or spreading "key" is necessary for the recovery of data, rendering the system resistant to both jamming (denial of service) and eavesdropping attacks.
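The role of the shared hopping "key" can be illustrated with a minimal, hypothetical sketch (not any specific radio's algorithm): both ends seed identical pseudo-random generators with the key, so they visit the same channels in the same order, while a listener without the key cannot predict the sequence.

```python
import random

def hop_sequence(shared_key: str, n_hops: int, n_channels: int = 50):
    """Derive a channel-hopping sequence from a shared key.  Transmitter
    and receiver seed identical PRNGs with the key, so they visit the
    same channels in the same order; without the key, a listener sees
    only brief bursts scattered pseudo-randomly across the band."""
    rng = random.Random(shared_key)  # deterministic for a given key
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx = hop_sequence("shared-secret", 8)
rx = hop_sequence("shared-secret", 8)
assert tx == rx  # both ends stay synchronized on the same channels
print(tx)
```

A real FHSS radio uses a cryptographically stronger sequence generator and precise time synchronization, but the principle is the same: the hop pattern is reproducible only by holders of the key.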
Variants of DSSS, FHSS, and OFDM are being offered in commercial products and are being adopted in emerging wireless LAN standards such as the several parts of IEEE 802.11 (Wireless LAN) and 802.16 (Broadband Wireless Access).
This is a rapidly changing technology. Search the web for "Spread Spectrum", "DSSS", "FHSS", and "OFDM" for more information and to discover a current list of vendors.
The paragraphs below present short discussions of systems based on traditional "wired" telephone technology. They range from low-speed leased and dial-up circuits through very high-speed optical connections.
Telephone Lines: Leased and Dial-up
Dedicated so-called "leased" or "private" voice-grade lines with standard 3 kHz voice bandwidth can be provided by the telephone company. Dial-up telephone lines provide similar technical characteristics, with the key difference being the manner of access (dial-up) and the fact that the connection is "temporary".
Commonly thought of as providing a "private twisted pair", leased lines are seldom built in this manner. Rather, these circuits are routed, along with switched lines, through standard telephone switches. Unless otherwise ordered, routing (and performance characteristics) of such circuits may change without warning to the end user. Dedicated circuits, known in the industry as 3002 circuits, can support modem data rates up to 19.2 kbps, and up to 56 kbps with data compression. High performance so-called "Digital Data Services (DDS)" circuits can support modem communications up to 64 kbps with special line conditioning.
Security issues for all telephone circuits include the fact that they are easily tapped in an unobtrusive manner, which makes them vulnerable to many of the security attacks discussed above. In addition, they can be re-routed in the telephone switch by a malicious intruder, and dial-up lines are easily accessed by dialing their phone numbers from the public telephone network. Thus it is important that these circuits be protected by the appropriate physical, data-link, or network layer measures as discussed above. In the case of IED interfaces accessible by dial-up phone lines, they must at a minimum be protected by enabling sign-on passwords (and changing of the system default passwords), with the possibility of using other systems such as dial-back modems or physical layer encryption as discussed in the chapter on cyber security.
Telephone circuits are susceptible to all of the electromagnetic interference issues discussed above and should be protected by appropriate isolation devices.
Integrated Services Digital Network (ISDN)
Integrated Services Digital Network (ISDN) is a switched, end-to-end wide area network designed to combine digital telephony and data transport services. ISDN was defined by the International Telecommunications Union (ITU) in 1976. Two types of service are available: ISDN basic access (192 kbps), sold as ISDN2, 2B+D, or ISDN BRI; and ISDN primary access (1.544 Mbps), sold as ISDN23, 23B+D, or ISDN PRI. The total bandwidth can be broken into multiple 64 kbps voice channels or from one to several data channels. ISDN is often used by larger businesses to network geographically dispersed sites.
Broadband ISDN (B-ISDN) provides the next generation of ISDN, with data rates of either 155.520 Mbps or 622.080 Mbps.
ISDN can be configured to provide private network service, thereby sidestepping many of the security issues associated with public networks. However, it is still subject to security issues which arise from the possibility of an intruder breaking into the telephone company equipment and rerouting "private" services.
As a wired service, it is also subject to the electromagnetic interference issues which substations create. The high-speed digital signals will not successfully propagate through isolation and neutralizing transformers and will require isolation using back-to-back optical isolators at the substation.
Digital Subscriber Loop (DSL)
Digital Subscriber Loop (DSL) transmits data over a standard analog subscriber line. Built upon ISDN technology, DSL offers an economical means to deliver moderately high bandwidth to residences and small offices. DSL comes in many varieties known as xDSL, where x denotes the variety. Commonly sold to end users, ADSL (asymmetric DSL) sends combined data and voice over ordinary copper pairs between the customer’s premises and the telephone company’s central office. ADSL can provide data rates ranging from 1.5 Mbps to 8 Mbps downstream (depending on phone line characteristics) and 16 kbps to 640 kbps upstream. The digital and analog streams are separated at both the central office and the customer’s site using filters, and an ADSL modem connects the data application to the subscriber line.
Telephone companies use HDSL (high speed DSL) for point-to-point T1 connections, and SDSL (symmetric or single line DSL) to carry T1 on a single pair. HDSL can carry T1 (1.544 Mbps) and FT1 (fractional T1) data in both directions. The highest speed implementation to date is VDSL (very high speed DSL), which can support up to 52 Mbps downstream over short ranges. ADSL can operate at distances up to 6,000 m, whereas VDSL can only attain full speed up to about 300 m.
A key advantage of DSL is its competitive pricing and wide availability. A disadvantage is that service is limited to circuit lengths of less than 3.6 km without repeaters. As a wired service, DSL has the same security and EMC issues as ISDN.
T1 And Fractional T1
T1 is a high speed digital network (1.544 Mbps) developed by AT&T in 1957 and implemented in the early 1960s to support long-haul pulse-code modulation (PCM) voice transmission. The primary innovation of T1 was to introduce digitized voice and to create a network fully capable of digitally representing what was, until then, a fully analog telephone system. T1 is part of a family of related digital channels used by the telephone industry that can be delivered to an end user in any combination desired.
The T1 family of channels stacks up as follows:
- DS0: 64 kbps (a single digitized voice channel)
- DS1 (T1): 1.544 Mbps (24 DS0 channels)
- DS2: 6.312 Mbps (96 DS0 channels)
- DS3 (T3): 44.736 Mbps (672 DS0 channels)
Fractional T1 (FT1) service delivers a subset of the 24 DS0 channels of a T1 in 64 kbps increments.
T1 is a well-proven technology for delivering high-speed data or multiple voice channels. Depending on the proximity of the utility facility to telephone company facilities, the cost can be modest or high. See also the discussion of DSL for additional options.
As a wired facility, T1 is subject to the electromagnetic interference issues discussed above unless it is offered using fiber optic facilities (see discussion of fiber optic). Since T1 was originally designed to serve voice users, delivery of data bits with a minimum of latency and jitter is important, but occasional discarded data is not considered a problem. Therefore, equipment using T1 links should provide link error checking and retransmission.
A T1 link is point-to-point and interfacing to a T1 facility requires sophisticated equipment, so a T1 facility is resistant to casual eavesdropping security attacks. But since it is part of a system exposed to outside entities and with the possibility that an intruder to the telephone facility could eavesdrop or redirect communications, it is important that systems using T1 facilities employ end-to-end security measures at the Network layer or above as discussed in the security section.
Frame Relay
Frame Relay is a service designed for cost-efficient intermittent data transmission between local area networks and between end-points in a wide area network. Frame Relay puts data in a variable-size unit called a frame and leaves error correction to the end-points, which speeds up overall data transmission. Usually, the network provides a permanent virtual circuit (PVC), allowing the customer to see a continuous, dedicated connection without paying for a full-time leased line. The provider routes each frame to its destination and can charge based on usage. Frame Relay provides for selecting a level of service quality, prioritizing some frames. It is provided on fractional T1 and full T-carrier facilities and offers a mid-range service between ISDN (128 kbit/sec) and ATM (155.520 or 622.080 Mbit/sec) (see separate paragraphs). Based on the older X.25 packet-switching technology, Frame Relay is being displaced by ATM and native IP-based protocols (see discussion of MPLS).
Asynchronous Transfer Mode (ATM)
Asynchronous Transfer Mode (ATM) is a cell relay protocol which encodes data traffic into small fixed-size "cells" (53 bytes: 48 bytes of data plus a 5-byte header) instead of the variable-sized packets used in packet-switched networks. ATM was intended to provide a unified networking solution supporting both synchronous channel networking and packet-based networking, along with multiple levels of service quality for packet traffic. It serves both circuit-switched and packet-switched networks by mapping both bitstreams and packet streams onto a stream of small fixed-size cells tagged with virtual circuit identifiers, sent on demand within synchronous time-slots in a synchronous bit-stream. ATM was originally designed by the telecommunications industry as the enabling technology for Broadband Integrated Services Digital Network (B-ISDN), replacing the existing switched telephone network. Because of this heritage, it has complex features to support applications ranging from global telephone networks to private local area computer networks.
ATM has enjoyed widespread deployment but only partial success. It is often used as a technology to transport IP traffic but suffers significant overhead for IP traffic because of its short cell size. Its goal of providing a single integrated technology for LANs, public networks, and user services has largely failed. See discussion of MPLS.
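The cell-size overhead mentioned above is easy to quantify with simple arithmetic. The sketch below assumes AAL5 encapsulation (an 8-byte trailer, with the payload padded to a multiple of 48 bytes): even a full-length 1500-byte IP packet uses only about 88 % of the bytes on the wire, and small packets fare considerably worse.

```python
import math

CELL = 53          # total ATM cell size in bytes
PAYLOAD = 48       # payload bytes carried per cell
AAL5_TRAILER = 8   # AAL5 appends an 8-byte trailer, then pads to a 48-byte multiple

def atm_efficiency(ip_packet_bytes: int) -> float:
    """Fraction of wire bytes carrying actual IP data when the packet is
    segmented into fixed 53-byte ATM cells under AAL5 encapsulation."""
    cells = math.ceil((ip_packet_bytes + AAL5_TRAILER) / PAYLOAD)
    return ip_packet_bytes / (cells * CELL)

print(f"1500-byte packet: {atm_efficiency(1500):.1%}")  # ≈ 88.4%
print(f"  40-byte packet: {atm_efficiency(40):.1%}")    # ≈ 75.5%
```

The 5-byte header plus padding is the "cell tax" that makes ATM comparatively inefficient for IP traffic, one of the reasons MPLS (discussed below) abandoned small cells.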
Synchronous Optical Networking (SONET)
Synchronous Optical Networking (SONET) is a standard for sending digital information over optical fiber, developed for the transport of large amounts of telephone and data traffic. The more recent Synchronous Digital Hierarchy (SDH) standard developed by the International Telecommunication Union (ITU) is built on experience gained in the development of SONET. SONET is used primarily in North America and SDH in the rest of the world. SONET can be used to encapsulate earlier digital transmission standards or used directly to support ATM. The basic SONET signal operates at 51.840 Mbit/sec and is designated Synchronous Transport Signal one (STS-1). The STS-1 frame is the basic unit of transmission in SONET. SONET supports multiples of STS-1 up to STS-3072 (159.252480 Gbit/sec).
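Because every SONET rate is an exact multiple of the STS-1 rate, the hierarchy is straightforward to compute:

```python
STS1_MBPS = 51.84  # basic SONET Synchronous Transport Signal rate (Mbit/sec)

def sts_rate_mbps(n: int) -> float:
    """Line rate of STS-n: an exact integer multiple of the STS-1 rate."""
    return n * STS1_MBPS

# Common members of the hierarchy
print(round(sts_rate_mbps(1), 2))          # 51.84   (STS-1)
print(round(sts_rate_mbps(3), 2))          # 155.52  (STS-3)
print(round(sts_rate_mbps(3072) / 1000, 5))  # 159.25248 Gbit/sec (STS-3072)
```

This confirms the figures quoted in the text: STS-3 at 155.52 Mbit/sec and STS-3072 at 159.25248 Gbit/sec.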
Multiprotocol Label Switching (MPLS)
Multiprotocol Label Switching (MPLS) is a data-carrying mechanism which operates in the OSI model one layer below protocols such as IP. Designed to provide a unified service for both circuit-based and packet-switching clients, it provides a datagram service model. It can carry many different kinds of traffic, including voice telephone traffic and IP packets. Previously, a number of different systems, including Frame Relay and ATM, were deployed with essentially identical goals. MPLS is now replacing these technologies in the marketplace because it is better aligned with current and future needs. MPLS abandons the cell switching and signaling protocols of ATM, recognizing that small ATM cells are not needed in modern networks: links are now so fast (10 Gbit/sec and above) that even full-length 1500-byte packets do not incur significant delays. At the same time, MPLS preserves the traffic engineering and out-of-band control needed for deploying large-scale networks. Originally created by Cisco as a proprietary protocol, it was renamed when it was handed over to the IETF for open standardization.
Summary and Conclusions
Successful operation of electric power systems and Distribution Automation are strongly dependent on communication technologies. As we have seen in this presentation, a careful analysis of business functions and the resultant communication requirements is a prerequisite to making good choices of communication technologies for DA. There is a large and growing list of communication technologies available to serve the needs of utility Distribution Automation systems.
Section 4 - Communication System Performance
Section 5 - Integrated Volt - VAR Control
Section 6 - Economic Evaluation Methods and Case Studies
Part A: Development and Evaluation of Alternate Plans
Part B: Economic Comparison of Alternate Plans
The text below supports the above presentation.
In general, automation is seen by all industries as a way to reduce cost and increase efficiency and staff safety. The electric utility industry is no different, and Hydro-Québec, like many other utilities, has implemented automated systems on its transmission network and in its generating stations. Today, all of Hydro-Québec's generating stations and transmission and distribution substations are fully automated, including the medium voltage breakers feeding the distribution system.
At the other end of the distribution system, most end-use industrial and commercial plants are highly automated. Even in homes, automation is implemented to improve energy efficiency and comfort, and the new domain of domestic robotics, or "domotics", is developing. But the average electricity distribution system has little automation. The main reason is that the technology to automate an electrical distribution system was expensive or not easily available. Recent developments in IT, including computer-based control cabinets, make distribution system automation more achievable than ever, and the development of these technologies is expected to continue in the near future.
The distribution industry, like any other, feels pressure to improve its efficiency. Energy and power are the backbone of the economy, and customers expect growing reliability and quality levels. Distribution automation is therefore seen as the next step in the electrical energy industry.
The first question that a distribution utility board should ask is "Is the current distribution system optimal?" or "Is the current distribution system design adapted to the present and future needs of the customers?" Of course the answer depends on the distribution utility's context. But a global approach is needed for distribution automation, using the investment in the system as leverage for multiple applications needing intelligence and telecommunications.
Global approach and vision of DA
First of all, it is worth defining what distribution automation is, because it can be interpreted in several ways. For some utilities, it can mean remote control of switches and breakers on MV feeders, remote control of capacitors on MV feeders, implementation of AMR/AMI systems, or a combination of some of these systems. For other utilities, distribution automation can even mean remote control of substation breakers.
Hydro-Québec’s vision of distribution automation is an integrated system of the following:
- Remote control of switches and breakers on MV feeders, including Distributed Energy Resources (DER)
- Volt and VARs Control system including remote controlling capacitors on MV feeders
- Fault location system
- Power Quality monitoring system
- Load side management
- Automatic reconfiguration for MV feeders
In the long term, allowing:
- DER to feed customers directly when a major outage occurs – Microgrids
- Direct access between the consumer and distribution system data such as consumer energy consumption, dynamic rates, etc. – Consumer data portal
This vision is described in a public report from Hydro-Québec that can be found on the IEEE Distribution Automation Working Group website, as are many other related documents.
Such a distribution system cannot be implemented in a few years. It is a system that has to be justified according to the distribution utility's business drivers and context (investment budget, regulation…), and it has to follow the evolution of technologies, customers' needs, and load growth.
It is important to look at the global value of the components of a distribution network. Fig. 1 below shows the asset value of distribution equipment on Hydro-Québec's distribution system by category (excluding equipment inside the substation and at the customers' premises, such as meters). This asset-value partition is representative of a typical distribution system. The infrastructure of Hydro-Québec's distribution system (underground structures, underground cables, overhead conductors, poles…) represents the major part of the value of the system (73 %). Transformers also have a significant share, at 19 %, and the switches and breakers on the distribution system, including both the MV part and the control cabinets, represent 8 % of the asset value. The control portion of the asset, including control cabinets and meters, represents less than 1 % of the total value. It is interesting to point out that investment in a distribution automation system targets a very small part of the asset in order to gain control and knowledge of the system's condition. Nevertheless, whatever the relative amount involved, the justification must be based on economic analysis.
Choice of equipment and standard cost
When building the business case, Hydro-Québec established some guiding principles for its vision. One of these principles is:
- The distribution network evolution must start from the existing network and gradually move toward an intelligent grid
Based on this principle Hydro-Québec decided to use the installed equipment on its distribution system as a basis to automate its system.
Two types of equipment are targeted by the remote control program: MV load-break switches and MV circuit breakers (reclosers). For the load-break switches, motorised control cabinets will be installed on the existing load-break switches already on the distribution system. For the reclosers, RTUs will be installed in the existing control cabinets. However, the complete distribution system architecture will be reviewed to optimise the location and type of equipment to remote control. In the long term the choice of equipment is expected to evolve, and Hydro-Québec has already started to look at alternative equipment to include in its program.
The standard costs have been established according to the different types of work needed:
- Remote control of existing MV switch
- Remote control of existing MV Breaker
- Adding a new remote control MV switch
- Adding a new remote control MV breaker
The cost of equipment modification ranges from 35 to 55 k$ CDN per site. When doing the different simulations, the appropriate cost must be included in the study according to each scenario. As in any economic study, costs must include maintenance, operation, and telecommunication costs over a long-term period; these costs also include replacement of the control cabinets every 10 years. All of these costs depend on local factors and can hardly be transposed from one utility to another.
For the telecommunication scheme, costs were built on the most conservative (most costly) scenario, a conventional dial-up telephone line. Studies are under way to identify more appropriate, lower-cost telecommunication systems.
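A per-site lifecycle cost of this kind can be sketched as a simple present-value calculation. The capital range (35–55 k$ CDN) and the 10-year cabinet replacement interval come from the text; the O&M figure, cabinet replacement cost, study horizon, and discount rate below are purely illustrative placeholders, not Hydro-Québec data.

```python
def site_lifecycle_cost(capital, annual_om, cabinet_cost,
                        horizon_years=30, discount_rate=0.06):
    """Present value of one remote-control site over a study horizon:
    up-front capital, annual O&M + telecom charges, and replacement of
    the control cabinet every 10 years (as stated in the text).
    All figures are hypothetical placeholders in k$ CDN."""
    pv = capital
    for year in range(1, horizon_years + 1):
        df = 1.0 / (1.0 + discount_rate) ** year  # discount factor
        pv += annual_om * df
        if year % 10 == 0 and year < horizon_years:  # cabinet swaps mid-horizon
            pv += cabinet_cost * df
    return pv

# e.g. 45 k$ capital, 2 k$/yr O&M + telecom, 15 k$ cabinet replacement
print(round(site_lifecycle_cost(45, 2, 15), 1), "k$ CDN present value")
```

A scenario comparison of the kind described above would run this calculation for each candidate cost assumption and compare the present values on a common basis.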
Analysis of benefits from DA
Hydro-Québec has identified six domains that a distribution automation program can influence (three with direct benefits and three with indirect benefits). These domains were identified from an external benchmark study.
- Service Continuity and Reliability
- Energy Efficiency
- Reduction in Labor Costs
- Carry-forward Investment
- Social Costs
- Information Management, Predictive Maintenance, Power Quality
Let us look at the different studies conducted for each of these benefits.
Service Continuity and Reliability
Economic comparison between conventional methods to improve reliability and Distribution Automation
The first study compares different schemes in order to identify the best way to improve reliability. A typical Hydro-Québec MV feeder was chosen and three schemes to improve its reliability were analyzed. Table 1 below gives the costs and SAIDI improvements of these schemes.
Table 1 Cost/benefit analysis of alternative schemes to improve reliability on a distribution system
|Scheme||Cost||SAIDI improvement|
|A – Increasing network robustness|| || |
|B – Division of feeder to reduce the number of customers per section||$600 – $1,500 k||0.67 – 1.11|
|C – Automated distribution line|| || |
The conclusion of this study is that conventional solutions such as "Increasing network robustness" or "Division of feeder to reduce the number of customers per section" are expensive and have a relatively low impact on the SAIDI compared to the cost of an "Automated distribution line". Distribution automation is therefore the best of the three schemes for improving the reliability of the distribution system. This scheme is also in accordance with the current industry trend, especially the recent industry roadmaps (CEATI Distribution Roadmap, January 2004, and EPRI Advanced Distribution, June 2004).
The next step is to decide how to implement the distribution automation system to maximize the benefits at the lowest possible cost. This exercise was done using in-house software that simulated the effect of different ways to implement distribution automation. The software recomputed reliability indices from two years of real outages on Hydro-Québec's distribution system (approximately 2,800 feeders) under nine implementation scenarios. The scenarios ranged from non-remote-controlled equipment, such as fault indicators and conventional reclosers, to a fully automatic reconfiguration system with remote control of switches and breakers across the whole system. In effect, the software answered the question "What would the SAIDI and SAIFI have been for each studied scenario?" Moreover, the software adds intelligence to optimize the number of devices installed for each scenario, so it was possible to compute the cost of the additional equipment needed for each one. Some costs were added to complete the economic studies (i.e. maintenance, the equipment including control cabinets, telecommunications when applicable, etc.). This allowed reliable comparison between scenarios, made on the same outage database and the same hypotheses. A summary of the more significant scenarios (5 of the original 9) is presented in Table 2.
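The indices the software recomputes follow the standard IEEE 1366 definitions, which can be sketched directly (the outage records below are made up for illustration):

```python
def reliability_indices(outages, customers_served):
    """IEEE 1366 definitions:
    SAIDI = sum(customers interrupted x outage duration) / customers served
    SAIFI = sum(customers interrupted) / customers served"""
    saidi = sum(n * hours for n, hours in outages) / customers_served
    saifi = sum(n for n, _ in outages) / customers_served
    return saidi, saifi

# hypothetical year of outages on a small system: (customers affected, hours)
outages = [(1200, 3.0), (400, 1.5), (2500, 0.5)]
saidi, saifi = reliability_indices(outages, customers_served=5000)
print(f"SAIDI = {saidi:.2f} h, SAIFI = {saifi:.2f} interruptions")
# SAIDI = (1200*3.0 + 400*1.5 + 2500*0.5) / 5000 = 1.09 h
```

Replaying the same outage records while shortening the durations that remote-controlled switching would have cut (as the in-house software does) yields the "what would the SAIDI have been" comparison between scenarios.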
Analysis of the different scenarios showed that implementing additional local automated systems such as reclosers or non-remote-controlled fault indication improved the SAIDI and SAIFI only marginally (below 5 %). This is partly because Hydro-Québec's distribution system already has reclosers.
Table 2 Technical evaluation of different scenarios to implement distribution automation
|Scenario||SAIDI (h)||SAIDI improvement||SAIFI improvement|
|1 – Remote fault indication only||1.96||4.6%||0.00%|
|2 – Optimized recloser installation (1 per feeder) without remote control||1.98||3.6%||3.55%|
|3 – Remote control of actual switches and breakers||1.61||21.6%||0.00%|
|4 – Remote control of actual switches and breakers and addition of breakers when needed||1.60||22.1%||3.55%|
|5 – Remote control of actual switches and breakers, addition of breakers when needed and automatic reconfiguration|| || || |
As soon as a remote-controlled system is implemented (Scenarios 3 to 5), the SAIDI improvement jumps above the 20 % level. This number was confirmed by the different pilot projects that Hydro-Québec carried out over the last 20 years, in which SAIDI improvement was measured at between 13 and 20 %.
Economic selection of feeders to automate
With the reliability improvement known feeder by feeder for each scenario, additional studies were made to establish which feeders showed the greatest reliability improvement, in order to limit the Distribution Automation program to the most profitable feeders.
By selecting these feeders, Hydro-Québec wanted to reduce the reliability disparity between customers. The reliability index (SAIDI) has reached a stable point, but it remains unequal among customers. Since 1999, the SAIDI has held stable at 2 hours per customer per year, although 15 % of customers have a SAIDI higher than 4 hours. Outages remain a major concern to customers, and these concerns are brought to the regulator. Distribution Automation will target a reduction of outage duration in selected sectors to reduce the SAIDI and improve Hydro-Québec's position in the North American market.
Feeder back-up possibilities
To benefit from remote control, the selected feeders must have at least one back-up feeder. In the theoretical study, many radial feeders were found to have an unacceptable SAIDI. Those feeders, even assuming they already have on-line breakers and switches, will not be improved significantly by remote control since no back-up is available. This is why the feeders to be automated must be selected in groups of 3 to 4 feeders tied together, allowing system reconfiguration through remote-control operations. This implies that some feeders within a group may not have been identified as needing an upgrade by the global study, but they will nevertheless have to be automated to ensure back-up capability.
To be effective, the feeders selected to improve reliability must have access to the telecommunication system. This is why the selected feeders were cross-checked against telecommunication availability to make sure the selection is viable. At this point only conventional telephone lines are considered in the scenario. A more complete telecommunication study is under way to determine the best telecommunication technologies to implement for Distribution Automation in Québec.
Distribution system Energy Efficiency
Some systems are already used to improve energy efficiency on several distribution systems in North America through the remote control of shunt capacitors. Hydro-Québec's distribution system has few capacitors installed; a study is under way to justify the implementation of fixed, locally controlled, or remotely controlled capacitors on its system.
On the other hand, some other distribution companies reduce the voltage on their systems to increase efficiency and reduce the cost of buying energy at the market marginal price. Hydro-Québec Distribution completed a pilot project in 2005 to identify the benefit of reducing the voltage on its system. The conclusion of this pilot project is impressive: if applied to the entire Hydro-Québec distribution system, a total of 2 TWh could be saved. With this evaluation, the benefit of this single project could pay for the complete distribution automation program. Validations of this project are currently under way. Hydro-Québec foresees a global system energy efficiency program combining capacitor installation and voltage control. Involving sensors, telecommunications, and some form of intelligence, this application is considered part of an automated distribution system.
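Voltage-reduction savings of this kind are commonly estimated with a "CVR factor" rule of thumb (energy saved ≈ CVR factor × % voltage reduction × annual energy delivered). The sketch below uses illustrative inputs only; the annual-energy figure and the reduction percentage are assumptions, not Hydro-Québec's evaluation method or data.

```python
def cvr_energy_savings_twh(annual_energy_twh, voltage_reduction_pct, cvr_factor=0.8):
    """Rule-of-thumb conservation-voltage-reduction (CVR) estimate:
    energy saved = CVR factor x (% voltage reduction) x annual energy.
    The CVR factor (typically ~0.4 to 1.0) and the inputs used below
    are illustrative assumptions, not Hydro-Quebec figures."""
    return annual_energy_twh * (voltage_reduction_pct / 100.0) * cvr_factor

# e.g. a hypothetical 170 TWh of annual deliveries and a 1.5 % average reduction
print(round(cvr_energy_savings_twh(170, 1.5), 2), "TWh saved")
```

With plausible inputs of this order of magnitude, the rule of thumb lands in the low-TWh range, which is consistent in scale with the pilot-project conclusion quoted above.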
Reduction in Labor Costs
Obviously a distribution automation system reduces the cost incurred by restoration crews in locating an outage. Moreover, for planned outages, remote control of the switches will also reduce costs by avoiding unnecessary travel. For Hydro-Québec's system, a review of internal logbooks and linemen's time sheets led to an estimated saving of 20,000 person-hours, or $4.3 M CDN, per year.
Carry-forward Investment
Distribution Automation has been used by some utilities to defer capital investment. For example, installation of transmission equipment has been deferred because distribution automation allows quick transfers of loads on the distribution network. For now, one project is under study at Hydro-Québec to take advantage of distribution automation to reduce the capital investment of a transmission line, but no figures are available at this time.
Social Costs
A distribution automation program such as Hydro-Québec is implementing has an impact on customers' productivity. Reducing the length and the number of outages will reduce the "social cost" associated with outages. The Regulator has shown interest in the social costs borne by customers. Thus, Hydro-Québec commissioned an external benchmark of the different evaluation methods and applied them to its situation. Table 3 below gives the results of this study.
A total of five different social cost evaluation methods (four external and one from Hydro-Québec) were applied to the province of Québec. The external methods came from Électricité de France, IEEE 493-1997, Population Research System, and the University of Saskatchewan.
Using the different methods, a range of avoided cost ($70 to $170 M CDN) was evaluated. These figures were presented to the Regulator as the best that can be given with the present state of the art.
Information Management, Predictive Maintenance, Power Quality
The Distribution Automation program will provide additional information from the network that will lead to further performance improvements and cost reductions. Predictive maintenance, power quality improvement, and better planning practices will result from this information. For the moment, the only cost saving evaluated in relation to distribution network information is a $300 k CDN reduction in power-quality-related claims that could come from better knowledge of power quality on the distribution system, but large benefits are expected from this global information system.
Building up the business case from all these benefits
Once the technical studies are completed, the communication and presentation of the business case is very important and should not be neglected. Challenging the content of the studies from a non-technical point of view and reviewing the documents and presentations is essential. Although distribution automation is a highly technical project, its presentation should be simplified, without removing the essential arguments, so that non-technical managers or regulators can understand the rationale of the business case. It is recommended that a multidisciplinary team review the business case before presentation to the decision makers.
For Hydro-Québec, the authorized program is to install 3,750 remote-control devices on approximately 1,000 feeders (one third of Hydro-Québec's distribution system). The program will cost $188 M CDN over a 6-year period. In general, the program installs control cabinets on existing MV switches and breakers, but additional equipment is also included when needed.
The final result is that the number of customers with a SAIDI of more than 4 hours will be reduced by 50 % by the Distribution Automation program. The overall SAIDI of the Hydro-Québec distribution network will improve by 13 %.
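For readers unfamiliar with the index, SAIDI (System Average Interruption Duration Index, per IEEE Std 1366) is the total customer interruption duration divided by the number of customers served. The sketch below uses entirely hypothetical outage records and customer counts; only the 13 % improvement factor comes from the program target above.

```python
# Minimal sketch of the standard SAIDI/SAIFI reliability indices.
# Outage records and customer counts below are hypothetical.

def saidi(interruptions, total_customers):
    """SAIDI = sum of (customers affected x duration) / customers served."""
    return sum(n * hours for n, hours in interruptions) / total_customers

def saifi(interruptions, total_customers):
    """SAIFI = total customer interruptions / customers served."""
    return sum(n for n, _ in interruptions) / total_customers

# (customers affected, outage duration in hours) for a hypothetical year
before = [(1200, 3.0), (800, 5.5), (2500, 1.2)]
total_customers = 10_000

base_saidi = saidi(before, total_customers)          # 1.10 h per customer
improved_saidi = base_saidi * (1 - 0.13)             # apply 13 % improvement
print(f"SAIDI before: {base_saidi:.2f} h, after: {improved_saidi:.2f} h")
```

Tracking the distribution of interruption duration per customer, not just the average, is what allows a target such as "halve the number of customers above 4 hours" to be verified.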
This complete study was the basis of the justification presented to, and authorized by, the Regulator (Régie de l'Énergie du Québec) in July 2005. The presented arguments also noted that this initial program is the foundation of the future intelligent network.
Impact of Integration of Projects and Equipment
Implementation of the Distribution Automation program begins with the equipment available today. But as technology evolves, utilities have to consider integration of technologies to reduce cost. For example, smart meters installed in an AMI program could also provide information on power quality. A telecommunication link installed for Distribution Automation purposes can serve many functions (remote control, equipment diagnosis, live PQ measurement, etc.). Hydro-Québec built its Distribution Automation roadmap around this integration of equipment.
Integration will reduce the cost of the overall system. But to be efficient, integration must be achieved through interoperability and interchangeability standards that are yet to be written for the distribution system. This is why an effort has to be made by utilities and manufacturers to write these standards, based on what has been done for transmission equipment or even in other industries.
The goal is to have equipment designed on a "Plug and Play" concept, as in the computer business. Meanwhile, utilities will have to move forward with the available equipment and perform their own integration to reduce cost, while keeping an eye on the evolution of standards.
Distribution automation is the next evolution of the electrical energy system. Hydro-Québec made a thorough economic and technical analysis of all the aspects surrounding the implementation of Distribution Automation. This study covers direct and indirect benefits, pushing to the limit the knowledge of each benefit as applied to Hydro-Québec's situation.
The main drivers to justify the Distribution Automation program to the Regulator were:
- Reducing the disparity of reliability among Hydro-Québec Distribution customers
- Social cost
The project incorporated a broader view of distribution automation than remote control of switches and breakers. This vision is now taking shape as pilot projects (covering volt/VAR control and fault location) are under way, based on economic justification. In the future, subsequent projects will benefit from the computer-based equipment installed for the other Distribution Automation projects by integrating additional applications. As more integrated projects are added to the global distribution automation system, the reduced cost of integration will prove the economic justification of the vision.
- Hydro-Québec's Distribution System Automation Roadmap – 2005–2020, October 2005.
- Distribution Automation Benchmarking, performed for Hydro-Québec by EnerNex Corporation, July 2004.
- CEATI Report T024700-5036, "Electric Distribution Utility Roadmap".
- EPRI Report 1010915, Technical Requirements for Advanced Distribution Automation, June 2004.
- Cost of Outages, performed for Hydro-Québec by EnerNex Corporation, October 2004.