EMS Outages and Lessons Learned: Understanding Critical Tools and Failures

Upon completing this course, you will recognize typical causes and failure modes for Energy Management Systems, identify the importance of tools, understand critical EMS applications, recognize operators' roles in problem identification and reporting, and identify system operation procedures during failures. Learn about EMS failures, communication issues, tools like SCADA and RTU, common restoration themes, and more. Definitions include SCADA, EMS, RTNET, and others. Explore tools such as SCADA, ICCP, RTCA, SCED, VSAT, and TSAT. Dive into an overview of ERCOT EMS, highlighting system reliability, redundancy, and support staff availability.





Presentation Transcript


  1. EMS Outages and Lessons Learned (TDSP), 2014 ERCOT Operations Training Seminar, Texas Reliability Entity. Jagan Mandavilli, Bob Collins, Mark Henry

  2. Objectives Upon completing this course of instruction, you will:
  • Recognize the typical causes and failure modes of Energy Management Systems (EMS) and their tools
  • Identify the importance of some of the tools TDSPs use
  • Identify the EMS applications critical to your operation
  • Recognize the TDSP operator's role in identifying problems and reporting EMS failures
  • Identify components of the procedures for operating the system during EMS failures

  3. Content
  • EMS Failures
  • Communication and control (EMS) failures
  • Inter-Control Center Communication Protocol (ICCP) failures
  • Remote Terminal Unit (RTU) issues
  • EMS application failures: State Estimator (SE), RTCA, VSAT, TSAT, SCADA
  • Backup control center operation
  • Loss of operator user interface
  • EMS failures due to database updates
  • Training and live EMS screens on the same display
  • Analysis of restorations: contributing and root causes with examples; common themes with examples

  4. Definitions
  • SCADA: Supervisory Control And Data Acquisition
  • EMS: Energy Management System
  • RTNET: Real-Time Network Analysis
  • RTCA: Real-Time Contingency Analysis
  • VSAT: Voltage Stability Analysis Tool
  • TSAT: Transient Stability Analysis Tool
  • ICCP: Inter-Control Center Communications Protocol
  • RTU: Remote Terminal Unit
  • EAS: Event Analysis Subcommittee
  • EMSTF: Energy Management Systems Task Force

  5. Tools and Their Importance: SCADA, ICCP, RTNET, RTCA, SCED, VSAT, TSAT

  6. ERCOT EMS Overview

  7. EMS Reliability
  • EMS are extremely reliable, with extremely high industry-wide availability
  • Systems usually have redundancy; multiple systems are common, with on-the-fly failover
  • Backup control centers, sometimes manned
  • Communications circuits on highly redundant ring networks
  • Data handling has built-in error detection and correction
  • Support staff available 24 x 7

  8. What Do EMS Problems Look Like? Trends flatline, data no longer updates, color changes, alarms, strange application results, lockup of applications, loss of visibility.
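
A flatlined trend can be spotted automatically as well as by eye. The sketch below is illustrative only and is not part of the presentation: it flags a telemetry point whose last N samples are identical, which often means the value has stopped updating. The window size, point values, and function name are hypothetical.

```python
def is_flatlined(samples: list, window: int = 10) -> bool:
    """True if the most recent `window` samples are all identical.

    A constant analog value over many consecutive scans is a common sign
    that the underlying telemetry has stopped updating (a "flatlined" trend).
    """
    recent = samples[-window:]
    return len(recent) == window and len(set(recent)) == 1

if __name__ == "__main__":
    healthy = [401.2, 401.5, 400.9, 401.1, 401.3, 401.0, 400.8, 401.4, 401.2, 401.1]
    stuck = [399.7] * 10  # value frozen for 10 consecutive scans
    print("healthy flatlined?", is_flatlined(healthy))  # False
    print("stuck flatlined?  ", is_flatlined(stuck))    # True
```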

  9. NERC EMS Failure Event Analysis
  • NERC and regional personnel examined the events
  • 81 Category 2b events reported (Oct 26, 2010 to Sep 3, 2013)
  • 64 events thoroughly analyzed and reviewed
  • 54 entities reporting; 20 entities experienced multiple outages
  • Restoration time for partial outages: 18 to 411 min
  • Restoration time for complete outages: 12 to 253 min
  • Vendor diagnostic failures; software and hardware issues
  • Several noticeable themes

  10. NERC Lessons Learned from EMS Events #1: Remote Terminal Units Not on DC Sources. The power supply to an RTU for a High Voltage Direct Current (HVDC) converter station was not designed to be fed from station batteries, resulting in a loss of the RTU when all AC feeds to the substation were lost during an event. Lesson Learned: While the availability of multiple AC sources provides a high degree of reliability for RTUs, entities should evaluate the practicality and feasibility of powering RTUs needed for control, situational awareness, system restoration, and/or post-event analysis from the station batteries.

  11. NERC Lessons Learned from EMS Events #2: EMS System Outage and Effects on System Operations. An entity's EMS began to lose data necessary for visibility of portions of its transmission network, causing functionality and/or solution interruptions for some of its EMS operational tools. No loss of load occurred during this event, and it was quickly determined not to be a cyber security event. Lessons Learned: All entities should have a procedure, such as Conservative Operations, that provides the possible steps they may have to take to ensure reliability. Training should be conducted routinely on all procedures, especially those related to low-probability, high-impact events, regardless of how often the procedures are used.

  12. NERC Lessons Learned from EMS Events #3: EMS Loss of Operator's User Interface Application. A control center experienced a loss of control and monitoring functionality of the EMS due to the loss of the operator's user interface application between its primary EMS computer/host server and the system operator consoles. Lessons Learned: Create a save case of settings before and after any change to the system is made; the save case supplies the documentation needed to perform comparisons. Analyze EMS performance periodically and evaluate whether the system is meeting its needs as designed and intended.

  13. NERC Lessons Learned from EMS Events #4: SCADA Lockup. A Transmission Owner's (TO) control center experienced a SCADA failure that resulted in a loss of monitoring functionality for more than thirty minutes. Lessons Learned: Transmission Operators (TOPs) and TOs should install a heartbeat monitor alarm to detect stale or stagnant data. The mismatch thresholds for state estimator alarming should be evaluated periodically for each operating area, allowing optimum sensitivity while minimizing false mismatch alarms.
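
To illustrate the heartbeat idea, here is a minimal, hypothetical sketch (not from the presentation or any vendor product). It assumes each monitored point exposes the timestamp of its last update and raises an alarm when data stops refreshing; `STALE_LIMIT` and `check_heartbeat` are illustrative names only.

```python
from datetime import datetime, timedelta

# Hypothetical staleness limit: alarm if a point has not refreshed in 2 minutes.
STALE_LIMIT = timedelta(minutes=2)

def check_heartbeat(points: dict, now: datetime) -> list:
    """Return the names of telemetry points whose data appears stale.

    `points` maps a point name to the timestamp of its last update,
    as might be tracked by a SCADA front end or an external watchdog.
    """
    return [name for name, last_update in points.items()
            if now - last_update > STALE_LIMIT]

if __name__ == "__main__":
    now = datetime.utcnow()
    telemetry = {
        "LINE_345KV_MW": now - timedelta(seconds=10),  # healthy
        "BUS_138KV_KV": now - timedelta(minutes=7),    # stale / stagnant
    }
    for point in check_heartbeat(telemetry, now):
        print(f"ALARM: stale data on {point}")         # operator-facing alarm
```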

  14. NERC Lessons Learned from EMS Events #5: Failure of EMS Due to Over-Utilization of Disk Storage. Loss of control functionality occurred because the hard disk on the SCADA server was fully utilized. Lessons Learned: SCADA equipment monitoring should include monitoring of hard disk storage utilization. Purging processes should be set up to perform periodic cleanup of disk space.
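
As a hedged illustration only (not part of the presentation), the sketch below checks disk utilization on a server and purges the oldest archive files once a threshold is crossed. The 85% threshold, the archive path, the file pattern, and the retention count are all hypothetical.

```python
import shutil
from pathlib import Path

# Hypothetical values for illustration.
ARCHIVE_DIR = Path("/var/scada/archive")
USAGE_ALARM_PCT = 85.0   # alarm and purge when the disk is more than 85% full
KEEP_NEWEST = 500        # retain at most this many archive files

def disk_usage_pct(path: Path) -> float:
    """Percentage of the filesystem containing `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def purge_oldest(archive_dir: Path, keep: int) -> int:
    """Delete the oldest archive files, keeping the `keep` newest; return the count removed."""
    files = sorted(archive_dir.glob("*.dat"), key=lambda f: f.stat().st_mtime, reverse=True)
    removed = 0
    for old_file in files[keep:]:
        old_file.unlink()
        removed += 1
    return removed

if __name__ == "__main__":
    if disk_usage_pct(ARCHIVE_DIR) > USAGE_ALARM_PCT:
        print("ALARM: SCADA server disk utilization high; purging old archives")
        print(f"Removed {purge_oldest(ARCHIVE_DIR, KEEP_NEWEST)} files")
```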

  15. NERC Lessons Learned from EMS Events #6: Indistinguishable Screens During a Database Update Led to Loss of SCADA Monitoring and Control. During a planned database update and failover, an EMS operations analyst inadvertently changed an online SCADA server's database mode from remote (online) to local (offline local copy), which caused a loss of SCADA monitoring and control of Bulk Electric System (BES) facilities. Lessons Learned: Changing the database mode on a server is not recommended. A future release of the EMS software should eliminate the ability to switch database modes on a server.

  16. NERC Lessons Learned from EMS Events #7: Inappropriate System Privileges Cause Loss of SCADA Monitoring. An entity experienced a loss of SCADA telemetry, specifically a loss of the channel status indicators for 76% of its transmission system. The problem occurred during the implementation of a scheduled SCADA database update that left one of the front-end processors in an abnormal state. An incorrect command was used to remedy the situation, which resulted in the channel status indicators being set to a failed state. Lessons Learned: Entities should consider reviewing change-management training to ensure it includes a checklist of the required steps, and educating SCADA support staff on the global impact of commands on the entire SCADA system.

  17. NERC Lessons Learned from EMS Events #8: Loss of EMS, IT Communications Disabled. Transmission system operators lost the ability to authenticate to the EMS, resulting in a loss of monitoring and control functionality for more than 30 minutes. Lessons Learned: EMS network design should include, where possible, a redundant local authentication server on the same internal network as the primary local authentication server.

  18. NERC Lessons Learned from EMS Events #9: SCADA Failure Resulting in Reduced Monitoring Functionality. An entity's primary control center SCADA Management Platform (SMP) servers became unresponsive, resulting in a partial loss of monitoring and control functions for more than 30 minutes. Because the loss of functionality was caused by a conflict between security software configuration changes and core operating system functions, a cyber-security event was quickly ruled out, and no loss of load occurred during this event. Lessons Learned: Registered entities should consider a multi-site hosting configuration, which provides flexibility and convenience for rapid recovery of EMS and SCADA functions.

  19. NERC Lessons Learned from EMS Events #10: Failure of Energy Management System While Performing Database Update. The EMS failed while a database update was being performed. Lessons Learned: When the EMS was purchased, the vulnerability of an integrated system architecture was unknown. To eliminate this now-exposed vulnerability, it is recommended that functional separation of the primary and backup control centers be implemented.

  20. Number of Reports, October 26, 2010 to September 3, 2013 [bar chart of event report counts over the period]

  21. Characteristics of EMS Outages [bar chart comparing outages on weekdays vs. weekends, outages following CIP vs. non-CIP activity, and outages due to planned vs. unforeseen activity]

  22. Root Causes by Category
  • A4 Management/Organization: 30%
  • A2 Equipment/Material: 25%
  • AZ Information LTA: 20%
  • A1 Design/Engineering: 16%
  • A5 Communication: 5%
  • A6 Training: 2%
  • A3 Individual Human Performance: 2%

  23. Contributing Causes by Category
  • A2 Equipment/Material: 32%
  • A4 Management/Organization: 28%
  • A1 Design/Engineering: 18%
  • A3 Individual Human Performance: 9%
  • A5 Communication: 6%
  • AX Overall Configuration: 5%
  • A7 Other: 2%

  24. Top Root/Contributing Causes (in order)
  • Software failure (A2B6C07)
  • Design output scope LTA (A1B2C01)
  • Inadequate vendor support of change (A4B5C03)
  • Testing of design/installation LTA (A1B4C02)
  • Defective or failed part (A2B6C01)
  • System interactions not considered (A4B5C05)
  • Inadequate risk assessment of change (A4B5C04)
  • Insufficient job scoping (A4B3C08)
  • Post-modification testing LTA (A2B3C03)
  • Inspection/testing LTA (A2B3C02)
  • Attention given to wrong issues (A3B3C01)
  • Untimely corrective actions to known issue (A4B1C08)

  25. Common Themes 1. Software failures 2. Software configuration/installation/maintenance 3. Hardware failures 4. Hardware configuration/installation/maintenance 5. Failover testing weaknesses 6. Testing inadequacies

  26. Software Failures: What Is Affected?
  • Application software bug/defect: base system (alarms/health check/syncing, etc.), front-end processing, supervisory control applications (SCADA), ICCP, user interface (UI), relational database management systems (RDBMS), build process scripts, miscellaneous scripts
  • Communication equipment firmware/software bug/defect: RTUs, switches, modems, routers, firewalls
  • Operating system software bug/defect: Unix/Linux/Windows

  27. Hardware Failures
  • Application servers/nodes: network interface cards, server hard drive control board, auxiliary power regulator control
  • Communication equipment: RTUs, switches, routers, firewalls, fiber optic cables
  • Time source
  • Power sources: uninterruptible power supply (UPS), external generators, power cables

  28. Failover Testing Weaknesses
  • Improper settings preventing the failover
  • Improper procedure to fail over
  • System setup issues preventing failover
  • Improper patch management between primary/spare/backup servers
  • Primary server issues reflected on the spare/backup as well (no isolation)
  • Improper failover configuration settings
  • Improper network device configuration settings for failover
  • Design requirements not considering failover
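
One of the weaknesses above, mismatched patch levels between primary and backup servers, lends itself to a simple automated check. The sketch below is purely illustrative and assumes the patch inventories have already been collected into dictionaries; the function name and sample package names are hypothetical.

```python
def compare_patch_levels(primary: dict, backup: dict) -> list:
    """Return human-readable discrepancies between two servers' package versions."""
    issues = []
    for package, version in primary.items():
        backup_version = backup.get(package)
        if backup_version is None:
            issues.append(f"{package} missing on backup (primary has {version})")
        elif backup_version != version:
            issues.append(f"{package}: primary {version} vs backup {backup_version}")
    for package in backup.keys() - primary.keys():
        issues.append(f"{package} present only on backup")
    return issues

if __name__ == "__main__":
    # Hypothetical inventories, e.g. gathered from each server's package manager.
    primary = {"ems-core": "7.2.1", "scada-fe": "3.4.0", "openssl": "1.0.2k"}
    backup = {"ems-core": "7.2.1", "scada-fe": "3.3.9"}
    for issue in compare_patch_levels(primary, backup):
        print("PATCH MISMATCH:", issue)
```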

  29. Testing Inadequacies: inadequate testing, improper test procedures, incomplete scope, and not engaging all the parties involved.

  30. Software and Hardware Categories and Restoration Times [bar chart of mean outage restoration time in minutes and event count for the categories Hardware C/I/M, Hardware Failure - Com, Hardware Failure - Power, Hardware Failure - Server, Software Failure - App, Software Failure - Com, and Software C/I/M]

  31. Historical Failure Restoration Data. Mean complete outage restoration time: 56 minutes. Mean partial outage restoration time: 43 minutes. Mean total outage restoration time: 99 minutes. [Chart of complete and partial outage restoration times for each reported event, with the mean values overlaid]

  32. Lessons Learned: Publish information about problems and solutions. NERC continues its review of events with a working group of stakeholders and Regional personnel. A Situational Awareness workshop was held in June 2013, with future workshops planned. Dialogue with vendors continues to inform and improve.

  33. Reporting Requirements: NERC Standard EOP-004-2
  • Complete loss of voice communication capability affecting a Bulk Electric System (BES) control center for 30 continuous minutes or more (same as Category 2a of the EAP)
  • Complete loss of monitoring capability affecting a BES control center for 30 continuous minutes or more, such that analysis capability (i.e., State Estimator or Contingency Analysis) is rendered inoperable (similar to Category 2b of the EAP)
  • Report to ERCOT, TRE, NERC, and DOE per the TRE web link: http://www.texasre.org/Reliability/EOP-004disturbancereports/Pages/Default.aspx

  34. Reporting Requirements: NERC Events Analysis
  • Category 1f: Unplanned evacuation from a control center facility with Bulk Power System (BPS) SCADA functionality for 30 minutes or more
  • Category 1h: Loss of monitoring or control at a control center such that it significantly affects the entity's ability to make operating decisions for 30 continuous minutes or more. Examples include, but are not limited to: loss of operator ability to remotely monitor or control BES elements, or both; loss of communications from SCADA RTUs; unavailability of ICCP links reducing BES visibility; loss of the ability to remotely monitor and control generating units via AGC; unacceptable State Estimator or Contingency Analysis solutions

  35. What Can Operators Do?
  • Watch for failures and unexpected situations
  • Determine the criticality and impact to the reliability of the grid
  • Promptly report the failures
  • Log the date/time of the failure, a description of alarms/events, and the time of system/function restoration
  • Expect EMS failures and prepare to react
  • Have the necessary backup procedures in place and be familiar with them
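
As a purely illustrative aid, not part of the presentation, the items an operator is asked to log can be captured in a small structured record; the class and field names below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class EmsFailureLogEntry:
    """Minimal record mirroring the items operators are asked to log."""
    failure_time: datetime                      # date/time the failure was observed
    description: str                            # alarms/events seen by the operator
    affected_tools: list = field(default_factory=list)  # e.g. ["SCADA", "RTCA"]
    restored_time: Optional[datetime] = None    # time of system/function restoration

# Hypothetical usage: open the entry at failure, fill in the restoration time later.
entry = EmsFailureLogEntry(
    failure_time=datetime(2014, 2, 1, 14, 32),
    description="RTCA results stale; ICCP link alarms received",
    affected_tools=["RTCA", "ICCP"],
)
entry.restored_time = datetime(2014, 2, 1, 15, 10)
print(entry)
```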

  36. ERCOT Procedures
  • Analysis Tool Outages section of the ERCOT Transmission and Security Desk Procedures (Section 3.3)
  • Respond to Miscellaneous Issues section of the ERCOT Transmission and Security Desk Procedures (Section 10.1)
  • Telemetry and Communications (Operating Guide Section 7)
  • Failover procedure
  • Loss of ICCP

  37. ERCOT Procedures on Telemetry (Sect 10.1, Transmission & Security Desk, Feb. 2014): Telemetry issues that could affect SCED and/or LMPs
  • IF there is a telemetry issue, THEN ensure the appropriate Control Room personnel are aware of the issue and instruct the TO/QSE to correct the issue.
  • IF the TO/QSE cannot fix the issue in a timely manner, THEN ask the TO/QSE to override the bad telemetry.
  • IF, for some reason, the TO/QSE cannot override the bad telemetry, THEN notify the Operations Support Engineer to work with the TO/QSE and/or the ANA group to override the bad telemetry.

  38. ERCOT Transmission Desk Procedures for Loss of SE/RTNET (summary of Section 3.3, Feb. 2014)
  1. If SE/RTNET has not solved within the last 15 to < 30 minutes: continue to monitor the system, notify the Operations Support Engineer (OSE), and refer to Desktop Guide Trans. Desk 2.1 and run through the checklist.
  2. Must be completed within 30 minutes of the tool outage: notify the two master QSEs that represent nuclear plants that ERCOT's SE is not functioning and is expected to be functional within approximately [# minutes].
  3. If NOT solved within the last 30 minutes, make an Advisory Hotline call to the TOs.
  4. Notify the Real-Time operator to make a hotline call to the QSEs.
  5. If the tool is unavailable for an extended period of time or topology changes occur, request the OSE run manual studies to ensure system reliability.
  6. Post an Advisory message on MIS Public, and log all actions.
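
To make the timing of those steps concrete, here is a hedged, illustrative sketch, not ERCOT software, that maps the minutes elapsed since the last good SE/RTNET solution onto the escalation steps summarized above. The thresholds follow the procedure text; everything else (the function name, the action strings) is hypothetical.

```python
def se_outage_actions(minutes_out: float) -> list:
    """Map minutes since the last good SE/RTNET solution to the summarized escalation steps."""
    actions = []
    if 15 <= minutes_out < 30:
        actions.append("Continue monitoring; notify OSE; run Desktop Guide Trans. Desk 2.1 checklist")
        actions.append("Notify the two master nuclear-plant QSEs (must complete within 30 minutes)")
    elif minutes_out >= 30:
        actions.append("Advisory Hotline call to TOs; Real-Time operator hotline call to QSEs")
        actions.append("If extended outage or topology changes occur, request OSE manual studies")
        actions.append("Post Advisory on MIS Public and log actions")
    return actions

if __name__ == "__main__":
    for elapsed in (10, 20, 45):  # minutes without a valid solution
        print(f"{elapsed} min: {se_outage_actions(elapsed) or ['no escalation yet']}")
```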

  39. ERCOT Procedures for Loss of Analysis Tools: Similar procedures are followed for other tools, per the Transmission Operations Desk Procedure (Feb 2014).
  • SE/RTNET: less-than time limit 15 thru 30 min; greater-than time limit 30 min
  • RTCA: less-than time limit 15 thru 30 min; greater-than time limit 30 min
  • TSAT: less-than time limit 15-20 min; greater-than time limit 30-35 min; notes: manual studies, notify Oncor, no general advisory issued
  • VSAT: less-than time limit 15-20 min; greater-than time limit 30-35 min; notes: manual studies, advisory to TOs, request topology change notification

  40. Other References
  • ERCOT Nodal Protocols, Sect 3.10
  • ERCOT Nodal Operating Guides, Sect 7
  • ERCOT State Estimator Standards
  • ERCOT Telemetry Standards
  • ERCOT Operating Procedure Manual, Shift Supervisor Desk, Sect 10
  • NERC Events Analysis Process
  • NERC Standard EOP-004-2
  • NERC EMS Task Force

  41. Credits Much of the information contained in this presentation was previously published by the North American Electric Reliability Corporation (NERC) in a variety of publications. It is the result of an extensive review of actual power system events over a two-year period by the EMS Event Task Force. Questions?

  42. EXAM: Please turn your iClicker on and answer each of the following questions.

  43. 1. Which of the following Operator tools can lead to EMS failures? a) SCADA b) ICCP c) RTNET d) All of the above

  44. 2. What is the top root/contributing cause of EMS failures? a) Inadequate vendor support b) Hardware failure c) Inadequate testing d) Software failure

  45. 3. What action should ERCOT take for a loss of State Estimator solution longer than 30 minutes? a) Monitor frequency and hope for the best b) Make a Hotline call to issue an Advisory to the TOs c) OOME Up units d) RUC units offline

  46. 4. Which of the following steps should an Operator take during an EMS failure? a) Promptly report the failures b) Determine the criticality and impact of the failure to the reliability of the grid c) Log the date/time of the failure d) Implement backup procedures e) All of the above

  47. 5. What is the NERC Standard that requires reporting of EMS failures? a) NERC Events Analysis Process b) NERC Standard TOP-001-1 c) NERC Standard EOP-004-2 d) NERC EMS Task Force
