Lessons Learned from EMS Failures in ERCOT Operations

This content discusses the typical causes and failure modes of Energy Management Systems (EMS) along with the importance of various tools used by Qualified Scheduling Entities (QSEs). It covers the critical EMS applications, operator roles in problem identification and reporting, and procedures during system failures. Additionally, it addresses communication and control failures, SCADA issues, restoration analysis, and key definitions in the EMS field.

Presentation Transcript


  1. EMS Outages and Lessons Learned. QSE 2014 ERCOT Operations Training Seminar. Texas Reliability Entity: Jagan Mandavilli, Bob Collins, Mark Henry

  2. Objectives Upon completing this course of instruction, you will: Recognize the typical causes and failure modes of Energy Management Systems (EMS) and their tools; Identify the importance of some of the tools QSEs use; Identify the EMS applications critical to your operation; Recognize the QSE operator's role in identifying problems and reporting EMS failures; Identify the components of the procedures for operation of the system during EMS failures

  3. Content EMS Failures; Communication and Control (EMS) Failures: Inter-Control Center Communications Protocol (ICCP) failures, Remote Terminal Unit (RTU) issues; EMS Applications Failures: Automatic Generation Control (AGC) failure, SCADA failures, backup control center operation, loss of Operator User Interface, EMS failures due to database updates, training and live EMS screens on the same display; Analysis of Restorations: contributing and root causes with examples, common themes with examples

  4. Definitions SCADA: Supervisory Control And Data Acquisition; EMS: Energy Management System; AGC: Automatic Generation Control; LFC: Load Frequency Control; ICCP: Inter-Control Center Communications Protocol; RTU: Remote Terminal Unit; EAS: Event Analysis Subcommittee; EMSTF: Energy Management Systems Task Force; SCED: Security Constrained Economic Dispatch

  5. Tools and Their Importance SCADA, AGC/LFC, ICCP, SCED

  6. ERCOT EMS Overview

  7. EMS Reliability EMSs are extremely reliable: extremely high industry-wide availability; systems usually have redundancy; multiple systems are common, with on-the-fly failover; backup centers, sometimes manned; communications circuits on highly redundant ring networks; data handling has built-in error detection and correction; support staff available 24x7
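
The redundancy described on this slide usually rests on automated failover driven by health checks. The sketch below is a minimal illustration of that pattern only; the EmsServer class, the three-missed-checks threshold, and the check interval are assumptions for the example, not any vendor's actual design.

```python
# Minimal sketch of health-check-driven failover between a primary and a
# standby EMS server. The health probe, the 3-strike threshold, and the check
# interval are illustrative assumptions, not any vendor's actual design.
import time

MAX_MISSED_CHECKS = 3   # consecutive failed health checks before failing over
CHECK_INTERVAL_S = 1.0  # seconds between health checks


class EmsServer:
    def __init__(self, name: str):
        self.name = name

    def is_healthy(self) -> bool:
        """Placeholder probe; a real system would check heartbeats, process
        status, and data freshness on the server."""
        return True


def supervise(primary: EmsServer, standby: EmsServer, cycles: int = 10) -> str:
    """Poll the active server and promote the standby after repeated failures.
    Returns the name of the server that is active when polling stops."""
    active, spare = primary, standby
    missed = 0
    for _ in range(cycles):
        if active.is_healthy():
            missed = 0
        else:
            missed += 1
            if missed >= MAX_MISSED_CHECKS:
                print(f"Failing over from {active.name} to {spare.name}")
                active, spare = spare, active
                missed = 0
        time.sleep(CHECK_INTERVAL_S)
    return active.name


if __name__ == "__main__":
    print(supervise(EmsServer("EMS-A"), EmsServer("EMS-B"), cycles=2))
```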

  8. What do EMS Problems Look Like? Trends flatline; data no longer updates; color changes; alarms; strange application results; lockup of applications; loss of visibility
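
A flatlined trend is often the first symptom an operator sees: the value simply stops changing. A minimal sketch of how such a symptom could be flagged automatically is shown below; the threshold of five identical consecutive samples and the sample values are illustrative assumptions.

```python
# Minimal sketch: flag a telemetry trend as suspect when the last N samples
# are identical. The threshold N=5 and the sample values are illustrative.
from typing import Sequence


def looks_flatlined(samples: Sequence[float], n: int = 5) -> bool:
    """Return True if the most recent n samples are all equal."""
    if len(samples) < n:
        return False
    recent = samples[-n:]
    return all(value == recent[0] for value in recent)


if __name__ == "__main__":
    live = [59.98, 60.01, 60.02, 59.99, 60.00, 60.01]
    stale = [59.98, 60.01, 60.00, 60.00, 60.00, 60.00, 60.00]
    print(looks_flatlined(live))   # False
    print(looks_flatlined(stale))  # True
```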

  9. NERC EMS Failure Event Analysis NERC and Regional personnel examined events: 81 Category 2b events reported (Oct 26, 2010 to Sep 3, 2013); 64 events thoroughly analyzed and reviewed; 54 entities reporting, with 20 entities experiencing multiple outages; restoration time for partial outages: 18 to 411 min; restoration time for complete outages: 12 to 253 min; vendor diagnostic failures; software and hardware issues; several noticeable themes

  10. NERC Lessons Learned from EMS Events #1 Remote Terminal Units Not on DC Sources The power supply to an RTU for a High Voltage Direct Current (HVDC) converter station was not designed to be fed from station batteries, resulting in a loss of the RTU when all AC feeds to the substation were lost due to an event. Lesson Learned: While the availability of multiple AC sources provides a high degree of reliability for RTUs, entities should evaluate the practicality and feasibility of powering RTUs needed for control, situation awareness, system restoration and/or post-event analysis from the station batteries.

  11. NERC Lessons Learned from EMS Events #2 EMS System Outage and Effects on System Operations An entity's EMS began to lose data necessary for visibility of portions of its transmission network, causing functionality and/or solution interruptions for some of its EMS operational tools. No loss of load occurred during this event, and it was quickly determined not to be a cyber security event. Lessons Learned: All entities should have a procedure, such as Conservative Operations, which provides possible steps they may have to take to ensure reliability. Training should be conducted routinely on all procedures, especially those related to low-probability, high-impact events, regardless of how often the procedures are used.

  12. NERC Lessons Learned from EMS Events #3 EMS Loss of Operator's User Interface Application A control center experienced a loss of control and monitoring functionality of the EMS due to the loss of the operator's user interface application between its primary EMS computer/host server and the system operator consoles. Lessons Learned: Create a save case of settings before and after any change to the system is made; the save case will aid in supplying the documentation needed to perform comparisons. Analyze EMS performance on a periodic basis and evaluate whether the system is meeting the needs as designed and intended.

  13. NERC Lessons Learned from EMS Events #4 SCADA Failure Resulting in Loss of Monitoring Function A Transmission Owner's (TO) control center experienced a SCADA failure which resulted in a loss of monitoring functionality for more than thirty minutes. Lessons Learned: It is beneficial for Transmission Operators (TOPs) and TOs to install a heartbeat monitor alarm to detect stale or stagnant data. A periodic evaluation of the mismatch thresholds should be conducted for state estimator alarming specific to each operating area, such that it allows optimum sensitivity while minimizing false mismatch alarms.
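
The heartbeat monitor recommended in this lesson boils down to comparing each telemetry point's last-update time against a staleness limit. The sketch below illustrates only that comparison; the 60-second limit and the point names are assumptions, not values from the event or any SCADA product.

```python
# Minimal sketch of a heartbeat/staleness alarm: flag any telemetry point whose
# last update is older than a configured limit. The 60-second limit and the
# point names are illustrative assumptions.
import time
from typing import Dict, List, Optional

STALE_LIMIT_S = 60.0


def stale_points(last_update: Dict[str, float], now: Optional[float] = None) -> List[str]:
    """Return the names of points whose last update exceeds STALE_LIMIT_S."""
    now = time.time() if now is None else now
    return [name for name, ts in last_update.items() if now - ts > STALE_LIMIT_S]


if __name__ == "__main__":
    t0 = time.time()
    telemetry = {"BUS1_KV": t0 - 5, "LINE7_MW": t0 - 300}  # LINE7_MW is stale
    print(stale_points(telemetry, now=t0))  # ['LINE7_MW']
```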

  14. NERC Lessons Learned from EMS Events #5 Failure of EMS Due to Over-Utilization of Disk Storage Loss of control functionality due to the hard disk on the SCADA server being fully utilized. Lessons Learned: SCADA equipment monitoring should include monitoring of hard disk storage utilization. Purging processes need to be set up to perform periodic cleanup of disk space.
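
The two mitigations named in this lesson, alarming on disk utilization and purging old files, can be illustrated with a short script. The sketch below is an example under stated assumptions: the 85% alarm threshold, the 30-day retention, and the archive path are placeholders, not settings from the event.

```python
# Minimal sketch of the mitigations named above: alarm on high disk utilization
# and purge old files. The 85% threshold, 30-day retention, and the
# /var/ems/archive path are illustrative assumptions.
import os
import shutil
import time

ALARM_THRESHOLD = 0.85   # fraction of disk used before alarming
RETENTION_DAYS = 30      # delete archived files older than this
ARCHIVE_DIR = "/var/ems/archive"


def disk_usage_fraction(path: str = "/") -> float:
    """Return used/total for the filesystem containing path."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


def purge_old_files(directory: str, retention_days: int = RETENTION_DAYS) -> int:
    """Delete files older than retention_days; return how many were removed."""
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed += 1
    return removed


if __name__ == "__main__":
    if disk_usage_fraction("/") > ALARM_THRESHOLD:
        print("Disk utilization alarm: consider purging", ARCHIVE_DIR)
```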

  15. NERC Lessons Learned from EMS Events #6 Indistinguishable Screens during a Database Update Led to Loss of SCADA Monitoring and Control During a planned database update and failover, an EMS Operations Analyst inadvertently changed an online SCADA server database mode from remote (online) to local (local offline copy), which caused a loss of SCADA monitoring and control of Bulk Electric System (BES) facilities. Lessons Learned Changing the database mode on a server is not recommended. A future release of EMS software should eliminate the ability to switch database modes on a server.

  16. NERC Lessons Learned from EMS Events #7 Inappropriate System Privileges Causes Loss of SCADA Monitoring An entity experienced a loss of SCADA telemetry, specifically a loss of the channel status indicators for 76% of its transmission system. This problem occurred during the implementation of a scheduled SCADA database update that caused one of the front-end processors to be in an abnormal state. An incorrect command was used to remedy the situation, which resulted in the channel status indicators being set to a failed state. Lessons Learned Entities should consider: reviewing the training with respect to change management to ensure that it includes a checklist of required steps; and educating SCADA support staff on the global impact of commands on the entire SCADA system.

  17. NERC Lessons Learned from EMS Events #8 Loss of EMS IT Communications Disabled Transmission System Operators lost ability to authenticate to the EMS system, resulting in a loss of monitoring and control functionality for more than 30 minutes. Lessons Learned EMS network design should include, where possible, a redundant local authentication server on the same internal network as the primary local authentication server.

  18. NERC Lessons Learned from EMS Events #9 SCADA Failure Resulting in Reduced Monitoring Functionality An entity's primary control center SCADA Management Platform (SMP) servers became unresponsive, which resulted in a partial loss of monitoring and control functions for more than 30 minutes. Because this loss of functionality was a result of a conflict between security software configuration changes and core operating system functions, a cyber-security event was quickly ruled out, and no loss of load occurred during this event. Lessons Learned Registered entities should consider a multi-site hosting configuration. This configuration provides flexibility and convenience for rapid recovery capability of EMS and SCADA functions.

  19. NERC Lessons Learned from EMS Events #10 Failure of Energy Management System While Performing Database Update There was a failure of the EMS while performing a database update. Lessons Learned When the EMS was purchased, the vulnerability of an integrated system architecture was unknown. To eliminate this now-exposed vulnerability, it is recommended that functional separation of the primary control center (PCC) from the alternate control center (ACC) be implemented.

  20. Number of Reports: October 26, 2010 to September 3, 2013 [bar chart of EMS outage reports over the reporting period]

  21. Characteristics of EMS Outages [bar chart comparing, for the reviewed events: outages on weekdays vs. outages on weekends; CIP activity led to outage vs. non-CIP activity led to outage; outage due to planned activity vs. unforeseen outage]

  22. Root Causes by Category: A4 Management/Organization 30%; A2 Equipment/Material 25%; AZ Information LTA 20%; A1 Design/Engineering 16%; A5 Communication 5%; A6 Training 2%; A3 Individual Human Performance 2%

  23. Contributing Causes by Category: A2 Equipment/Material 32%; A4 Management/Organization 28%; A1 Design/Engineering 18%; A3 Individual Human Performance 9%; A5 Communication 6%; AX Overall Configuration 5%; A7 Other 2%

  24. Top Root/Contributing Causes (in order): Software Failure (A2B6C07); Design output scope LTA (A1B2C01); Inadequate vendor support of change (A4B5C03); Testing of Design/Installation LTA (A1B4C02); Defective or failed part (A2B6C01); System interactions not considered (A4B5C05); Inadequate risk assessment of change (A4B5C04); Insufficient job scoping (A4B3C08); Post-Modification Testing LTA (A2B3C03); Inspection/Testing LTA (A2B3C02); Attention given to wrong issues (A3B3C01); Untimely corrective actions to known issue (A4B1C08)

  25. Common Themes 1. Software Failures 2. Software Configuration/Installation/Maintenance 3. Hardware Failures 4. Hardware Configuration/Installation/Maintenance 5. Failover Testing Weaknesses 6. Testing Inadequacies

  26. Software Failures What is Affected? Application Software Bug/Defect: Base System (Alarms/Health Check/Syncing, etc.), Front End Processing, Supervisory Control Applications (SCADA), AGC, ICCP, User Interface (UI), Relational Database Management Systems (RDBMS), Build Process Scripts, Miscellaneous Scripts; Communication Equipment Firmware/Software Bug/Defect: RTUs, Switches, Modems, Routers, Firewalls; Operating System Software Bug/Defect: Unix/Linux/Windows

  27. Hardware Failures Application Servers/Nodes: Network Interface Cards, Server Hard Drive Control Board, Aux Power Regulator Control; Communication Equipment: RTUs, Switches, Routers, Firewalls, Fiber Optic Cables, Time Source; Power Sources: Uninterruptible Power Supply (UPS), External Generators, Power Cables

  28. Failover Testing Weaknesses Improper settings preventing the failover; improper procedure to failover; system setup issues preventing failover; improper patch management between primary/spare/backup servers; primary server issues reflected on spare/backup as well (no isolation); improper failover configuration settings; improper network device configuration settings for failover; design requirements not considering failovers
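
One weakness listed above is improper patch management between primary and spare/backup servers. A simple pre-failover check is to diff the installed patch inventories of the two hosts; the sketch below shows that comparison with assumed inventories rather than a query of any real patch database.

```python
# Minimal sketch: compare installed patch/package inventories on the primary
# and backup servers before a failover test. The inventories here are assumed
# inputs; a real check would query each host's patch database.
from typing import Dict, Tuple


def patch_mismatches(primary: Dict[str, str], backup: Dict[str, str]) -> Dict[str, Tuple[str, str]]:
    """Return {package: (primary_version, backup_version)} for every difference."""
    packages = set(primary) | set(backup)
    return {
        pkg: (primary.get(pkg, "missing"), backup.get(pkg, "missing"))
        for pkg in packages
        if primary.get(pkg) != backup.get(pkg)
    }


if __name__ == "__main__":
    primary = {"scada-core": "7.2.1", "iccp-gw": "3.4.0", "os-kernel": "5.10.2"}
    backup = {"scada-core": "7.2.1", "iccp-gw": "3.3.9"}  # lagging and incomplete
    for pkg, versions in sorted(patch_mismatches(primary, backup).items()):
        print(pkg, versions)
```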

  29. Testing Inadequacies Inadequate testing; improper procedures to test; incomplete scope; not engaging all the parties involved

  30. Software and Hardware Categories and Restoration Times [bar chart: mean outage restoration time (minutes) and event count for each category: Hardware C/I/M, Hardware Failure - Com, Hardware Failure - Power, Hardware Failure - Server, Software Failure - App, Software Failure - Com, Software C/I/M]

  31. Historical Failure Restoration Data Mean Complete Outage Restoration Time: 56 minutes; Mean Partial Outage Restoration Time: 43 minutes; Mean Total Outage Restoration Time: 99 minutes [chart: complete and partial outage restoration times for each reviewed event, with mean restoration times]
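
The means on this slide are plain averages of per-event restoration times. The short sketch below shows the calculation with made-up sample durations (only the reported minimum and maximum values appear in the earlier slide; the rest are placeholders), not the actual 81-event data set.

```python
# Mean restoration time is just the average of per-event durations. The sample
# values below are hypothetical placeholders, not the reported event data.
from statistics import mean

partial_outages_min = [18, 35, 60, 411]    # hypothetical partial-outage durations
complete_outages_min = [12, 45, 90, 253]   # hypothetical complete-outage durations

print(f"Mean partial outage restoration:  {mean(partial_outages_min):.0f} min")
print(f"Mean complete outage restoration: {mean(complete_outages_min):.0f} min")
```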

  32. Lessons Learned Publish information about problems and solutions; NERC continues review of events with a working group of stakeholders and Regional personnel; Situational Awareness workshop held in June 2013, with future workshops planned; dialogue with vendors to inform and improve

  33. Reporting Requirements NERC Standard EOP-004-2 Complete loss of voice communication capability affecting a Bulk Electric System (BES) control center for 30 continuous minutes or more (same as Category 2a of EAP); Complete loss of monitoring capability affecting a BES control center for 30 continuous minutes or more such that analysis capability (i.e., State Estimator or Contingency Analysis) is rendered inoperable (similar to Category 2b of EAP). Report to ERCOT, TRE, NERC and DOE per TRE web link: http://www.texasre.org/Reliability/EOP-004disturbancereports/Pages/Default.aspx

  34. Reporting Requirements NERC Events Analysis Category 1f - Unplanned evacuation from a control center facility with Bulk Power System (BPS) SCADA functionality for 30 minutes or more. Category 1h - Loss of monitoring or control, at a control center, such that it significantly affects the entity's ability to make operating decisions for 30 continuous minutes or more. Examples include, but are not limited to, the following: Loss of operator ability to remotely monitor, control BES elements, or both; Loss of communications from SCADA RTUs; Unavailability of ICCP links reducing BES visibility; Loss of the ability to remotely monitor and control generating units via AGC; Unacceptable State Estimator or Contingency Analysis solutions
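
Both the EOP-004-2 criteria and the Event Analysis categories above turn on a loss lasting 30 continuous minutes or more. The sketch below isolates that duration test; the function name and example timestamps are assumptions, and the real standards apply additional criteria beyond duration.

```python
# Minimal sketch: test whether a loss of monitoring/control meets the
# 30-continuous-minute reporting threshold cited above. This checks duration
# only; the actual standards apply additional criteria.
from datetime import datetime, timedelta

REPORTING_THRESHOLD = timedelta(minutes=30)


def meets_duration_threshold(loss_start: datetime, restored: datetime) -> bool:
    """True if functionality was continuously lost for 30 minutes or more."""
    return (restored - loss_start) >= REPORTING_THRESHOLD


if __name__ == "__main__":
    start = datetime(2014, 3, 4, 14, 5)
    print(meets_duration_threshold(start, datetime(2014, 3, 4, 14, 25)))  # False
    print(meets_duration_threshold(start, datetime(2014, 3, 4, 14, 40)))  # True
```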

  35. What Can Operators Do? Watch for failures and unexpected situations; determine the criticality and impact to the reliability of the grid; promptly report the failures; log the date/time of the failure, a description of alarms/events, and the time of system/function restoration; expect EMS failures and prepare to react; have the necessary backup procedures in place and be familiar with them
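
The logging items above (date/time of the failure, alarms/events observed, restoration time) map naturally onto a small structured record. The sketch below is one illustrative way to capture them; the field names are assumptions, not a prescribed ERCOT or NERC format.

```python
# Minimal sketch of a structured operator log entry for an EMS failure, covering
# the items listed above. Field names are illustrative, not a prescribed format.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class EmsFailureLogEntry:
    failure_time: datetime
    description: str                      # alarms/events observed
    functions_affected: List[str]
    reported_to: List[str] = field(default_factory=list)
    restored_time: Optional[datetime] = None

    def duration_minutes(self) -> Optional[float]:
        """Minutes from failure to restoration, or None if still unresolved."""
        if self.restored_time is None:
            return None
        return (self.restored_time - self.failure_time).total_seconds() / 60.0


entry = EmsFailureLogEntry(
    failure_time=datetime(2014, 3, 4, 14, 5),
    description="SCADA trends flatlined; ICCP link alarms",
    functions_affected=["SCADA", "ICCP"],
    reported_to=["ERCOT", "EMS support"],
    restored_time=datetime(2014, 3, 4, 15, 2),
)
print(entry.duration_minutes())  # 57.0
```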

  36. ERCOT Procedures Failover procedure Loss of AGC Operation Guides Constant Frequency operation Loss of ICCP

  37. Real-Time Operating Procedure, Section 3.3 System Failures: Monitor Frequency for the Loss of EMS or Site Failover The ability to view an adequate frequency source may be limited during a site failover, database load, or if AGC is temporarily unavailable. To view the System Frequency during these conditions you may monitor the following sources: ERCOT Control Room digital wall frequency displays; PI ProcessBook ERCOT TrueTime Frequency (Taylor); and/or PI ProcessBook ERCOT TrueTime Frequency (Bastrop). It may be necessary to reload the PI ProcessBook ERCOT Main Summary display to show the historical data.
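
The procedure above amounts to falling back through an ordered list of alternative frequency sources when the usual EMS display is unavailable. The sketch below illustrates only that selection logic; the reader functions are stand-ins, not real interfaces to the wall displays or PI ProcessBook.

```python
# Minimal sketch: fall back through an ordered list of frequency sources and use
# the first one that returns a reading. The reader callables are stand-ins, not
# real interfaces to the wall displays or PI ProcessBook.
from typing import Callable, List, Optional, Tuple


def read_frequency(sources: List[Tuple[str, Callable[[], Optional[float]]]]) -> Tuple[str, float]:
    """Return (source_name, frequency_hz) from the first source that responds."""
    for name, reader in sources:
        value = reader()
        if value is not None:
            return name, value
    raise RuntimeError("No frequency source available")


if __name__ == "__main__":
    sources = [
        ("EMS AGC display", lambda: None),               # unavailable during failover
        ("TrueTime frequency (Taylor)", lambda: 59.97),
        ("TrueTime frequency (Bastrop)", lambda: 59.98),
    ]
    print(read_frequency(sources))  # ('TrueTime frequency (Taylor)', 59.97)
```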

  38. Real-Time Operating Procedure, Section 3.3 System Failures: EMMS (LFC and RLC/SCED) Failure Regulation, RRS, UDBP, BP, EBP, and manual offset not functioning. LFC (AGC): AGC is SUSPENDED or PAUSED; last ACE crossing zero time on the Generation Area Status page is not updating; AGC operation adversely impacts the reliability of the Interconnection; SCED and EMS are not functioning; problem cannot be resolved quickly. REFERENCE Display: EMP Applications > Generation Area Status > Nodal Operational Status > Resource Limits Data. DETERMINE which QSE has ample capacity to place on constant frequency control; THEN direct the selected QSE to go on constant frequency. As time permits, issue an electronic VDI (choose OPERATE AT CONSTANT FREQUENCY as the Instruction Type from QSE Level). Place ERCOT AGC into Monitor mode
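
The "determine which QSE has ample capacity" step is, at its core, a headroom comparison: dispatchable capacity minus current output. The sketch below illustrates that comparison with made-up QSE names and MW values; the actual decision also weighs resource mix, ramp capability, and operator judgment.

```python
# Minimal sketch of the headroom comparison behind "determine which QSE has
# ample capacity" for constant frequency control. QSE names and MW values are
# made up; the actual decision involves more than raw headroom.
from typing import Dict, Tuple


def pick_constant_frequency_qse(qse_data: Dict[str, Tuple[float, float]]) -> str:
    """qse_data maps QSE name -> (dispatchable_capacity_mw, current_output_mw).
    Return the QSE with the most headroom."""
    return max(qse_data, key=lambda q: qse_data[q][0] - qse_data[q][1])


if __name__ == "__main__":
    qses = {
        "QSE A": (5200.0, 4900.0),   # 300 MW headroom
        "QSE B": (7400.0, 6500.0),   # 900 MW headroom
        "QSE C": (3100.0, 3000.0),   # 100 MW headroom
    }
    print(pick_constant_frequency_qse(qses))  # QSE B
```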

  39. Generating Unit Operations During Complete Loss of Communications Excerpt from 2014 DRAFT NERC guideline for units without voice or data links to their QSE but able to generate and monitor frequency

  40. Draft NERC Reliability Guideline Frequency Chart for the ERCOT Region

  41. References ERCOT Nodal Protocols, Section 3.10; ERCOT Nodal Operating Guides, Section 7; ERCOT State Estimator Standards; ERCOT Telemetry Standards; ERCOT Operating Procedure Manual, Shift Supervisor Desk, Section 10; NERC Events Analysis Process; NERC Standard EOP-004-2; NERC EMS Task Force; DRAFT NERC Reliability Guideline: Generating Unit Operations During Complete Loss of Communications

  42. Credits Much of the information contained in this presentation was previously published by the North American Electric Reliability Corporation (NERC) in a variety of publications. It is the result of extensive review of actual power system events over a two-year period by the EMS Event Task Force. Questions?

  43. Please turn your iClicker on and answer each of the following questions. EXAM

  44. 1. Which of the following Operator tools can lead to EMS failures? a) SCADA b) ICCP c) AGC d) All of the above

  45. 2. What is the top root/contributing cause of EMS failures? a) Inadequate vendor support b) Hardware failure c) Inadequate testing d) Software failure

  46. 3. What action should ERCOT take for the loss of their LFC? a) Monitor frequency and hope for the best b) Place a large QSE on constant frequency c) OOME Up units d) RUC units off line

  47. 4. Which of the following steps should an Operator take during an EMS failure? a) Promptly report the failures b) Determine the criticality and impact of the failure to the reliability of the grid c) Log the date/time of the failure d) Implement backup procedures e) All of the above

  48. 5. What is the NERC Standard that requires reporting of EMS failures? a) NERC Events Analysis Process b) NERC Standard TOP-001-1 c) NERC Standard EOP-004-2 d) NERC EMS Task Force
