The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has issued guidance on AI that affects power plants. The concern is that some gas turbine plants and other power generation facilities might implement AI in ways that expose them to cyberattack.
CISA’s guidelines were issued in cooperation with several other agencies from around the world: Australian Signals Directorate’s Australian Cyber Security Centre; U.S. National Security Agency’s Artificial Intelligence Security Center; U.S. Federal Bureau of Investigation; Canadian Centre for Cyber Security; German Federal Office for Information Security; Netherlands National Cyber Security Centre; New Zealand National Cyber Security Centre; and United Kingdom National Cyber Security Centre.
“That kind of coordination is rare and signals the importance of this issue,” said Floris Dankaart, lead product manager, managed extended detection and response at cybersecurity consulting firm NCC Group. “Equally important, most AI guidance addresses IT, not OT (the systems that keep power grids, water treatment, and industrial processes running).”
The CISA document has been released in response to the phenomenal rise of AI in the workplace. The introduction to the bulletin reads:
“Since the public release of ChatGPT in November 2022, artificial intelligence (AI) has been integrated into many facets of human society. For critical infrastructure owners and operators, AI can potentially be used to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience. Despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks—such as OT process models drifting over time or safety-process bypasses—that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure.”
What is meant by OT is the hardware and software used in industrial control and monitoring systems. Fleet, power plant, and gas turbine controls fall under this banner. In recent years, utility systems, pipelines, and building control systems have proven easy to hack because they have not historically been protected by adequate cybersecurity measures. Now that these systems are connected to the internet and rely on sensors linked to the industrial internet of things, the problem has become more apparent.
The CISA directive provides critical infrastructure owners and operators with information they need when integrating AI into OT environments. It is built around four key principles:
- Understand AI. Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle.
- Consider AI Use in the OT Domain. Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration.
- Establish AI Governance and Assurance Frameworks. Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance.
- Embed Safety and Security Practices Into AI and AI-Enabled OT Systems. Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans.
AI opens OT environments to serious cybersecurity risks. AI data, models, and deployment software can be manipulated to cause incorrect outcomes or to bypass security measures, functional safety measures, or guardrails. Traditional cybersecurity risks remain applicable: measures such as access control, auditing, and encryption still apply to AI and AI-enabled systems. In addition, AI-enabled systems face AI-specific risks, such as prompt injection. These risks can degrade overall system availability, pose functional safety hazards, and lead to financial losses, reputational damage, and network or OT compromise. The guidance lays out in detail how these risks can be mitigated, and walks through further problems and solutions related to AI in industry.
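The risk of an AI output bypassing a safety guardrail can be illustrated with a minimal sketch. The names, values, and `SafetyEnvelope` structure below are hypothetical, invented for illustration and not drawn from the CISA guidance; the idea is simply that an AI model's recommended setpoint should pass through an independently engineered safety check before it ever reaches an OT controller, so a manipulated or drifting model cannot push the process outside its safe envelope.

```python
# Hypothetical sketch: vet AI-recommended setpoints against engineered
# safety limits before they reach an OT controller. All names and
# numbers are illustrative, not from the CISA guidance.

from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyEnvelope:
    """Hard limits defined by safety engineers, independent of the AI model."""
    min_value: float
    max_value: float
    max_step: float  # largest change allowed per control cycle

def vet_setpoint(current: float, ai_recommended: float,
                 envelope: SafetyEnvelope) -> float:
    """Return a setpoint guaranteed to stay inside the safety envelope.

    A compromised model can recommend anything; this layer rate-limits
    the change and clamps the result so the process never leaves its
    safe operating range.
    """
    # Rate-limit: never move more than max_step in one control cycle.
    step = max(-envelope.max_step,
               min(envelope.max_step, ai_recommended - current))
    candidate = current + step
    # Clamp to the absolute safety limits.
    return max(envelope.min_value, min(envelope.max_value, candidate))

# Example: a made-up turbine temperature setpoint in degrees Celsius.
envelope = SafetyEnvelope(min_value=300.0, max_value=650.0, max_step=5.0)
print(vet_setpoint(current=600.0, ai_recommended=900.0, envelope=envelope))  # 605.0
```

The key design point is that the envelope is defined and maintained outside the AI system, so the check remains valid even if the model itself is compromised.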
Another part of the problem is the lack of know-how within utilities, power plants, and other industrial facilities.
“A major challenge will be addressing skill gaps in OT teams, especially where it relates to AI,” said Dankaart of NCC Group. “OT environments are typically much more structured than IT environments, which might be at odds with many modern AI applications.”
His advice is for those in industry to exercise caution when implementing AI. This begins with fully understanding what AI means, both generally and in the specific intended use case. It is easy to get carried away with the AI hype, which can lead to incorrect or over-broad implementations that expose a plant to added risk without adding much real value to its operations. It is best to start small, using AI where it can add the most value and poses the least cybersecurity risk.
“Luckily, some of the best practices in OT and AI use overlap: the idea that you must always have a manual fallback procedure, the ability to operate ‘in island mode,’ and human-in-the-loop controls, to name a few,” said Dankaart.
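The human-in-the-loop and manual-fallback practices Dankaart mentions can be sketched in a few lines. This is a hypothetical routing gate, with invented function names and thresholds, showing one way an AI-proposed OT action could be handled: fall back to manual procedures when the model cannot be trusted, and route anything above a risk threshold to an operator instead of applying it automatically.

```python
# Hypothetical sketch of a human-in-the-loop gate with manual fallback.
# Function names and thresholds are invented for illustration.

from enum import Enum

class Decision(Enum):
    AUTO_APPLY = "auto_apply"            # low-risk action, applied directly
    NEEDS_OPERATOR = "needs_operator"    # queued for human approval
    MANUAL_FALLBACK = "manual_fallback"  # AI unavailable or untrusted

def route_action(risk_score: float, model_healthy: bool,
                 auto_threshold: float = 0.2) -> Decision:
    """Decide how an AI-proposed OT action is handled.

    If the model is unhealthy (e.g. it failed integrity checks), fall
    back to manual procedures entirely; otherwise only actions scored
    below the risk threshold bypass the human operator.
    """
    if not model_healthy:
        return Decision.MANUAL_FALLBACK
    if risk_score < auto_threshold:
        return Decision.AUTO_APPLY
    return Decision.NEEDS_OPERATOR

print(route_action(0.1, True))   # Decision.AUTO_APPLY
print(route_action(0.6, True))   # Decision.NEEDS_OPERATOR
print(route_action(0.1, False))  # Decision.MANUAL_FALLBACK
```

The point of the sketch is that the human and the manual procedure are first-class paths in the design, not an afterthought bolted on when the AI misbehaves.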