NYDFS publishes industry letter on cybersecurity risk arising from AI

The guidance does not impose new requirements; rather, it helps DFS-regulated institutions meet their existing obligations in light of evolving risks from AI.

New York State Department of Financial Services (NYDFS) Superintendent Adrienne Harris has issued new guidance to assist regulated entities in addressing and combating cybersecurity risks arising from artificial intelligence (AI).

The guidance builds on the agency’s ongoing work to protect NYDFS-licensed entities from cybersecurity risks through its landmark cybersecurity regulation, implemented in 2017 (23 NYCRR Part 500). It follows recently adopted NYDFS guidance to combat discrimination by insurers using AI.

The guidance does not impose any new requirements beyond the obligations already in Part 500; rather, it explains how covered entities should use the framework set forth in those cybersecurity rules to assess and address the cybersecurity risks arising specifically from AI.

“AI has improved the ability for businesses to enhance threat detection and incident response strategies, while concurrently creating new opportunities for cybercriminals to commit crimes at greater scale and speed,” said Superintendent Harris. “New York will continue to ensure that as AI-enabled tools become more prolific, security standards remain rigorous to safeguard critical data, while allowing the flexibility needed to address diverse risk profiles in an ever-changing digital landscape.”

AI-enabled social engineering

One aspect of the guidance addresses AI-enabled social engineering – one of the most significant threats to the financial services sector – and recommends possible mitigations.

AI has improved the ability of threat actors to create highly personalized, sophisticated content that is more convincing than earlier social engineering attempts, the agency notes.

Threat actors are increasingly using AI to create realistic and interactive audio, video, and text (deepfakes) that allow them to target specific individuals via email (phishing), telephone, text, videoconferencing, and online postings. These AI-driven attacks often attempt to convince employees to divulge sensitive information about themselves and their employers. When deepfakes result in the sharing of credentials, threat actors are able to gain access to Information Systems containing Nonpublic Information (NPI).

AI-enhanced cybersecurity attacks

Another major risk associated with AI is the ability of threat actors to amplify the potency, scale, and speed of existing types of cyberattacks, the guidance points out. Once threat actors are inside an organization’s Information Systems, they can use AI to conduct reconnaissance to determine, among other things, how best to deploy malware and how to access and exfiltrate NPI.

To counter such intrusions, the regulation requires entities to have a monitoring program that can identify new cybersecurity vulnerabilities and monitor the activities of all users, as well as email and web traffic.
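As a purely illustrative, hedged example (the letter prescribes no particular tooling, log format, or threshold – all of those below are assumptions), a small piece of such a monitoring program might flag accounts with unusual numbers of failed logins:

```python
# Minimal sketch of a user-activity monitoring check (illustrative only;
# the NYDFS letter does not prescribe any tooling, log format, or threshold).
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # assumed value; tune to the entity's risk assessment

def flag_suspicious_users(log_lines):
    """Count failed logins per user and flag accounts at or over the threshold."""
    failures = Counter()
    for line in log_lines:
        # Assumed log format: "2024-10-16T09:12:01 user=jsmith action=login status=failed"
        fields = dict(part.split("=", 1) for part in line.split()[1:])
        if fields.get("action") == "login" and fields.get("status") == "failed":
            failures[fields["user"]] += 1
    return [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]

if __name__ == "__main__":
    sample = ["2024-10-16T09:12:01 user=jsmith action=login status=failed"] * 6
    print(flag_suspicious_users(sample))  # -> ['jsmith']
```

In practice a covered entity would feed checks like this from its log pipeline or SIEM; the point is simply that user activity, email, and web traffic are watched continuously and anomalies surface as alerts.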

Furthermore, AI can accelerate the development of new malware variants and modify ransomware so that it bypasses defensive security controls and evades detection.

In addition, because AI has accelerated the speed and scale of cyberattacks, the guidance pointedly reminds firms of the need for multiple layers of security controls with overlapping protections.

Exposure or theft of NPI

Products that use AI typically require the collection and processing of substantial amounts of data, often including NPI. Maintaining NPI in large quantities poses additional risks for covered entities that develop or deploy AI: they must protect substantially more data, especially biometric data, and threat actors have a greater incentive to target them to extract NPI for financial gain or other malicious purposes.

Third-party and supply chain risk

Supply chain vulnerabilities represent another critical area of concern for organizations using AI or a product that incorporates AI. AI-powered tools and applications depend heavily on the collection and maintenance of vast amounts of data.

Each link in an entity’s supply chain introduces potential security vulnerabilities that can be exploited by threat actors. As a result, any vendor or supplier, if compromised by a cybersecurity incident, could expose an entity’s NPI and become a gateway for broader attacks on that entity’s network, as well as all other entities in the supply chain.

Mitigating AI-related threats

The Cybersecurity Regulation requires entities to assess risks and implement minimum cybersecurity standards designed to mitigate cybersecurity threats relevant to their businesses – including those posed by AI. These cybersecurity measures provide multiple layers of security controls with overlapping protections so that if one control fails, other controls are there to prevent or mitigate the impact of an attack.

The guidance outlines some controls and measures that, especially when used together, help entities combat AI-related risks, including:

  • Risk assessments that take into account the cybersecurity risks faced by the entity, including deepfakes and other threats posed by AI, so that the entity can determine which defensive measures to implement. Additionally, when designing risk assessments, entities should address AI-related risks in the following areas: the organization’s own use of AI, the AI technologies utilized by third-party service providers (TPSPs) and vendors, and any potential vulnerabilities stemming from AI applications that could pose a risk to the confidentiality, integrity, and availability of the entity’s information systems.
  • Anticipating the threats facing TPSPs from the use of AI and AI-enabled products and services; how those threats, if exploited, could impact the entity; and how the TPSPs protect themselves from such exploitation.
  • Robust access controls are another defensive measure, used both to combat the threat of deepfakes and other forms of AI-enhanced social engineering attacks and to prevent threat actors from gaining unauthorized access to an entity’s information systems and the NPI maintained on them. One of the most effective access controls is multi-factor authentication (MFA), the letter notes, which the Cybersecurity Regulation requires covered entities to implement (see the first sketch following this list).
  • Training should be provided for all personnel, including senior executives and senior governing body members. The training should ensure all personnel are aware of the risks posed by AI and the procedures adopted by the organization to mitigate AI-related risks, and know how to respond to AI-enhanced social engineering attacks, the guidance says.
  • The regulation requires entities to have a monitoring program that can identify new cybersecurity vulnerabilities and monitor the activities of all users, as well as email and web traffic, in order to block malicious content and protect against the installation of malicious code.
  • Entities are required to implement data minimization practices: they must dispose of NPI that is no longer necessary for business operations or other legitimate business purposes, including NPI used for AI. They should also maintain and update data inventories, which are crucial for assessing potential risks and ensuring compliance with data protection regulations (see the second sketch following this list).
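
To make the MFA bullet concrete, the following is a minimal sketch of one common second factor – time-based one-time passwords (TOTP) – using the open-source pyotp library. It is an illustration of the technique only, not an implementation the guidance endorses, and the secret handling is deliberately simplified:

```python
# Illustrative TOTP second factor using the pyotp library (pip install pyotp).
# A simplified sketch, not production MFA: real deployments need encrypted
# secret storage, rate limiting, recovery codes, and an enrollment flow.
import pyotp

def enroll_user() -> str:
    """Generate a per-user TOTP secret (store it encrypted at rest)."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the code from the user's authenticator app against the stored secret."""
    return pyotp.TOTP(secret).verify(submitted_code)

if __name__ == "__main__":
    secret = enroll_user()
    current_code = pyotp.TOTP(secret).now()  # what the authenticator app would display
    print(verify_second_factor(secret, current_code))  # True
```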
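
Similarly, the data minimization bullet might be operationalized as a periodic retention sweep over the entity’s data inventory. The record schema and the seven-year retention period below are assumptions made for the sketch; neither appears in the guidance or in Part 500:

```python
# Illustrative retention sweep over a data inventory (assumed schema and
# retention period; neither is prescribed by Part 500 or the guidance).
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION = timedelta(days=7 * 365)  # assumed retention period for the sketch

@dataclass
class InventoryRecord:
    record_id: str
    contains_npi: bool
    last_business_use: date

def records_due_for_disposal(inventory, today=None):
    """Return NPI records with no legitimate business use within the retention window."""
    today = today or date.today()
    return [
        record for record in inventory
        if record.contains_npi and (today - record.last_business_use) > RETENTION
    ]

if __name__ == "__main__":
    inventory = [
        InventoryRecord("cust-001", contains_npi=True, last_business_use=date(2015, 3, 1)),
        InventoryRecord("cust-002", contains_npi=True, last_business_use=date(2024, 6, 1)),
    ]
    for record in records_due_for_disposal(inventory, today=date(2024, 10, 16)):
        print(f"Dispose of {record.record_id}")  # -> Dispose of cust-001
```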