Link(s): G7 Cyber Expert Group statement on Artificial Intelligence and Cybersecurity: September 2025 – GOV.UK
Context
HM Treasury (HMT) has published the G7 Cyber Expert Group's statement on Artificial Intelligence and Cybersecurity. Rather than setting guidance or regulatory expectations, the statement aims to raise awareness of the cybersecurity dimensions of artificial intelligence (AI) and outlines key considerations for financial institutions, regulatory authorities, and other stakeholders that support security and resilience in the financial sector.
Key points to note and next actions
The statement contains key considerations for financial institutions to manage AI-related cyber risks, such as:
- Strategy, Governance, and Oversight: Are governance frameworks responsive to emerging AI risks?
- Cybersecurity Integration: Are AI systems aligned with secure-by-design principles?
- Data Security: Are data sources vetted and is lineage tracked?
- Logging and Monitoring: Are anomalies logged and reviewed?
- Identity and Authentication: Are systems resilient against impersonation and AI-enabled fraud?
- Incident Response: Are incident response plans updated to account for AI-enhanced attacks and AI-specific incidents?
- Resources, Skills, and Awareness: Is there a plan to ensure adequate expertise to evaluate and monitor AI use?
As AI becomes more embedded in software systems and financial operations, cybersecurity implications will continue to evolve. Financial stakeholders should:
- Explore AI’s potential for enhancing cyber defence capabilities.
- Update risk frameworks to reflect AI-specific cybersecurity vulnerabilities and mitigation strategies.
- Engage in collaborative research and policy development with technology firms and academia.
- Promote public-private dialogue to support secure and trustworthy AI in the financial sector.
With a risk-informed approach, AI can be an effective tool for cybersecurity and resilience, while helping to preserve the integrity and stability of the financial system.