Guidelines for secure AI system development

The Canadian Centre for Cyber Security (Cyber Centre), part of the Communications Security Establishment (CSE), is pleased to join the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and 20 international partner organizations in providing the guidelines for secure AI system development.

AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way.

AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.

For this reason, the guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.

The ultimate goal of this guide is to provide considerations and mitigation advice to help reduce the overall risk to an organization's AI system development process.

For more information, read the complete Guidelines for secure AI system development.
