Joint guidance on the careful adoption of agentic artificial intelligence services

The Canadian Centre for Cyber Security (Cyber Centre) has joined the Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC) and the following international partners in releasing cyber security guidance on the careful adoption of agentic artificial intelligence (AI) services:

  • United States' Cybersecurity and Infrastructure Security Agency (CISA)
  • United States' National Security Agency (NSA)
  • New Zealand's National Cyber Security Centre (NCSC-NZ)
  • United Kingdom's National Cyber Security Centre (NCSC-UK)

Agentic AI systems are composed of agents that rely on large language models (LLMs) to autonomously reason, plan, make decisions and take actions without human intervention. Although agentic AI systems offer powerful automation benefits and can enhance operational efficiency, their ability to act autonomously across interconnected tools, data and environments introduces significant security risks.

As the role of agentic AI systems grows, it is crucial for organizations to implement security controls to protect national security and critical infrastructure systems from agentic AI-specific risks.

This joint guidance is intended for organizations that are considering developing or deploying agentic AI systems. It outlines security considerations related to LLMs and AI and describes the key risks associated with agentic AI.

The joint guidance also provides best practices to enable agentic AI developers, vendors and operators to secure agentic AI systems. This includes implementing a layered defence and strict access controls to reduce the likelihood of compromise. The authoring agencies also provide tailored guidance on the following:

  • designing and developing secure agents
  • deploying agentic AI securely
  • operating agentic AI securely
  • defending against future risks
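As a loose illustration of the "strict access controls" practice mentioned above, the sketch below (our own example, not taken from the joint guidance) shows one common pattern: a deny-by-default tool allowlist that an agent runtime checks before executing any action on the model's behalf. The `ToolPolicy` and `execute_tool_call` names are hypothetical.

```python
# Minimal sketch of a deny-by-default tool allowlist for an agent runtime.
# Assumption: the runtime mediates every tool call the LLM agent requests.
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Per-agent allowlist mapping tool names to permitted actions."""
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, tool: str, action: str) -> bool:
        # Deny by default: only explicitly listed actions are allowed.
        return action in self.allowed.get(tool, set())

def execute_tool_call(policy: ToolPolicy, tool: str, action: str) -> str:
    """Refuse any call the policy does not explicitly allow."""
    if not policy.permits(tool, action):
        return f"DENIED: {tool}.{action} is not in the agent's allowlist"
    # ... dispatch to the real tool here ...
    return f"EXECUTED: {tool}.{action}"

# Example: an agent permitted to read files but not delete them.
policy = ToolPolicy(allowed={"filesystem": {"read"}})
print(execute_tool_call(policy, "filesystem", "read"))    # EXECUTED
print(execute_tool_call(policy, "filesystem", "delete"))  # DENIED
```

Scoping each agent's permissions this narrowly limits the blast radius if the agent is compromised, for example through prompt injection.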

Consult the full joint guidance: Careful adoption of agentic AI services
