Generative artificial intelligence - ITSAP.00.041

Many organizations use artificial intelligence (AI) for process optimization, data analysis, diagnostics and customization of their user experience. Generative AI is a type of AI that generates new content by modelling features from large datasets that were fed into the model. While traditional AI systems can recognize patterns or classify existing content, generative AI can create unique content in many forms, including text, image, audio or software code.

A subset of generative AI that has seen significant improvement in recent years is large language models (LLMs). To create content, LLMs are provided a set of parameters (for example, a query or prompt). Since late 2022, several LLMs (for example, Microsoft’s Copilot, OpenAI’s ChatGPT and Google’s LaMDA) and services using LLMs (for example, Google’s Bard and Microsoft’s Bing) have gained the world’s attention. This publication provides some information on the potential risks and mitigation measures associated with generative AI.


How generative AI is being used

Generative AI is both a transformative and disruptive technology that may significantly alter how consumers, industries or businesses operate. It has the potential to enable creativity and innovation that could improve services and business operations. Some common examples of generative AI being used to enhance products and contextualize content include:

Image and video

Generative AI can be used to analyze, alter and create visual content for personal or business use. It can also power visual search and contextualize content to offer alternate descriptions and examples.

Robotics

AI technology that uses motion planning and detection to perform different tasks, for example, self-driving vehicles and drones. Generative AI can be used to automate processes and enhance features.

Language

AI can understand voice and text to analyze requests, respond and carry out tasks. Call centres and website chatbots use generative AI to analyze initial requests and offer information to resolve common questions without the need for human interaction.

Entertainment

AI analyzes user engagement across different software and applications to identify connections and recommend content to users.

Generative AI is used in many industries and businesses to help enhance processes. The following sectors have found useful applications for generative AI:

Healthcare

Assists healthcare providers in making faster diagnoses and creating personalized treatment plans. It can also be used in medical-assisting robots to help with surgery, diagnostic testing and analysis.

Software development

Enables software developers to generate code, assists in debugging or offers code snippets. This can help speed up the development and release of software products. Generative AI is also implemented in software to enhance different features and offer context or analyses for users, for example, in Microsoft Word.

Online marketplace

Generates human-like responses with chatbots and conversational agents, which can help organizations improve customer service and reduce support costs.

Business

Creates personalized customer communications for existing and prospective clients and generates predictive sales modelling to forecast their behaviour. It can also quickly produce unique and cost-effective outputs to use in marketing campaigns, advertising and video productions.

Agriculture

Automates farming tasks like planting, harvesting and monitoring through autonomous machinery. It also offers tailored advisories and predictions to enhance sustainability and efficiency, improve yields and reduce costs and labour.

Education

Allows educators to create personalized learning plans for students tailored to their individual performance, needs, and interests, which could help teachers better support their students.

Cyber security

AI enhances cyber defence tools against ransomware and other attacks. It helps cyber security practitioners scan large datasets to identify potential threats and minimize false positives by filtering out non-malicious activity.
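The false-positive filtering described above can be sketched very simply. This is an illustrative example only, not a real detection tool: the event format and the allowlist of benign event types are hypothetical assumptions.

```python
# Hypothetical allowlist of routine, known-benign event types that an
# analyst does not need to review. Real tools use far richer signals.
KNOWN_BENIGN = {"heartbeat", "scheduled_backup", "dns_refresh"}

def suspicious_events(events):
    """Filter out known-benign activity so only events worth
    reviewing remain (a crude stand-in for false-positive reduction)."""
    return [e for e in events if e["type"] not in KNOWN_BENIGN]
```

In practice, AI-assisted tools learn what "normal" looks like from historical data rather than relying on a fixed allowlist, but the goal is the same: shrink the set of alerts a human must triage.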

 

The risks involved with generative AI

While the capabilities of generative AI technology present great opportunities, they also bring many risks. Generative AI can enable threat actors to develop malicious exploits and potentially conduct more effective cyber attacks, especially as advances in AI allow for higher quality and quantity of content. A significant concern is that it can give threat actors a greater capacity to conduct influence activities. Here are some of the potential risks to be aware of:

Misinformation and disinformation

Content not clearly identified as being AI-generated can result in the spread of misinformation, disinformation and confusion. Threat actors use AI in scams and fraudulent campaigns against individuals and organizations.

Phishing

Threat actors can craft targeted spear-phishing attacks more frequently, automatically, and with a higher level of sophistication. Highly realistic phishing emails or scam messages could lead to identity theft, financial fraud or other forms of cybercrime.

Privacy of data

Users may unknowingly provide sensitive corporate data or personally identifiable information (PII) in their AI queries and prompts. Threat actors could harvest this sensitive information to impersonate individuals or spread false information.

Malicious code

Technically skilled threat actors can overcome restrictions within generative AI tools to create malware for use in a targeted cyber attack. Those with little or no coding experience can use generative AI to easily write functional malware that could disrupt a business or organization.

Buggy code

Software developers may inadvertently introduce insecure and buggy code into the development pipeline. This could happen if they omit or improperly implement error handling and security checks.
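As a hedged illustration of the pattern above, compare a function as an AI assistant might suggest it, with no validation or error handling, against a hardened version. The function names and the port-parsing task are hypothetical examples, not drawn from any specific tool's output.

```python
# Risky pattern: no validation, unhandled exceptions on bad input
def parse_port_risky(value):
    return int(value)  # raises an uncaught ValueError on "abc"

# Safer pattern: explicit error handling and a range check
def parse_port_safe(value):
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"port must be an integer, got {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

Generated code should always be reviewed against your organization's secure coding standards before it enters the development pipeline.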

Poisoned datasets

Threat actors can inject malicious code into the dataset used to train the generative AI system. This could undermine the accuracy and quality of the generated data. It could also increase the potential for large-scale supply-chain attacks.

Biased content

Most of the training datasets fed into LLMs come from the open Internet. As such, generated content has a fundamental bias in that only limited amounts of the world’s total data are online and available for AI to use. Also, generated content may be prejudiced if the training dataset lacks balanced representation of data points.

Loss of intellectual property

Generative AI tools may enable sophisticated threat actors to steal corporate data more easily, quickly and in larger quantities. Loss of intellectual property (for example, proprietary business information and copyrighted data) can devastate your organization's reputation, revenue, and future growth.

Be aware of information received from AI

It is important to be cautious when using generative AI and understand it is a technology that uses machine learning to construct responses based on a prompt or query. Always keep in mind that its outputs:

  • can be incorrect
  • might not make sense
  • might not take certain factors into account
  • can be biased

It is also important to carefully analyze AI-generated content before acting on it or using it. Always be aware of and validate your sources to verify that the content being presented is accurate.

 

How to mitigate the risks

Generative AI is a powerful tool that threat actors can leverage to launch cyber attacks. As this technology becomes more widespread, cyber attacks will likely grow in frequency and sophistication. Although detecting AI-enabled threats can be challenging, organizations and individuals can prepare for the increased challenges that these attacks may bring.

Organizations and individuals should practice basic cyber security hygiene as a starting point in understanding risks and taking the appropriate measures to mitigate them.

Organizations should consider the following cyber security measures to minimize their risks of being compromised by cyber attacks:

Enforce strong authentication mechanisms

Secure accounts and devices on your networks with multi-factor authentication (MFA) to prevent unauthorized access to your high-value resources and sensitive data.

Apply security patches and updates

Enable automatic updates of IT equipment and patch known exploited vulnerabilities as soon as possible. This will help to prevent AI-generated malware from infecting the network.

Stay informed

Keep up to date on the latest threats and vulnerabilities associated with generative AI and take proactive steps to address them.

Protect your network

Use network detection tools to monitor and scan the network for abnormal activities. This allows you to quickly identify incidents and threats and deploy appropriate mitigation measures. Additionally, explore how AI might be deployed defensively in network protection tools and consider any ramifications.

Train your employees

Educate all users on how to identify the warning signs of social engineering attacks and who to contact to manage these situations securely. This should include an easy way for users to report phishing attacks or suspicious communications.

Individuals should consider the following measures to protect their personal data from AI-related cyber attacks:

Be cautious when sharing data

Do not share private information with AI tools unless you understand what they are doing with your data. Data you share may be used to train AI models and could potentially be exploited or sold.

Verify content

As more AI-generated content becomes available, it may be difficult to know who is responsible for it or whether it is logical and factual. Look for signs that content was produced by a generative AI tool. Review the generated content and take the time to fact-check it against credible sources.

Practice basic cyber security hygiene

Stay informed, use strong passwords and enable MFA to protect online accounts. Make sure to keep software up to date, use antivirus software and avoid public Wi-Fi networks.

Limit exposure to social engineering or business email compromise

Implement basic online safety practices such as:

  • reducing the amount of personal information you post online
  • avoiding opening email attachments and clicking on links from unknown sources
  • communicating via an alternate, verified channel
  • being suspicious of callers or senders that ask for sensitive information
 

Security measures to consider using

If you plan to use or are already using generative AI, the following security measures can help you generate quality and trustworthy content while mitigating privacy concerns:

Implement a cyber security risk plan

Your organization should establish a plan that identifies policies on how AI should be used and the content that is allowed to be generated. Enforce security-by-design throughout the AI system lifecycle to monitor components and third-party software. Your policies should include the oversight and review processes required to ensure the technology is used appropriately. Consider if AI is a necessary tool for the task (for example, weigh the risks and costs) and whether developing an in-house AI tool would be of higher value than using third-party products.

Select your vendor carefully

When using pre-trained AI, ask your provider if the datasets were acquired externally or developed internally and how they were validated. Use diverse and representative data to avoid inaccurate and biased content. Establish a process for outputs to be reviewed by a diverse team from across your organization to look for inherent biases within the system. Ensure your vendor has robust security practices implemented in their data collection, storage and transfer processes. Continuously fine-tune or retrain the AI system with appropriate external feedback to improve the quality of outputs.

Be careful what information you provide

Avoid providing PII or sensitive corporate data as part of the queries or prompts. Determine whether the tool allows your users to delete their search prompt history.
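One way to reduce the risk of PII leaving your environment is to redact obvious identifiers from a prompt before it is sent to a third-party tool. The sketch below is a minimal illustration, not a complete PII filter: the regular expressions for email addresses and phone-like numbers are simplified assumptions and will miss many real-world formats.

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?\d[\s-]?){10,14}\b")

def redact_prompt(prompt: str) -> str:
    """Replace obvious email addresses and phone numbers with
    placeholder tokens before the prompt leaves your environment."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt
```

A redaction step like this is a safeguard of last resort; the stronger control remains policy and training that keep sensitive data out of prompts in the first place.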
