Cyber Threats to Canada’s Democratic Process: 2023 update

Communications Security Establishment
Edward Drake Building
Ottawa, ON K1J 8K6

ISSN: 2563-8165
Cat: D95-10E-PDF

The Communications Security Establishment (CSE) is Canada’s centre of excellence for cyber operations. As one of Canada’s key security and intelligence organizations, CSE protects the computer networks and information of greatest importance to Canada and collects foreign signals intelligence. CSE also provides assistance to federal law enforcement and security organizations in their legally authorized activities, when they may need CSE’s unique technical capabilities.

CSE protects computer networks and electronic information of importance to the Government of Canada, helping to thwart state-sponsored or criminal cyber threat activity on our systems. In addition, CSE’s foreign signals intelligence work supports government decision-making in the fields of national security and foreign policy, providing a better understanding of global events and crises and helping to further Canada’s national interest in the world.

Part of CSE is the Canadian Centre for Cyber Security (Cyber Centre), Canada’s technical authority on cyber security. The Cyber Centre is the single unified source of expert advice, guidance, services, and support on cyber security for Canadians and Canadian organizations.

CSE and the Cyber Centre play an integral role in helping to protect Canada and Canadians against foreign-based terrorism, foreign espionage, cyber threat activity, kidnapping of Canadians abroad, attacks on our embassies, and other serious threats with a significant foreign element, helping to ensure our nation’s security, stability, and prosperity.

 

Foreign adversaries are increasingly using cyber tools to target democratic processes around the world. Disinformation has become ubiquitous in national elections, and adversaries are now using generative artificial intelligence (AI) to create and spread fake content. This report addresses cyber threat activity targeting elections, and the growing threat that generative AI poses to democratic processes globally and in Canada.

Key findings and global trends

  • Cyber threat activity targeting elections has increased worldwide. The proportion of elections targeted by cyber threat activity relative to the total number of national elections globally has increased from 10% in 2015 to 26% in 2022. Since our publication of Cyber Threats to Canada’s Democratic Process: July 2021 update, we observed that the proportion of elections targeted increased from 23% in 2021 to 26% in 2022.Footnote 1
  • In 2022, we found that slightly over a quarter (26%) of all national elections globally had at least one reported cyber incident. Of the countries whose national elections were targeted by cyber threat activity from 2015 to 2022, approximately 25% are NATO countries and approximately 35% are OECD (Organisation for Economic Co-operation and Development) countries.
  • We observe that state-sponsored cyber threat actors with links to Russia and China continue to conduct most of the attributed cyber threat activity targeting foreign elections since 2021. Russia and China’s cyber threat activity includes attempted distributed denial of service (DDoS) attacks against election authority websites, attempts to access voters’ personal information or information relating to the election, and vulnerability scanning of online election systems.Footnote 2 We assess it very likely that Russia and China will continue to be responsible for most of the attributed cyber threat activity targeting foreign elections in the next two years and will focus on targeting countries of strategic significance to them.
  • State-sponsored cyber threat activity against Canada is a constant, ongoing threat that is often a subset of larger, global campaigns undertaken by adversaries. During periods of heightened bilateral tensions, cyber threat actors can be called upon to conduct cyber activity or influence operations targeting events of national importance, including elections. We assess that increased tensions or antagonism between Canada and a hostile state is very likely to result in cyber threat actors aligned with that state targeting Canada’s democratic processes or disrupting Canada’s online information ecosystem ahead of a national election.
  • The majority of cyber threat activity targeting elections is unattributed. Since the publication of the Cyber Threats to Canada’s Democratic Process: July 2021 update, more than half of the perpetrators of cyber threat activity targeting national elections were unknown. In 2022, 85% of cyber threat activity targeting elections was unattributed, meaning that these cyber incidents are not ascribed or credited to a state-sponsored cyber threat actor. When the perpetrators were known, only two countries were reported to actively target foreign elections in the last two and a half years: Russia and China. We assess it very likely that cyber threat actors are increasingly using obfuscation techniques and/or are outsourcing their cyber activities in order to hide their identities or links to foreign governments.
  • From the publication of the Cyber Threats to Canada’s Democratic Process: July 2021 update until the spring of 2023, we found that all national elections globally (146 in total) were subject to online disinformation geared towards influencing voters and the election. We also detected an increase in the amount of synthetic content being produced relating to national-level elections, almost certainly related to the increased accessibility of generative AI. However, we note that the number of reported cases where synthetic content is being used to spread disinformation about elections remains relatively low compared to the amount of synthetic content observed online. We assess that the use of generative AI for synthetic content related to national elections will almost certainly increase in the next two years, as this technology becomes more widely available.

This report is the fourth iteration of Cyber Threats to Canada’s Democratic Process and provides an update to the 2017, 2019 and 2021 reports released by CSE. Its purpose is to inform Canadians about the cyber threats to our democratic process in 2023.

Scope

This report considers cyber threat activity that affects democratic processes. Cyber threat activity involves the use of cyber tools and techniques (e.g. malware and spear phishing) to compromise the security of an information system by undermining the confidentiality, integrity, or availability of the system or the information it contains. This assessment considers cyber threat activity and cyber-enabled influence campaigns, which occur when cyber threat actors use cyber threat activity or generative AI to covertly manipulate online information in order to influence opinions and behaviours.

Sources

In producing this report, we relied on reporting from both classified and unclassified sources. CSE’s foreign intelligence mandate provides us with valuable insights into adversary behaviour. Defending the Government of Canada’s information systems also provides CSE with a unique perspective to observe trends in the cyber threat environment.

Limitations

We discuss a wide range of cyber threats to global and Canadian political and electoral activities, particularly in the context of Canada’s next federal election, currently set for 2025. Providing threat mitigation advice is outside the scope of this report; see the More information section below.

More information

Further resources can be found on the Cyber Centre’s cyber security guidance page and on the Get Cyber Safe website.

For readers interested in more detailed information about cyber tools and the evolving cyber threat landscape, we refer you to the following:

Estimative language

Estimative language chart

Our judgements are based on an analytical process that includes evaluating the quality of available information, exploring alternative explanations, mitigating biases, and using probabilistic language. We use terms such as “we assess” or “we judge” to convey an analytic assessment. We use qualifiers such as “possibly”, “likely”, and “very likely” to convey probability according to the chart below.

The contents of this report are based on information available as of October 26, 2023.

The chart below matches estimative language with approximate percentages. These percentages are not derived via statistical analysis, but are based on logic, available information, prior judgements, and methods that increase the accuracy of estimates.

Long description - Estimative language chart
  • 1% to 9%: Almost no chance
  • 10% to 24%: Very unlikely/very improbable
  • 25% to 39%: Unlikely/improbable
  • 40% to 59%: Roughly even chance
  • 60% to 74%: Likely/probably
  • 75% to 89%: Very likely/very probable
  • 90% to 99%: Almost certainly
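As an illustration of how the chart maps probabilities to estimative terms, the short Python sketch below encodes the bands above as a simple lookup. The function name estimative_term and the example value are ours and purely illustrative; this is not part of CSE’s analytic methodology.

# Illustrative only: map a probability (in percent) to the estimative term
# used in the chart above.
def estimative_term(probability_percent: float) -> str:
    bands = [
        (1, 9, "Almost no chance"),
        (10, 24, "Very unlikely/very improbable"),
        (25, 39, "Unlikely/improbable"),
        (40, 59, "Roughly even chance"),
        (60, 74, "Likely/probably"),
        (75, 89, "Very likely/very probable"),
        (90, 99, "Almost certainly"),
    ]
    for low, high, term in bands:
        if low <= probability_percent <= high:
            return term
    raise ValueError("Probability falls outside the 1% to 99% range covered by the chart")

# Example: a judgement placed at roughly 80% would be expressed as "very likely".
print(estimative_term(80))  # Very likely/very probable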
 

This assessment is the fourth version of “Cyber Threats to Canada’s Democratic Process” and is an update on the global cyber threat activity trends targeting national elections since the last publication in 2021. It also provides information on how cyber threat activity can target election infrastructure, how cyber-enabled influence campaigns impact Canada’s information ecosystem, and how generative AI technologies will shape the future of democratic debate online.

Canada’s democratic process: A target for cyber threat activity?

Cyber threat activity poses a real and growing threat to Canada’s democratic processes. Cyber threat actors, including state-sponsored cyber threat actors, hacktivists, and cybercriminals, interfere with the democratic process and seek to impact Canada’s ability to have fair and free elections. Canada’s efforts to promote international trade and development, international peace and security, as well as international human rights, increase the likelihood that it will become a target for cyber threat actors looking to change election outcomes in order to influence policy or diplomatic relations. Canada’s membership in key organizations, such as NATO (North Atlantic Treaty Organization) and the G7 (Group of Seven), its role in the Indo-Pacific region as well as its support for Ukraine almost certainly make it a target for cyber threat activity and influence campaigns, including those directly targeting our democratic processes.

We have observed that voters are the most frequent targets of cyber threat activity affecting elections worldwide, and Canadian voters are among some of the most connected in the world, making them a larger potential target for cyber threat activity.Footnote 3 Because a large number of Canadians share information online, cyber threat actors looking to influence Canadian voters’ opinions and behaviours can manipulate online information using cyber techniques to conduct influence operations (e.g., hack-and-leak) or use AI technologies to generate fake content (e.g., deepfakes). Increased tensions between Canada and other states could lead to state-sponsored cyber threat actors targeting Canada’s election and disrupting Canada’s democratic process. During periods of heightened bilateral tensions, cyber threat actors can be called upon to conduct cyber activity or influence operations targeting events of national importance, including elections. We assess that increased tensions or antagonism between Canada and a hostile state is very likely to result in cyber threat actors aligned with that state targeting Canada’s democratic processes or disrupting Canada’s online information ecosystem ahead of a national election.

Foreign adversaries are using cyber capabilities to threaten democratic processes

Foreign adversaries use cyber capabilities to influence political outcomes and threaten a country’s democratic process by targeting voters, politicians, political parties, and election infrastructure. Cyber threat actors can directly compromise websites, social media accounts, networks, and devices used by election management bodies, or pollute the information ecosystem by spreading disinformation and by conducting influence campaigns ahead of elections.

Examples of cyber activity that we have observed globally since 2021 include:

  • distributed denial of service (DDoS) attacks against election authority websites and electronic voting systems
  • unauthorized access to voter databases to collect private information
  • spear phishing attacks against elections officials and politicians
  • attempts to manipulate election results by compromising election workers’ access to voter databases
  • use of bots and inauthentic social media accounts to influence political discourse

It is becoming increasingly difficult to determine which adversaries are responsible for cyber threat activity targeting democratic processes. Outsourcing cyber threat activity to third parties, such as hacktivists and cybercriminals, or purchasing cyber tools and services from commercial providers and online marketplaces can help foreign adversaries obfuscate their operations. Foreign adversaries have access to a wide range of cyber tools and services on illegal markets that supplement their in-house cyber capabilities. Influence-for-hire firms can also help hide the source of influence campaigns by providing tools and services that spread disinformation and manipulate political discourse.

For example, in February 2023, a team of journalists uncovered the hacking and disinformation operations of an Israeli “influence-for-hire” firm that claimed to have helped clients, including foreign governments, target more than 30 elections across the globe.Footnote 4 In addition, foreign adversaries outsource their cyber activities to non-state cyber groups, such as cybercriminal groups and hacktivists, to avoid direct attribution and access enhanced cyber capabilities.

Cyber threat activity and AI technology: Cyber threat actor goals

Short-term goals
  • Put into question the results of the election
  • Promote polarizing political discourse by manipulating social media algorithms with fake bot accounts
  • Reduce voter turnout
  • Generate misleading deepfake videos and other AI generated synthetic content
Mid-term goals
  • Weaken confidence in leadership
  • Online public discourse becomes “one-sided” and political polarization fuels discontent and social movements
  • Weaken confidence in election infrastructure
  • Increase skepticism of information online
Long-term goals
  • Create distrust that the electoral process is democratic
  • Co-opt domestic social movements to promote foreign economic, military, or ideological interests
  • Voters become disenfranchised and apathetic to elections
  • Create disbelief in information online
 

The Cyber Centre has been analyzing cyber threat activity targeting national level elections globally since 2015. Not all cyber threat activity is reported – much of it is covert. Therefore, we assess that our data almost certainly underestimates the total number of events targeting democratic processes around the world. Based on our observations from 2015 to 2023, we identified four global trends.

Trend 1: Targeting of democratic processes has increased

The proportion of elections targeted by cyber threat activity relative to the total number of national elections globally has increased from 10% in 2015 to 26% in 2022. Since our last publication of the Cyber Threats to Canada’s Democratic Process: July 2021 update, we observe that the proportion of national elections targeted increased from 23% in 2021 to 26% in 2022.Footnote 5 The percentage of elections targeted in 2020 was noticeably lower than in other years, and we assess that this is almost certainly an anomaly correlated with the COVID-19 pandemic. Additionally, we found that in 2022 over a quarter (26%) of all national elections had at least one cyber incident. These findings demonstrate a high level of cyber threat activity; however, some cyber threat activity targeting democratic processes remains unidentified or unreported, and we assess that it is very likely that these findings represent conservative estimates.

We found that the most common type of cyber incident affecting national elections was denial of access to, or distortion of, election commission websites, followed by internet shutdowns during elections. The total share of targeted elections that were in NATO countries increased from 2.8% in 2021 to 3.7% in 2022 (Figure 1a). The COVID-19 pandemic likely explains why fewer elections in OECD countries were targeted in 2020 and 2021; we observed an uptick in the share of targeted elections that were in OECD countries, from 4% in 2021 to 13% in 2022 (Figure 1b).

Figure 1: Percentage of national-level elections targeted by cyber activity by year

Figure 1a
Long description - Figure 1a

Figure 1a depicts a chart which shows the percentage of global elections each year that had at least one cyber incident related to a national level election. For each year between and including 2015 to 2022, there is a vertical bar with two colours: navy blue representing non-NATO countries and teal representing NATO countries.

Elections in NATO countries, elections in non-NATO countries and overall trend
Election year | Global elections with an incident, not in NATO (%) | Global elections with an incident, in NATO (%) | Total global elections with an incident (%)
2015 | 8.5 | 1.7 | 10.2
2016 | 10.8 | 4.6 | 15.4
2017 | 9.6 | 7.7 | 17.3
2018 | 11.3 | 6.4 | 17.7
2019 | 15.9 | 2.9 | 18.8
2020 | 6.2 | 4.6 | 10.8
2021 | 20.3 | 2.9 | 23.2
2022 | 22.6 | 3.8 | 26.4
Figure 1b
Long description - Figure 1b

Figure 1b depicts a chart which shows the percentage of global elections each year that had at least one cyber incident related to a national level election. For each year between and including 2015 to 2022, there is a vertical bar with two colours: navy blue representing non-OECD countries and teal representing OECD countries.

Elections in OECD countries, elections in non-OECD countries and overall trend
Election year | Global elections with an incident, not in OECD (%) | Global elections with an incident, in OECD (%) | Total global elections with an incident (%)
2015 | 6.8 | 3.4 | 10.2
2016 | 12.3 | 3.1 | 15.4
2017 | 7.7 | 9.6 | 17.3
2018 | 8.0 | 9.7 | 17.7
2019 | 10.1 | 8.7 | 18.8
2020 | 6.2 | 4.6 | 10.8
2021 | 18.8 | 4.4 | 23.2
2022 | 13.2 | 13.2 | 26.4
 

Trend 2: Russia and China continue to conduct most of the attributed cyber threat activity targeting foreign elections

We observe that state-sponsored cyber threat actors with links to Russia and China continue to conduct most of the attributed cyber threat activity targeting foreign elections since 2021. Russia has consistently been responsible for observed cyber threat activity interfering with foreign elections since 2016, and China has been active every year since 2015, with the exception of 2017 and 2021 (Figure 2). Russia and China’s cyber threat activity includes attempted DDoS attacks against election authority websites, attempts to access voters’ personal information or information relating to the election, and vulnerability scanning of online election systems.

Figure 2: Proportion of cyber incidents attributed to countries targeting foreign national-level elections by year

Long description - Figure 2

A bar chart of foreign cyber incidents per year, broken down by attributed perpetrator. For each year between and including 2015-2022, there is a vertical bar with up to 4 colours, and all bars add up to 100%. Each colour represents the percentage of foreign cyber incidents attributed to one of four categories: Russia, China, Other, or Unknown.

The data for this chart is as follows:

Election year | China (%) | Russia (%) | Other (%) | Unknown (%)
2015 | 12.5 | 0 | 0 | 87.5
2016 | 15.0 | 35.0 | 5.0 | 45.0
2017 | 0 | 46.2 | 0 | 53.8
2018 | 14.3 | 7.1 | 0 | 78.6
2019 | 3.9 | 34.6 | 11.5 | 50.0
2020 | 20.0 | 23.3 | 23.3 | 33.4
2021 | 0 | 11.1 | 0 | 88.9
2022 | 6.1 | 9.1 | 0 | 84.8
 

We assess that attributed cyber threat activity is almost certainly focused on influencing elections to fulfill strategic objectives in geopolitical regions of interest to Russia and China. In some cases, cyber activity is politically motivated and will target a country’s democratic processes as a form of retribution. For example, pro-Russia state-affiliated cyber actors have targeted elections of countries who have provided assistance to Ukraine. We assess it very likely that Russia and China will continue to be responsible for most of the attributed cyber threat activity targeting foreign elections and will focus on targeting countries of strategic significance to them. We note that upcoming European elections in 2023 and 2024 could be a significant target for Russia due to the military and economic importance of Europe’s support to Ukraine.

 

Trend 3: The majority of cyber threat activity targeting elections is unattributed

Since the publication of the Cyber Threats to Canada’s Democratic Process: July 2021 update, more than half of the perpetrators of cyber threat activity targeting national elections were unknown. In 2022, 85% of cyber threat activity targeting elections was unattributed, meaning that these cyber incidents are not ascribed or credited to a state-sponsored cyber threat actor. We assess it very likely that cyber threat actors are increasingly using obfuscation techniques and/or are outsourcing their cyber activities in order to hide their identities or links to foreign governments.

By outsourcing malicious cyber threat activities, foreign adversaries can avoid public attribution and diplomatic consequences. Foreign adversaries have been increasing their use of non-state cyber threat groups to avoid cyber activities being linked back to their government. Non-state cyber threat groups have less government oversight, do not abide by the same conventions and norms, and can organize cyber activities, such as distributed denial-of-service (DDoS) attacks, quickly and with little warning. Foreign adversaries are also using influence-for-hire firms to conduct influence operations under the radar. Since 2011, at least 27 online information operations have been partially or wholly attributed to commercial public relations or marketing firms.Footnote 6 Services related to election interference represent a growing market, and if the use of third-party proxies continues, we assess that in the next two years, governments will likely have difficulties linking cyber threat activities targeting elections back to the foreign adversaries responsible.Footnote 7

Trend 4: Generative AI is increasingly being used to influence elections

Cyber threat actors are using generative AI technologies to shape the future of democratic debate online. In August 2019, researchers found that there had been an increase in dark web source activities, as well as an increase in advertising for customized deepfake service offerings.Footnote 8 Since the publication of the Cyber Threats to Canada’s Democratic Process: July 2021 update, we have detected an increase in the amount of synthetic content (e.g. deepfakes) relating to elections, almost certainly due to the increased accessibility of many of these technologies. However, we note that the number of reported cases where synthetic content is being used to spread disinformation about elections remains relatively low compared to the amount of synthetic content observed online. We assess that AI synthetic content generation related to national elections will almost certainly increase in the next two years, as this technology becomes more widely available. As synthetic content generation increases and becomes more widespread, it will almost certainly become more difficult to detect, making it harder for Canadians to trust online information about politicians or elections.

 

Elections around the world are increasingly relying on digital technologies, meaning that the threat of cyber attacks against election infrastructure is growing. Cyber threat actors target election infrastructure to directly impact the elections process. Examples include conducting a DDoS attack, shutting down an election commission website, gaining unauthorized access to a voter database via phishing email, or attacking election infrastructure such as voting machines.

Figure 3: Election infrastructure

Long description - Figure 3

A graphic representing different parts of election infrastructure. “Election infrastructure” is written in a circle at the centre and has seven different branches linked to other smaller circles representing different aspects of election infrastructure including: election websites, social media accounts, election networks, emails used by election management bodies/employees, voting machines, optical scan machines, and voter registration systems.

 

Unlike influence campaigns which aim to influence voter behaviour, cyber threat actors targeting election infrastructure seek to attack the electoral process directly, modify results, or reduce access to voting. There are three stages in which cyber threat actors can target election infrastructure: when voters register, when they vote, and when the votes are tallied. Cyber threat activity compromising any of these three stages of the electoral process can jeopardize the integrity of an election.

 

Voter registration

In almost all countries, voters must register. In Canada, voters can register for national elections either at the polls or online.Footnote 9 Online registration can speed up the election process, and voter registries can be kept secure through safety measures such as controlling registry access, physically protecting associated hardware, and implementing additional IT security measures. However, voter registries contain valuable data, which can be a target for malicious cyber threat actors. For example, cyber threat actors can attempt to alter online voter records, erase or encrypt data, make the website inaccessible for registration, or display misleading information about registration. Cyber threat actors can also attempt to bypass security measures to access voter databases and use this personal information to target voters. For instance, on October 22, 2020, the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) publicly denounced an Iranian campaign to obtain US voter information, send threatening email messages to intimidate voters, and disseminate disinformation pertaining to the election.Footnote 10

Casting the ballot

Once a voter’s identity is confirmed, they can cast their vote either by using a paper ballot or by selecting an option on a screen. In Canada, only paper ballots are used in federal elections. Other countries, such as the United States, France, and Brazil, use direct-recording electronic (DRE) machines, commonly referred to as “voting machines,” in their elections.Footnote 11 DRE machines are susceptible to tampering by malicious cyber threat actors, and cyber security experts have in the past demonstrated several vulnerabilities in these systems.Footnote 12 Since 2003, 11 countries have abandoned e-voting, citing concerns about trust and security of the vote.Footnote 13 Some DRE machines do not record voters’ choices onto paper, which can lead to complications in recounting votes.Footnote 14

Vote tally and the paper trail

Most countries use some form of technology to process and tally the votes. One of the most common technologies used to tally votes is the optical scan machine. While some of Canada’s municipal and provincial elections use optical scan machines, all federal election results are counted by hand.Footnote 15 These machines scan paper ballots to register the voters’ marks and to store the results electronically. This system allows for a quicker tallying of the votes while also ensuring that the paper ballots can be compared to the scanner’s tabulation. Like other types of computer-based technology, optical scan machines are susceptible to compromise, and physical access to these machines must be protected in order to ensure the software’s integrity.Footnote 16 Relying on an online system to collect and tabulate votes, without having a paper audit trail as a backup, can make it difficult to detect errors or compromises in voting machine software or hardware.
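To illustrate why a paper trail matters, the following minimal Python sketch (hypothetical, and not drawn from any election authority’s actual tooling) compares an optical scanner’s electronic tally with a hand count of the same paper ballots and reports any per-candidate discrepancy that would warrant closer investigation.

from collections import Counter

def compare_tallies(scanner_results: dict, paper_ballots: list) -> dict:
    # Count the hand-examined paper ballots and subtract that tally from the
    # scanner's electronic results; a non-zero value flags a discrepancy.
    hand_tally = Counter(paper_ballots)
    candidates = set(scanner_results) | set(hand_tally)
    return {c: scanner_results.get(c, 0) - hand_tally.get(c, 0) for c in candidates}

# Hypothetical example: the paper ballots show one more vote for Candidate B
# than the scanner recorded, so that race would be examined more closely.
scanner = {"Candidate A": 120, "Candidate B": 97}
paper = ["Candidate A"] * 120 + ["Candidate B"] * 98
print(compare_tallies(scanner, paper))  # e.g. {'Candidate A': 0, 'Candidate B': -1}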

 
 

Cyber threat activity can generate disinformation that influences voters ahead of elections. This disinformation can be part of a wider election influence campaign, where cyber threat actors use social engineering tactics and techniques to manipulate voters’ emotions and behaviours.Footnote 17 Gaining unauthorized access to privileged information can influence public discourse online and potentially affect voters’ opinions and voting preferences. This type of cyber threat activity can include a hack-and-leak of sensitive information from a political party’s database, hacking into a politician’s social media account to post disinformation, or defacing a political party’s website with disinformation. Rather than targeting election infrastructure directly, cyber threat actors will use cyber capabilities to try to influence or manipulate the electorate.

Cyber activity against democratic processes worldwide is more often conducted to influence the electorate prior to elections rather than to target election infrastructure (Figure 4). Based on these findings, we assess that on average, cyber threat actors targeting elections favour manipulating the information environment over attempts to directly impact the voting process.

Figure 4: Number of observed incidents targeting national-level elections via election infrastructure vs. social engineering by year

Long description - Figure 4

A bar chart of the number of incidents per year targeting election infrastructure or involving social engineering. For each year between and including 2015-2022, there are two vertical bars, in different colours, the height of which indicates the number of incidents.

The data for this chart is as follows:

Election year | Incidents involving social engineering | Incidents involving election infrastructure
2015 | 1 | 3
2016 | 5 | 2
2017 | 6 | 2
2018 | 5 | 2
2019 | 7 | 5
2020 | 10 | 9
2021 | 9 | 6
2022 | 14 | 2
 

There are several reasons why cyber threat actors conduct social engineering rather than target election infrastructure. These include:

  • having a broader set of targets to choose from
  • needing fewer bespoke techniques, tactics, and procedures (TTPs) to gain access to privileged information
  • targeting sources of information that do not have the protection of an IT team (e.g. obtaining information from a political staffer’s personal email account)
  • justifying hack-and-leaks as being altruistic and providing the public with important information that they “should know about”
  • being able to outsource influence activities to a marketing or PR firm
  • having more plausible deniability; targeting the electorate is less direct, and harder to trace
 

Foreign adversaries conducting influence campaigns

Foreign adversaries will use cyber threat activity to influence elections by creating, circulating, and/or amplifying disinformation in online public spaces. They do this to manipulate a country’s population covertly in the hopes that the outcome of the election will align with their strategic objectives abroad. Foreign adversaries may also consider targeting another country’s electorate as being less escalatory than targeting the country’s election infrastructure. Nevertheless, foreign adversaries will attempt to obfuscate their involvement in influence campaigns and the cyber activities that feed into these influence campaigns. Geo-spoofing and encrypted messaging platforms make it extremely difficult to identify disinformation’s origin.Footnote 18 In some cases, they will hire a third party to conduct influence campaigns to target elections. These third parties are commonly referred to as “influence-for-hire” firms and are part of a thriving industry that has grown since 2019.Footnote 19 Researchers at the Oxford Internet Institute found 48 instances of states working with influence-for-hire firms from 2019 to 2020, a 128% increase since the 2017 to 2018 period.Footnote 19 Foreign adversaries will also use social botnets to amplify certain narratives online and push content onto voters with the same political views, worsening the effect of political echo chambers and increasing political polarization ahead of elections.Footnote 20 We assess that influence campaigns propagated by state-sponsored cyber threat actors almost certainly represent an ongoing, persistent threat to Canadians.

Online news environment

The Online News Act requires tech companies to compensate Canadian media organizations for the news content that appears on their online platforms.

Some tech companies have refused to comply and will block Canadian news from their platforms. In 2019, almost 50% of Canadians aged between 18 and 24 relied on social media as their main source of news.Footnote 21

We assess that in the absence of Canadian news sources, younger Canadians are very likely at a higher risk of being exposed to misleading news content, which may be part of wider disinformation and influence campaigns.

 

Generative artificial intelligence can produce various types of content, including text, images, audio, and video, sometimes referred to as “deepfakes.” This synthetic content can be used in influence campaigns to covertly manipulate information online, and as a result, influence voter opinions and behaviours. Despite the potential creative benefits of generative AI, its ability to pollute the information ecosystem with disinformation threatens democratic processes worldwide.

Machine learning

Generative AI is an application of machine learning. Machine learning allows computers to learn how to complete a task from data without being explicitly programmed with a step-by-step solution. Machine learning programs have progressed to the point where the content they produce is often nearly impossible to tell apart from human-made content.Footnote 22
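As a simple illustration of this idea (a classifier rather than a generative model), the Python sketch below learns to separate two kinds of short messages from a handful of labelled examples instead of following hand-written rules. The tiny dataset, its labels, and any resulting prediction are invented purely for illustration.

# Minimal machine learning sketch: the model learns the task from labelled
# examples rather than from an explicitly programmed, step-by-step solution.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Polling stations open at 9 am on election day",
    "Register to vote online before the deadline",
    "Shocking leaked video proves the election is rigged",
    "Secret documents show votes are being deleted",
]
labels = ["routine", "routine", "suspect", "suspect"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # patterns are inferred from the examples above

print(model.predict(["Leaked files prove ballots were secretly changed"]))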

In recent years, generative AI has become increasingly popular as its ability to generate synthetic content (text, images, or videos) has become accessible through large tech companies like OpenAI, Meta, and Google. Unfortunately, cyber threat actors are also using these capabilities to generate or amplify disinformation online. Between August 2019 and January 2021, third-party monitoring recorded an uptick in dark web source activities on deepfake-related topics as well as an increase in advertising for customized deepfake service offerings.Footnote 23 We assess it very likely that cyber threat actors will increasingly use generative AI in influence campaigns targeting elections.

 

Figure 5: Types of synthetic content created by Generative AI

Long description - Figure 5

Three separate circles each representing the different types of synthetic content that generative AI can create: 1) Text, 2) Image, 3) Video

 

In most cases, it is unclear who is behind AI-generated disinformation. However, we assess it very likely that foreign adversaries or hacktivists will use generative AI to influence voters ahead of Canada’s next federal election. We have observed that cyber threat actors are already using this technology to pursue strategic political objectives abroad. For example, pro-Russia cyber threat actors have used generative AI to create a deepfake of Ukrainian President Zelenskyy surrendering following Russia’s invasion of Ukraine.Footnote 24 We assess that foreign adversaries and hacktivists are likely to weaponize generative AI within the next two years to create deepfake videos and images depicting politicians and government officials and to further amplify and automate inauthentic social botnets using text and image generators.

 

Deepfake videos influencing elections

The term “deepfake video” – combining “deep learning” and “fake” – refers to machine learning models that use image and audio synthesis techniques to generate fake videos that can appear realistic and genuine to viewers. Generative AI is used to reverse engineer real audio or video of a person to convincingly mimic their image and style of speech, producing a video of events that never actually occurred.Footnote 25 Deepfake videos of political figures risk deceiving voters and creating further political polarization. For example, in February of 2023, a deepfake was circulated on social media depicting Joe Biden making anti-transgender comments, despite his administration’s public support for the LGBTQ community.Footnote 26 This example is only one among thousands of deepfakes of politicians circulating on social media, making it harder for voters to distinguish between real and fake political messaging.Footnote 27 The public’s own understanding of the prevalence of deepfake videos online can also bring into question legitimate sources of information. For example, political debates can be a source of crucial information for voters in the lead up to the election since they present political party platforms and have been shown to change swing voters’ candidate preferences.Footnote 28 However, if cyber threat actors circulate deepfakes altering debate content, voters may be deceived. Even if the truth is made clear later on, the damage may lead voters to question the legitimacy of political debates in the future. While most social media platforms, such as Instagram, Facebook, and YouTube, are making efforts to flag and remove deepfakes from their platforms, they are not always able to detect and remove deepfake content quickly before it can be widely circulated.

Social media companies’ ability to detect and remove deepfakes is further complicated by considerations about creativity and freedom of speech. Political parties are themselves using generative AI capabilities as part of their campaigns, for example, to create videos depicting “future scenarios” if a political rival is elected.Footnote 29 While disclaimers are used to identify the video as a deepfake, very little regulation currently exists in Canada and the US on the extent to which generative AI can be used in political advertising.Footnote 30

Social botnets augmented by AI capabilities

Cyber threat actors use fake social media profiles to disseminate or amplify disinformation ahead of elections.Footnote 31 A cluster of fake profiles operated by software robots, or “social botnets”, can “control online social network accounts and mimic the actions of real users”.Footnote 32 Social botnets can influence and/or misrepresent popular opinion and researchers have found that bots accounted for as much as 10% of accounts participating in conversations on certain topics, such as crisis events.Footnote 33 Social botnets have also been known to amplify domestic narratives or disinformation to contribute to a country’s political polarization. As such, they are often part of larger influence campaigns and several “influence-for-hire” firms list this as one of their offered services.Footnote 34

We assess that generative AI will almost certainly be increasingly used to further automate and augment social botnet functions in the next two years. AI text generators, like ChatGPT and Bard, are capable of generating paragraphs of coherent text that are virtually impossible to tell apart from human writing.Footnote 35 These generative AI capabilities can be applied to social botnets to improve their posts and make them sound more believably human.Footnote 36 Moreover, AI image generators, like GAN Lab, Midjourney or DALL-E, can fabricate fake images that are in some cases almost impossible to tell apart from real ones.Footnote 37 These capabilities can be used to generate fake profile pictures for botnet social media accounts, or to generate misleading content for posts. For example, in March 2023 a pro-Chinese government influence campaign used several AI-generated images to support narratives negatively portraying US leaders.Footnote 38 Differentiating between what is real and what is AI-generated will become more difficult for voters as social botnets continue to evolve and as generative AI capabilities become increasingly available.

We assess it very likely that the capacity to generate deepfakes exceeds our ability to detect them. Current publicly available detection models struggle to reliably distinguish between deepfakes and real content. Given the ineffectiveness of deepfake detection models, and the increasing availability of generative AI, it is likely that influence campaigns using generative AI that target voters will increasingly go undetected by the general public. We also assess that it is very likely that as technology develops, it will become better at fooling detection models, which will make it more difficult for social media companies to detect and automatically remove synthetic content before it reaches voters.

 

Based on our findings, we assess that disinformation about the next federal election will almost certainly be found online and that foreign adversaries will likely use generative AI to target Canada’s federal election in the next two years. We assess that, overall, Canada is a lower priority target for cyber threat activity than some of its allies, such as the US and UK. However, Canada does not exist in a vacuum and cyber activity affecting our allies’ democratic processes will likely have an impact on Canada as well. For example, a high percentage of Canadians use US social media platforms and are often exposed to the same deepfakes and foreign influence campaigns targeting US citizens.Footnote 39

We also note that the four global trends we identified have implications for Canada. The percentage of elections targeted by cyber threat activity has increased globally and, based on this trend, we assess cyber incidents are also more likely to happen in Canada’s next federal election than they have been in the past. As stated in the National Cyber Threat Assessment 2023-2024, cyber threat activity has become an important tool for states to influence events without reaching the threshold of conflict. We judge that cyber threat activity targeting democratic processes is likely viewed by foreign adversaries such as China and Russia as an obscure and risk-averse way of impacting Canada’s policy outcomes. We also note that identifying the perpetrators of cyber threat activity targeting elections is becoming increasingly difficult as obfuscation techniques and third-party contracting become widespread. We judge it likely that this will also mean that it will become increasingly difficult for Canada to attribute cyber threat activity targeting its democratic processes.

In Canada, technology is used throughout the national election process and can be an important part of making elections efficient and accurate; however, not having physical paper ballots presents some risks. Relying on digital forensic teams to assess election interference presents challenges, including the risk of flagging non-fraudulent voting abnormalities as fraud and the difficulty of distinguishing cyber compromises from system malfunctions. Currently, Canada’s national elections are paper-based; however, some provincial, territorial, Indigenous and municipal governments are deliberating the benefits and drawbacks of online voting.Footnote 40 The Northwest Territories conducted its 2019 territorial elections using online voting, and a large percentage of municipalities in Ontario and Nova Scotia are adopting online voting practices. As of September 15, 2023, we found that 217 of Ontario’s 444 municipalities (49%) and 42 of Nova Scotia’s 49 municipalities (86%) used online voting in at least one of their past elections (Figure 6).

Figure 6: Map of electronic voting in Canada

Long description - Figure 6

A map of Canada, with the provinces and territories slightly separated. They are all uniformly coloured, except for the Northwest Territories, Ontario, and Nova Scotia. The Northwest Territories has a dotted pattern, which indicates it had one territorial election with online voting. Ontario and Nova Scotia have two colours, the amount of each colour being proportional to the number of municipalities using online voting. For Ontario, this is 49% and for Nova Scotia, this is 86%.

 

Potential election interference and suspected election result tampering can put into question the legitimacy of an election and result in investigations into the election process. Disproving false narratives relating to election interference can be difficult: the technical components of cyber threat activity are not always easily understood by voters and the extent of cyber compromises can be misunderstood or misinterpreted.

 

Cyber threat activity continues to be used to target democratic processes globally, and the Government of Canada, CSE, and the Cyber Centre produce advice and guidance to help inform Canadians about the cyber threats to Canada’s elections.

The Cyber Centre provides cyber security advice and guidance to all major political parties, in part through publications such as the Cyber Security Guide for Campaign Teams and Cyber Security Advice for Political Candidates.

The Cyber Centre has also published the following:

The Cyber Centre also works closely with Elections Canada to protect its infrastructure, including publishing a report on Security Considerations for Electronic Poll Book Systems.

We encourage Canadians to consult the Cyber Centre’s resources including the National Cyber Threat Assessment 2023-2024, and the How to Identify Misinformation, Disinformation, and Malinformation publication, as well as the Fact Sheet for Canadian Voters. CSE’s Get Cyber Safe campaign will also continue to publish relevant advice and guidance to inform Canadians about cyber security and the steps they can take to protect themselves online.

 
 