Threatonomics

AI and Misuse

What bad actors are up to with artificial intelligence

by Tiffany Saade, Data & Risk

Welcome to part two in our series on AI and cyber risk. Be sure to read the first installment “What you need to know: Artificial Intelligence at the Heart of Cyber,” here.

Key takeaways

  • Researchers have found that adversaries could leverage AI models to generate automated polymorphic malware — malware that can modify its source code to evade detection in a network.
  • Experts are concerned about adversaries repurposing AI models to automate different stages of an offensive cyber operation.
  • AI models have lowered the barrier to entry for adversaries to craft information-based social engineering, in the form of phishing emails, at heightened speed and volume. This sophistication allows adversaries to scale their efforts.
  • AI models have lowered the barrier to entry for adversaries to create deepfakes that impersonate senior figures in an organization, spread misinformation, or manipulate stock prices.
  • While adversaries can repurpose AI models for malicious purposes, these models can also be subject to various attack vectors (e.g., data and model poisoning, jailbreaking, and membership inference attacks).
  • Attacks on AI/ML systems can compromise their functionality, confidentiality, and output integrity.
  • AI models are increasingly adept at producing content that mimics reality. It is becoming increasingly difficult for the human eye to distinguish deceptive AI-generated content (e.g., synthetic video and images, phishing emails) from legitimate content produced by traditional means.
  • AI models simplify generating and obfuscating malicious code, ultimately expanding the threat landscape. While researchers have developed proofs of concept (POCs) for AI-enabled polymorphic malware, there is currently no open-source intelligence indicating that hackers are actively using AI models to generate this particular type of malware in their cyber operations.
  • There is still an incomplete understanding of how hackers misuse AI models to generate novel Tactics, Techniques, and Procedures (TTPs) at different stages of an offensive cyber operation, which leaves a gap in crafting cyber defense strategies.
  • Cybersecurity efforts are frequently characterized as ‘reactive’ rather than ‘proactive,’ meaning most of the lessons and responses to new threats come after the damage is done. With the evolving capabilities of bad actors, it is difficult to predict what attack vectors adversaries might resort to, and it is even more challenging to align an organization’s incentives with the need to routinely implement security best practices.
  • Resilience suggests a few recommendations for basic cybersecurity hygiene to strengthen an organization’s cyber posture in light of AI-enabled offensive cyber capabilities.

Background

In February 2024, OpenAI, in collaboration with Microsoft, reported that adversaries from Russia, North Korea, Iran, and China had been leveraging its generative AI platform, ChatGPT, for a variety of cyber-related functions to enhance the productivity and efficiency of their existing TTPs [1]. OpenAI immediately terminated the accounts owned by these adversaries. While the findings did not reveal fully automated cyber-attacks enabled by ChatGPT, OpenAI and Microsoft discovered that adversaries were leveraging the model for the following use cases:

  • AI-enabled reconnaissance
  • AI-enabled scripting procedures
  • AI-enabled vulnerability mining
  • AI-supported social engineering
  • AI-enabled anomaly detection evasion
  • AI-refined operational command techniques

While adversaries have been leveraging AI at different stages of preparation for offensive cyber operations, they have yet to develop novel AI-enabled capabilities. While GenAI can bring novel risks to the cyber landscape, it has mainly enhanced existing threats by amplifying their quality, specificity, and volume.

Risks of the misuse of GenAI in cyberspace

AI-enabled reconnaissance: ‘Know Your Targets’
Malicious actors can automate different stages of the Cyber Kill Chain through various types of AI training methods, such as supervised learning, adversarial machine learning, and reinforcement learning. Supervised learning, a subset of machine learning that relies on labeled data and classification to train models for outcome prediction and pattern mapping, can support adversaries in identifying which vulnerabilities to exploit in a system. Without AI assistance, adversaries usually rely on their expertise and intuition to pinpoint the vulnerabilities worth exploiting, depending on their intended outcome. However, supervised learning can redefine that stage of the Kill Chain (reconnaissance) by recognizing the vulnerability patterns that the adversary is looking for.
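
To make the supervised-learning idea concrete, below is a minimal, generic sketch of classification on labeled data. The dataset, feature semantics, and scikit-learn pipeline are illustrative assumptions, not a reconstruction of any actual adversary tooling; the same pattern underpins benign uses such as defensive triage.

```python
# Minimal, generic sketch of supervised classification: a model learns to map
# labeled examples to an outcome, then scores new records against that pattern.
# All data here is synthetic; nothing is scanned or exploited.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled historical records (e.g., observations tagged
# as matching or not matching a pattern of interest).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a classifier on the labeled training set.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The trained model now maps unseen records to the learned pattern --
# the "pattern mapping" described above.
print(classification_report(y_test, model.predict(X_test)))
```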

In the reconnaissance stage of a cyber operation, researchers find that attackers can leverage AI for intelligent target profiling and intelligence collection. Adversaries can leverage GenAI models to better understand the systems they are targeting; they can collect “structural, operational, and topological data about the user’s devices, network flows, and network infrastructure to identify a critical relationship with the intended targets” and utilize GenAI models to extract different patterns of targeting within a large amount of data [2, 3]. AI models can also support threat actors with intelligence collection, using established techniques such as natural language processing (NLP) [4]. GenAI models can output a detailed analysis of the potential target system that surpasses human capabilities [5].
Adversaries can also automate malicious intelligence collection by adopting an end-to-end spear-phishing technique that “involves identifying high-priority targets and automatically disseminating personalized machine-generated information” [6, 7].
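
The pattern-extraction idea above can be illustrated with something as simple as pulling basic indicators (email addresses, hostnames) out of a block of public text. The snippet below is a deliberately trivial sketch with invented sample text; real collection pipelines rely on far richer NLP than plain regular expressions.

```python
# Illustrative sketch: extracting simple patterns (emails, hostnames) from a
# blob of public text with regular expressions. The sample text is invented.
import re

public_text = """
Contact our press office at media@example.com or the IT helpdesk at
it-support@example.com. Services are hosted on vpn.example.com and
portal.example.com.
"""

emails = set(re.findall(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", public_text))
hostnames = set(re.findall(r"\b(?:[\w-]+\.)+(?:com|net|org)\b", public_text))

print("emails:", sorted(emails))
print("hostnames:", sorted(hostnames))
```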

AI-enabled social engineering: phishing emails and deep fakes
In this context, social engineering [8] refers to using artificial intelligence technologies to manipulate human behavior and ‘engineer’ content that is misleading or that impersonates individuals. AI-enabled social engineering can be information-based, in the form of phishing emails, or identity-based, in the form of text, audio, and video deepfakes.

Information-based social engineering: phishing emails

AI models have considerably amplified social engineering tactics by lowering the barrier to entry for adversaries with varying levels of language proficiency to craft grammatically sound emails, and by enabling adversaries to generate convincing, targeted written content. By leveraging natural language processing and machine learning, adversaries can tailor messages based on the recipient’s background, behavior, and professional activities, thereby increasing the likelihood of deception, at heightened speed and volume. It becomes increasingly challenging to distinguish phishing emails from credible ones, thus increasing the possibility of employees inadvertently clicking on malicious links, believing them to be ‘benign’ or ‘secure’ [9]. This sophistication boosts adversaries’ success rate and allows attackers to scale their efforts, simultaneously targeting many individuals or organizations and accessing sensitive information through compromised entities [10, 11]. A Microsoft threat researcher states that “language support is a natural feature of LLMs. It is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships” [12].

Some of the immediate negative consequences of AI-generated phishing emails include but are not limited to:

  • Financial loss, especially if the adversary impersonated a team member within the company and requested a direct transfer of funds or if financial information was stolen and resulted in unauthorized transactions [13, 14].
  • Data breaches can occur when the adversary aims to collect sensitive company data (e.g., intellectual property, login information, routing numbers, etc.). Such breaches can also disrupt operations within the affected company [15].
  • Trust erosion can occur, especially when a phishing attack results in a data breach. In this case, compromised companies likely suffer a decline in trust from customers and partners. Restoring this trust can be costly, and in some cases, customers may choose to move their business elsewhere [16].

Below are examples of phishing email AI tools openly available on the black market:

FraudGPT and WormGPT
FraudGPT and WormGPT are AI tools, built on the architecture of ChatGPT but without security guardrails, sold on the black market. They enable bad actors to generate convincing phishing emails, which lowers the barrier for non-native English speakers to craft lures that target humans at the endpoint and ultimately gain initial access to an organization. These tools could also allow a bad actor to inject and conceal malicious links in the email body, or change the nomenclature of a malicious link to make it seem legitimate. FraudGPT can also generate scam landing pages that encourage visitors to provide information. Experts from SlashNext revealed that the developers of FraudGPT allegedly trained the algorithm on “large quantities of malware data” [17].

Identity-based social engineering: deepfakes

AI-generated deepfakes, manipulated synthetic media in the form of audio, images, or video, pose a significant risk to integrity, credibility, and trust within and between companies [18, 19]. Adversaries can leverage deepfakes to impersonate critical individuals, such as CEOs or public figures, to spread misinformation, manipulate stock prices, or even enable corporate espionage. In the healthcare sector, a deepfake might show footage of a renowned doctor or scientist making falsified claims about vaccine safety or a medical product, potentially undermining company sales and brand integrity. In manufacturing, adversaries can use deepfakes for industrial sabotage, such as creating fake videos instructing employees to shut down production lines based on non-existent safety threats, ultimately leading to heavy financial losses. Some of the downstream consequences of AI-enabled disinformation include but are not limited to the following:

  • Market manipulation: Deepfakes could manipulate stock prices by spreading false information about a company. Such volatility and uncertainty about the company’s standing could prompt investors to sell their shares and invite increased scrutiny from regulators [20].
  • Shifts in consumer behavior: When a video or audio deepfake arguing against the quality of a product is disseminated, these false claims can lead consumers to avoid purchasing the affected product and other offerings from that company. In other words, misleading information about a specific company service can have a ripple effect on the company overall [21].
  • Impersonation leading to unintended fraudulent transactions: Bad actors can also resort to deepfakes to impersonate senior management, requesting that employees conduct financial transactions [22]. Employees might act on the false directives in the deepfake, believing they came from senior management.

Below are examples of a few deepfake tools that are openly available on the market for adversaries to leverage:

ROOP
ROOP is an AI tool that enables users to quickly generate deepfakes and swap faces in images and videos in real time [23]. No model training is required for face-swapping in videos; instead of using a large dataset, ROOP works with just a single reference image. The tool has led to the creation of add-ons for better live video deepfakes. Bad actors have allegedly been leveraging ROOP and ROOP-Cam to bypass know-your-customer (KYC) checks [24].

ReACTOR
ReACTOR is another AI deepfake tool that specializes in creating Not-Safe-For-Work (NSFW) deepfakes but includes the option to switch to Safe-For-Work (SFW) settings [25]. Both ROOP and ReACTOR are mainly used alongside Stable Diffusion to create deepfakes from images.

DeepFaceLive
DeepFaceLive is a deepfake tool leveraged specifically for video calls. Experts tracked Russian-speaking threat actors discussing DeepFaceLive to bypass security guardrails on video call platforms and perform fraud. Its GitHub repository contains faces of famous people (politicians, celebrities, etc.) for making deepfakes [26].

Threats to the safety of AI models

While bad actors can misuse AI models for many harmful use cases, there are also growing concerns about threats to the AI systems themselves. In that vein, MITRE’s ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) “provides a framework for identifying and addressing vulnerabilities in AI systems, which can help to prevent attacks and protect sensitive data” [27, 28]. Attacks on AI/ML systems can compromise their functionality, confidentiality, and the integrity and quality of their outputs. Some of the attack vectors that ML systems could be subject to include but are not limited to:

  • Data poisoning consists of infiltrating malicious data into the training set used to develop an ML model, with the goal of manipulating the model’s learning process [29] (a minimal illustration follows this list).
  • Model extraction allows adversaries to replicate an ML model using information gleaned from querying it, enabling attackers to copy and potentially misuse the model [30].
  • Evasion attacks involve an adversary manipulating input data to evade detection by the model (e.g., bypassing security guardrails such as spam detection or malware scanning) [31].
  • Inference attacks could result in data privacy breaches, since the adversary aims to extract sensitive information from ML models, such as learning which data points were used to train the model [32].
  • Denial of ML service is a group of attacks a bad actor can use to compromise the availability of a model. These attacks include model inversion and service disruption [33].
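
To make the data-poisoning vector above concrete, here is a minimal, self-contained sketch (synthetic data, scikit-learn; purely illustrative and not drawn from any real incident) showing how flipping the labels of a growing fraction of the training set degrades the resulting model on clean test data.

```python
# Toy illustration of data poisoning via label flipping: corrupting part of the
# training labels degrades the trained model. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

def accuracy_with_poisoned_labels(poison_fraction: float) -> float:
    """Flip a random fraction of training labels, retrain, and score on clean test data."""
    rng = np.random.default_rng(42)
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {fraction:.0%}: test accuracy {accuracy_with_poisoned_labels(fraction):.3f}")
```

With no poisoning the model scores well; as the flipped fraction grows, accuracy on clean data drops, which is exactly the kind of silent degradation defenders need to test for.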

Challenges of containing misuse of GenAI in cyberspace

Mitigating the risks from the misuse of GenAI remains challenging for a multitude of reasons:

  • AI models are improving at producing content that is challenging to differentiate from reality (synthetic audio, phishing emails, etc.) [34].
  • AI tools are lowering the barrier to entry for low-skilled hackers, helping them write and obfuscate malicious code [35].
  • The fast pace at which AI models are evolving means defensive measures can quickly become obsolete, so defenders need to continuously update and harden their security measures to maintain operational readiness against emerging threats.
  • Effective mitigation often requires integrating solutions across different platforms and coordinating between various organizational units, which can be complex and resource-intensive [36].
  • A robust understanding of how the threat landscape is evolving with the advent of AI is necessary but still incomplete across the defender ecosystem, particularly regarding the specific ways models can be misused for cyber operations and the associated downstream harms [37, 38]. Continuously monitoring the extent to which AI models will amplify current threats or create new ones can help organizations better prepare to contain emerging threats enhanced by AI (which are still not fully understood).

Cybersecurity efforts are frequently characterized as ‘reactive’ rather than ‘proactive,’ meaning most of the lessons and responses to new threats come after the damage is done, for two reasons (among others): it is difficult to predict what attack vectors adversaries might resort to, and it is even more challenging to align an organization’s incentives with the need to routinely implement security best practices [39, 40]. The rapid development cycle of AI technologies further complicates this issue, as it shortens the time security professionals have to understand and defend against new exploits, pushing the cyber-resilience landscape into a continuous game of catch-up.
The unpredictability of how malicious actors can repurpose AI models, and of how those models will evolve, poses significant barriers to proactively mitigating cybersecurity risks. With AI models’ vast capabilities comes a growing concern for misuse as these models evolve.

Recommendations

Start with basic cybersecurity hygiene. While GenAI can bring novel risks to the cyber landscape, it has mainly enhanced existing threats by amplifying their quality, precision, and volume. In that light, “nailing the basics” with an effective security program and routine cybersecurity practices is crucial. When building a security strategy, Resilience recommends organizations consider the following:

  • Design clear measures for identity verification. Streamlining identity verification across several platforms or channels (calling, emailing, or messaging) before engaging in sensitive transactions is particularly important in help-desk interactions, wire transfers, or account credential resets [41].
  • Mitigate credential-based attacks. Credential-based attacks can be curbed by adopting hardware tokens such as YubiKeys and other passwordless, phishing-resistant authentication methods. Strong password management remains essential, especially against AI-enabled password-guessing attacks.
  • Routine patching. Adversaries can accelerate vulnerability mining by leveraging AI tools, meaning critical vulnerabilities in organizations’ networks need more rapid attention and repair. Routine patching, including frequent software updates, is a robust measure against adversaries looking for entry points into the organization’s networks.
  • Capacity-building and organizational awareness. Train employees on the risks posed by cyber threats and on understanding the necessary processes to prevent breaches. Conduct tabletop exercises to simulate breach scenarios and invest in penetration testing to ensure the organization’s readiness. Cultivate a culture of cybersecurity awareness, encouraging users to actively identify and report suspicious activities or potential security breaches immediately. This also includes training personnel on the most recent AI-related security threats and best practices for AI deployment and maintenance.
  • Invest in monitoring and alerting. Deploy robust monitoring and alerting systems to detect and respond in real time to suspicious activities or potential security incidents.
  • Catalog AI systems used in-house. Keep track of all the organization’s AI systems, their types (machine learning models, neural networks, etc.), their placement within the organization’s IT infrastructure, and the business functions they interact with (a minimal inventory sketch follows this list). Organizations should also conduct routine risk assessments (penetration testing, increasing the robustness of security guardrails, prompt engineering, etc.) focused on spotting data handling and processing vulnerabilities.
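
As a starting point for such a catalog, the sketch below uses a simple, hypothetical record structure; the field names and the example entry are illustrative assumptions rather than a prescribed schema, and would be adapted to the organization’s own inventory and risk tooling.

```python
# Minimal sketch of an in-house AI system inventory; field names and the
# example entry are hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    name: str                      # internal identifier of the system
    model_type: str                # e.g., "ML classifier", "LLM", "neural network"
    infrastructure_location: str   # where it runs within the IT estate
    business_function: str         # business process it interacts with
    data_sensitivity: str          # sensitivity of the data it handles
    last_risk_assessment: str      # date of the last penetration test / review
    owners: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="claims-triage-classifier",
        model_type="ML classifier",
        infrastructure_location="internal Kubernetes cluster (prod)",
        business_function="claims intake triage",
        data_sensitivity="confidential",
        last_risk_assessment="2024-05-01",
        owners=["data-science", "security"],
    ),
]

# Export the catalog for review alongside routine risk assessments.
print(json.dumps([asdict(r) for r in inventory], indent=2))
```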

_______

Citations

[1] https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
[2] https://www.tandfonline.com/doi/full/10.1080/08839514.2022.2037254
[3] https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter-wp.pdf
[4] https://www.tandfonline.com/doi/full/10.1080/08839514.2022.2037254
[5] https://www.researchgate.net/publication/349946540_Effectiveness_of_artificial_intelligence_techniques_against_cyber_security_risks_apply_of_IT_industry
[6] https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter-wp.pdf
[7] https://cset.georgetown.edu/publication/automating-cyber-attacks/
[8] https://darktrace.com/cyber-ai-glossary/social-engineering
[9] https://drata.com/blog/ai-in-scams-and-social-engineering
[10] https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams
[11] https://www.cnbc.com/2023/11/28/ai-like-chatgpt-is-creating-huge-increase-in-malicious-phishing-email.html
[12] https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
[13] https://www.cnbc.com/2024/02/14/gen-ai-financial-scams-are-getting-very-good-at-duping-work-email.html
[14] https://www.cybersecuritydive.com/news/phishing-financial-impact/643737/
[15] https://www.mailgun.com/blog/email/ai-phishing/
[16] https://darktrace.com/blog/email-security-and-the-psychology-of-trust
[17] https://hackernoon.com/what-is-fraudgpt
[18] https://www.gao.gov/assets/gao-20-379sp.pdf
[19] https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf
[20] https://www.cnbc.com/2024/05/28/deepfake-scams-have-looted-millions-experts-warn-it-could-get-worse.html
[21] https://www.tamiu.edu/cswht/documents/wp-2024-001-chen.pdf
[22] https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
[23] https://github.com/s0md3v/roop
[24] https://www.scmagazine.com/news/deepfake-face-swap-attacks-on-id-verification-systems-up-704-in-2023
[25] https://github.com/Gourieff/ReActor-UI
[26] https://www.deepfakevfx.com/downloads/deepfacelive/
[27] https://atlas.mitre.org/
[28] https://www.nightfall.ai/ai-security-101/mitre-atlas
[29] https://www.crowdstrike.com/cybersecurity-101/cyberattacks/data-poisoning/
[30] https://paperswithcode.com/task/model-extraction/codeless
[31] https://www.ibm.com/docs/en/watsonx/saas?topic=atlas-evasion-attack
[32] https://arxiv.org/abs/1807.09173
[33] https://atlas.mitre.org/
[34] https://www.okta.com/blog/2024/01/how-cybercriminals-are-using-gen-ai-to-scale-their-scams/
[35] https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat
[36] https://hbr.org/2024/05/4-types-of-gen-ai-risk-and-how-to-mitigate-them
[37] https://mixmode.ai/blog/the-evolving-threat-landscape-why-ai-is-essential-for-cybersecurity-success/
[38] https://aibusiness.com/verticals/ai-in-cybersecurity-understanding-the-digital-security-landscape
[39] https://www.hyas.com/blog/proactive-intelligence-a-paradigm-shift-in-cyber-defense
[40] https://arxiv.org/html/2404.01205v1
[41] https://www.forbes.com/sites/forbestechcouncil/2024/03/14/how-to-secure-your-business-against-ai-threats/
