
Do You Need Human Brains to Make AI Useful in Cybersecurity?

The Indispensable Role of Humans in AI-Driven Cyber Risk Management

by Rob Brown, Sr. Director of Cyber Resilience

As the world advances with data processing and artificial intelligence (AI) capabilities at a mind-boggling pace, we might feel as if humans are becoming obsolete. This question has certainly been posed by an endless series of articles that have clogged our inboxes since ChatGPT’s public release in late 2022.

Maybe this development is a good thing. After all, humans are prone to all kinds of biases, irrational thinking, “noisy” behaviors, and a lack of good information to support our judgments and decisions. In the cybersecurity field, these biases are worsened as we are burdened by “critical fires” and data overload, making it hard to filter out noise and make sound decisions. Seasoned security professionals take years of experience to hone the judgment and leadership needed to cut through that noise. Even then, highly experienced professionals often end up operating outside their “normal range of performance” during a crisis, succumbing to fatigue and error.

But that’s the plight of humans caught in our mortal coil. Therefore, the advent of systems that can suppress the effects of costly systematic human errors in judgment should be celebrated by the security field, should they not?

We know that AI systems are designed by humans to overcome reproducible human foibles and can work quite well when automating the analysis of large sets of data. To summarize decades of research: a model of you makes better judgments and predictions than you do. That’s because AI systems operate without any of the “soft computer” problems of the human brain. They don’t get hungry. They don’t get fatigued. They aren’t motivated by ideology. They just do what they’re told to do with surprising repeatability, unaffected by the often grinding nature of the cybersecurity field.

In a field like cybersecurity, AI and machine learning are already used to spot malware that hasn’t been seen before or to flag emails that are likely phishing attempts. Cybersecurity operations is a field where basic automation of highly repetitive tasks led to the original SIEMs and even gave rise to new product categories like SOAR.
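To make the phishing example concrete, here is a minimal sketch of the kind of text classifier that does this flagging, assuming scikit-learn and a toy labeled corpus. The training emails, features, and model choice are illustrative assumptions, not a description of how any particular product works.

```python
# A minimal sketch of ML-based phishing triage: TF-IDF text features
# plus logistic regression. Toy data; real systems train on far larger
# corpora and richer signals (headers, URLs, sender reputation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password here immediately",
    "Quarterly budget review meeting moved to Thursday at 3pm",
    "You have won a prize, click this link to claim your reward",
    "Attached are the meeting notes from yesterday's standup",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF converts raw text into numeric features; logistic regression
# then scores how phishing-like a new message looks.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Urgent: confirm your password to avoid account suspension"]
print(model.predict_proba(suspect))  # [[P(legitimate), P(phishing)]]
```

The point is not the model itself but the division of labor: the classifier scales a repetitive judgment, while humans decide what counts as phishing in the first place.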

But of course, these advancements don’t easily translate to more executive-level responsibilities in security, nor do they imply that AI systems are infallible. 

We understand that unintended consequences in complex systems are often a result of those systems encountering conditions their designers never considered. When this happens, we turn back to the humans who designed the systems—humans who are capable of complex and critical thought—and ask for design changes. We don’t ask the failed system to repair itself; we ask humans to intervene.

This gives us good reason to be optimistic about humans’ purpose, or at least about our gainful employment, in the increasingly cyber-connected world. Humans are still necessary contributors to the AI systems humans design.

Humans solve human problems

One of the first areas of irreplaceable human contribution is defining the purpose of AI systems: they exist to solve our problems. A problem to be solved represents a mismatch between the state of the world and human preferences for the world. In the security space, we are still uncovering which problems are actually good AI use cases, but most revolve around one issue: security operations generate large volumes of cyber event data. Google Cloud has integrated an AI Workbench into its Chronicle product for security operations professionals. IBM has integrated a version of its Watson system into the QRadar Security Intelligence platform to help with risk analysis and incident response. And Darktrace has been working on its Enterprise Immune System for years to detect and respond to anomalous network activity.

While these new systems represent an evolution in the role of technology within security event data analysis, they are still largely used for automation, not risk analysis. They are tools humans use to solve the problem of having too much data to look at. 

While we will obviously employ advanced AI in our search for even more technical solutions to analyzing large data sets, we shouldn’t presume that advanced AI is capable of “knowing” our human or business preferences, nor should we allow it to independently take action on our risk by proxy. If AI systems become advanced enough to operate with their own sense of agency, they will solve their problems, not ours. Humans are needed in the loop to solve human problems.

Humans are a necessary database

Humans are often the initial source of data and information that feeds AI systems. All of the systems we design that ultimately function well do so because they have access to volumes of good data. But herein lies a problem: we simply don’t have access to volumes of good cyber incident data for every cybersecurity problem we would like to solve through the systems we design and deploy. Cyber is relatively new, constantly evolving, and contingent on human markets. It is ultimately a human-made challenge that requires human-made solutions.

To reach a point where AI systems are valuable beyond the simple automation of data organization, we need humans to gather data on cyber incident management and to contribute experience, expertise, and measured intuition. These inputs can tell us a lot about the world before we have formal data collected by more impartial sensors. This is especially true for critical cybersecurity issues, such as aligning our cybersecurity budgets with our risk tolerances or building good cyber governance programs that balance transparency and operations. Such judgment-derived data might be sloppy and imprecise, but that is not as large a problem as one might suspect. Not only can we think logically about how the past informs the present and near future, but we can creatively conceive of alternate futures and proactively plan for them. We can then run even rough estimates through the mathematics of uncertainty, guiding us first to reasonably accurate information and then to increasingly precise data as economic efficiency warrants.
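As a concrete illustration of running rough estimates through the mathematics of uncertainty, here is a minimal Monte Carlo sketch. The inputs are hypothetical expert judgments (a mean annual incident frequency and a 90% confidence interval on per-incident loss), and the lognormal loss convention is an assumption, not a prescribed model.

```python
# Minimal sketch: turning rough expert estimates into a loss distribution.
# Hypothetical inputs: ~3 incidents/year on average, and a 90% confidence
# interval of $50k to $2M for the loss from a single incident.
import numpy as np

rng = np.random.default_rng(42)
years = 10_000  # number of simulated years

# Convert the 90% CI on per-incident loss into lognormal parameters
# (the 5th and 95th percentiles sit at z = -1.645 and z = +1.645).
lo, hi = 50_000, 2_000_000
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * 1.645)

# Each simulated year: draw an incident count, then sum that many losses.
counts = rng.poisson(lam=3.0, size=years)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

print(f"Expected annual loss: ${annual_loss.mean():,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_loss, 95):,.0f}")
```

Even these crude inputs yield a defensible starting distribution, one that can be tightened as better data arrives, which is exactly the progression from rough accuracy to economic precision described above.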

Humans provide oversight and guidance

Decision-making represents a uniquely human exercise because it involves applying creative and critical thinking to achieve the world that we desire. That desire is an expression of human values and preferences. In the cybersecurity world, for example, this might require deep consideration of how our preferences are ultimately satisfied by the way we handle an extortion attempt when backups have failed or by deciding to delay a product launch after discovering a new vulnerability.

An essential characteristic of a good decision is that it is consistent with the decision maker’s values and preferences. The presence and expression of human preferences are essential to determine whether the systems that humans design are operating according to their intended goals and objectives.

That leads us to a bit of a paradox in the idea of AI offering oversight and guidance: we develop automated systems because they are less noisy and less prone to human error. Yet people still click links in emails that are flagged as suspicious. Why? Because human impulses are easily tricked. There probably isn’t a better example of these beneficial systems than those we use in aircraft to overcome spatial disorientation, a potentially deadly failure of perception to which all pilots are subject and which can only be overcome by trusting the aircraft’s instruments. When a properly calibrated automated guidance system says, “pay attention to this system approaching failure mode,” we shouldn’t dismiss it even when it feels counterintuitive. And yet we know that no system is ultimately fail-proof. How do we resolve this paradox?

We have to learn to gain confidence in automated systems while remaining somewhat skeptical that they are performing as desired. The way around this is to trust in the moment but recalibrate retrospectively in the near term. This iterative recalibration ensures that AI systems continue to serve and satisfy our values as a whole.
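A minimal sketch of one way to implement “trust in the moment, recalibrate retrospectively”: record each alert’s predicted probability alongside the analyst’s eventual verdict, track a rolling Brier score, and route the model back to humans when calibration drifts. The class name, window size, and threshold are illustrative assumptions, not a description of any particular product.

```python
# Sketch of retrospective recalibration for an automated alerting system:
# trust each alert in real time, then score the prediction against the
# analyst's verdict once the ticket closes. All thresholds are illustrative.
from collections import deque

class AlertRecalibrator:
    def __init__(self, window: int = 200, drift_threshold: float = 0.15):
        self.history = deque(maxlen=window)   # (predicted_prob, outcome)
        self.drift_threshold = drift_threshold

    def record(self, predicted_prob: float, was_real_incident: bool):
        self.history.append((predicted_prob, 1.0 if was_real_incident else 0.0))

    def brier_score(self) -> float:
        # Mean squared error between predicted probability and outcome;
        # 0.0 is perfect calibration, 0.25 is no better than a coin flip.
        if not self.history:
            return 0.0
        return sum((p - y) ** 2 for p, y in self.history) / len(self.history)

    def needs_human_review(self) -> bool:
        return self.brier_score() > self.drift_threshold

# Usage: act on the alert now, log the verdict later, and let humans
# step back in when the model's calibration drifts.
recal = AlertRecalibrator()
recal.record(predicted_prob=0.9, was_real_incident=True)
recal.record(predicted_prob=0.8, was_real_incident=False)
if recal.needs_human_review():
    print("Calibration drift detected: route model for human review.")
```

The design choice mirrors the separation-of-powers point below: the model acts in real time, while humans retain the retrospective veto.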

This is like the separation of powers in the administration of government. Each separated power is composed of a flawed set of actors, yet the tensions among the powers tend to cancel out biases and extreme agendas and policies. The human in the AI loop ensures that there is a “separation of powers” that provides recalibrated error correction along the way. 

The human-in-the-loop in cybersecurity and risk management is essential to contextualize and respond productively in a massively complex human ecosystem. AI without the human element is doomed to fail due to its inability to understand the need to solve human problems, a frequent lack of immediately relevant and comprehensive data, or an inability to provide oversight and guidance aligned with human values. Humans alone cannot perfectly monitor and manage cyber risk either; they need AI-powered tools that can quickly review massive amounts of data when it’s available, reliably identify potential threats, and promptly alert on action items.

To offer our Edge solution clients this balance between human engagement and security tooling, Resilience provides access to regular meetings with its (human) security experts. Our teams work closely with clients to contextualize, analyze, and support informed decision-making based on the technical analysis provided through our AI-informed models. The human-in-the-loop element is invaluable when interpreting and applying the control recommendations made by our risk modeling as clients look to implement their quantified action plans. Our humans, or experts, in the loop offer crucial clarity and context around items that cannot be explained or anticipated mathematically.

Three cheers for humans!

Keeping humans in the loop when managing AI efforts in cybersecurity is essential. If we take “the loop” to mean the development and implementation of advanced AI systems, we need humans in the loop for AI to be relevant to cybersecurity beyond simply scaling technical automation. The loop needs critical and narrative thinking, and in fact wouldn’t exist without humans in the first place. Humans provide initial data derived from expertise informed by years of managing security incidents, and calibrated intuition about the right level of risk tolerance for a specific organization.

Most importantly, the humans in the loop exist to serve human preferences and values; these are the strategic decision-making factors that lead to smart security investments that support strong businesses. This expertise provided by Resilience has been incredibly well received by our clients, and in 2023, we achieved an NPS score of 77 for our team’s invaluable efforts in helping clients understand their evolving state of cyber risk.

Not bad for serving as the “meat-ware” layer for an AI system!
