Do You Need Human Brains to Make AI Useful in Cybersecurity?

The Indispensable Role of Humans in AI-Driven Cyber Risk Management

by Rob Brown, Sr. Director of Cyber Resilience

As the world advances data processing and artificial intelligence (AI) capabilities at a mind-boggling pace, we might feel as if humans are becoming obsolete. That is certainly the question posed by the endless series of articles that have clogged our inboxes since ChatGPT's public release in late 2022.

Maybe this development is a good thing. After all, humans are prone to all kinds of biases, irrational thinking, “noisy” behaviors, and a lack of good information to support making judgments and decisions. In the cybersecurity field, these biases are worsened by “critical fires” and data overload, which make it hard to filter out noise and make sound decisions. Seasoned security professionals take years of experience to hone the judgment and leadership these situations demand. Even then, highly experienced professionals often end up operating outside their “normal range of performance” during a crisis, succumbing to fatigue and error.

But that’s the plight of humans caught in our mortal coil. Therefore, the advent of systems that can suppress the effects of costly systematic human errors in judgment should be celebrated by the security field, should they not?

We know that AI systems are designed by humans to overcome reproducible human foibles and can work quite well when automating the analysis of large sets of data. To summarize decades of research, the model of you makes better judgments and predictions than you. That’s because AI systems operate without any of the “soft computer” problems of the human brain. They don’t get hungry. They don’t get fatigued. They aren’t motivated by ideology. They just do what they’re told to do with surprising repeatability, unaffected by the often grinding nature of the cybersecurity field. 

In cybersecurity, AI and machine learning are already used in a number of cases to spot malware that hasn’t been seen before or to flag emails that are likely phishing attempts. Security operations is a field where basic automation of highly repetitive tasks led to the development of the original SIEMs and even gave rise to new fields like SOAR.
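To make that concrete, here is a deliberately simplified sketch (with hypothetical data, not any vendor's implementation) of the kind of repetitive analysis SIEM-style tooling automates: flagging hosts whose event volume deviates sharply from the fleet baseline.

```python
import statistics

def flag_anomalies(event_counts, threshold=2.5):
    """Flag hosts whose daily event count sits more than `threshold`
    sample standard deviations above the fleet-wide mean."""
    values = list(event_counts.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return {
        host: count
        for host, count in event_counts.items()
        if stdev > 0 and (count - mean) / stdev > threshold
    }

# Hypothetical daily event counts per host; "jump-01" is the outlier.
counts = {
    "web-01": 118, "web-02": 120, "web-03": 122, "web-04": 119,
    "web-05": 121, "app-01": 117, "app-02": 123, "db-01": 120,
    "db-02": 116, "jump-01": 905,
}
print(flag_anomalies(counts))  # the jump host stands out
```

A real SIEM obviously does far more, but the shape is the same: a machine tirelessly applying a consistent rule across data volumes no analyst could review by hand.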

But of course, these advancements don’t easily translate to more executive-level responsibilities in security, nor do they imply that AI systems are infallible. 

We understand that unintended consequences in complex systems are often a result of those systems encountering conditions their designers never considered. When this happens, we turn back to the humans who designed the systems—humans who are capable of complex and critical thought—and ask for design changes. We don’t ask the failed system to repair itself; we ask humans to intervene.

This gives us good reason to be optimistic about humans’ purpose, or at least for our gainful employment, in the increasingly cyber-connected world. Humans are still necessary contributors to the AI systems humans design. 

Humans solve human problems

One of the first areas of irreplaceable human contribution is defining the problems AI systems are built to solve. A problem to be solved represents a mismatch between the state of the world and human preferences for the world. In the security space, we are still uncovering which problems actually suit AI use cases, but most revolve around the fact that security generates large volumes of cyber event data. Google Cloud has integrated an AI Workbench into its Chronicle product for security operations professionals. IBM has integrated a version of its Watson system into the QRadar Security Intelligence platform to help with risk analysis and incident response. And Darktrace has been working on its Enterprise Immune System for years to detect and respond to anomalous network activity.

While these new systems represent an evolution in the role of technology within security event data analysis, they are still largely used for automation, not risk analysis. They are tools humans use to solve the problem of having too much data to look at. 

While we will obviously employ advanced AI in our search for even more technical solutions to analyzing large data sets, we don’t want to presume that advanced AI is capable of “knowing” our human or business preferences, or allow it to independently take action on our risk by proxy. If AI systems become advanced enough to operate with their own sense of agency, they will solve their problems, not ours. Humans are needed in the loop to solve human problems.

Humans are a necessary database

Humans are often the initial source of data and information that feeds AI systems. All of the systems we design that ultimately function well do so because they have access to volumes of good data. But herein lies a problem: we simply don’t have access to volumes of good cyber incident data for all cybersecurity problems to be solved through the systems we would like to design and deploy. Cyber is relatively new, constantly evolving, and contingent on human markets. It is ultimately a human-made challenge that requires human-made solutions. 

To reach a point where AI systems are valuable beyond simply automation of data organization, we need humans to gather data on cyber incident management, human experience, expertise, and measured intuition. These facets can tell us a lot about the world before we have formal data collected by more impartial sensors. This is especially true in the case of critical cybersecurity issues, such as aligning our cybersecurity budgets with our risk tolerances or building good cyber governance programs that balance transparency and operations. It might be sloppy and imprecise, but that’s not as large a problem as one might suspect.  Not only can we think logically about how the past informs the present and near future, but we can creatively conceive of alternate futures to proactively plan for them. We can then apply even rough estimates through the mathematics of uncertainty to guide us first to reasonably accurate information and then to increasingly precise data as indicated by economic efficiency. 
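As a minimal sketch of what applying rough estimates through the mathematics of uncertainty can look like (all figures here are hypothetical, and the lognormal assumption is ours for illustration), an expert's 90% confidence interval for incident cost can feed a simple Monte Carlo estimate of expected annual loss:

```python
import math
import random
import statistics

def lognormal_from_ci(low, high):
    """Convert a 90% confidence interval into lognormal mu/sigma."""
    z90 = 1.645  # z-score bounding a two-sided 90% interval
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z90)
    return mu, sigma

def simulate_annual_loss(p_incident, low, high, trials=100_000, seed=42):
    """Rough Monte Carlo estimate of expected annual loss from one threat."""
    rng = random.Random(seed)
    mu, sigma = lognormal_from_ci(low, high)
    losses = [
        rng.lognormvariate(mu, sigma) if rng.random() < p_incident else 0.0
        for _ in range(trials)
    ]
    return statistics.mean(losses)

# Hypothetical: 10% annual chance of an incident costing $50k-$2M (90% CI)
print(f"${simulate_annual_loss(0.10, 50_000, 2_000_000):,.0f}")
```

The point is not precision; it is that even a calibrated gut feel, expressed as a range, yields a defensible number to compare against a security budget, and one that can be sharpened as better data arrives.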

Humans provide oversight and guidance

Decision-making represents a uniquely human exercise because it involves applying creative and critical thinking to achieve the world that we desire. That desire is an expression of human values and preferences. In the cybersecurity world, for example, this might require deep consideration of how our preferences are ultimately satisfied by the way we handle an extortion attempt when backups have failed or by deciding to delay a product launch after discovering a new vulnerability.

An essential characteristic of a good decision is consistency with the decision maker’s values and preferences. The presence and expression of human preferences are essential to determine whether the systems that humans design are operating according to their intended goals and objectives.

That leads us to a bit of a paradox in this idea of AI offering oversight and guidance: we develop automated systems because they are less noisy and less prone to human error. Yet people still click links in emails that are flagged as suspicious. Why? Because human impulses are easily tricked. There probably isn’t a better example of these beneficial systems than those we use in aircraft to overcome spatial disorientation, a potentially deadly failure of perception and impulse to which all pilots are subject and which can only be overcome by trusting the aircraft instruments. When a properly calibrated automated guidance system says, “pay attention to this system approaching failure mode,” we shouldn’t dismiss it even when it feels counterintuitive. And yet we know that no system is ultimately fail-proof. How do we resolve this paradox?

We have to learn to gain confidence in automated systems while remaining somewhat skeptical that they are performing as desired. The way around this is to trust in the moment but recalibrate retrospectively in the near term. This iterative recalibration ensures that AI systems continue to serve or satisfy our values as a whole.
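One way to make that retrospective recalibration concrete (a minimal sketch, with a hypothetical alert log) is to score the system's past confidence against what actually happened, for example with a Brier score:

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.
    0.0 is a perfect record; 0.25 matches always guessing 50/50."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical log: (system's predicted probability that an alert was a
# true incident, what actually happened: 1 = incident, 0 = benign)
alert_log = [(0.9, 1), (0.8, 1), (0.7, 0), (0.2, 0), (0.1, 0), (0.95, 1)]
print(round(brier_score(alert_log), 3))
```

A score drifting toward 0.25 suggests the system's confidence has become no better than a coin flip, which is exactly the signal for the human in the loop to step in and retune it.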

This is like the separation of powers in the administration of government. Each separated power is composed of a flawed set of actors, yet the tensions among the powers tend to cancel out biases and extreme agendas and policies. The human in the AI loop ensures that there is a “separation of powers” that provides recalibrated error correction along the way. 

The human-in-the-loop in cybersecurity and risk management is essential to contextualize and respond productively in a massively complex human ecosystem. AI without the human element is doomed to fail due to its inability to understand the need to solve human problems, a frequent lack of immediately relevant and comprehensive data, or an inability to provide oversight and guidance toward human values. Humans alone cannot perfectly monitor and manage cyber risk either; they need AI-powered tools that can quickly review mass amounts of data when it’s available, reliably identify potential threats, and promptly alert on action items.

To offer our Edge solution clients this balance between human engagement and security tooling, Resilience provides access to regular meetings with its (human) security experts. Our teams work closely with clients to contextualize, analyze, and support informed decision-making based on the technical analysis provided through our AI-informed models. The human-in-the-loop element is invaluable when interpreting and applying the control recommendations made by our risk modeling as clients look to implement their quantified action plans. Our humans, or experts, in the loop offer crucial clarity and context around items that cannot be explained or anticipated mathematically.

Three cheers for humans!

Keeping humans in the loop when managing AI efforts in cybersecurity is essential. If we take “the loop” to mean the development and implementation of advanced AI systems, we need humans in the loop for AI to be relevant to cybersecurity beyond simply scaling technical automation. The loop needs critical and narrative thinking, and in fact wouldn’t exist without humans in the first place. Humans provide initial data derived from expertise informed by years of managing security incidents, along with calibrated intuition about the right level of risk tolerance for a specific organization.

Most importantly, the humans in the loop exist to serve human preferences and values; these are the strategic decision-making factors that lead to smart security investments that support strong businesses. This expertise provided by Resilience has been incredibly well received by our clients, and in 2023, we achieved a 77 NPS score for our team’s invaluable efforts in helping clients understand their evolving state of cyber risk.  

Not bad for serving as the “meat-ware” layer for an AI system!
