I outlined what we mean when discussing AI in cybersecurity in my previous article. The key takeaway: Currently, AI technology has crucial but limited applications within cybersecurity. This article will illustrate some of those limitations and the critical application areas in cyberdefense.
AI’s primary task (and primary limitation)
In the past, cyberdefense was all about analyzing system logs and alerts from IPS/AV products. Today, advanced attacks produce far more data to analyze, including network traffic, endpoint internals, application and transaction data, user behavior, cloud data, alerts from security products, threat intel, social media data, dark web data and more.
Modern cyberdefense requires analyzing a high quantity of data quickly. Today, a single computer can do more mathematical calculations per second than the entire human population combined.
And artificial intelligence (AI) technology is far superior to human intelligence at applying complex math on the scale necessary to detect threats. However, human intelligence is more than calculations and mathematical operations, and cybersecurity requires more than data analysis.
Where humans still outperform AI
What human intelligence lacks in calculation prowess, it makes up for in other ways. Our brains are complex biological computers that can perform specific cognitive tasks that even the fastest modern supercomputer can’t simulate. We can reason, hypothesize, explore, deduce and predict despite ambiguity and insufficient data.
Every cybersecurity expert will tell you that the ability to make reasoned, intuitive decisions despite ambiguity is critical to detect and respond to threats. In cybersecurity, when you try to evaluate risk, make a judgment on an alert, or determine an appropriate response, you need these aspects of human intelligence. Current AI technologies, including AI in cybersecurity, haven’t yet evolved to replicate these human intelligence capabilities.
AI's most significant benefit for cybersecurity lies in performing mathematical calculations at speed and scale to augment human intelligence.
AI augmentation scenarios
AI-based threat detection is highly efficient at discovering potential threats. AI can present potential threats to human analysts, answer questions, prove or disprove a human analyst’s hypothesis, and execute tasks that human analysts have approved. AI might not decide if an alert is an actual attack because that requires a human’s broad cognition skills. But AI can hasten the detection of an attack by augmenting an analyst’s ability to make that call.
More specific scenarios of augmentation include:
- Triaging: All rule-based detection systems suffer from false positives. This is not a problem of poorly designed or engineered products but of the inherent logic of cybersecurity. Attacks are few and far between, yet the penalty for a false negative in our domain is heavy: if a product fails to detect an attack, the consequences are severe, so every security product tries to minimize false negatives by alerting on every potential attack. This deluge of (mostly) false alerts overwhelms human analysts, who aren't naturally suited to large-scale data analysis. In response, security operations center (SOC) analysts create rules to triage these alerts, then analyze a filtered set. This approach has its own problem: given the nature of today's advanced threats, an innocuous-looking simple network management protocol (SNMP) alert could be the actual attack, while a rule-based alert about an SQL injection may be false.
Here, AI techniques can be used to augment human analysts. Using machine learning methods like historical pattern analysis, clustering, association rules and data visualization, we can quickly filter the most relevant alerts and present only these triaged and enriched alerts for human analysts to investigate further.
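As a minimal sketch of the triage idea, the toy script below ranks alert signatures by rarity within a historical window, on the assumption that frequent, routine signatures are more likely to be noise. The alert data and signature names are invented for illustration; a production system would use richer features and proper clustering rather than simple frequency counts.

```python
from collections import Counter

# Hypothetical alert stream: (signature, source_ip) pairs.
alerts = [
    ("SQLi attempt", "10.0.0.5"),
    ("SQLi attempt", "10.0.0.5"),
    ("SQLi attempt", "10.0.0.5"),
    ("SNMP anomaly", "10.0.0.9"),
    ("Port scan", "10.0.0.7"),
    ("SQLi attempt", "10.0.0.5"),
]

def triage(alerts, top_n=2):
    """Rank alert signatures by rarity: signatures seen less often in
    the historical window score higher and are surfaced for review."""
    counts = Counter(sig for sig, _ in alerts)
    scored = sorted(counts, key=lambda sig: counts[sig])  # rarest first
    return scored[:top_n]

print(triage(alerts))
```

The point is not the scoring rule itself but the division of labor: the machine compresses thousands of raw alerts into a short, enriched list, and the human analyst makes the judgment call on each surviving alert.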
- Threat hunting: Another inherent problem within cybersecurity is its asymmetry. A cyberattacker needs to succeed only once by exploiting just one weakness, while defenders must succeed every time. To do so, we need to constantly comb all data in our environment (not just security data) for threats. This process is called threat hunting, and AI is adept at looking for patterns, anomalies and outliers, which are then sent to human analysts for investigation.
Several products already use AI techniques for these analyses. Security information and event management (SIEM) is evolving beyond log analysis and correlation to analyze network data (NetFlow, proxy, DNS, packets) using machine learning. User behavior analytics products apply machine learning to user data, and endpoint detection and response (EDR) products do the same to detect advanced malware in endpoint data. Newer analytics categories apply machine learning to data from runtime application self-protection (RASP) agents to detect application attacks and fraud. All these analytical products identify and present anomalous behavior or recurrent patterns found in the data. Today, threat hunters in advanced SOCs use AI to investigate these outputs (again, AI does not replace people but augments hunters in detecting threats).
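To make the hunting pattern concrete, here is a deliberately simple outlier check over hypothetical per-host traffic volumes. The host names, byte counts and the z-score threshold are all assumptions for illustration; real products use far more sophisticated models, but the shape is the same: the machine flags candidates, the human investigates.

```python
from statistics import mean, stdev

# Hypothetical outbound-bytes-per-host sample, e.g. from NetFlow data.
traffic = {
    "host-a": 1_200, "host-b": 1_350, "host-c": 1_100,
    "host-d": 1_280, "host-e": 9_800,  # possible exfiltration?
}

def hunt_outliers(samples, threshold=1.5):
    """Flag hosts whose volume deviates more than `threshold` standard
    deviations from the mean. These are candidates for a human threat
    hunter to investigate, not verdicts."""
    values = list(samples.values())
    mu, sigma = mean(values), stdev(values)
    return [h for h, v in samples.items() if abs(v - mu) / sigma > threshold]

print(hunt_outliers(traffic))
```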
- Incident analysis or investigation: Humans use reasoning skills to decipher a complete attack chain by investigating a triaged alert or hunting output. To investigate, you must constantly ask new questions, iteratively form new hypotheses, and collect more evidence to confirm or reject those hypotheses, a capability AI lacks today. Investigating a cyber alert or incident and forming the attack story requires strong reasoning as well as large-scale data collating and mining capabilities. This is the classic exploitation-versus-exploration challenge: machines are great at data exploitation, but humans are needed for exploration. In this activity, AI models primarily answer:
• What happened to the asset? (impact)
• Who are the attackers? (attribution)
• What were the sequences in the attack chain?
• What is the blast radius? (what other assets are part of the attack)
• Who is “patient zero”? (where did the attack originate)

To answer these questions, AI must mine external threat data, including associated files, IOCs, attacker information and similar breaches, then correlate it with internal data such as past alerts, network data, asset information and security logs to find clusters, associations and patterns. This helps the investigator recreate the event's blast radius, trace the attack progression and determine patient zero.
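The correlation step can be sketched as a join between an external IOC feed and internal connection logs. Everything below (the IP addresses, host names and log format) is invented for illustration; the output is a candidate for the investigator to confirm, not an automated conclusion.

```python
# Hypothetical external IOC feed: known-bad remote IPs.
iocs = {"203.0.113.66", "198.51.100.12"}

# Hypothetical internal log: (timestamp, internal_host, remote_ip).
connection_log = [
    ("2023-05-01T09:02", "ws-17", "203.0.113.66"),
    ("2023-05-01T09:40", "ws-17", "10.0.0.8"),
    ("2023-05-01T10:15", "srv-db", "198.51.100.12"),
]

def correlate(log, iocs):
    """Return internal hosts that contacted known-bad infrastructure,
    ordered by first contact. The earliest hit is a patient-zero
    candidate; every hit contributes to the blast radius."""
    hits = [(ts, host) for ts, host, ip in log if ip in iocs]
    return sorted(hits)

hits = correlate(connection_log, iocs)
print("patient-zero candidate:", hits[0][1])
```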
- Threat anticipation: AI can also augment human capabilities during threat anticipation, which lets you anticipate future threats by learning from breaches occurring at other companies. The key is quickly extracting the relevant threat intel and applying it in your environment. We have already automated the collection of machine-readable threat intel data, but we can increase accuracy and fidelity by using AI to apply this data to each organization's unique context. When it comes to mining human-readable threat data (blogs, forums, social media, dark web sources), AI-based text analytics and natural language processing can help identify the most relevant data for a human threat analyst. AI can automatically group and categorize this unstructured data by topic and semantics, helping threat analysts apply the relevant actions.
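A toy version of categorizing human-readable threat reports might look like the keyword matcher below. The topic lexicon and sample report are assumptions for illustration; a production pipeline would use trained NLP models rather than literal keyword lookups, but the goal is the same: route unstructured text to the right analyst.

```python
# Hypothetical topic lexicon mapping topics to trigger words.
TOPICS = {
    "ransomware": {"ransom", "encrypted", "lockbit"},
    "phishing": {"credential", "lure", "spoofed"},
}

def categorize(text):
    """Assign a raw threat report to every topic whose keywords appear."""
    words = set(text.lower().split())
    return [topic for topic, kws in TOPICS.items() if words & kws]

report = "Spoofed invoice lure used to harvest credential data"
print(categorize(report))
```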
- Incident response: Once an alert has been confirmed as an incident, AI can assist throughout the four steps of an effective response:
1. Containing the spread
2. Recovering affected systems
3. Mitigating the root causes of the attack
4. Improving your security posture for the future

At each stage, incident responders need to know what to do and how to automate that step. AI techniques like knowledge engineering and case-based reasoning can be used to create playbooks that guide incident responders. These playbooks are built by machines based on previous incidents and include codified knowledge from human experts. AI thus learns with each new incident and continuously modifies or creates branches of the main playbook, which incident responders can combine with their knowledge of organizational context to ensure the proper response.
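The case-based reasoning idea can be sketched in a few lines: retrieve the past incident that most resembles the new one and reuse its playbook as a starting point. The case base, tags and playbook steps here are all hypothetical; real systems weigh many more features and let the responder adapt the suggested steps.

```python
# Hypothetical case base: past incident tag sets mapped to playbooks.
CASE_BASE = [
    ({"ransomware", "endpoint"}, ["isolate host", "restore from backup"]),
    ({"phishing", "credential"}, ["reset passwords", "revoke sessions"]),
]

def suggest_playbook(tags):
    """Case-based reasoning sketch: pick the past incident whose tag set
    overlaps most with the new incident and reuse its playbook."""
    best = max(CASE_BASE, key=lambda case: len(case[0] & tags))
    return best[1]

print(suggest_playbook({"phishing", "credential", "vpn"}))
```

After each real incident, a new case (tags plus the steps that actually worked) would be appended to the case base, which is how the playbook "learns" over time.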
AI in cybersecurity: Necessary, but not enough on its own
The scenarios above demonstrate AI's strengths and weaknesses when it comes to cybersecurity. In the future, AI may develop the ability to replace human cybersecurity experts, reduce the burden of the cyberskills shortage, and ultimately simplify cybersecurity. In the short and medium term, however, AI can only augment human capabilities, not replace people. Given the expansion of data, users, networks and IT systems in every organization, the future holds more threats and alerts that require human analysts. The good news is that, augmented by AI, they can more effectively investigate, hunt and respond to these threats.
That’s our vision at Eviden, and that’s why we fuse AI-based security with human intelligence to deliver our Managed Detection and Response (MDR) service.
Contact us to learn more about AI use cases in threat management.