The Human Edge: Why Analysts Remain Irreplaceable in an AI-Driven SOC
Beyond automation and pattern matching, human intuition, creativity, and context remain the decisive factors in modern cybersecurity operations.
In Part 1 of this series, we examined how artificial intelligence is being used in today’s Security Operations Centers (SOCs) and where its current limits lie. We saw AI co-pilots summarizing alerts, autonomous agents triaging Tier-1 noise, and machine learning models reducing false positives. The benefits are real – faster processing of data, 24/7 consistency, and relief from many repetitive tasks. Yet we also noted that AI is no silver bullet. In fact, the vision of a fully autonomous SOC remains unrealistic, as recent Gartner research affirmed. Over-reliance on automation can even erode the core investigative skills of a team over time[1]. In this Part 2, we take a look at the irreplaceable role of human analysts. It’s about what AI gets wrong – and what analysts still do best.

AI can automate and augment, but it cannot replace the human intuition and context at the heart of real security analysis.
The Intuition and Context AI Cannot Grasp
One of the greatest strengths seasoned analysts bring is intuition – that gut feeling when something just doesn’t “smell right.” This isn’t magic; it’s the product of experience and contextual awareness. Security is all about context, and while AI is excellent at crunching data, it can’t truly understand your business’s unique environment or make nuanced judgment calls like an experienced human. An algorithm works on logic and learned patterns; it has no instinct for when an alert “just doesn’t fit” the way a human mind does.
Consider how many alerts in a SOC require understanding the business context. Is a spike in database access at 2 AM suspicious? An AI looking purely at patterns might flag it as anomalous, but a human analyst who knows the database team is working late tonight for a release would recognize it as normal. Conversely, AI might ignore a series of low-level events that individually seem benign, whereas an analyst familiar with the systems might sense a pattern – perhaps a user accessing files they never touched before, at odd hours, on systems they typically don’t use. That subtle pattern might evade a rules-based detection but not the notice of a curious human.
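To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical field names and deliberately simplified logic) of the difference between a purely statistical "after hours means anomalous" rule and one that consults a change calendar the way an analyst would before raising the alarm:

```python
from datetime import datetime, time

# Hypothetical change-calendar entries. In practice this context lives in ticketing
# systems, chat threads, or people's heads - rarely in the detection pipeline itself.
SCHEDULED_WORK = [
    {"system": "finance-db-01", "team": "database",
     "start": "2024-05-02T22:00", "end": "2024-05-03T04:00"},
]

def is_after_hours(event_time: datetime) -> bool:
    """Naive pattern check: anything between 22:00 and 06:00 looks anomalous."""
    return event_time.time() >= time(22, 0) or event_time.time() < time(6, 0)

def covered_by_scheduled_work(system: str, event_time: datetime) -> bool:
    """Business-context check the pure pattern-matcher never sees."""
    for window in SCHEDULED_WORK:
        start = datetime.fromisoformat(window["start"])
        end = datetime.fromisoformat(window["end"])
        if window["system"] == system and start <= event_time <= end:
            return True
    return False

def triage(system: str, event_time: datetime) -> str:
    if not is_after_hours(event_time):
        return "normal"
    # The statistical view says "anomalous"; the business context may say otherwise.
    if covered_by_scheduled_work(system, event_time):
        return "expected (planned release work)"
    return "anomalous - worth a human look"

print(triage("finance-db-01", datetime(2024, 5, 3, 2, 0)))  # expected (planned release work)
print(triage("hr-db-02", datetime(2024, 5, 3, 2, 0)))       # anomalous - worth a human look
```

The point isn't the code; it's that the second check depends on context someone has to capture and keep current, and most of that context never makes it into a system at all.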
AI doesn't have the real-world understanding that humans do. An AI assistant will confidently respond to almost any question, but its answers can be incorrect, off-topic, or missing important details. You can't expect a beginner with a chatbot to perform like an experienced professional; the gap in intuition still matters. A good SOC understands this. The aim is not to replace human analysts with machines, but to relieve them of routine tasks so they can apply their intuition and expertise to the harder problems.
It’s dangerously easy to place too much confidence in AI. When a system says “no threat detected,” less experienced teams may take that at face value — assuming the machine must know best. But this is misplaced trust, not informed judgment.
The UK Government’s 2023 guidance from the Defence Science and Technology Laboratory (DSTL) [2] — Human-centred ways of working with AI in intelligence analysis — puts it plainly: AI should support analysis, not replace it. The most effective teams calibrate their trust in AI by weighing machine-generated output against human experience, environmental knowledge, and operational context.
A seasoned analyst doesn’t just ask “What does the model say?” — they ask “Does this make sense given what I know about our environment?” Without that human lens, AI output risks being misread or misapplied. Machines can’t yet account for subtle, undocumented details — like knowing a server was decommissioned last week, that a team changed shift patterns, or that a user is new to a role and still onboarding. These are insights AI can’t access unless someone explicitly tells it — and even then, it may not fully grasp the implications.
As the DSTL guidance emphasises, humans are essential for interpreting incomplete, ambiguous, or uncertain information. AI can spot patterns and surface data, but it can’t understand intent, context, or nuance. That’s why the analyst isn’t just a fallback — they’re the final arbiter of meaning. And in high-stakes environments like the SOC, human judgment is the difference between confidence and complacency.
Example: The Case of the Curious Admin (Human Context in Action)
To illustrate, imagine a scenario that plays out in an organization's SOC. A security AI platform was monitoring user activity and found nothing overtly malicious about an IT administrator's behavior – after all, admins access critical systems all the time as part of their job. But one senior analyst's intuition was triggered: the admin in question had been logging in at unusual hours and accessing finance databases unrelated to their usual duties. These actions were subtle enough to generate only a low-level alert, one the AI didn't flag as worth investigating. Yet the analyst knew this was out of character for that role. Instead of dismissing it, they decided to dig deeper.
The analyst reached out to the admin’s manager and learned the employee had just put in their notice to quit. This contextual clue was gold. Armed with that knowledge, the SOC team launched a discreet investigation. They correlated logs across domains – VPN access records, database queries, file transfer logs – and uncovered that over the past month the admin had quietly downloaded sensitive financial reports and customer data. This was an insider data exfiltration in progress. AI alone would have missed it because the activity fell within normal usage parameters for an admin account. It took a human who could connect an odd technical signal with a real-world situation (an employee preparing to leave) to expose the threat. The incident was stopped, the data secured, and it was the analyst’s intuition and context awareness – not any advanced algorithm – that saved the day.
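For illustration only, here is a rough sketch of the kind of cross-source correlation the team did by hand: joining VPN, database, and file-transfer events per user and keeping off-hours activity against assets outside that user's normal set. All field names, baselines, and thresholds below are hypothetical.

```python
from collections import defaultdict

# Hypothetical, simplified log records; real ones would be pulled from VPN,
# database audit, and file-transfer sources in a SIEM or data lake.
events = [
    {"source": "vpn",  "user": "admin_j", "hour": 2, "asset": "vpn-gw"},
    {"source": "db",   "user": "admin_j", "hour": 2, "asset": "finance-db-01"},
    {"source": "file", "user": "admin_j", "hour": 3, "asset": "finance-reports-share"},
]

# Baseline of assets each user normally touches (built from historical data in practice).
usual_assets = {"admin_j": {"it-jump-host", "patch-server"}}

def off_hours(hour: int) -> bool:
    return hour >= 22 or hour < 6

def correlate(events, usual_assets):
    """Group events per user, keeping only off-hours activity against unusual assets."""
    findings = defaultdict(list)
    for e in events:
        unusual = e["asset"] not in usual_assets.get(e["user"], set())
        if off_hours(e["hour"]) and unusual:
            findings[e["user"]].append(e)
    # Weak signals from multiple independent sources for the same user are worth escalating.
    return {user: evts for user, evts in findings.items()
            if len({e["source"] for e in evts}) >= 2}

for user, evts in correlate(events, usual_assets).items():
    print(f"Escalate to a human: {user} has off-hours activity across "
          f"{sorted({e['source'] for e in evts})}")
```

Even a script like this only surfaces a candidate; it took a phone call to a manager, and the knowledge that the employee had resigned, to turn the pattern into an incident.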
Thinking Outside the Algorithm: Creative Reasoning vs. Pattern Matching
If intuition and context are where humans shine in day-to-day triage, creative reasoning is what sets them apart in uncharted scenarios. AI, especially today's machine learning and large language model tools, only knows what it has been trained on or programmed to recognize. That doesn't mean it can't detect new threats of a similar nature, but there are limits. Give it a novel situation that falls outside its training data, and it's likely to falter. Adversaries know this. Smart attackers actively probe our defenses and find ways to hide in the gaps – one fact we can expect to remain constant. They develop new exploits that don't match yesterday's indicators, or they abuse systems in ways our tools don't expect. They merge the digital and physical landscape to their advantage. When that happens, the first ones to connect the dots are almost always humans – threat hunters, investigators, incident responders with a creative spark.
History has shown time and again that purely automated detection can be outmaneuvered. A striking example is the rise of fileless malware and living-off-the-land techniques. Early antivirus and endpoint AI systems focused heavily on catching malicious files written to disk. Attackers responded by doing the malicious work entirely in memory or by hijacking legitimate admin tools, leaving few of the footprints those AI models were looking for. An unimaginative algorithm will happily report “no malware found” because it checked the disk and everything there looked fine. But a skilled human hunter, noticing odd process behavior or network traffic, might recognize the absence of expected data (no new files, despite suspicious behavior) as a clue in itself. They might say, “What if the attack is fileless? Let’s check memory and running processes.” That kind of hypothesis-driven thinking is a human forte. As Augusto Barros noted, there’s currently no AI system capable of truly analyzing brand-new attack behaviors and devising a detection approach on the fly – that’s still a human’s job, at least until some sci-fi-level AI (AGI) comes along[3].
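If a hunter wanted to turn that "what if it's fileless?" hypothesis into a quick scripted check over process telemetry, it might look something like the sketch below. The field names, tool list, and indicators are illustrative assumptions, not any product's detection logic.

```python
# A hunter's hypothesis turned into a quick filter over process telemetry
# (hypothetical, simplified records; real data would come from EDR or process logs).
LOLBINS = {"powershell.exe", "wmic.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe"}
ODD_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

processes = [
    {"name": "powershell.exe", "parent": "winword.exe",
     "cmdline": "powershell -enc SQBFAFgA...", "wrote_files": False},
    {"name": "notepad.exe", "parent": "explorer.exe",
     "cmdline": "notepad.exe notes.txt", "wrote_files": True},
]

def suspicious(p) -> list:
    """Collect reasons a process fits the fileless / living-off-the-land hypothesis."""
    reasons = []
    if p["name"] in LOLBINS and p["parent"] in ODD_PARENTS:
        reasons.append(f"{p['name']} spawned by {p['parent']}")
    if "-enc" in p["cmdline"].lower():
        reasons.append("encoded PowerShell command line")
    if p["name"] in LOLBINS and not p["wrote_files"]:
        reasons.append("admin tool active but nothing written to disk")
    return reasons

for p in processes:
    hits = suspicious(p)
    if hits:
        print(f"Hunt lead: {p['name']} ({'; '.join(hits)})")
```

The script is trivial; the hard part was forming the hypothesis in the first place, which is exactly the step the algorithm never takes on its own.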
Another area where human creativity prevails is in correlating disparate clues that an algorithm might not link. A security AI works well when it’s told what patterns to search for (or has learned patterns from training data). But what about patterns that are extremely subtle, or span across different data sources in a way the AI isn’t set up to handle? For instance, an attacker might trigger a low-level event on an endpoint here, a strange network connection there, and maybe a social engineering email in the mix. Each component might look innocuous in isolation. It often takes a human analyst to synthesize these into a bigger picture – essentially saying, “None of these pieces alone is screaming ‘incident’, but taken together, something is definitely off.” In one APT intrusion case, a company’s SOC only unraveled the attack because a Tier-2 analyst took a step back and combined data from the firewall, Active Directory logs, and an employee tip about a suspicious email. The AI tools in place did raise a few low-severity alerts in each area, but only the human could fuse those weak signals into a strong case, because doing so required imagination and broad perspective rather than following a predefined playbook.
Human threat hunters thrive on asking “what if?” questions. What if the attacker is using a method we haven’t seen before? What if this anomaly is actually the tip of the iceberg? AI doesn’t ask “what if” – it just looks for what it was told to look for. This ability to step outside the established pattern and pursue an unorthodox hunch is where many major discoveries happen. It might lead to uncovering a new attacker tactic or closing a visibility gap. It’s the kind of creative problem-solving that makes the difference between catching a stealthy adversary or remaining blissfully unaware.
The Human Element: Empathy, Communication, and Collaboration
Security operations don’t happen in a vacuum. They are inherently human endeavors, because at the end of the day we’re protecting people and organizations, and often dealing with human-generated activity. This is an area where machines truly can’t hold a candle to human analysts: understanding the human element behind technical data, and working together to respond effectively.
Think about investigations that require interaction with users or other departments. If a SOC analyst suspects a user account is compromised, a typical step is to contact that user or their manager: “Hey, did you actually perform this password reset? We saw some odd behavior on your account.” The answer – the tone of the user’s response, the context they provide (“Oh, I was traveling and had trouble logging in, yes that was me” vs. “What? No, I wasn’t even online then!”) – can completely change the incident narrative. AI cannot conduct an interview or feel the hesitation in someone’s voice. It can’t pick up on the unspoken cues or adjust its questions on the fly to get more clarity. Human analysts do this naturally, and it often unlocks the real story behind an alert.
AI still struggles with intent. It may flag a late-night file transfer as exfiltration, missing that the user was just celebrating a team milestone. It may panic over unusual access patterns, unaware the employee is under stress or dealing with personal issues. Humans bring empathy and context. A good analyst might pick up the phone, ask a simple question, and learn it’s nothing — avoiding wasted time, unnecessary escalation, and mistrust. This judgment, rooted in human understanding, not only reduces false positives but builds credibility across the organization.
And when things get messy, people don’t just act, they learn. SOC teams debrief, share perspectives, and level each other up. “I’ve seen this before.” “Check that proxy log.” These conversations — the quick huddles, the hard-earned lessons — are how teams grow. AI can surface data, but it can’t mentor, teach, or spark breakthroughs in a war room. That’s what teams do. And that’s why analysts will always be at the heart of security operations.
Example: Chasing Ghosts Across the Network (Collaboration in Practice)
Picture this.
You’re part of a modern security operations team. You’ve got AI-driven detection tools, telemetry from every endpoint, and correlation rules humming in the background. The dashboards look clean. No priority alerts. Just a few low-severity logs: admin accounts created and deleted within seconds, minor config changes on some legacy servers, and odd access timings that don’t quite trigger anything known.
The system doesn’t flag it. No obvious malware, no failed logins, no traffic to known bad domains. But a Tier-2 analyst gets that itch — something doesn’t feel right. They message someone in identity management: “Hey, were there any planned changes on these boxes?” The answer? No. And that’s when another teammate — a threat hunter — chimes in: “I’ve been tracking unusual connections to that VLAN. It’s not a range we normally see.”
Suddenly, the picture starts forming. An attacker has gained access to a domain admin account. Quietly, they’re planting backdoors, making slight changes to stay persistent, and operating just below detection thresholds. They’re not noisy. They’re patient.
And they’re almost invisible — unless someone connects the dots.
That’s the kind of threat many SOCs face today. Not malware that screams, but behavior that whispers. AI has the logs. But without human curiosity, communication, and context, no one sees the full story. The system says “low risk.” The team says, “Something’s off.”
It’s the collaboration — between Tier-2, threat hunting, and identity — that cracks it. AI doesn’t ask another analyst for gut checks. It doesn’t cross-reference with a colleague over Slack. It doesn’t imagine “what if this is a new form of persistence?”
That’s the point. The modern SOC isn’t just about coverage — it’s about connection. And it’s often the quiet, human moment — the ping, the gut feel, the shared hunch — that makes all the difference.
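One of the whispers in that story – admin accounts created and deleted within seconds – is exactly the kind of hunch a hunter might later encode as a scripted check. Here is a minimal sketch, using hypothetical audit-event fields rather than any specific product's schema:

```python
from datetime import datetime

# Hypothetical directory audit events; in practice these would come from
# account-created / account-deleted entries in directory or IAM audit logs.
events = [
    {"action": "created", "account": "svc_tmp01", "privileged": True,  "time": "2024-05-03T02:14:05"},
    {"action": "deleted", "account": "svc_tmp01", "privileged": True,  "time": "2024-05-03T02:14:19"},
    {"action": "created", "account": "jdoe",      "privileged": False, "time": "2024-05-03T09:00:00"},
]

MAX_LIFETIME_SECONDS = 300  # privileged accounts that live under 5 minutes are worth a look

def short_lived_admin_accounts(events):
    created = {}
    findings = []
    for e in sorted(events, key=lambda e: e["time"]):
        ts = datetime.fromisoformat(e["time"])
        if e["action"] == "created":
            created[e["account"]] = (ts, e["privileged"])
        elif e["action"] == "deleted" and e["account"] in created:
            created_at, privileged = created.pop(e["account"])
            lifetime = (ts - created_at).total_seconds()
            if privileged and lifetime <= MAX_LIFETIME_SECONDS:
                findings.append((e["account"], lifetime))
    return findings

for account, lifetime in short_lived_admin_accounts(events):
    print(f"Hunt lead: privileged account {account} existed for only {lifetime:.0f} seconds")
```

Checks like this usually exist only after someone noticed the pattern once, asked around, and decided it was worth codifying – the human loop comes first, the automation second.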
Beyond Tier-1: Keeping Humans at the Center of Security Operations
The examples and arguments above all point to a clear conclusion: the human mind remains central to real analysis and investigation. Yes, Tier-1 tasks like initial triage and routine correlation can be heavily assisted (or even handled) by AI. We welcome that – it lets analysts focus on higher-value work. But the work beyond Tier 1 – the complex investigations, the threat hunting, the nuanced decision-making in response – is where human analysts truly shine and are absolutely essential. This is the work that defines successful detection and response in modern security operations.
When an alert graduates from a simple check-the-box response to a deeper inquiry (“Is this an attack? How do we stop it? What else did they do?”), it enters the realm of analysis that benefits from human strengths: curiosity, skepticism, contextual understanding, creative problem solving, ethical judgment. These strengths align with the values and instincts of SOC analysts, threat hunters, and CTI (cyber threat intel) teams who care about doing the right thing – protecting the organization while seeking truth in data. They’re not satisfied with shallow answers. They know when to trust the tools and when to trust their gut. And critically, they understand that security is not purely a technology problem, but a human one.
We must also talk about the danger of over-automation. Handing too much over to AI can lull a team into complacency. If analysts become button-pushers who just approve AI-generated outputs without question, their investigative instincts and skills will atrophy. The best SOCs going forward will intentionally cultivate their analysts’ expertise even as they adopt AI tools. That means encouraging people to question the AI, double-check critical findings, and continue honing fundamentals like forensic analysis, threat modeling, and incident coordination. It also means training analysts on how AI models work – their assumptions and biases – so the team knows when not to take an AI suggestion at face value.
AI is an amazing force multiplier, but humans remain the strategists and decision-makers. The optimal future isn’t AI or analyst alone; it’s the two working in tandem, each doing what they do best. As one report wisely noted, the smart approach is to use AI for what it’s good at (speed, scale, pattern matching) while actively cultivating human skills in what machines can’t do – creative thinking, contextual understanding, intuition[4]. AI might surface an indicator, but a human will figure out what it means and how to act on it. AI might suggest a remediation, but a human will assess the broader impact of pulling that trigger.
Let’s avoid the hype: we’re not looking for AI to replace analysts, and we shouldn’t view the relationship as adversarial. Instead, we should position the human mind as the central driver, with AI as a powerful ally. An analyst-first, AI-augmented SOC is one where alert fatigue is reduced and mundane tasks are automated, but analysts remain in control – applying their judgment on top of machine outputs. It’s an approach that values empathy, realism, and deep respect for human expertise. After all, cybersecurity at its core is about outsmarting human adversaries and protecting human interests. Who better to do that than skilled human analysts equipped with helpful AI assistants?
As we continue to evolve our security operations, the key will be maintaining that human touch in everything we do. Trust the instinct that says “something’s not right here.” Lean on your teammates and share knowledge. Use AI to handle the grunt work, but never stop thinking and questioning. The art of analysis, the spark of insight, the moral and contextual compass – those remain human domains. And that is why, when it comes to security, there are some things AI just gets wrong – and some things analysts still do best.
Sources
[1] Gartner: Emerging Tech: AI in Security Operations – https://www.gartner.com/en/newsroom/press-releases/2024-08-21-gartner-2024-hype-cycle-for-emerging-technologies-highlights-developer-productivity-total-experience-ai-and-security
[2] DSTL: Human-centred ways of working with AI in intelligence analysis – https://www.gov.uk/government/publications/human-centred-ways-of-working-with-ai-in-intelligence-analysis/human-centred-ways-of-working-with-ai-in-intelligence-analysis#practical-wisdom
[3] Information Week: The Limits of New AI Technology in the Security Operations Center – https://www.informationweek.com/machine-learning-ai/the-limits-of-new-ai-technology-in-the-security-operations-center
[4] Essential Human Skills in the Age of AI: Cultivating What Machines Cannot Replace – https://www.linkedin.com/pulse/essential-human-skills-age-ai-cultivating-what-cannot-taylor-a-vxqye/