Overreliance on Automated Tooling: A Big Cybersecurity Mistake

Author: Ryan Standbridge, GCFA
Date Published: 30 December 2024
Read Time: 6 minutes

Automation and artificial intelligence (AI) have been a positive force multiplier in the cybersecurity industry. Gains in time efficiency, resource utilization, cost, and productivity are great strides for the field and should be celebrated. Yet in a recent Gigamon survey of chief information security officers (CISOs), 70% reported that their existing tooling is not effective at detecting security breaches.1 While increased funding may seem like a viable solution to this problem, it carries inherent risk. Organizational leaders looking to understand these complexities must examine the issue from the perspective of cybersecurity practitioners.

Using automation or AI technologies to drive cybersecurity decision making requires human contextualization, validation, and an understanding of the processes these technologies employ to reach their decisions. A lack of human oversight can be detrimental to the trustworthiness and integrity of a cybersecurity professional's work. Human validation remains essential to interpret the nuances and wider context in various security scenarios that automated tools may miss. Thus, a balance should be sought between automation and human analysis to truly harness this positive force multiplier.

Automated cybersecurity tools have been, and continue to be, developed to enhance practitioner efficiency and accuracy in identifying threats; however, reliance on such tooling can lull organizations into a false sense of security. Overreliance on the output of automated tooling, without first understanding the mechanisms by which its decisions are made, can result in unintended vulnerabilities. One such example is analyst fatigue: automated tooling often generates false positives, and the sheer volume of alerts desensitizes security teams to genuine ones, leading to actual threats being misclassified and ultimately introducing risk to the organization.
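
The scale of this problem is easy to underestimate. The back-of-the-envelope calculation below (a minimal sketch in Python, with all figures hypothetical) shows that even a highly accurate detector produces mostly false positives when genuine incidents are rare:

```python
# Hypothetical illustration of alert fatigue: even an accurate automated
# detector yields mostly false positives when true incidents are rare.

events_per_day = 1_000_000      # total events scanned (assumed figure)
incident_rate = 1e-5            # fraction of events that are real threats (assumed)
true_positive_rate = 0.99       # detector catches 99% of real threats (assumed)
false_positive_rate = 0.01      # detector misfires on 1% of benign events (assumed)

real_incidents = events_per_day * incident_rate
benign_events = events_per_day - real_incidents

true_alerts = real_incidents * true_positive_rate
false_alerts = benign_events * false_positive_rate
total_alerts = true_alerts + false_alerts

# Precision: the chance that any given alert is a genuine threat.
precision = true_alerts / total_alerts
print(f"Alerts per day: {total_alerts:,.0f}")   # ~10,010
print(f"Genuine alerts: {true_alerts:,.0f}")    # ~10
print(f"Chance an alert is real: {precision:.2%}")  # ~0.10%
```

Under these assumed numbers, analysts would sift through roughly 10,000 alerts a day to find 10 genuine incidents, which is precisely the desensitizing volume described above.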

Vendors and their marketing teams have promoted a myth of infallibility around automated and AI-powered tooling, but this could not be further from the truth. According to research conducted by PwC, only 38% of organizations in the United Kingdom have a high level of confidence in their ability to manage cybersecurity risk, despite increased investment in tools.2 Clearly, more automated tooling alone is not the answer.

Human intervention is essential for the sustained success of a cybersecurity program. Cybersecurity practitioners provide the context, experience, and intuition that automated tools, including AI, cannot fully replicate. Validating or dismissing tooling output is critical for ensuring that true positive threats are acted upon appropriately. By continually tuning the alerts they receive to add business context, cybersecurity practitioners can positively impact the security posture of their organizations. Understanding the expected operations and behaviors within an environment, and the technologies the organization uses, expands the tuning options available to practitioners. To cultivate a culture of continuous improvement, cybersecurity practitioners should maintain an open, ongoing dialogue with key system and application stakeholders to learn the nuances of their respective systems. This exercise not only reduces the false positive alerts practitioners receive, but also builds trust across teams.
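
As a deliberately simplified sketch of what such tuning might look like in practice, the snippet below suppresses only alerts that match documented, stakeholder-approved business context and routes everything else to a human analyst. All rule names, hosts, and accounts here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    rule: str
    host: str
    user: str

# Hypothetical tuning rules agreed with system owners: known-good activity
# that previously generated false positives. Each entry should be documented
# and periodically reviewed, not silently accumulated.
APPROVED_CONTEXT = [
    {"rule": "after_hours_login", "host": "backup-01", "user": "svc_backup"},
    {"rule": "bulk_file_read", "host": "dms-02", "user": "svc_archiver"},
]

def triage(alert: Alert) -> str:
    """Suppress only on an exact match with approved context;
    anything unrecognized still goes to a human analyst."""
    for ctx in APPROVED_CONTEXT:
        if (alert.rule, alert.host, alert.user) == (ctx["rule"], ctx["host"], ctx["user"]):
            return "suppress"
    return "escalate_to_analyst"

print(triage(Alert("after_hours_login", "backup-01", "svc_backup")))  # suppress
print(triage(Alert("after_hours_login", "hr-laptop-7", "jdoe")))      # escalate_to_analyst
```

The design choice worth noting is the default: anything unmatched escalates to a person, so tuning reduces noise without removing the human from the loop.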

Trust the Process and Understand the Context

To ensure success, cybersecurity professionals must shift their focus from the output of tooling to the process by which these tools work. Before purchasing a technology that automates detection, organizations should first understand what problem they are trying to solve. It may be that the technologies the organization already uses are sufficient and simply require additional configuration. Where a need for new technology is identified, however, cybersecurity practitioners should first understand how that technology works and, more importantly, why it generates alerts. Without this knowledge, data cannot be interpreted consistently and accurately, and tools cannot be used effectively. Understanding the underlying processes of systems and tooling is essential; without it, cybersecurity experts leave themselves open to misinterpreting data and misclassifying alerts. This can create gaps in the overall security posture and, eventually, result in a cybersecurity breach.
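
To make the point concrete, consider a common class of detection logic: a rule that flags a user after a number of failed logins within a sliding time window. The sketch below is hypothetical (the threshold and window are assumptions, not any vendor's actual values), but it shows why knowing a rule's parameters turns an alert from an unexplained verdict into something an analyst can interpret:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical detection logic: flag a user after N failed logins within a
# sliding window. Knowing these two parameters is what lets an analyst
# interpret the resulting alert, rather than taking it at face value.
THRESHOLD = 5                      # failures before alerting (assumed)
WINDOW = timedelta(minutes=10)     # sliding window (assumed)

_failures: dict[str, deque] = defaultdict(deque)

def record_failed_login(user: str, when: datetime) -> bool:
    """Return True if this failure should raise a brute-force alert."""
    q = _failures[user]
    q.append(when)
    # Drop failures that have aged out of the window.
    while q and when - q[0] > WINDOW:
        q.popleft()
    return len(q) >= THRESHOLD

# Example: five failures within ten minutes trips the alert on the fifth.
t0 = datetime(2024, 1, 1, 9, 0)
for i in range(5):
    fired = record_failed_login("jdoe", t0 + timedelta(minutes=i))
print(fired)  # True
```

An analyst who knows the rule fires on five failures in ten minutes can immediately ask the right contextual questions (an expired service account password? a user returning from leave?) rather than treating the alert as an opaque verdict.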

In cybersecurity, no one-size-fits-all approach is effective because no two organizations are the same. Every technical environment and its intricacies are unique, as are each organization’s protection priorities. It is important to tailor the use of tools to the specific environment of the organization (e.g., healthcare providers prioritize safeguarding patient records and protecting sensitive medical equipment).

By understanding their organization’s unique environment and specific needs, cybersecurity practitioners can tailor their responses to threats more effectively than any automated tool. A measured approach ensures that any security actions align with the organization’s risk appetite while maintaining operational requirements.

There are, however, ways to mitigate such overreliance on tooling. Chief among them is comprehensive training in the principles and processes of cybersecurity when developing proficient practitioners in their respective fields. This could include building an understanding of operating system processes and procedures, how networks enable computers to communicate with one another, or how to determine normal activity within business applications, to name a few. Understanding the principles relevant to their respective environments allows professionals to grasp the nuances of their systems and how those systems interact with automated tooling. Certifications and training are important factors in this endeavor, and vendor-specific training that relates directly to the technologies an organization employs should be considered. Training and certifications, however, are not a silver bullet, and cybersecurity professionals should seek to understand the technologies within the environments they protect. Furthermore, experience plays a pivotal role: more seasoned practitioners may recognize subtle indicators of compromise (IoCs) that automated tooling misses. These practitioners can then pass this knowledge on to junior team members to ensure continual service improvement.

Conclusion

A holistic approach, combining human validation and contextualization with the positive force multiplier of automated tooling and AI technologies, will ultimately benefit organizations substantially. Integrating human expertise and experience with automation and AI ensures that any security containment measures undertaken as part of detection are proportionate to the organization’s risk appetite and do not detrimentally impact the enterprise.

Automated tooling and AI are positive force multipliers for cybersecurity practitioners, but they are not a panacea. External human analytical validation and contextualization should always be sought to avoid creating a false sense of security within organizations. The illusion of infallibility and the prevalence of false positives further highlight the need for cybersecurity professionals to remain critical to an organization’s cybersecurity program. Continuous professional development of cybersecurity professionals is crucial so that they stay up to date with the evolving threats and technologies in the digital world. By supplementing cybersecurity professionals with the automated tooling necessary to achieve their objectives, an organization can develop and mature a holistic cybersecurity program that adequately mitigates risk in line with enterprise objectives.

Ultimately, the human aspect, as it always has been, is the most imperative pillar of a successful cybersecurity program. Human analysts bring the intuition, contextual knowledge, and critical thinking necessary to protect organizations. This expertise should be complemented by automated tools that detect and address the specific security risks relevant to the organization. As the cybersecurity landscape continues to evolve, the synergy between automated and AI tooling and human expertise will be pivotal to safeguarding organizations against emerging threats.

Endnotes

1 Gigamon, “Gigamon Survey Reveals CISO Priorities for 2025 Amid Budget Pressures and Rising Cyber Threats,” 15 October 2024
2 PwC, “Communicating Cyber Risk to the C-Suite,” The Cyber Security Podcast from PwC UK, 25 January 2021

Ryan Standbridge

Is a senior incident response consultant for Quorum Cyber. With nearly 10 years of experience in security, Standbridge has previously held roles with Police Scotland, the University of Dundee (Scotland, UK), and Dynamic Edge Group.
