The Evolution of Security Automation
Summary: Security automation has evolved over the last 20 years from home-grown scripts to SOAR playbooks that automate workflows and is now entering the era of AI agents. AI agents have capabilities that address the previous technology’s limitations and are accessible to organizations that didn’t have the resources to benefit from SOAR.
Early Automation with Scripts and Macros
While security automation is a major priority for businesses today, it has always been a reality for security practitioners who deal with alerts. In the 2000s, teams often developed bespoke scripts to handle repetitive security tasks and save themselves time. As the range of tasks expanded and requirements changed, these custom scripts became increasingly complex and difficult to maintain.
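A minimal sketch of the kind of one-off script from that era, assuming a made-up firewall log format (real formats varied by vendor, which is exactly why these scripts grew brittle):

```python
import re
from collections import Counter

# Hypothetical log lines; the format and field names are illustrative.
LOG_LINES = [
    "2004-03-01 10:00:01 DENY src=10.0.0.5 dst=192.168.1.1",
    "2004-03-01 10:00:02 DENY src=10.0.0.5 dst=192.168.1.2",
    "2004-03-01 10:00:03 DENY src=10.0.0.9 dst=192.168.1.1",
]

def noisy_sources(lines, threshold=2):
    """Count DENY events per source IP and flag repeat offenders."""
    counts = Counter()
    for line in lines:
        match = re.search(r"DENY src=(\S+)", line)
        if match:
            counts[match.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]

print(noisy_sources(LOG_LINES))  # ['10.0.0.5']
```

The hardcoded regex and threshold work until the log format or the requirements change, at which point every such script needs hand maintenance.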
The introduction of Security Information and Event Management (SIEM) systems in the mid-2000s further accelerated the need for automation. SIEMs collected and correlated (itself a form of automation) large amounts of data from various security sources, generating an influx of events and alerts. The growing volume and variety of that data lengthened response times for Security Operations (SecOps) teams and made further automation imperative. As complexity grew, teams sought more sustainable solutions that could keep pace with changing security threats and requirements. This led to the emergence of a new product category: Security Orchestration, Automation, and Response (SOAR) systems.
SOAR Platforms
The SOAR market emerged in the mid-2010s as a response to the growing need for automation in cybersecurity. Early vendors like Invotas, Phantom, and Demisto defined key product capabilities:
- Orchestration: integrating various security tools and systems with different data formats and APIs into a unified platform,
- Automation: executing pre-defined actions or workflows based on specific triggers,
- Response: performing remediation across systems when threats are identified, and
- Collaboration: enabling team coordination and information sharing.
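At its core, the Automation capability above is a fixed mapping from triggers to workflows. A minimal sketch, with illustrative alert types and action names (not any vendor's actual API):

```python
# Pre-defined playbooks: each known alert type maps to a fixed
# sequence of response actions, executed in order.
PLAYBOOKS = {
    "phishing_email": ["quarantine_message", "block_sender", "notify_user"],
    "malware_detected": ["isolate_host", "collect_forensics", "open_ticket"],
}

def run_playbook(alert_type):
    """Return the pre-defined workflow for a known alert type."""
    steps = PLAYBOOKS.get(alert_type)
    if steps is None:
        # No matching playbook: the alert falls back to a human analyst.
        return ["escalate_to_analyst"]
    return steps

print(run_playbook("phishing_email"))
# ['quarantine_message', 'block_sender', 'notify_user']
```

Every new alert type, tool, or edge case requires someone to author and maintain another entry in the mapping, which is where the costs described below come from.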
On paper, SOAR sounded great. In reality, it hasn’t met its lofty goals. Organizations often struggle with the substantial upfront and ongoing costs of using SOAR. The expected ROI frequently proves elusive, as these systems demand extensive resources for customization, integration, and maintenance. This creates a paradox where a solution meant to reduce workload and increase efficiency often results in additional overhead and resource strain.
There’s an analogy here to AI’s history. SOARs rely on rule-based decision-making, which was popular in AI during the 1980s with expert systems. Expert systems represented domain knowledge as if/else rules that could be applied automatically to a given input. They fell out of favor for several reasons: (1) capturing domain knowledge is expensive, (2) rules are brittle, and (3) maintaining systems is hard when the underlying data and knowledge change over time – exactly the issues the cybersecurity industry has experienced with SOAR. The AI community moved on from expert systems to other techniques.
The Agentic AI Future
Though Agentic AI may feel like something brand new, the truth is that AI agents have been around for a long time. I started working with agents in 2003 and ended up writing my PhD thesis on a research topic related to reinforcement learning agents. Agents are getting attention now because they can be used in conjunction with large language models (LLMs) to achieve goals on a wide variety of tasks.
So what is an AI agent? Simply put, an AI agent can perceive its environment and take actions autonomously or with limited human intervention to achieve goals. AI agents can learn, which gives them the ability to improve performance over time. Let’s unpack that definition into a few key components:
- Actions – the steps that an agent can take to interact with its environment,
- Sensors – the ways that an agent can collect data from the environment,
- Goals – the objectives an agent is trying to achieve, and
- Memory – the storage system within the agent where it keeps track of information it knows to help achieve its goals.
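The four components above can be sketched as a toy agent loop. This is a deliberately simplified illustration, assuming a list-based environment; the class and method names are made up for the example:

```python
class TriageAgent:
    """Toy agent wiring together sensors, actions, a goal, and memory."""

    def __init__(self, goal):
        self.goal = goal      # Goal: the objective the agent pursues
        self.memory = []      # Memory: observations gathered so far

    def sense(self, environment):
        """Sensor: pull the next alert from the environment."""
        alert = environment.pop(0) if environment else None
        if alert is not None:
            self.memory.append(alert)
        return alert

    def act(self, alert):
        """Action: decide a response, informed by memory."""
        if alert is None:
            return "idle"
        # Repeated sightings of the same source escalate the response.
        seen = sum(1 for a in self.memory if a["src"] == alert["src"])
        return "block_source" if seen > 1 else "log_and_watch"

env = [{"src": "10.0.0.5"}, {"src": "10.0.0.5"}]
agent = TriageAgent(goal="contain repeat offenders")
actions = [agent.act(agent.sense(env)) for _ in range(2)]
print(actions)  # ['log_and_watch', 'block_source']
```

Even in this sketch, the decision changes as memory accumulates – the seed of the learning behavior described below, though real agents would use far richer models than a counter.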
The emergence of AI agents for SecOps represents a fundamental shift in how we approach security automation. Unlike traditional automation tools that follow predetermined paths, AI agents can dynamically address security incidents by understanding context, learning from experience, and making nuanced decisions – much like human analysts do.
These AI systems address several key limitations of legacy automation:
- Sophisticated Decision-Making: Rather than relying on binary if/else logic, AI agents can weigh multiple factors simultaneously and make more flexible decisions. They can process unstructured data, recognize patterns, and consider contextual nuances that would be impossible to capture in traditional playbooks. This mirrors how experienced security analysts actually work – considering multiple data points and their relationships before making judgment calls.
- Dynamic Response Paths: Instead of following static playbooks, AI agents can dynamically determine the most appropriate next steps based on real-time findings. This adaptive approach means investigations can branch into unexpected directions when new evidence emerges, similar to how human analysts might pivot their investigation based on discovered artifacts. The agent can identify relevant tools and actions from its knowledge base, rather than being constrained by pre-programmed sequences.
- Reduced Maintenance Burden: Because AI agents can learn and adapt, they don’t require the constant playbook updates that traditional automation demands. When new threats emerge or tools change, the system can incorporate this information into its decision-making without requiring manual reconfiguration. This dramatically reduces the maintenance overhead that has historically made automation inaccessible to smaller security teams.
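To make the dynamic-response idea concrete, here is a hedged sketch: instead of a fixed step sequence, each next step is chosen from the findings accumulated so far. The decision function stands in for an LLM or learned policy, and all step and finding names are illustrative:

```python
def choose_next_step(findings):
    """Pick the next investigation step based on what we know so far.
    In a real agent this would be an LLM or learned policy, not rules."""
    if "malicious_hash" in findings:
        return "isolate_host"
    if "suspicious_login" in findings:
        return "check_auth_logs"
    return "enrich_alert"

def investigate(initial_finding, discoveries):
    """Follow findings wherever they lead, up to a step budget."""
    findings, trail = {initial_finding}, []
    for _ in range(5):
        step = choose_next_step(findings)
        trail.append(step)
        if step == "isolate_host":   # Terminal containment action
            break
        findings |= discoveries.get(step, set())
    return trail

# Toy "environment": what each step happens to uncover in this scenario.
DISCOVERIES = {
    "enrich_alert": {"suspicious_login"},
    "check_auth_logs": {"malicious_hash"},
}
print(investigate("odd_dns_query", DISCOVERIES))
# ['enrich_alert', 'check_auth_logs', 'isolate_host']
```

The path was not pre-programmed: the investigation pivoted to auth logs and then containment only because of what each step uncovered, which is the branching behavior static playbooks cannot express.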
The future of security automation lies in intelligent systems that can reason about security problems, learn from experience, and collaborate effectively with human analysts. This shift promises to make advanced security automation accessible to organizations of all sizes, not just those with the resources to maintain complex playbook systems.
Whether you have no automations, are struggling to realize value from your existing tools, or have 100s of playbooks in production but want to reduce your maintenance burden, reach out. We’d love to share more about Embed’s approach to Agentic AI for security automation.