Navigating the AI Trust Journey

In today’s rapidly evolving digital landscape, trust in advanced forms of technology like artificial intelligence (AI) is a tough proposition. Everyone is familiar with examples of AI confidently claiming something is true, only to find out that it was completely fabricated (e.g., fake legal precedents, inaccurate claims about the James Webb Space Telescope, etc.). That’s created a healthy dose of skepticism. In a previous blog post, we talked about the cost of hallucinations for security operations teams.

In cybersecurity, our industry has a track record of over-hyping new technologies, only to watch them fall into a “trough of disillusionment.” The story is largely the same when it comes to the use of AI in cybersecurity.

  • Over-promising, under-delivering: Early security AI solutions claimed to eliminate false positives and catch every threat. Like other detection mechanisms, AI isn’t immune to false positives and false negatives, but it has certainly proved its worth as an important tool in the toolkit.
  • Opaque models: AI models frequently lacked transparency, leaving security teams to question why certain decisions were made or what data influenced them.
  • Bias and drift: Models were often trained on narrow datasets, which introduced biases that surfaced once the models were deployed in the real world and ultimately made them unreliable under realistic conditions.

This isn’t all doom and gloom, though. There are legitimately helpful AI tools solving real problems in cybersecurity today, but the point is that new technologies have to overcome these barriers to truly have an impact.

At Embed, we recognize this challenge and are developing an approach to help security operations teams establish trust in our autonomous alert investigation solution. Trust is not something that happens overnight; it’s a journey built through transparency, accuracy, and adaptability. Our goal is to empower Embed users with the confidence they need to fully leverage our solution, reducing reliance on manual alert investigations while streamlining their security processes.

The Foundation of Trust: Transparency and Evidence-based Reasoning

The cornerstone of trust-building lies in transparency. AI-enabled tools must explain the data they’re using to make a decision and the reasoning associated with that data.

For alert investigation, that means explaining why a security alert is deemed a false alarm or a true threat. At Embed, we’ve broken the investigation process into a library of investigative steps, or iSteps™. An iStep answers an overarching analyst question. The answer to that question comes from two sources: (1) data that is gathered and made available as evidence, and (2) additional sub-questions that our system asks and answers. We expose both the evidence and the sub-Q&A so that a security analyst can independently validate our reasoning. Whether it’s identifying suspicious IP addresses, determining a file is malicious, or interrogating email headers, our system is designed to provide clear explanations that align with analyst intuition and judgment.
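To make that concrete, here’s a rough sketch of how an investigative step could be represented so that the overarching question, the supporting evidence, and the sub-questions and answers are all exposed for review. The structure and names below are illustrative assumptions for this post, not Embed’s actual iStep implementation.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch only -- the names and structure are assumptions,
    # not Embed's actual iStep implementation.

    @dataclass
    class Evidence:
        source: str   # e.g., "EDR telemetry", "email gateway log"
        detail: str   # the raw observation an analyst can verify

    @dataclass
    class SubQuestion:
        question: str
        answer: str
        evidence: List[Evidence] = field(default_factory=list)

    @dataclass
    class InvestigativeStep:
        analyst_question: str                     # the overarching question this step answers
        sub_questions: List[SubQuestion] = field(default_factory=list)
        conclusion: str = ""

        def explain(self) -> str:
            """Render the reasoning chain so an analyst can validate it independently."""
            lines = [f"Q: {self.analyst_question}"]
            for sq in self.sub_questions:
                lines.append(f"  - {sq.question} -> {sq.answer}")
                for ev in sq.evidence:
                    lines.append(f"      evidence [{ev.source}]: {ev.detail}")
            lines.append(f"Conclusion: {self.conclusion}")
            return "\n".join(lines)

    step = InvestigativeStep(
        analyst_question="Is the sender IP associated with known malicious infrastructure?",
        sub_questions=[
            SubQuestion(
                question="Does the IP appear on current threat-intelligence blocklists?",
                answer="No matches found",
                evidence=[Evidence("threat intel feed", "198.51.100.7 absent from subscribed blocklists")],
            )
        ],
        conclusion="No indication the sender IP is malicious; supports a false-alarm verdict.",
    )
    print(step.explain())

The point of the sketch is simply that every conclusion carries its question, its sub-answers, and the evidence behind them, so nothing has to be taken on faith.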

Building Trust Through Accuracy

While transparency is crucial for building initial trust, accuracy remains the ultimate test of any system. We want Embed users to gain confidence in our results, not just because we explain them, but because our results are dependably correct.

Will we be perfect? Of course not. There’s no fool-proof, perfectly accurate solution to the SOC challenge today despite the best efforts of many in our industry. If anyone claims otherwise, smile, nod, and slowly back toward the exit. Our goal at Embed is to remove as much of the hay (false alarms) as we can so that security teams can focus their limited time and attention on the pile most likely to contain needles. Systems that are more accurate will naturally engender more trust. User confidence is further amplified when customers observe how much time and effort are saved due to automated triage and investigation, allowing them to focus on higher-priority tasks.

Most security teams don’t have the bandwidth to review all the alerts they receive each day, so analysts are forced to prioritize. This prioritization is time-consuming and error-prone, typically relying on simplistic, inadequate metadata like vendor severity ratings, attack lifecycle stages, or detection signature names. We know it’s hard to trust automated tools with this work. But as users build confidence in our transparent and accurate investigations, they rely less on manual prioritization because they see the value. We consider the biggest win to be when a team gains enough trust in Embed that they no longer even need to review the investigation report when we’ve concluded there’s a false alarm. That means security teams get more time back so they can focus their expertise where it matters most.
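For contrast, here’s a tiny, purely illustrative sketch of the metadata-only prioritization described above; the field names are assumptions for this example. Sorting on a vendor’s severity label says nothing about whether an alert is actually a false alarm, which is exactly why it’s error-prone.

    # Purely illustrative: metadata-only triage of the kind described above.
    # Field names ("vendor_severity", "signature") are assumptions for this example.

    SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

    def naive_triage(alerts):
        """Order alerts by vendor-assigned severity alone -- no investigation involved."""
        return sorted(alerts, key=lambda a: SEVERITY_RANK.get(a.get("vendor_severity", "low"), 3))

    alerts = [
        {"id": "A-102", "vendor_severity": "medium", "signature": "Suspicious PowerShell"},
        {"id": "A-101", "vendor_severity": "critical", "signature": "Port scan detected"},
    ]

    for alert in naive_triage(alerts):
        print(alert["id"], alert["vendor_severity"], alert["signature"])
    # A noisy "critical" port-scan alert jumps ahead of a potentially real
    # "medium" PowerShell alert, because no evidence informs the ordering.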

Maintaining Trust Over Time

Building trust through transparency and accuracy is just the beginning—maintaining that trust over time presents an entirely different challenge. Once AI automation tools establish credibility, a natural shift occurs: users begin to rely more heavily on the system’s conclusions and spend less time scrutinizing individual results. That’s a sign the system is working as intended, but it introduces a potential future blind spot if not properly managed.

The cybersecurity domain is constantly changing. Attackers continuously evolve their tactics, techniques, and procedures (TTPs), while defenders generate new detection capabilities in response. AI-based investigations must keep up with this arms race. The risk emerges when security teams, having developed trust in the AI’s capabilities, naturally reduce their oversight efforts. When the threat landscape shifts, it’s incumbent on AI systems to provide visibility into these changes so users can validate the system is still working properly.

At Embed, we recognize that maintaining trust requires continuous vigilance and proactive communication about the AI system’s evolution. We’re committed to developing features that provide security teams with insights into how the threat landscape—and our response to it—is changing over time. When our system encounters new attack types or generates alerts for previously unseen tactics, we will make these developments visible to users, showing them how investigative reasoning adapts to emerging threats.

This approach serves two purposes: it helps users understand when they should pay closer attention to results (particularly when new attack patterns are detected), and it demonstrates the AI system’s ongoing learning and adaptation. Rather than allowing users to “set it and forget it,” it’s important to surface moments when the data has shifted or the AI reasoning is evolving, allowing security teams to validate behavior and effectiveness.
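One generic way to surface those moments is to flag alerts whose detection signature or attacker tactic hasn’t been seen in the environment before, so analysts know when extra scrutiny is warranted. The sketch below is a simplified illustration of that idea under our own assumptions, not a description of Embed’s feature set.

    # Generic sketch only -- not Embed's actual implementation. Flags alerts whose
    # detection signature or attacker tactic is new to this environment.

    class NoveltyTracker:
        def __init__(self):
            self.seen_signatures = set()
            self.seen_tactics = set()

        def check(self, alert):
            """Return reasons this alert deserves closer analyst attention, if any."""
            reasons = []
            if alert["signature"] not in self.seen_signatures:
                reasons.append(f"first occurrence of signature '{alert['signature']}'")
                self.seen_signatures.add(alert["signature"])
            if alert["tactic"] not in self.seen_tactics:
                reasons.append(f"previously unseen tactic '{alert['tactic']}'")
                self.seen_tactics.add(alert["tactic"])
            return reasons

    tracker = NoveltyTracker()
    print(tracker.check({"signature": "OAuth consent phishing", "tactic": "initial-access"}))
    # flags both the new signature and the new tactic for closer review
    print(tracker.check({"signature": "OAuth consent phishing", "tactic": "initial-access"}))
    # [] -- nothing new, routine handling applies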

The Road Ahead

Navigating the AI trust journey requires us to demonstrate our value through action, not promises. We’re focused on making Embed as transparent and accurate as possible right now—showing our work, explaining our reasoning, and delivering results that security teams can easily validate.

What energizes us is hearing feedback from our early customers as they experience Embed firsthand. We’re seeing security teams move from skepticism to cautious confidence as they begin to trust Embed’s investigations. That measured confidence from experienced practitioners validates our approach: in an industry built on “trust but verify,” providing clear explanations and consistently accurate investigations is how you move from the “verify” column to the “trust” column.

At Embed, we’re committed to unlocking the full potential of AI in cybersecurity through transparency, accuracy, and adaptability—empowering our customers to lead their security operations with confidence.