The growing number of incidents, the rapid evolution of attackers’ TTPs (tactics, techniques, and procedures), and the increasing complexity of information systems present a major challenge for SOCs.
How can they maintain a high level of responsiveness while ensuring exhaustive and accurate investigations?
And just as importantly, how can every authorized stakeholder on the client side get simple access to the relevant information they need: a clear explanation of an ongoing threat, a concise overview of incidents from the past 45 days, the most recurrent users, devices, or threats… so they can take concrete, effective action quickly, without waiting for a steering committee?
We are exploring a new approach: orchestrated agentic AI, capable of automating certain critical steps in incident handling—without compromising human oversight.
1. The SOC at its limits
Traditional ticketing and monitoring tools (SIEM, EDR, XDR, SOAR) provide power and visibility, but still rely heavily on fixed rules and sequential processing chains.
The results:
- Analysts are overwhelmed with tickets and requests.
- Cross-source correlations are rare or manual.
- The average response time is increasing.
In addition, integrating “off-the-shelf” AI modules that require sending personal data to LLMs hosted by third parties (often US-based) currently represents an unacceptable risk, especially since the traceability and management of this data remain largely opaque.
To address these limitations and risks, we are designing an automated investigation agent built on orchestrated agentic AI, combining an LLM, LangGraph, and RAG mechanisms, and running fully sovereign, on our own infrastructure.
2. A hybrid architecture: LLM agents + cyber APIs
Our autonomous investigation agent, built on a multi-agent architecture orchestrated by LangGraph, combines workflow logic, context management, and knowledge retrieval.
Each LLM agent plays a specialized cognitive role (log analysis, CTI contextualization, correlation with client data stored in our internal wiki, synthesis) and has:
- Access to a persistent vector memory (FAISS/Chroma)
- Business rules injected via prompting or encoded in graph transitions
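To make the second point concrete, here is a minimal plain-Python sketch of the idea of encoding business rules in graph transitions rather than in prompts. The node names, routing rule, and stub logic are purely illustrative (a real deployment would use LangGraph nodes calling the LLM), not our production graph:

```python
# Minimal sketch of graph-orchestrated specialist agents.
# Each node is a function over a shared state dict; transitions encode
# business rules (e.g. only call the CTI agent when indicators were
# actually found). All names and stub logic are illustrative.

def log_analysis(state):
    # Stub: a real agent would run the LLM over the raw logs.
    state["indicators"] = [e for e in state["events"] if "failed_login" in e]
    return state

def cti_context(state):
    # Stub: a real agent would query OpenCTI / MITRE sources.
    state["cti"] = {i: "T1110 (Brute Force)" for i in state["indicators"]}
    return state

def synthesis(state):
    state["summary"] = f"{len(state['indicators'])} indicator(s) mapped to CTI"
    return state

def route_after_logs(state):
    # Business rule encoded as a graph transition, not a prompt.
    return "cti_context" if state["indicators"] else "synthesis"

NODES = {"log_analysis": log_analysis, "cti_context": cti_context,
         "synthesis": synthesis}
EDGES = {"log_analysis": route_after_logs,
         "cti_context": lambda s: "synthesis",
         "synthesis": lambda s: None}

def run(state, entry="log_analysis"):
    node = entry
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node](state)
    return state

result = run({"events": ["failed_login from 10.0.0.5", "dns query ok"]})
print(result["summary"])  # → 1 indicator(s) mapped to CTI
```

The key property is that the escalation rule lives in `route_after_logs`, where it is deterministic and auditable, instead of being buried in an LLM prompt.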
Our agentic system is powered by Qwen 2.5 – 32B, a high-performance, open-weight, multilingual LLM, chosen for:
- Alignment with open-source standards (vLLM, HuggingFace) for seamless integration with LangGraph
- Local execution, meeting cybersecurity compliance requirements
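In practice, local execution of this kind can boil down to serving the open weights behind an OpenAI-compatible endpoint on our own hardware. A deployment sketch, using the public HuggingFace model identifier and vLLM's serving CLI (the parallelism and context-length values are illustrative and depend on the GPU budget):

```shell
# Serve Qwen 2.5 32B Instruct locally behind vLLM's
# OpenAI-compatible API server. Flag values are illustrative.
vllm serve Qwen/Qwen2.5-32B-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 32768
```

Because the endpoint speaks the OpenAI API dialect, LangGraph agents can target it with standard clients, with no data ever leaving the infrastructure.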
Direct SOC ecosystem integration:
These agents can dynamically query knowledge bases (historical tickets, internal documentation, MITRE observables, OpenCTI…) as well as detection solutions (Splunk via REST API on logs, MS Sentinel, etc.), then formulate hypotheses or extract indicators useful for investigation.
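As an example of the detection-side tooling, the request an agent sends to Splunk can be built as a plain parameter set against Splunk's public REST search endpoint (`/services/search/jobs` with `exec_mode=oneshot`). The host, index, and query below are illustrative placeholders, not real infrastructure:

```python
# Sketch: building a Splunk "oneshot" search request for an agent tool.
# Endpoint and parameters follow Splunk's public REST API; the host,
# index, and query are illustrative placeholders.
from urllib.parse import urlencode

def splunk_oneshot_request(base_url, query, earliest="-24h"):
    if not query.lstrip().startswith("search"):
        query = "search " + query            # Splunk queries need the verb
    payload = {
        "search": query,
        "exec_mode": "oneshot",              # synchronous, returns results
        "earliest_time": earliest,
        "output_mode": "json",
    }
    return f"{base_url}/services/search/jobs", urlencode(payload)

url, body = splunk_oneshot_request(
    "https://splunk.example.internal:8089",
    'index=proxy dest_ip="203.0.113.7"',
)
print(url)
```

Keeping the request construction in a small, pure function like this makes each agent tool easy to unit-test and audit independently of the LLM.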
3. From theory to practice: the autonomous investigation agent
Initially deployed with 100% Human-in-the-Loop, the agent operates as follows:
- An incident occurs (phishing, data exfiltration, suspicious behavior, etc.).
- The intelligent agent retrieves the ticket, queries the associated logs, identifies critical events, cross-references them with MITRE sources and CTI data, then produces a comprehensive operational summary with the corresponding criticality level.
- The analyst validates or adjusts the output before sending the investigation to the client.
This initial phase is designed to train the AI to improve its relevance across all investigation scenarios.
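The human-in-the-loop gate at the heart of this phase can be sketched as follows: the agent drafts the investigation, but nothing is sent to the client until an analyst explicitly approves (or annotates) it. The data, ticket id, and criticality rule here are illustrative stand-ins for the real correlation logic:

```python
# Sketch of the 100% human-in-the-loop gate: the agent drafts the
# investigation; an analyst must approve before anything leaves.
# Data, ids, and the criticality rule are illustrative.
from dataclasses import dataclass, field

@dataclass
class Draft:
    ticket_id: str
    summary: str
    criticality: str
    approved: bool = False
    analyst_notes: list = field(default_factory=list)

def agent_draft(ticket_id, critical_events):
    # A real agent would derive these from logs + MITRE/CTI correlation.
    level = "high" if len(critical_events) >= 3 else "medium"
    return Draft(ticket_id, f"{len(critical_events)} critical event(s) found", level)

def analyst_review(draft, approve, note=None):
    # Every draft passes through this gate during the initial phase.
    if note:
        draft.analyst_notes.append(note)
    draft.approved = approve
    return draft

draft = agent_draft("INC-4512", [
    "exfil to unknown host", "new admin account", "AV disabled"])
reviewed = analyst_review(draft, approve=True,
                          note="confirmed exfiltration path")
print(reviewed.criticality, reviewed.approved)  # → high True
```

Capturing the analyst's corrections as structured notes is also what provides the feedback signal for improving the agent's relevance over time.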
The main expected outcomes:
- Faster incident analysis
- Higher-quality investigations
- Safe, automatic delegation of low-value-added tasks
4. Secure chatbot: client-side embedded AI
In parallel, a chatbot powered by Qwen 2.5 – 32B would be connected to our incident ticketing tool, allowing clients to:
- Automatically query each ticket
- Add additional context
- Receive contextualized responses, guided by RAG
All with authentication, multi-turn context management, and strict access control.
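One way to make that access control strict is to scope retrieval to the authenticated client before any ranking happens, so RAG can never surface another tenant's tickets. A minimal sketch, with keyword overlap standing in for vector similarity and invented sample documents:

```python
# Sketch of tenant-scoped retrieval for the client chatbot: documents
# are filtered by the authenticated client's id *before* ranking, so
# retrieval can never cross tenants. Keyword overlap stands in for
# vector similarity; all documents are invented examples.

DOCS = [
    {"client": "acme",   "text": "ticket 101 phishing campaign against finance"},
    {"client": "acme",   "text": "ticket 102 vpn brute force blocked"},
    {"client": "globex", "text": "ticket 201 ransomware on file server"},
]

def retrieve(client_id, question, k=2):
    allowed = [d for d in DOCS if d["client"] == client_id]  # access control first
    q = set(question.lower().split())
    scored = sorted(allowed,
                    key=lambda d: len(q & set(d["text"].split())),
                    reverse=True)
    return [d["text"] for d in scored[:k]]

print(retrieve("acme", "what happened with the phishing campaign"))
```

Doing the tenant filter before similarity search (rather than after) means an embedding collision or prompt injection cannot leak a document the client was never allowed to see.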
Towards an Intelligent, Collaborative SOC
Artificial intelligence does not replace human expertise; it amplifies it. By structuring AI around specialized agents, interconnected with SOC tools, we are building a hybrid environment capable of learning, adapting, and responding faster, without ever compromising the rigor demanded by cybersecurity professions.
At SERMA, we are advancing this work to build an AI-augmented SOC, capable of permanently freeing our analysts from repetitive tasks so they can focus on high-value activities. Because AI is only truly useful if it is aligned with real operational challenges, understandable to its users, and perfectly controllable…