Automated security tools are invaluable in managing the vast number of alerts generated in modern networks, but there is a risk they will lead to complacency. Security responders may feel tempted to let these tools handle everything, which can lead to missed opportunities for catching sophisticated attacks.
It is vital that responders don’t become over-reliant on these tools; they also need to cultivate an attacker’s mindset and act accordingly.
The power of red teaming
In the cyber security world, red teaming is an essential exercise that pits a group of ethical hackers (the red team) against a company’s security team (the blue team) in a controlled environment. This method tests the effectiveness of a company’s security posture by simulating an attack scenario that mimics the tactics, techniques and procedures (TTPs) of real-life adversaries. At Cyberis, my colleagues and I engage in these simulations to challenge and enhance the defensive strategies of our clients.
Red team exercises offer several benefits. Primarily, they help organizations identify weaknesses in their detection capabilities and in the methods responders use to contain breaches and resolve incidents. During post-exercise analysis, we often discuss with the blue team where their responses could have been improved. Frequently, an alert was triggered by our actions but was disregarded or not escalated properly, on the assumption that the security controls had already mitigated the threat.
In the early stages of an attack, specifically during initial attempts to breach a target’s defenses, it is common for alerts from endpoint detection and response (EDR) systems to be inadequately investigated.
For instance, if we attempt to deliver a malicious payload via email, web or direct to a workstation and the security systems flag and block this payload, the alert that follows might not be further examined. The prevailing assumption is that since the security control blocked the threat, no further action is needed. However, a deeper analysis of the blocked payload could reveal significant details about the attack techniques and the infrastructure used, such as the domains involved in the attack.
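To make this concrete, here is a minimal sketch (all names and the sample payload are hypothetical) of how a responder might pivot from a blocked payload to the attacker infrastructure it references, rather than simply closing the alert:

```python
import re

# Matches the domain portion of an http(s) URL embedded in payload text.
DOMAIN_RE = re.compile(r"\bhttps?://([a-z0-9.-]+\.[a-z]{2,})", re.IGNORECASE)

def extract_domains(payload_text):
    """Return the unique domains referenced by a payload, e.g. a macro's source."""
    return sorted({m.lower() for m in DOMAIN_RE.findall(payload_text)})

# Hypothetical quarantined macro payload recovered from the blocked delivery.
payload = (
    'Sub AutoOpen()\n'
    '  url = "https://cdn.example-attacker.net/stage2.bin"\n'
    '  backup = "http://fallback.example-attacker.net/stage2.bin"\n'
    'End Sub\n'
)

print(extract_domains(payload))
# ['cdn.example-attacker.net', 'fallback.example-attacker.net']
```

Even this trivial level of analysis yields indicators (the staging domains) that can be blocked proactively and matched against later activity elsewhere in the estate.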
The challenge of high security alert volumes
The challenge arises from the volume of alerts that security teams receive; investigating every phishing email or every alert is practically impossible. This limitation often works in favor of the red team in the initial stages of an attack, allowing us to remain undetected as we attempt to gain a foothold within the target environment.
Once inside, as we attempt to move laterally through the network, different controls might trip us up. For example, we might access a server from an unusual endpoint or attempt to use credentials that turn out to be invalid. Such actions might trigger further alerts within an extended detection and response (XDR) system. Occasionally, our actions might even trigger a block event from an antivirus product.
However, the reliance on automated systems to resolve these alerts can sometimes be a double-edged sword.
For example, during one simulation, we uploaded a Trojanized document to a server, which was automatically detected as malware and deleted, with an alert sent to the security team. When we uploaded a modified version of the payload that bypassed the detection mechanisms, it passed through because the initial alert had been closed after the automated deletion of the first document. The second, successful upload did not raise any alarms, demonstrating a critical gap in the security process.
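One way to close that gap is to track detections as ongoing cases keyed on the source, so that a second, undetected variant from the same source escalates rather than slipping through. The sketch below is a hypothetical illustration of that idea, not any vendor's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    source: str
    hashes: set = field(default_factory=set)
    status: str = "open"

class CaseTracker:
    def __init__(self):
        self.cases = {}

    def record_detection(self, source, payload_hash):
        # Auto-remediation (deleting the file) does NOT close the case:
        # the adversary behind the payload is still active.
        case = self.cases.setdefault(source, Case(source))
        case.hashes.add(payload_hash)
        case.status = "open"
        return case

    def record_upload(self, source, payload_hash):
        # A new, undetected payload from a source with prior detections is
        # exactly the "modified payload" scenario: escalate, don't ignore.
        case = self.cases.get(source)
        if case and payload_hash not in case.hashes:
            case.status = "escalated"
        return case

tracker = CaseTracker()
tracker.record_detection("workstation-17", "sha256:aaa")      # first upload, blocked
case = tracker.record_upload("workstation-17", "sha256:bbb")  # modified payload
print(case.status)
# escalated
```

The key design choice is that remediation of an artifact and resolution of the incident are kept separate, so the second upload inherits the suspicion earned by the first.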
Automated systems don’t always think like attackers
The fundamental issue is that automated systems – and the responders operating them – don’t always think like attackers. If a document is flagged and removed, the threat isn’t necessarily neutralized. The threat is not the document; it’s the adversary behind it, who doesn’t stop after one failed attempt and will often continue to adapt their tactics until they succeed.
We’ve observed scenarios where activities such as the creation of new users in sensitive groups like Domain Admins failed to raise sufficient suspicion. For instance, in one test, we added a user to the Domain Admins group, expecting it to trigger an immediate response. However, the alert generated was dismissed because the account used to create the new user was itself an authorized Domain Admin. The responder handling the alert assumed the activity was legitimate and closed the alert without further investigation.
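A simple triage rule would have caught this: membership changes to Tier-0 groups should never be auto-closed on the grounds that the acting account is privileged, because a compromised Domain Admin looks exactly like a legitimate one. A minimal sketch of such a rule (the event fields and group names are illustrative assumptions):

```python
# Groups whose membership changes always warrant human investigation.
SENSITIVE_GROUPS = {"Domain Admins", "Enterprise Admins", "Schema Admins"}

def triage_group_change(event):
    """Decide whether a group-membership-change alert can be auto-closed."""
    if event["group"] in SENSITIVE_GROUPS:
        # Actor privilege is not grounds for closing the alert: the actor's
        # account may itself be compromised.
        return "investigate"
    return "auto-close" if event["actor_is_admin"] else "investigate"

event = {
    "group": "Domain Admins",
    "new_member": "svc-backup",      # hypothetical account names
    "actor": "admin.jsmith",
    "actor_is_admin": True,
}

print(triage_group_change(event))
# investigate
```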
This points to a critical gap in security training: the ability to think like an attacker. Effective security response requires not just an understanding of what each alert signifies, but also an appreciation of how seemingly isolated events might be linked in a broader campaign. It requires substantial intelligence about the business, excellent situational awareness and a healthy dose of skepticism.
So red teaming exercises serve a dual purpose. They not only test an organization’s security framework, but also train responders to think more like attackers, linking disparate events to form a coherent narrative.
Cultivating this adversarial mindset is crucial as it helps analyze security events in the context of an ongoing, sophisticated cyber attack. This approach is fundamental in evolving from a reactive security posture to a more proactive, strategic one.