Techie Tonic: Cybersecurity in a box? The promises and perils of automated purple teaming
An informal evening conversation between security leaders has sparked a serious question now echoing across boardrooms and security forums: 'What if running a sophisticated cyber‑defence exercise didn’t require a room full of experts?'
What began as a peer discussion about reducing operational cost, risk and complexity quickly gained momentum, crystallising into a provocative idea for a unified platform capable of simulating cyberattacks, testing controls, training analysts and continuously improving readiness. In short, a “purple team in a box.”
What is purple teaming?
Purple teaming, widely regarded as a best practice in mature security programmes, brings offensive red teams and defensive blue teams together to expose weaknesses and sharpen response. Traditionally, however, such exercises are expensive, time‑consuming and heavily reliant on scarce expertise. As budgets tighten and attack volumes escalate, many organisations are asking whether automation can shoulder part of that burden.
To explore the question, I spoke with two seasoned Chief Information Security Officers (CISOs) from the Many CXO community: Owen Connolly, CISO at talabat, and Neil Haskins, CISO at PwC Middle East. Both see promise in automation, and both urge caution.


Breaking silos or creating new ones?
One of the most persistent frustrations in large enterprises is fragmentation. Tools for attack simulation, detection, response and training often exist in isolation, with limited integration.
Neil said: “In the real world, attackers don’t operate in silos, but defenders often do, largely because our tools don’t talk to each other.”
Frameworks such as MITRE ATT&CK have thoroughly mapped adversary techniques, yet both CISOs agree the industry’s challenge is no longer a lack of knowledge.
“We already have the map,” Neil said. “What’s missing is a guided journey, something that actually helps teams execute, learn and mature over time.”
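One way to picture the "guided journey" Neil describes is a simple coverage ledger that maps each exercised MITRE ATT&CK technique to whether defences actually detected it. The sketch below is purely illustrative; the technique IDs are real ATT&CK identifiers, but the data model and results are hypothetical assumptions, not any vendor's implementation.

```python
# Hypothetical sketch: tracking purple-team coverage against MITRE ATT&CK
# technique IDs. The statuses below are illustrative only.
from dataclasses import dataclass

@dataclass
class TechniqueResult:
    technique_id: str        # e.g. "T1059" (Command and Scripting Interpreter)
    simulated: bool = False  # did the red-team automation exercise it?
    detected: bool = False   # did the blue-team tooling raise an alert?

def coverage_gaps(results):
    """Return technique IDs that were simulated but not detected:
    the 'guided journey' starts with this gap list."""
    return [r.technique_id for r in results if r.simulated and not r.detected]

results = [
    TechniqueResult("T1059", simulated=True, detected=True),
    TechniqueResult("T1078", simulated=True, detected=False),  # Valid Accounts
    TechniqueResult("T1566", simulated=False),                 # Phishing, not yet run
]

print(coverage_gaps(results))  # -> ['T1078']
```

A real platform would enrich each gap with the "why": which log source was missing, which rule misfired, and what the team should practise next.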
A teaching system, not just another tool
Advocates of the “purple team in a box” concept envision far more than bundled technology. At its best, such a platform would act as a teaching system, simulating attacks, validating defences, highlighting gaps and crucially explaining why those gaps exist.
Owen added, “Most tools will tell you what failed, but very few tell you why it failed, or how your team should think differently next time.”
Neil reinforced this view, emphasising that automation should educate as much as it evaluates. “You don’t train defenders with dashboards alone; they need lived experience, even if it’s simulated. Automation should behave like a teacher as much as a colleague.”
Force multiplier, not a replacement
Both CISOs were emphatic on one point: automation must not replace human‑led exercises.
“Automated purple teaming is a force multiplier, not a substitute,” Neil said. “Automation tests what it is programmed to test. Humans bring creativity, unpredictability and business context. They jump steps, challenge assumptions and behave, well, human.”
Used correctly, automation provides continuous assurance, by validating known controls, detections and workflows at scale, while human‑led red teaming tests decision‑making, resilience and real‑world attack paths. Used poorly, it risks creating a dangerous illusion of security.
The risk of false confidence
For Owen, false confidence is the greatest concern.
“These tools are excellent at baseline validation, especially in high‑velocity CI/CD environments,” he said. “But they don’t think laterally, adapt to your organisation, or chain attack paths the way a skilled adversary does.”
He warned against allowing automated platforms to “check their own homework” without independent validation. The most critical failures, he argued, rarely lie in detection logic, but in human response, communication breakdowns, flawed assumptions and leadership under pressure.
Integration is the real test
Neil also urged realism about implementation.
“The biggest challenge is assuming these tools will simply plug and play, integrating with identity systems, logging platforms and operational workflows. In reality, it is difficult both technically and culturally. Poor ownership, alert fatigue and excessive customisation can derail even well‑funded rollouts.”
“Cybersecurity in a box only works when it becomes part of a broader symbiotic ecosystem,” both CISOs added. “Otherwise, you risk duplication, blind spots and a false sense of coverage.”
A turning point for the industry
Despite their cautions, both CISOs agree automation excels at scale.
“Continuous control validation and regression testing are time sinks that don’t require creativity,” Owen said. “Offload those tasks and let humans focus on adversarial thinking and response readiness.”
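The "continuous control validation" Owen describes can be thought of as regression testing for detections: replay known attack indicators and fail fast when a rule that used to fire no longer does. The sketch below is a minimal assumption-laden illustration; the rule names, event fields and thresholds are invented for the example, not drawn from any specific product.

```python
# Hypothetical sketch of continuous control validation: replay known
# attack indicators through detection rules and report regressions.
# Rule names, event shapes and thresholds are illustrative assumptions.

DETECTION_RULES = {
    "suspicious_powershell": lambda e: e["process"] == "powershell.exe"
                                       and "-enc" in e["cmdline"],
    "brute_force_login": lambda e: e["event"] == "login_failed"
                                   and e["count"] >= 10,
}

def validate_controls(replayed_events):
    """Return the names of rules that failed to fire on their
    matching replayed event (i.e. detection regressions)."""
    failures = []
    for rule_name, event in replayed_events.items():
        if not DETECTION_RULES[rule_name](event):
            failures.append(rule_name)
    return failures

replay = {
    "suspicious_powershell": {"process": "powershell.exe", "cmdline": "-enc abc"},
    "brute_force_login": {"event": "login_failed", "count": 3},  # below threshold
}

print(validate_controls(replay))  # -> ['brute_force_login']
```

Run on a schedule, a check like this catches silent breakage in known controls, freeing human testers for the lateral, adaptive attack paths automation cannot improvise.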
As the debate continues, cybersecurity finds itself at a turning point. The question is no longer whether automation has a role, but how far it should go and how responsibly it is deployed.
“The attackers are human,” both concluded. “Defending against them still requires human judgment in the loop. The future of cybersecurity isn’t just about stopping attacks; it’s about teaching systems and people to think ahead of them.”
Stay tuned for more interviews and discussions…



