Traditional approaches to penetration testing have been overtaken by the changes in the information security threat landscape seen in recent years. Nick Rafferty explains why this is the case and what can be done about it.
Ten years ago, penetration testing was viewed as a luxury service, typically aimed at ensuring that companies’ network perimeters were secured against malicious external attacks. Most organisations commissioning this type of test would extend it to their internal networks, both to establish how far an external attacker could get after breaching the perimeter and to understand their level of protection against insider threats.
The tests were typically conducted once per year, with the time in between tests spent wading through the output – most likely PDF documents – to extract the key findings and turn them into operational activities aimed at rectifying the issues that were discovered.
More recently, we have seen the emergence of vulnerability scanning software: an automated way to perform vulnerability testing more frequently, though without the rigour of a penetration test performed by a security expert. These automated scans were seen as a major step forward in security assurance, with the penetration test providing the ‘rigour and depth’ of human testers and vulnerability scanning delivering the ‘frequency and breadth’ that automation could offer.
What the vulnerability scanning providers had in common was a management capability that delivered the output as interactive reports and automated the remediation process. So for a number of years we were left with a scenario in which a company would penetration test annually and vulnerability scan on a monthly or bi-monthly basis.
But if we look at how the security landscape has evolved over the last one to three years alone, we can see significant shifts.
There are only two main reasons why companies do not penetration test more regularly: the cost of testing, and the ability to consume the output. Although budget allocation is an issue, the difficulty of acting on test results is perhaps the bigger of the two.
If we look at the historical ‘annual big bang’ approach to penetration testing and how it has changed over the past decade, tests have certainly increased in frequency, but the mechanism for delivering the results has stayed largely the same: a static PDF report or Excel spreadsheet, which must then be converted into actionable information and fed into a business process to implement the recommendations. All of this takes time, and it creates problems of its own.
Therefore, a new approach to security testing and assurance is urgently required.
Penetration testing needs to be performed much more frequently than most organisations do today, and that requires a new type of service offering: ‘pentesting-as-a-service’. With this approach, companies would subscribe to a service with a guaranteed number of testing days, to be called off as required between regular scheduled tests.
The increased volume of testing also has to be consumed far more effectively than it is today. A management platform would therefore sit at the heart of this service, offering capabilities similar to those of vulnerability scanning solutions (or even integrating them): dynamic reporting, trend analysis, remediation management workflow, and on-demand technical support, all designed to cut the management overhead that currently limits how much testing can be consumed. Most importantly, it would also drive the fixing of the vulnerabilities that are discovered.
With more regular check-ups on the security of their networks via pentesting-as-a-service replacing the outdated annual approach, organisations will gain a better understanding of their business risk and improve their defence against attacks and breaches over time.
Nick Rafferty is COO, SureCloud