Toby Scott-Jackson explores how penetration testing has changed, and how it must continue changing in the future to remain a crucial tool for managing cyber risk as organizations move into the era of the Internet of Things.
Penetration testing has, quite rightly, become part of the basic IT security vocabulary. It is a core element of any organization’s security strategy, but that is not to say that penetration testing is a static discipline. Rather, it has undergone an evolution to reach this point – and, crucially, it must continue evolving if it is to keep supporting and enhancing organizations’ security postures.
To explore the future evolution of penetration testing, we must first consider how the traditional cycle looks, and what catalysed it in the first place. The need for penetration testing is always driven by sharing of resources – by multiple users accessing one system. In years gone by, such resource-sharing might simply involve multiple users using the same database or admin system. Then came enterprise networks that link up different departments within an organization. The internet itself is another resource-sharing innovation – and now we are moving into the next era, with the rise and rise of connected devices and the Internet of Things (IoT).
The problem is that, as soon as more than one user has a connection to the same resource, it becomes possible to compromise that resource without physical access. And because each of the innovations outlined above has been exactly that – a true innovation, deploying entirely new technology concepts – it is very difficult for IT professionals to predict the vulnerabilities that may arise.
This leads to the second stage of the penetration testing cycle – the abuse of a new system by a cybercriminal, in which a vulnerability is identified and exploited to gain access to network resources. Then comes stage three of the cycle – the recreation of that abuse, so that security professionals can determine what attackers are able to do, and how to resolve the problem.
This process of launching a new computing system, discovering unexpected bugs and flaws, and then repairing them was as true at the dawn of computer security in the 1960s as it is now, with the advent of the IoT. Nowadays, security professionals do their best to skip the abuse step, and go straight to recreating potential attacks before they occur. But the underlying logic remains the same.
New era, new challenges
The trouble is that the IoT era has thrown up some new challenges:
- New technologies, evolving at lightning speed – computing has always been about change and development, but the IoT era has accelerated that pace of change, and the growth in scale and complexity, beyond anything seen before: an organization supporting connected devices makes significant changes to its infrastructure every day.
- Keeping up with sophisticated adversaries and threats – this is a pentesting challenge that has always existed, but the IoT era has amplified it enormously.
- Maintaining the big picture – somewhat in opposition to the first two challenges – if organizations are continually trying to keep pace in this hugely dynamic world, how can they ensure that they retain a strategic view of IT security that is precisely tailored to their unique context? One example of this is native desktop applications, which are largely ignored by the major industry security standards, yet still process some of an organization’s most sensitive assets.
The author
Toby Scott-Jackson, Principal Security Consultant