Your network? Probably not – there’s little left hiding after the weekly vulnerability scan and bi-annual penetration test. Nor is it your web applications; you know all about the cross-site scripting, and that you should probably stop using MD5 to hash passwords. It’s not even the human element; you’re pretty sure you’ve cracked social engineering with that pamphlet you handed out. So what is it? What could it possibly be?
The element that hardly ever seems to be covered by organizations’ penetration tests is native desktop applications. You know, the old-fashioned ones that live outside Internet Explorer – the ones that tend to be powering your finance, human resources and the general backbone of your company. Nevertheless, we see a much higher “hit rate” when security testing these applications than web or mobile ones (where a ‘hit’ is a comprehensive takeover of the app’s data), even with the relatively small proportion of such tests that we conduct.
A typical example might be a client application that connects directly to a backend database. The end-user opens the application and is met with a login prompt; only after successfully authenticating to the application is any sensitive data revealed. However, it is surprisingly common for such applications to handle authentication within the database itself; that is to say, a table of usernames and password hashes must be queried in order to authenticate the user. This creates the fatal flaw: the client application must be able to access the database before it has authenticated the user.
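A minimal sketch of this anti-pattern, using Python’s built-in sqlite3 as a stand-in for the remote backend database. Every name, table and credential here is invented for illustration; the point is simply that the fully privileged connection is open before the user has proven anything (and yes, the MD5 hashing is part of the caricature):

```python
import hashlib
import sqlite3

# Stand-in for a remote database reachable with the app's embedded account.
# In the real-world flaw this connection would use hard-coded credentials.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, pw_md5 TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', ?)",
             (hashlib.md5(b"s3cret").hexdigest(),))

def login(username, password):
    # The flaw: this lookup runs over the already-open, fully privileged
    # connection *before* the user has been authenticated at all.
    row = conn.execute("SELECT pw_md5 FROM users WHERE name = ?",
                       (username,)).fetchone()
    return row is not None and row[0] == hashlib.md5(password.encode()).hexdigest()

print(login("alice", "s3cret"))  # the app "authenticates" the user
print(login("alice", "wrong"))   # ...but the DB access preceded either outcome
```

Anyone who can reach `conn` directly never needs `login()` to succeed at all.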
This is often excused because, at first glance, it seems to operate like a web application. In a typical web architecture, the application server holds a single set of database credentials that gives it access to all the data it needs.
However, the application server is not running in the context of the user – it is a remote gatekeeper that can enforce authorisation controls. Our client-side desktop application flaw is the equivalent of allowing the user’s web browser to connect directly to the data source. The web browser runs in the context of the user and can be manipulated by them, just like a client-side desktop application. No web developer in their right mind would trust the browser with this authority, so why is it so common in native desktop applications?
This is a fundamental architectural issue: an application under the control of the end-user must possess database credentials prior to user authentication. Because the application runs in the context of the user, any database credentials it can access are, in principle, available to the user as well. We’ve seen a large number of attempts to obfuscate these credentials – storage in configuration files, the registry and application binaries, along with an array of different encoding and encryption techniques. But the fact remains: no matter how obscure the database credentials are, if the application can retrieve them when executed by a user, the user can do the same thing themselves.
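To illustrate why obfuscation doesn’t help, here is a hedged sketch of the kind of reversible “encryption” we are describing – a static XOR key plus Base64, with every name and value invented for the example. The key necessarily ships with the application, so anyone who recovers it can run the same decoding step the application runs:

```python
import base64

# Hypothetical static key, as recovered from the application binary.
XOR_KEY = b"Sup3rS3cretKey"

def _xor(data: bytes) -> bytes:
    return bytes(b ^ XOR_KEY[i % len(XOR_KEY)] for i, b in enumerate(data))

def obfuscate(plaintext: str) -> str:
    # What the vendor's build process does to "protect" the credentials.
    return base64.b64encode(_xor(plaintext.encode())).decode()

def deobfuscate(blob: str) -> str:
    # What the application does at startup - and what any user can replay.
    return _xor(base64.b64decode(blob)).decode()

# Invented connection string, in the style of a config-file entry.
blob = obfuscate("Server=db01;Uid=app_admin;Pwd=hunter2")
print(blob)               # looks like ciphertext in the config file...
print(deobfuscate(blob))  # ...but the plaintext is one function call away
```

Swapping XOR for AES changes nothing fundamental: the decryption key still has to be reachable by code running as the user.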
Once an attacker figures out how, they can likely connect directly to the database. The database account will often need access to all data, and the attacker assumes this level of access without ever providing valid user credentials for the application itself – a total authentication bypass. What’s notable is that such an application appears entirely secure on the surface.
As well as bypassing authentication, the attacker has probably bypassed any authorisation controls. If the database account has access to all data, it is usually the client that makes authorisation decisions about what the user can access. Enforcing security controls on the client side almost always allows them to be subverted.
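To make the client-side authorisation point concrete, a small hypothetical sketch (again using sqlite3 as a stand-in, with invented table and user names). The database account can read every row; the “control” exists only as a filter in client code, so connecting directly with the same account skips it entirely:

```python
import sqlite3

# Stand-in backend: the app's single database account sees everything.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE salaries (employee TEXT, amount INTEGER)")
conn.executemany("INSERT INTO salaries VALUES (?, ?)",
                 [("alice", 50000), ("bob", 90000)])

def salaries_visible_to(current_user):
    # Client-side "authorisation": fetch all rows, then hide the others.
    rows = conn.execute("SELECT employee, amount FROM salaries").fetchall()
    return [r for r in rows if r[0] == current_user]

# What the application window shows alice:
print(salaries_visible_to("alice"))
# What alice sees if she connects directly with the app's account:
print(conn.execute("SELECT employee, amount FROM salaries").fetchall())
```

The fix is architectural: the filtering has to happen on the server side, against an identity the server itself has verified.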
The proportion of native client–database applications we have seen with fundamental issues like this is staggering – they make up the majority by a long way. Other key weaknesses are also common, such as clients failing to validate the remote server’s identity (allowing them to unwittingly communicate with a man-in-the-middle attacker), and the use of home-grown encryption implementations that are often woefully insufficient.
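The server-identity weakness can be sketched with Python’s standard ssl module. No connection is made here; the two contexts simply show the configurations involved. The vulnerable pattern accepts any certificate, so a man-in-the-middle can present their own; the safe default verifies both the certificate chain and the hostname:

```python
import ssl

# Vulnerable pattern we commonly see: verification switched off entirely.
# Any certificate - including an attacker's - will be accepted.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# The safe default: validates the chain against trusted CAs and checks
# that the certificate actually matches the server we meant to reach.
secure = ssl.create_default_context()

print(insecure.verify_mode)   # CERT_NONE - identity never checked
print(secure.check_hostname)  # True - MITM certificates are rejected
```

A client built on the first context will happily hand its credentials and data to whoever sits between it and the real server.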
Many of these applications appear to have evolved over a number of years, and one often-cited explanation is that security awareness wasn’t as widespread then as it is now. This fits with the fact that many of the issues are embedded in the overall design. Additionally, native business applications are typically used within internal networks, and have therefore received significantly less attention than their exposed web counterparts; web and mobile application security have both been the subject of far more scrutiny in recent years.
One key challenge for organizations is transparency regarding the security of such critical business applications. To all appearances they are secure from an end-user perspective, potentially creating a false sense of security – but appearances can be deceiving. Transparency can often be provided by the vendor, who may be able to reveal their security model or the results of third-party security assessments; this can be a necessary step in gaining assurance over critical business applications. Whitebox security code reviews and blackbox penetration tests are ideal methods of assuring the security pedigree of such applications.
Native desktop applications often play a key role in modern businesses, and protecting critical data is a vital function they cannot fall short of. Yet they are frequently overlooked, even though many harbour the potential for significant vulnerabilities. Organizations should look to gain assurance over these important assets; security assessments and transparency from the vendor can be powerful tools in achieving this.