
Author: GRC Practice Director, Alex Hollis.

Blog Series Introduction

In this Third Party Risk Management blog series, Alex Hollis guides you through developing effective information gathering for third parties, using five key steps to formulate a third-party questionnaire. The webinar is available on-demand via BrightTALK here.


There are five key steps to the formulation of a third party questionnaire:

  • Requirements – establishing the needs of the organization, both the risks that need to be managed and the compliance needs arising from regulation and any stakeholder commitments.
  • Research – obtaining an understanding of the types of information needed to satisfy the requirements and prioritizing those needs across the various types of third parties the organization has.
  • Planning – considering the method, structure, and number of assessments (this can also include non-questionnaire approaches such as audits and interviews).
  • Writing questions – formulating the actual questions themselves and the method of response.
  • Testing – obtaining validation and identifying any areas of improvement.

In the ninth instalment, Alex continues to discuss the importance of shortening your questionnaires to keep your readers' attention. This includes the science behind your respondents' concentration levels and the number of questions asked. The blog includes real-life examples.

3) Limit the length of the questionnaire

You are limited by the amount of energy and attention a respondent has. In voluntary surveys, later questions in an assessment often suffer from Krosnick's satisficing issue: respondents provide sub-optimal responses as they become fatigued, before full assessment fatigue sets in and they stop answering any further questions altogether.

With third-party assessments where there is a business relationship, the same human psychology applies, but dropping out is not an option. So the same satisficing issue occurs: respondents continue on, but provide sub-optimal responses.

We analyzed 30 RFP, RFQ, and due diligence questionnaires we were sent in 2018. The number of questions ranged from 5 to 365, with a mean of 70.8 questions (median 60.6).

We also ran a relatively crude test, asking the team completing the assessments to record, every 15 minutes, the number of questions answered and their perceived level of focus on a scale of 1-10.

The survey had 170 questions and, as the chart shows, progress was relatively consistent. After 45 minutes, with 49 questions completed and 121 to go, the respondent reported that their attention was starting to suffer. They then worked a further 45 minutes and reported the same perceived drop in attention.

The respondent answered roughly one question every 60 seconds. The pace started at around 45 seconds per question and slowed over time, reaching 74 seconds per question by the end of the survey.

Attention then continued to decline over the remaining time until the 180-minute mark, when the assessment was complete; there was also likely a period of review and of uploading additional evidence.
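The pacing above can be sketched with a simple model. This is a hypothetical illustration, not part of the original study: it assumes the time per question grows linearly from the observed 45 seconds at the start to 74 seconds at question 170, and estimates total answering time.

```python
# Hypothetical linear-slowdown model of the pacing described above:
# ~45 s/question at the start, slowing to ~74 s/question by question 170.
def seconds_per_question(n, start=45.0, end=74.0, total=170):
    """Assumed linear slowdown across an assessment of `total` questions."""
    return start + (end - start) * (n - 1) / (total - 1)

# Estimated answering time for the full 170-question assessment.
total_seconds = sum(seconds_per_question(q) for q in range(1, 171))
print(round(total_seconds / 60))  # → 169 minutes
```

Under this assumed model the answering time alone comes to roughly 169 minutes, which is consistent with the 180-minute completion time once review and evidence upload are included.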

Research by SurveyMonkey looked at a random sample of 100,000 surveys of between 1 and 30 questions and found a steady decline in the amount of time spent on each question as surveys grow longer. Given that, it could be argued that the quality of the responses deteriorates.

What this suggests is that when completing a voluntary assessment, the time per question decreases and we see Krosnick's satisficing issue; in third-party assessments, however, the time per question increases slightly.

We also have to consider that in a third-party questionnaire, a question that cannot be answered in the moment may be skipped in favour of easier questions, so the harder questions are answered later. The questionnaire may also already have been optimized to put the straightforward questions first.

What the research here suggests is that the satisficing issue still occurs: the respondent is trying to reduce the cognitive load of answering the questions. They reduce the time spent in the recall phase, collecting facts, which then manifests as more time spent summarising, or crafting a satisfactory response from the less optimal facts collected.

Aim for as few questions as possible to ensure the quality of each answer. If you have more than 100 questions, look to reduce the count or split the questionnaire into separate assessments.
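The splitting rule above can be expressed as a short helper. This is an illustrative sketch, with the 100-question ceiling taken from the guidance above; the function name and structure are my own, not from the original series.

```python
# Hypothetical helper: split a long question list into separate
# assessments of at most `limit` questions each (100 is the ceiling
# suggested in the guidance above).
def split_assessment(questions, limit=100):
    return [questions[i:i + limit] for i in range(0, len(questions), limit)]

# A 170-question assessment becomes two parts of 100 and 70 questions.
parts = split_assessment([f"Q{n}" for n in range(1, 171)])
print([len(p) for p in parts])  # → [100, 70]
```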

4) Don’t ask for more than you need

Where the respondent is unlikely to have easy access to specific data, asking for specifics will generate more cognitive load and therefore create more fatigue in the respondent.

What are the most recent average page load times for your application to the millisecond?

This could be improved by asking for the information you are looking to confirm. Third parties will typically provide supporting data proactively if it is available.

Are the page load times less than 1 second on average? (provide evidence where available)

5) Softer questions are easier to answer

The softer you make the question, the easier it is to answer. Phrases such as, ‘approximately’, ‘estimate’ and ‘as best as you can remember’, reduce the pressure on the respondent to provide exact answers.

This is difficult when dealing with compliance, as we want to ensure that the information can be relied upon. However, it is worth keeping in mind the additional stress this places on the respondent; where specifics are not required, letting the respondent know will go further toward reducing fatigue.

Next Week…

Stay tuned for the next blog in this series, TPRM Blog 10 – Increasing the Reliability of your Respondents' Answers. It will discuss the importance of giving your respondent the correct options to ensure their answers contain reliable data.

To view the previous blogs in the series click here.

See you next week!
