Assessment fatigue comes from the medical world and relates to the sickness and false symptoms a patient may display when subjected to an excessive level of diagnostic testing. In the third party world, we have borrowed the term to explain the exhaustion felt when third parties are hit with a large number of assessments and questions.
Studies of assessments have found that increasing the number of questions reduces the time spent per question, which may in turn affect the quality of the answers. SureCloud’s research found that the focus of someone answering due diligence assessments drops by over 40% after the first 100 questions, with the decline accelerating thereafter.
We would love it if a third party gave us a list of the risks they manage on our behalf with a rating and any mitigating controls. However, that won’t happen. So, we need to reach that same level of trust through enquiry.
A bespoke survey assessment is a low-cost and universally applicable solution to the problem of collecting answers. It can be enhanced with technology to manage which questions are asked and to ensure answers are given in a form that can be automatically processed and scored.
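To make the idea of automatic processing concrete, here is a minimal sketch of scoring structured answers. The question identifiers, answer options, and risk weights are purely illustrative assumptions, not any vendor's actual scheme; the point is that constrained answer formats can be scored without human effort.

```python
# Illustrative risk weights for structured answer options (assumed values).
RISK_WEIGHTS = {"yes": 0, "partial": 2, "no": 5, "n/a": 0}

def score_assessment(answers: dict[str, str]) -> int:
    """Sum risk points across structured answers; a higher total means
    the responses collectively indicate more risk."""
    return sum(RISK_WEIGHTS.get(a.lower(), 0) for a in answers.values())

# Hypothetical responses from a third party.
responses = {
    "q1_encryption_at_rest": "yes",
    "q2_mfa_enforced": "partial",
    "q3_incident_response_plan": "no",
}
print(score_assessment(responses))  # 7
```

Free-text answers cannot be scored this way, which is one reason structured answer formats matter for reducing reviewer workload.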
Further, the transmission and return of the assessment can also be managed, ensuring that reminders are sent and activity is tracked.
The goal is to ask the right number of questions to reach an opinion on the risk a third party presents without incurring quality issues due to assessment fatigue. Using technology to strip out questions that are irrelevant to the third party and the context of the engagement, or that are duplicated, aimless, unnecessarily detailed, or included for general exploration, reduces exposure to assessment fatigue. While questions can always be improved in general, the relevance of some questions only becomes apparent once other questions have been answered, a property referred to as question dependence. Question dependence needs a technology solution at the point of answering to introduce or remove questions.
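Question dependence can be sketched as a simple rule at the point of answering: a follow-up question is only shown once its parent question has the answer that makes it relevant. The questions and dependency rules below are hypothetical examples, not taken from any real questionnaire.

```python
# Hypothetical question bank: each entry may depend on a (parent, answer) pair.
QUESTIONS = {
    "hosts_data": {"text": "Do you host our data?", "depends_on": None},
    "data_location": {"text": "In which region is the data hosted?",
                      "depends_on": ("hosts_data", "yes")},
    "encryption": {"text": "Is the data encrypted at rest?",
                   "depends_on": ("hosts_data", "yes")},
}

def relevant_questions(answers: dict[str, str]) -> list[str]:
    """Return the IDs of questions that should currently be shown,
    given the answers collected so far."""
    shown = []
    for qid, q in QUESTIONS.items():
        dep = q["depends_on"]
        if dep is None:
            shown.append(qid)  # always relevant
        else:
            parent, required = dep
            if answers.get(parent) == required:
                shown.append(qid)  # parent answer makes this relevant
    return shown

print(relevant_questions({}))                     # ['hosts_data']
print(relevant_questions({"hosts_data": "yes"}))  # all three questions
```

A third party that does not host data never sees the hosting follow-ups, which is exactly the fatigue reduction described above.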
Once an assessment is completed and returned, the submitted information is reviewed. The reviewer can also suffer assessment fatigue, and the quality of that review can deteriorate.
I’d suggest that higher-risk questions, or those whose answers are not easily interpreted automatically, should be reviewed earlier than those that present little or no risk and are straightforward to interpret automatically. The attention of your human risk expert is valuable and expensive, so ensure it is directed towards the responses where it can provide the most value.
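One way to apply this principle is a simple triage step before human review: responses that carry risk or contain free text go into a manual queue, riskiest first, while low-risk structured responses are accepted automatically. The field names and the risk threshold below are illustrative assumptions.

```python
def triage(responses):
    """Split responses into a manual-review queue (riskiest first)
    and an auto-accept list. Assumes each response carries an
    illustrative 'risk' score and a 'free_text' flag."""
    manual, auto = [], []
    for r in responses:
        if r["risk"] >= 3 or r["free_text"]:
            manual.append(r)   # needs human attention
        else:
            auto.append(r)     # low risk and machine-readable
    manual.sort(key=lambda r: r["risk"], reverse=True)
    return manual, auto

# Hypothetical returned responses.
responses = [
    {"id": "q1", "risk": 0, "free_text": False},
    {"id": "q2", "risk": 4, "free_text": False},
    {"id": "q3", "risk": 1, "free_text": True},
]
manual, auto = triage(responses)
print([r["id"] for r in manual])  # ['q2', 'q3']
print([r["id"] for r in auto])    # ['q1']
```

The reviewer then works through the manual queue from the top, spending their freshest attention on the riskiest answers.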
Often these interactions take the form of clarifications, conducted over email, around the questions asked and the answers given. This occurs most often when questions are asked that are out of keeping with the context of the organization, for example, asking an online-only software company whether its software can be deployed in a virtual machine. The asynchronous nature of email means that the context of the question, and even the question text itself, is often lost in the exchange. This adds confusion and further fatigue in explaining why a question is not relevant or why alternative controls have been chosen. Keeping the conversational exchange part of the assessment is a useful approach for keeping all context together.