Industry: Mortgage
Company: Koodoo
Improving the Call Evaluation Tool to Enhance Trust in Compliance Reviews
A Bit of Context
Our Compliance team uses Koodoo Evaluate for compliance reviews, so we set up a feedback loop to help us do structured discovery. 🤔
Feedback mechanisms in place at the moment: the team shares feedback with us in two ways:
Structured (Regular Check-ups)
Unstructured (Slack Messages)
What have we heard about the Compliance team's process and pain points?
From these structured sessions, we have heard there are two main outcomes we need to drive:

Outcomes
Trust and accuracy: the team can rely on conversations being captured and evaluated correctly.
Time: how long it takes to grab and process conversations for review.
Prioritising our Outcomes
Based on sessions with the compliance team, we prioritised the outcomes by how much impact each would have on evaluation scores.

WHY Trust and Accuracy?
We could reduce the time it takes to grab conversations, but if we were grabbing and processing them inaccurately, then the value would be low.
The How
How do we improve trust and accuracy?
When we started looking into why trust and accuracy were low, we found the following opportunities:


Opportunity 1: Improving Conversation Accuracy
Hypothesis: Including non-voice communication will enhance conversation accuracy by providing complete information.
Testing: Run ten side-by-side comparisons to determine whether improved coverage also improves accuracy.

What do we need to do to run this test: Support non-voice communication channels via manual upload so the compliance team can start including non-voice comms in their side-by-side evaluations.
Assumptions We Needed to Challenge
Important information from a compliance perspective is being communicated via non-voice channels.
Most conversations contain non-voice communications.
We need to order conversations for an accurate evaluation, and we can easily order them by the time at which they happen (see the sketch after this list).
People don’t need to know which conversation a check was in; they just need to know that a check was done.
To have confidence, people need to know that both voice and non-voice conversations were uploaded, and they need to be able to see on the platform which conversations are voice and which are non-voice.
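To make the ordering assumption concrete, here is a minimal sketch of what "order conversations by the time at which they happen" could look like. The data shapes (ConversationItem, build_timeline) are hypothetical illustrations, not Koodoo's actual model; the sketch assumes each voice or non-voice item carries a single timestamp.

```python
# Minimal sketch (hypothetical data shapes, not Koodoo's actual model):
# interleave voice and non-voice items into one timeline by timestamp.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConversationItem:
    channel: str          # "voice" or "non-voice" (e.g. email, SMS)
    started_at: datetime  # single timestamp per item (the assumption under test)
    content: str          # transcript text or message body

def build_timeline(items: list[ConversationItem]) -> list[ConversationItem]:
    """Order all items by when they happened, regardless of channel."""
    return sorted(items, key=lambda item: item.started_at)

timeline = build_timeline([
    ConversationItem("voice", datetime(2024, 3, 1, 10, 0), "Intro call transcript..."),
    ConversationItem("non-voice", datetime(2024, 3, 1, 9, 45), "Email: documents attached..."),
])
# As the later learning shows, one timestamp per item is often not enough:
# a non-voice thread can span a period rather than a single point in time.
```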
User Testing of Designs with the Compliance Team
Key Tasks:
Basic Navigation: Assessing button locations and overall navigation.
Audio Player Navigation: Evaluating ease of use and identifying first touchpoints.
Transcript (now Conversation) Navigation: Reviewing navigation through the transcript (now conversation).
Questions asked to test the assumptions:
What types of conversations are you able to identify just by looking at the report page?
How are you able to distinguish between voice and non-voice in the transcript?
How do you find a particular check in the conversation?

Key Learnings
Assumptions:
Validated:
Important information from a compliance perspective is being communicated via non-voice channels.
Most conversations contain non-voice communications.
People need to know that both voice and non-voice conversations were uploaded to have confidence.
Challenged:
We assumed we could easily order conversations by the time at which they happen; in practice, conversation order is more complex than we thought once non-voice communications are involved.
New Opportunities:
Improved visibility of where in the transcript specific checks passed or failed.
With large amounts of conversation history, reviewers need to see where conversations start and end in the transcript, e.g. to check disclaimers.
Grouping voice and non-voice conversations together.