How does the AI work and what does the AI look at to provide an overall score for an assessment?
What we have developed is not a simple “one size fits all” approach. Our machine-learning algorithms are multilayered and unlike any other. Not only do we learn about the candidates, but we also learn the unique preferences of each individual employer.
Vervoe's AI feature can predict scores for any type of skills assessment. We have found a way to create bespoke algorithms for every employer and specifically for every one of their open roles. The algorithms adapt to the unique preferences of the employer and don’t require a specific set of questions.
Vervoe's machine learning feature consists of three models that help calculate candidate scores:
1. The "how" model: This model analyzes a candidate's behavior on the platform- how they complete the assessment.
Example: it observes how long it takes a candidate to respond to questions, spelling, and grammar, how many times they make changes to an answer, are they copying and pasting, etc. It is looking at many data points to help score their performance. (It's important to note that there are no right or wrong behaviors. Our machine learning models simply learn what kind of behaviors correlate with good or bad grades, and it's different for every type of role.)
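For illustration only, here is a minimal sketch (not Vervoe's actual code) of how behavioral signals like these could be turned into a numeric feature vector for a scoring model. Every field and function name below is hypothetical.

```python
# Illustrative sketch only: not Vervoe's implementation. It shows the kind of
# behavioral signals a "how" model might turn into features for scoring.
from dataclasses import dataclass

@dataclass
class AttemptBehavior:
    seconds_to_answer: float   # time from viewing the question to submitting
    edit_count: int            # how many times the answer was revised
    paste_events: int          # copy/paste actions detected in the editor
    spelling_errors: int       # misspellings flagged in the free-text answer
    word_count: int

def behavior_features(b: AttemptBehavior) -> list[float]:
    """Convert raw behavior into a numeric feature vector for a scoring model."""
    return [
        b.seconds_to_answer,
        float(b.edit_count),
        float(b.paste_events),
        b.spelling_errors / max(b.word_count, 1),  # error rate, not raw count
        float(b.word_count),
    ]

# Two candidates with very different working styles produce different vectors;
# a model learns which patterns correlate with high or low grades for a given
# role, not which behaviors are "right" or "wrong".
print(behavior_features(AttemptBehavior(240.0, 6, 0, 1, 180)))
print(behavior_features(AttemptBehavior(35.0, 0, 3, 0, 60)))
```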
2. The "what" model: The system analyses the quality of the candidate's response. All questions require a correct answer sample to compare the candidate's answers to the answers provided. Through natural language processing, a candidate's responses are viewed by looking for words, phrases, sentence structure, and other sentiments that accurately reflect the outcomes required.
Example: Comparing how closely a candidate answers the question vs. the correct answer sample provided within the assessment.
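To illustrate the idea of comparing an answer to a correct answer sample, here is a simplified sketch (not Vervoe's implementation). A basic bag-of-words cosine similarity stands in for the real natural language processing pipeline.

```python
# Illustrative sketch only: not Vervoe's implementation. It shows the core idea
# of the "what" model: measure how close a candidate's answer is to the sample.
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity(candidate_answer: str, answer_sample: str) -> float:
    """Cosine similarity between two answers, in the range 0.0 to 1.0."""
    a, b = bag_of_words(candidate_answer), bag_of_words(answer_sample)
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

sample = "Apologise to the customer, investigate the late delivery, and offer a refund."
answer = "I would apologise, look into why the delivery was late, and refund the customer."
print(round(similarity(answer, sample), 2))  # higher means closer to the sample
```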
3. The "preferences" model: This model requires input from the user to train it to understand what the scale of bad to good answers look like for their specific use case. This method uses a model called 'iterative' where a user blindly grades a set of candidate responses to individual questions by giving them a score from 0-10. The set of questions that are exposed to the user to grade are the furthest apart from each other. This scoring feedback teaches the AI what you value; things like spelling and grammar or a positive sentiment.
Example: If a user grades one response as a 10 our model will then look for an answer that appears to be completely different to see how you score that answer. This variation in responses helps the model quickly identify and plug the gaps in between the potential score ranges to accurately grade all candidates with your preferences in mind.
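As a rough sketch of how responses that "differ most from each other" could be chosen for blind grading, the example below uses farthest-point sampling over a toy word-overlap distance. It is not Vervoe's code, and the distance function is a deliberately simple stand-in.

```python
# Illustrative sketch only: not Vervoe's implementation. Start from one
# response and repeatedly add the response furthest from everything already
# selected, so the user grades maximally different answers on the 0-10 scale.
def pick_responses_to_grade(responses: list[str],
                            distance,            # function (str, str) -> float
                            how_many: int) -> list[str]:
    selected = [responses[0]]
    while len(selected) < min(how_many, len(responses)):
        # Choose the response whose nearest selected neighbour is furthest away.
        next_pick = max(
            (r for r in responses if r not in selected),
            key=lambda r: min(distance(r, s) for s in selected),
        )
        selected.append(next_pick)
    return selected

# A toy distance based on shared words; a real system would use richer NLP.
def word_distance(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(wa & wb) / len(wa | wb)

answers = [
    "Apologise and refund the customer immediately.",
    "Apologise and offer the customer a refund.",
    "Ignore the complaint and close the ticket.",
    "Escalate to a manager without replying.",
]
print(pick_responses_to_grade(answers, word_distance, how_many=3))
```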
Please note that for video and audio answers, the AI does not look at facial features or voice intonation. Instead, it reviews and analyzes a transcript of the response.
The AI feature will score and rank the candidates based on the combination of data points compiled from the three models.
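As a simplified illustration of combining scores from several models into one ranking, here is a sketch using a weighted average. The weights and scores are made-up placeholders, and this is not how Vervoe necessarily combines its models.

```python
# Illustrative sketch only: one simple way per-model scores could be combined
# into an overall score and ranking. Weights here are hypothetical.
def overall_score(how: float, what: float, preferences: float,
                  weights=(0.2, 0.4, 0.4)) -> float:
    """Each input score is assumed to be on a 0-10 scale."""
    w_how, w_what, w_pref = weights
    return w_how * how + w_what * what + w_pref * preferences

candidates = {
    "Candidate A": overall_score(how=7.0, what=9.0, preferences=8.5),
    "Candidate B": overall_score(how=8.0, what=6.0, preferences=5.5),
}
ranked = sorted(candidates.items(), key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.1f}")
```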
To get the most out of this feature, we recommend reviewing this article, which explains how to optimize and train the AI to grade more accurately in your account.