AI Optimize 2.0 delivers major performance and usability improvements across Vervoe’s artificial intelligence.
What’s new with AI, and what does the AI Health update mean for you?
AI Health is available on all Vervoe plan types, and no special configuration is required to enable it. AI Health triggers on new candidate completions, or when all questions in one candidate’s assessment are manually graded. The latest changes reflect usability improvements that bring the feature into a customer’s workflow, along with enhancements that add new capabilities. These are outlined below:
AI scoring has undergone significant speed and performance improvements, allowing candidate AI scores to return in minutes; on average, a score returns in five minutes. This helps organizations validate skills even faster than before, reducing their time-to-hire.
Optimize stage (AI Health)
Organizations can now track and monitor how their AI score is performing across assessment stages, thanks to AI Health. This allows hirers to see which machine learning models are activating so they can make optimizations in real time, including narrowing the difference between AI and team scores to build trust.
The Optimize stage of an assessment no longer exists as a separate stage; instead, it is built into both the Invite and Select stages via the top statistics modal. This modal is visible at all times and displays different statuses to help customers understand whether action is required.
The statuses reflected for an assessment are:
- Low: More candidate answers required to predict accurate scores
- Average: Questions require correct answer samples
- Medium: Candidate answers require initial grading
- High: More grading required as more variety has appeared in responses
- Very High: Grade now to personalize your AI model
- Optimizing: Recalculating based on grading
- Optimized: Assessment Grading Complete
A few design improvements have been made to improve usability. These include:
Estimated Score Range: A range guide has been added to the grading box to assist users in their score selection. This improvement helps users reach the optimal score for each grading bucket.
Grading Required: The right number is the number of responses you need to grade in order to optimize the assessment. The left number is the number of responses already graded toward this target.
Responses: This is the total number of candidates who have responded to this assessment question.
Average Team Score gives you the ability to see who gave what scores to each candidate response. This feature should be used as a safety measure to ensure that candidates are being manually graded fairly and without bias. If a user feels a response has been graded incorrectly, they can click the flag icon to the right to mark the response for review.
Here you’ll be able to see individual candidate responses and scores; candidate details are anonymised. You can flag any outliers or inconsistencies that may introduce bias or affect your AI grading models.
Once a grade has been flagged, it is automatically added to the new grading inconsistencies count on Assessment Insights. From there, flagged grades can be periodically reviewed by the customer and their account manager. If it’s deemed that a score should be updated or removed, the Product and Engineering teams can action this request manually.
Question Insights shows the AI models used to grade each question. This can be accessed at any time by clicking on the graph icon in the top right corner of the optimize modal.
The panel highlights in green the models that have been activated and used to grade the question, along with what each model actually reviews. The status next to a deactivated model alerts the user that grading is required to activate that model.
Watch a quick overview here.