AI Health gives you visibility into how your assessment questions are being graded by Vervoe's AI — and lets you improve grading accuracy over time. This article explains how it works and how to get the most out of it.
What is AI Health?
AI Health shows you how accurately the AI is grading candidate responses for a given assessment, and prompts you to take action when grading can be improved.
Where to find it
The AI Health indicator appears at the top of both the Invite and Select stages of your assessment. You can check it at any time — there's no separate "Optimize" tab to navigate to.
Clicking the indicator opens the Optimize modal, where you can grade candidate responses and improve your AI model.
AI Health statuses
Your assessment will display one of the following statuses:
| Status | What it means |
|---|---|
| Low | More candidate answers are needed to predict accurate scores. Only the How model is active. |
| Medium | Questions require grading. The How model and example answers are active. |
| High | Candidate answers still require grading. The How model, example answers, and 3 iterative model score buckets are active. |
| Very High | Grade now to personalise your AI model. |
| Optimizing | The AI is recalculating based on recent grading. |
| Optimized | Your AI model is fully optimised. The How model, example answers, and 5+ iterative model score buckets are active. |
What to expect by default:
- Assessments from the Vervoe library typically start at High, as they already have a dataset behind them.
- New assessments typically start at Medium, as they include sample answers but no candidate data yet.
Once an assessment reaches Optimized, no further grading is needed — unless a large volume of new completions comes in that the AI can't confidently grade, or you make changes to the assessment questions.
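The status progression above can be summarised as a simple lookup from status to active grading components. The sketch below is purely illustrative — it is not part of any Vervoe API, and the field names and structure are assumptions based solely on the table above:

```python
# Illustrative only: summarises the AI Health statuses described above.
# Not a Vervoe API -- the structure and field names are assumptions.

AI_HEALTH_STATUSES = {
    "Low":       {"example_answers": False, "score_buckets": 0},
    "Medium":    {"example_answers": True,  "score_buckets": 0},
    "High":      {"example_answers": True,  "score_buckets": 3},
    "Optimized": {"example_answers": True,  "score_buckets": 5},
}

def active_components(status):
    """List the grading components active at a given AI Health status."""
    info = AI_HEALTH_STATUSES[status]
    parts = ["How model"]  # the How model is active at every status
    if info["example_answers"]:
        parts.append("example answers")
    if info["score_buckets"]:
        suffix = "+" if status == "Optimized" else ""
        parts.append(f"{info['score_buckets']}{suffix} iterative model score buckets")
    return parts

print(active_components("High"))
```

In this framing, a new assessment starts at Medium (example answers only), while a library assessment starts at High because its existing dataset has already activated the first score buckets.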
Grading in the Optimize modal
When you open the Optimize modal, you'll see the following to help you grade efficiently:
- Estimated Score Range — A suggested score range is shown as a guide. You're free to choose any score you see fit, but the estimate helps you reach the right score faster and reduces the number of responses that need manual grading.
- Responses — The total number of candidates who have answered this question.
- Grading Required — Shows how many responses have already been graded (left number) and how many still need grading to optimise the assessment (right number).
- Average Team Score — The average score given to this question across all manual grades. Click it to see a full breakdown by team member (see below).
- Answered In — How long the individual candidate took to answer the question.
- Skills Group — The skill group the question belongs to.
- Suggested Answers — Flags if the question is missing a sample answer.
Average Team Score
The Average Team Score panel lets you see who graded each candidate response and what score they gave. This is designed to help ensure candidates are being graded fairly and without bias.
To review scores:
- Click See Scores next to the average score statistic.
- A panel will open showing all manual grades for that question, including the score and the team member who gave it.
- If you believe a response has been graded incorrectly, click the flag icon to mark it for review.
Note: Score updates are handled by the Vervoe team on a case-by-case basis. Minor score changes affecting a single candidate are unlikely to be actioned. Flags are best used for significant grading errors — for example, where a candidate has been given a score of 1 when it should have been a 9.
Question Insights
Question Insights shows you which AI models are being used to grade each question.
To access it: Click the graph icon in the top right corner of the Optimize modal.
Active models are highlighted in green, along with a description of what they assess. If a model is inactive, a Grade status will appear next to it, indicating that more manual grading is needed to activate it.