Job Setup
Evaluation Criteria
Define how the AI evaluates candidate responses after their interview.
How Evaluation Works
After an interview completes, you can generate an evaluation. The AI reviews the transcript against your criteria and produces:
- An overall score (1-100)
- A summary of the candidate's performance
- Scores for each criterion you've defined
- Knockout flags if applicable
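The output above can be pictured as a simple record. This is only an illustrative sketch — the field names and types below are assumptions, not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Illustrative shape of an evaluation result (field names are assumptions)."""
    overall_score: int          # overall score on a 1-100 scale
    summary: str                # AI-written summary of the candidate's performance
    criterion_scores: dict      # criterion name -> score for each defined criterion
    knockout_flags: list        # triggered knockout rules, empty if none apply

# Example of what a generated evaluation might contain
result = Evaluation(
    overall_score=82,
    summary="Strong technical depth; communication was clear and structured.",
    criterion_scores={"Technical Skills": 85, "Communication": 78},
    knockout_flags=[],
)
```

Per-criterion scores appear alongside the overall score, so a candidate can be strong overall while still flagging a weak area.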
Overall Evaluation Criteria
This is the general guidance for how the AI should assess candidates. Describe what a strong candidate looks like for this role. Include:
- Key skills and experience to look for
- Communication style expectations
- Red flags to watch for
- Any must-haves vs nice-to-haves
Evaluation Criteria
Add specific criteria for a more detailed breakdown. Each criterion has:
Name
A label for this criterion (e.g., "Technical Skills", "Communication", "Culture Fit")
Criteria Description
What the AI should look for when scoring this. Be specific about what good, average, and poor performance look like.
Example:
Name: Technical Problem Solving
Criteria: "Assess how the candidate approaches technical problems. Strong candidates explain their thought process clearly, consider edge cases, and can break down complex problems. Weak candidates give vague answers or can't articulate their approach."
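The example above boils down to a name plus a description. As data, a criterion might look like the following sketch (a hypothetical representation, not the product's stored format):

```python
# Hypothetical representation of one evaluation criterion.
criterion = {
    "name": "Technical Problem Solving",
    "description": (
        "Assess how the candidate approaches technical problems. "
        "Strong candidates explain their thought process clearly, "
        "consider edge cases, and can break down complex problems. "
        "Weak candidates give vague answers or can't articulate their approach."
    ),
}
```

Spelling out what strong and weak answers look like, as this description does, gives the AI concrete anchors for scoring rather than leaving "good" undefined.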
Per-Question Criteria
For each interview question, you can add specific evaluation criteria. This is useful when different questions assess different skills.
Document Evaluation
If you've set up document requirements (like CVs or certificates), you can define criteria for evaluating those documents separately. This is done in the Document Requirements section.
Running Evaluations
Evaluations can be triggered:
- Manually: Click "Evaluate" on a candidate with a completed interview
- Bulk: Select multiple candidates and click "Evaluate"
- Automatically: Use workflows to evaluate after interviews complete
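The automatic trigger amounts to: whenever a candidate's interview completes and no evaluation exists yet, run one. A minimal sketch of that logic, using stand-in data and a placeholder `evaluate` function (neither is a real product API):

```python
# Stand-in candidate records; the real data lives in the product.
candidates = [
    {"name": "Ada", "interview_status": "completed", "evaluation": None},
    {"name": "Grace", "interview_status": "in_progress", "evaluation": None},
]

def evaluate(candidate):
    # Placeholder: the real system reviews the transcript against
    # the configured criteria and produces scores plus a summary.
    candidate["evaluation"] = {"overall_score": 75, "summary": "Solid overall."}

# Workflow-style trigger: evaluate completed, not-yet-evaluated interviews.
for candidate in candidates:
    if candidate["interview_status"] == "completed" and candidate["evaluation"] is None:
        evaluate(candidate)
```

The same guard (completed interview, no existing evaluation) is what makes bulk evaluation safe to re-run without duplicating work.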
Viewing Results
Click on a candidate's interview to see their full evaluation, including the transcript, scores, and AI summary.