In the latest webinar in our Early Careers Expert Series, Martin Kavanagh, Head of Assessment at Amberjack, took a deep dive into the 2024 – 2025 Early Careers recruitment season and what it taught us about Assessment.
Amberjack’s Philosophy
Aided by access to comprehensive data, Assessment at Amberjack operates on three core beliefs.
- Assess for potential, not privilege or experience.
- Assessment is an extension of attraction.
- We are an end-to-end service provider, from attraction to onboarding and development.
Using these philosophies as a base, Martin explored the key learnings from the most recent Early Talent season.
Application Numbers
The increase in application numbers has been a hot topic over the last year. The impact of Generative Artificial Intelligence (Gen AI), the decrease in the number of available Early Careers schemes, and the increase in location flexibility due to hybrid working mean that competition for any one role is much higher than it used to be.
The 2024 – 2025 season has revealed some key trends in this area:
- Although there is variation based on client size, we are seeing a 20% average increase in applications across campaigns.
- Large employers are seeing mixed results (some doubled, some declined).
- Medium employers had the largest increase, possibly due to applicants broadening their search.
- Small employers actually saw a 14% decrease in applications – are candidates trying to minimise risk? There are opportunities here for smaller organisations.
Martin’s key takeaway for attendees was that application growth is real but nuanced; smaller employers may need to improve attraction strategies.
The Impact of Gen AI
Looking at the year-on-year changes in autoscoring, the last season has shown us that AI may be having less of an impact than has been feared. When comparing 2024 to 2025, Martin and Amberjack’s expert Assessment Team have seen some interesting findings:
- Auto-scored assessments such as SJTs and applied intellect, etc.:
- No significant change in average scores.
- No evidence of AI gaming.
Is the capability of AI overstated? Or are we seeing less adoption of AI by candidates than anticipated?
- Manually scored assessments such as video interviews:
- Slight anomalies, such as score peaks at mid-range levels, but average scores are stable or slightly lower.
- Up to 10% of candidates flagged for AI/scripted responses.
Overall, AI use is not yet degrading assessment integrity and is having no noticeable impact on the spread of auto-scored elements of assessment, but manually scored assessments need continued monitoring.
Predictive Validity of Assessment Methods
Martin’s exploration here focused on the question: which assessment methods are most predictive?
Interestingly, the rapid adoption of AI seems to be, at present, having no consistent impact on the predictive validity of Amberjack assessments. The most important takeaways from the session centred on the design and structure of assessment methods.
- Blended assessments (auto + manual scoring) are most predictive of success at assessment centres.
- Hurdled approaches (sequential assessments) are less effective.
- Consistency in design (fully bespoke or fully off-the-shelf) yields better predictive validity than mixed models.
Candidate Experience
Net Promoter Score (or NPS) is measured on a scale of -100 to +100 and poses questions such as ‘would you recommend this organisation to your family and friends?’. Scores below 0 indicate generally net-negative feedback, and scores above 0 generally net-positive feedback.
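For readers unfamiliar with how that -100 to +100 range arises, the standard NPS calculation can be sketched as follows. This assumes the conventional 0–10 response scale (not specified in the webinar), where 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors:

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count only
    toward the total. NPS = % promoters - % detractors, so the result
    always falls between -100 and +100.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 6 promoters, 2 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 9, 8, 7, 5, 3]))  # -> 40
```

A score of +50, as Martin suggests for Online Assessments, therefore means promoters outnumber detractors by at least half of all respondents.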
For large employers, NPS scores were higher at the online assessment stage. This is perhaps due to bespoke design, garnering more engagement from candidates. This reverses for small employers, who see higher NPS scores at the Assessment Centre stage, likely related to the naturally more personal and attentive experiences.
Generally, for Online Assessments, Martin suggests that +50 signifies a strong score. For Assessment Centres, this increases to +80 for a strong NPS score.
Another question that drew Martin’s attention was: ‘do candidates hate Video Interviews?’. The results are perhaps surprising.

Video Interviews aren’t necessarily as unpopular as initially assumed. Although candidates might not love Video Interviews, we are seeing that this is not necessarily enough to put them off completing an assessment. As a whole, completion rates and NPS scores were higher when Video Interviews were included in a Blended Assessment format.
Most Effective Methods for High Volumes
As we neared the end of the session, Martin turned to the different types of Assessment methods, and which ones are most effective for the high volumes we are currently seeing. His key takeaways were clear:
- The more blended your Assessment is, the better.
- Consistency in your design matters.
- Shortlisting candidates based on total score is better than shortlisting based on individual elements.
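The difference between shortlisting on total score and shortlisting on individual elements can be sketched as below. The candidate data, element names, and cut-offs are entirely hypothetical illustrations, not Amberjack's actual scoring model:

```python
# Hypothetical candidates scored on two assessment elements (out of 100).
candidates = {
    "A": {"sjt": 85, "interview": 55},   # strong SJT, weaker interview
    "B": {"sjt": 62, "interview": 61},   # middling on both
    "C": {"sjt": 58, "interview": 90},   # weaker SJT, strong interview
}

ELEMENT_CUTOFF = 60   # element-level: must clear the bar on every element
TOTAL_CUTOFF = 125    # total-score: combined performance is what counts

# Element-level (hurdled) shortlisting: one weak element rejects a candidate.
by_element = [name for name, s in candidates.items()
              if all(score >= ELEMENT_CUTOFF for score in s.values())]

# Total-score (compensatory) shortlisting: strength in one element can
# offset weakness in another.
by_total = [name for name, s in candidates.items()
            if sum(s.values()) >= TOTAL_CUTOFF]

print(by_element)  # -> ['B']
print(by_total)    # -> ['A', 'C']
```

With these illustrative numbers, the two approaches select entirely different people: element-level hurdles keep only the uniformly middling candidate, while the total-score view rewards candidates with a genuine strength.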
The Future of Assessment
Ultimately, the future of Assessment in Early Careers remains uncertain. There are some big players such as AI and application volumes rearing their heads, and the full impact of these remains to be seen. Although initial findings suggest that the impact may be lower than anticipated, this varies depending on organisation size, assessment methods and design, and more.
If you’d like to discuss the findings of this webinar with a member of the team, or learn more about Assessing with Amberjack, you can get in touch via our website, or email Martin Kavanagh, Head of Assessment directly. Our team are more than happy to help.
You can also watch the full recording of the session now on YouTube.