
We can’t tie our shoelaces so it’s too soon to start running: the upside to data limitations in the era of AI and Deep Learning.

Whilst we live in an era of technological possibility, from a Future Talent perspective organisations are still struggling with their key data foundations.

Until our data lakes are full of the right data, and that data is properly scrubbed and organised, the application of deep learning and AI is just as likely to take us around in circles as to take us forward. Arguably this is a good thing for now: the pace of change in the industry means that even the largest, most capable and best-resourced teams are struggling to keep up, and it buys us all a little time to think about the implications of the brave new world it seems inevitable we will enter.

On the one hand, it seems strange that we would worry more about imperfections in a robot-run assessment process than we currently worry about imperfections in human-run processes: our inherent human biases mean that, at best, the processes of today can only aspire to be as free from unfairness as possible. On the other hand, it is perhaps only natural that a conscious bias is less tolerable than an unconscious one and, whilst machines can now learn, the foundations for that machine learning need to be consciously set. There is also a realistic limit on what can be done to address human biases, juxtaposed against a hope that the same may not be true of machine bias: if we can optimise the set-up of algorithms and continually improve them, maybe we can achieve a utopian, bias-free selection process? It is a high-stakes situation: AI for assessment, applied well, could be a true force for good; applied badly, it could be disastrous at a societal level.

So, what are the moral and ethical boundaries in the application of AI for assessment? Who should set them, and how do we police them? How do we ensure that we continually improve from the best possible starting point? Whilst even the most progressive organisations still only give themselves a 4 out of 10 for data readiness (Source: Amberjack Future Focus, 14th September 2019), we should use this temporary reprieve to set standards that ensure the 5th Industrial Revolution results in the Future Talent processes of dreams, not nightmares.

Future Talent specialists clearly aren't the only stakeholders in the debate about the application of AI for assessment. They are, however, arguably at the forefront of it: the nature of Future Talent Programmes (high volume, fewer hiring variables, strategic sponsorship) means they are usually the most obvious place to start implementing new assessment technologies aimed at driving efficiency and effectiveness. As Future Talent specialists, Amberjack therefore have a deeply vested interest in helping to set augmented assessment up for success. So, whilst our clients wrestle with their data and work to lay the best possible data foundations, we will be wrestling with the principles of fairness, reliability, transparency, privacy/security and accountability as they relate to the application of AI to Assessment. Along with many of the pioneers in the application of AI for Assessment, as well as leading thinkers from across the wider AI/Data Science community, we will be forming an Advisory and Ethics Board to offer support, best practice advice and guidance for employers. Whilst it is difficult to confidently define governance boundaries whilst AI technology is still evolving, we will work to create and evolve consensus on societal principles, values and best practices in order to maximise the chances of AI adding as much value in Assessment for Selection as it already does in Accessibility.

To find out more, call us on 01635 584130 or fill out your details on the right.
