AI Can't Read Resumes Better Than You Think


created using Envato


"Our AI resume screener is game-changing."


Maybe. Or maybe it's learning your biases at scale.


Recent research on AI resume screening found a troubling pattern: when trained on historical hiring data, algorithms amplify existing biases rather than reduce them.


Example: Amazon's recruiting AI penalised resumes containing the word "women's" (as in "women's chess club"). Why? Historical data showed the company had rarely hired candidates with those profiles.


The algorithm didn't create bias. It learned it from human decisions.


For practice leaders, this means:


→ AI is a mirror, not a fix

→ Garbage in, garbage out applies to humans too

→ Historical hiring data might encode discrimination

→ "Objective" doesn't mean fair


Before deploying AI screening:


1. Audit training data for demographic patterns

2. Test outputs for adverse impact

3. Maintain human review for borderline cases

4. Update training data to reflect desired diversity
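Step 2 above, testing for adverse impact, has a well-known starting point: the "four-fifths rule," which flags any group whose selection rate falls below 80% of the highest group's rate. Here is a minimal sketch of that check; the group names and counts are purely illustrative, not real hiring data:

```python
# Sketch of an adverse-impact check using the four-fifths rule.
# A group selected at less than 80% of the top group's rate is flagged.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose selection-rate ratio to the highest-rate
    group falls below `threshold` (0.8 = four-fifths rule)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()
            if r / top < threshold}

# Hypothetical screening results from an AI resume filter
results = {
    "group_a": (50, 100),  # 50% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.6, flagged
}
print(adverse_impact(results))  # -> {'group_b': 0.6}
```

A real audit would go further (statistical significance tests, intersectional groups), but even this simple ratio check catches the kind of skew the Amazon system exhibited.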


AI can improve hiring. But only if we feed it better examples than our biased past.


How are you validating that AI hiring tools reduce rather than amplify bias?



©2022 by OmniPsi Consulting.
