Automated Prediction of Job Interview Performances

Ever wondered why you were rejected for a job despite being a qualified candidate? What went wrong? In this paper, we provide a computational framework to quantify human behavior in the context of job interviews. We build a model by analyzing 138 recorded interview videos (total duration of 10.5 hours) of 69 internship-seeking students from the Massachusetts Institute of Technology (MIT) as they spoke with professional career counselors. Our automated analysis includes the facial expressions (e.g., smiles, head gestures), language (e.g., word counts, topic modeling), and prosodic information (e.g., pitch, intonation, pauses) of the interviewees. We derive the ground-truth labels by averaging the ratings of 9 independent judges. Our framework automatically predicts ratings for interview traits such as excitement, friendliness, and engagement with correlation coefficients of 0.73 or higher, and quantifies the relative importance of prosody, language, and facial expressions. According to our framework, interviewees are advised to speak more fluently, use fewer filler words, speak as “we” (vs. “I”), use more unique words, and smile more.
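As a rough illustration of the evaluation setup described above (not the paper's actual pipeline), the Python sketch below averages per-judge ratings into a ground-truth trait score, fits a regression model, and scores held-out predictions by Pearson correlation. The feature matrix, rating scale, feature count, and the choice of Lasso regression are all hypothetical placeholders; see the FG 2015 paper for the real features and models.

    # Minimal sketch of the prediction setup: ground truth = mean of 9
    # judges' ratings; evaluation = Pearson correlation of predicted vs.
    # actual ratings on held-out interviews. All data here are synthetic
    # stand-ins, not the released MIT Interview Dataset.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical inputs: one row per interview, one column per extracted
    # feature (prosodic, lexical, and facial-expression measures).
    X = rng.normal(size=(138, 50))                    # 138 interviews, 50 features
    judge_ratings = rng.uniform(1, 7, size=(138, 9))  # 9 independent judges

    # Ground truth for a trait (e.g., "engagement") is the mean judge rating.
    y = judge_ratings.mean(axis=1)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit a sparse linear regressor (one plausible choice) and report the
    # Pearson r, the metric quoted above (r >= 0.73 for several traits).
    model = Lasso(alpha=0.1).fit(X_train, y_train)
    r, _ = pearsonr(y_test, model.predict(X_test))
    print(f"Pearson correlation on held-out interviews: {r:.2f}")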

MIT INTERVIEW DATASET:

We release the MIT Interview Dataset, containing audio-visual recordings of 138 mock job interviews conducted by professional career counselors with 69 MIT undergraduate students. In addition to the videos, we release the Amazon Mechanical Turk ratings for each video, the final ground-truth ratings, and the processed feature values. Due to the sensitive nature of the data, anyone accessing the dataset must agree to its terms and conditions of use. Please fill out the following form to request the dataset: https://goo.gl/forms/xF4DEsg9RIsto6nu1

More information:

I. Naim, I. Tanveer, D. Gildea, and M. E. Hoque, “Automated Prediction of Job Interview Performance: The Role of What You Say and How You Say It,” IEEE International Conference on Automatic Face and Gesture Recognition (FG), May 2015. [Appendix]