In talking to hundreds of devs, I've learned:
• They are generally accepting of the idea of automated skills assessments
• They hate when the task is irrelevant to their skills or desired role
• They are frustrated with tasks being too hard

In working with our clients, I've learned:
• They often create needlessly hard tests
• They fine-tune the process differently for senior devs
• They use automated assessments as a tool for increasing diversity
• The most successful clients treat automated assessments as an additional signal, not as a hard qualification

Ask me about:
• Any of my learnings above
• How automated skills assessment might influence job seekers and employers

Looking forward to talking to you!
What exactly do you mean by automated skills assessment? Is this like an online test? Is the interviewer present when the person is taking the test?
I mean automated skills assessments as in a take-home web-based coding test administered as a part of a recruitment process for developers. Typically done after a phone screen but before coming in for an in-person interview.
Is this similar to Leetcode?
I imagine it’s possible to design an automated skills assessment that measures one’s ability to approach a novel challenge creatively, but it’s not immediately obvious to me how. Since that seems to be an in-demand skill in our industry, I’d love to hear about your approach.
We try to model many of our tasks as closely as we can to a developer's day-to-day work - we call these real-life tasks. So oftentimes there is an element of "was the code efficient, creative, and written in an effective style because you had a good problem-solving approach" vs. just measuring whether your code was correct. Any environment you can think of - database servers, proxies, web servers - can be the environment for a task. Imagine interviewing for a Django Developer role and getting an actual Django task. Or being a front-end developer and getting an Angular task instead of a request to build a version of some fancy algorithm. This is how it is supposed to work :)
Why do you think your clients tend to make needlessly challenging tests? What are they trying to evaluate by doing this?
Sometimes they have too many candidates applying to their roles...so they increase the difficulty so that fewer people get through the initial stages of the recruitment process. But then they are missing out on great candidates who would've scored well on a slightly easier test (or candidates who are turned off by such a difficult test and refuse to take it altogether). Sometimes it's an unconscious bias, like we showed in our Gender Bias Report*. Men creating tests tend to choose more difficult tasks, for instance. Sometimes it's this notion that companies only want to hire "the best of the best" and assume that automated technical assessments will do everything for them. They will not. Companies should treat the assessment as a signal, not as the final decider. * https://info.codility.com/hubfs/Research/Codility%20Gender%20Bias%20Report%202019.pdf
Sigh! Every time women perform worse, it has to be bias?!
How are the answers to the questions graded? Isn’t it important to understand why a candidate came to a certain answer? Are the questions graded by the interviewer or do you provide some sort of score?
Submitted code is graded automatically. We basically run tests against your solution, so we can measure both the correctness of the code and the performance of your solution. The interviewer can look at the keystroke playback from your coding session, so they can see how you were thinking at each step. They can then later take your solution into the remote live interviewing platform and talk to you about the assessment (kind of like Skype + pair-programming).
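To make the idea concrete, here is a minimal sketch of what "run tests against your solution and score correctness plus performance" could look like. This is my own illustration under stated assumptions (a per-case time limit, equal case weights), not Codility's actual grading code.

```python
import time

def grade(solution, test_cases, time_limit=1.0):
    """Hypothetical grader: return (correctness, performance) scores in [0, 1]."""
    passed = fast = 0
    for args, expected in test_cases:
        start = time.perf_counter()
        try:
            result = solution(*args)
        except Exception:
            continue  # a crash counts as a failed test case
        elapsed = time.perf_counter() - start
        if result == expected:
            passed += 1
            if elapsed <= time_limit:  # correct AND within the time budget
                fast += 1
    n = len(test_cases)
    return passed / n, fast / n

# Example: grade a trivial candidate solution to "sum a list of numbers".
cases = [(([1, 2, 3],), 6), (([],), 0), ((list(range(1000)),), 499500)]
correctness, performance = grade(lambda xs: sum(xs), cases)
```

A real system would also sandbox the code and weight edge-case and large-input tests differently, but the two-axis score (correct vs. correct-and-fast) is the core idea being described here.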
This is another messed up part. Web-based editors are pretty crappy, so candidates write their solution in their own editor (vim/Sublime/et al.) and paste it in - and that pasting misleads the interviewer into incorrectly judging that the candidate is cheating.
How should you prepare for an automated technical interview? Are there any differences to preparing for a traditional technical interview?
It's certainly different, and it takes time to get used to it. E.g., as hard as we work to make our online interface great, it is still far from a native IDE. I recommend taking actual tests as practice for the interview. If the company you're applying to cares about the performance of your solution, then you probably need to make sure you have a solid background in algorithms and data structures. If you're not ready to take the tests yet, find the best way for you to learn how to solve those problems.

Practicing with lessons: https://app.codility.com/programmers/lessons/1-iterations/
Trying out hard challenges: https://app.codility.com/programmers/challenges/
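As an illustration of why the algorithms-and-data-structures background matters on performance-scored tests, here is a hypothetical practice-style task (my own example, not an actual Codility task): find the single missing number in a permutation of 1..n+1. Both solutions below are correct; only the second scales, so a grader that times large inputs would score them differently.

```python
def missing_naive(a):
    """O(n^2): re-scan the whole list for each candidate value."""
    n = len(a)
    for x in range(1, n + 2):
        if x not in a:  # 'in' is a linear scan, done up to n+1 times
            return x

def missing_fast(a):
    """O(n): subtract the actual sum from the expected 1..n+1 series sum."""
    n = len(a)
    return (n + 1) * (n + 2) // 2 - sum(a)

# Both agree on small inputs, e.g. missing number in [2, 3, 1, 5] is 4,
# but only missing_fast stays fast when the list has millions of elements.
```

Practicing this kind of correct-vs.-efficient trade-off is, in my experience, the main difference from preparing for a traditional whiteboard interview.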
Which companies have an automated technical component in their interview process? Why have they decided to automate the process?
Great question! Some of our customers include Microsoft, Zalando, Amazon, Okta, Rakuten, SurveyMonkey, Paypal, Twitch, Barclays, Nissan, BMW, Volvo, Intel, and Jaguar/Land Rover, among other great brands. Many of these companies are focused on working efficiently through large volumes of technical applicants without sacrificing the candidate experience they deliver.
Reading Blind I’ve realized that it’s pretty common for people to take time off to actually prepare for FAANG interviews. What do you think about this trend?
How exactly are automated assessments used to increase diversity? Why would the automated assessment lead to diversity? Is it because these tools weed out any effects from internal biases?
Yes, pretty much. The more human touches, the more bias introduced. With automated technical assessments, the candidate is being reviewed solely on their skills, technical aptitude, and problem-solving ability. Someone you might subconsciously overlook can take the test and, after scoring highly, will get your full attention.
Actually, this is all wrong. Diverse candidates often don't come from the same background as students with higher education. Testing on algorithms actually puts diverse candidates at a disadvantage.
To what extent has your org experimented with time constraints? Broadly helpful or hurtful, or is it better to have no constraints but still track time spent?
At Codility there are coding tasks of different difficulty levels, so the time constraint we suggest hiring teams use depends on that. However, it's generally better to keep tests shorter so as not to overly impact the time developers invest in the assessment. Companies should avoid skipping the time-box entirely, because then developers don't have a good gauge for how much time to spend on it.
Sounds right. A company I applied to expected me to write a complete application from scratch over a weekend (but otherwise did not time-box it). In fairness it was a "full-stack" position, but when I sat down to design the app there were too many requirements to satisfy without working for a long time (it wasn't hard, of course, just time-consuming and with lots of unspoken usability issues to consider). In the time it took me to get through that one assessment, another company interviewed me twice, gave me a one-hour coding challenge, and made me a very competitive offer....