Most people will agree that interviewing is one of the most difficult and least enjoyable professional activities in which we engage. Given the recent demand for data analytics and data science skills, it has become an increasingly daunting task for managers to adequately test and qualify candidates.
Our team at TCB Analytics has interviewed hundreds of individuals with various backgrounds over the years and needed a more efficient way of quantifying technical and cultural fit. This led us to design a deceptively simple data exercise that reveals a surprising amount of information about the interviewee. We've administered this test to dozens of candidates and felt compelled to share our lessons as well as the test itself.
Major points to consider first:
- Don’t whiteboard-test candidates in real time. It adds unnecessary stress to an already high-stress environment and isn’t particularly relevant to real-world situations. We don’t care if a candidate has memorized every algorithm in existence, since that knowledge alone is rarely useful in a business setting. Instead, this test focuses on real questions, real data, and how the candidate presents their approach and results. Explain the test to the candidate and allow a week or so for them to complete it on their own time.
- This test can be given to PhD-level data scientists or entry-level data analysts. We’ve seen a wide spectrum of responses, ranging from complex data science to simple data aggregation and manipulation. It’s important to judge the results accordingly, given the candidate's background.
- Task the candidate with presenting their results to your team. This is extremely important and has helped us weed out candidates who put together an impressive written report but failed to communicate their results effectively to the rest of the team. Not only does this step help your organization gauge the communication skills of a candidate, it also allows you to evaluate cultural fit.
In an effort to make the interviewing experience a bit more fun, we use a dataset that involves beer. This dataset consists of 1.5 million beer reviews from Beer Advocate. It is ideal for testing candidates since it is too large to fit into Excel, yet small enough to process on a single laptop in Python or R. We prefer that candidates complete the test in either Python or R, and if they are hesitant about using or trying to learn either of these languages, that should raise a red flag.
Now onto the test. Here are the questions and instructions that we give to the candidates:
- Which brewery produces the strongest beers by ABV%?
- If you had to pick 3 beers to recommend using only this data, which would you pick?
- Which of the factors (aroma, taste, appearance, palate) are most important in determining the overall quality of a beer?
- Lastly, if I typically enjoy a beer due to its aroma and appearance, which beer style should I try?
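To give a flavor of the first question, here is a minimal pandas sketch. The column names (`brewery_name`, `beer_abv`, and so on) are assumptions standing in for the real CSV's schema, and the tiny inline DataFrame stands in for `pd.read_csv("beer_reviews.csv")`:

```python
import pandas as pd

# Synthetic stand-in for the Beer Advocate data; in practice you would load
# the real file with pd.read_csv("beer_reviews.csv"). Column names are assumed.
reviews = pd.DataFrame({
    "brewery_name": ["Alpha", "Alpha", "Beta", "Beta", "Gamma"],
    "beer_name":    ["A1", "A2", "B1", "B2", "G1"],
    "beer_abv":     [9.0, 11.0, 5.0, 6.0, 7.5],
})

# "Strongest" is itself ambiguous: mean ABV across a brewery's beers is one
# defensible reading; maximum ABV is another. A good candidate states which
# definition they chose and why.
mean_abv = reviews.groupby("brewery_name")["beer_abv"].mean()
strongest = mean_abv.idxmax()
print(strongest)  # Alpha (mean ABV 10.0)
```

Even on this one-liner of a question, the interesting part is the candidate's stated definition of "strongest," not the groupby itself.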
This test scales across skill levels because the questions only require the candidate to do several basic things well:
- Read the data into R or Python and be able to summarize and explore the data.
- Aggregate and manipulate the data accordingly (simple means, thresholds, grouping and subsetting).
- Visualize and communicate results (extra points for presenting the findings and code in a well-documented R Markdown or IPython Notebook).
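As an illustration of the aggregation step, the factors question can be approached with something as simple as correlating each factor against the overall score. Again, the data here is a small synthetic stand-in and the factor column names are assumptions, not the dataset's actual schema:

```python
import pandas as pd

# Synthetic stand-in for the review data; factor column names are assumed.
reviews = pd.DataFrame({
    "review_aroma":      [4.0, 3.0, 5.0, 2.0, 4.5],
    "review_taste":      [4.5, 3.0, 5.0, 2.5, 4.0],
    "review_appearance": [3.0, 4.0, 3.5, 4.5, 3.0],
    "review_palate":     [4.0, 3.5, 4.5, 3.0, 4.0],
    "review_overall":    [4.5, 3.0, 5.0, 2.5, 4.0],
})

# One simple approach: Pearson correlation of each factor with the overall
# rating. More sophisticated candidates might fit a regression instead.
corrs = reviews.drop(columns="review_overall").corrwith(reviews["review_overall"])
print(corrs.sort_values(ascending=False))
```

A correlation table like this is easy to produce, but the discussion it invites (correlation vs. importance, collinearity between factors) is where stronger candidates distinguish themselves.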
However, some of the questions and concepts can be non-trivial, and this becomes clearer when giving the test to more experienced candidates. For example, we’ve had wildly varied responses to this question: “If you had to pick 3 beers to recommend using only this data, which would you pick?” We’ve seen candidates develop full-blown recommendation systems, others apply Principal Component Analysis, and more junior analysts use simple averaging and ranking of the beers. There’s no single right answer.
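The simplest end of that spectrum, averaging and ranking, might look like the sketch below. The data and the column names are assumptions, stand-ins for the real dataset:

```python
import pandas as pd

# Synthetic stand-in for the review data; column names are assumed.
reviews = pd.DataFrame({
    "beer_name":      ["A", "A", "B", "B", "B", "C", "D"],
    "review_overall": [4.5, 4.0, 5.0, 4.5, 5.0, 3.0, 4.8],
})

# Junior-analyst approach: mean overall rating per beer, take the top 3.
top3 = (
    reviews.groupby("beer_name")["review_overall"]
           .mean()
           .sort_values(ascending=False)
           .head(3)
)
print(top3.index.tolist())  # ['B', 'D', 'A']
```

Notice the flaw this approach exposes on its own: beer D ranks second on a single review. That observation leads straight into the review-count cutoff discussed below.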
Another good example arises when answering questions 2-4. These questions require a certain amount of data in order for the findings to be considered valid. Some of the beers only have 1 or 2 reviews, so it would make sense to determine a cutoff before including those beers in the analysis. The candidate should justify how they determined this cutoff, but the responses will vary based on the approach.
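A minimum-review cutoff can be applied in a couple of lines; the cutoff value and column names here are illustrative assumptions, and the candidate should justify whatever threshold they choose:

```python
import pandas as pd

# Synthetic stand-in: beer A has 10 reviews, beer B has only 2.
reviews = pd.DataFrame({
    "beer_name":      ["A"] * 10 + ["B"] * 2,
    "review_overall": [4.0] * 10 + [5.0] * 2,
})

MIN_REVIEWS = 5  # arbitrary here; candidates should defend their own cutoff

# Keep only beers with at least MIN_REVIEWS reviews.
counts = reviews["beer_name"].value_counts()
eligible = counts[counts >= MIN_REVIEWS].index
filtered = reviews[reviews["beer_name"].isin(eligible)]
print(filtered["beer_name"].unique())  # only 'A' survives the cutoff
```

Without this filter, beer B would look like the top pick on two reviews; with it, the recommendation rests on more trustworthy evidence.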
Finally, don’t expect to hire a unicorn who excels at everything: data munging, engineering, analysis, visualization, and communication. The person responsible for managing your data team should take this test themselves and determine the skill levels and responses appropriate for your open positions.
Here’s a recap of steps to follow:
- Review the candidate’s coding and writing skills in their written results. This should reflect their ability to understand a question, use the right data to answer the question, and document their results to promote easy collaboration.
- Gauge their ability to communicate their findings. Have them present these findings not only to the technical team, but also to executives if you expect this candidate to be presenting complicated results to business stakeholders. They should tailor their presentation accordingly.
- Be mindful of their ability to take feedback and constructive criticism from your team. Some candidates may be defensive about their approach and not respond well to questions. This is a clear sign of someone who may be difficult to work with in a team-based environment.
Lastly, if they’ve managed to complete this test with few or no glaring errors and can justify their approach, hire that person. They’re likely already far ahead of most candidates.