I’ve been through my fair share of technical interviews. Typically, they’ll ask you to complete a live exercise, provide code samples, or answer mundane questions about a particular language or framework. Generally, those approaches are useless. Even the most talented software engineers do not always remember how to implement hashCode(), the differences between various search/sort algorithms, or the most appropriate data structure to use for a specific context.
In my stint as a software engineering manager, I set out with all the typical “I can do this better” mantras. I won’t ask the stupid technical bits — I’d rather have someone who knows the right questions to ask and how to use Google. I won’t ask for a live coding exercise, since that’s so far from reality and folks are often nervous. I’ll focus more on how they’ve used specific technologies to tackle problems, what they’re proud of, their hobbies, communication skills, blah blah blah.
All that is well and good, but I found it missed critical areas: how well can this candidate think critically about design patterns, component design and composition, governance, and working effectively on a team? I’d argue those skills are the most important to have, but also the hardest to gauge. We tried all sorts of tactics, but never really found the sweet spot.
I was recently ready for a career change and went through another round of calls as the interviewee. It was frustrating to again go through all the same types of questions, especially after having struggled in the interviewer seat.
But then Walt Disney Studios happened and lightbulbs lit up. The interview process used a tactic that seems so blatantly obvious in hindsight.
They showed me a Spring service class used to run a query against a REST interface and return the results. The code was supposedly in production; one of the interviewers had written it long ago, and it had all manner of code smells. “Pretend you’re reviewing this service as part of a pull request. What comments would you make? Be brutal. We know it’s rough.”
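To make that concrete, here’s a hypothetical reconstruction of the kind of service they showed. The post doesn’t reproduce the actual code, so every name here is invented; what matters is the shape of the smell: a raw query String in, an untyped List&lt;Object&gt; out, and no interface to mock against.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch -- not the interviewers' real class.
class TitleSearchService {

    // The caller must know the remote API's query syntax
    // (e.g. "title==Frozen"), and gets back a list of who-knows-what.
    public List<Object> runQuery(String rawQuery) {
        List<Object> results = new ArrayList<>();
        // ... imagine a hand-rolled HTTP call and ad-hoc JSON parsing here ...
        results.add(rawQuery); // stand-in result so the sketch runs
        return results;
    }
}
```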
That led us into all sorts of interesting conversations:
- API design, abstraction, and statically typed contracts. The method argument was a query String assumed to be syntactically specific to the downstream REST service. Similarly, the method returned a List&lt;Object&gt; (eek). Instead, dedicated argument and return-type classes were needed to form a firm contract, abstract away the implementation details, and make the service more portable.
- The service did not make use of an interface, which led to discussions about unit testing (interfaces make mocking easier), multiple implementations, RPC, and other best practices.
- The method included annotations for in-memory caching, latency isolation, and fault tolerance. At first glance, not a problem, but then we got into edge cases involving cache eviction, faults, downtime, and other issues that might be missed when naively adding annotations with default configs.
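The first two threads above point toward the same refactor. Here’s a hedged sketch of where that conversation leads, again with invented names and plain Java standing in for the Spring specifics: typed request and response classes form the contract, and an interface makes a test double trivial to write.

```java
import java.util.List;

// A typed request object hides the remote service's query syntax.
record MovieQuery(String title, int limit) {}

// A typed result instead of List<Object>.
record Movie(String title, int year) {}

// The interface is what callers (and unit tests) depend on.
interface MovieSearchService {
    List<Movie> search(MovieQuery query);
}

// An in-memory fake -- exactly the kind of mock an interface enables.
class InMemoryMovieSearchService implements MovieSearchService {
    private final List<Movie> catalog;

    InMemoryMovieSearchService(List<Movie> catalog) {
        this.catalog = catalog;
    }

    @Override
    public List<Movie> search(MovieQuery q) {
        return catalog.stream()
                .filter(m -> m.title().contains(q.title()))
                .limit(q.limit())
                .toList();
    }
}
```

In the real Spring version, the caching and fault-tolerance annotations from the third bullet would sit on the production implementation, behind the interface, so their edge cases never leak into callers or tests.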
Why haven’t more companies thought of this?! Again, hindsight is 20/20, but the approach seems so obvious now. Not only were they able to cover technical bases, but they were also able to dig into how candidates look at the bigger picture and think critically.