Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

I haven’t blogged about what I’m doing in my (not so new) temporary role as Red Gate’s technical recruiter, mostly because it’s been routine, business as usual stuff, and because I’ve been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I’m going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us).

I have a terrible confession to make, or at least it’s a terrible confession for a recruiter: I don’t really like CV sifting, or reading cover letters, and, unless I’ve misread the mood around here, neither does anybody else. It’s dull, it’s time-consuming, and it’s somewhat soul-destroying because, when all is said and done, you’re being paid to be incredibly judgemental about people based on relatively little information. I feel like I’ve dirtied myself by saying that – I mean, after all, it’s a core part of my job – but it sucks, it really does. (And, of course, the truth is I’m still a software engineer at heart, and I’m always looking for ways to do things better.)

On the flip side, I’ve never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills – not just talk about them – and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn’t possible, or may not be appropriate, or you just don’t think you’re allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously).

And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic:

  • Smart
  • Gets things done
  • Not an a55hole*

*Of course, everyone has off days, and I don’t honestly think we’re too worried about somebody being a bit grumpy every now and again.

We can do a bit better than this in the context of the roles I’m talking about: we can be more specific about what “gets things done” means, at least in part.

For software engineers and interns, the non-exhaustive meaning of “gets things done” is:

  • Excellent coder

For test engineers, the non-exhaustive meaning of “gets things done” is:

  • Good at finding problems in software
  • Competent coder

Attributes like “team player”, to me, are covered by “not an a55hole”. I don’t expect people to be the life and soul of the party, or a wild extrovert – that’s not what team player means, and it’s not what “not an a55hole” means. Some of our best technical staff are quiet, introverted types, but they’re still pleasant to work with.

My problem is that I don’t think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It’s better than nothing, for sure, but it’s not as good as it could be. It’s also contentious, and potentially unfair/inequitable – if you want to get an idea of what I mean by this, check out the background information section at the bottom.

Before I go any further, let’s look at the Red Gate recruitment process for technical staff* as it stands now:

  • (LOTS of) People apply for jobs.
  • All these applications go through a brutal process of manual sifting, which eliminates between 75% and 90% of them, depending upon the role and the time of year**.
  • Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I’m only interested in those who are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers.
  • Those who pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we’re obviously on the lookout for cultural fit red flags as well.
  • If the first interview goes well we’ll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won’t have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we’d consider making an offer).
  • We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we’re recruiting somebody for a specific team and we’d like them to meet all the people they’ll be working with directly. It’s not an interview per se, but can prove pivotal if they don’t gel with the team.
  • Anyone who’s made it this far will receive an offer from us.

*We have a slightly quirky definition of “technical staff” as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles.

**For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren’t still stars in there, just that they’re fewer and further between.

***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter – if they’re the nicest person in the world but can’t cut a line of code, they’re not going to work out.

Now, as I suggested in the title, Red Gate’s Down Tools Week is upon us once again – next week in fact – and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and indeed organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don’t make the cut, for whatever reason. Whilst I don’t think it’s possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate’s solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it’s possible to use tools to generate potentially helpful metrics for some of these indices as well. This would reduce the marking workload, and would provide candidates with quicker feedback about whether they’ve been successful – though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise.
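To make the shape of that automated first stage concrete, here’s a minimal sketch: drive the candidate’s solution with known inputs and only promote it to the manual review stage if every functional test behaves correctly. The function names, the toy question, and the “zero failures” rule are my illustrative assumptions, not our actual marking criteria.

```python
# Sketch of an automated first-stage marker. A candidate's solution is
# modelled as a callable; we run it against known (input, expected) pairs
# and only promote fully correct solutions to manual review.

def mark_solution(solution, test_cases):
    """Return (passed, failed) counts for a candidate's solution."""
    passed, failed = 0, 0
    for args, expected in test_cases:
        try:
            result = solution(*args)
        except Exception:
            failed += 1  # a crash counts as a failed test
            continue
        if result == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed

def passes_to_manual_review(solution, test_cases):
    """Only fully correct solutions reach a human marker."""
    _, failed = mark_solution(solution, test_cases)
    return failed == 0

# Toy assessment question: "sum the even numbers in a list".
cases = [(([1, 2, 3, 4],), 6), (([],), 0), (([5, 7],), 0)]

def candidate_answer(xs):
    return sum(x for x in xs if x % 2 == 0)

print(passes_to_manual_review(candidate_answer, cases))  # True
```

The manual stage – structure, decomposition, style, readability, maintainability – still needs a human, but a pass like this filters out the solutions that simply don’t behave correctly.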

I duly scrawled out a picture of my ideal process, which looked like this:

[Sketch of my ideal assessment-marking process]

The problem is, as soon as I’d roughed it out, I realised that fundamentally it wasn’t an ideal process at all, which explained the gnawing feeling of cognitive dissonance I’d been wrestling with all week, whilst I’d been trying to find time to do this.

Here’s what I mean.

Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us.

And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they’re smart and can get things done? (Two or three of us even discussed this in the Down Tools Week hustings earlier this week.)

And isn’t this a lot simpler than the alternative we’d been considering? (FYI, that alternative was automated CV/cover letter sifting: some form of textual analysis, based on the 20,000 or so historical applications we’ve received since 2007, that would ideally eliminate the worst 50% or so of applications. It was definitely not going to be the basic keyword analysis beloved of recruitment agencies, since that would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates – #fail. Nor was it going to be some nightmarishly complex Google-like system where we profile all our current employees, only to realise that we’re never going to get representative results because we don’t have a statistically significant sample size in any given role – also #fail.)

No, I think the new way is better.

We let people self-select.

We make them the masters (or mistresses) of their own destiny.

We give applicants the power – we put their fate in their hands – by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I’m going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take).

It doesn’t matter what university you attended, it doesn’t matter if you had a bad year when you took your A-levels – here’s your chance to shine, so take it and run with it.

(As a side benefit, we cut the number of applications we have to sift by something like two thirds.)


OK, yeah, sounds good, but will it actually work?

That’s an excellent question.

My gut feeling is yes, and I’ll justify why below (and hopefully have gone some way towards doing that above as well), but what I’m proposing here is really that we run an experiment for a period of time – probably a couple of months or so – and measure the outcomes we see:

  • How many people apply? (Wouldn’t be surprised or alarmed to see this cut by a factor of ten.)
  • How many of them submit a good assessment? (More/less than at present?)
  • How much overhead is there for us in dealing with these assessments compared to now?
  • What are the success and failure rates at each interview stage compared to now?
  • How many people are we hiring at the end of it compared to now?
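A sketch of the bookkeeping behind those measurements, assuming we simply record how many candidates reach each stage: turn the counts into stage-to-stage conversion rates, so the current and experimental funnels can be compared like for like. The stage names follow the process described above; the counts are entirely made up.

```python
# Turn candidate counts at each recruitment stage into stage-to-stage
# conversion rates, so two funnels (current process vs experiment) can
# be compared directly.

def conversion_rates(funnel):
    """funnel: ordered list of (stage_name, candidate_count) pairs."""
    rates = {}
    for (_, prev_count), (stage, count) in zip(funnel, funnel[1:]):
        rates[stage] = count / prev_count if prev_count else 0.0
    return rates

# Invented numbers for the current process.
current = [
    ("applied", 1000),
    ("passed sift", 150),
    ("passed assessment", 60),
    ("passed first interview", 25),
    ("offer", 8),
]

print(conversion_rates(current))
```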

I think it’ll work because I hypothesise that, amongst other things:

  • It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter – but if you’re not that bothered about working here, why would you complete the assessment?
  • Candidates who would submit a shoddy application probably won’t feel motivated to do the assessment.
  • Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment.
  • In general, only the better candidates will complete and submit the assessment.
  • Marking assessments will be much less work, so we’ll be able to deal with any increase in submissions that we (hopefully) see.

There are obviously other questions as well:

  • Is plagiarism going to be a problem?
  • Is there any way we can detect/discourage potential plagiarism?
  • How do we assess candidates’ education and experience?
  • What about their ability to communicate in writing?
  • Do we still want them to submit a CV afterwards if they pass assessment?
  • Do we want to offer them the opportunity to tell us a bit about why they’d like the job when they submit their assessment?
  • How does this affect our relationship with recruitment agencies we might use to hire for these roles?
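
On the plagiarism question: dedicated tools exist (Stanford’s MOSS is probably the best known), but even a crude first pass is cheap to build – compare submissions pairwise on token n-gram overlap, and flag suspiciously similar pairs for a human to inspect. The 4-token window and 0.8 threshold below are illustrative assumptions, not tuned values.

```python
# Naive plagiarism screen: Jaccard similarity over token 4-grams.
# High-scoring pairs are only *candidates* for plagiarism; a human
# still has to look at them.

import re

def ngrams(source, n=4):
    """Set of n-token windows from a source string."""
    tokens = re.findall(r"\w+|\S", source.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(a, b):
    """Jaccard similarity of the two sources' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a), ngrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def flag_pairs(submissions, threshold=0.8):
    """Return pairs of candidate names whose submissions look too alike."""
    names = sorted(submissions)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if similarity(submissions[x], submissions[y]) >= threshold
    ]
```

Identical or near-identical submissions score close to 1.0; renaming every variable defeats this particular sketch, which is one reason real tools like MOSS work on code structure rather than raw tokens.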

So, what’s the objective for next week’s Down Tools Week?

Pretty simple really – we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well – if I can successfully twist more arms before Monday.*

Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good!

Stay tuned and we’ll let you know how it goes!

*I’m going to play dirty by offering them beer and chocolate during meetings.

Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely

The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvass for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site:

[Screenshots of the Yammer discussion]

I’m sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I’m very open to changing my opinion – hopefully you’ve already figured that out from reading the rest of this post.

Hopefully you can also trace a logical path from agonising about sifting to, “Uh, hang on, why on earth are we doing this anyway?!?”