Not much, says ThoughtWorks Chief Scientist Martin Fowler. He should know: his company employs plenty of people with skills for which certification schemes exist in the computer industry, yet he says that such certification fails a basic test. There is no correlation with competence.
For a certification to be useful, it needs a correlation with competence in the thing that it certifies. So if Alice has a certification in, say, clojure programming; then there should be a high probability that Alice is a competent clojure programmer. High probability isn’t a guarantee, but it should be significantly higher than the general programmer population. The reason we have disdain for most software certification programs is because we’ve not seen such a correlation (indeed sometimes we feel there’s a negative correlation).
He has a nice graphic to illustrate the point, and I encourage you to take a look.
It is a gloomy post:
At the moment the only way you can tell if someone is a good programmer is to find other good programmers to assess their ability. Such assessment is difficult, time-consuming, and needs to be repeated by each hiring organization. If you are a non-programmer looking to hire someone, such an assessment is particularly daunting.
Worse still, Fowler describes the micro-industry of certification schemes, and the books, courses and assessments that support them, as a form of corruption. That strikes me as harsh, though if they are as unfit for purpose as he suggests, I see his point.
Personally I have never liked the fact that many assessments are based on multiple-choice answers. There are several problems with these. One is that if there are four answers and you just have to pick the right one, you have a 25% chance of appearing competent on any given question by mere luck. In fact, sometimes one of the answers stands out as obviously wrong, giving you a 33% chance. Of course the scoring can take account of this, but I still dislike the approach, which is sometimes more about getting the answer the assessment expects than about getting the right answer.
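To put rough numbers on how that guesswork compounds across a whole paper, here is a minimal sketch of the binomial arithmetic. It is my own illustration rather than anything from Fowler's post, and the 40-question length and 26-correct pass mark are assumed purely for the example.

```python
from math import comb

def chance_of_passing_by_luck(questions: int, pass_score: int, p_correct: float) -> float:
    """Probability of getting at least pass_score questions right by guessing alone."""
    return sum(
        comb(questions, k) * p_correct**k * (1 - p_correct) ** (questions - k)
        for k in range(pass_score, questions + 1)
    )

# A hypothetical 40-question paper needing 26 correct answers (65%) to pass.
# 0.25 = four options per question; 1/3 = one obviously wrong option eliminated.
for p in (0.25, 1 / 3):
    print(f"guess rate {p:.2f}: pass probability {chance_of_passing_by_luck(40, 26, p):.8f}")
```

Run as written, both probabilities come out vanishingly small, which is presumably what the scoring "taking account of this" amounts to; the objection that survives is the one about second-guessing the assessment, not the luck itself.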
It is my turn to be cynical, but I expect the ease of marking multiple-choice papers, which can be completely automated online, is a factor. Having a human interpret a reasoned explanation for your choice would be more expensive, but also more effective.
If you have many certifications to your name, there is no need to despair. Fowler just advises you not to show them off as a badge of competence.
It also has to be admitted that certifications do open doors and may well help you get that next post; not all employers take Fowler’s view.
Is he right? I would be interested in other opinions. If there are certifications that genuinely are worth having, which are they? And if Fowler is even half-right, surely this industry is now sufficiently mature that it could devise certifications that actually do correlate with competence? It does not seem too much to ask, and it would help employers to avoid costly mistakes.