There are two popular ways of measuring the safety benefit of bicycle helmets. One is to examine hospital admission records, comparing the relative numbers of helmeted and non-helmeted patients. Hospital records are an indirect measure: researchers have no reliable data on how many cyclists in the general population wear helmets, so they must estimate risk exposure rates, and those estimates are hard to get right.
The other method is to study the effect of mandatory helmet laws. A time-series analysis of crash data before and after a helmet law takes effect provides a direct measure of helmet effectiveness. This is the preferred method because it covers a much larger population under real-world conditions, without having to infer exposure rates. Time-series studies of helmet laws in Australia, Canada, and Spain have found no discernible impact on bicycle safety.
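To make the before-and-after comparison concrete, here is a minimal sketch in Python with hypothetical figures; real time-series studies additionally control for long-term trends, changes in ridership, and confounders, but the core of the method is the rate comparison shown here:

```python
# Minimal sketch of a before/after helmet-law comparison.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

def injury_rate(head_injuries: int, cyclist_trips: int) -> float:
    """Head injuries per million cyclist trips."""
    return head_injuries / cyclist_trips * 1_000_000

# Hypothetical annual figures on either side of a law's introduction.
before = injury_rate(head_injuries=420, cyclist_trips=38_000_000)
after = injury_rate(head_injuries=405, cyclist_trips=36_500_000)

print(f"before law: {before:.2f} head injuries per million trips")
print(f"after law:  {after:.2f} head injuries per million trips")
print(f"rate ratio (after/before): {after / before:.2f}")
```

A rate ratio near 1.0, roughly what the studies cited above report, means the law produced no measurable safety gain.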
When direct measurements of helmet laws failed to find any safety benefit, that should have ended the helmet debate. But like any zombie idea, the helmet issue shambles along. Why? Perhaps one reason is the hospital case-control studies that promised enormous safety benefits from helmets (as high as 85%). But what explains the huge discrepancy between the direct and indirect measurements?
One reason for the discrepancy may be a methodological error in the hospital case-control studies. That is the argument of a new paper, Overestimation of the effectiveness of the bicycle helmet by the use of odds ratios, by Th. Zeegers, presented last month at the 2015 International Cycling Conference in Hanover, Germany.
Zeegers argues that case-control studies misestimate the risk for the control group (cyclists hospitalized with non-head injuries), thereby exaggerating the benefit of helmets. He re-analyzed data from three widely cited helmet studies and found that, after correcting for the error, the supposed benefit of bike helmets completely vanished:
Due to the lack of data on exposure rates, case-control studies broadly use odds ratios of helmeted versus unhelmeted cyclists for head injuries and other injuries among hospitalized victims. A general necessary and sufficient condition can be formulated rigorously under which odds ratios indeed equal risk ratios. However, this condition is not met in case-control studies on bicycle helmets. As a consequence, these studies can underestimate the real risk of cycling with a helmet and therefore overestimate the effectiveness of the bicycle helmet. The central point is that a wrong estimate of the risk of non-head injuries (the controls) can, paradoxically, lead to an overestimation of the helmet's usefulness in protecting against head injuries.
Three cases could be found in the literature with sufficient data to assess both risk ratios and odds ratios: the Netherlands, Victoria (Australia), and Seattle (U.S.A.). In all three cases, the use of odds ratios overestimated the effectiveness of the helmet, with effects ranging from small (+8%) to extremely large (>+400%). Contrary to the original claims of these studies, in two out of three cases the risk of getting a head injury proved not to be lower for helmeted cyclists. Moreover, in all three cases the risk of getting a non-head injury proved to be higher for cyclists with a helmet.
It must be concluded that any case-control study in which the control group is formed by hospitalized bicyclists is unreliable and likely to overestimate the effectiveness of the bicycle helmet. As a direct consequence, meta-analyses based on these case-control studies also overestimate the effectiveness of the bicycle helmet. Claims about the effectiveness of the bicycle helmet can no longer be supported by this kind of study. This might explain the discrepancy between case-control studies and other methods, such as time-series analysis. It is recommended to estimate the risk ratio for the bicycle helmet by other methods, along the lines described in this article.
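The mechanism is easy to reproduce with a toy example. From hospital counts alone, the odds ratio satisfies OR = RR ÷ RR(controls), so it equals the true risk ratio only when helmeted and unhelmeted riders have the same per-exposure risk of a control (non-head) injury; this is one way of stating the equality condition the abstract refers to. The Python sketch below uses hypothetical numbers in which helmets offer no real protection (true risk ratio 1.0) but helmeted riders have twice the non-head injury risk, the pattern Zeegers found in all three datasets; the case-control calculation, blind to exposure, then reports an apparent 50% benefit:

```python
# Toy numbers (hypothetical): both groups cycle the same distance and have
# identical head-injury risk, but helmeted riders have twice the risk of
# a non-head injury.
exposure = {"helmeted": 1_000_000, "unhelmeted": 1_000_000}  # km cycled; invisible to a case-control study
head     = {"helmeted": 50,  "unhelmeted": 50}               # hospitalized head injuries (the cases)
other    = {"helmeted": 100, "unhelmeted": 50}               # hospitalized non-head injuries (the controls)

# True risk ratio: head injuries per unit of exposure (requires exposure data).
risk_ratio = (head["helmeted"] / exposure["helmeted"]) / (head["unhelmeted"] / exposure["unhelmeted"])

# Odds ratio as computed in the hospital studies: head vs. non-head counts only.
odds_ratio = (head["helmeted"] / other["helmeted"]) / (head["unhelmeted"] / other["unhelmeted"])

print(f"true risk ratio:  {risk_ratio:.2f}")  # 1.00 -> helmets change nothing
print(f"study odds ratio: {odds_ratio:.2f}")  # 0.50 -> helmets appear to halve head-injury risk
```

Every bit of the apparent 50% benefit here comes from the elevated non-head injury risk in the helmeted group, not from any head protection, which is exactly the paradox the paper describes.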