by Ray Gowlett
I was recently made aware of the ‘latest’ study on injury rates in the CrossFit community. A quick read of the abstract, down to its conclusion, leaves the reader with this line: “The injury incidence for athletes participating in CrossFit was 56.1%.” Within a short period of time, another gym had posted the study with the response: “This is why I don’t like CrossFit.” If you’re like me, you want to get to the bottom of the study. If you end up talking to somebody about it, see if they have any answers to the following questions…you can tell them you’re asking for a friend.
Why is the conclusion in the abstract different from the conclusion in the body of the report? The abstract states, “The injury incidence for athletes participating in CrossFit was 56.1%,” while the body of the report states, “The injury incidence rate for athletes participating in CrossFit among the studied cohort was 56.1%.” Is the difference there because the authors know that hardly anyone reads to the end, and even fewer understand the limitations of the term ‘studied cohort’? Is it because generalizing from this ‘studied cohort’ to the CrossFit population as a whole carries some serious limitations? Was it a mistake, or just manipulative?
Why wasn’t the questionnaire distributed in a random manner? The methodology states that box owners were made aware of the study and that the questionnaire was made available to anyone who wanted to fill it out…on ‘Facebook’. Fewer than 3% of CrossFit participants replied. So now I’m left with the following question: perhaps only the CrossFit athletes who get injured have sufficient motivation to fill in an online questionnaire about injuries? We won’t be able to tell from these results.
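To see why that question matters, here’s a quick back-of-the-envelope simulation of self-selection bias. Every number in it is hypothetical and chosen purely for illustration, not taken from the study; the point is only that when injured athletes are more motivated to respond, a voluntary survey with a tiny response rate can report an injury rate far above the true one.

```python
import random

random.seed(42)

# All numbers below are hypothetical, for illustration only.
population = 10_000        # gym members who could have seen the survey
true_injury_rate = 0.25    # assumed "real" injury rate in the population
p_respond_injured = 0.06   # injured athletes are more motivated to respond
p_respond_healthy = 0.01   # uninjured athletes mostly scroll past

responses = []
for _ in range(population):
    injured = random.random() < true_injury_rate
    p_respond = p_respond_injured if injured else p_respond_healthy
    if random.random() < p_respond:
        responses.append(injured)

observed_rate = sum(responses) / len(responses)
response_rate = len(responses) / population
print(f"response rate:        {response_rate:.1%}")   # around 2%
print(f"observed injury rate: {observed_rate:.1%}")   # well above 25%
print(f"true injury rate:     {true_injury_rate:.0%}")
```

With these made-up numbers, roughly 2% of the population answers the survey, and the observed injury rate among responders lands far above the assumed 25% true rate, simply because the injured were more likely to click. The arithmetic runs the other way too: if box owners who see lots of injuries quietly decline to share the survey, the reported rate could understate reality instead.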
I’m not sure what to do with the following line from the study: “It is unclear whether the discomfort felt by these athletes is better categorized as soreness or as an injury.” Well, that leaves me equally unclear about whether I can trust the number of 56.1%.
There are a few more limitations, and some conflicts of interest, but I guess the big takeaway is to always read the entire work. In the authors’ defense, they did an OK job outlining the limitations of the study; it’s just unfortunate that they stated what appears to be an unsupported conclusion in the abstract. We’re already seeing the damage this has caused. There will be more.
I’ll end by offering what I feel would be a more honest conclusion, and I’m open to being convinced otherwise…if presented with better evidence.
“The non-random, self-reported injury incidence for athletes who had the access, time and unclear motives to complete a ‘survey monkey’ questionnaire marketed on Facebook with vague language, and who felt compelled to participate (less than 3%), seems to be 56.1% (although we didn’t assess the injuries ourselves and they could have just been ‘sore’). This could just indicate that athletes who injure themselves are far more likely to fill out a survey about it, suggesting that injury rates are way lower…or, CrossFit owners who get asked to pass on injury questionnaires won’t do it, and injury rates could be way higher. Due to the serious, non-trivial limitations outlined in the report, it would be incredibly difficult to make strong claims about injury rates among CrossFit athletes based on the data presented in this study. More work needs to be done.”