Statistics

Most evidence is not absolute but relative, with a high statistical likelihood of being correct.  The statistical test most often used accepts a 5% probability of reaching a wrong conclusion, which means that if twenty studies are undertaken, one may, on average, come to the wrong conclusion.  The larger the ‘sample size’ of people studied, the more likely the answer is to be right.  Comparing multiple studies gives a much more accurate view, and a study of studies, sometimes called a ‘meta-analysis’, is the best source of evidence.
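To see why a 5% threshold implies roughly one wrong conclusion in every twenty studies, here is a small simulation sketch. The group sizes, random data, and the function name null_study are illustrative assumptions, not taken from any real study: each simulated ‘study’ compares two groups drawn from the same population, so any ‘significant’ result is a false positive.

```python
import random
import statistics

random.seed(42)

def null_study(n=100):
    """One simulated 'study' where there is truly no difference:
    both groups are drawn from the same population."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z statistic for the difference in means
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # 'significant' at the 5% level

trials = 2000
hits = sum(null_study() for _ in range(trials))
print(f"false positives: {hits}/{trials} = {hits / trials:.3f}")
```

The printed proportion comes out close to 0.05: about one study in twenty declares a difference that is not really there.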

Where possible, a brief statement of the statistical strength of the evidence is also published.  This may be a ‘probability’, usually a decimal figure after the abbreviation ‘p’ (the ‘p value’).  A value of less than 0.05 (less than a one in twenty chance of the result being wrong) is generally considered significant and reasonable evidence, while a value of less than 0.01 (less than one in a hundred) is substantially stronger evidence.

It is also possible to calculate a measure of confidence in the answer, called a ‘confidence interval’, abbreviated to ‘CI’.  This is given alongside the actual value.  For example, someone who works with isocyanate paints may have a risk of developing asthma that is double that of the normal population, a relative risk of 2.  A statistical analysis of the range either side of this number within which the true value lies with 95% confidence (equivalent to a p value of 0.05) might give a range of perhaps 1.8 to 2.2.  This would demonstrate a high likelihood that isocyanate causes asthma.  If only a small number of cases were studied, the range might be 0.8 to 7.  A relative risk of 0.8 would suggest that using isocyanate was actually beneficial, so where the CI range includes 1 the study lacks statistical power and cannot be considered good evidence.
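The effect of study size on the width of a confidence interval can be sketched with the standard log-scale approximation for a relative risk. The case counts and group sizes below are hypothetical, chosen only so that both studies give the same relative risk of 2:

```python
import math

def relative_risk_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Relative risk with its approximate 95% confidence interval,
    using the usual normal approximation on the log scale."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical large study: 40 cases among 500 exposed vs 40 among 1000 unexposed
print(relative_risk_ci(40, 500, 40, 1000))
# Hypothetical small study: 2 cases among 25 exposed vs 2 among 50 unexposed
print(relative_risk_ci(2, 25, 2, 50))
```

The large study gives a relative risk of 2 with a CI of roughly 1.3 to 3.1, which excludes 1; the small study gives the same relative risk of 2 but a CI of roughly 0.3 to 13, which includes 1 and so is not good evidence on its own.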

When using this website, look at the number of people in the study and the type of study, and check the p values and confidence intervals; this will give you a reasonable idea of whether the evidence presented is strong or not.  In many cases only weak evidence is available; we have to use it in the absence of anything better, but we need to appreciate that it may be wrong.

Most articles start with an ‘abstract’, or summary.  This will often present the main finding, but will not always state the size of the study or the statistical accuracy of the findings.  Much of the evidence base in occupational health is found within studies not as the main finding but as an incidental finding that is only presented in the ‘results’ section.  Just reading the abstract can therefore be misleading.  Many journals provide only the abstract for free and charge for access to the full article.  Full articles may be available through libraries, or may be published freely on the author’s own website.