Allied Health Research

For over a decade, there have been continuous advances in the technical processes underpinning systematic reviewing. For allied health disciplines in particular, the technical questions raised when producing believable secondary evidence have created the opportunity for researchers, educators, and policy-makers to reflect on how to improve research methodology, study quality, and reporting, and how to make research more clinically useful. Greater understanding of the processes of systematic reviewing has assisted researchers, educators, and policy-makers to justify why allied health research should sometimes take a different approach to produce defensible and implementable findings (for instance, conducting N=1 studies, or qualitative research to establish patient perspectives). Initially, the focus in systematic reviewing was on synthesising the findings of primary experimental studies to produce evidence of whether an intervention was effective and for whom. More recently, systematic reviews of other research designs have been produced, providing clinically useful information about diagnostic tests and risk screening.

Systematic reviewing has assisted researchers to address issues that should help make primary allied health research more useful. These include questioning the homogeneity of subjects enrolled in a study, describing how interventions are applied, considering whether (and how) blinding is possible, how risk factors are described and measured, and how outcomes are chosen and measured. Having a decade or more of systematic review findings available has also facilitated reflection on who is excluded from primary studies and why (for instance, why exclude subjects over 65 years, even though they suffer the same condition as younger subjects and still require treatment?). Researchers and clinicians now seem more comfortable questioning the applicability of evidence synthesis findings in specific clinical settings.

This questioning has been reflected in the emergence of interest in the application of evidence synthesis findings. A Cochrane Group has been established (Effective Practice and Organisation of Care [EPOC]) which focuses specifically on undertaking reviews of interventions related to improving practice and the delivery of health services. This reflects an increasing interest in the basis on which clinicians make their clinical decisions and how they choose to adopt clinical practice recommendations from systematic reviews. Mitton et al (2007) recently examined the evidence for strategies to effectively translate knowledge into practice, finding that no one approach works for all clinicians in all settings, and that clinicians are often at different stages of readiness to adopt changed practice behaviours. This reinforces the point that evidence synthesis findings should be presented in terms of both methodological quality and clinical utility.

The Australian National Health and Medical Research Council (NH&MRC, 2005) proposed a matrix by which the strength of the body of evidence for a clinical question can be assessed in terms of its quality (hierarchy and quality, consistency, clinical impact) and its clinical utility (applicability and generalisability).
