My project this year will be on investigating the use of robotics and virtual reality in rehabilitating children with cerebral palsy. My first step will be to start working on a systematic review. Even though only a literature review is required for Honours (the main difference being that a lit review is a simpler version of a systematic review in which you don't have to be so anal about your search terms), my primary supervisor is pushing for me to do a systematic review as it represents a potential publication. (Yes, my supervisors want me to get published during my honours year! In fact, my secondary supervisor wants me to get two publications! :O)
Not much has really happened with regard to my systematic review so far, partly because my primary supervisor was on leave and then she was summoned for jury duty. She has, however, sent me some articles about different quality scales that I can use to evaluate the quality of articles that I find in my literature search. Something to blog about, I suppose...!
First article: The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review by Zeng et al. (https://www.ncbi.nlm.nih.gov/pubmed/25594108)
This article basically lists a bunch of study types along with the methodological quality tools recommended for each. The authors also provide more detail about some of the tools, though some of this information is in the appendices, which my supervisor didn't send and which I can't be bothered looking for right now. Here's a quick and dirty list of some of the study types and recommended tools:
- Randomised Controlled Trial (RCT)
  - Cochrane Collaboration's risk of bias tool
  - Physiotherapy Evidence Database (PEDro) scale
  - Modified Jadad Scale
  - Delphi List
  - CASP checklist for RCT
  - NICE methodology checklist for RCT
- Nonrandomised interventional study
  - Methodological Index for Non-Randomised Studies (MINORS)
  - Reisch et al.'s aid to the evaluation of therapeutic studies
- Cohort and case-control studies
  - CASP checklist
  - SIGN methodology tools
  - Newcastle-Ottawa Scale (NOS)
- Case study
  - Moga et al.'s modified Delphi technique
- Diagnostic Test Accuracy (DTA) study
  - Quality Assessment of Diagnostic Accuracy Studies (QUADAS)
  - QUADAS-2
  - CASP checklist
- Preclinical animal study
  - Stroke Therapy Academic Industry Roundtable (STAIR) tool
  - Recommendations for Ensuring Good Scientific Inquiry
  - Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMARADES) checklist
  - Systematic Review Centre for Laboratory Animal Experimentation's (SYRCLE) risk of bias tool
- Qualitative study
  - JBI
  - CASP
  - NICE
- Systematic Review (SR) and meta-analysis
  - Sacks' Quality Assessment Checklist
  - Overview Quality Assessment Questionnaire (OQAQ)
  - Assessment of Multiple Systematic Reviews (AMSTAR)
  - CASP
  - NICE
  - JBI-SUMARI
- Clinical Practice Guideline (CPG)
  - Appraisal of Guidelines for Research and Evaluation (AGREE)
  - AGREE II
Second article: the PEDro (Physiotherapy Evidence Database) scale
The PEDro scale was listed in the above article as a recommended tool for evaluating RCTs. It's basically an 11-item questionnaire with yes/no options. The first item relates to external validity (a.k.a. "generalisability", or how well the results can be generalised to a wider population), items 2 through 9 relate to internal validity (things like bias and confounding, basically how well the study is designed), and the last two items relate to the statistical information reported in the study. The scale also comes packaged with more detailed explanations of each criterion.
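Since the items are all yes/no, tallying a score is straightforward. Here's a toy Python sketch (the function name and groupings are my own; note that, as far as I can tell from the scale's accompanying notes, the first item isn't counted toward the total, so the score is out of 10):

```python
def pedro_score(answers):
    """Toy PEDro-style tally.

    answers: list of 11 booleans, one per item in order.
    Item 1 (external validity / eligibility criteria) is conventionally
    NOT counted in the total, so the score is out of 10.
    """
    if len(answers) != 11:
        raise ValueError("The PEDro scale has exactly 11 items")
    return sum(answers[1:])  # count only items 2 through 11

# Example: a trial satisfying every criterion scores 10/10
print(pedro_score([True] * 11))  # 10
```

Just a sketch of the arithmetic, of course; in practice you'd fill the scale in by hand against each criterion's detailed explanation.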
Third article: the Risk Of Bias In Non-randomised Studies - of Interventions (ROBINS-I) assessment tool
This is a monster 22-page document. However, pages 15-22 are dedicated to tables that clarify some of the points in the actual questionnaire, and not all questions need to be answered (e.g. if you answer question 1.2 with "no" you can skip to 1.4). I can't really see how this questionnaire is supposed to be scored, other than that a response underlined in green suggests a reduced risk of bias, and a response in red suggests an increased risk of bias. (So if you're going to use this scale, maybe don't print it out in black and white.) Maybe the point of this scale isn't so much to produce a hard number as to raise points for the reviewer to think about.
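From what I can gather, instead of a numeric total the signalling questions feed into a risk-of-bias judgement for each domain, and the overall judgement is meant to be at least as severe as the worst domain. A toy sketch of that aggregation rule (the function and domain names are my own, and you'd want to check the tool's guidance before relying on this):

```python
# Standard ROBINS-I judgement categories, ordered least to most severe.
SEVERITY = ["low", "moderate", "serious", "critical"]

def overall_judgement(domain_judgements):
    """Return the most severe judgement across all domains.

    domain_judgements: dict mapping domain name -> judgement string.
    """
    return max(domain_judgements.values(), key=SEVERITY.index)

# Hypothetical example: one "serious" domain drags the whole study down.
domains = {
    "confounding": "moderate",
    "selection of participants": "low",
    "classification of interventions": "serious",
}
print(overall_judgement(domains))  # serious
```

Which would fit my impression above: the tool structures your thinking domain by domain rather than spitting out a score.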