Five top tips on quality assessment from Covidence
Once the data extraction stage of a systematic review is complete, the next task is to assess the quality of the included studies. Even an otherwise well-conducted systematic review can produce misleading results if this important step is skipped. Why? Because of bias.
What is bias?
Bias is systematic error introduced into sampling or testing by favouring one outcome over another. If a study is biased, it can understate or overstate the true effect of an intervention. And if that study makes it into a systematic review, the results of the review will, of course, be biased too. That’s a big deal, given that the entire purpose of a systematic review is to give a reliable estimate of intervention effect ⚠️.
The good news? Some studies, such as randomised controlled trials, are designed to minimise bias. The bad news? They can’t eliminate it completely.
What is quality assessment?
From inadequate blinding (which might lead to trial participants finding out whether they are on the treatment or the control drug) to selective reporting (e.g. writing up only the positive results of a trial), bias can creep into the most carefully designed studies. As a systematic reviewer, you can’t change how the studies in your review were conducted. Your mission, should you choose to accept it, is to look carefully at each study report and make a set of judgements about the risk of bias of each one 🕵️. This is study quality assessment.
Why do we need it?
A formal assessment of study quality helps review teams decide what to do with the study data they find, for example whether or not to include them in a synthesis. Information on the risk of bias can be presented alongside study results in a meta-analysis to show any flaws in the data that were used to produce the overall result. Sometimes the risk of bias varies across the studies in a meta-analysis, and review teams decide to include only those at low risk of bias. If this happens, a sensitivity analysis can be used to explore how including or excluding certain studies affects the result of the meta-analysis.
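To make that concrete, here’s a minimal sketch of a sensitivity analysis in Python. It pools hypothetical log odds ratios with a fixed-effect inverse-variance model, then repeats the calculation with the studies judged at high risk of bias excluded. The study names, effect estimates, and the `pool` helper are all invented for illustration; a real review would use dedicated meta-analysis software.

```python
import math

# Hypothetical studies: (name, log odds ratio, standard error, risk of bias).
# All numbers are invented for illustration.
studies = [
    ("Study A", -0.45, 0.15, "low"),
    ("Study B", -0.30, 0.20, "low"),
    ("Study C", -0.90, 0.25, "high"),   # e.g. poorly concealed allocation
    ("Study D", -0.10, 0.18, "unsure"),
]

def pool(subset):
    """Fixed-effect inverse-variance pooled estimate and standard error."""
    weights = [1 / se**2 for _, _, se, _ in subset]
    estimates = [est for _, est, _, _ in subset]
    total_w = sum(weights)
    pooled = sum(w * est for w, est in zip(weights, estimates)) / total_w
    return pooled, math.sqrt(1 / total_w)

# Primary analysis: all studies.
est_all, se_all = pool(studies)

# Sensitivity analysis: exclude studies judged at high risk of bias.
low_risk = [s for s in studies if s[3] != "high"]
est_sens, se_sens = pool(low_risk)

print(f"All studies:      {est_all:.2f} (SE {se_all:.2f})")
print(f"Excluding 'high': {est_sens:.2f} (SE {se_sens:.2f})")
```

Comparing the two pooled estimates shows how sensitive the overall result is to the inclusion of the studies at high risk of bias.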
Using a form based on a standard template ensures a structured approach to quality assessment. Covidence’s default quality assessment template splits the risk of bias judgements into seven areas, known as domains, that can be examined and assessed separately. For each study, reviewers consider each domain and rate the risk of bias as ‘High’, ‘Low’, or ‘Unsure’. In addition to this rating, reviewers provide snippets of text from the studies themselves in support of their judgements. Reviewers can use this template straight out of the box, adapt it, or build a custom form from scratch. A rough sketch of what a completed form might look like follows the list of domains below.
The seven domains in Covidence’s quality assessment template
1. Sequence generation: Describe the method used to generate the allocation sequence in sufficient detail to allow an assessment of whether it should produce comparable groups.
2. Allocation concealment: Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrolment.
3. Blinding of participants and personnel: Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective.
4. Blinding of outcome assessment: Describe all measures used, if any, to blind outcome assessors from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective.
5. Incomplete outcome data: Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group (compared with total randomised participants), reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors.
6. Selective reporting: State how the possibility of selective outcome reporting was examined by the review authors, and what was found.
7. Other sources of bias: State any important concerns about bias not addressed in the other domains in the tool. If particular questions/entries were pre-specified in the review’s protocol, responses should be provided for each question/entry.
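To illustrate the kind of structured record this produces, here is a minimal Python sketch of one study’s completed form, assuming one rating and one supporting quote per domain. The field names and the quote are invented; this is not Covidence’s actual data model.

```python
# A hypothetical per-study quality assessment record: one rating
# ('High', 'Low', or 'Unsure') plus a supporting quote per domain.
# Not Covidence's actual data model -- just an illustration.
DOMAINS = [
    "Sequence generation",
    "Allocation concealment",
    "Blinding of participants and personnel",
    "Blinding of outcome assessment",
    "Incomplete outcome data",
    "Selective reporting",
    "Other sources of bias",
]

assessment = {
    "study": "Study A",
    "judgements": {
        "Sequence generation": {
            "rating": "Low",
            "support": "Randomisation was performed using a "
                       "computer-generated sequence.",  # invented quote
        },
        # ...one entry per remaining domain...
    },
}

# A complete form has a judgement for every domain.
missing = [d for d in DOMAINS if d not in assessment["judgements"]]
print("Domains still to assess:", missing)
```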
Five top tips for successful quality assessment
Good quality assessment starts at the planning stage of a review. It’s useful to consider where bias might arise, based on what is already known about the review topic and the types of data that the search is likely to find. This will inform the content and structure of a form that review teams will create and apply to each included study in order to make judgements about study quality. Let’s take a look at five tips for successful quality assessment.
1. Follow your plan 🗺️
In setting up a form for quality assessment, it’s a good idea to follow a pre-specified process, ideally set out in a protocol or research plan before work starts on the review itself. Putting the work in upfront means that quality assessment time can be used efficiently, rather than having to stop, discard work, and start again because the chosen form turns out to be unsuitable. Specifying quality assessment methods at the protocol stage also helps minimise bias in the review process itself, which can arise if review teams have prior knowledge of the studies when they decide on those methods.
2. Double up 😃😃
As with data extraction, it’s usually advisable for quality assessment to be done by two reviewers working independently. Not only does this ensure that decisions do not rely on the input of a single person, it also reduces the risk of error: differences between the two reviewers need to be investigated, and that process can highlight mistakes.
3. Pilot the form 📋
It is useful to pilot the form using a small number of studies. Review teams can use this exercise to check that they are applying judgements about the risk of bias in a consistent way. This is an opportunity to train reviewers and to test the pre-specified process for resolving disagreements. For example, the process might be for the two reviewers whose judgements conflict to discuss their decisions and to refer to a third reviewer who will make the final decision if the conflict cannot be resolved.
4. Gather extra information 📧
When the risk of bias for a particular domain seems to be high, it’s worth considering whether this is down to the study methods themselves or to the way in which the methods were reported. If it’s the latter, consider emailing the study authors to check what actually happened, particularly if the methods are not described in sufficient detail for a judgement to be reached. Formulate your enquiries carefully to avoid leading questions.
5. Prepare to report ✍🏾
Rigorous quality assessment in a systematic review can be let down by poor reporting. Fortunately, PRISMA has your back.
The methods section of a systematic review should specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process (PRISMA checklist item #11 ☑️). Keeping a reliable record of decisions about risk of bias, from the review planning stages through to documenting the assessment itself, will help you to meet this reporting requirement.
The results section of a systematic review should present assessments of risk of bias for each included study (PRISMA checklist item #18 ☑️). Whether you decide to present these data as tables or as forest plots, comprehensive and well-structured quality assessment forms are key to good reporting.
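As a rough illustration of the tabular option, the sketch below prints a plain-text study-by-domain summary from hypothetical judgements. The studies, domains, and ratings are all invented; in practice such tables (or forest plots) would usually be generated from your QA forms or meta-analysis software.

```python
# Build a plain-text risk-of-bias summary table (studies x domains)
# from hypothetical per-study judgements; all data are invented.
judgements = {
    "Study A": {"Sequence generation": "Low", "Selective reporting": "Unsure"},
    "Study B": {"Sequence generation": "High", "Selective reporting": "Low"},
}
domains = ["Sequence generation", "Selective reporting"]

header = ["Study"] + domains
rows = [[study] + [ratings.get(d, "-") for d in domains]
        for study, ratings in judgements.items()]

# Pad each column to the width of its longest cell, then print.
widths = [max(len(r[i]) for r in [header] + rows) for i in range(len(header))]
for row in [header] + rows:
    print("  ".join(cell.ljust(w) for cell, w in zip(row, widths)))
```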
Conclusion
Quality assessment and the attempt to minimise bias are important features of the systematic review process and should be considered carefully before work gets underway on identifying studies for inclusion.
Quality assessment in Covidence is flexible and fully customisable. The intuitive workflow allows reviewers to make judgements independently and submit them for consensus. The system keeps a record of the votes so you don’t have to.