Basic Overview of how the CEBC Research Staff Conducts a Literature Search
The CEBC research staff conducts a literature search on each program in multiple electronic databases to obtain as comprehensive a list of peer-reviewed, published articles as possible. This is referred to as research evidence on the CEBC website. You can replicate this process using whichever databases are accessible to you.
Google Scholar is an excellent research search engine. It is available on the Internet without charge, and its settings allow you to locate copies of identified publications in local libraries. It can also generate citations in the appropriate format and allows you to import them directly into citation-management software, such as EndNote.
Many university libraries have access to numerous electronic databases that can be used to conduct a literature search for relevant research articles. Many of these are fee-based, with use restricted to university staff, faculty, and students. The CEBC research staff routinely searches the following:
- Campbell Collaboration
- Cochrane Collaboration
- Child Welfare Information Gateway
- Google Scholar
- Social Services Abstracts
Types of Study Designs
CEBC research staff encounters many study designs during an academic literature search; however, experimental and quasi-experimental studies are the focus when rating a program on the Scientific Rating Scale. The main types of study designs found on the CEBC website are:
- Randomized controlled trial (RCT)
- Nonrandomized trial with a control group, or pretest-posttest study with a control group
- Posttest only study with a control group
The CEBC research staff does not consider the following two study designs when rating a program due to the lack of a control/comparison group:
- Pretest-posttest study without a control group
- Posttest-only study without a control group
How to Evaluate Outcomes in a Study
When reviewing a research article, the CEBC research staff focuses on the outcomes of the study that relate to what is being reviewed (e.g., a study of a depression treatment should include clear measurements of depression symptoms over time). Most studies examine multiple outcomes, so it is important to clarify which measures relate to which outcomes.
The CEBC research staff summarizes each research article used in the review process. Each summary includes a brief description of the sample, the study methods, and the measures. The results of the study are listed, including potential biases and other limitations. To see a research summary, click on the name of any of the programs listed here, then click on “Relevant Published, Peer-Reviewed Research” near the bottom of the program’s page; the summaries will appear below it.

The CEBC research staff reviews all programs and, if applicable, assigns them a rating of 1-5 on the CEBC Scientific Rating Scale within the topic area in which the program is being highlighted. A program can have different ratings for different topic areas since the outcomes being measured change from topic area to topic area. If there is no comparison research evidence available, the program is given a “NR – Not able to be Rated” classification, but any available research evidence is still summarized and added to the program’s entry. A program can also be listed as NR in multiple topic areas when its goals match the outcomes examined in those areas.
Examining Bias in a Study
In order to examine possible bias in a study, the CEBC staff reviews research articles for the following information:
Attrition Bias: Do the study authors either report attrition/drop-out statistics or state that all participants who started the study completed it? What was the overall attrition for the study? If the study is a randomized controlled trial, the key question for attrition bias is whether the investigators used an intention-to-treat (ITT) analysis, examining outcomes for all subjects in the trial regardless of whether they completed the study.
Confounding Bias: Were differences between groups taken into account in the statistical analysis?
Detection Bias: Were outcome assessors unaware of which intervention the participants received (blinded)? Were outcome measures equally applied across all groups?
Performance Bias: Were measures taken to ensure intervention fidelity?
Reporting Bias: Are all outcome measures reported in the results?
Selection Bias: Were the groups similar at baseline? When reviewing an RCT, the CEBC staff determines whether the intervention/treatment allocation was concealed and whether randomization was adequate. For nonrandomized trials and observational studies, the CEBC staff determines whether the groups were recruited from the same population sources and whether inclusion and exclusion criteria were applied equally to both groups.
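The intention-to-treat question under attrition bias can be made concrete with a small sketch. The data, the dropout pattern, and the imputation strategy below are all hypothetical; baseline-observation-carried-forward (BOCF) is only one of several ways to include every randomized participant in an ITT analysis:

```python
# Hypothetical post-treatment depression scores (lower = fewer symptoms).
# None marks a participant who dropped out before the post-test.
treatment_post = [5, 4, None, 6, None, 3]
baseline = [12, 11, 13, 10, 14, 12]

# Completers-only analysis: dropouts are simply excluded.
completers = [score for score in treatment_post if score is not None]
completers_mean = sum(completers) / len(completers)

# One simple ITT approach (BOCF): carry each dropout's baseline score
# forward so that every randomized participant stays in the analysis.
itt = [post if post is not None else base
       for post, base in zip(treatment_post, baseline)]
itt_mean = sum(itt) / len(itt)

print(completers_mean)  # 4.5 — looks better because dropouts vanish
print(itt_mean)         # 7.5 — more conservative estimate
```

The gap between the two means illustrates why excluding dropouts can make a treatment look more effective than it was for everyone who started it.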
The CEBC raters may take biases and limitations into consideration during the rating process if they directly impact the rating criteria. For example, a research study that allows the study’s staff to assign subjects to groups and does not use appropriate randomization techniques (e.g., random number assignment or a similar method) has a selection bias and cannot be considered a randomized controlled trial. In addition, small sample sizes may lead to statistical power issues, making it difficult to detect statistically significant differences between the groups.
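As a rough illustration of the statistical power issue, a standard normal-approximation formula for comparing two group means shows how quickly the required sample size grows as the expected effect shrinks. This is a textbook approximation, not the CEBC's own procedure, and real power analyses depend on the specific statistical test used:

```python
import math

# Normal-approximation constants for a two-sided test at alpha = 0.05
# with 80% power (standard textbook values).
Z_ALPHA = 1.96
Z_BETA = 0.84

def n_per_group(effect_size):
    """Approximate participants needed per group to detect a
    standardized mean difference (Cohen's d) of `effect_size`."""
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium effect -> 63 per group
print(n_per_group(0.3))  # smaller effect -> 175 per group
```

A study with, say, 20 participants per group would thus be underpowered even for a medium effect, which is why a nonsignificant result in a small study cannot be read as evidence that the groups truly do not differ.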
Page last updated on 10/21/2019.