Glossary

Adaptation

Making slight changes to a practice while maintaining fidelity to the core elements of the intervention in order to improve fit with client, organization, and/or system characteristics. Conversely, service systems and organizations often need to adapt to the delivery standards of an evidence-based practice in order to support implementation and sustainment with fidelity.

Adoption

Two definitions are used within the CEBC: 1) In child welfare, the act of adopting a child by becoming their parent(s). 2) In implementation, the decision to make full use of an innovation.

Anecdotal

Information that is based on casual observations or indications rather than rigorous or scientific analysis.

Assessment tool

The CEBC defines an assessment tool as an in-depth questionnaire or procedure used to understand a child's and/or family's strengths and needs, such as functioning, family and individual history, symptoms, and the impact of trauma. 

Attrition

The loss of participants from a sample being used in a study. Attrition may occur because participants drop out of the study or because researchers lose contact with them.

Control Group

The use of a control in a comparison research study. A control is a standard against which experimental observations may be evaluated. In a controlled study, one group of participants is given an intervention, while another group (i.e., the control group) is given the standard treatment or a placebo. For example, one classroom of 4th graders may receive an interventional health curriculum while the classroom of 4th graders across the hall receives the standard health curriculum and serves as the control group.

Correlation

A measure of the relationship between scores. Correlation scores vary between -1.0 and +1.0, with zero indicating no correlation. Scores that measure similar things, such as related items on a scale or scores on scales measuring the same concept, should be highly correlated. Scores that measure different things should show a low correlation. It is also possible to have negative correlation scores, indicating an inverse relationship. For example, depression should be negatively correlated with well-being.
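
To make the numbers concrete, here is a minimal Python sketch of Pearson's r, the most common correlation coefficient. The depression and well-being scores are invented for illustration, not data from any study.

```python
# A hand-rolled Pearson correlation using only the standard library.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

depression = [10, 8, 6, 4, 2]  # hypothetical depression scores
well_being = [1, 3, 5, 7, 9]   # hypothetical well-being scores
print(pearson_r(depression, well_being))  # -1.0: a perfect inverse relationship
```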

Dissemination

The targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to spread knowledge and the associated evidence-based interventions.

Effectiveness Trial

Focuses on whether a treatment works when used in the real world. An effectiveness trial is done after the intervention has been shown to have a positive effect in an efficacy trial.

Efficacy Trial

Focuses on whether an intervention can work under ideal circumstances and looks at whether the intervention has any effect at all.

Empirical Research

Research conducted 'in the field', where data are gathered first-hand and/or through observation. Case studies and surveys are examples of empirical research.

Factor Analysis

A statistical method used to verify that a scale or assessment has a certain number of dimensions. That is, items in a scale should correlate with each other to form subscales, each of which represents a single concept.
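
For illustration, here is a minimal Python sketch, assuming NumPy and scikit-learn are installed. It generates made-up responses to six items driven by two underlying dimensions, then fits a two-factor model; the "anxiety" and "depression" labels are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
anxiety = rng.normal(size=(200, 1))     # hypothetical underlying dimension 1
depression = rng.normal(size=(200, 1))  # hypothetical underlying dimension 2
noise = 0.3 * rng.normal(size=(200, 6))
# Items 1-3 are driven by "anxiety"; items 4-6 by "depression".
items = np.hstack([anxiety.repeat(3, axis=1), depression.repeat(3, axis=1)]) + noise

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.round(2))  # loadings: each row is a factor, each column an item
```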

Fictive Kin

Any nonrelative adult who has a familiar and long-standing relationship or bond with the child or the family, a relationship or bond that will ensure the child's social ties.

Fidelity

The extent to which an intervention is implemented as intended by the designers of the intervention. Thus, fidelity refers not only to whether all the intervention components and activities were actually implemented, but also to whether they were implemented in the proper manner.

Implementation

The use of strategies to introduce or change evidence-based health interventions within specific settings.

Intervention Developers

Individuals or companies that develop new programs/interventions for the child welfare population. They generally give priority to developing the most efficacious interventions possible. Often this involves intervention development in an academic setting that is quite different from usual care public child welfare settings. Because of this, it is important to work with intervention developers on potential adaptations of processes or content to fit within usual care service settings.

Meta-analysis

A statistical technique that summarizes the results of several studies into a single estimate of their combined result. It is a key element of many systematic reviews.
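
One common pooling method is fixed-effect, inverse-variance weighting: each study's effect size is weighted by the inverse of its squared standard error, so more precise studies count for more. The Python sketch below uses invented effect sizes and standard errors.

```python
# Fixed-effect inverse-variance pooling of three hypothetical studies.
studies = [  # (effect size, standard error)
    (0.40, 0.15),
    (0.25, 0.10),
    (0.55, 0.20),
]
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")  # 0.33 (SE = 0.08)
```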

Nonrandomized Controlled Trial, or Pretest-Posttest Study with a Control Group

A study type that assigns participants to treatment conditions but does not use random assignment and is not as rigorous as a Randomized Controlled Trial. Standardized assessment measures are administered at intake and post-intervention to monitor symptom improvement and determine outcomes. The study may administer measures at established time points during and after treatment to determine the long-term effects of the intervention. Treatment outcomes for the intervention group(s) are evaluated to determine if the intervention was more effective than services as usual.

One-group pretest–posttest study

A type of study that evaluates a treatment or intervention in a single sample using standardized assessment measures administered at intake and post-intervention to monitor symptom improvement and determine outcomes. The study may administer measures at established time points during and after treatment to determine the long-term effects of the intervention. Because this type of study has no comparison group, only the group that receives the intervention is examined, so one cannot be certain that any changes seen were caused by the intervention itself, as other factors may have caused the changes.

Peer-Review

A refereeing process used to check the quality and importance of research studies. It aims to provide a wider check on the quality and interpretation of a report. For example, an article submitted for publication in a peer-reviewed journal is reviewed by other experts in the field.

Posttest Only Study

A study type that evaluates a treatment or intervention in a single sample using standardized assessment measures administered at post-intervention only to determine outcomes. The study may administer measures at established time points after the conclusion of treatment to determine long-term effects of the intervention.

Prevention (Primary)

Type of prevention consisting of activities designed to impact families prior to any allegations of abuse and neglect. These include public education activities, such as parent education classes, family support programs, public awareness campaigns, etc.

Prevention (Secondary)

Type of prevention consisting of activities targeted to families that have one or more risk factors, including families with substance abuse or domestic violence issues, teenaged parents, parents of special needs children, single parents, and low-income families. These services include parent education classes for high-risk parents, respite care, home visiting programs, crisis nurseries, etc.

Prevention (Tertiary)

Type of prevention consisting of activities targeted to families in which abuse has already occurred. These include early intervention and targeted services, such as individual, group, and family counseling; parenting education, such as Parent-Child Interaction Therapy (PCIT); community and social services referrals for substance abuse treatment, domestic violence services, psychiatric evaluations, and mental health treatment; infant safe-haven programs; family reunification services (including follow-up care programs for families after a child has been returned); temporary child care; etc.

Randomization

A process that reduces the likelihood of bias by assigning people to specific groups (e.g., experimental and control groups) by chance alone (randomly). When groups are created by random assignment, individual characteristics are less likely to make the results inaccurate.
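
As a simple illustration, the Python sketch below assigns twenty placeholder participants to two groups by chance alone.

```python
import random

participants = [f"participant_{i}" for i in range(1, 21)]  # placeholder names
random.shuffle(participants)           # order is now determined by chance alone
half = len(participants) // 2
intervention_group = participants[:half]
control_group = participants[half:]
print(len(intervention_group), len(control_group))  # 10 10
```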

Randomized controlled trial

A study type where participants are randomly assigned to receive either an intervention or control treatment (often usual care services). This allows the effect of the intervention to be studied in groups of people who are: (1) the same at the outset and (2) treated the same way, except for the intervention(s) being studied. Any differences seen in the groups at the end can be attributed to the difference in treatment alone, and not to bias or chance.

Reliability

The extent to which the same result will be achieved when the same measure or study is repeated. There are four types of reliability mentioned on this website:

  1. Inter-rater - Persons independently administering the same assessment to the same person should have highly similar results.
  2. Internal - Items on an assessment aimed at measuring the same thing or parts of the same thing (e.g., physical symptoms of anxiety) should be correlated.
  3. Split-half - A method of measuring internal reliability by verifying that scores on one half of the items on a scale correlate with scores on the other half (see the sketch after this list).
  4. Test-retest - A method in which the same measure is administered multiple times and the resulting scores are compared. Assuming no important intervening events, a person's scores on a measure taken multiple times should be correlated.
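
For illustration, here is a minimal Python sketch of the split-half and test-retest checks, using the standard library's statistics.correlation (available in Python 3.10+); all scores are invented.

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Split-half: scores on one half of the items vs. scores on the other half.
odd_items = [12, 15, 9, 14, 11, 8]    # each person's total on odd-numbered items
even_items = [11, 16, 10, 13, 12, 7]  # each person's total on even-numbered items
print(correlation(odd_items, even_items))

# Test-retest: the same measure given to the same people at two time points.
time1 = [30, 25, 40, 35, 28, 33]
time2 = [31, 24, 41, 33, 29, 34]
print(correlation(time1, time2))
```
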
Research Evidence

Defined on this website as research study outcomes that have been published in a peer-reviewed journal.

Screening tool

The CEBC defines a screening tool as a brief questionnaire or procedure that examines risk factors, mental health/trauma symptoms, or both to determine whether further, more in-depth assessment is needed on a specific area of concern, such as mental health, trauma, or substance use. Since the goal is to identify specific needs among a broad group, screening is usually done with a large population, like all children referred to Child Welfare Services or all children entering out-of-home care. A positive result on a screening tool should result in a referral for a more thorough assessment.

Sensitivity

A measure of how well a test identifies people with a specific disease or problem. For example, a depression screening tool with high sensitivity will have a positive result for most people who have depression, and a negative test result means depression is unlikely (few false negatives).

Specificity

A measure of how well a test excludes people without a specific disease or problem. For example, a depression screening tool with high specificity will give a negative result for most people who do not have depression, and a positive test means the person likely has depression (few false positives).
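
Sensitivity and specificity are easiest to see side by side. The Python sketch below computes both from a single set of invented screening counts.

```python
# Counts from a hypothetical depression screening of 150 people.
true_positives = 45   # screened positive, actually depressed
false_negatives = 5   # screened negative, actually depressed
true_negatives = 90   # screened negative, not depressed
false_positives = 10  # screened positive, not depressed

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity = {sensitivity:.2f}")  # 0.90: catches most true cases
print(f"specificity = {specificity:.2f}")  # 0.90: correctly clears most non-cases
```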

Validity

The degree to which a result is likely to be true and free of bias. There are many types of validity:

  • Concurrent - Scores on an assessment should be related to scores on a previously-validated measure of the same or similar construct/concept.
  • Construct - The assessment measures content related to the theoretical definition of the assessment's purpose (the construct/concept). For example, items on a depression assessment measure should address the diagnostic criteria for depression.
  • Content - Similar to construct validity. Assessment items should address the full range of the criteria for the construct/concept being measured.
  • Convergent - Scores on assessments designed to measure the same construct (e.g., different depression assessment measures) should be positively correlated.
  • Criterion - Scores on an assessment should relate to or predict outcomes relevant to its theoretical construct/concept. For example, an assessment of mathematical aptitude should predict performance in a mathematics class.
  • Divergent - Measures of constructs/concepts that are not theoretically related (e.g., age and intelligence) should not be correlated across different scales.
  • External - The extent to which the results of a study apply to people other than those who were in the study; that is, how generalizable the results are to people outside the study.
  • Face - Items on an assessment should appear to the reader to measure what the assessment is designed to measure. Note, however, that for some assessments intended to measure socially undesirable traits or behaviors, concealing the nature of the assessment may make it a more valid measure of the construct. For example, an assessment of abusive behavior might not contain the term "abuse," but might focus instead on specific acts.
  • Internal - The extent to which a study properly measures what it is intended to measure.