- Absorptive capacity
The maximum amount of information or change that an organization, system, and/or individual can manage within a given time frame. The concept is that change must often proceed in a way that considers how much can be effectively managed at a given time.
- Adaptation
Making slight changes to a practice while maintaining fidelity to the core elements of the intervention in order to improve fit with client, organization, and/or system characteristics. Conversely, it is often the case that service systems and organizations need to adapt to the delivery standards of an evidence-based practice in order to support implementation and sustainment with fidelity.
- Adoption
Two definitions of this are used within the CEBC. 1) The act of adopting a child by becoming his/her parent(s). 2) In implementation, the decision to make full use of an innovation.
- Anecdotal Evidence
Information based on casual observations or indications rather than rigorous or scientific analysis.
- Anhedonia
An inability to experience pleasure from normally pleasurable life events such as eating, exercise, and social interaction.
- Assessment tool
The CEBC defines an assessment tool as an in-depth questionnaire or procedure used to understand a child’s and/or family’s strengths and needs, such as functioning, family and individual history, symptoms, and the impact of trauma. The focus of the CEBC is on tools used during assessments done by child welfare staff, and not on tools used during clinical assessments that may be completed by licensed mental health or medical professionals.
- Attrition
The loss of participants from a sample being used in a study. Attrition may be due to participants dropping out of the study or losing contact with researchers.
- Case-control Study
A type of study that compares people with a disease or condition ('cases') to another group of people from the same population who don't have that disease or condition ('controls'). A case-control study can identify risks and trends, and suggest some possible causes for disease, or for particular outcomes. For example, a study could compare 4th graders with ADHD to a group of 4th graders without ADHD.
- Cohort Study
A type of study where a 'cohort' (a group of people) is clearly identified. This cohort is followed over time, and what happens to them is reported. A cohort study is an observational study, and it can be prospective (following people forward over time) or retrospective (looking at what happened in the past). For example, a cohort study of 4th graders could follow them forward as they age, or look back at their previous health and school histories.
- Consumer Support/Advocacy
The influence of formal and informal advocacy organizations and efforts on health, mental health, or social services. For example, the National Alliance for Mental Illness (NAMI) has an influence on policies that are set by legislative bodies or service systems, and such policies, in turn, impact provider organizations.
- Controlled Settings
The use of a control in a comparison research study. A control is a standard against which experimental observations may be evaluated. In a controlled group study, one group of participants is given an intervention, while another group (i.e., the control group) is given the standard treatment or a placebo. For example, one classroom of 4th graders may receive an interventional health curriculum while the classroom of 4th graders across the hall receives the standard health curriculum and serves as the control group.
- Correlation
A measure of the relationship between scores. Correlation scores vary between -1.0 and +1.0, with zero indicating no correlation. Scores that are measuring similar things, such as related items on a scale or scores on scales measuring the same concept, should be highly correlated. Scores that are measuring different things should show a low correlation. It is also possible to have negative correlation scores, indicating an opposing relationship. For example, depression should be negatively correlated with well-being.
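As a rough sketch, the correlation described above can be computed directly. The scores below are invented for illustration (they are not CEBC data); the example shows the expected negative correlation between hypothetical depression and well-being scores.

```python
# Illustrative sketch of a Pearson correlation coefficient.
# All variable names and scores here are hypothetical, not real study data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, ranging from -1.0 to +1.0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

depression = [2, 5, 7, 9, 12]    # higher = more depressive symptoms
well_being = [14, 11, 9, 6, 3]   # higher = greater well-being

print(round(pearson(depression, well_being), 2))  # strongly negative (close to -1.0)
```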
- Cross-validation
A method of testing validity by using more than one sample of people from the same population.
- Diagnostic and Statistical Manual of Mental Disorders
A manual written and published by the American Psychiatric Association in Washington, DC, which is used by mental health professionals to diagnose mental disorders in children and adults. It is usually abbreviated DSM, and the numeral following it designates which edition is being cited (e.g., DSM-III, DSM-III-R, DSM-IV, DSM-IV-TR, DSM-5).
- Diagnostic and Statistical Manual of Mental Disorders
- Diffusion
The process by which an innovation is communicated through certain channels over time among the members of a social system.
- Dissemination
The targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to spread knowledge and the associated evidence-based interventions.
- EBP Characteristics
The content; materials; training requirements; certification; philosophical and scientific approach; and other characteristics of a given evidence-based practice. All of these factors can impact the fit of the practice with the system, organization, providers, and clients.
- EBP Organization Fit
The fit of the evidence-based practice with the mission and vision of a given organization and the structure and processes used to deliver services in that organization.
- EBP Provider Fit
The perceived and actual fit of a given evidence-based practice with the attitudes, beliefs, needs, values, skills, and abilities of direct service providers. It is important to note that when implementing a new practice there may be a perception of poor fit until providers become familiar and skilled in use of the practice.
- EBP System Fit
The fit of a given evidence-based practice with the policies, funding, and contracting of a given service system.
- Effectiveness Trial
Focuses on whether a treatment works when used in the real world. An effectiveness trial is done after the intervention has been shown to have a positive effect in an efficacy trial.
- Efficacy
Power or capacity to produce a desired effect.
- Efficacy Trial
Focuses on whether an intervention can work under ideal circumstances and looks at whether the intervention has any effect at all.
- Empirical Research
Research conducted 'in the field', where data are gathered first-hand and/or through observation. Case studies and surveys are examples of empirical research.
- Factor Analysis
A statistical method used to verify that a scale or assessment has a certain number of dimensions. That is, items in a scale should correlate with each other to form subscales that each represent a single concept.
- Fidelity
The extent to which an intervention is implemented as intended by the designers of the intervention. Thus, fidelity refers not only to whether or not all the intervention components and activities were actually implemented, but whether they were implemented in the proper manner.
- Implementation
The use of strategies to introduce or change evidence-based health interventions within specific settings.
- Individual Adopter Characteristics
The demographic, experiential, and attitudinal characteristics of the clinicians and case managers who provide health, mental health, and social services. Demographic characteristics include age, race/ethnicity, and gender. Experiential characteristics include discipline (e.g., social work, psychology, nursing), education level, years of experience, years working in a given organization or setting, work with specific client populations, and experience in using EBPs. Attitudinal characteristics include work attitudes such as job satisfaction and organizational commitment, adaptability, attitudes toward EBPs, and team vs. individual work preference.
- Inner Context
The interplay of intraorganizational characteristics with individual adopters and others in the organization that can support or detract from effective evidence-based practice (EBP) implementation. Strong leadership that supports the importance of evidence-based practices in the organization can help to promote more positive staff attitudes toward adopting evidence-based practices.
- Innovation-values fit
The extent to which an evidence-based practice fits with the values, mission, and vision of a given organization or service system. In a practical sense, it also refers to the fit of an evidence-based practice with the needs of clients, and with the values and theoretical orientation of service providers.
- Inter-Organizational Environment
The relationships and interactions of service systems, advocacy organizations, community-based organizations, regulatory bodies, and any other entities that might impact the type, amount, or quality of service provision.
- Intervention Developers
Individuals or companies that develop new programs/interventions for the child welfare population. They generally give priority to developing the most efficacious interventions possible. Often this involves intervention development in an academic setting that is quite different from usual care public child-welfare settings. Because of this, it is important to work with intervention developers on potential adaptations of process or content to fit within usual care service settings.
- Intraorganizational Characteristics
The leadership, culture, and climate of an organization. It also includes the policies and practices that are sanctioned and supported by organization management. Such characteristics can be important in creating a fertile environment for the implementation and sustainment of evidence-based practices (EBPs).
- Matched Comparison Study
A study type in which groups who will be compared are created by a non-random method, but where participants in each group are assigned so that they are similar in important characteristics such as ethnic or socioeconomic status, assessment scores, or other variables that might affect study outcomes.
- Matched Wait List Study
A study type where subjects are matched into pairs based on certain characteristics, such as age, gender, or race/ethnicity. One member of each pair is then assigned to the intervention group, while the other is assigned to a wait-list group, which will receive the intervention at a later time. The wait-list group serves as the control group.
- Meta-analysis
A statistical technique which summarizes the results of several studies into a single estimate of their combined result. It is a key element of many systematic reviews.
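One common way a meta-analysis combines results is a fixed-effect, inverse-variance weighted average, in which more precise studies (those with smaller standard errors) receive more weight. The sketch below uses invented effect sizes and standard errors purely for illustration.

```python
# Fixed-effect, inverse-variance meta-analysis sketch. The effect sizes and
# standard errors below are invented for illustration, not real study data.

studies = [
    # (effect size, standard error) from each hypothetical study
    (0.30, 0.10),
    (0.45, 0.20),
    (0.25, 0.15),
]

weights = [1 / se ** 2 for _, se in studies]  # precision = 1 / variance
combined = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se_combined = (1 / sum(weights)) ** 0.5       # SE of the pooled estimate

print(f"pooled effect = {combined:.3f} (SE {se_combined:.3f})")
```

Note that the pooled standard error is smaller than any individual study's, which is why combining studies yields a more precise overall estimate.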
- Nonrandomized Controlled Trial, or Pretest-Posttest Study with a Control Group
A study type that assigns participants to treatment conditions but does not use random assignment and is not as rigorous as a Randomized Controlled Trial. Standardized assessment measures are administered at intake and post-intervention to monitor symptom improvement and determine outcomes. The study may administer measures at established time points during and after treatment to determine long-term effects of the intervention. Treatment outcomes for the intervention group(s) are evaluated to determine if the intervention was more effective than services as usual.
- Observability
Being able to see the process and outcomes or interim results/measures for a given evidence-based practice.
- One Group Pretest-Posttest Study, or Uncontrolled Group Study
A type of study that evaluates a treatment or intervention in a single sample using standardized assessment measures administered at intake and post-intervention to monitor symptom improvement and determine outcomes. The study may administer measures at established time points during and after treatment to determine long-term effects of the intervention. This is a type of study that does not have another group to compare results objectively against. In this case, only the group that receives the intervention is examined, so one cannot be certain that any changes seen were caused by the intervention itself, as other factors may have caused the changes.
- One Group Pretest-Posttest Study, or Uncontrolled Group Study
- Operant Conditioning
A process of behavior modification in which the likelihood of a specific behavior is increased or decreased through positive or negative reinforcement each time the behavior is exhibited, so that the subject comes to associate the pleasure or displeasure of the reinforcement with the behavior.
- Outer Context
The sociopolitical context: the larger, system-wide political, legislative, and funding environment of a service system. This can be construed at the federal (i.e., country), state, county, or city level, depending on the nature of the service system. The United States has certain legislative mandates that influence policies related to required services and funding to support those services.
- Peer Review
A refereeing process used to check the quality and importance of research studies. It aims to provide a wider check on the quality and interpretation of a report. For example, an article submitted for publication in a peer-reviewed journal is reviewed by other experts in the field. For a more detailed explanation of the peer-review process, see the CEBC's page on published, peer-reviewed research. For a quick, easy-to-understand tutorial on how the peer-review process works and why it is beneficial, see the University of California, Berkeley's website.
- Placebo Group
A group that is given a placebo in a research study. A placebo is something that does not directly affect the behavior or symptoms under study in any specific way. A researcher must be able to separate placebo effects from the actual effects of the intervention being studied. For example, in a drug study, subjects in the experimental and placebo groups may receive identical-looking medication, but those in the experimental group are receiving the study drug while those in the placebo group are receiving a sugar pill. Typically, subjects are not aware whether they are receiving the study drug or a placebo.
- Posttest Only Study
A study type that evaluates a treatment or intervention in a single sample using standardized assessment measures administered at post-intervention only to determine outcomes. The study may administer measures at established time points after the conclusion of treatment to determine long-term effects of the intervention.
- Posttest Only Study with a Control Group
A study type that evaluates a treatment or intervention in one or more samples using a control or comparison group. Standardized assessment measures are administered at post-intervention only to determine outcomes. The study may administer measures at established time points after the conclusion of treatment to determine long-term effects of the intervention. Treatment outcomes for the intervention group(s) are evaluated to determine if the intervention was more effective than services as usual.
- Prevention (Primary)
Type of prevention consisting of activities designed to impact families prior to any allegations of abuse and neglect; these include public education activities, such as parent education classes, family support programs, public awareness campaigns, etc.
- Prevention (Secondary)
Type of prevention consisting of activities targeted to families that have one or more risk factors, including families with substance abuse or domestic violence issues, teenaged parents, parents of special needs children, single parents and low-income families. These services include parent education classes for high-risk parents, respite care, home visiting programs, crisis nurseries, etc.
- Prevention (Tertiary)
Type of prevention consisting of activities targeted to families in which abuse has already occurred; these include early intervention and targeted services, such as individual, group, and family counseling; parenting education - such as Parent-Child Interactive Therapy (PCIT); community and social services referrals for substance abuse treatment, domestic violence services, psychiatric evaluations, and mental health treatment; infant safe-haven programs; family reunification services (including follow-up care programs for families after a child has been returned); temporary child care; etc.
- Random Assignment
A process that reduces the likelihood of bias by assigning people to specific groups (e.g., experimental and control groups) by chance alone (randomly). When groups are created by random assignment, individual characteristics are less likely to make the results inaccurate.
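A minimal sketch of random assignment, using hypothetical participant IDs: shuffling the roster before splitting it means that group membership depends on chance alone.

```python
# Random assignment sketch; the participant IDs here are hypothetical.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
random.shuffle(participants)                        # order now depends on chance alone

midpoint = len(participants) // 2
experimental_group = participants[:midpoint]        # receives the intervention
control_group = participants[midpoint:]             # receives usual care / placebo

print(len(experimental_group), len(control_group))  # 10 10
```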
- Randomized Controlled Trial
A study type where participants are randomly assigned to receive either an intervention or control treatment (often usual care services). This allows the effect of the intervention to be studied in groups of people who are: (1) the same at the outset and (2) treated the same way, except for the intervention(s) being studied. Any differences seen in the groups at the end can be attributed to the difference in treatment alone, and not to bias or chance.
- Reinforcement
A stimulus, such as a reward, the removal of an unpleasant event, or punishment, that in operant conditioning maintains or strengthens a desired response.
- Reliability
The extent to which the same result will be achieved when repeating the same measure or study again. There are four types of reliability mentioned on this website:
- Inter-rater - Persons independently administering the same assessment to the same person should have highly similar results.
- Internal - Items on an assessment aimed at measuring the same thing or parts of the same thing (e.g., physical symptoms of anxiety) should be correlated.
- Split-half - A method of measuring internal reliability by verifying that half of the items on a scale are correlated with the other half.
- Test-retest - A method in which the same measure is administered multiple times and the resulting scores are compared. Assuming no important intervening events, a person's scores on a measure taken multiple times should be correlated.
- Research Evidence
Defined on this website as research study outcomes that have been published in a peer-reviewed journal.
- Rollout
A staged approach to implementation in which the implementation process begins with a portion of the organization or service system and eventually moves to complete implementation in the whole organization or larger community.
- Screening tool
- Screening tool
The CEBC defines a screening tool as a brief questionnaire or procedure that examines risk factors, mental health/trauma symptoms, or both to determine whether further, more in-depth assessment is needed on a specific area of concern, such as mental health, trauma, or substance use. Since the goal is to identify specific needs among a broad group, screening is usually done with a large population, like all children referred to Child Welfare Services or all children entering out-of-home care. A positive result on a screening tool should result in a referral for a more thorough assessment.
- Sensitivity
A measure of how well a test identifies people with a specific disease or problem. For example, a depression screening tool with high sensitivity will have a positive result for most people who have depression, and a negative test result means depression is unlikely (few false negatives).
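With hypothetical screening counts, sensitivity is simply the proportion of people who have the condition that the test correctly flags:

```python
# Sensitivity sketch: the counts below are hypothetical screening results.
true_positives = 90    # people with depression who screened positive
false_negatives = 10   # people with depression whom the screen missed

sensitivity = true_positives / (true_positives + false_negatives)
print(sensitivity)  # 0.9
```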
- Service Environment
The larger environment of service delivery - in particular, those aspects of the sociopolitical context that have a bearing on the funding and contracting for services. Funding streams at the state level may be a function of legislative actions that promote funding for specific types of services. For example, the California Mental Health Services Act (Prop. 63), passed in November 2004, instituted a tax on individual income earnings over $1 million per year and set those dollars aside for mental health services. Counties within the state then received some of these dollars to support mental health services. These services took a number of forms, including services provided directly by public agencies and services delivered through contracts with community-based nonprofit organizations.
- Single Subject Study
A study type with a prospective observation design that focuses on a single subject and typically involves reporting data individually over time.
- Specificity
A measure of how well a test excludes people without a specific disease or problem. For example, a depression screening tool with high specificity will give a negative result for most people who do not have depression, and a positive test means the person likely has depression (few false positives).
- Trialability
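With hypothetical counts, specificity is the proportion of people without the condition whom the test correctly clears:

```python
# Specificity sketch: the counts below are hypothetical screening results.
true_negatives = 85    # people without depression who screened negative
false_positives = 15   # people without depression who screened positive

specificity = true_negatives / (true_negatives + false_positives)
print(specificity)  # 0.85
```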
The extent to which an evidence-based practice lends itself to being tried out and tested in a way that gives a determination of its applicability before deciding on a full-scale implementation.
- Uncontrolled Group Study
A study that does not have another group to compare results objectively against. In this case, only the group that receives the intervention is examined, so you cannot be certain that any changes seen were caused by the intervention itself, as other factors may have been acting.
- Untreated Group
A group that is untreated in a research study. This group serves as a control group for comparison with the treatment or intervention group. This group receives no treatment at all during the study.
- Validation Sample
A group of people used to test the validity of a measure.
- Validity
The degree to which a result is likely to be true and free of bias. There are many types of validity:
- Concurrent – Scores on an assessment should be related to scores on a previously-validated measure of the same or similar construct/concept.
- Construct – The assessment measures content related to the theoretical definition of the assessment's purpose (the construct/concept). For example, items on a depression assessment measure should address the diagnostic criteria for depression.
- Content – Similar to construct validity. Assessment items should address the full range of the criteria for the construct/concept being measured.
- Convergent – Scores on assessments designed to measure the same construct (e.g., different depression assessment measures) should be positively correlated.
- Criterion – Scores on an assessment should relate to or predict outcomes relevant to its theoretical construct/concept. For example, an assessment of mathematical aptitude should predict performance in a mathematics class.
- Divergent – Measures of constructs/concepts that are not theoretically related (e.g., age and intelligence) should not be correlated across different scales.
- External – The extent to which the results of a study can apply to people other than those who were in the study; a measure of how generalizable the results are to others outside of the study.
- Face – Items on an assessment should appear to the reader to measure what the assessment is designed to measure. Note: However, for some assessments intended to measure socially undesirable traits or behaviors, concealing the nature of the assessment may make it a more valid measure of the construct. For example, an assessment of abusive behavior might not contain the term "abuse," but might focus instead on specific acts.
- Internal – The extent to which a study properly measures what it is meant to measure.