THIS PAGE IS UNDER CONSTRUCTION
Designs Used In Evaluation Research:
As already mentioned on the main page, the gold standard for evaluation is a “randomized controlled trial,” also referred to as a “randomized experiment.” In a randomized controlled trial, “units” (e.g., individuals, classrooms, branches, departments, factories) are randomly assigned to one of two experimental conditions, either the “intervention condition” or the “control condition.” Relevant outcomes are then measured in all units several days, months, or years later.
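The logic of random assignment and outcome comparison can be illustrated with a minimal sketch. The unit names, outcome numbers, and two-group split below are purely hypothetical; a real trial would use many more units and a proper statistical test.

```python
import random

def randomize(units, seed=42):
    """Randomly split units into an intervention group and a control group."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(values):
    return sum(values) / len(values)

# Ten hypothetical units (e.g., branches of an organization).
units = [f"branch_{i}" for i in range(10)]
intervention, control = randomize(units)

# Outcomes measured months later (illustrative numbers only).
outcomes = {u: (3.5 if u in intervention else 3.0) for u in units}

# The estimated intervention effect is the difference in group means.
effect = mean([outcomes[u] for u in intervention]) - mean([outcomes[u] for u in control])
```

Because the split is random, the two groups are comparable on average, so the difference in means can be attributed to the intervention rather than to pre-existing differences.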
A closely related evaluation method is the “randomized rollout design.” As in a randomized controlled trial, units are randomly assigned to experimental conditions. Those in the intervention condition are exposed to the intervention right away, while those in the control condition are put on a “waitlist.” Outcomes are measured in all units several days, months, or years later. Once the measurement of the outcomes is completed, units in the control condition are also exposed to the intervention.
In the “non-equivalent control group design,” individuals in a particular setting are exposed to a particular intervention and outcomes are assessed. Their responses are then compared to those of individuals in other settings that are as similar as possible. For example, Hull et al. (2017) implemented their Acceptance Journey intervention to reduce homophobia against gay men in Milwaukee, WI, and then compared Milwaukee to two comparable cities.
In some settings, evaluation researchers employ a “regression discontinuity design” where all units below or above a certain threshold are exposed to the intervention. For example, a fast food company may decide to implement a particular pro-diversity initiative in all its restaurants where the yearly turn-over rate of ethnic minority employees exceeds 40%.
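The defining feature of this design is that assignment is determined entirely by whether a unit crosses the threshold. The sketch below illustrates that assignment rule with hypothetical restaurant names and turnover rates; the 40% cutoff comes from the example above.

```python
# Threshold on the "running variable" (yearly minority turnover rate).
THRESHOLD = 0.40

def assign(turnover_rates, threshold=THRESHOLD):
    """Split units by whether their turnover rate exceeds the cutoff."""
    treated = {u for u, rate in turnover_rates.items() if rate > threshold}
    comparison = set(turnover_rates) - treated
    return treated, comparison

# Hypothetical restaurants and turnover rates (illustrative numbers only).
rates = {"store_a": 0.55, "store_b": 0.38, "store_c": 0.41, "store_d": 0.22}
treated, comparison = assign(rates)
```

Units just below and just above the cutoff are expected to be very similar, so comparing them after the rollout approximates the effect of the intervention near the threshold.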
A “correlational design” consists of measuring outcomes at one point in time. For example, one may examine whether employees who attended a voluntary diversity training workshop have more positive attitudes toward marginalized groups than employees who did not attend the workshop. A correlational design is poorly suited for evaluation purposes, because one does not know if prior positive attitudes caused individuals to attend the workshop or if the workshop caused more positive attitudes.
A “pretest-posttest design” requires two measurements of outcomes, once before and once after the intervention. This type of design is also poorly suited for evaluation purposes, because one does not know whether the observed changes between pretest and posttest are due to the intervention or to some other event that occurred between the two measurements. Moreover, participants sometimes respond differently the second time they are measured, even if the intervention has no effect. Combining pretest and posttest measurements with a non-equivalent control group design (see above) permits more reliable conclusions.
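The combined design supports a “difference-in-differences” estimate: the change in the intervention group minus the change in the comparison group, which subtracts out change that would have occurred anyway. The survey means below are illustrative, not real data.

```python
def diff_in_diff(pre_t, post_t, pre_c, post_c):
    """Change in the intervention group minus change in the comparison group."""
    return (post_t - pre_t) - (post_c - pre_c)

# Hypothetical climate-survey means on a 1-5 scale:
# intervention group rises from 3.0 to 3.6, comparison group from 3.1 to 3.3.
estimate = diff_in_diff(pre_t=3.0, post_t=3.6, pre_c=3.1, post_c=3.3)
# (3.6 - 3.0) - (3.3 - 3.1), i.e. approximately 0.4
```

Here the raw pretest-posttest change of 0.6 would overstate the effect, because the comparison group also improved by 0.2 without any intervention.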
The primary tool for measuring the effectiveness of a pro-diversity intervention is a climate survey. A list of constructs frequently included in climate surveys appears below. Climate surveys are informative, but they suffer from a major shortcoming: they rely on self-reports, and self-reports do not always reflect how people actually behave in real-life situations. This is why it is preferable to also include “hard” outcomes in a rigorous evaluation study (see the list further below).
Here are constructs that are frequently measured in climate surveys. The text in parentheses is a sample item:
- Perceptions of the social climate (The [Name of Organization] is a welcoming and inclusive environment for people from all social backgrounds).
- Support for pro-diversity initiatives (I support the [Name of Organization]’s pro-diversity initiatives).
- Positive attitudes toward diversity (I enjoy the diversity in this organization).
- Positive attitudes toward others (In general, I have positive attitudes about people from different ethnic and racial groups.).
- Perceived peer norms (The overwhelming majority of my co-workers at [Name of Organization] do their best to behave inclusively).
- Perceived leadership norms (Overall, most managers do their best to treat employees from minority groups with respect).
- Sense of belonging (I feel as though I belong in this organization).
- Intergroup anxiety (I often feel anxious when interacting with someone from a different social group).
- Bystander intervention (I speak up when I see people being treated unfairly because of their social background).
- Self-reported inclusive behaviors (How frequently in the last three months have you talked to someone with a different social background about their experiences?)
- Physical health (Rate your physical health over the last two months)
- Mental health (Rate your mental health over the last two months)
- Intentions to leave (In the last six months how frequently have you considered leaving [Name of Organization]?)
- Experiencing discrimination (Have you witnessed managers at [Name of Organization] engage in discriminatory behaviors toward members of particular social groups? [If yes:] What percentage of managers do you think engage in these kinds of behaviors at least occasionally? ___%.)
- Experiencing exclusion (Have you witnessed managers at [Name of Organization] engage in exclusionary behaviors toward members of particular social groups? [If yes:] What percentage of managers do you think engage in these kinds of behaviors at least occasionally? ___%.)
- Behaviors to be discouraged (What do you think is the most hurtful behavior to address at [Name of Organization]? In other words, if you could eliminate one specific type of behavior that has the most negative impact on your sense of belonging what would it be? ____ [Open-ended])
- Behaviors to be encouraged (If you could get your managers/co-workers to adopt certain behaviors more frequently, what would they be? What behaviors would signal to you that you are respected, welcome, and included by your managers/co-workers? ____ [Open-ended])
For additional information on climate surveys and more sample items click this link.
Here are some examples of “hard” outcomes that allow evaluators to assess the effectiveness of a pro-diversity initiative:
- Grades (in educational settings)
- Drop-outs (in educational settings)
- Disciplinary actions (in educational settings)
- Number of sick days (in companies)
- Rate of turn-over (in companies)
- Number of women and ethnic minorities in leadership positions (in companies)
- Salary (in companies)
- Recruitment, assessed through CV testing (in companies)
- Recruitment, assessed by number of employees belonging to different social groups who were recruited in the last 12 months (in companies)
- Health data
- Actual behaviors (e.g., standardized assessment of the friendliness of employees when they talk to customers of different ethnicities on the phone, standardized coding of audio recorded team meetings).
Return to “How to Promote Inclusion in 750 Words” here