Why was this Toolkit Developed?
This toolkit is designed to support the evaluation of school-based initiatives that aim to improve youth acceptance of diversity, reduce exposure to hate, and improve online safety. Below we outline the steps required to conduct an evaluation and provide templates of the logic models and surveys we have used in past evaluations. For any questions, please reach out to preparedness@hsph.harvard.edu.
Evaluation Step 1: Developing a Logic Model
The first step in conducting an evaluation is to create a logic model that identifies the program's resources, outputs, and outcomes. Below are two examples of logic models developed by the EPREP team that may help you identify the relevant components of your program:
Evaluation Step 2: Survey Development
Once program outcomes have been identified, the next step is to develop a survey to measure them. Below are examples of surveys we have used in evaluations published in academic journals; you may use or adapt them. N.B.: Please ensure that you cite all of the references included in the document.
- Empower PEACE
  - Program goal: improved youth acceptance of ethnocultural diversity
- Kombat With Kindness
  - Program goals: improved youth acceptance of ethnocultural diversity; reduced exposure to hate messages; increased exposure to messages of acceptance
- Online Safety Utah
  - Program goal: improved youth online safety
- Online Safety Massachusetts
  - Program goal: improved youth online safety
You may also choose to develop your own survey using some of the items we have used in the past. Below you will find the questions used to operationalize the various constructs we have measured in our surveys, which will allow you to tailor your survey to your program's outcomes. N.B.: Please ensure that you cite all of the references included in the document.
Evaluation Step 3: Evaluation Implementation
The next step is implementing the evaluation. It is vital to understand that program evaluation takes place in a real-world context, so implementation can bring many unforeseen challenges. Below you will find a document outlining key lessons learned from our evaluations; we hope these findings will help you anticipate some of the challenges you may face during implementation:
Presentation of EPREP Program Evaluation Study Results
Below you will find a presentation outlining the process described above, along with in-depth descriptions of the evaluation results.
EPREP Program Presentation: Evaluating School-Based Interventions
Related Publications
- Savoia, E., et al., Assessing the Impact of the Boston CVE Pilot Program: A Developmental Evaluation Approach. Homeland Security Affairs, 2020. 17(6).
- Savoia, E., et al., Evaluation of a School Campaign to Reduce Hatred. Journal for Deradicalization, 2019. Winter(21).
- Harriman, N., et al., Youth Exposure to Hate in the Online Space: An Exploratory Analysis. International Journal of Environmental Research and Public Health, 2020. 17(22).
- Savoia, E., et al., Adolescents’ Exposure to Online Risks: Gender Disparities and Vulnerabilities Related to Online Behaviors. International Journal of Environmental Research and Public Health, 2021. 18(11).