Effective Evaluation – How to Plan and Conduct it
Evaluating any initiative – regardless of the topic – is a critical step in determining its overall impact and long-term effectiveness. A comprehensive evaluation establishes a solid understanding of the effects of the work and provides a stable platform for future improvements. As the saying attributed to Peter Drucker goes: you can’t manage what you can’t measure.
Within the PPHS-coordinated INDEED project, the consortium is currently working on a tool to facilitate the evaluation process.
INDEED is a project devoted to evaluating initiatives that focus on preventing and countering violent extremism (PVE/CVE) and radicalisation. This article explores the concept of evaluation, focusing on evidence-based evaluation, a fundamental aspect of the INDEED methodology. We address the importance of evaluation and the guidance it offers for improving the effectiveness of preventive initiatives. The tools developed by the INDEED consortium will assist practitioners and policymakers working in the violent extremism and radicalisation domain; however, the general conclusions will also be helpful in other fields of crime prevention.
What Evaluation Is
The conclusions drawn from an implementation process shape the results that can be achieved in the future. However, a simple summary of successes and failures is insufficient. A coherent evaluation approach, grounded in reliable scientific methods, makes it easier to properly understand the results of an action and to explain the factors that hindered the achievement of its objectives. Evaluation must be thoughtful and systematic to be fully effective.
Marzena Kordaczuk-Wąs, PhD, an expert at the Polish Platform for Homeland Security, has worked on the prevention of security threats for many years. Following the definition proposed by the INDEED project consortium, she emphasizes that evidence-based evaluation describes:
“a process (…) which integrates available external evidence, professional expertise and stakeholder values, preferences and circumstances” (Klose, 2022, p. 40).
Evaluation, in general terms, means assessing the implementation or effects of an initiative as systematically and impartially as possible. That said, it is important to clarify that evaluation has several functions. Most commonly, it is used to understand how the implementation of an initiative is proceeding and what outcomes it has delivered; it is often associated with checking the effectiveness of a particular activity. However, it also allows us to examine whether the objectives of a given initiative correspond to the identified risks and the real needs of the local environment, and it provides information on whether those objectives have been correctly formulated. Finally, evaluation is a tool that helps practitioners achieve better results by drawing on their own experience and on that of other teams that have chosen to disseminate the results of evaluations of previously implemented initiatives.
Mistakes in Designing and Implementing the Evaluation
Although the necessity of evaluation is increasingly recognised and appreciated, in practice it is still either not carried out at all, or mistakes are made at various stages. A significant concern, for example, is the failure to plan the evaluation at the design stage of an initiative. Quite often, the idea of conducting an evaluation arises only during implementation. As a consequence, assessing the completed activities becomes difficult. Such situations also lack a properly prepared methodology that would clearly and reliably identify which aspects of the initiative are worth evaluating and how they should be examined.
It is also worth mentioning that a properly planned evaluation requires the involvement of all relevant stakeholders at each stage of the work. A sense of agency and influence over the quality of an initiative can help in collecting more comprehensive evaluation data. Unfortunately, the information collected still often comes only from the narrow group of people managing a given initiative.
Another serious mistake is designing and conducting evaluations merely to collect arguments supporting the funding of an initiative or to demonstrate politically oriented effects. In other cases, the evaluation may be misused as a control tool.
INDEED Project and Building an Evaluation Culture
As previously mentioned, a solidly conducted evaluation offers the possibility to increase the effectiveness of initiatives aimed at preventing and countering violent extremism (PVE/CVE) and de-radicalisation, as well as activities focusing on other security threats. It is therefore very important to develop and disseminate solutions that improve the quality of evaluation. That is why the Polish Platform for Homeland Security initiated the EU-funded INDEED project, coordinated by Marzena Kordaczuk-Wąs, PhD. Its overarching goal is to build and promote a culture of evaluation by providing tools and practical competences supporting effective evaluation.
The project takes the theoretical findings developed by academia in the field of evidence-based evaluation and applies them to the design and evaluation of preventive initiatives.
Using a methodology based on the ‘5 I’ approach – identifying, involving, innovating, implementing, and impacting – the INDEED project consortium has developed an innovative Toolkit and will soon launch an e-learning programme that integrates practical tools supporting the design and conduct of evaluations.
“The INDEED Toolkit is intended to be inclusive, suiting various professional scenarios as well as different types of initiatives” – highlights Irina Van Der Vet, a member of the research team at the University of Helsinki, one of the consortium partners. She also says: “That’s what we strive for: addressing solutions to facilitate stakeholders’ assessment of the procedures, allowing them to take part in the evaluation processes or independently design and conduct them.”
Evidence-Based Evaluation Model
One of the essential elements of the Toolkit developed by the consortium is the Evidence-Based Evaluation Model. It was designed on the basis of theoretical and practical approaches to evaluation. The model, in turn, was used to develop a practical evaluation tool that helps to plan and conduct evidence-based evaluation.
The model includes the universal components of the evidence-based evaluation process and can be applied to initiatives designed and implemented in the area of PVE/CVE and De-radicalisation. It is also suitable in the context of preventing other security threats. It creates a framework for the evidence-based evaluation of these initiatives, addressing the necessary components of such a process, as well as its different stages and steps.
Furthermore, it presents four phases – preparation, design, execution, and utilisation – which are integral components of the iterative evaluation process. The first phase involves building a clear sense of the initiative and identifying the evaluation needs, its context, and the resources available to carry it out. The second phase provides guidelines essential for developing a detailed evaluation plan. The third phase involves executing the developed plan and processing its results. The goal of the fourth phase is to utilise and disseminate the evaluation results derived from the analysed data and the overall process.
INDEED Evaluation Tool for Evidence-Based Evaluation
The INDEED project developed a practical tool to help plan and conduct evidence-based evaluation. It can be used at all stages of the implementation of PVE/CVE and de-radicalisation initiatives. The tool offers detailed information and step-by-step instructions for planning and conducting evaluations, including those of limited scope. It is built on the Evidence-Based Evaluation Model developed by INDEED.
The tool is currently in the development phase and will be released for testing and validation in October. In turn, at the end of November, it will be available on the project website as a component of the INDEED Toolkit. Its final version will be available in multiple languages in August 2024.
Practitioners and policymakers working in the field who are interested in testing or validating the tool can contact the INDEED project manager at firstname.lastname@example.org for more details.
Marzena Kordaczuk-Wąs, PhD
Expert on Radicalisation Prevention
Senior Communication Specialist
Polish Platform for Homeland Security