Tuesday, 26 April 2022

Monitoring and evaluation system at the project level


Monitoring and evaluation (M&E) as tools of effective management began to develop actively in Kyrgyzstan, as in many other countries, with the arrival of donor-funded projects, which created a need to track budget execution, the fulfilment of tasks and the achievement of goals. It is telling that in the early years M&E was carried out mainly by foreign specialists, since local capacity was only beginning to form.


Today the trend is toward engaging local specialists, an indicator of the growing professionalism of Kyrgyz experts in this field. Monitoring and evaluation are now regarded as indispensable tools for managing project activities, and their importance is recognized by a growing number of managers. They matter not only for improving the effectiveness of projects already under way, but also because the information and conclusions they produce inform the initiation and design of new projects and programmes.


In addition, as public and government structures play a greater role in projects and participate in them more fully, the question of accountability to all project participants for the results of the work done (ensuring transparency), and of conclusions that can secure the continuity of the activities, arises with particular urgency. Partners working in a project want information about its practical results that allows them to assess the performance and productivity of the system in which they work.

Most projects build their monitoring and evaluation system at the planning stage. At this stage, as a rule, a set of indicators is developed against which the progress of the programme or project can be judged and the degree of implementation assessed. Monitoring and evaluation proper begin later and operate within the management system; the complexity of this system depends on the complexity of the activity being monitored.

As its output, the monitoring system should provide up-to-date information on how the work is progressing and whether there are deviations from the plan. A well-functioning monitoring system allows the manager to influence the course of the project in time by taking the appropriate decisions.

It should be noted that while monitoring is built into every project as a mandatory part of it, the approach to evaluation is less clear-cut, especially for social projects, where the effects and results often appear with a delay and project timeframes do not always allow them to be seen in full. More often, intermediate evaluations are used: as a review of the work done, to forecast likely results, and to identify the changes that must be made to the project to ensure its productivity and effectiveness. For educational projects, however (as for projects in healthcare, for example), impact assessment is an extremely important task, because it makes it possible to see the project's effect on an area significant for every person. Given that these sectors are developed mainly with donor funds, evaluation (both of the effectiveness of completed projects and of needs) makes it possible to plan projects that genuinely match needs, taking into account development trends and local capacity.

Turning to the experience of building a monitoring and evaluation system in the USAID Education Quality Improvement Project (QLP), it should first be said that it is a five-year project (2007–2012) aimed at improving the quality of primary and secondary education in three Central Asian republics: Kyrgyzstan, Tajikistan and Turkmenistan.

The project aims to improve the quality of initial training, retraining and in-service training of teaching staff and to increase the efficiency of the school financing system. Although the main changes are planned at the systemic, institutional level, all other levels were involved in assessing the effectiveness of these interventions.

At each of these levels, certain results are expected, and a set of indicators (both process and outcome) has been developed, with intermediate targets planned for each year of project implementation and final targets for each indicator. The table shows examples of these indicators. In total, 40 indicators were developed for the project, covering all of its main planned outcomes and outputs.

Progress indicators are monitored quarterly or annually. The data are collected by the project's M&E department, entered into a database, analysed as part of quarterly and annual reports, discussed at staff and management meetings, and used in planning each subsequent year of implementation and in adjusting activities and deadlines. The components of the project monitoring system are therefore:

  • indicators and their planned target values;
  • a data system for input and processing;
  • procedures for reporting, analysing and discussing the information used for decision-making;
  • a structure within the project (the M&E department) that supports this process.
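The first two components listed above, indicators with planned values and a data system for recording actual results, can be sketched in code. This is a hypothetical illustration only; the indicator name and the target values are invented, not taken from the project:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One project indicator with yearly targets and measured values."""
    name: str
    targets: dict                                  # year -> planned value
    actuals: dict = field(default_factory=dict)    # year -> measured value

    def record(self, year, value):
        """Enter a measured value for a given year (the 'data input' step)."""
        self.actuals[year] = value

    def on_track(self, year):
        """True if the measured value meets or exceeds that year's target."""
        return self.actuals.get(year, 0) >= self.targets.get(year, 0)

# Invented example indicator with intermediate yearly targets.
teachers_trained = Indicator(
    name="Teachers completing in-service courses",
    targets={2009: 500, 2010: 1200},
)
teachers_trained.record(2009, 540)
print(teachers_trained.on_track(2009))  # the 2009 target is met
```

In practice the project used a database rather than in-memory objects, but the logic of comparing actuals against planned targets for each reporting period is the same.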

Because the main component of the project is training, a methodology for assessing its effectiveness was developed. The study of training effectiveness is based on a model developed by the American researcher Donald Kirkpatrick. Kirkpatrick's model is a conceptual approach to evaluating training programmes that provides a comprehensive system for assessing their effectiveness. The model has become a classic and is widely used in evaluation research.

The model adapted for the Project involves evaluating the training conducted at the following levels:

  • Level 1 - participants' reaction to the training;
  • Level 2 - learning: the increase in knowledge and understanding;
  • Level 3 - behaviour: application of what was learned in practice;
  • Level 4 - results: impact on practice and outcomes at the system level.

To assess levels 1, 2 and 3, the following tools are used:

  • Questionnaire for training participants. It is completed by each participant and handed to the trainer(s); the trainers analyse the questionnaires and include the results, along with recommendations for improving the training, in their report.
  • Questionnaire measuring the increase in knowledge and understanding, by target group: a pre- and post-training test. The test is used in some areas of the project.
  • Programme monitoring: attendance at classes in schools, universities and teacher training institutes; conversations and discussions with participants of on-site trainings, etc. It is carried out by programme specialists throughout the project period.

Measurement coverage typically depends on resources and capacity. With this in mind, the monitoring and evaluation of trainings within the Project provides for 100% measurement at level 1 (carried out and analysed by trainers and programme specialists for programme purposes) and 30-40% coverage at level 2 (that is, testing 30-40% of trainees in each target group, or in 30% of trainings). Level 3 is the prerogative of programme specialists and is not formalized (some level 3 questions are measured during the level 4 evaluation).
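The recommended coverage rates translate into simple arithmetic when planning the number of questionnaires. A minimal sketch, with an invented target-group size:

```python
import math

def planned_measurements(group_size, coverage):
    """Number of participants to survey for a given coverage rate, rounded up."""
    return math.ceil(group_size * coverage)

group = 250  # hypothetical target group of training participants

level1 = planned_measurements(group, 1.0)        # 100% coverage at level 1
level2_low = planned_measurements(group, 0.30)   # lower bound at level 2
level2_high = planned_measurements(group, 0.40)  # upper bound at level 2
print(level1, level2_low, level2_high)  # 250 75 100
```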

The evaluation of the project uses outcome indicators (level 4 of the Kirkpatrick model) tied to the level of the project's goals, and an approach was developed to measure them: the effectiveness and impact of training under the updated professional development programmes on teachers' classroom practice and on students' learning outcomes is measured in specially selected target schools. These are not pilot schools, but it is in them that all measurements are made, so the project guarantees that teachers of the target subjects and the administrators of these schools complete refresher courses under the updated programmes and receive the teaching materials needed to implement changes in practice. In total, 75 target schools were to be selected in the Kyrgyz Republic and 86 in Tajikistan over the life of the project. Before operations began in these schools, baseline data were collected in April 2009 in a sample of 15 target schools and five control schools: student testing, lesson observation, teacher interviews, and surveys of school administrators and parents.


During 2009, primary-school, mother-tongue and mathematics teachers and the administrators of these schools attended in-service courses run under programmes developed by the Project; in 2010 the schools received the developed materials and mentoring support from the project.


The next round of data collection in these same schools will take place in 2011 (i.e. two years after the baseline) to assess the project's impact. A final study in 2012 will show how lasting the effect is (sustainability). Roughly the same sequence was planned for a second sample of schools (another 20 in each republic), where work begins a year later. Thus, evaluation was planned in more than 45% of the schools in which the project operates.

Thus, a quasi-experimental design was used to evaluate the project, allowing a valid and reliable assessment of the extent to which it achieved its planned results at the level of the professional development system, of school management practice, of teacher practice and, finally, of the student. This evaluation model is, of course, not perfect. Its main limitation is the short interval between the start of project interventions and the impact assessment.


In other words, it is difficult to expect measurable change at the student level within only two years. This, however, is the specificity of any intervention in education, where the effect of a reform or significant change can realistically be assessed only after several years.
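One common way to analyse data from such a quasi-experimental design, with baseline and follow-up measurements in both target and control schools, is a difference-in-differences comparison: the change in control schools approximates the trend that would have occurred anyway, and is subtracted from the change in target schools. The sketch below uses invented scores, not project data:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical mean student test scores per school:
# baseline (2009) and follow-up (2011).
target_2009, target_2011 = [41.0, 44.0, 39.0], [52.0, 55.0, 50.0]
control_2009, control_2011 = [40.0, 43.0, 38.0], [45.0, 47.0, 43.0]

target_change = mean(target_2011) - mean(target_2009)    # change in target schools
control_change = mean(control_2011) - mean(control_2009) # background trend
impact = target_change - control_change  # change attributable to the intervention
print(round(impact, 1))  # → 6.3
```

The appeal of the design is that the control schools absorb secular trends (curriculum changes, demographic shifts) that would otherwise be misattributed to the project; its weakness, as noted above, is that two years may simply be too short for student-level effects to emerge.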
