EPISODE-6: Evaluation

WHAT IS EVALUATION?
The systematic collection and analysis of information about the characteristics and outcomes of programs and projects, used as a basis for judgments, to improve effectiveness, and/or to inform decisions about current and future programming. Evaluation has two primary purposes: accountability to stakeholders and learning to improve effectiveness.

WHAT IS EVALUABILITY ASSESSMENT?
Evaluability assessment is a method for determining:
  1. The extent to which a project or activity is ready for an evaluation
  2. The changes that are needed to increase readiness
  3. The type of evaluation approach most suitable to assess the project or activity's performance and/or impact
While most staff consider these concepts at least partially when writing an evaluation SOW, an evaluability assessment offers a systematic process for assessing readiness. It can also generate recommendations for necessary changes to the project or activity to be implemented before the evaluation takes place. Evaluability assessment can take many forms depending on the specific context. Experts and partners can conduct the assessment internally, or engage an outside consultant or consultant team. If engaging consultants, it is important to keep in mind that their role is to facilitate the process rather than to produce a deliverable independently. In either case, project staff should expect to dedicate time and effort to the activity.

TYPES OF EVALUATION:
FORMATIVE EVALUATION:
Formative evaluations strengthen or improve the object being evaluated. They help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on.

Example: Needs assessment, process evaluation, and structured conceptualization.

ADVANTAGES:
  1. The results can be used to improve project performance, instruction, or learning outcomes before the project has concluded.
  2. Because the events examined are recent, findings are more accurate and less distorted by the passage of time.
  3. The emphasis on outputs and outcomes, rather than only the end goal, creates many opportunities to improve different components of the project.
DISADVANTAGES:
  1. Judgments are made before activities are complete, so they may not always hold true.
  2. It can interrupt the flow of project activities and outputs.
  3. It can sometimes be intrusive.
SUMMATIVE EVALUATION:
Summative evaluations, in contrast, examine the effects or outcomes of some object. They summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.

Example: Outcome evaluation, impact evaluation, meta-analysis.

ADVANTAGES:
  1. All findings come after project completion, so there is no premature prediction or judgment.
  2. Results and findings cannot affect the implementation process of the project/program.
  3. Learning can be used to improve the next or another project/program.
DISADVANTAGES:
  1. There is no chance to use the findings, recommendations, and learning for further improvement within the same project.
  2. It is comparatively time-consuming due to data and information collection and report writing.
  3. It is very difficult to engage all stakeholders in the assessment at the end of the project/program's lifetime.

IMPACT EVALUATION:
Impact evaluations are useful for determining the effect of USAID activities on specific outcomes of interest. They test USAID development hypotheses by comparing changes in one or more specific outcomes to what would have happened in the absence of the intervention, called the counterfactual. Impact evaluations use a comparison group, composed of individuals or communities where an intervention will not be implemented, and one or more treatment groups, composed of project beneficiaries or communities where an intervention is implemented. The comparison between the outcomes of interest in the treatment and comparison group creates the basis for determining the impact of the USAID intervention. An impact evaluation helps demonstrate attribution to the specific intervention by showing what would have occurred in its absence.  
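To make the counterfactual logic concrete, here is a minimal sketch in Python that estimates impact as the difference in mean outcomes between a treatment group and a comparison group. All variable names and numbers are hypothetical and purely illustrative, not drawn from any actual evaluation.

    # Minimal sketch: impact estimated as the difference in mean outcomes
    # between a treatment group (received the intervention) and a comparison
    # group (stands in for the counterfactual). All numbers are hypothetical.

    treatment_outcomes = [62, 70, 68, 75, 71]   # e.g., household income scores
    comparison_outcomes = [58, 61, 60, 63, 59]  # similar units, no intervention

    def mean(values):
        return sum(values) / len(values)

    estimated_impact = mean(treatment_outcomes) - mean(comparison_outcomes)
    print(f"Treatment mean:   {mean(treatment_outcomes):.1f}")
    print(f"Comparison mean:  {mean(comparison_outcomes):.1f}")
    print(f"Estimated impact: {estimated_impact:.1f}")

In practice this simple difference is only credible if the comparison group is a good stand-in for the counterfactual, which is exactly what the designs below address.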

QUASI-EXPERIMENTAL EVALUATIONS:
To estimate intervention effects, quasi-experimental designs estimate the counterfactual by conducting measurements of a non-randomly selected comparison group. In many cases, intervention participants are selected based on certain characteristics, whether it is level of need, location, social or political factors, or some other factor. While evaluators can often identify and match many of these variables (or account for them in a regression analysis), it is impossible to match all factors that might create differences between the treatment and comparison groups, particularly characteristics which are more difficult to measure such as motivation or social cohesion.
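As a rough illustration of "accounting for them in a regression analysis," the sketch below fits an ordinary least squares model with a treatment indicator and one observed selection factor, using NumPy. The data and the covariate name are assumptions made for the example; as the paragraph above notes, unmeasured differences such as motivation can still bias the estimate.

    # Sketch: regression adjustment for a quasi-experimental design.
    # Model: outcome ~ intercept + treated + baseline_need (hypothetical data).
    import numpy as np

    treated       = np.array([1, 1, 1, 1, 0, 0, 0, 0])        # 1 = intervention group
    baseline_need = np.array([7, 6, 8, 7, 4, 5, 3, 4])        # observed selection factor
    outcome       = np.array([70, 68, 75, 71, 60, 63, 55, 58])

    # Design matrix: intercept, treatment indicator, baseline covariate.
    X = np.column_stack([np.ones_like(treated), treated, baseline_need])
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

    # The coefficient on `treated` estimates the effect after controlling
    # for baseline_need; unmeasured differences remain a threat to validity.
    print(f"Adjusted treatment effect: {coefs[1]:.2f}")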

EXPERIMENTAL EVALUATION:
In an experimental evaluation, the treatment and comparison groups are selected from the target population by a random process. Because the selection of treatment and control groups involves a random process, experimental evaluations are often called randomized evaluations or randomized controlled trials (RCTs).  
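The defining step of an RCT is the random assignment itself. A minimal sketch of randomly splitting a target population into treatment and control groups, using only the Python standard library and hypothetical village names, might look like this:

    # Sketch: random assignment of units to treatment and control,
    # the defining step of a randomized controlled trial (RCT).
    import random

    villages = ["Village-A", "Village-B", "Village-C",
                "Village-D", "Village-E", "Village-F"]  # hypothetical population

    random.seed(42)          # fixed seed so the assignment is reproducible
    random.shuffle(villages)

    half = len(villages) // 2
    treatment_group = villages[:half]   # receive the intervention
    control_group   = villages[half:]   # provide the counterfactual

    print("Treatment:", treatment_group)
    print("Control:  ", control_group)

Because assignment is random, the two groups are expected to be similar on both observed and unobserved characteristics, which is what the quasi-experimental design cannot guarantee.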

WHAT ARE EXTERNAL EVALUATIONS?
An external evaluation is one that is not implemented by the organization itself or its partner; it is typically commissioned from a third party. By the organization's standard, to count as an external evaluation, the team leader must be an independent expert from outside the agency who has no fiduciary relationship with the implementing partner (IP) of the activity or project being evaluated. External evaluations may include an organization staff member, but the team leader must be from outside the agency. An evaluation contracted through a subcontract of the IP is not an external evaluation.

WHAT PURPOSE DO EXTERNAL EVALUATIONS SERVE?
An external evaluation serves to uphold the standard of independence, which aims to mitigate bias and ensure the credibility and integrity of the evaluation process. An evaluator's independence from program management often lends greater credibility to the evaluation findings and report. An external evaluation reduces real or perceived conflict of interest: a situation in which a party has interests that could improperly influence that party's performance of official duties or responsibilities, contractual obligations, or compliance with applicable laws and regulations. A real or perceived conflict of interest on the evaluator's part translates to a lack of impartiality, objectivity, and integrity, and has the potential to jeopardize the credibility and validity of the findings.

DEMERITS OF EXTERNAL EVALUATION:
Most of the time we conduct such studies through external sources, often called third-party consultants. This can require substantial cost and time. External sources also have some demerits. The first is that an external team usually takes a long time to understand the program/project approach and the nature of its activities. As a result, enumerators may not ask respondents the right questions, which can affect the quality of data and information; the aftermath is unrealistic study findings.

WHAT ARE INTERNAL EVALUATIONS?
Internal evaluations are those commissioned by the organization itself, with one of its own staff serving as evaluation team leader, and conducted or managed in-house.

WHAT PURPOSE DO INTERNAL EVALUATIONS SERVE?
The purposes for planning and implementing internal evaluations include:
(1) To benefit from insider expertise and knowledge of program or Agency operations;
(2) To better ensure that learning from an evaluation is captured internally, utilized, and institutionalized in the organization;
(3) To develop the capacity of organization staff in the process of planning and implementing high-quality evaluations; and
(4) To answer a specific development question, or collect urgently needed data on a project's performance, more quickly than would be possible through the procurement process.

DEMERITS OF INTERNAL EVALUATION:
Sometimes we conduct these studies ourselves, using our own resources to minimize cost and time; these are the main merits of internal evaluation. On the other hand, there are demerits, the main one being the difficulty of controlling bias: when the organization's own employees collect data and information, they may be biased. However, if a control mechanism is developed, good-quality data can still be obtained. The organization's employees own the program and have a clear understanding of the nature of its activities, so they are able to ask respondents the right questions, which is important for data quality.

HOW TO CONDUCT EVALUATION (PROCESS & STEPS)
STEP-1: Literature Review
STEP-2: ToR Development
STEP-3: Study Team formation & Plan
STEP-4: Tools & database design
STEP-5: Enumerator training & field test
STEP-6: Data & Information collection
STEP-7: Data entry, cleaning & analysis (see the sketch after this list)
STEP-8: Prepare tables, graphs & report writing
STEP-9: Report sharing
STEP-10: Capture learning & recommendation       
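As a hypothetical illustration of Steps 7 and 8, the sketch below cleans a small set of survey records and produces a simple summary table in Python. The field names, districts, and scores are invented for the example, not taken from any real dataset.

    # Sketch for Steps 7-8: clean hypothetical survey records, then
    # summarize them into a simple table for the report.

    raw_records = [
        {"respondent": "R1", "district": "North", "score": "72"},
        {"respondent": "R2", "district": "North", "score": ""},    # missing -> drop
        {"respondent": "R3", "district": "South", "score": "65"},
        {"respondent": "R4", "district": "South", "score": "69"},
    ]

    # Step 7: cleaning -- drop records with missing scores, cast to numbers.
    clean = [
        {**r, "score": float(r["score"])}
        for r in raw_records
        if r["score"].strip()
    ]

    # Step 8: simple summary table -- mean score per district.
    districts = sorted({r["district"] for r in clean})
    print(f"{'District':<10}{'N':>3}{'Mean score':>12}")
    for d in districts:
        scores = [r["score"] for r in clean if r["district"] == d]
        print(f"{d:<10}{len(scores):>3}{sum(scores)/len(scores):>12.1f}")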

