Managerial Notes for Assessing Change through CSR Programmes
Change is a measure of success in the development sector, much as the balance sheet is in the corporate world. Measuring success in terms of a balance sheet is relatively simple, as everything is expressed in monetary terms; measuring change due to social sector programmes is considerably harder.
Change due to social sector programmes (such as corporate social responsibility programmes in India) can be measured through assessments carried out periodically. Assessments tell us whether we are on the right track and whether programme goals and purposes are being achieved.
Why are assessments important?
Many question the need for assessments in social sector programmes. Let us understand the need for assessments from a stakeholder perspective –
1. Informs programming – the foremost benefit of assessment is that it informs our programme design and implementation. Short feedback loops (like monitoring) tell us whether the programme is on track and able to fulfil its immediate purpose. Long feedback cycles (or evaluations), on the other hand, help us understand whether goals are achieved and inform our programme design as we learn from mistakes.
2. Inform internal stakeholders – the corporate social responsibility committee, the internal constituency within a company, must be informed of change resulting from the monies being spent on programmes. The CSR committee is responsible for the programmes carried out using corporate social responsibility money, and has a right to know the outcomes of that expenditure.
3. Inform donors and governments – the government and other donors stand to benefit from measuring change and reporting it. This allows convergence among efforts by various agencies and also helps build complementarity in programming.
4. Inform communities – in the development sector, our first line of accountability is to the communities we work with. Communities have a right to know the outcomes resulting from monies being spent on problems they live with.
5. Inform peers – sharing learnings on what works and what does not helps peers avoid similar mistakes. Eventually this saves monies that can be directed to more deserving projects.
Monitoring and Evaluation
Monitoring (also sometimes loosely referred to as process evaluation) of a programme involves the collection of routine data that measures progress toward achieving programme objectives. Monitoring, therefore –
• Is an ongoing process
• Requires collection of data at multiple points throughout the programme life-cycle
• Can be used to determine if activities need adjustment during the intervention to improve desired outcomes
Evaluation measures how well programme activities have met expected objectives and/or the extent to which changes in outcomes are achieved. Evaluation studies could be carried out during the mid-term of a project (referred to as mid-term evaluation) or at its conclusion (end-term evaluation). Evaluation could further be categorized into –
• Operational evaluation which looks at how well programmes were implemented and whether they realised their outcomes (and thereby the objectives), and
• Impact evaluation which looks into whether the intended goal was achieved and whether the change can be attributed to the programme or intervention.
Mid-term evaluation is usually done around the middle stages of a programme to understand if inputs and activities are resulting in the desired outputs and perhaps early-stage outcomes. Monitoring and mid-term evaluation together should ring warning bells if things are not in order.
Two other terms that one needs to be aware of –
• Prospective evaluation – these are designed at the same time as the programme is being developed. Baseline data is collected for both the intervention group and the counterfactual (also referred to as the control group). Data collection during the midline and endline stages therefore allows comparison with the baseline data, enabling us to measure change.
• Retrospective evaluation – these assess programme impacts after the programme has been implemented.
Steps to designing and conducting assessments
This section captures the key steps required to put in place a robust monitoring and evaluation system.
Selecting key performance indicators and data sources
Selection of performance indicators is an essential first step towards the initiation of any assessment. An indicator is a variable that measures one aspect of a programme or project that is directly related to the programme objectives/outcomes.
Data sources are the resources used to obtain data for M&E activities. Data can come from various sources – clients, programme, service, environment, population, etc. – and can be generalized into two categories –
• Routine data – data collected on a continuous basis – for example, data on patients utilizing clinic services, collected daily, aggregated monthly, and reported quarterly. Routine data should be cheap to collect.
• Non-routine data – data collected on a periodic basis, usually quarterly or annually. Such data is usually extensive, and expensive to collect.
Some discretion is needed when classifying routine and non-routine data, as either could fall under both monitoring and evaluation.
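To make the routine-data cycle concrete, here is a minimal Python sketch of the "collected daily, aggregated monthly, reported quarterly" pattern described above. The clinic-visit figures and function names are invented for illustration, not taken from any actual programme:

```python
from collections import defaultdict
from datetime import date

# Hypothetical routine data: daily counts of clinic visits, keyed by date.
daily_visits = {
    date(2023, 1, 5): 34,
    date(2023, 1, 18): 41,
    date(2023, 2, 9): 29,
    date(2023, 3, 22): 52,
}

def aggregate_monthly(daily):
    """Roll daily records up to (year, month) totals."""
    monthly = defaultdict(int)
    for day, visits in daily.items():
        monthly[(day.year, day.month)] += visits
    return dict(monthly)

def report_quarterly(monthly):
    """Roll monthly totals up to (year, quarter) for reporting."""
    quarterly = defaultdict(int)
    for (year, month), visits in monthly.items():
        quarterly[(year, (month - 1) // 3 + 1)] += visits
    return dict(quarterly)

monthly = aggregate_monthly(daily_visits)
print(monthly)                    # {(2023, 1): 75, (2023, 2): 29, (2023, 3): 52}
print(report_quarterly(monthly))  # {(2023, 1): 156}
```

The point of the sketch is the cadence: cheap daily records are rolled up mechanically, so monitoring reports require no fresh data collection.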
Data collection tools and quality issues
There are basically two types of techniques for data collection – qualitative and quantitative. Quantitative techniques, as the name suggests, collect data points in numeric form using surveys, tests, existing databases, etc. Qualitative techniques use structured and unstructured observations, Key Informant Interviews (KIIs), Focus Group Discussions (FGDs), etc. to collect data. Each technique has its own share of pros and cons. Any evaluation will be as good (or as bad) as the data collected. Therefore, a high level of integrity is required in data collection, management, cleaning, and storage.
Some data quality issues to keep in mind are –
1. Coverage – data is collected from the correct target group.
2. Completeness – data collected is complete.
3. Consistency – data collected is consistent within and across sources.
4. Uniqueness – each record is unique across the dataset.
5. Validity – data collected is valid and falls within expected ranges.
6. Accuracy – data collected is accurate and correctly describes the real-world situation.
7. Timeliness – data collected is relevant for the period in which it is collected.
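Several of these checks can be automated. The following is a minimal sketch of rule-based quality screening in Python; the field names, target villages, and valid age range are illustrative assumptions, and a real programme would define these from its own indicators:

```python
# Hypothetical survey records; the anomalies are deliberate.
records = [
    {"id": 1, "village": "A", "age": 34, "weight_kg": 62.0},
    {"id": 2, "village": "B", "age": 41, "weight_kg": None},   # incomplete
    {"id": 2, "village": "B", "age": 41, "weight_kg": 70.0},   # duplicate id
    {"id": 3, "village": "Z", "age": 250, "weight_kg": 68.0},  # wrong area, bad age
]

TARGET_VILLAGES = {"A", "B", "C"}  # coverage: the intended target group
AGE_RANGE = (0, 110)               # validity: plausible range for age

def quality_issues(rows):
    """Flag (record id, dimension) pairs that fail a quality check."""
    issues = []
    seen_ids = set()
    for row in rows:
        if any(value is None for value in row.values()):
            issues.append((row["id"], "completeness"))
        if row["id"] in seen_ids:
            issues.append((row["id"], "uniqueness"))
        seen_ids.add(row["id"])
        if row["village"] not in TARGET_VILLAGES:
            issues.append((row["id"], "coverage"))
        if not (AGE_RANGE[0] <= row["age"] <= AGE_RANGE[1]):
            issues.append((row["id"], "validity"))
    return issues

print(quality_issues(records))
# [(2, 'completeness'), (2, 'uniqueness'), (3, 'coverage'), (3, 'validity')]
```

Consistency, accuracy, and timeliness usually need cross-source comparison or field verification and are harder to script, which is why data cleaning remains partly a human exercise.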
The most common errors during data collection in assessments are –
• Sampling error – this type of error occurs when data is collected from a sample that does not represent the target group, mostly due to errors in sampling the target and control groups.
• Non-sampling error – these errors occur for two reasons: either the survey administrator has not understood the context and question well, or the respondent has not, or both. Often questions are framed in a way that elicits erroneous responses from respondents; at times the units are misunderstood.
Informed consent and ethical issues
All evaluation studies must begin with the informed consent of the participants. All such studies (and the data therein) are also governed by ethical guidelines, which cover issues such as confidentiality of the data, traceability of respondents (and thereby protection of their identity), and data safety. Ethical guidelines are especially stringent for studies on healthcare-related topics.
Designing an impact evaluation study
Measuring change using prospective evaluation requires identifying intervention and control households/individuals (depending on the targeting nature of a programme) and measuring change between the baseline and endline stages. But what happens if the prospective evaluation did not identify a control group, or how does one measure change if there is neither a baseline nor a control group? Let us consider the possibilities under these conditions –
| Conditions | Baseline data for control group | No baseline data for control group |
|---|---|---|
| Baseline data for intervention group | If baseline data exists for both the intervention and control groups, comparing it with the endline data for both gives us the 'change' as well as the 'attribution'. Most good evaluations have this design. This is the ideal condition, but rarely seen. | Baseline data for the intervention group exists, but a control group was either not chosen or data was not collected for it. This allows us to measure change, though attribution may be difficult. This is the second most likely situation seen in project evaluations. |
| No baseline data for intervention group | This is the least likely scenario. | This condition is encountered in retrospective evaluations where no baseline data exists for either group (in many cases a control group was not even considered). Change can be measured either by recreating baseline conditions (for instance, through recall) or simply by comparing endline data from the intervention and control groups. |
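The ideal condition – baseline and endline data for both groups – supports a simple difference-in-differences calculation: the change in the intervention group minus the change in the control group is the part attributable to the programme. A minimal sketch in Python, with invented outcome values (say, household income):

```python
def mean(values):
    return sum(values) / len(values)

# Hypothetical outcome measurements for illustration only.
intervention_baseline = [100, 110, 90, 105]
intervention_endline  = [130, 140, 120, 135]
control_baseline      = [95, 100, 105, 100]
control_endline       = [105, 110, 115, 110]

# Change in each group between baseline and endline.
change_intervention = mean(intervention_endline) - mean(intervention_baseline)
change_control      = mean(control_endline) - mean(control_baseline)

# Attribution: change over and above what the control group
# (the counterfactual) experienced anyway.
impact = change_intervention - change_control
print(change_intervention, change_control, impact)  # 30.0 10.0 20.0
```

Without the control columns, only `change_intervention` can be computed – change is measurable, but attribution is not, which is exactly the second cell of the table above.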
Need for an M&E plan
This plan should capture the procedures that will be used to determine whether or not the objectives of a project are met. The M&E plan typically comprises –
• The underlying assumptions on which achievement of programme goal depends
• The anticipated relationships between activities, outputs and outcomes
• Defined conceptual measures and definitions, along with baseline values
• A monitoring schedule
• A list of data sources
• Cost estimates for M&E activities
• Analysis and reporting plan
• Dissemination and utilization plan
Most organizations are wary of allocating funds for M&E purposes; however, any project should typically allocate between 5 and 10% of its budget for them.
‘Learning’ from assessments
Unfortunately, many assessments are treated as a tick-mark exercise in the life-cycle of a project, never meant to be 'learnt' from. Assessments should be used to inform and improve planning and implementation, rather than to prove a certain point of view. Assessments done seriously can improve organizational culture, improve programme outcomes, and direct funds towards worthy causes.
This column was published in the print edition of our magazine.
Apart from teaching various courses at TISS, Dr Bhaskar Mittra is responsible for managing India-based operations for TCI. He is the TISS Representative to the Technical Assistance and Research for Indian Nutrition and Agriculture (TARINA) Steering Committee. He is closely associated with several well-known civil society organizations working at the grassroots level throughout India.
Views of the author are personal and do not necessarily represent the website’s views.
Regards,
The CSR Journal Team