Title

Caring in Field Research: Acknowledging and Managing a Major Tension of Practice

Location

3033

Format Type

Paper

Format Type

Panel

Start Date

13-1-2017 3:40 PM

End Date

13-1-2017 5:00 PM

Abstract

Evaluators observe, analyze, and interpret social program data to promote improvement and quality, reflecting the discipline's commitment to "social betterment" in the form of greater service quality for high-needs individuals (Mark, Henry, & Julnes, 1999). Evaluators do, and are expected to, care about the individuals served. For example, program evaluation standards (Scarborough et al., 2011) promote caring values: the protection of human rights, respect, fairness, responsiveness, inclusion, and concern for evaluation consequences. The American Evaluation Association guiding practice principles include "Respect for People" and "Responsibility for General and Public Welfare." Although these foundations reflect concern for the individuals served, the realities of evaluation work introduce opposing pressures. Funders want group-level cause-effect statements and show less interest in individual participant experiences. Experimental designs are typically privileged, often consuming evaluation budgets, and funders are reluctant to grant evaluators the time needed to understand program participants as individuals. In response to these competing demands, the authors of this study (all evaluators) have employed various strategies to maintain fidelity to participant concerns while meeting contractual obligations and working within their constraints. We will report on these strategies (longitudinal designs, strategic site visits, combined electronic and in-person data collection, and volunteer effort), offering relevant case examples.
