Statewide Data Collection System

The Cornell Cooperative Extension (CCE) Parent Education Data Collection System is used to assess the impact of participation in parent education programs on families in New York State. The Statewide Data Collection System allows parent educators to enter data collected from pre- and post-tests at the initial and final parenting classes.

Parent educators must follow a specific protocol in order for us to report the statewide impact that parenting programs have on families in New York State. If you are unfamiliar with the protocol, please review the Protocol for Use.

In addition, participant data must include at least six hours of content delivery (workshops, sessions, etc.), and participants must complete both a pre- and a post-survey (at program entry and exit). The survey is available below for you to reproduce. Note that this survey does not replace your own evaluative surveys; rather, it can be used to supplement your regular data collection instruments.

All educators who administer the evaluation surveys must receive an Informed Consent Form from each parent/caregiver who participates. For more information regarding standards for human participants, refer to Cornell's Human Research Protection Program.

After you have participant data, you may upload it to the Parent Data Survey.

Strengthening Families evaluation

These forms are for parent educators conducting the Strengthening Families education program. 

All educators who administer the evaluation surveys must comply with the necessary human subjects requirements.

Parenting a Second Time Around (PASTA) evaluation

Parenting A Second Time Around (PASTA) is a Cornell Cooperative Extension program designed for caregivers who are not the biological parents of the child in their care. PASTA consists of several sessions which focus on topics including child development, discipline and guidance, caring for yourself as a caregiver, rebuilding a family, living with teens, legal issues, and advocacy.

Glossary of evaluation terms

Evaluation, in its most basic definition, determines the worth or condition of a program, policy, or intervention, usually by careful appraisal and study.

Process (sometimes referred to as “formative”) evaluations are conducted to increase the efficiency of service delivery, to modify program strategies or underlying assumptions, or to change the method of service delivery.

Outcome (also known as “summative”) evaluations are generally conducted to assess a program's impact on a specific population and to make a judgment about the program's worth.

Qualitative methods include the use of open-ended interviews, direct observation, and/or written documents to understand how a program has affected its participants (Patton, 1990).

Quantitative methods require the use of standardized measures to classify the responses of varied participants into pre-determined categories, so that generalizations can be made about the program's impact on its participants.

Mixed method approaches to evaluation combine elements of both qualitative and quantitative data collection procedures in hopes of generating “deeper and broader insights” (Greene and Caracelli, 1997) into the effect of a program on its participants.

Comparison groups: To accurately assess the effect of a parent education program, the best method is to compare outcomes between two randomly assigned groups: one that received the program and one that did not. This may not be feasible for many parent educators, but it is important to understand how the equivalence of comparison groups lends credibility to your results.

Randomization: Assigning members to the two groups at random allows the groups to be considered equivalent. One can then infer that the two groups were the same before your program or intervention, and that any difference in outcomes is due to your program rather than to differences between the two groups.
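
For illustration only, here is a minimal sketch (in Python) of how random assignment might be carried out for a class roster. The roster, the group names, and the shuffle-and-split approach are assumptions made for the example, not part of the CCE protocol.

    import random

    def randomly_assign(participants, seed=None):
        """Split a participant list into two groups by chance alone."""
        rng = random.Random(seed)
        shuffled = list(participants)
        rng.shuffle(shuffled)  # chance, not choice, decides group membership
        midpoint = len(shuffled) // 2
        return {
            "program": shuffled[:midpoint],     # receives the parent education program
            "comparison": shuffled[midpoint:],  # does not receive the program
        }

    # Hypothetical roster used only to demonstrate the idea
    roster = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
    groups = randomly_assign(roster, seed=42)
    print(groups["program"])
    print(groups["comparison"])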

Pre/post tests: One commonly used method to assess program impact is to administer pre- and post-test surveys to program participants. While this method may illustrate some important impacts of the program on its participants, it also has limitations. For example, if the evaluation of a program does not use a comparison group, the outcomes shown cannot be definitively attributed to the curriculum itself. Because the only group used to determine the effect is composed of those who participated, any apparent program impact may instead reflect characteristics of the participants themselves. Specifically, people who participate in a program voluntarily have already shown a willingness to change their behavior, so any outcomes of program participation could be the result of participants' pre-existing attitudes rather than the intervention.
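
As a purely illustrative sketch, the example below uses invented survey scores (not CCE data) to show how a pre/post gain among participants can shrink once it is set against the change observed in a comparison group over the same period.

    def mean(values):
        return sum(values) / len(values)

    # Hypothetical pre/post survey scores (higher = stronger outcome)
    program_pre, program_post = [3.1, 2.8, 3.4, 3.0], [3.9, 3.6, 4.1, 3.8]
    compare_pre, compare_post = [3.0, 3.2, 2.9, 3.1], [3.4, 3.5, 3.3, 3.6]

    program_change = mean(program_post) - mean(program_pre)
    compare_change = mean(compare_post) - mean(compare_pre)

    # Without a comparison group, the entire pre/post gain looks like program impact.
    print(f"Program group change:           {program_change:+.2f}")
    # With one, only the gain beyond the comparison group's change is credited to the program.
    print(f"Comparison group change:        {compare_change:+.2f}")
    print(f"Change net of comparison group: {program_change - compare_change:+.2f}")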

One central component of research design is whether the evaluation uses an experimental, quasi-experimental, or non-experimental design (i.e., whether changes in outcomes are measured in a comparison group as well as in a treatment group).


There are three primary types of research designs:

Randomized or true experimental designs are the strongest designs for establishing a cause-and-effect relationship, because random assignment is used and the groups involved are therefore considered equivalent. An experimental design randomly assigns participants to a treatment group (composed of those who participate in an education program) and a control group (composed of those who do not). Random assignment allows researchers to determine whether the outcome found is attributable to the parent education program (the “treatment”) or to some other factor. This design is considered the strongest method for making such an assessment.

Quasi-experimental designs also involve the comparison of two groups; however, members of the treatment and control groups are not randomly assigned, and thus the comparability of the two groups is less certain. The ability of these designs to establish a cause-and-effect relationship depends on the degree to which the two groups in the study are equivalent. As a result, it is more difficult to conclude that an outcome is the result of the treatment itself and not of some other difference between the groups.

Non-experimental designs use neither a control group nor random assignment. These are usually descriptive studies conducted using a simple survey instrument administered only once. While they are useful in their own right, they are weak at establishing cause-and-effect relationships.