Beginning Farmer Program Evaluation Resource Library

The Beginning Farmer Program Evaluation Resource Library is a compilation of materials to assist beginning farmer and rancher training programs in conducting evaluation.

This Resource Library was created as part of the Gaining Results through Evaluation Work (GREW) project, funded through a US Department of Agriculture Beginning Farmer and Rancher Development Program (BFRDP) grant. This project supports the development of strong, effective and long-lasting farmer and rancher training programs so that beginning farmers enter the field of farming and establish successful farm businesses.

This library contains hundreds of resources collected by the GREW team, all focused on running effective and thorough program evaluations. Some resources focus explicitly on farming projects, while others provide more general program evaluation instruction. You can use the topic-of-interest buttons below to browse for materials, or you can type a search directly into the “I’m looking for…” bar.

Please visit again – more resources will be added regularly.

If you have a resource you would like to see, have a resource you’d like to share, or have any feedback about the Resource Library, please contact nesfp@tufts.edu.

Source: Pennsylvania State University - Rama B. Radhakrishna, Rhemilyn Z. Relado

"Asking the right evaluation questions is very important to documenting program outcomes. This article provides a roadmap to link evaluation questions to program outcomes. Understanding the program, the purpose of the evaluation, and its utility are critical steps in developing focused evaluation questions. Further, grouping evaluation questions into process and outcome questions will help answer both program implementation efforts (activities and resources) and program effects (KASA change) on participants. Developing a program outcome chart also helps in writing focused evaluation questions. An important strategy in developing evaluation questions is to integrate program evaluation into the program development process."

Source: Penn State Cooperative Extension - Nancy Ellen Kiernan; Horticulture Extension Educator - Emelie Swackhamer

"A multipurpose evaluation, developed to measure impact of master gardener training in Pennsylvania, quantitatively measured both learning and increase in confidence, applying data from before and after the training. The authors demonstrate how the same data can be summarized in different ways to better achieve program improvement or demonstrate accountability. The evaluation compiled feedback from a 16-county area, including a majority of trainees in the state. This uniform evaluation strategy eliminated duplication of effort by county educators, provided a high quality tool for the state, and serves as a model for evaluating multi-topic programs taught by many instructors."

Source: University of Georgia - K. S. U. Jayaratne, Gail Hanula, Connie Crawley

"This article describes how to evaluate the impact of a series-type Extension program. Evaluating program impact is essential for Extension accountability. The evaluation method described in this article is simple and effective in documenting the impact of one Extension program taught as a series. This approach can be used to evaluate other series-type Extension programs by modifying the behavior section of the instrument presented in this article to match the program content and objectives. This evaluation tool not only helps Extension agents document impact but also helps them to focus on the program objectives during the program delivery process."

Source: North Carolina State University - K. S. U. Jayaratne

"Inadequate evaluation tools and limited evaluation capacity prevent many Extension agents from effectively assessing program impact. A user-friendly and reliable resource kit is now available to help agents evaluate their financial education programs. This resource kit has an online evaluation manual and a database. The manual is available to help educators understand basic evaluation concepts and learn how to use the database. The database is available to help agents design customized evaluation instruments based on their specific program needs. A reliable evaluation instrument can be created within about 10 minutes."
