How to Determine Whether Your Residency Evaluation System Is on Target

“An evaluation not worth doing is not worth doing well.”

Evaluation is a complex concept and has evolved into a field all its own. It has its own professional societies (e.g., the Evaluation Research Association), journals (e.g., Evaluation and the Health Professions, Evaluation and Program Planning), centers for study (e.g., the University of California Los Angeles’ Center for the Study of Evaluation), and government agencies (e.g., the US Department of Education’s Office of Planning, Budgeting, and Evaluation). College students pursuing degrees in education can even major in evaluation.

The purpose of this article is to familiarize readers with some basic nomenclature of evaluation systems so they can better understand what is really being requested when they, or someone else, say that something needs to be evaluated. The article provides a list of questions an evaluator typically asks when evaluating a residency program, the appropriate labels for each kind of evaluation, and the decisions or actions the evaluation would likely inform.

1. Questions:

  • Are we meeting the learning needs of our residents?
  • What parts of the residency program need attention?
  • Where are we failing?

Appropriate labels:

  • Needs assessment
  • Organizational review

When people ask for an evaluation and any of these questions are raised, they are actually requesting a needs assessment. Most needs assessments are accomplished using written and verbal survey techniques. For example, program directors in various disciplines (e.g., thoracic surgery) have selected graduates of their fellowship programs complete a needs assessment survey that queries their perceptions of selected objectives’ relevance to their practice and the adequacy of their “coverage” during fellowship training. Results of needs assessments are typically used to identify those objectives that may need to be deleted from the program, given expanded emphasis, or left unchanged.
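The tabulation step of such a survey could be sketched as follows. This is purely illustrative and not from the original article; the objective names, 1–5 rating scales, and decision thresholds are all hypothetical.

```python
from statistics import mean

# Hypothetical needs assessment responses: each entry is one graduate's rating
# of one curricular objective, on 1-5 scales for relevance to current practice
# and adequacy of coverage during fellowship training.
responses = [
    {"objective": "Esophagectomy",      "relevance": 5, "coverage": 2},
    {"objective": "Esophagectomy",      "relevance": 4, "coverage": 2},
    {"objective": "Rigid bronchoscopy", "relevance": 2, "coverage": 4},
    {"objective": "Rigid bronchoscopy", "relevance": 1, "coverage": 4},
]

def summarize(responses, low=2.5, high=3.5):
    """Average the ratings per objective and suggest a curricular action."""
    by_objective = {}
    for r in responses:
        by_objective.setdefault(r["objective"], []).append(r)

    for objective, ratings in by_objective.items():
        relevance = mean(r["relevance"] for r in ratings)
        coverage = mean(r["coverage"] for r in ratings)
        if relevance < low:
            action = "consider deleting from the program"
        elif relevance >= high and coverage < low:
            action = "give expanded emphasis"
        else:
            action = "leave unchanged"
        print(f"{objective}: relevance {relevance:.1f}, "
              f"coverage {coverage:.1f} -> {action}")

summarize(responses)
```

In practice this tallying is usually done in a spreadsheet or statistics package; the point is simply that the survey feeds a delete/expand/keep decision for each objective.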

2. Questions:

  • What do residents actually experience during the residency (or rotation)?
  • What is the nature of the decisions they make?
  • What activities make up their typical day?
  • How much does it vary from rotation to rotation?

Appropriate labels:

  • Program documentation
  • Evaluation of program processes

These types of questions are often approached using qualitative methods such as direct observation, oral feedback, and document review (e.g., OR logs). The information can be compiled and analyzed by the program director and faculty to check whether the activities thought to take place actually do; it is an analysis of “intended” versus “actual” learning and clinical activities. It provides a “reality check” and can be used as a basis for ensuring that residents receive the experiences they should be getting, in sufficient quantity and quality. Few would dispute that what looks good on paper does not always pass muster when examined more closely. Also, new initiatives introduced by the program director often start out well but, without sufficient reinforcement, tend to slide backward toward the status quo. This type of evaluation provides the “big picture” of a curriculum.
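As a rough sketch of the “intended versus actual” check described above (not from the original article; the procedures and counts are invented), one might compare what a rotation is designed to provide against what residents’ OR logs actually record:

```python
# Hypothetical comparison of intended case volume for a rotation against what
# residents' OR logs actually show. All procedure names and counts are invented.
intended = {
    "laparoscopic cholecystectomy": 20,
    "inguinal hernia repair": 15,
    "appendectomy": 10,
}
actual = {
    "laparoscopic cholecystectomy": 22,
    "inguinal hernia repair": 6,
    "appendectomy": 9,
}

for procedure, target in intended.items():
    logged = actual.get(procedure, 0)
    shortfall = target - logged
    status = "on track" if shortfall <= 0 else f"short by {shortfall} cases"
    print(f"{procedure}: intended {target}, logged {logged} ({status})")
```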

3. Question:

  • Is the residency meeting its goals?

Appropriate labels:

  • Objectives-based evaluation
  • Measurement of goal achievement

Residency curricular goals are often vague and, even when written down, not known or agreed upon by all faculty and residents. Criteria for meeting curricular goals are sometimes stated as learner outcomes (e.g., test scores). The evaluator needs to keep in mind affective goals, such as satisfaction with the program, and skills goals along with the cognitive goals; otherwise, the evaluation becomes too narrowly focused.

4. Questions:

  • How can the residency be improved?
  • How can it become more effective or efficient?

Appropriate label:

  • Formative program evaluation

A unique feature of formative evaluation is that the person conducting the evaluation becomes involved with whatever activity or program is being evaluated. An example would be a program director who wants to evaluate a new skills lab module. As the new module is implemented, the program director or designee would observe residents while they work in the lab, speak to randomly selected residents and faculty to glean their opinions, and/or periodically talk with faculty members and residents to see whether they believe the skills being learned in the lab are improving or have reached proficiency. This information would be used to make the necessary adjustments to the skills lab (more practice time, more feedback from faculty, etc.) to further improve its format, frequency, or duration.
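A minimal sketch of how such formative information might be tracked over time follows; the checklist scores and the improvement threshold are hypothetical and not part of the original article.

```python
from statistics import mean

# Hypothetical formative tracking of a new skills lab module: checklist scores
# (0-100) for the same group of residents across successive lab sessions.
sessions = {
    "session 1": [55, 60, 48, 52],
    "session 2": [62, 66, 50, 58],
    "session 3": [70, 74, 51, 65],
}

means = [mean(scores) for scores in sessions.values()]
improvement = means[-1] - means[0]

print("mean checklist score by session:", [round(m, 1) for m in means])
if improvement < 5:
    print("little improvement -> consider more practice time or more faculty feedback")
else:
    print(f"scores improved by {improvement:.1f} points across sessions")
```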

5. Question:

  • Does the program, or an activity within the program, require major re-engineering or elimination?

Appropriate labels:

  • Summative program evaluation
  • Outcome evaluation

The goal of a summative program evaluation is to collect and present the information needed to determine the program’s value. It is best when there is a basis against which to compare. Let’s say one wanted to determine whether a rotation at an outside hospital was valuable to the residents compared with keeping all residents at the main hospital. Variables such as the number and type of operative skills, compliance with completing performance evaluations, the extent of appropriate autonomy in decision-making and/or operating, and resident and faculty satisfaction would be identified and studied. Whatever variables the program directors choose to study, the results would be used to determine whether to continue or discontinue the rotation.
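A sketch of how such a comparison could be summarized appears below. The sites, variables, and numbers are invented for illustration; the article itself does not prescribe an analysis method.

```python
from statistics import mean

# Hypothetical summative comparison of the outside-hospital rotation with the
# main-hospital experience on a few of the variables mentioned above.
rotations = {
    "outside hospital": {
        "operative cases per resident": [38, 42, 35],
        "evaluation completion rate":   [0.95, 0.90, 1.00],
        "resident satisfaction (1-5)":  [4.2, 4.5, 4.0],
    },
    "main hospital": {
        "operative cases per resident": [30, 28, 33],
        "evaluation completion rate":   [0.80, 0.85, 0.75],
        "resident satisfaction (1-5)":  [3.6, 3.9, 3.7],
    },
}

for site, variables in rotations.items():
    print(site)
    for name, values in variables.items():
        print(f"  {name}: mean {mean(values):.2f}")
```

Side-by-side summaries like this give the program directors a concrete basis for the continue-or-discontinue decision, whatever variables they ultimately choose.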

Summary

Some readers may think that entirely too much fuss is being made over defining evaluation (splitting hairs, to some). But the meaning of words is critical in the world of evaluation because words influence action. Evaluation is a complex field of study and is no longer a simple matter of stating behavioral objectives, building a test, or analyzing data, although it may include these activities. A better understanding of the terminology and the implications of the various definitions will help us heed the old adage, “Evaluation data is similar to garbage; you have to know what you are going to do with it before you collect it.”