
Assessing the Radiology Trainee

Effective assessment of radiology trainees.

One of the best moments of my academic life was walking out of the testing center at the end of my last USMLE exam. That feeling was sweet but transient.

Now, in the 22nd grade as a radiology resident, I am still taking tests. As an advocate for lifelong learning, I figure they are part of the bargain: learn, test, teach, repeat. The purpose of tests is to provide constructive feedback to both student and teacher. However, not all academic assessments are fair, and not all are effective.

Providing trainees with concrete, useful feedback can be a daunting task for programs. The ACGME requires program directors to report on resident performance annually, including milestones for medical knowledge and competency. The annual ACR in-training examination is one means of giving every trainee a reportable percentage score. However, the test's limitations make it generally unhelpful as constructive feedback. Scores are biased toward the rotations a trainee has most recently completed, yet the questions broadly assess knowledge meant to be accumulated over four years of training, including physics and quality assessment, which is problematic for junior trainees. As a first-year resident, I took the exam before experiencing several rotations, and my topic-specific scores ranged from 2% to 98%, which was both laughable and predictable.

In addition to the formative yearly assessment, end-of-rotation exams are a common way to assess trainees. These questions are typically written by a department attending assigned as the “resident liaison” and are supposed to address knowledge appropriate to the trainee's level. However, the exams vary considerably in how broadly the questions range and how well they are written, and they sometimes correlate poorly with the assigned reading or clinical work. The most effective assessments are those created in conjunction with self-study, rotation-based learning, and post-assessment feedback.

At the graduate level of training, knowledge converges even among trainees who have come from different backgrounds. In this setting, tests should focus on feedback for the trainee, using questions as a tool for improvement rather than for judgment or punitive action. One method for effective, low-risk self-assessment is to test knowledge retained from didactic lectures, which make up approximately 20% of time spent in the hospital. The low-stakes nature of lecture-based exams helps provide constructive feedback without necessarily reporting to the program director. Furthermore, the act of working through the “testing-retrieval process” helps solidify didactic knowledge. As studies in neurobiology have shown (for example, Nabavi et al, Nature, 2014), induction of memory encourages long-term potentiation, which counters the time-dependent loss of retained material and expands working memory. Regular didactics coupled with self-assessment exams can be geared toward junior residents and spread out over longer intervals for senior residents.

Shifting from the annual in-training exam to periodic topical assessments would be a more helpful way to evaluate trainees. Formal testing is currently sponsored by individual departments, and these exams lack formal peer review and vary widely among institutions. Recent interest in developing educational tools has put peer-reviewed national question banks, filterable by subspecialty, on the horizon. For example, RadExam went live in May 2016 and aims to create thousands of de novo questions and to provide a platform for program directors to compare trainee test data with national data. The outlook for improving the educational experience is promising. Ensuring that exam questions are relevant, encouraging self-assessment, and focusing on constructive feedback are all ways to improve the learning process during radiology training.

“Essential characteristics of proficient performance have been described in various domains and provide useful indices for assessment. We know that at specific stages of learning, there exist different integrations of knowledge, different forms of skill, differences in access to knowledge, and differences in the efficiency of performance. These stages can define criteria for test design. We can now propose a set of candidate dimensions along which subject matter competence can be assessed. As competence in a subject matter grows, evidence of a knowledge base that is increasingly coherent, principled, useful, and goal-oriented is displayed and test items can be designed to capture such evidence.” (Glaser R. Expertise and assessment. In: Wittrock MC, Baker EL, eds. Testing and Cognition. Englewood Cliffs, NJ: Prentice Hall; 1991:17-30.)
