Background: We assessed differences in podiatric medical students' clinical professionalism objective scores (CPOSs) by comparing a previously used nonrubric evaluation tool with a more recently implemented objective-centered rubric evaluation tool. To our knowledge, this type of study has not previously been reported in the podiatric medical education literature.
Methods: We conducted a retrospective analysis of 89 third-year podiatric medical students during the 2010-2011 and 2011-2012 academic years. A Pearson correlation coefficient analysis was performed to compare CPOSs from the students' first (CPOS1) and second (CPOS2) rotations, and a second correlation analysis compared students' grade point averages (GPAs) with each individual CPOS to assess the validity of the rubric evaluation tool.
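For readers who wish to reproduce this type of validity check, the following is a minimal sketch of the Pearson correlation analysis described above, written in Python. The variable names and data values are hypothetical illustrations, not figures from this study.

# Minimal sketch (hypothetical data): correlating rubric-based clinical
# professionalism objective scores (CPOS) with GPA, as in the Methods.
from scipy.stats import pearsonr

# Hypothetical per-student records: GPA plus CPOS from the first and second rotations
gpa   = [3.2, 3.6, 2.9, 3.8, 3.4, 3.1, 3.7, 3.0]
cpos1 = [82, 88, 75, 91, 85, 80, 90, 78]   # first-rotation rubric score
cpos2 = [84, 90, 77, 94, 88, 81, 93, 79]   # second-rotation rubric score

# Correlate each rotation's CPOS with GPA and report r and P for each comparison
for label, scores in (("CPOS1", cpos1), ("CPOS2", cpos2)):
    r, p = pearsonr(gpa, scores)
    print(f"{label} vs. GPA: r = {r:.3f}, P = {p:.3f}")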
Results: The Pearson correlation coefficients between GPA and the 2012 CPOS1 and CPOS2 were r = 0.233 (P ≤ .093) and r = 0.290 (P < .035), respectively; for 2013, the coefficients were r = 0.525 (P = .001) and r = 0.730 (P < .001), respectively.
Conclusions: These findings suggest that rubric-based evaluation of podiatric medical students' CPOSs correlates with their GPAs, with CPOS2 demonstrating a stronger correlation than CPOS1. We believe that implementation of the rubric evaluation tool has increased the accuracy with which podiatric medical students' clinical professionalism is evaluated.