Abstract
Purpose: Competency-informed clinical education includes rigorous and specific performance outcomes with an emphasis on demonstrated outcomes. The purpose of this study was to assess faculty and dental hygiene (DH) student perceptions and elicit feedback regarding the transition to a competency-informed clinical evaluation model in the DH program at the University of North Carolina Adams School of Dentistry for the purpose of continuous quality improvement.
Methods: A mixed-methods approach was utilized to survey senior DH students (n=36) and clinical DH faculty (n=15) during the 2018-19 academic year. Cohort-specific surveys included demographics, Likert-scale questions, and open-ended questions to gauge perceptions of the new system. Two debriefing sessions were held, one for faculty and one for students, to provide open feedback and expand discussions. Survey responses were compared using descriptive statistics. Open-ended responses and debriefing comments were reviewed to identify common themes.
Results: All senior DH students (n=36) and two-thirds of the faculty (67%, n=10) completed the survey. Findings revealed an overall preference for the new evaluation system and indicated that it was a more accurate reflection of clinical performance. Open-ended and debriefing comments revealed an increase in the quantity and quality of faculty feedback, with an emphasis on patient-centered care rather than a grade-based focus. Students reported decreased stress levels regarding asking clinical care questions and grade outcomes. While improvement in faculty calibration was reported, students also noted a need for continued calibration.
Conclusion: Surveys and debriefing sessions revealed areas of strength and challenge in a competency-informed clinical evaluation system. Transitioning to a competency-based system provided an environment conducive to learning and patient-centered care rather than one focused on grades.
- dental hygiene education
- clinical competencies
- clinical education
- patient-centered care
- clinical evaluation
Introduction
Dental hygiene programs are transforming their educational experiences to prepare future oral health care providers for the challenges of a disruptive health care environment.1 This transformation must occur in response to ongoing changes in clinical practice and educational environments as well as the accreditation standards of the Commission on Dental Accreditation (CODA).2,3 Methods of clinical evaluation must also be considered when educating graduates to be competent health care providers.
Grades have traditionally been used for student motivation; however, grades may have the opposite effect and compete directly with learning outcomes.4 Grades may demotivate learners and potentially reduce interest in learning, the desire for challenging tasks, and the quality of the thought process.4 When considering the impact of grades on student well-being, health professional schools are investigating and transitioning to pass/fail grading systems.5,6 Pass/fail systems, particularly in medical institutions, have been shown to reduce stress6 and depression,5 promote less competition between peers, and foster deeper learning.7 White and Fantone found that medical students in programs using a three-tier or higher grading system reported higher levels of stress, emotional exhaustion, burnout, and depersonalization.8 By contrast, other studies found that students in schools with a pass/fail system reported more positive well-being and reduced stress and depression, while the integrity of technical skill, evidence-based practice, and professionalism was still ensured.5
Competency-informed clinical education includes performance outcomes that are specific and rigorous, with an emphasis on demonstrated outcomes.9 Advantages of competency-informed education programs include a focus on individual abilities and needs, objectives, efficiency, and improved use of feedback.9 Competency-informed education was introduced into dental education by CODA in 1995. Competencies can serve to guide changes in student learning methods and the restructuring of clinical evaluation systems.10
As the dental hygiene (DH) program at the University of North Carolina (UNC) Adams School of Dentistry was undergoing curricular modifications,11 the opportunity arose to transition the DH clinical evaluation system from a requirements-based system to a competency-informed system. The traditional, requirements-based system consisted of a five-tier number grading system that translated into a letter grade. The 1-5 grading system was used daily to evaluate student clinical performance. However, this system had shortcomings for both students and faculty. The subjective components were often difficult to calibrate, leading to grade inflation and student frustration. Students frequently focused on the daily grade and often overlooked aspects of patient care, ultimately impacting the learning experience and patient care outcomes. Faculty were also impacted by the grading system, at times assigning grades that were not necessarily earned. The high number of daily clinical evaluation grades at the end of the semester also diluted the integrity of the evaluation process and the quality of feedback.
The UNC DH faculty were interested in creating and implementing a less traditional method of clinical evaluation, specifically pass/fail daily grading, that would not compromise student academic performance or the integrity of evaluation. A pass/fail daily grading system would help eliminate grade inflation and, more importantly, shift the overall clinical experience from being grade-centered to patient-centered. The implementation of a two-tier evaluation system also aligns with the current shift in dental and DH programs toward competency-informed education.7 Jham et al. also reported that the basic motivational shift away from the grade itself could be a positive aspect of a pass/fail system.7 With the development of a pass/fail system for the UNC DH program, the goals were to diminish student feelings of threat in the clinics, reduce faculty stress levels in graded situations, enhance faculty/student relationships and foster a collegial environment, and bring education to the forefront of all clinical activities.8 Further, daily evaluations were transitioned to competencies for all procedures and skills, and these competencies became the graded portion of the clinical course. The purpose of this quality improvement study was to assess faculty and student perceptions and feedback on the new clinical evaluation system to guide future changes.
Methods
A mixed-methods approach was utilized to gain feedback from faculty and students on the new clinical evaluation system. Second-year DH students (n=36) and DH clinical faculty (n=15) at UNC were recruited for this study following the fall semester in December 2018. Inclusion criteria were DH students and DH faculty who had experienced both the previous and the new evaluation systems. The study was given exempt status by the UNC Chapel Hill Office of Human Research Ethics.
The new evaluation system was developed over the summer of 2018, when DH clinics were not in session. Competencies were developed by the clinical directors for the various procedures (e.g., adult prophylaxis) based on CODA standards for DH programs.2 A centralized tracking method was developed for logging student experiences over the course of the year. The dental hygiene patient care coordinator audited the clinical notes to ensure the accuracy of the logged experiences. Separate day-long faculty calibration and student orientation sessions were completed to review the new system and student/faculty expectations prior to the beginning of the fall 2018 semester. A comparison of the two clinical evaluation systems is shown in Table I.
Survey instrument
Senior DH students (n=36) and clinical faculty members (n=15) were invited to complete an online survey (Qualtrics; Provo, UT, USA) via email following the conclusion of the fall semester in December 2018. This timing allowed the students and faculty to have experienced one semester of the previous system (spring 2018) and one semester of the revised system.
The survey contained demographic questions and 14 Likert-type items. Demographics included age and role (faculty or student). The Likert-type items comprised 14 statements comparing levels of agreement with the previous and new evaluation systems. A forced-choice Likert-type scale was chosen to elicit a specific opinion from each participant on each statement. Response options for each statement were: strongly agree, agree, disagree, and strongly disagree. The survey also included four open-ended questions to allow further elaboration on the overall opinion, strengths, weaknesses, and needed improvements of the new clinical evaluation system. The questionnaire was pilot tested by two recent DH graduates and two non-clinical DH faculty members. Adjustments were made based on feedback from the pilot testers.
Debriefing sessions
Dental hygiene students (n=36) and faculty (n=15) were invited via email to participate in a debriefing session following the completion of the fall semester. Two one-hour debriefing sessions were scheduled, one for DH students and one for clinical faculty, each with one facilitator and one note taker. The debriefing questions addressed overall thoughts on the new clinical evaluation system, improvements, and remaining recommendations. Debriefing sessions were audio recorded, transcribed by research support staff, and assessed to identify common themes from open-ended responses and debriefing comments.
Data analysis
Quantitative data were aggregated, and descriptive statistics were used to summarize the results (Stata®; College Station, TX, USA). A 2x2 chi-square test of independence was used to compare agreement between students and faculty on each statement (p < 0.05). The four-point Likert scale was collapsed into two categories, agree and disagree. Inductive thematic analysis,11 with descriptive coding in the first cycle and pattern coding in the second cycle, was utilized for open-ended responses and debriefing session transcripts. This allowed for the generation of categories based on patterns across participants' open-ended responses within the data set of the surveys and debriefing sessions.12 Inclusion and exclusion criteria, along with definitions, were outlined in a codebook.
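As an illustration of this analysis (the study itself used Stata, not the code below), the following minimal Python sketch runs a 2x2 chi-square test of independence on one survey item after the Likert responses are collapsed into agree/disagree. The counts shown are taken from the preference item reported in the Results; with the Yates continuity correction disabled, the output is consistent with the reported p=0.4858 for that item.

```python
# Illustrative sketch only; not the authors' analysis code.
# 2x2 chi-square test of independence: student vs. faculty agreement
# on one statement, after collapsing the four-point Likert scale
# into agree/disagree.
from scipy.stats import chi2_contingency

#            agree  disagree
observed = [[29, 7],   # students (n = 36) who preferred the new system
            [9,  1]]   # faculty  (n = 10)

# correction=False disables the Yates continuity correction; with it
# off, the result matches the p = 0.4858 reported for this item.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")  # p ≈ 0.486 > 0.05
```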
Results
Quantitative results
All senior DH students (n=36) and ten clinical faculty members (67.0%, n=10) completed the survey. All DH student participants were female, ranging in age from 21 to 50 years with an average age of 24.61 years (SD 6.29). The faculty respondents were 24 to 59 years of age, with an average age of 39.86 years (SD 15.02). Findings revealed that most of the students (81.0%, n=29) and clinical faculty (90.0%, n=9) preferred the new evaluation system (p=0.4858). A majority of students (83.0%, n=30) and all of the faculty (100%, n=10) agreed the new system enhanced faculty calibration (p=0.6036) and clinical competence (p=0.6036). Eighty-three percent (n=30) of students and 90% (n=9) of faculty agreed the new system fostered a learning-centered environment (p=0.1662). Both groups (77.13%, n=27 DH students; 90.0%, n=9 faculty) agreed the new clinical evaluation system was a more accurate reflection of performance as compared to the previous evaluation system (p=0.3090). Levels of agreement between the two groups on the 14 Likert-scale items are shown in Table II. The open-ended responses from the survey were coded in the same manner as the debriefing sessions and are described within the qualitative findings of this section.
Qualitative findings from open-ended responses and debriefing sessions
Fifteen DH students (n=15) and nine faculty members (n=9) attended either the student or the faculty debriefing session at the conclusion of the fall semester. The findings from the debriefing sessions reinforced the responses to the open-ended survey questions. The debriefing sessions yielded five themes: 1) focus on patient care; 2) increased morale; 3) enhanced feedback; 4) faculty calibration; and 5) too much paperwork. Representative quotes for the qualitative themes are shown in Table III.
General impressions
When asked about their overall thoughts on the new clinical evaluation system, student participants focused primarily on shifts in focus and feedback, while faculty participants focused on logistics. Students noted that, compared to the previous system, the new system placed more focus on patient care and less on grades. Further, students stated that they felt more comfortable receiving constructive feedback.
Faculty noticed a shift in students from being ‘grade-focused’ to being more ‘learner-focused’ and observed that students appeared to ask more questions. However, faculty participants also noted that the new system seemed to involve too much paperwork and commented on the additional time needed to complete the various forms. One faculty member expressed difficulty adjusting to the transition and did not agree with moving away from daily grades to the pass/fail system.
System improvements
Students and faculty remarked on the improved communication between one another and a reduction in nerves, anxiety, and perceived stress levels. Students discussed how the enhanced clinical environment led to a shift from a mindset of perfectionism to a mindset of growth. Further, students noted that the new clinical evaluation system provided an environment that allowed for increased self-improvement and less comparison with peers. One participant noted that the pass/fail system seemed comparable to private practice. Students discussed feeling that they could ask faculty questions without fear of being penalized. In this safer clinical environment, students indicated that they provided better patient care.
Faculty noticed that students seemed less pressured when not every appointment was graded and showed an increased willingness to embrace learning opportunities. Several faculty members commented that students appeared fearful regarding completing competencies, and there were mixed opinions regarding students' fears of accepting failure.
The quality of faculty feedback improved with the new system; feedback was more specific and comprehensive. Both groups commented on how the documentation simplified the written feedback and allowed both faculty and students to see trends and recurring errors. Faculty felt that the documentation format also allowed for better facilitation of discussion with students at the end of each clinic session. Faculty also noted that students were more inquisitive and engaged in their learning, while students indicated feeling more receptive to feedback and an improved ability to self-assess.
Both groups independently noted perceptions of improvement in faculty calibration. Faculty attributed increased objectivity to the objective criteria and point system of the new evaluation. Students, however, had concerns regarding inconsistency and variability in faculty assessments.
Recommendations
Recommendations to strengthen the new clinical evaluation system focused primarily on changes to documentation and clearer expectations for clinical faculty. Students suggested renaming certain assessments and reducing overlap between competencies. Faculty made suggestions to enhance the daily tracking forms and other logs. Students also suggested that additional faculty calibration is warranted. Further, one participant commented that faculty should take students seriously when they indicate readiness to complete a competency.
Discussion
The development of an enhanced and effective clinical evaluation model may greatly impact the delivery of high-quality, learner-focused education. Efficiency of clinical sessions can be increased, thereby maximizing use of time, increasing productivity, and improving the delivery of patient care. Patient outcomes may also improve if students are focused on patient care rather than on grades. Clinical evaluation systems designed to follow a competency-informed model rather than a requirements-based model align with CODA standards2 and the delivery of patient-centered care.
It is natural to assume that doubts may be raised regarding the value of clinical competencies and a daily pass/fail evaluation system. One of the faculty participants in this study voiced opposition to the new system during debriefing. Concerns regarding how a pass/fail system can accurately evaluate competency and provide meaningful feedback are understandable and valid. However, these concerns may also stem from fear of the unknown after moving away from a system steeped in traditional numerical and letter grades. Greater value for the new system may be found through the use of rubric-based assessment that fosters qualitative, rather than quantitative, feedback while continuing to assess measurable objectives and well-defined clinical competencies.
Faculty calibration is a continual challenge in clinical teaching, and inconsistencies may arise in both formative and summative feedback. Full- and part-time faculty bring a diversity of experiences to their clinical teaching, and insufficient calibration can create confusion and frustration, inhibit student learning, lead students to adapt their clinical performance to satisfy an instructor's grading style, and even impact the quality of patient care.14 Research studies have examined the use of various instruments and professional development activities to enhance faculty calibration.15,16 There is also a shift toward developing and including entrustable professional activities (EPAs) with competencies to provide an additional, objective evaluation of trust in performing professional tasks. The use of EPA-structured rubrics can enhance calibration among multiple faculty members and serve as guiding benchmarks for differentiating pass/fail evaluations for clinical procedures.17
Quality assessment implications
The development and review of a competency-informed pass/fail clinical evaluation system is a critical process in the transformation of any traditional clinical teaching program. As schools explore options for transitioning to a pass/fail evaluation system, knowledge must be gained through research to support evidence-based, data-driven decision making. This quality improvement study included data points with specific information to evaluate the impact of the change, make improvements, and calibrate clinical faculty. The outcomes of this study were critical for evaluating the teaching and learning outcomes of students, assessing the calibration of clinical faculty, and supporting measurement of the overarching goal of improving the quality of clinical DH education at the UNC Adams School of Dentistry. This project was also essential in executing the DH program's efforts toward continuous quality improvement. Other clinical teaching programs may glean useful take-aways from this systematic approach, including the value of a pass/fail assessment system and the need for calibration to ensure quality and efficiency. Continuous quality improvement must be part of each educational program to ensure the incorporation of best practices, high-impact change, current and data-driven decision making, and follow-up on the quality analysis.
When considering future directions in clinical evaluation systems such as pass/fail, standard setting is warranted in dental hygiene education. The Association for Medical Education in Europe (AMEE) has developed guides for standard-setting processes.18 Dental hygiene education programs also have the discretion to define what constitutes a pass or a fail in the clinical dental hygiene setting. Future studies should compare standards regarding what qualifies as a pass or fail across dental hygiene programs. Another significant consideration is the impact of the coronavirus pandemic on clinical evaluation. Changes were made to the UNC DH program clinical evaluation prior to the onset of the pandemic in 2020. All aspects of dental education were disrupted by the transition to remote or online education and the need to develop flexible teaching and learning options, including pass/fail systems of evaluation.19 CODA also recognized the need for alternative clinical education and evaluation models. These conversations will likely continue due to the ongoing impact of the pandemic. Traditional class and clinical teaching environments will continue to evolve toward blended, alternative settings with options for diverse teaching, learning, and evaluation strategies.
This study had limitations. Data included small numbers of faculty and students from one institution and were limited to those impacted by the transition to the new system. The sample size was likely too small to detect any differences between faculty and student responses. Larger samples would offer more generalizability to other cohorts. Another limitation was the four-point Likert scale, due to its potential to distort results by forcing a choice when a participant had no opinion.
Conclusion
A systematic approach to continuous quality improvement provides the opportunity for ongoing enhancement of the elements of clinical evaluation. Transitioning from a requirements-based clinical evaluation system to a competency-informed system revealed an increase in the quantity and quality of faculty feedback that promoted a positive learning experience. Both students and faculty noted an increased emphasis on patient-centered care rather than a focus on student grades. Students preferred the pass/fail grading method, reported decreased stress levels related to grades, and were more comfortable asking questions regarding patient care. Feedback from both groups indicated the strengths of, and improvements needed in, the competency-informed system. An increased focus on feedback rather than on a numerical score or grade supported the development of collegial relationships, a growth mindset, and a patient-centered care environment. Improvements in the delivery and quality of feedback and in faculty calibration are still needed.
Footnotes
This manuscript supports the NDHRA priority area, Professional Development level: Education (educational models).
Disclosure
This study was supported by an educational research grant from the Office of the Dean, University of North Carolina Chapel Hill, Adams School of Dentistry.
- Received September 28, 2020.
- Accepted February 4, 2021.
- Copyright © 2021 The American Dental Hygienists’ Association