Sense-Making with an AI-Enhanced Coaching Tool: A Think-Aloud Study
Abstract
We investigate the sense-making processes of instructional coaches
who engage with an AI feedback tool, the Hybrid Human-Agent Tutoring (HAT)
platform, in the context of a high-dosage human tutoring program. HAT provides
automatic feedback on the discourse practices of tutors embedded in a
coached-tutor model. This study describes the co-design process behind HAT and
presents the results of a think-aloud study that examined how coaches engage in
sense-making while using HAT, how the tool has enhanced their coaching
workflows, and their assessment of HAT's usability. Using the professional
noticing framework, the results show that coaches used contrasting sense-making
strategies to enhance their coaching with HAT, ranging from narrow to broad
noticing, isolated to contextual interpreting, and directive to reflective use
of HAT information to provide tutors with feedback. Results also show that
coaches successfully embedded HAT in their workflows, either as a central or a
supplemental tool, leading to more efficient coaching processes and data-driven
feedback for tutors. These findings highlight the relationship between HAT's
design affordances and sense-making strategies for feedback and offer valuable
design principles for developing AI feedback tools for instructional coaches.
Generalization, Natural language processing, Collaboration
Keywords: High-Dosage Tutoring, Instructional Coaching, Automated Feedback, Tutor Professional Learning, Think-Aloud, Sense-Making, Design Principles