Sense-Making with an AI-Enhanced Coaching Tool: A Think-Aloud Study

Abstract

We investigate the sensemaking processes of instructional coaches who engage with an AI feedback tool, the Hybrid Human-Agent Tutoring (HAT) platform, in the context of a high-dosage human tutoring program. HAT provides automatic feedback on the discourse practices of tutors embedded in a coached tutor model. This study describes the co-design process behind HAT and presents the results of a think-aloud study that examined how coaches engage in sensemaking while using HAT, how the tool has enhanced their coaching workflows, and their assessment of HAT's usability. Using the professional noticing framework, the results show that coaches used contrasting sensemaking strategies to enhance their coaching with HAT, ranging from narrow to broad noticing, isolated to contextual interpreting, and directive to reflective use of HAT information when providing tutors with feedback. Results also show that coaches successfully embedded HAT into their workflows, either as a central or a supplemental tool, leading to more efficient coaching processes and data-driven feedback for tutors. These findings highlight the relationship between HAT's design affordances and sensemaking strategies for feedback, and they offer valuable design principles for developing AI feedback tools for instructional coaches.

Keywords

High-Dosage Tutoring, Instructional Coaching, Automated Feedback, Tutor Professional Learning, Think-Aloud, Sense-Making, Design Principles