Development of Scenario-based Mentor Lessons

In this demonstration, we showcase recent advancements in scenario-based tutor training, with a focus on scaling the learn-by-doing approach to teach strategies for providing socio-motivational support. These short (~15 min.) self-paced lessons use the predict-observe-explain inquiry method to develop mentor capacity for bolstering student motivation (i.e., fostering a growth mindset). These custom training modules are being created to provide supplemental mentor support within the Personalized Learning2 system, an app that combines human tutoring and student math software to improve mentoring.

We used GPT-4 to rewrite the same set of math word problems, prompting it to follow the same guidelines the human authors followed. We lay out our prompt engineering process, comparing several prompting strategies: zero-shot, few-shot, and chain-of-thought prompting. Additionally, we describe how we leveraged GPT's ability to write Python code to encode the mathematical components of word problems. We report text analyses of the original, human-rewritten, and GPT-rewritten problems. GPT rewrites had the best readability, lexical diversity, and cohesion scores but used more low-frequency words. We present our plan to test the GPT outputs in upcoming randomized field trials in MATHia.
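To illustrate how the three prompting strategies differ, here is a minimal sketch of how such prompts might be constructed. The guideline text, example pair, and problem below are hypothetical placeholders, not the study's actual materials.

```python
# Hypothetical sketch of three prompting strategies (zero-shot, few-shot,
# chain-of-thought). All text here is illustrative, not the study's prompts.

GUIDELINES = ("Rewrite the word problem at a 5th-grade reading level, "
              "keeping all numbers unchanged.")

def zero_shot(problem: str) -> str:
    """Instructions only: the model sees the guidelines and the problem."""
    return f"{GUIDELINES}\n\nProblem: {problem}\nRewritten:"

def few_shot(problem: str, examples: list[tuple[str, str]]) -> str:
    """Prepend worked (original, rewritten) pairs before the target problem."""
    shots = "\n\n".join(f"Problem: {orig}\nRewritten: {rew}"
                        for orig, rew in examples)
    return f"{GUIDELINES}\n\n{shots}\n\nProblem: {problem}\nRewritten:"

def chain_of_thought(problem: str) -> str:
    """Ask the model to reason step by step before producing the rewrite."""
    return (f"{GUIDELINES}\n\nProblem: {problem}\n"
            "First, explain step by step what makes the problem hard to read, "
            "then give the rewritten problem.")

# Example: build a few-shot prompt with one worked pair.
prompt = few_shot(
    "A train travels 120 miles in 3 hours. What is its speed?",
    [("Maria bought 4 pencils at $0.25 each. What did she pay?",
      "Maria buys 4 pencils. Each pencil costs $0.25. "
      "How much money does Maria pay?")],
)
print(prompt)
```

Each function returns a prompt string that would then be sent to the model; the strategies differ only in how much scaffolding (examples, reasoning instructions) surrounds the same guidelines and problem.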

Learning engineering, learning sciences, human-computer interaction, scenario-based learning, mentor learning
