Analyzing Student Attention and Acceptance of Conversational AI for Math Learning: Insights from a Randomized Controlled Trial Abstract The significance of nurturing a deep conceptual understanding in math learning cannot be overstated. Grounded in the pedagogical strategies of induction, concretization, and exemplification (ICE), we designed and developed a conversational AI using both rule- and generation-based techniques […]
Students’ perceived roles, opportunities, and challenges of a generative AI-powered teachable agent: a case of middle school math class
Students’ Perceived Roles, Opportunities, and Challenges of a Generative AI-powered Teachable Agent: A Case of Middle School Math Class Abstract Ongoing advancements in Generative AI (GenAI) have boosted the potential of applying long-standing “learning-by-teaching” practices in the form of a teachable agent (TA). Despite the recognized roles and opportunities of TAs, less is known about […]
Math Multiple Choice Question Generation via Human-Large Language Model Collaboration
Math Multiple Choice Question Generation via Human-Large Language Model Collaboration Abstract Multiple choice questions (MCQs) are a popular method for evaluating students’ knowledge due to their efficiency in administration and grading. Crafting high-quality math MCQs is a labor-intensive process that requires educators to formulate precise stems and plausible distractors. Recent advances in large language models […]
Automated Feedback for Student Math Responses Based on Multi-Modality and Fine-Tuning
Automated Feedback for Student Math Responses Based on Multi-Modality and Fine-Tuning Abstract Open-ended mathematical problems are a commonly used method for assessing students’ abilities by teachers. In previous automated assessments, natural language processing focusing on students’ textual answers has been the primary approach. However, mathematical questions often involve answers containing images, such as number lines, […]
Improving the Validity of Automatically Generated Feedback via Reinforcement Learning
Improving the Validity of Automatically Generated Feedback via Reinforcement Learning Abstract Automatically generating feedback via large language models (LLMs) in intelligent tutoring systems and online learning platforms has the potential to improve the learning outcomes of many students. However, both feedback generation and evaluation are challenging: feedback content has to be valid especially in subjects like math, […]
An Evaluation of Perceptions Regarding Mentor Competencies for Technology-based Personalized Learning
This study discusses the development of Personalized Learning 2 (PL2), an online human mentoring system. PL2 uses student math learning data and mentor input to write custom feedback. This research focuses on finding a more efficient, research-based way to organize resources for PL2. Eighteen PL2 partner members completed a survey revealing that Engaging and Motivating Students was rated the most important skill and Understanding Educational Norms and Policies the least important. Reorganization will optimize mentor training and mentors' ability to help students overcome barriers.
Development of Scenario-Based Mentor Lessons
This demonstration shows recent advancements in scenario-based tutor training and its focus on a learn-by-doing approach. The 15-minute lessons outlined in this study use the predict-observe-explain inquiry method to develop tutors' skills in supporting student motivation. These methods are being developed within the Personalized Learning 2 (PL2) program, an app that combines student software with human tutors to improve mentoring. Enhancing mentor training will help increase student achievement while maintaining low costs. This form of training works best when tutors have scenario-based practice with response-specific feedback.
Rewriting Math Word Problems with Large Language Models
In a recent study, math word problems in Carnegie Learning's MATHia adaptive learning software were rewritten by human authors and by AI to improve clarity. Findings showed that students spent less time reading the rewritten human content and achieved higher mastery than readers of the original content. The team also used GPT-4 to rewrite the same set of math word problems under the same guidelines the human authors followed, comparing zero-shot, few-shot, and chain-of-thought prompting strategies. Analysis of the human-rewritten, original, and GPT-rewritten problems showed that the GPT rewrites had the best readability, lexical diversity, and cohesion scores, though they used more low-frequency words. Carnegie Learning plans to evaluate these rewrites in randomized field trials in MATHia.
Scenario-Based Training and On-The-Job Support for Equitable Mentoring
Personalized Learning 2 (PL2) is a professional mentoring platform created by researchers at Carnegie Mellon. Its goal is to improve mentoring efficiency and deliver personalized learning through scenario-based instruction. PL2 combines AI and research-based mentor training to support under-trained tutors in personalized learning. The platform covers social-emotional learning, math content, and culturally responsive teaching practices, addressing gaps affecting historically marginalized students by training tutors to be more efficient and productive. PL2 offers a lower-cost option for deliberate practice, increasing tutors' impact and learning capacity.
Computer-Supported Human Mentoring