Because artificial intelligence and machine learning (AI/ML) models are trained on data drawn from many different contexts, they are susceptible to biases introduced through the selection and processing of those datasets. Ensuring that learned models do not introduce or amplify systematic biases against underrepresented or historically marginalized groups is critical. This guide from the Fairness Hub serves as a starting point for more contextualized examinations of bias and fairness by providing (1) a simple metric for measuring bias, (2) an overview of bias mitigation strategies, (3) an overview of bias analysis and mitigation toolkits, and (4) a quick demonstration of how to measure and mitigate bias with the help of a toolkit.
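To make the idea of a "simple metric for measuring bias" concrete, here is a minimal sketch in plain Python. The guide itself is not quoted here, so the choice of metric is an assumption: the sketch uses the demographic parity difference, a common simple bias measure defined as the gap in positive-prediction rates between groups. The function name `demographic_parity_difference` and the sample data are hypothetical, for illustration only.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates across groups.

    A value of 0 means every group receives positive predictions
    (e.g., "likely to succeed") at the same rate; larger values
    indicate a larger disparity between groups.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    # Positive-prediction rate within each group.
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical binary predictions for two student groups "A" and "B".
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

In practice, a fairness toolkit of the kind the guide surveys would compute this and related metrics directly from a model's predictions; the point of the sketch is only that a first bias check can be a single, easily interpreted number.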
Carnegie Learning’s MATHstream: Merging the hottest trends in tech to engage math students
Born out of Carnegie Mellon University 25 years ago, Carnegie Learning has been at the forefront of educational technology ever since, refining its products through years of data analysis and software improvement to help students learn mathematics more effectively.