
Biases in ML


While one of the biggest open problems in machine learning remains building effective safety guards, a bigger one is measuring safety and “alignment” in the first place: what is the end goal, and what does an unbiased model actually look like? For something like credit scoring, the answer is relatively obvious: gender should not be a factor in the credit prediction. But for something more human-centered that genuinely is affected by gender, the question remains: do you want the model to learn the associations that exist in society (say, between emotion and gender, which could plausibly help with passively tracking mood fluctuations in, say, bipolar disorder), or do you want the model to “not see” those variables at all? Which of the two is the ideal outcome?
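As a rough illustration of why “not seeing” a variable is not the same as being unaffected by it, here is a minimal sketch (not from the post, on entirely synthetic data) that compares a credit-style classifier trained with and without the gender feature, using a simple demographic-parity gap as the bias measurement. The feature names and thresholds are invented for the example.

```python
# Minimal sketch: "fairness through unawareness" (drop the protected attribute)
# vs. a model that sees it, measured with a demographic-parity gap.
# All data is synthetic; numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                  # protected attribute (0/1)
income = rng.normal(50 + 5 * gender, 10, n)     # correlated with gender (a proxy)
debt = rng.normal(20, 5, n)
# "True" repayment outcome depends only on income and debt, not gender.
approve = (income - debt + rng.normal(0, 5, n)) > 30

X_aware = np.column_stack([income, debt, gender])    # model sees gender
X_unaware = np.column_stack([income, debt])          # gender removed

def parity_gap(model, X):
    """Absolute difference in approval rate between the two gender groups."""
    preds = model.predict(X)
    return abs(preds[gender == 1].mean() - preds[gender == 0].mean())

for name, X in [("aware", X_aware), ("unaware", X_unaware)]:
    clf = LogisticRegression().fit(X, approve)
    print(name, "parity gap:", round(parity_gap(clf, X), 3))
```

The point of the sketch is that even the “unaware” model typically shows a nonzero parity gap, because income acts as a proxy for gender; removing the variable does not remove the association, which is exactly why deciding what the unbiased end state should look like has to come before choosing a mitigation.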