But Catherine Jarmoules says algorithms aren't solely to blame, because they're programmed and fed by humans. An algorithm learns because we train it using machine learning, so whether it's deep learning with a neural network, or shallow machine learning, we're feeding it data and asking it to form an opinion. And when that opinion is based on data that contains unfair treatment of groups, the algorithm very much learns that. Take translation software: the millions of texts that teach it mention more female than male kindergarten teachers, so the algorithm learns and remembers that. Picture recognition learns from large image databases; there you find more pictures of women in the kitchen than of men, and so the algorithm learns to associate women with kitchens. Changing that is tough and time-consuming. The code is often top secret, so it's difficult to prove an algorithm is prejudiced. The Themis program can check for discrimination: it simulates fake accounts that are identical in everything but gender. Accountability, transparency and fairness are real challenges.
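The translation example can be sketched as a toy experiment. This is a minimal illustration, not any real translation system: the corpus sentences are made up, and the "model" is just a majority vote over pronoun co-occurrence counts, which is enough to show how a skewed training set becomes a skewed output.

```python
from collections import Counter

# Toy corpus standing in for the millions of real training texts
# (made-up sentences; female mentions deliberately dominate).
corpus = [
    "she is a kindergarten teacher",
    "she works as a kindergarten teacher",
    "he is a kindergarten teacher",
    "she loves being a kindergarten teacher",
]

# Count which pronoun co-occurs with the profession.
counts = Counter()
for sentence in corpus:
    if "kindergarten teacher" in sentence:
        counts[sentence.split()[0]] += 1  # first word is "she" or "he"

# A naive system translating from a language with a gender-neutral
# pronoun simply picks the statistically dominant gender it has seen.
guessed_pronoun = counts.most_common(1)[0][0]
print(guessed_pronoun)
```

Because the corpus mentions "she" three times and "he" once, the system always renders the neutral pronoun as "she" for this profession: the imbalance in the data, not any explicit rule, produces the biased behaviour.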