MACHINES NOT ONLY OBEY, THEY LEARN (AND CAN LEARN ALL WRONG)
The field of AI is extremely broad, and somewhere within it sits Machine Learning (ML), the automated learning of machines. The approach is now ubiquitous and consists, in essence, of computers that learn: their performance improves with experience. Netflix, for example, uses ML to recommend personalized titles that users will probably enjoy, based on the data those users provide. This data can be explicit, like “liking” a certain movie or series, or, as usually happens, implicit, like pausing after 10 minutes or taking two days to finish a movie. All the information we provide to Netflix, together with that provided by every other user, forms the “dataset” the system uses to constantly improve its recommendations.
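To make "improving with experience" concrete, here is a minimal sketch of the idea, not Netflix's actual system: a toy recommender whose titles, genres, signal names and weights are all invented for illustration. Each implicit signal (finishing a movie, pausing early) nudges a per-genre score, and recommendations are re-ranked from the updated scores.

```python
# Toy recommender: all titles, genres, signals and weights are hypothetical.
from collections import defaultdict

# Hypothetical catalogue: title -> genre.
CATALOGUE = {
    "Space Saga": "sci-fi",
    "Robot Dawn": "sci-fi",
    "Baking Duels": "reality",
    "Court Drama": "drama",
}

class ToyRecommender:
    def __init__(self):
        # Learned preference score per genre; starts at zero for everything.
        self.genre_score = defaultdict(float)

    def record_signal(self, title, signal):
        # Implicit signals carry different weights: finishing a movie is
        # stronger evidence of interest than pausing early, which counts against.
        weights = {"liked": 1.5, "finished": 1.0, "paused_early": -0.5}
        self.genre_score[CATALOGUE[title]] += weights[signal]

    def recommend(self, n=2):
        # Rank the catalogue by the learned genre scores, highest first.
        return sorted(CATALOGUE,
                      key=lambda t: self.genre_score[CATALOGUE[t]],
                      reverse=True)[:n]

# "Experience": one user finishes a sci-fi movie and abandons a reality show.
rec = ToyRecommender()
rec.record_signal("Space Saga", "finished")
rec.record_signal("Baking Duels", "paused_early")
```

After those two signals, `rec.recommend()` ranks the sci-fi titles first: the system's behavior changed purely because of the data it observed, which is the core of ML.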
There are diverse and concrete cases that prove how the use of AI to automate processes replicates gender biases. For example, in 2014 Amazon began developing a program that used ML to automatically evaluate hundreds of CVs. The system learned from the CVs received during the previous decade and from the performance of the people hired in that period. In 2015 the company realized that this new system ranked women lower than men for software development and other technical jobs. This happened because the dataset mirrored the historical male dominance of the technology industry. Despite its efforts, Amazon could not undo what the machine had learned and was forced to drop the program.
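The mechanism behind the Amazon case can be sketched in a few lines. This is not Amazon's code or data; it is a deliberately simplified toy with fabricated numbers, where "training" is just estimating the historical hiring rate associated with a gendered signal on a CV. Because the past decisions were skewed, the learned scores are skewed too.

```python
# Toy illustration of bias replication: all data here is fabricated.
from collections import Counter

# Hypothetical historical dataset of (gendered_signal_on_cv, was_hired),
# skewed the way a male-dominated hiring history would be.
history = ([("mens_club", True)] * 80 + [("mens_club", False)] * 20
           + [("womens_club", True)] * 20 + [("womens_club", False)] * 80)

def train(history):
    # "Training" here is simply estimating P(hired | signal) from past outcomes.
    hired = Counter(sig for sig, ok in history if ok)
    total = Counter(sig for sig, _ in history)
    return {sig: hired[sig] / total[sig] for sig in total}

model = train(history)
# The model now scores the "women's" signal far lower than the "men's" one,
# even though neither signal says anything about job performance.
```

Nothing in the signal relates to ability; the model simply memorized the skew in past decisions, which is exactly the "lesson" Amazon could not make its system unlearn.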
Joy Buolamwini and Timnit Gebru, MIT researchers, evaluated facial recognition software from IBM, Microsoft and Face++ and discovered that all three systems performed better at recognizing male faces. And white skin. When they analyzed subgroups, facial recognition of black women turned out to have the highest error rate at all three companies, averaging 31%. This result contrasted with an error rate of less than 1% for white men.
These error rates are critical. ML technology is already used by governments to detect supposed criminal tendencies, and we know it can generate reports on the probability of recidivism and even influence how sentences are determined in court.
These examples show how central AI has become to the creation of automated systems. The fact that such systems are trained on datasets generated by humans (and their life experience) means the systems will learn and preserve whatever subjectivity those datasets carry. In short: the biases that exist in the offline world are replicated online. It is necessary to promote strategies that guarantee that the mass adoption of new technologies does not create or deepen gender and race inequalities.
This is a scary reality, and it becomes a critical scenario when we consider that this is, and will continue to be, the technology that states prefer. It will directly influence people’s freedoms and rights: it is highly probable that scholarships, housing benefits and immigration permits will be decided this way, and it will play a key role in public safety.