Every week during February, Black History Month, Hold The Code plans to feature a story related to AI and racial equality.
In an article published by the MIT Technology Review, Deborah Raji explains several ways that data encodes systematic racism. From predictive policing tools that disproportionately affect communities of color, to self-driving cars that are more likely to hit Black pedestrians, Raji writes:
“Data sets so specifically built in and for white spaces represent the constructed reality, not the natural one.”
She argues that we must resist technological determinism and accept responsibility for the technology we create. There is a tendency to view data as perfectly objective, removed from our own biases.
According to Raji, the machine-learning community problematically accepts a level of dysfunction, displacing blame from humans onto machines. Only by recognizing this, she argues, can technologists begin to institute better practices, such as disclosing data provenance, deleting problematic data sets, and explicitly defining the limitations of every model’s scope.
Read Raji’s full piece here. And if you’re interested in a more in-depth study of the ethical considerations of predictive policing, check out Rashida Richardson’s paper, “Dirty Data, Bad Predictions.”