Regulating the use of facial recognition in criminal cases has been an ongoing challenge for lawmakers. AI ethicists have pointed to the technology's inaccuracy in identifying women and people of color.
But at the same time, automated facial recognition can be an incredibly powerful investigative tool: it has helped identify child molesters and, as described in a previous Hold The Code edition, the people who participated in the Jan. 6 riot at the Capitol.
In weighing these pros and cons, lawmakers have historically fallen into two camps: those who have banned the use of facial recognition technology in criminal cases outright, and those who have declined to regulate it at all. City councils in Oakland, Portland, San Francisco, Minneapolis, and elsewhere have banned police use of the technology, while other policymakers have refused to regulate it, citing its role in solving recent homicide and sexual abuse cases.
What makes the new law in Massachusetts so interesting is that it strikes a difficult balance: it regulates the technology, allowing law enforcement to harness the tool's benefits while preventing the kind of false arrests that have happened before. Here's how:
Much of the work behind the new bill has been attributed to Kade Crockford, an activist at the ACLU of Massachusetts. Describing the motivation for her efforts, Crockford said:
“One of my concerns was that we would wake up one day in a world resembling that depicted in the Philip K. Dick novel “Minority Report,” where everywhere you go, your body is tracked; your physical movements, habits, activities and locations are secretly compiled and tracked in a searchable database available to god knows who.”
Legal activists are optimistic that the work in Massachusetts can set a nationwide example, providing both the space and the opportunity for facial recognition technology to be used to its fullest, most ethical potential.