Although credit scores have been used for decades to assess creditworthiness, their scope is far greater now than ever before. Advances in AI mean that risk-assessment tools consider vastly more data and increasingly affect whether you can buy a car, rent an apartment, or get a full-time job. The rapid adoption of these technologies means that algorithms now dictate which children enter foster care, which patients receive medical care, and which families get access to stable housing.
Automated decision-making systems have created a web of interlocking traps for low-income individuals, and they disproportionately impact minorities. A bad credit score can cascade through other systems, creating a situation that is difficult, if not downright impossible, to escape.
Additionally, a primary problem with these programs is their lack of transparency. How data is used—and how decisions are reached—is seldom made public. The absence of public vetting also makes the systems more prone to error. Take, for example, what happened in Michigan six years ago, when an automated fraud-detection system wrongly accused tens of thousands of people of unemployment fraud, imposing steep penalties before the errors came to light.
A growing group of civil lawyers is organizing around this issue. Michele Gilman, a fellow at the Data & Society Research Institute, authored a report outlining the various algorithms poverty lawyers might encounter. Gilman's aim is to bring more public scrutiny and regulation to the hidden web of algorithms that poverty lawyers' clients face. "In some cases, it probably should just be shut down because there's no way to make it equitable," she says.