Welcome to our 7th edition of Hold The Code. We're eager to share some interesting AI news, but first, we'd like to extend a few invitations:
Without further ado, here are this week's stories.
ICE offices have recently tapped into a once-private database of millions of phone, electricity, and other utility records to find undocumented immigrants. The database, called CLEAR, contains over 400 million names, addresses, and service records from more than 80 companies.
This is just another instance of government agencies exploiting commercial sources to supplement their surveillance efforts with information they are not authorized to collect on their own. ICE uses the database to pursue undocumented immigrants who may have stayed off the grid by avoiding activities like getting a driver’s license, but who cannot live without paying for utilities.
CLEAR, which is run by Thomson Reuters and based on data from Equifax, is also used by a number of other organizations including:
“There needs to be a line drawn in defense of people’s basic dignity. And when the fear of deportation could endanger their ability to access these basic services, that line is being crossed,” says Nina Wang, a policy associate at Georgetown Law’s Center on Privacy & Technology. “It’s a massive betrayal of people’s trust. … When you sign up for electricity, you don’t expect them to send immigration agents to your front door.”
No year has been more revealing than 2020 of how we are divided into two economies: the many, who have been struggling to make ends meet while trying to avoid a dangerous virus, and the few, who control the companies that are now an essential part of everyday life.
According to tech critic Paris Marx,
In an economy that is shrinking for the many and growing for the few, the defining strategy of many digital platforms has been to become a monopoly at any cost. This model has proved to be good business, as users often have no option but to use a specific platform.
This approach is best understood as an expansion of rentierism: owning property and extracting rent from those who live and work on it. This “Internet of Landlords” is transforming our social and economic interactions into services mediated by corporate platforms.
Think of what Amazon does for e-commerce, or what Google does for search and productivity tools. In our everyday lives, we are forced to deal with an ever-growing number of landlords (often without choice), constantly paying rent with our money and our data. By controlling the property required for productive and essential work and life activities, these companies hold tremendous power over the people who use their products.
Crafting policies that govern data could address some of these issues; we can find inspiration in analogous measures like rent control and capital controls. By restricting the conditions under which data can be captured and the purposes for which it can be used, we could begin to redistribute power away from these platforms in the digital economy.
More than 500 universities across the United States use Navigate, advising software from EAB, an education research company. The program uses a predictive model that estimates student success from a range of variables and recommends students for classes and majors.
Documents acquired by The Markup reveal that race is used as a predictor of student success in Navigate's model, and in turn there are large disparities in how the software treats students of different races. Black students are deemed "high risk" at quadruple the rate of their white peers, which means they're far less likely to be recommended for STEM classes and majors.
Navigate’s racially influenced risk scores “reflect the underlying equity disparities that are already present on these campuses and have been for a long time,” says Ed Venit, who manages student success research for EAB.
Don't act too surprised: it saves money. In fact, EAB has aggressively marketed itself as a "financial imperative." Student retention is a big concern for colleges — especially public universities. EAB prides itself, for instance, on its integration at Georgia State University: since adopting the program, Georgia State has increased degrees awarded by 83%.
Here at "Hold The Code," we're averse to models that are, as in this case, explicitly biased. But this doesn't mean that AI can't be used productively to increase student retention. Eliminating factors like race, considering students holistically, and researching the actual causes of student dropouts could illuminate a meaningful application of AI.
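The first of those suggestions, keeping explicitly protected attributes out of a risk model's inputs, can be sketched in a few lines. This is a purely hypothetical illustration: nothing here reflects Navigate's actual implementation, and the field names, weights, and scoring function are invented for the example.

```python
# Hypothetical sketch: exclude protected attributes before scoring.
# None of this reflects EAB/Navigate's real model or data schema.
SENSITIVE_FEATURES = {"race", "ethnicity"}

def strip_sensitive(record: dict) -> dict:
    """Return a copy of a student record without protected attributes."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FEATURES}

def risk_score(record: dict, weights: dict) -> float:
    """Toy linear risk score over whichever numeric features remain."""
    return sum(
        weights.get(k, 0.0) * v
        for k, v in record.items()
        if isinstance(v, (int, float))
    )

# Invented example inputs, for illustration only.
student = {"gpa": 3.1, "credits_attempted": 14, "race": "Black"}
weights = {"gpa": -1.0, "credits_attempted": -0.05}

print(risk_score(strip_sensitive(student), weights))
```

One caveat worth stating: simply dropping a protected attribute does not remove bias that leaks in through correlated features (zip code, high school attended, and so on), which is why the holistic review and dropout research mentioned above still matter.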
A recently published piece in The Atlantic calls for internet reform as the path toward salvaging our democracy. The premise of the argument is the idea that:
"An internet that promotes democratic values instead of destroying them—that makes conversation better instead of worse—lies within our grasp."
Our current social media landscape has eroded democratic values, the authors contend. We're living in what they describe as "a Tocquevillian nightmare" (a nod to Alexis de Tocqueville, who advocated for civic discourse and engagement), but "instead of participating in civic organizations that give them a sense of community as well as practical experience in tolerance and consensus-building, Americans join internet mobs, in which they are submerged in the logic of the crowd, clicking Like or Share and then moving on."
In short: memes, lulz, and "ironic" bigotry have won the internet. With corporate America's help, conversations are now ruled by algorithms designed to capture attention, harvest data, and amplify the loudest, most radicalized voices.
According to the authors, in this type of internet wilderness, "democracy is impossible."
Alternatives are possible; we know this because we have used them. Before private commercial platforms definitively took over, online public-interest projects briefly flourished.
Another point The Atlantic writers make is one that warms the heart of your Hold The Code writers: algorithms can be used to promote better internet governance.
Nathan Matias, a scholar in AI ethics, observed that when users on Reddit worked together to promote news from reliable sources, the Reddit algorithm itself began to prioritize higher-quality content. In his own lab, Matias works on making digital technologies that serve the public, not just private companies. He reckons that if more labs like his existed, a new generation of citizen-scientists could work with companies to understand how their algorithms function, hold them accountable if they refuse to cooperate, and experiment with fresh approaches to governing them.
Read the full essay here.
Written by Lex Verb and Molly Pribble