From an entire California school board resigning after being caught on camera mocking parents, to a United Airlines employee confirming the #TedFled scandal by releasing Ted Cruz’s flight information to the press, this week (like every week) showed both the power and the perils of living in a data-saturated society. Hold The Code is a community passionate about exploring these topics, and we thank you for being part of it. Without further ado, here are this week’s top stories.
As student reporter Waverly Long reports in The Daily Northwestern, the university is facing a lawsuit accusing it of improperly capturing and storing students’ biometric data. The suit, filed in late January by an anonymous junior, alleges that Northwestern violated the Illinois Biometric Information Privacy Act.
Biometric data & its regulations
Biometrics is the analysis of people’s unique physical and behavioral characteristics; the technology is used mainly for identification. The Illinois Biometric Information Privacy Act (BIPA) was enacted to protect residents from companies that collect this kind of data: before collecting biometric information, a company must fully inform users of what is being collected and why, and must obtain their explicit written consent.
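To make the consent requirement concrete, here is a minimal sketch of what a BIPA-style consent gate might look like in code. This is purely illustrative: all class and field names are our own invention, not from any real proctoring system or from the statute itself.

```python
# Hypothetical sketch of a BIPA-style consent gate: before any biometric
# sample is stored, the system records informed, written consent that
# names the purpose and retention period. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class Consent:
    user_id: str
    purpose: str           # why the data is collected (e.g., "exam proctoring")
    retention_days: int    # how long the data will be kept
    written_release: bool  # BIPA requires a written release, not implied consent


class BiometricStore:
    def __init__(self):
        self._consents = {}
        self._samples = {}

    def record_consent(self, consent: Consent):
        if not consent.written_release:
            raise ValueError("consent must be an explicit written release")
        self._consents[consent.user_id] = consent

    def store_sample(self, user_id: str, sample: bytes):
        # Refuse to store biometric data without prior informed consent.
        if user_id not in self._consents:
            raise PermissionError(f"no recorded consent for user {user_id}")
        self._samples.setdefault(user_id, []).append(sample)


store = BiometricStore()
store.record_consent(Consent("u1", "exam proctoring", 365, written_release=True))
store.store_sample("u1", b"facial-scan")
```

The key design point is that storage is impossible without a prior consent record; the lawsuit’s claim, in these terms, is that the consent step was skipped entirely.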
The Northwestern lawsuit
Calls to ban online test proctoring software
Students at universities across the United States have pointed out the invasive nature of this software. The lawsuit cites petitions currently circulating at several institutions and references this Forbes article, further highlighting the privacy concerns at stake.
Check out The Daily Northwestern’s reporting here.
Ladies and gentlemen, it’s happened again: Google has fired another of its top AI ethics researchers. On Friday, the company announced that it had fired Margaret Mitchell, founder and co-head of its artificial intelligence ethics unit. The announcement comes three months after the controversial departure of Timnit Gebru, another senior figure on Google’s ethics team.
Why was Mitchell fired?
Google claims that Mitchell violated the company’s code of conduct and security policies, but there is still speculation about the circumstances and severity of the firing.
What are people saying?
Though many details surrounding the firing remain unknown or vague, the news has sparked a discussion about the function and necessity of AI ethics boards at tech companies. Opinions range from those who recognize the importance of evaluating the social effects of AI systems to those who dismiss the field as “a way for humanities types to wedge themselves into a hot, high paying field.”
Users of Hacker News, a social news aggregator that focuses on computer science topics, have had no shortage of opinions on the necessity of AI ethics in the tech industry:
Here at RAISO, we think it is extremely important to understand the social, political, and economic effects AI systems can have on our society. We believe it is vital for people to be educated and think critically about these systems, especially as they become increasingly integrated into our daily lives.
As the reach of AI continues to expand, it is equally vital that these systems be developed equitably, ethically, and responsibly.
NASA’s Perseverance rover successfully touched down on Martian soil on Thursday after seven months of space travel. The mission is NASA’s most ambitious search for life on Mars since the Viking missions of the 1970s, and it is scheduled to last a full Martian year (roughly 687 Earth days).
Perseverance is outfitted with an advanced, AI-assisted instrument called the Planetary Instrument for X-ray Lithochemistry (or PIXL, if you don’t have that much time). PIXL differs from instruments flown on past missions in that its incredibly powerful X-ray beam can pinpoint surface features as small as a grain of salt. NASA is using this technology to look for textures in Martian rocks that may indicate the presence of chemicals linked to possible life on Mars.
Positioning PIXL is an AI-powered hexapod: a six-legged device that autonomously decides how to execute the microscopic movements needed to aim PIXL’s beam with extreme precision.
This mission is just the beginning of NASA’s plans.
Although credit scores have been used for decades to assess creditworthiness, their scope is far greater now than ever before. Advances in AI mean that risk-assessment tools consider vastly more data and increasingly affect whether you can buy a car, rent an apartment, or get a full-time job. The rapid adoption of these technologies means that algorithms now dictate which children enter foster care, which patients receive medical care, and which families get access to stable housing.
Automated decision-making systems have created a web of interlocking traps for low-income individuals, and they disproportionately impact minorities. A bad credit score can cascade through other systems, creating a situation that is difficult, if not downright impossible, to escape.
Additionally, a primary issue with these programs is their lack of transparency. How data is used, and how decisions are reached, is seldom made publicly available. The lack of public vetting also makes the systems more prone to error. Take, for example, what happened in Michigan six years ago:
A growing group of civil lawyers is organizing around this topic. Michele Gilman, a fellow at the Data & Society Research Institute, authored a report outlining the various algorithms poverty lawyers might encounter. Gilman’s aim is to bring more public scrutiny and regulation to the hidden web of algorithms that poverty lawyers’ clients face. “In some cases, it probably should just be shut down because there’s no way to make it equitable,” she says.
Written by: Molly Pribble and Lex Verb