Evanston (along with a lot of the US) is currently an arctic tundra, and Northwestern is in the depths of midterms season. But, your friends here at RAISO hope to bring a smile to your frostbitten face and relevant AI news to your inbox with this week’s edition. As a reminder, we’re a non-engineer friendly newsletter that aims to connect students across fields in raising awareness and understanding about AI and contemporary technology (CTech).
You can join our Slack group here to stay up to date with our programs. We're working on featuring speakers in ethical AI and AI research; join to be among the first to know (this is open to all).
Finally, don’t forget to subscribe if you haven't already.
Most college students are familiar with the recruitment and job-seeking process, a notoriously impersonal and challenging experience. These trends may continue as a growing number of companies expand their use of AI algorithms to decide whether to reject applicants, especially at the early stages of the hiring process.
Here are a few notable AI platforms:
Before we worry about robots taking all of our jobs, we may need to worry about robots giving us our jobs in the first place.
AI is shaping businesses and playing an increasingly important role in the markets. A clear example of this occurred last week when Palantir (PLTR) and IBM announced a global partnership.
Why it Matters
According to an IBM study, nearly 75% of businesses surveyed say they are exploring or implementing AI. Yet over 30% cited limited AI expertise and data complexity as barriers to adoption. IBM and Palantir's joint product, built on IBM's "Cloud Pak for Data," is specifically designed to let users access, analyze, and act on the vast amounts of data scattered across hybrid cloud environments – without the need for deep technical skills.
We tend to think of computers as small devices we can carry around in a briefcase or our back pocket. Because everyday computers are relatively small and inexpensive to power, it's easy to forget the impact computing has on energy consumption and the environment. Supercomputers and computationally expensive algorithms require vast amounts of energy and resources. In fact, training a single AI model can emit as much carbon as five cars do over their entire lifetimes.
Common carbon footprint benchmarks (in pounds of carbon dioxide):
The environmental impact of training AI models is often overlooked, even by researchers. Siva Reddy, a postdoc at Stanford working on an NLP model, says, "What many of us did not comprehend is the scale of [our carbon footprint] until we saw these comparisons."
Reddy continues, noting that “Human brains can do amazing things with little power consumption. The bigger question is how we can build such machines.”
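The kind of back-of-envelope comparison Reddy describes boils down to simple arithmetic: energy used by the hardware, times the carbon intensity of the electricity grid. Here is a minimal sketch; every number below is an illustrative assumption we've picked for the example, not a figure from the research itself.

```python
# Rough, illustrative estimate of the CO2 emitted by a training run.
# ALL constants below are placeholder assumptions for this sketch.

GPU_COUNT = 8             # assumed number of accelerators
GPU_POWER_KW = 0.3        # assumed draw per accelerator, in kilowatts
TRAINING_HOURS = 24 * 30  # assumed month-long training run
PUE = 1.5                 # assumed datacenter power usage effectiveness
LBS_CO2_PER_KWH = 0.95    # assumed grid average, pounds of CO2 per kWh

# Total electricity drawn from the grid, including datacenter overhead (PUE).
energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_HOURS * PUE

# Convert energy to emissions using the grid's carbon intensity.
emissions_lbs = energy_kwh * LBS_CO2_PER_KWH

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_lbs:,.0f} lbs CO2")
```

Even with these modest made-up numbers, a single month-long run lands in the thousands of pounds of CO2, which is why researchers were startled once the comparisons were laid out.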
Timnit Gebru is a trailblazing AI ethics researcher, highly regarded within the AI research community. She co-authored a groundbreaking paper showing that facial recognition is less accurate at identifying women and people of color, making it more likely to be used in ways that discriminate against them. She also co-founded the Black in AI affinity group and has consistently championed diversity in the tech industry. Most recently, she has been at the center of controversy following her departure from Google over tensions surrounding a paper she co-authored.
Timnit was born and raised in Ethiopia and eventually received political asylum in the United States. She earned her bachelor's and master's degrees at Stanford University and worked at Apple and then Microsoft before accepting a position as co-lead of Google's ethical AI team.
What Happened at Google
While many of the details surrounding Gebru’s departure are unclear, here’s what we know:
Implications & Next Steps
Some have argued that Google's actions could have "a chilling effect on the future of AI ethics research." Given that many top experts in AI ethics work at large tech companies, misaligned incentives and a lack of scholarly openness can present immense barriers to future research.
In an interview with TechCrunch, Gebru said she doesn’t see herself working at another corporation. Instead, she aims to pursue ethical AI research within the non-profit space and build on her work with Black in AI.
Written by: Lex Verb and Molly Pribble.