Dr. Timnit Gebru

A discussion of Dr. Timnit Gebru and her work.
Author

Bridget Ulian

Published

April 25, 2023

Dr. Timnit Gebru and Her Work

Section 1: Introduction

In my time at Middlebury College, I have been lucky enough to take two classes that focus on ethics as they pertain to technology: Gender, Technology and Future with Professor Gupta and Politics of Virtual Realities with Professor Stanger. Readings either about or by Dr. Timnit Gebru were included in both classes’ curricula. Dr. Gebru is a computer scientist, an Ethiopian refugee who found political but not emotional asylum in the United States, and someone who has done as much as anyone to push research on the ethics of artificial intelligence into entirely novel waters.

Dr. Gebru’s current work is focused on the Distributed Artificial Intelligence Research Institute (DAIR), a collective of multidisciplinary researchers who examine the outcomes of AI technology, particularly as it pertains to the African continent and African immigrants in the United States. She previously served as co-lead of Google’s Ethical Artificial Intelligence Team, a position that ended in contention when Google asked her not to publish a paper examining the dangers of bias in large language models. Dr. Gebru says she was fired; Google says she resigned. Either way, Google faced extensive internal and external criticism in response.

Dr. Gebru will be visiting Middlebury College virtually to give a lecture on bias and the social impacts of artificial intelligence and, more specifically, will join our class for a Q&A on Monday, April 24.

Section 2: Dr. Gebru’s Talk

Dr. Gebru’s talk at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR) focuses on aspects of bias in artificial intelligence that many discussions of the topic leave unexplored. I find her idea of a dominant group “close to the money” very interesting; it follows a theme I noticed in articles about Dr. Gebru, namely that she focuses on the power dynamics of AI rather than just the biases. She also argues that visibility isn’t inclusion: acknowledging that bias exists does not make those biases disappear.

It is easy for companies like Google or Amazon or Microsoft to put out a statement saying, “We understand there is bias in our algorithms and datasets. We are working to diversify our datasets and hone our algorithms.” Doing the work is much more difficult and multifaceted. Dr. Gebru explains this very well, particularly with an example of Google attempting to diversify its facial recognition datasets: in doing so, Google put out predatory advertisements asking darker-skinned people to join its dataset. Similarly, when developers realized that gender recognition technology wasn’t trained on images of trans people, they scraped YouTube for videos of trans creators without notifying them. Dr. Gebru argues the harm to marginalized people goes even deeper: why build a gender recognition system that categorizes people according to a binary, socially constructed idea in the first place?

The point is that it takes a lot of work and time to understand the implications of different technologies. In a competitive, for-profit industry, work and time are costs to be cut. Why would a company spend time and resources hiring experts on the biases and social implications of a technology when it could make millions of dollars and cut costs simply by shipping the product to the public? To rebalance the power dynamic between for-profit corporations and marginalized individuals, educated experts and resources need to focus on the social implications of technology. Ensuring that corporations take that time and hire those experts will likely require government intervention.
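
To make concrete what “doing the work” can look like beyond a press release, here is a minimal sketch of a disaggregated evaluation, the kind of audit Dr. Gebru’s research community advocates: instead of reporting a single aggregate accuracy number, break error rates out by demographic subgroup so disparities become visible. The data, group names, and numbers below are entirely invented for illustration.

```python
# A minimal, hypothetical sketch of disaggregated evaluation.
# All records and group names here are invented for illustration.
from collections import defaultdict

# Each record: (demographic subgroup, true label, model prediction)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

# A single aggregate number can look acceptable...
overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.2f}")

# ...while the per-group breakdown exposes the disparity.
for group in sorted(total):
    print(f"{group} accuracy: {correct[group] / total[group]:.2f}")
```

In this toy example, the aggregate accuracy of 0.75 hides a fifty-point gap between the two groups. An audit like this is only a first step, but it is the kind of work that a statement alone does not do.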

tl;dr: Visibility isn’t inclusion. Acknowledging biases and inequity in technology development does not solve those biases and inequities; doing so takes much more work and deeper research into a technology’s implications.

Section 3: Questions

I have two questions for Dr. Gebru: one that has plagued me since I wrote a paper for Politics of Virtual Realities, and one that is simply a curiosity.

  1. Meredith Whittaker, the senior advisor on AI to the Federal Trade Commission, has said: “What I am concerned about is the capacity for social control that [AI] gives to a few profit-driven corporations.” Do you think the government has the capacity to regulate the power dynamics between massive for-profit tech corporations and the individual citizen, particularly marginalized citizens? Would this have to be an international institution, or is it feasible for individual governments to have different regulations for tech corporations?

  2. Do you think an industry-wide oath that all technologists must take, similar to the Hippocratic oath, could help mitigate some of the issues we see in technology? If yes, what would be in that oath?

Optional Section 4: An excerpt from the paper I wrote for Politics of Virtual Realities that inspired Question 1

To avoid re-enchantment with AI and to retain our human dignity and autonomy, government leaders must take the initiative in discussing and questioning how AI fits into our current world; AI left unchecked will not follow the moral or ethical guidelines necessary in decision-making, particularly when it comes to governance. This is not a simple task, particularly at a time when competition is so fierce between the United States and China, two technological and economic superpowers. However, without this discussion, humans at all levels of life will promote AI to the position of superior thinker in our world. In doing so, humanity will give up its autonomy and dignity. Once artificial intelligence begins making decisions for humans and humans stop questioning the validity or ethical implications of those decisions, regaining human autonomy will be impossible. As Heidegger argues, questioning the essence of technology is necessary to avoid becoming a standing-reserve and to continue humanity’s progression. This questioning must start with world leaders, who have the experts and means available to understand the implications of artificial intelligence on us as human beings.

After Dr. Gebru’s Talk

Section 1: Dr. Gebru’s Argument

Dr. Timnit Gebru’s argument about AGI and second-wave eugenics is this: people are fixated on AI as a means to a utopia or an apocalypse, a transhuman experience, and in doing so they are not paying attention to the current problems of AGI and the fact that reaching that transhuman experience requires discriminatory choices, abusive labor, and deep wealth disparities. She draws a very clear connection between first-wave eugenics (think sterilization of disabled people and people of color, which, as I recently learned in my American Psycho class, continued deep into the 1960s) and second-wave eugenics. She paints first- and second-wave eugenics as problematic in similar ways: both define intelligence in ways that play into casual racism, and both promote as positive certain traits typically found in wealthy, well-educated, white people. She also touched on the monopolization of AGI and the “race to the bottom” in creating larger, more generalized, more versatile models. The problem with larger models, Dr. Gebru argued, is that they do not feed back into the communities they come out of.

Second-wave eugenics is harmful, Dr. Gebru argues, partly because its proponents choose what qualifies as ‘intelligent’ and which human traits are desirable going forward into a posthuman existence. One thing I found particularly interesting was how often second-wave eugenicists cite Charles Murray. Dr. Gebru seemed unaware of the history between Charles Murray and Middlebury College, but given how contentious that history is, the point was especially salient here. If second-wave eugenicists are worried about being labeled discriminatory and racist, they should not draw a connection between themselves and a homophobic, racist, supposed scholar.

Another issue with the second-wave eugenicists is their treatment of AI as some mystical, magical future superpower. Treating AI as a path toward either utopia or apocalypse distracts from the fact that AI is being developed right now, under discriminatory, rushed, and vastly unfair circumstances. AGI is not a future mystical superpower but a current ailment. To change the problematic foundation of artificial intelligence, computer scientists and billionaires should focus on fixing the problems of today: how can we stop large models and corporations from monopolizing the market? How can we provide less abusive career paths for the people labeling datasets and moderating models, and fair treatment for those whose images and artwork are used in datasets?

I agree in general with Dr. Gebru’s argument. She brings up very interesting points about posthumanism and the TESCREAL community, and I believe I will carry a lot of the context from this talk with me as I continue reading about the forays of Elon Musk and crew. I also liked that she reached beyond the technicalities of artificial intelligence models, beyond ‘check that your data is not biased’ and ‘think about who these algorithms affect.’ Both of those things are wildly important, but I was also searching for something new from Dr. Gebru, and she definitely provided something new to chew on.

I do wish, however, that she had touched a little more on her interactions with legislators and her hopes for government regulation going forward. I came close to asking a question about it, but another audience question drew out at least a vague answer about what she looks for in regulation. As a political science major (as well as a computer science major), I am intrigued by tech regulation in governance. I would have loved to hear what a typical conversation with a member of Congress sounds like for Dr. Gebru: does she explain the technicalities to each legislator? What does the current state of AI regulation look like in government? I know Congress was mocked endlessly for its questioning of Mark Zuckerberg; is it even possible for today’s Congress to regulate tech, given its lack of expertise?

Reflection

I loved Dr. Gebru’s talk, and I think I could have sat there listening to her for a lot longer. She is so thoughtful in her speech, something I greatly admire in very smart people. I appreciated her expertise in fields outside of technical computer vision and her willingness to dip into the theories behind TESCREAL. This past summer, I found myself searching endlessly for jobs that combine political science and policy with computer science. I stumbled across Schmidt Futures, Eric Schmidt’s philanthropic fellowship for computer science students, applied, and was promptly rejected. (Side note: applying to the former Google CEO’s philanthropic fellowship was probably not the best way to take down big tech corporations.) I have considered going to graduate school for tech public policy, and I am very interested in trying to limit the reach of massive tech corporations. Hearing someone speak who is so knowledgeable in a field I have been trying to find my way into was magical. Dr. Gebru inspired the same fascination in me in just one hour that semester-long classes have inspired.

Like I said, my search for a job at the intersection of political science and computer science was rather difficult, and I found myself locked into a corporate software engineering job with hopes of going to graduate school in the future. Whenever I feel the spark waning (money is a very strong pull, and losing money a pretty strong push), I can think back on Dr. Gebru’s talk. I would do pretty much anything to sit and pick her brain for hours. Thank you, Phil, for emailing her and bringing her to talk to us.