Timnit Gebru facts for kids
Quick facts for kids
Gebru in 2018
Born: 1982/1983 (age 37–38)
Alma mater: Stanford University
Known for: Algorithmic bias
Timnit Gebru (born 1982/1983) is a computer scientist who works on algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of black researchers working in artificial intelligence.
In December 2020, her employment with Google as technical co-lead of the Ethical Artificial Intelligence Team ended after senior Google managers asked her either to withdraw an as-yet-unpublished paper or to remove the names of all Google employees from it (that is, five of the six coauthors, leaving only Emily M. Bender). She refused, demanded to know the names and reasons of everyone involved in that decision, and threatened to resign if the information was not provided. Google did not meet her request and immediately terminated her employment. Google stated that the paper in question, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", ignored recent research showing methods of mitigating bias in such systems. Her departure caused public controversy.
Early life and education
Gebru was born and raised in Addis Ababa, Ethiopia. Her father died when she was five years old and she was raised by her mother. Both her parents are from Eritrea. As an Eritrean, she faced a possible forced deportation during the Ethiopian-Eritrean War and was forced to flee Ethiopia at age 15. She eventually received political asylum in the United States.
After completing high school in Massachusetts, she was accepted to Stanford University, where she earned her bachelor's and master's degrees in electrical engineering. Gebru worked at Apple Inc., developing signal processing algorithms for the first iPad. She earned her doctorate under the supervision of Fei-Fei Li at Stanford University in 2017, using data mining of publicly available images. She was interested in the amount of money spent by governmental and non-governmental organisations trying to collect information about communities. To investigate alternatives, Gebru combined deep learning with Google Street View to estimate the demographics of United States neighbourhoods, showing that socioeconomic attributes such as voting patterns, income, race and education can be inferred from observations of cars. For example, neighbourhoods where pickup trucks outnumbered sedans were more likely to vote for the Republican party. Her team analysed over 15 million images from the 200 most populated US cities. The work was extensively covered in the media, being picked up by BBC News, Newsweek, The Economist, and The New York Times.
Gebru presented her research at the 2017 LDV Capital Vision Summit competition, where computer vision scientists present their work to members of industry and venture capitalists. Gebru won the competition, starting a series of collaborations with other entrepreneurs and investors. Both during her PhD program in 2016 and in 2018, Gebru returned to Ethiopia to teach with AddisCoder, the programming camp founded by Jelani Nelson. After receiving her PhD, Gebru joined Microsoft as a postdoctoral researcher in the Fairness, Accountability, Transparency and Ethics in AI (FATE) lab.
Career and research
Gebru worked at Google on the ethics of artificial intelligence. She studied the implications of artificial intelligence, looking to improve the ability of technology to do social good. She collaborated with the MIT research group Gender Shades. Gebru worked with Joy Buolamwini to investigate facial recognition software, finding that it misidentified darker-skinned women at error rates of up to 35%, compared with under 1% for lighter-skinned men. When Gebru attended an artificial intelligence conference in 2016, she noticed that she was the only black woman among 8,500 delegates. Together with her colleague Rediet Abebe, Gebru founded Black in AI, a community of black researchers working in artificial intelligence.
Gebru also worked on Microsoft's Fairness, Accountability, Transparency, and Ethics in AI team. In 2017, Gebru spoke at the Fairness and Transparency conference, where MIT Technology Review interviewed her about biases that exist in AI systems and how adding diversity to AI teams can fix that issue. In her interview with Jackie Snow, Snow asked Gebru, "How does the lack of diversity distort artificial intelligence and specifically computer vision?" Gebru pointed out that biases held by software developers can find their way into the systems they build. Gebru and other artificial intelligence researchers signed a letter describing systemic issues in Amazon's facial recognition software. A study conducted by MIT researchers showed that Amazon's facial recognition system had more trouble identifying darker-skinned women than any other technology company's facial recognition software. In a New York Times interview, Gebru further expressed her belief that facial recognition is currently too dangerous to be used for law enforcement and security purposes.
Exit from Google
Gebru ceased working for Google in December 2020. The circumstances of her departure are disputed. Gebru and some of her co-workers claimed that she had been fired from Google. Google executives Jeff Dean and Megan Kacholia claimed that she offered to resign and that her resignation was subsequently accepted by Google. She stated that she never offered to resign, only threatened to do so.
Gebru had co-authored a paper on the risks of very large language models, regarding their environmental and financial costs, their inscrutability (leading to unknown dangerous biases), their inability to understand the concepts underlying what they learn, and the potential for using them to deceive people. In an email sent to an internal collaboration list, Gebru described how she was summoned to a meeting at short notice where she was asked to withdraw the paper, and said that her subsequent inquiries into the identities of the reviewers, and into how and why the decision had been taken, were ignored. Gebru asked for certain conditions to be met in order to prevent her resignation, but Google's AI team was unwilling to meet those conditions and accepted her resignation on the same day. Jeff Dean, Google's head of AI research, replied with an email saying that they took the decision because the paper ignored too much relevant recent research on ways to mitigate some of the problems it described, concerning the environmental impact and bias of these models.
Following the controversy, Google CEO Sundar Pichai issued an apology without admitting wrongdoing. Close to 2,700 Google employees and more than 4,300 academics and civil society supporters signed a letter condemning Gebru's alleged firing. In the aftermath, two Google employees resigned from their positions at the company.
On 16 December 2020, Google's Ethical AI research team demanded that Vice President Megan Kacholia be removed from the team's management chain. Kacholia had allegedly fired Gebru without first notifying Gebru's direct manager, Samy Bengio. Google's Ethical AI team also demanded that Megan Kacholia and Google's head of AI, Jeff Dean, apologize for how Gebru was treated.
Nine members of Congress sent a letter to Google asking it to clarify the circumstances around Timnit Gebru's exit.
The controversy led to an online harassment campaign against Gebru and her supporters.
Gebru, Joy Buolamwini, and Inioluwa Deborah Raji won VentureBeat's 2019 AI Innovations Award in the category AI for Good for their research highlighting the significant problem of algorithmic bias in facial recognition.
Timnit Gebru Facts for Kids. Kiddle Encyclopedia.