Timnit Gebru facts for kids

Quick facts for kids
Timnit Gebru
Gebru in 2018
Born 1982/1983 (age 41–42)
Addis Ababa, Ethiopia
Alma mater Stanford University
Known for
  • Algorithmic bias
  • Fairness in machine learning
Scientific career
Fields Computer science
Institutions Apple, Microsoft, Google, Distributed AI Research Institute (DAIR)
Doctoral advisor Fei-Fei Li

Timnit Gebru (Amharic and Tigrinya: ትምኒት ገብሩ; born 1982/1983) is an Ethiopian-born computer scientist of Eritrean heritage. She works in artificial intelligence (AI), focusing on how AI systems can treat people unfairly (called algorithmic bias) and on finding patterns in large amounts of information (known as data mining).

She helped start Black in AI, a group that supports Black people in AI research and development. Timnit Gebru also founded the Distributed Artificial Intelligence Research Institute (DAIR).

In 2020, there was a big discussion when Timnit Gebru left Google. She was a leader on their Ethical Artificial Intelligence Team. She had written a paper about the risks of very large AI language models. Google asked her to remove the names of Google employees from the paper or take it back. Timnit Gebru asked for more information about this request. Google then ended her job, saying they accepted her resignation. However, Timnit Gebru said she had not actually resigned.

Many people recognize Timnit Gebru for her knowledge in AI ethics. Fortune magazine named her one of the World's 50 Greatest Leaders. Nature listed her as one of ten people who shaped science in 2021. In 2022, Time magazine named her one of the 100 most influential people in the world.

Timnit Gebru's Early Life and School

Timnit Gebru was born and grew up in Addis Ababa, Ethiopia. Her father, an electrical engineer, passed away when she was five. Her mother, an economist, raised her. Both of her parents are from Eritrea.

When Timnit Gebru was 15, during the Eritrean–Ethiopian War, she left Ethiopia. Some of her family had been sent to Eritrea and forced to join the war. She first tried to get a visa for the United States but was denied. She lived in Ireland for a short time. Later, she received political asylum in the US. She described this time as "miserable."

She settled in Somerville, Massachusetts, for high school. There, she quickly faced racism. Some teachers would not let her take certain advanced classes, even though she was a very good student.

After high school, an event with the police made her think about ethics in technology. A friend, who was a Black woman, was attacked in a bar. Timnit Gebru called the police to report it. She said that instead of helping, the police arrested her friend. Timnit Gebru called this a key moment and a clear example of "systemic racism."

In 2001, Timnit Gebru was accepted into Stanford University. She earned her Bachelor of Science and Master of Science degrees in electrical engineering. In 2017, she received her PhD in computer vision. Her advisor during her PhD program was Fei-Fei Li.

During the 2008 US presidential election, Timnit Gebru helped campaign for Barack Obama.

She presented her PhD research at a 2017 competition where computer vision scientists showed their work to companies and investors. Timnit Gebru won the competition, which led to collaborations with entrepreneurs and investors.

In 2016 and 2018, while working on her PhD, Timnit Gebru returned to Ethiopia. She helped teach AddisCoder, a programming course founded by Jelani Nelson.

While studying for her PhD, she wrote a paper that was not published. It was about her worries for the future of AI. She wrote about the dangers of not having enough different kinds of people working in AI. She based this on her experiences with the police. She also mentioned a report that showed how human biases can appear in machine learning systems.

Timnit Gebru's Career in Technology

Visual Computational Sociology: Gebru discussing her finding that one can predict, with some reliability, how an American will vote from the type of vehicle they drive

Early Work at Apple (2004–2013)

Timnit Gebru started as an intern at Apple while at Stanford, working in the hardware division building circuits for audio components. The next year, she was offered a full-time job. Her manager told Wired that she was "fearless" and well-liked.

At Apple, Timnit Gebru became more interested in creating software. She focused on computer vision that could find human shapes. She then helped develop signal processing programs for the first iPad. At that time, she said she did not think about how this technology could be used for watching people. She just found it "technically interesting."

Later, in 2021, during the #AppleToo movement, Timnit Gebru shared her own bad experiences at Apple. She said she had "so many egregious things" happen there. She felt that Apple needed to be held responsible for its actions. She also said that the news media often protects big tech companies like Apple from public review.

Research at Stanford and Microsoft (2013–2017)

In 2013, Timnit Gebru joined Fei-Fei Li's lab at Stanford. She used data mining to study public images. She was interested in how much money groups spent to gather information about communities.

Looking for a less expensive approach, Timnit Gebru combined deep learning with Google Street View images to estimate the makeup of United States neighborhoods. Her research showed that characteristics such as voting patterns, income, race, and education could be predicted from the cars seen there. For example, if a neighborhood had more pickup trucks than sedans, it was more likely to vote for the Republican party. The team analyzed over 15 million images from the 200 largest US cities. Many news outlets, including BBC News and The New York Times, covered this work.
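The study's headline finding can be pictured as a toy rule of thumb. This is only a simplified sketch for illustration (with made-up vehicle counts), not the actual deep-learning model, which analyzed millions of Street View images:

```python
def predict_vote(pickup_count, sedan_count):
    """Toy heuristic based on the study's headline finding:
    neighborhoods with more pickup trucks than sedans were
    more likely to vote Republican."""
    if pickup_count > sedan_count:
        return "leans Republican"
    return "leans Democratic"

# Hypothetical vehicle counts for two imaginary neighborhoods:
print(predict_vote(120, 80))   # prints "leans Republican"
print(predict_vote(40, 150))   # prints "leans Democratic"
```

The real model, of course, used many vehicle attributes at once, not a single comparison like this.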

In 2015, Timnit Gebru went to a top AI conference called NIPS (now known as NeurIPS) in Montreal, Canada. Out of 3,700 attendees, she noticed she was one of only a few Black researchers. The next year, she counted only five Black men, and she was the only Black woman, among 8,500 attendees.

Because of this, she and her colleague Rediet Abebe started Black in AI. This is a group for Black researchers who work in artificial intelligence.

In 2017, Timnit Gebru joined Microsoft as a researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) lab. She spoke at a conference about fairness and transparency. MIT Technology Review interviewed her about biases in AI systems. She explained how having more diverse people on AI teams can help fix these problems. She pointed out that biases can come from the software developers themselves.

While at Microsoft, Timnit Gebru helped write a research paper called Gender Shades. This paper gave its name to a bigger project led by her co-author Joy Buolamwini. They studied facial recognition software. They found that one commercial system misidentified darker-skinned women up to 35% of the time, while it almost never misidentified lighter-skinned men.

AI Ethics at Google (2018–2020)

Timnit Gebru started working at Google in 2018. She co-led a team focused on the ethics of artificial intelligence with Margaret Mitchell. She looked at how AI affects society and how technology can be used for good.

In 2019, Timnit Gebru and other AI researchers signed a letter. They asked Amazon to stop selling its facial recognition technology to police. They said it was unfair to women and people of color. This was based on a study that showed Amazon's system had more trouble identifying darker-skinned women. In an interview, Timnit Gebru said she believes facial recognition is too risky for law enforcement and security right now.

Timnit Gebru's Departure from Google

In 2020, Timnit Gebru and five other authors wrote a paper. It was called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". This paper looked at the risks of very large language models. These risks included their environmental impact, high costs, and how hard they are to understand. It also discussed how these models might show unfairness against certain groups. The paper also mentioned that these models don't truly understand language and could be used to spread false information.

In December 2020, Timnit Gebru's job at Google ended. Google management asked her to either take back the paper or remove the names of all Google employees from it. Only one of the six authors, Emily M. Bender, did not work at Google at the time.

Timnit Gebru sent an email to an internal group. She explained that she was asked to withdraw the paper. She asked for the names of those who made the decision and advice on how to change the paper. She also said she would discuss a leaving date if this information was not given. Google did not meet her request and ended her job right away. They said they accepted her resignation. Jeff Dean, Google's head of AI research, sent an email. He said the paper did not include enough recent research on how to fix some of the problems it described.

Timnit Gebru and her supporters said that Google's actions led to her being harassed online. Many Google employees and academics signed a letter criticizing how Timnit Gebru was treated. Some members of Congress also asked Google to explain what happened.

Timnit Gebru has always said that she was fired. After the negative news coverage, Sundar Pichai, the CEO of Google's parent company, Alphabet, apologized publicly and started an investigation into the event. After the review, Jeff Dean announced that Google would change how some employees leave the company and how research papers on "sensitive" topics are reviewed. Timnit Gebru said she expected nothing more from Google, pointing out that the changes addressed the very concerns she had been raising when her job was ended.

Independent Research Since 2021

In November 2021, a group called the Nathan Cummings Foundation asked Alphabet to do a "racial equity audit." This audit would look at how Google affects "Black, Indigenous and People of Color (BIPOC) communities." The proposal also asked to check if Google punished minority employees who raised concerns about unfairness. It mentioned Timnit Gebru's departure and her work on racial biases in Google's technology.

In December 2021, Reuters reported that Google was being investigated. This was for how it treated Black women, after many complaints of unfairness and harassment. Timnit Gebru and other BIPOC employees said that when they told Human Resources about racism and sexism, they were told to take medical leave or go to therapy. Timnit Gebru and others believe her leaving was a punishment and shows that Google has unfair systems. Google said it "continues to focus on this important work."

In June 2021, Timnit Gebru announced she was raising money to start her own research center. It would be based on her work at Google's Ethical AI team and her experience with Black in AI.

On December 2, 2021, she launched the Distributed Artificial Intelligence Research Institute (DAIR). This institute plans to study how AI affects groups that are often left out, especially in Africa and for African immigrants in the United States. One of its first projects will use AI to look at satellite images of townships in South Africa. This will help understand the lasting effects of apartheid.

Timnit Gebru and Émile P. Torres coined the term TESCREAL to criticize what they see as a connected bundle of ideas about the future of technology: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. Timnit Gebru views these ideas as a right-leaning influence in Big Tech. She compares their supporters to "the eugenicists of the 20th century," saying they create harmful projects while claiming to be "benefiting humanity."

Timnit Gebru has also said that research into Artificial General Intelligence (AGI) is based on eugenics. She believes that the focus should move away from AGI. She states that trying to build AGI is not a safe practice.

Awards and Special Recognition

Timnit Gebru, Joy Buolamwini, and Inioluwa Deborah Raji won an award in 2019. It was VentureBeat's AI Innovations Award for "AI for Good." They won for their research that showed the big problem of unfairness in facial recognition AI.

Fortune named Timnit Gebru one of the world's 50 greatest leaders in 2021. The science journal Nature included her in a list of ten scientists who played important roles in science in 2021.

In 2022, Time magazine named Timnit Gebru one of the most influential people.

In 2023, the Carnegie Corporation of New York honored Timnit Gebru with a Great Immigrants Award. She received this award for her important work in ethical artificial intelligence.

In November 2023, she was named to the BBC's 100 Women list. This list includes some of the world's most inspiring and influential women.


See also

In Spanish: Timnit Gebru para niños

  • Coded Bias
  • Claire Stapleton
  • Meredith Whittaker
  • Sophie Zhang
Timnit Gebru Facts for Kids. Kiddle Encyclopedia.