
Google DeepMind facts for kids

Quick facts for kids
DeepMind Technologies Limited
  • Trade names: Google DeepMind; DeepMind
  • Company type: Subsidiary
  • Industry: Artificial intelligence
  • Founded: 23 September 2010 (incorporation); 15 November 2010 (official launch)
  • Founders: Demis Hassabis, Shane Legg, Mustafa Suleyman
  • Headquarters: London, England
  • Revenue: £1.53 billion (2023)
  • Operating income: £136 million (2023)
  • Net income: £113 million (2023)
  • Owner: Alphabet Inc.
  • Number of employees: c. 2,600 (2024)
  • Parent: DeepMind Holdings Limited


DeepMind, also known as Google DeepMind, is a company that researches artificial intelligence (AI). It started in the UK in 2010. Google bought DeepMind in 2014. In April 2023, DeepMind joined with Google AI's Google Brain team. Together, they became Google DeepMind. The company has its main office in London, England. It also has research centers in other countries.

DeepMind has created many smart computer programs. These programs use a special way of learning called reinforcement learning. This helps them learn by trying things and getting feedback. They have used this to teach computers to play video games and board games. In 2016, their program AlphaGo beat a world champion in the game Go. This was a very big deal! Another program, AlphaZero, learned to play Go, chess, and shogi (Japanese chess). It became super good at these games just by playing against itself. DeepMind also made programs like MuZero and AlphaStar for games. They also created AlphaGeometry for geometry and AlphaDev for discovering faster computer algorithms.

In 2020, DeepMind made a huge step forward with AlphaFold. This program helps predict how proteins fold. Proteins are tiny building blocks in living things. Knowing how they fold helps scientists understand diseases. By July 2022, DeepMind had released predictions for over 200 million protein structures. This includes almost all known proteins.

Google DeepMind is now in charge of making Gemini. Gemini is Google's family of very large language models. These are AI programs that can understand and create human-like text. They also develop other creative AI tools. These include Imagen, which makes pictures from text. Another is Veo, which creates videos from text.

History of DeepMind

DeepMind was started in November 2010. The founders were Demis Hassabis, Shane Legg, and Mustafa Suleyman. Demis Hassabis and Shane Legg first met at University College London.

Demis Hassabis said they started by teaching AI to play old video games. These games were from the 1970s and 1980s. Games like Breakout, Pong, and Space Invaders were used. The AI learned each game without knowing the rules beforehand. After some practice, the AI became an expert. The goal was to create a general AI. This means an AI that can learn and be useful for many different things.

Big investment companies helped fund DeepMind. Famous people like Elon Musk also invested. In January 2014, Google bought DeepMind. The price was between $400 million and $650 million. After this, the company was called Google DeepMind for a couple of years.

In 2014, DeepMind won the "Company of the Year" award. This was from the Cambridge Computer Laboratory.

[Images: DeepMind logos used from 2015–2016 and from 2016–2019]

After Google bought DeepMind, they set up an AI ethics board. This board helps make sure AI is developed in a good way. In October 2017, DeepMind started a new research team. This team focuses on the ethical questions of AI.

In December 2019, co-founder Mustafa Suleyman left DeepMind. He joined Google in a policy role. In March 2024, he became a leader at Microsoft's new AI unit.

In April 2023, DeepMind and Google Brain merged. They formed Google DeepMind. This was done to speed up AI work. It also gave DeepMind more freedom within Google.

DeepMind's Technologies

By 2020, DeepMind had published over a thousand research papers. Thirteen of these were in top science journals. DeepMind became especially well known after AlphaGo's success.

AI in Games

DeepMind's early AI programs were different from older ones. Older AIs, like IBM's Deep Blue, were made for one specific task. DeepMind's AIs were meant to be general learners. They used reinforcement learning. This means the AI learns by trying things and getting rewards or punishments. They started by feeding the AI raw pixels from video games.
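
Here is a tiny Python sketch of the reinforcement learning idea (tabular Q-learning on a made-up "corridor" game). It is only an illustration: DeepMind's game agents used deep neural networks reading raw pixels, which is far more complex than this.

```python
# A minimal sketch of reinforcement learning: an agent in a 5-square corridor
# learns, by trial and error, that moving right earns the reward at the end.
import random

N_STATES = 5          # positions 0..4; reaching position 4 wins
ACTIONS = [-1, +1]    # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes, otherwise pick the action with the best estimate.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print("Learned action in each position:",
      {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```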

They tested the system on old arcade games. Games like Space Invaders and Breakout were used. The same AI could play these games better than humans. It did this without changing its code for each game.

In July 2018, DeepMind trained an AI to play Quake III Arena. In 2020, DeepMind released Agent57. This AI agent can beat humans in all 57 Atari 2600 games. In July 2022, DeepMind announced DeepNash. This AI can play the board game Stratego as well as a human expert.

AlphaGo and its Successors

In October 2015, DeepMind's AlphaGo program beat a professional Go player. It was the first time a computer program had beaten a professional Go player without a handicap. Go is a very complex game. It is much harder for computers than chess. This is because there are so many possible moves.

In March 2016, AlphaGo beat Lee Sedol. He was one of the world's best Go players. AlphaGo won 4 out of 5 games. This match was even shown in a documentary film. In 2017, AlphaGo also beat Ke Jie, who was the world's top player.

Later in 2017, an improved version called AlphaGo Zero was created. It beat the original AlphaGo in every game. AlphaGo Zero learned by playing millions of games against itself. It did not need any human game data. Then, AlphaZero, a changed version of AlphaGo Zero, became super good at chess and shogi. It also learned by playing against itself.
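
The self-play idea can be shown with a very small example. The Python sketch below learns the simple game of Nim purely by playing against itself. The real AlphaGo Zero and AlphaZero combine deep neural networks with tree search, so this is only a toy illustration of the principle.

```python
# Learning a game by self-play: single-pile Nim, take 1-3 sticks, the player
# who takes the last stick wins. One value table plays both sides.
import random

q = {}  # (sticks_left, sticks_taken) -> estimated value for the mover
alpha, epsilon = 0.3, 0.2

def legal_moves(sticks):
    return [t for t in (1, 2, 3) if t <= sticks]

def choose(sticks, explore=True):
    moves = legal_moves(sticks)
    if explore and random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda t: q.get((sticks, t), 0.0))

for game in range(20000):
    sticks, history = 10, []          # record (state, move) for both players
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    result = 1.0                       # the last mover won
    for state, move in reversed(history):
        old = q.get((state, move), 0.0)
        q[(state, move)] = old + alpha * (result - old)
        result = -result               # the other player's outcome is opposite

print("Best move with 10 sticks:", choose(10, explore=False))
```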

In 2019, DeepMind released MuZero. This new model mastered Go, chess, shogi, and Atari games. It did this without any human data or knowing the rules beforehand. MuZero was also used to help compress videos better. This helps reduce data usage on sites like YouTube.

AlphaStar

In January 2019, DeepMind showed AlphaStar. This program plays the game StarCraft II. StarCraft is a complex strategy game. AlphaStar learned by watching human games. Then it played against itself to get better.

AlphaStar won many games against professional players. At first, it had an unfair advantage. It could see the whole game map. Later, this was fixed. By October 2019, AlphaStar reached the top league in StarCraft II. It was the first AI to do this in a popular esport game.

Datacenter Operations

In 2016, DeepMind helped Google save energy. They used AI to manage Google's huge computer centers. The AI learned to recommend actions to cool the centers. Human engineers would then make these changes. This saved 15% of the energy used for cooling.

Later, a more advanced system was used. The AI's actions were checked for safety. If safe, the AI would make the changes itself. This led to a 30% saving in energy. The AI even found new ways to cool that surprised human experts.
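
The "recommend, check for safety, then act" pattern described above can be sketched in a few lines of Python. The cooling model, limits, and actions below are made-up placeholders, not Google's real control system.

```python
# A toy version of a safety-checked control step for cooling equipment.
from dataclasses import dataclass

@dataclass
class Action:
    fan_speed: float      # fraction of maximum, 0.0-1.0 (placeholder)
    setpoint_c: float     # water temperature setpoint in Celsius (placeholder)

SAFE_FAN_RANGE = (0.3, 1.0)        # made-up operating limits
SAFE_SETPOINT_RANGE = (16.0, 24.0)

def ai_recommend(server_load: float) -> Action:
    # Stand-in for the learned model: more load means more cooling.
    return Action(fan_speed=0.3 + 0.6 * server_load,
                  setpoint_c=22.0 - 4.0 * server_load)

def is_safe(action: Action) -> bool:
    return (SAFE_FAN_RANGE[0] <= action.fan_speed <= SAFE_FAN_RANGE[1]
            and SAFE_SETPOINT_RANGE[0] <= action.setpoint_c <= SAFE_SETPOINT_RANGE[1])

def control_step(server_load: float) -> None:
    action = ai_recommend(server_load)
    if is_safe(action):
        print(f"Applying automatically: {action}")
    else:
        print(f"Rejected, falling back to default settings: {action}")

control_step(server_load=0.8)
```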

Protein Folding

In 2016, DeepMind started using AI for protein folding. This is a big challenge in molecular biology. Proteins are like tiny machines in our bodies. How they fold into shapes is very important.

In December 2018, DeepMind's AlphaFold program won a major competition. It predicted protein structures very accurately. In 2020, AlphaFold's predictions were as good as lab experiments. Scientists said the problem of protein folding was "largely solved."

In July 2021, DeepMind released AlphaFold to the public. This let scientists use the tool themselves. A week later, DeepMind announced AlphaFold had predicted almost all human proteins. It also predicted proteins for 20 other organisms. These predictions are available in a public database. By July 2022, over 200 million protein predictions were released.
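
Scientists can fetch these predictions from the public AlphaFold Protein Structure Database (alphafold.ebi.ac.uk). Below is a small Python sketch that downloads one predicted structure; the file-naming pattern and version number are assumptions based on the database's published download links and may change over time.

```python
# Download one predicted protein structure from the AlphaFold database.
import urllib.request

uniprot_id = "P69905"  # human hemoglobin subunit alpha, as an example
# Assumed URL pattern for the database's PDB files (version suffix may change).
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"
urllib.request.urlretrieve(url, f"{uniprot_id}_predicted.pdb")
print(f"Saved predicted structure for {uniprot_id}")
```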

The newest version, AlphaFold3, came out in May 2024. It can predict how proteins interact with DNA, RNA, and other molecules. In October 2024, Demis Hassabis and John Jumper won half of the Nobel Prize in Chemistry. They won it for their work on protein structure prediction with AlphaFold2.

Language Models

In 2016, DeepMind created WaveNet. This system turns text into speech. It sounds very natural. At first, it was too complex for everyday use. But by late 2017, it was ready for products like Google Assistant. Google later used WaveNet for its Cloud Text-to-Speech product.
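
Developers can use WaveNet voices through the Google Cloud Text-to-Speech API. Here is a short Python sketch; the voice name is just an example, and you need Google Cloud credentials for it to run.

```python
# pip install google-cloud-texttospeech  (requires Google Cloud credentials)
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
synthesis_input = texttospeech.SynthesisInput(text="Hello from WaveNet!")
# The voice name is an example; WaveNet voices follow this naming pattern.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US", name="en-US-Wavenet-D")
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config)
with open("hello.mp3", "wb") as out:
    out.write(response.audio_content)
```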

In May 2022, DeepMind released Gato. This is a very flexible AI model. It was trained on over 600 different tasks. These included describing images and having conversations. Gato performed better than humans on many tasks. It does not need to be retrained for each new task.

Sparrow is an AI chatbot from DeepMind. It helps build safer AI systems. Chinchilla is another language model they developed. In April 2022, DeepMind also showed Flamingo. This AI can describe pictures accurately with only a few examples.

AlphaCode

In 2022, DeepMind showed AlphaCode. This AI helps write computer programs. It can write code as well as an average human programmer. DeepMind tested it on coding challenges. AlphaCode earned a rank similar to many human programmers.

Gemini

Gemini is a very advanced language model. It was released on December 6, 2023. It can understand and create different types of information, like text and images. Gemini comes in different sizes: Nano, Pro, and Ultra. The chatbot that uses Gemini was previously called Bard.
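
Developers can try Gemini models through Google's API. Below is a minimal Python sketch using the google-generativeai package; the API key is a placeholder and the model name is only an example, since available model names change over time.

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
response = model.generate_content(
    "Explain how rainbows form in one sentence.")
print(response.text)
```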

On December 12, 2024, Google released Gemini 2.0 Flash. This model can also create images and audio. It is part of Google's plan to put AI into smart agents. On March 25, 2025, Google released Gemini 2.5. This model can "think" before giving an answer. All future models will have this ability. Gemini 2.5 became available to all free users on March 30, 2025.

Gemma

Gemma is a group of open-source language models. The first ones came out on February 21, 2024. They are available in two sizes. These models were trained on a huge amount of text. They use methods similar to those used for the Gemini models.
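
Because the Gemma weights are openly available, they can be loaded with common tools. The sketch below uses the Hugging Face transformers library; the model id shown is one of the released sizes, and downloading it requires accepting Google's license on the Hugging Face hub.

```python
# pip install transformers torch  (and accept the Gemma license on Hugging Face)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # example id from the public model hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```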

In June 2024, Google started releasing Gemma 2 models. In December 2024, they introduced PaliGemma 2. This is an improved model that understands both images and language. In February 2025, they launched PaliGemma 2 Mix. This version is good for many tasks.

In March 2025, Google released Gemma 3. They said it is the most powerful model that can run on a single computer graphics card. It comes in four sizes. In March 2025, Google also introduced TxGemma. This model helps make new medicines more efficiently.

In April 2025, Google introduced DolphinGemma. This research AI model aims to understand dolphin communication. They want to train it to learn dolphin sounds. It could even create new dolphin-like sounds.

SIMA

In March 2024, DeepMind introduced SIMA. This stands for Scalable Instructable Multiworld Agent. SIMA is an AI agent that can understand and follow instructions. It can complete tasks in different 3D virtual worlds. It was trained on nine video games. SIMA can adapt to new tasks without needing game code. It uses language to understand what to do.

Habermas Machine

In 2024, Google DeepMind published an experiment. They trained two large language models. These models helped find common ideas among thousands of people online. The project is named after Jürgen Habermas. In one test, people liked the AI's summaries more than a human's.

Generative AI

Video Generation

In May 2024, a video-making AI called Veo was announced. It can create high-quality videos longer than a minute. In December 2024, Google released Veo 2. It can make 4K resolution videos. It also understands physics better. In April 2025, Veo 2 became available for advanced users.

In May 2025, Google released Veo 3. This version not only makes videos but also adds sound. It creates dialogue, sound effects, and background noise to match the video. Google also announced Flow, a video tool powered by Veo and Imagen.

Music Generation

Google DeepMind developed Lyria. This AI model creates music from text. As of April 2025, it is available for testing.

Environment Generation

In March 2024, DeepMind introduced "Genie." This AI model can create game-like virtual worlds. It uses text descriptions, images, or sketches. Genie allows you to interact with the world frame by frame. Its next version, Genie 2, came out in December 2024. It can create even more diverse 3D environments.

Robotics

Released in June 2023, RoboCat is an AI model. It can control robotic arms. The model can learn to work with new types of robotic arms. It can also learn new tasks. In March 2025, DeepMind launched Gemini Robotics. These AI models help robots interact better with the real world.

Other Contributions

Football (Soccer)

DeepMind researchers have used AI for football. They model how players behave. This includes goalkeepers, defenders, and strikers. They look at different situations, like penalty kicks.

AI models could also help the football industry. They could automatically pick interesting video clips. This would create highlights of games. This is possible because AI can analyze videos. It can also use data from player movements and game strategies.

Archaeology

Google has a new program called Ithaca. It is named after a Greek island. This AI helps researchers restore old Greek documents. It can fill in missing text. It also helps find the date and origin of the documents. Ithaca is 62% accurate at restoring damaged texts. It is 71% accurate at finding locations. It can also date documents within 30 years.

The team is working to use this model for other ancient languages. These include Demotic, Akkadian, Hebrew, and Mayan.

Materials Science

In November 2023, Google DeepMind announced GNoME. This tool suggests millions of new materials. These materials were not known to chemistry before. It found hundreds of thousands of stable crystal structures. Some of these have been made in labs.

Mathematics

AlphaTensor

In October 2022, DeepMind released AlphaTensor. This AI uses reinforcement learning. It is similar to the methods used in AlphaGo. AlphaTensor finds new ways to do matrix multiplication. This is a basic math operation. For multiplying two 4x4 matrices, AlphaTensor found a method with fewer steps. This was better than methods known since 1969.
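
The kind of shortcut AlphaTensor searches for can be seen in Strassen's classic 1969 trick: multiplying two 2x2 matrices normally takes 8 multiplications, but Strassen's method needs only 7. The Python sketch below shows that method; AlphaTensor looked for similar savings for larger matrix sizes.

```python
# Strassen's method: multiply two 2x2 matrices with 7 products instead of 8.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]], same as the usual method
```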

AlphaGeometry

AlphaGeometry is an AI that solves geometry problems. It uses a mix of AI and traditional math rules. It solved 25 out of 30 geometry problems from a big math competition. This is as good as a gold medalist.

Traditional geometry programs use only human-made rules. AlphaGeometry combines these rules with a special language model. This model helps when the rules alone cannot find a solution. It suggests new ways to approach the problem.

AlphaProof

AlphaProof is an AI model that combines a language model with the AlphaZero learning method. AlphaZero taught itself to master games. The language model helps translate math problems into a formal language. This creates many math problems of different difficulty. At the 2024 International Mathematical Olympiad, AlphaProof and an adapted AlphaGeometry reached the level of a silver medalist.

AlphaDev

In June 2023, DeepMind announced AlphaDev. This AI searches for better computer algorithms. It uses reinforcement learning. AlphaDev found a faster way to sort information. It also found a faster way to organize data (hashing). The new sorting method was much faster for short lists. It was also faster for very long lists.
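
AlphaDev's sorting improvements targeted very short, fixed-length sequences and were written directly as CPU instructions. The Python sketch below only illustrates the underlying idea of a fixed "sorting network" for exactly three values; it is not AlphaDev's actual routine.

```python
# A sorting network for exactly three values, built from compare-and-swap steps.
def sort3(a, b, c):
    # Three compare-and-swap operations always suffice for three values.
    if a > b:
        a, b = b, a
    if b > c:
        b, c = c, b
    if a > b:
        a, b = b, a
    return a, b, c

print(sort3(3, 1, 2))  # (1, 2, 3)
```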

The new sorting method was added to the C++ Standard Library. This was the first change to these sorting methods in over ten years. It was also the first time an AI found such an improvement. Google thinks these two algorithms are used trillions of times every day.

AlphaEvolve

In May 2025, Google DeepMind showed AlphaEvolve. This AI uses language models like Gemini to design better algorithms. AlphaEvolve starts with an algorithm and ways to measure how good it is. Then, it uses the AI to create new versions of the algorithm. It picks the best ones to keep improving.
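
The improve-and-select loop can be sketched with a toy example. In the Python below, random mutation stands in for "ask Gemini for a new version of the code", and the thing being evolved is just a list of numbers rather than a real algorithm.

```python
# A toy evolutionary loop: propose a variant, score it, keep it if it is better.
import random

def score(candidate, target):
    # Higher is better: negative total distance to the target values.
    return -sum(abs(c - t) for c, t in zip(candidate, target))

def mutate(candidate):
    # Stand-in for "ask the language model for a new version".
    new = list(candidate)
    i = random.randrange(len(new))
    new[i] += random.choice([-1, 1])
    return new

target = [4, 8, 15, 16]
best = [0, 0, 0, 0]
for generation in range(2000):
    child = mutate(best)
    if score(child, target) > score(best, target):
        best = child   # keep the better version and improve from there

print(best)  # moves toward the target values
```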

AlphaEvolve has found several new algorithms. This includes improvements in matrix multiplication. Google says AlphaEvolve matched top algorithms in most cases. It also found better solutions 20% of the time. For example, it found a new way to schedule tasks in data centers. This saved Google's computer resources.

Chip Design

AlphaChip is an AI that helps design computer chips. It uses reinforcement learning. DeepMind said it reduced the time to create chip layouts from weeks to hours. Its chip designs have been used in Google's special AI chips since 2020.

Safety

Google Research published a paper in 2016 about AI safety. It discussed how to avoid bad behavior when AI learns. In 2017, DeepMind released GridWorld. This is a test program to see if an AI learns to turn off its own safety features.

Other Contributions to Google

DeepMind helps Google Play recommend apps to users. DeepMind also worked with the Android team at Google. They created two new features for Android Pie phones. These are Adaptive Battery and Adaptive Brightness. They use AI to save battery power and make the screen easier to see. This was the first time DeepMind used AI on such a small scale.

DeepMind Health

In July 2016, DeepMind started working with Moorfields Eye Hospital. They wanted to use AI for healthcare. DeepMind analyzed eye scans to find early signs of diseases that cause blindness.

In August 2016, they started a project with University College London Hospital. The goal was to create an AI that could tell the difference between healthy and cancerous tissues.

They also worked with other hospitals to make new mobile apps. These apps help doctors manage patient records. Hospital staff said the app saved a lot of time. It made a big difference in treating patients with kidney problems. The app sends test results to doctors' phones. It alerts them to changes in a patient's condition.

In November 2017, DeepMind partnered with Cancer Research UK. They aimed to improve breast cancer detection. They used AI on mammography images. In February 2018, DeepMind also worked with the U.S. Department of Veterans Affairs. They tried to use AI to predict kidney injury in patients. They also aimed to predict when a patient's health might get worse.

DeepMind developed an app called Streams. It sends alerts to doctors about patients at risk of kidney injury. In November 2018, DeepMind announced that its health division and the Streams app would join Google Health. DeepMind said patient data would still be kept separate from other Google services.

Data Privacy in Healthcare

In 2016, a data-sharing agreement between DeepMind and a hospital trust was reviewed. This agreement allowed DeepMind Health to access patient information. This included details about their health conditions. The goal was to research better health outcomes.

A complaint was made about this. It said that patient data should be made anonymous. In 2017, an investigation found that the hospital did not follow data protection rules. Patients were not fully told that their data would be used. DeepMind said they needed to do better. They started new efforts for transparency and public involvement.

DeepMind Ethics and Society

In October 2017, DeepMind started a new research group. It is called DeepMind Ethics & Society. Their goal is to understand the ethical questions of AI. They fund research on topics like privacy, fairness, and how AI affects jobs. They also look at how AI can help solve world problems. This group wants to make sure AI is developed in a way that benefits everyone.

See also

In Spanish: Google DeepMind para niños

  • Anthropic
  • Cohere
  • Glossary of artificial intelligence
  • Imagen
  • Model Context Protocol
  • Robot Constitution