Google plans to roll out its Gemini artificial intelligence chatbot next week for children under 13 whose parents manage their Google accounts, as tech companies compete to attract younger users with AI products.
“The Gemini app will soon be available to your kids,” the company told the parent of an 8-year-old in an email this week. “That means your child can use Gemini” to ask questions, get help with homework and make up stories.
The chatbot will be available to children whose parents use Family Link, a Google service that lets families set up Gmail and opt into services like YouTube for their children. To sign up for a child account, parents provide the company with personal data such as their child's name and date of birth.
Gemini has specific guardrails for younger users to prevent the chatbot from producing certain unsafe content, said Karl Ryan, a Google spokesman. If a child with a Family Link account uses Gemini, he added, the company will not use that data to train its AI.
The introduction of Gemini for children could accelerate the use of chatbots among a vulnerable population, even as schools, universities, businesses and others grapple with the effectiveness and risks of generative AI technologies. Trained on huge amounts of data, these systems can generate humanlike text and realistic-looking images and videos.
Google and other AI chatbot developers are locked in fierce competition to capture younger users. President Trump recently urged schools to adopt AI tools for teaching and learning. Millions of teenagers are already using chatbots as study aids, writing coaches and virtual companions. Children's groups warn that chatbots can pose serious risks to child safety. The bots also sometimes make things up.
UNICEF, the United Nations children's agency, and other children's groups have noted that AI systems can confuse, misinform and manipulate young children, who may have difficulty understanding that chatbots are not human.
“Generative AI is generating dangerous content,” UNICEF's global research office said in a post on the risks and opportunities that AI poses for children.
Google acknowledged some of the risks in its email to families this week, warning parents that “Gemini can make mistakes” and suggesting that they “help children think critically” about chatbots.
The email also recommended that parents teach their children how to fact-check Gemini's answers. And it suggested that parents remind their children that “Gemini is not human” and that they should “not enter sensitive or personal information in Gemini.”
The email added that, despite the company's efforts to filter inappropriate material, children might still encounter content that parents would not want them to see.
Over the years, the tech giant has developed a variety of products, features and safeguards for teenagers and children. In 2015, Google introduced YouTube Kids, a standalone video app for children that is popular among families with toddlers.
Other efforts to attract children online have drawn concern from government officials and children's advocates. In 2021, Meta halted plans to introduce Instagram Kids, a version of its Instagram app for users under 13, after dozens of state attorneys general wrote to the company saying it had “historically failed to protect the welfare of children” on its platforms.
Some well-known tech companies, including Google, Amazon and Microsoft, have also paid millions of dollars in fines to resolve government complaints that they violated the Children's Online Privacy Protection Act. The federal law requires online services aimed at children to obtain a parent's permission before collecting personal information, such as home addresses and selfies, from children under 13.
Under the Gemini rollout, children with family-managed Google accounts will initially be able to access the chatbot on their own. But the company alerted parents that they could manage their child's chatbot settings, “including turning access off.”
“Your child will soon have access to the Gemini app,” an email to parents said. “We'll also let you know when your child first visits Gemini.”
Mr. Ryan, the Google spokesman, said the company's approach to providing Gemini to younger users complies with federal children's online privacy law.