AI Video Flood on YouTube: Experts Warn of Dangerous, Misleading Messages for Toddlers

Child-development specialists are warning about a growing wave of AI-generated videos that appear on platforms such as YouTube disguised as “educational” content but may contain safety risks and harmful cues for toddlers. One example described is an animated song about riding in a car that has been viewed nearly 50,000 times since it was posted five months ago. The clip communicates inconsistent or incorrect traffic rules, while the characters visibly change from frame to frame. Scenes show children riding unbuckled in the front seat of a moving car, a child “floating” alongside a moving vehicle, another apparently sitting on the hood, and children walking in the middle of the road with cars driving behind them.

Researchers such as Kathy Hirsh-Pasek of Temple University and Dr. Dana Suskind of the University of Chicago see such content as an early sign of a rapidly escalating problem and are calling for tighter limits on AI products aimed at babies and children. Suskind describes the material as non-neutral and refers to “AI misinformation for toddlers at an industrial scale.” While low-quality AI content consumption among adults is sometimes linked to “brainrot,” Suskind argues that for children the effect is different: because the brain is still being built, false or contradictory stimuli could blunt development and steer neural wiring in problematic directions.

The extent of this type of content is difficult to quantify, but the trend is described as visible and worsening. A November 2025 analysis by the video-editing company Kapwing is cited as finding that about 21% of YouTube’s feed consists of low-quality, AI-generated videos. As an illustration of production speed, the channel “Jo Jo Funland,” which posted “Vroom Vroom! Car Ride Song,” is described as having uploaded more than 10,000 videos in seven months since launching in August 2025—around 50 new videos per day on average. By comparison, “Sesame Street” is said to have published about 3,900 YouTube videos over roughly 20 years on the platform.

Producer and AI educator Carla Engelbrecht, who says she has created digital experiences for brands such as Sesame Street, PBS Kids, and Highlights for Children, reports finding numerous AI videos that could pose direct physical harm. Examples include clips of a frightened child being chased by a T. rex, depictions of a baby eating what appears to be a bloody apple, swallowing whole grapes, consuming honey, and a teacher eating raw elderberries. Engelbrecht also points to a second category of videos that claim to support learning but, due to errors, teach incorrect associations and misinformation. Examples include a vowel video showing consonants, a purported lesson on the 50 U.S. states with distorted names such as “Ribio Island,” “Conmecticut,” “Oklolodia,” and “Louggisslia,” and a continents video repeatedly displaying a compass with incorrect markings.

Another concern described is that misinformation can appear only midway through a video and may go unnoticed if parents preview only the first seconds. Engelbrecht says mixed signals can slow learning because children need longer to form cause-and-effect associations, which can push back later developmental steps.

Responsibility is portrayed as shared among platforms such as YouTube, companies behind large language models such as OpenAI, Google, and Anthropic, the channels publishing the videos, and parents. YouTube’s policy requires creators to disclose AI-generated or AI-altered content when it “seems realistic,” but this does not generally apply to cartoons and animation because such content has long been treated as fictional. A YouTube spokesperson, Boot Bullwinkle, points to stricter quality principles for children’s content and a child-safety policy, though the publicly available pages are described as not specifically addressing AI. Because of the platform’s scale, YouTube is said not to catch every policy-violating video; in response to a separate investigation, action was reportedly taken against at least seven channels, two of which were terminated.

YouTube Kids is described as a separate, curated version intended for children from birth to age 12 and is promoted with parental controls, while many families continue to use the main YouTube platform, where creators can still reach audiences and earn revenue. The AI videos reviewed in this account are described as not having been found on YouTube Kids, though more recent reporting by The New York Times is said to have found AI videos there as well.

Kid-focused alternatives such as Sensical by Common Sense Media and Meevee are mentioned as existing but struggling to gain traction against YouTube’s dominance. Clearer labeling of AI content is proposed, similar to “content credentials” that LinkedIn is rolling out to indicate whether media was created or edited by AI in part or in whole. Engelbrecht supports labeling for AI literacy but warns that broad labels could also penalize creators who use AI carefully; she says she is developing a tool to detect low-quality AI children’s videos on YouTube.

Some of the content is described as originating overseas, but much of it is said to be produced domestically by Americans using phones or computers to make quick money. AI is described as being used across the full production pipeline—generating themes and scripts, creating the video, and automating frequent uploads to “faceless” channels with anonymous creators. A video posted a little over a year ago by a popular creator is also mentioned as promoting AI-generated animated children’s videos as a fast opportunity, and it is described as having been viewed more than 335,000 times.

Producer Sierra Boone of Boone Productions, which makes original content for children ages 2 to 6 and produces “The Nap Time Show,” describes the flood as evidence of limited concern for children’s development when publishing is primarily financially motivated. Boone describes a labor-intensive production process involving research, scripting, table reads, filming, editing, publishing, and promotion, and calls for greater accountability for uploaders. In the absence of stronger regulation or moderation, the practical burden is described as falling on parents, who are urged to monitor content more closely despite being overstretched.

Source: Principia Scientific