Experts say parental controls are progress, but AI oversight must go further to protect kids

RICHMOND, Va. (WRIC) -- In response to rising concerns about how young people interact with artificial intelligence, OpenAI announced it will introduce new parental controls later this month.

The announcement follows reports of a teenager’s suicide allegedly linked to his use of ChatGPT.

The upcoming features will allow parents to set limits on usage and receive alerts if the system detects signs of “acute distress.”

Specialists at Virginia Tech say the move marks an important development—but warn that it's not a full solution.

“The legal responsibility of these platforms is going to be a major issue moving forward,” said Cayce Myers, professor in the School of Communication. “Parental notification and control is a step in the right direction, but ultimate oversight of these platforms is far more complicated. It involves design choices, user responsibility, and access for vulnerable populations.”

Myers added that AI tools differ from traditional forms of media because of their unpredictable, humanlike qualities.

“As chatbots become more conversational, they can foster deep connections with users,” he said. “For some, that can reduce feelings of isolation. But those same interactions can also heighten existing struggles with mental health.”

While families have long debated how to manage children’s access to television, video games and social media, experts say artificial intelligence brings a new set of unknowns.

“We don't know a lot about the protective and risk factors associated with ChatGPT or other chatbots,” said Rosanna Breaux, child psychologist and director of Virginia Tech’s Child Study Center. “But we do know that when parents are involved in monitoring media use, children tend to have better academic outcomes and stronger social skills.”

According to Breaux, keeping tabs on how often and in what ways kids interact with AI could yield similar benefits. Still, she warns that most teens face relatively little parental oversight online, and setting restrictions alone won’t necessarily solve the problem.

“Alerts based on distressing or violent content could help parents stay informed without needing to ban or limit use outright,” she said. “But those alerts should also connect families to resources, like counseling support, when concerning patterns appear.”

Breaux recommends several strategies families can use alongside parental controls:

  • Lead by example. Parents should model self-care and talk openly about emotions, including difficult topics like suicide.
  • Encourage professional help. Normalize therapy as a healthy option for managing stress or life transitions.
  • Stay alert to warning signs. Changes in behavior, mood, sleep or appetite can all signal that a child may be struggling.

Experts agree the rollout of parental controls represents a meaningful step—but say stronger safeguards, combined with family involvement and access to mental health resources, are needed to protect children in the age of AI.