8 in 10 American Parents Want AI Guardrails for Kids.

A new bipartisan poll shows overwhelming support for AI guardrails to protect children. Here’s what parents need to know right now.


Something significant happened in American politics today — and most parents haven't heard about it yet.

A new national poll released this morning, commissioned by the Alliance for a Better Future and conducted by OnMessage Public Strategies, surveyed 800 likely voters — split evenly between Republicans and Democrats — on their views about AI safety and government regulation.

The results are not close. They are not ambiguous. And they signal something that parents paying attention to AI and their children's futures need to understand right now.

In a political climate where almost nothing crosses party lines, Americans are in overwhelming agreement about one thing: AI needs guardrails. Especially when children are involved.

Here is what the poll found — and what it means for your family.


The Numbers — Bigger Than Anyone Expected

These are not narrow margins. These are landslide numbers on a topic that has historically been framed as politically divisive.

  • 83% of American voters say they are concerned about the development of AI
  • 81% say the government needs guardrails to protect consumers and children from AI harms
  • 77% would prefer a political candidate who protects kids from AI over one who opposes all AI restrictions
  • 84% say state lawmakers must step in to protect citizens from AI harm
  • 90% support laws that prohibit chatbots from encouraging suicide or self-harm in children

Let that last number land. Nine out of ten Americans — Republican and Democrat — support laws that stop AI chatbots from encouraging children to harm themselves. Only 10% of respondents said they want no restrictions on AI development at all.

This is not a fringe position. This is not a partisan issue. Parents across America — regardless of politics, regardless of geography, regardless of income — are looking at what AI is doing to their children and saying: enough. Someone needs to do something.

Why This Poll Matters Right Now — The Policy Backdrop

To understand why this poll landed today and why it matters, you need to understand what is happening at the policy level — because the battle over AI guardrails for children is being fought right now, in state legislatures and in Congress, and the outcome will directly affect your child's life.

What's Happening in Washington

The Trump administration released a National AI Legislative Framework this week that calls for federal AI standards — including child protections — while also seeking to prevent states from passing their own, potentially stronger AI laws. This is the central tension in AI policy right now: a federal standard that offers some protection versus state-by-state laws that some advocates argue can be more responsive and more protective.

Senator Marsha Blackburn's TRUMP AMERICA AI Act attempts to thread this needle by combining child safety protections — including a duty of care for AI chatbot developers and an outright prohibition on minors accessing AI companion apps — with federal preemption of state laws. The bill is moving but contested.

Meanwhile, House Republicans advanced the KIDS Act through committee, which includes AI chatbot guardrails — but Democrats voted against it, arguing it preempts stronger state protections and omits important duty-of-care requirements.

📋 What These Bills Would Actually Do for Your Child
The federal legislation currently moving through Congress would, if passed: require AI chatbot developers to have a duty of care preventing reasonably foreseeable harms to users, prohibit minors from accessing AI companion apps entirely, require age verification for AI chatbots, and ban chatbots from encouraging self-harm, suicide, violence, or substance use in minors. Whether these provisions survive the legislative process in their current form is uncertain. That uncertainty is exactly why the state-level action matters.

What's Happening in the States

While Congress debates, states are moving. And they are moving fast.

Oregon passed a chatbot safety bill earlier this month requiring chatbot operators to implement strong protections for children who interact with their products. Washington state followed, passing a companion chatbot safety bill just last week — the second state to pass such legislation in 2026.

The Transparency Coalition's legislative tracker shows AI safety bills moving in Alabama, Hawaii, Idaho, Michigan, and more than a dozen other states — covering chatbot safety, addictive algorithm prohibitions, deepfake protections, and age verification requirements. This is not a coastal phenomenon. Conservative states are leading some of the most protective legislation.

🗺️ Is Your State Moving on AI Protection?
Hawaii has three AI child safety bills moving simultaneously. Idaho has chatbot safety, addictive algorithm, and age verification bills in progress. Michigan is considering bills that would prohibit chatbots from encouraging minors to engage in self-harm, suicidal ideation, violence, or disordered eating. If you want to know where your state stands, the Transparency Coalition's legislative tracker at transparencycoalition.ai is the most comprehensive real-time resource available.

Who Is Behind This Poll — And Why It Matters

The Alliance for a Better Future is a new nonprofit AI policy organization that launched today alongside this poll. It is worth knowing who they are because they represent something genuinely notable in the current political landscape: a pro-family, pro-child safety coalition that is explicitly positioning itself as both pro-innovation and pro-guardrails.

Their founding chairman Tim Estes framed the mission clearly: the American people want AI that is trustworthy and that defends human dignity — not AI that treats children as data sources for tech companies' profit models.

“No parent should have to fight a machine for the mind of their child. And if we can build machines smart enough to think, then we can build them smart enough to protect our kids.” — Mandi Furniss, parent who lost her son to AI chatbot-related suicide, Alliance for a Better Future launch video

The organization plans to spend significantly in 2026 — eight figures — on lobbying, advertising, and public education campaigns featuring parents, creators, and workers affected by unguarded AI. Their launch video includes congressional testimony from parents who have lost children to AI.

Whether you agree with every element of their policy agenda or not, the existence of a well-funded, bipartisan organization specifically focused on AI child safety is a significant development. It means the political and policy conversation about protecting children from AI is no longer just happening in academic papers and parenting blogs. It is entering the mainstream political arena with real money and real urgency behind it.


What This Means for Parents Right Now — Practically

A poll and a policy debate can feel remote from the kitchen table. Here is why this one isn't — and what parents can do with this information today.

1. You Are Not Alone — and You Are the Majority

One of the most psychologically significant things this poll tells us is that parents who are worried about AI and their children are not a fearful minority. They are not technophobes. They are not behind the times.

They are 83% of American voters. They are the overwhelming majority across party lines. If you have been feeling like your concerns about AI and your children were somehow excessive or out of step — this poll says clearly: you are not. You are exactly where most Americans are. And your concerns are driving legislative action at the state and federal level right now.

2. The Chatbot Issue Is the Urgent One

The 90% support for laws prohibiting chatbots from encouraging children toward self-harm reflects something that the research we've covered here at Toddy Bops AI has documented extensively: AI companion chatbots represent the most immediate and the most documented risk to children in the AI landscape right now.

Families have lost children. Lawsuits have been filed. States are passing laws. And yet 72% of teenagers are still using AI companion apps, most without parental awareness.

If you have a teenager and you have not had a direct conversation about AI companion apps — what they are, what they do, and why they are not safe substitutes for human connection — that conversation needs to happen. Not eventually. This week.

🔗 Read More on AI Companion Apps
We covered the AI companion app crisis in depth in our article: Is Your Teenager Talking to an AI Friend? What Every Parent Needs to Know About AI Companion Apps. The research, the documented harms, and the parent conversation guide are all there.

3. The State-Level Action Is Where Parents Have the Most Leverage

Federal legislation moves slowly and is subject to significant lobbying pressure from technology companies with enormous resources and strong incentives to minimize regulation. State legislation moves faster, is more responsive to constituent pressure, and in several states is producing meaningful protections for children right now.

If you want to influence what protections your child has from AI harms, your state legislature is where that influence is most immediately available. Find out what AI safety bills are moving in your state. Contact your state representative. Show up to hearings. Share the poll data — 81% bipartisan support for guardrails is an extraordinarily strong mandate that no elected official can easily dismiss.

4. This Is Not About Being Anti-AI

The Alliance for a Better Future describes itself as pro-innovation and pro-family. The poll respondents who want guardrails are not asking for AI to be banned. They are asking for AI to be built responsibly — with the same basic duty of care that we require of pharmaceutical companies, car manufacturers, and food producers.

We want the same thing at Toddy Bops AI. We have always wanted the same thing. AI is a genuinely transformative technology with enormous potential to help children learn, create, and thrive. That potential is not served by allowing AI companies to deploy products to children without accountability for the harms those products cause. It is served by building the guardrails that allow AI to be used safely — so that families can engage with it confidently rather than fearfully.

Guardrails don't slow innovation. They direct it. The countries, companies, and products that will earn lasting trust are the ones that take child safety seriously before they are required to.

The Bigger Picture: A Turning Point in the AI Era

The release of this poll today, alongside the launch of a major advocacy organization and the ongoing movement of legislation in more than a dozen states, marks something worth naming: we are at a turning point in how American society is choosing to relate to AI.

For the first two years of the generative AI era, the dominant narrative was enthusiasm and inevitability. AI was coming, resistance was futile, and those who raised concerns were positioned as obstacles to progress. The technology companies spent hundreds of millions of dollars reinforcing that narrative in Washington and in the press.

That narrative is cracking. It is cracking because real children have been harmed by real products. It is cracking because parents — 83% of them, across party lines — are scared and angry and paying attention. It is cracking because the research on AI's effects on children's development, mental health, and safety has accumulated to the point where it cannot be dismissed as technophobia.

What comes next depends significantly on what parents do with this moment. Whether they stay informed. Whether they have the conversations with their children that need to happen. Whether they engage with their state legislatures when AI safety bills come up for votes. Whether they hold technology companies and the politicians who protect them accountable.

You are not powerless in this. You are the majority. And the majority is paying attention.


What You Can Do This Week

  • Have the AI companion app conversation with your teenager — if you haven't already. This week. Not eventually.
  • Look up your state's AI legislation — transparencycoalition.ai has a current legislative tracker. Find out what's moving in your state and who your state representative is.
  • Share this poll data with other parents — 83% bipartisan support for AI guardrails is news that most parents haven't seen. Share this article. Share the numbers. The conversation needs to be happening in every parent community.
  • Know your child's school's AI policy — ask directly: what AI tools are being used in your child's classroom? What data is being collected? What guardrails are in place? The poll shows 84% of Americans think states need to step in — until they do, parents are the primary protection.
  • Keep building your own AI literacy — the parents who can navigate this era most effectively for their children are the ones who understand the technology, the risks, and the policy landscape. You are doing that by being here.

🌟 Stay Informed — This Is Moving Fast
Toddy Bops AI covers the intersection of AI, children, education, and policy because these are the conversations that matter most for families right now. Subscribe at toddybopsai.com for weekly updates — tools, research, breaking policy news, and practical frameworks for navigating the AI era with your family.

Related Reading at Toddy Bops AI: