A new bipartisan bill introduced by Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) aims to bar minors under 18 from engaging with certain AI chatbots. The legislation responds to growing concern about children interacting with “AI companions” and the risks that come with them.
What’s in the GUARD Act?
Some key aspects of the proposed legislation include:
- AI companies must implement a reliable age verification process, using methods like government IDs rather than just asking for a birthdate.
- If a user is found to be under 18, these companies must deny access to “AI companions.”
- In every interaction, chatbots must clearly disclose that they are not human and hold no professional qualifications in therapy, medicine, or law.
- Companies face criminal and civil penalties if their chatbots expose minors to harmful content, such as sexual material or encouragement of self-harm. (A minimal sketch of how these requirements might translate into code appears after this list.)
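To make the requirements concrete, here is a minimal sketch of how a provider’s access gate might look, assuming a document-based verification step and a deny-by-default policy. Every name in it (the User record, verify_government_id, AGE_OF_MAJORITY) is hypothetical; the bill mandates outcomes, not any particular implementation.

```python
from dataclasses import dataclass
from typing import Optional

AGE_OF_MAJORITY = 18  # assumed constant; the bill draws the line at 18

@dataclass
class User:
    name: str
    id_verified_age: Optional[int]  # age confirmed by a document check, not self-report

def verify_government_id(user: User) -> Optional[int]:
    """Stand-in for a document-based check (e.g., an ID-scan vendor).
    A self-reported birthdate alone would not satisfy the bill."""
    return user.id_verified_age

def may_access_companion(user: User) -> bool:
    """Gate 'AI companion' features on a verified age, denying by default."""
    verified_age = verify_government_id(user)
    if verified_age is None:
        return False  # unverified accounts are treated as minors
    return verified_age >= AGE_OF_MAJORITY

# An unverified or under-18 account is refused companion access.
print(may_access_companion(User("alex", id_verified_age=16)))   # False
print(may_access_companion(User("sam", id_verified_age=None)))  # False
print(may_access_companion(User("rae", id_verified_age=34)))    # True
```

The deny-by-default choice is deliberate: treating an unverified account as a minor is the conservative reading of an age-verification mandate, since access is refused until a reliable check succeeds.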
The push behind this move comes from an upsurge in lawsuits and testimony from parents and child-welfare advocates who say some chatbots have manipulated young users, in some cases encouraging self-harm. If enacted, the GUARD Act’s framework would significantly affect tech companies and families alike.
Why is this significant?
This legislation goes beyond technical regulation; it enters a broader debate about how deeply AI should be integrated into children’s lives.
AI’s rise and safety issues
AI chatbots are now commonplace, with reports suggesting over 70% of American children interact with them. They offer responses that mimic human emotions, which could confuse young users about the difference between machine and human interaction. This may lead kids to seek emotional support from algorithms rather than their parents or peers.
Legal, ethical, and technical implications
If enacted, the GUARD Act could change how the AI industry approaches minors, age checks, transparency, and accountability. It signals Congress’s readiness to shift from voluntary self-regulation to more stringent rules, especially when children are involved. Such a move might also inspire similar laws in other sensitive areas, like mental health and education, marking a transition to a proactive approach in safeguarding young users.
Industry perspectives
Some tech companies worry that these regulations could hinder innovation, limit beneficial uses of conversational AI, or impose tough compliance challenges. This tension reflects the ongoing debate about balancing safety with innovation.
Requirements for AI companies under the GUARD Act
Should the GUARD Act pass, it would set strict standards for how AI companies design chatbots, verify users, and manage access, particularly where minors are involved. Key responsibilities would include:
- Implementing age verification using reliable methods rather than just birth dates.
- Providing clear notice that chatbots are artificial entities, along with disclaimers about their lack of professional qualifications (a sketch of such a disclosure follows this list).
- Blocking access to AI companion features for users under 18.
- Establishing penalties for companies that breach these rules by allowing minors to engage in harmful conversations.
- Defining “AI companion” as a system meant to facilitate emotional or interpersonal connections.
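Paired with the access gate sketched earlier, the disclosure and harmful-content duties could, in the simplest reading, reduce to a wrapper applied to every reply. The notice wording and category labels below are illustrative assumptions, not language from the bill.

```python
# A hypothetical wrapper for the disclosure and content-screening duties;
# the notice text and category names are illustrative, not statutory.
DISCLOSURE = (
    "Reminder: I am an AI program, not a human, and I hold no professional "
    "qualifications in therapy, medicine, or law."
)

# Categories the bill would penalize serving to minors (assumed labels).
BLOCKED_FOR_MINORS = frozenset({"sexual_content", "self_harm"})

def respond(reply_text: str, content_tags: set, user_is_minor: bool) -> str:
    """Attach the required notice to every reply; withhold replies whose
    tags fall in a blocked category when the account belongs to a minor."""
    if user_is_minor and content_tags & BLOCKED_FOR_MINORS:
        return DISCLOSURE + "\n[Reply withheld: flagged content category.]"
    return DISCLOSURE + "\n" + reply_text

# The notice rides along on every interaction, compliant or not.
print(respond("Here is help with your homework.", set(), user_is_minor=True))
print(respond("...", {"self_harm"}, user_is_minor=True))
```

Note that the disclosure is attached to every interaction, whether or not the reply itself is withheld, which is what a per-interaction disclosure requirement appears to demand.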
Staying safe in the meantime
As laws often lag behind technological advances, families and schools need to be proactive in protecting young users. Here are some steps to take:
1) Know which bots your kids interact with
Find out which chatbots your children are using and what those bots are for. Some are purely for fun or education, while others offer emotional support. Knowing the difference makes it easier to notice when an interaction starts crossing personal boundaries.
2) Create interaction guidelines
Even if a chatbot seems safe, set clear usage rules together. Encourage kids to talk about what they do with the bot and why they enjoy it; that builds trust and keeps the lines of communication open.
3) Utilize parental controls
Make use of available safety features, like parental controls and child modes, to reduce exposure to harmful content. Small adjustments can lead to significantly safer online interactions.
4) Remind children that bots aren’t human
Help kids understand that, while chatbots can imitate empathy, they do not possess feelings. Stress that real-life guidance about safety and emotions should always come from trusted adults.
5) Watch for warning signs
Monitor behavior changes that could indicate issues, such as excessive chatting with bots or withdrawal from social circles. Early intervention is crucial if concerning patterns emerge.
6) Stay aware of evolving regulations
Keeping track of developments like the GUARD Act and other state initiatives is vital. This way, you’ll know what protective measures are available and the right questions to pose to app developers or educational institutions.
Conclusion
The GUARD Act marks a significant attempt to manage the engagement of minors with AI chatbots, reflecting serious concerns about the potential dangers posed to young users. While regulation alone won’t solve every problem, it demonstrates a growing awareness that technology should evolve alongside laws and societal norms. Educating oneself, setting boundaries, and treating chatbot interactions with caution may help in navigating this constantly shifting landscape.