The rise of AI has sparked a wide range of reactions, from excitement to deep concern about its capabilities.
This kind of response to an emerging communication technology isn't new. Still, artificial intelligence introduces distinct challenges that demand careful, empathetic consideration.
Banning certain practices through heavy-handed Congressional legislation that undermines long-standing American freedoms won't solve the problem. The Senate Judiciary Committee recently approved S. 3062, the GUARD Act, a bill that illustrates the risks of Congress's urge to act swiftly, particularly where free speech is concerned.
The proposed restrictions run afoul of the First Amendment by regulating developers' editorial choices and restricting individuals' rights to create and access lawful expression.
The legislation specifically targets AI chatbots, particularly those marketed as “AI companions,” by imposing access restrictions, design mandates, and disclosure requirements, with penalties of up to $100,000 per violation.
If this bill becomes law, federal authorities would effectively dictate how this technology is developed and used, restricting access to vital information and persuasive dialogue in the process.
There’s an increasing push to replace the patchwork of state regulations with a federal approach, reflecting a political desire for tangible legislative action. A unified national standard appeals to businesses seeking consistency across jurisdictions. But consistency does not equate to constitutional validity.
If proposals like the GUARD Act simply echo free speech constraints found in existing state laws, they risk embedding those same problems within federal legislation.
For instance, age verification procedures in the GUARD Act necessitate that individuals create accounts and validate their ages. Existing accounts would be suspended until verification occurs, and companies would need to periodically confirm user ages.
Such requirements strip individuals of anonymity when seeking information, a right the Supreme Court has repeatedly recognized as crucial to free expression.
When compelled to disclose their identities, many might hesitate to pose sensitive questions. For instance, would a person in an abusive situation be more willing to seek chatbot assistance if it meant sacrificing their privacy? Or think about an employee who faces workplace harassment—would they feel comfortable reaching out for help?
There’s a reason the Federalist Papers were penned under aliases. Even in public discussions, distancing oneself from the speaker can be vital. This safeguard is just as necessary today to enable people to inquire about sensitive topics without fearing mandatory identification.
The bill also imposes content-based rules, forbidding the creation, deployment, or provision of chatbots that the government deems to “promote” certain types of constitutionally protected speech.
Who decides what falls under these prohibitions? Such rules constrain editorial judgment and stifle individuals' rights to engage with lawful expression.
Measures like the GUARD Act would effectively dictate how chatbots may interact, jeopardizing editorial independence by letting Congress define permissible speech: who speaks, what they say, and how their messages are conveyed.
These decisions shape the broader discourse and threaten to narrow the range of voices to a singular government-sanctioned viewpoint.
Additional disclaimer requirements could also venture into unconstitutional compelled speech. The GUARD Act forces chatbots to deliver federally mandated messages in every interaction. Applied uniformly, this alters the content and dynamics of the conversation, shaping the information users receive.
All of this underscores a basic fact: developers cannot perfectly predict or control every output an AI system produces. That variability isn't a flaw; it's intrinsic to how these models function, generating responses from probabilistic patterns.
Chatbots, in particular, have recently become favorite targets for political criticism in Washington. Accusations of manipulation and risk have prompted a flurry of legislative proposals aimed at controlling this burgeoning technology. It's not just the GUARD Act; other proposals are emerging as well.
The same rush to legislate is evident across the states: Minnesota, Florida, and Washington have all proposed measures targeting chatbots through access restrictions and disclosure mandates.
The Constitution does not permit the government to tackle concerns surrounding AI by broadly curtailing protected speech. The First Amendment dictates that solutions should address illegal activities without encumbering the flow of ideas.