In June, Texas became the latest state to enter the AI governance conversation by enacting House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). As states continue to grapple with the potential and risks of AI, trends are emerging in how they approach regulation of AI models and systems.

For companies building, deploying or relying on AI, understanding these state trends provides an opportunity to be proactive about compliance programs, whether as a developer or a user of AI. We provide a high-level overview of trends in four key areas: Regulated Entities, Disclosures and Transparency, Documentation and Governance, and Enforcement.

With the U.S. Senate recently striking the proposed 10-year moratorium on state regulation of AI from the One Big Beautiful Bill, a federal mandate on AI is not imminent. As with privacy, the lack of a comprehensive federal AI law will force companies to navigate a patchwork of state laws with differing obligations. It is therefore essential to gear up for AI governance compliance.

Regulated Entities

A notable similarity across the state laws is the categorization of entities as developers (those who build AI systems) and deployers (those who implement or use AI systems). Most of the states exempt certain entities. For example, the Colorado AI Act (CAIA) includes a limited small-business exception for deployers meeting specific criteria. The CAIA is also narrower than the other laws because it targets only developers and deployers of high-risk AI systems, and it further excludes AI tools and systems deployed for certain anti-fraud, malware, cybersecurity and robocall-filtering purposes unless facial recognition is involved.

California Assembly Bill 2013 applies to developers of generative AI systems, including those who substantially modify such systems, while Senate Bill 942 applies to covered providers, namely developers whose AI systems have more than one million monthly visitors or users. Likewise, Texas' TRAIGA regulates any person or entity offering an AI system to Texas residents, including government entities and developers and deployers of AI systems. In contrast, the Utah AI Policy Act and Senate Bill 226 (collectively, UAIPA) target suppliers that use generative AI to interact with an individual in connection with a consumer transaction or in certain contexts, such as use by a regulated occupation (e.g., nurses, physicians, lawyers).

  • Trend Alert: Distinguishing between AI stakeholders, such as developers and deployers, matters because the distinction drives their compliance obligations and disclosure requirements. While the state laws do not expressly establish a tiered, risk-based classification, they focus on high-risk or high-impact AI systems.

Disclosures and Transparency

While all four states prioritize transparency, the nature of their transparency requirements differs. Utah is the least restrictive, requiring only a simple disclosure at the outset of an interaction that the consumer is interacting with AI. Similarly, Texas requires a clear and conspicuous notice, free of dark patterns, that an AI system is in use, even when such use may be obvious.

California and Colorado impose stronger disclosure standards. California AB 2013 requires a high-level summary of the data sets used in the development of AI, which can include details such as whether the data contains copyrighted material or personal information, whether the data was cleaned or modified, and the use of and justification for synthetic data. Under California SB 942, developers of generative AI must include a latent disclosure in AI-generated content and offer users a tool to add a manifest disclosure to that content. Colorado aims to mitigate the risk of algorithmic discrimination by requiring developers and deployers to regularly update statements on their websites summarizing the types of high-risk AI systems they offer and how they mitigate the associated risks.

  • Trend Alert: A unifying goal of the state laws is ensuring that consumers know when they are interacting with AI, and these consumer-facing disclosures must be transparent, conspicuous and explainable.

Documentation and Governance

All of these states view documentation and governance as key to responsible AI use. Texas authorizes the attorney general to issue a demand for information to a company offering an AI system if the company is under investigation or a complaint has been filed against it. Companies that operate AI systems in Texas should be ready to produce performance metrics for their AI systems; monitoring and user safeguards; guidelines; and a high-level description of the data used and the outputs generated. Further, Texas prohibits AI systems that intentionally manipulate human behavior, apply social scoring or engage in discriminatory practices.

At the other end of the spectrum, Colorado and California have higher documentation standards because of the depth of their disclosure requirements. To provide the required disclosures to California residents and to offer an AI identification tool, companies must maintain records of training data that capture how content was generated and allow AI-generated content to be identified. In Colorado, developers of high-risk AI systems must implement a risk management policy and provide a high-level summary of the training data, any known or foreseeable risks of the AI system's use and how the AI system is evaluated for algorithmic discrimination.

  • Trend Alert: Effective, thorough AI governance frameworks and documentation ensure accountability and explainability to both internal and external stakeholders (e.g., consumers and regulators).

Enforcement

Texas, California, Utah and Colorado do not provide a private right of action; however, the penalties for violations can be significant. Utah may impose administrative fines of up to $2,500 per violation, and California's penalty is $5,000 per violation. Texas imposes even steeper penalties, ranging from $10,000 to $200,000 per violation, with additional daily fines for ongoing noncompliance.

  • Trend Alert: These penalties are designed to deter companies and protect consumers from high-risk and high-impact AI systems. The recent wave of state privacy law enforcement actions demonstrates that regulators will not hesitate to take enforcement action.

Next Steps

The UAIPA is already in effect, and the California, Colorado and Texas laws take effect in early 2026. As state laws and regulations are implemented and evolve, companies must prepare for a complex and fragmented compliance landscape. Legal counsel can help navigate state-specific obligations and nuances to:

  • Assess compliance obligations,
  • Draft appropriate disclosures,
  • Develop internal and external AI policies, and
  • Provide AI risk-mitigation strategies.

If you have any questions about how state AI governance regulations may impact your business, please contact Chiara Portner or Bushra Samimi, or your regular Lathrop GPM attorney.