Texas Responsible AI Governance Act (TRAIGA)

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), enacted as House Bill 149 (HB 149), was signed into law by Governor Greg Abbott on June 22, 2025, and took effect on January 1, 2026. 1) It makes Texas one of the first US states with comprehensive AI governance legislation, establishing foundational prohibitions and governance structures while reflecting the state's business-friendly regulatory philosophy.

Scope and Applicability

TRAIGA applies broadly to any individual or entity conducting business in Texas, offering products or services to Texas residents, or developing or deploying AI systems within the state. 2) This includes Texas-based, out-of-state, and international organizations whose AI systems are accessible to Texas users.

The law defines AI systems as any machine-based system that, for any explicit or implicit objective, infers from inputs how to generate outputs. 3)

Prohibited Practices

TRAIGA establishes four categories of prohibited AI use:

  1. Behavioral Manipulation: Developing or deploying AI systems with intent to manipulate human behavior to incite or encourage self-harm, harm to others, or criminal activity 4)
  2. Constitutional Rights Infringement: AI systems developed or deployed with the sole intent to infringe, restrict, or impair rights guaranteed under the United States Constitution
  3. Unlawful Discrimination: AI systems intended to discriminate against protected classes (race, color, national origin, sex, age, religion, disability) in violation of state or federal law 5)
  4. Prohibited Content: AI systems intended for producing or distributing certain sexually explicit content

Governmental entities are additionally barred from using AI to assign social scores or similar valuations that could result in detrimental or disproportionate treatment. 6)

Governance Structures

TRAIGA creates an AI advisory council tasked with studying AI use and issuing best-practice guidance; the council's role is advisory, with enforcement authority resting with the Texas Attorney General. 7) The law also establishes a 36-month regulatory sandbox that allows participants to test AI systems within supervised boundaries. 8)

Disclosure Requirements

Plain-language disclosure is required when individuals interact with an AI system rather than a human. 9) This transparency provision ensures consumers know when they are dealing with an AI system instead of a person.

Government Entity Obligations

Government entities face additional requirements, including AI system inventory, impact assessment, and reporting obligations that go beyond those placed on private-sector deployers. 10)

Legislative History

The initial December 2024 draft proposed a sweeping regulatory scheme modeled after the Colorado AI Act and EU AI Act, focusing on high-risk AI systems with substantial compliance requirements for developers and deployers. March 2025 amendments significantly scaled back the scope, resulting in legislation that establishes foundational prohibitions and governance structures while avoiding prescriptive compliance mandates. 11)

Comparison with Other Frameworks

Unlike the EU AI Act's detailed high-risk classification system with extensive technical requirements, TRAIGA focuses on a narrow set of intent-based prohibited practices rather than risk-tier classification. 12) Compared to the Colorado AI Act's prescriptive approach, TRAIGA takes a lighter regulatory touch while still addressing key AI safety concerns. The law's future applicability is potentially clouded by the possibility of federal preemption of state AI regulation. 13)

References