TRAIGA (Texas): The AI Governance Framework Every Organisation Needs to Understand
Executive Summary
Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law on June 22, 2025, making Texas the second state to enact comprehensive AI consumer protection legislation. Taking effect January 1, 2026, TRAIGA takes a distinctly different approach from Colorado's risk-based framework, focusing on specific prohibited uses rather than comprehensive risk assessments. For organisations operating across multiple states, TRAIGA's prohibition-based model adds both clarity and complexity to the emerging patchwork of state AI regulations.
Key Provisions Breakdown
Prohibited Uses Framework
TRAIGA categorically prohibits development or deployment of AI systems designed to:
- Manipulate human behaviour to encourage self-harm, violence, or criminal activity
- Unlawfully discriminate against protected classes (with exemptions for regulated financial institutions)
- Infringe constitutional rights or engage in criminal activity
- Produce or distribute child sexual abuse material or other unlawful sexually explicit content
- Create deepfakes for unlawful purposes
Government Entity Requirements
- Mandatory disclosure when consumers interact with AI systems, even where the interaction would be obvious to a reasonable person
- Prohibition on social scoring and biometric identification without consent
- Enhanced transparency obligations for healthcare providers using AI
Privacy Law Amendments
- Updates to Texas biometric privacy law clarifying consent requirements for AI development
- Modifications to the Texas Data Privacy and Security Act (TDPSA) requiring processors to assist controllers with AI-related personal data compliance
- New exceptions for AI training and development activities
Enforcement and Safe Harbours
- Exclusive enforcement authority for the Texas Attorney General, with a 60-day cure period and no private right of action
- Civil penalties of $10,000-$12,000 per curable violation and $80,000-$200,000 per incurable violation
- Affirmative defences for substantial compliance with NIST AI RMF or equivalent frameworks
- 36-month regulatory sandbox program for innovative AI testing
Business Implications
Immediate Compliance Challenges
- Broad applicability: Covers any entity conducting business in Texas, producing products for Texas residents, or deploying AI systems in Texas
- Integration complexity: Must harmonise with existing privacy law obligations under the TDPSA
- Vendor relationships: Third-party AI tools and services must be evaluated against prohibition framework
Strategic Considerations
- State-level permanence: With the proposed federal moratorium on state AI laws defeated, state statutes like TRAIGA represent the new regulatory reality
- Multi-state complexity: Different approaches between Texas (prohibitions) and Colorado (risk assessments) require flexible frameworks
- International alignment: TRAIGA's prohibition-based approach contrasts with the EU AI Act's comprehensive risk categorisation
Operational Impacts
- Documentation requirements: Must demonstrate AI systems don't fall under prohibited uses
- Process integration: AI governance must align with existing information security and privacy programs
- Training needs: Teams must understand both technical and legal boundaries of AI deployment
Implementation Recommendations
Phase 1: Immediate Assessment (Q4 2025)
- Conduct AI system inventory across all Texas-touching operations
- Map current AI applications against TRAIGA's prohibited uses
- Review vendor contracts and third-party AI tools for compliance gaps
- Establish legal review process for AI deployments
Phase 2: Policy Integration (January 2026)
- Embed TRAIGA requirements into existing risk management frameworks
- Update vendor management procedures to include AI prohibition screening
- Implement disclosure mechanisms for government-facing AI applications
- Create incident response procedures for potential violations
Phase 3: Ongoing Governance (Q1 2026+)
- Establish quarterly AI system reviews against prohibition framework
- Monitor regulatory developments and emerging state legislation
- Develop training programs for development and deployment teams
- Create compliance reporting mechanisms for leadership oversight
Critical Success Factors
- Framework alignment: Leverage existing NIST AI RMF or ISO/IEC 42001 compliance to qualify for safe harbour protections
- Cross-functional coordination: Ensure legal, technical, and business teams understand prohibition boundaries
- Scalable processes: Design compliance procedures that work across multiple state jurisdictions
- Vendor management: Establish clear contractual requirements for AI service providers