
Introduction and Scope
On June 22, 2025, Texas Governor Greg Abbott signed House Bill 149 into law, enacting the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”). This makes Texas one of the first U.S. states (after Colorado and Utah) to adopt a broad AI governance statute[^1]. TRAIGA establishes a statewide framework for the development, deployment, and use of AI systems, but its approach is not a generic risk-based regime. Instead, the law targets a set of clearly defined prohibited AI practices and imposes certain duties, while otherwise allowing room for innovation. The Act applies expansively to any person or business that “promotes, advertises, or conducts business” in Texas, offers products or services to Texas residents, or develops or deploys an AI system in Texas. In effect, even companies based outside Texas are covered if their AI systems are available to Texas users. Notably, TRAIGA preempts local regulations – cities and counties cannot impose their own AI rules. The law will take effect on January 1, 2026, giving organizations roughly six months to comply. A federal proposal that would have curtailed state AI laws was ultimately dropped from the 2025 budget reconciliation bill, clearing the way for TRAIGA to take effect as planned[^2].
Key Prohibited AI Practices
TRAIGA’s core provisions outlaw several high-risk or harmful uses of AI. In essence, developers and deployers of AI systems (as defined in the Act) are prohibited from engaging in the following practices in Texas:
- Behavioral Manipulation: It is unlawful to develop or use an AI system with the intent to manipulate human behavior in ways that incite self-harm, violence, or criminal acts. In other words, AI should not encourage someone to harm themselves or others or commit crimes (e.g. an AI chatbot deliberately urging violent behavior).
- “Social Scoring” by Government: Government entities in Texas may not use AI for “social scoring” – i.e. classifying or scoring individuals based on their social behavior or characteristics in a manner that leads to detrimental or unfair treatment unrelated to the original context. This bans “social credit” systems that could penalize citizens extrajudicially (a practice inspired by concerns over systems used elsewhere).
- Biometric Surveillance Without Consent: Texas government agencies are prohibited from using AI to identify people via biometric data (e.g. facial recognition on public images) without consent, if doing so would violate an individual’s rights under the U.S. or Texas Constitution or other laws. In plain terms, law enforcement or agencies cannot scrape online photos to build AI face-recognition systems targeting individuals, at least not without legal authority or permission.
- Violation of Constitutional Rights: No person may develop or deploy an AI system “with the sole intent” to impair or infringe upon someone’s constitutional rights. This acts as a catch-all safeguard for civil liberties – for example, an AI system designed specifically to suppress free speech or other protected rights would violate the Act.
- Unlawful Discrimination: It is illegal to use AI with the intent to unlawfully discriminate against individuals in any protected class (such as race, gender, religion, etc.) in violation of state or federal anti-discrimination laws. TRAIGA clarifies that mere disparate impact is not sufficient by itself to demonstrate an intent to discriminate. In practice, this provision extends existing civil rights laws to AI tools – one cannot evade liability by “blaming the algorithm.” (Notably, Texas requires a showing of intent to discriminate, a higher bar than some other jurisdictions[^3].) The Act also carves out insurance: this particular section does not apply to insurance companies using AI in underwriting, so long as they comply with applicable insurance anti-discrimination laws[^8].
- Sexually Explicit Content Involving Minors: TRAIGA forbids developing or deploying AI systems for certain sexual content: specifically, using AI to produce or distribute child pornography or other unlawful sexually explicit material depicting minors, including “deepfake” images or videos of minors. It also bans AI systems that engage in text-based sexual conversations while impersonating a minor (e.g. a chatbot posing as an under-18 child in sexual contexts). These provisions target some of the most egregious potential abuses of generative AI. Violations would likely overlap with existing criminal laws on child sexual abuse material and deepfake pornography, reinforcing those prohibitions in the AI realm[^9].
Importantly, many of the above restrictions focus on intentional misuse of AI. The law aims to stop clearly harmful or manipulative AI applications, rather than regulating ordinary or beneficial uses of AI. Companies deploying AI in Texas should ensure none of their systems are designed or used in a manner that falls into these banned categories. In general, TRAIGA’s prohibitions align with activities that would be widely viewed as unethical or already illegal, signaling that Texas is addressing extreme AI abuses as a first step in AI regulation.
Transparency and Disclosure Requirements
Texas places a targeted transparency mandate on government and certain high-stakes AI uses, while sparing most private-sector AI from blanket disclosure rules. State agencies (and any “governmental agency” providing services to consumers) must disclose to individuals when they are interacting with an AI system, before or at the time of the interaction. The disclosure must be clear, conspicuous, in plain language, and avoid any deceptive design (“dark patterns”). It may be provided via an obvious notice or even a hyperlink, as long as it effectively informs the user. This means, for example, if a Texas state agency uses a chatbot or an AI-powered system to interface with citizens, it needs to affirmatively notify the user that AI is involved.
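For teams that build or supply such systems to Texas agencies, the practical task is to surface the notice before the first AI-generated response. Below is a minimal illustrative sketch of that pattern; the wording, URL, and class names are hypothetical examples, not language drawn from the Act.

```python
# Illustrative sketch only: a hypothetical chat handler for a state-agency
# service that surfaces a plain-language AI notice before any AI response.
# Names, wording, and the URL are examples, not statutory text.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Learn more about how this system works: https://example.texas.gov/ai-notice"
)

class AgencyChatSession:
    def __init__(self) -> None:
        self.disclosure_shown = False

    def respond(self, user_message: str) -> list[str]:
        messages = []
        if not self.disclosure_shown:
            # Disclose before or at the time of the first AI interaction,
            # clearly and without any deceptive framing.
            messages.append(AI_DISCLOSURE)
            self.disclosure_shown = True
        messages.append(self._generate_ai_reply(user_message))
        return messages

    def _generate_ai_reply(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return f"(AI response to: {user_message})"


if __name__ == "__main__":
    session = AgencyChatSession()
    print(session.respond("How do I renew my driver's license?"))
```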
A special disclosure rule applies in the healthcare context: if an AI system is used in providing health care services or treatments, the provider must inform the patient (or their representative) that AI is being used. This notice is required by the first time the AI-assisted service is delivered (or as soon as reasonably possible in an emergency). In other words, hospitals or medical professionals in Texas deploying AI (say, for diagnostics or patient communication) must disclose that fact to patients upfront. This ties into medical consent and transparency principles in healthcare.
Crucially, TRAIGA does not impose AI disclosure requirements on private businesses in general. Unlike some other jurisdictions, Texas stops short of forcing companies to label all AI interactions. For example, a private e-commerce site or social media platform using AI bots or recommender algorithms is not broadly required by this law to announce “AI is at work” to consumers. This is a notable contrast with states like Colorado (whose AI law will require notices for certain consequential automated decisions)[1] or pending proposals in California that would mandate AI interaction disclosures[2][^4]. Texas’s legislature deliberately limited the transparency mandate to public-sector and critical scenarios, avoiding new burdens on businesses in everyday AI uses. Nonetheless, companies contracting with Texas government agencies or operating in regulated fields (like healthcare) will need to build compliance with these disclosure obligations into their practices.
Biometric Data and Privacy Guardrails
TRAIGA strengthens protections around biometric data (e.g. facial images, fingerprints, voiceprints) in response to AI’s growing appetite for such data. First, the Act amends Texas’s existing Biometric Identifier Act to clarify that individuals do not consent to the capture of their biometric identifiers simply because their images or videos are publicly available online. In practical terms, just because your photograph or voice is posted on the internet does not mean an AI company can scrape it and use it to build a facial recognition model without violating Texas law. The only exception is if the image or media was made public by the individual themselves – in that case, one could argue implied consent for that specific data use. This change closes a potential loophole that some AI developers might claim, and reinforces personal privacy rights over biometric information.
The Act also updates the law to accommodate AI training with biometric data: it provides that using biometric identifiers for developing or training AI models is exempt from the usual consent requirement so long as the AI system is not being used to identify individuals. This means a company can use, say, a collection of voice recordings to train a speech recognition AI without obtaining prior consent for each voiceprint, provided the resulting AI isn’t deployed to recognize specific people. However, if that data or the AI model is later used for a commercial purpose that involves identifying someone, the normal biometric consent and retention rules kick in. In short, Texas is giving AI developers some leeway to train algorithms on biometric data, but not to misuse it for surveillance or identification in secret.
TRAIGA’s earlier-mentioned ban on government AI biometric identification (using publicly-sourced images without consent) is another privacy safeguard. It prevents state or local authorities from circumventing privacy rights by outsourcing identification tasks to AI. There are sensible exceptions – for instance, the law does not impede biometric AI used for security or fraud prevention purposes, or similar legitimate uses, as long as those uses don’t violate existing law. Financial institutions using voiceprint authentication and businesses that handle biometric data purely for internal AI training (not for identifying people) are also exempt from certain consent requirements. Additionally, in relation to unlawful discrimination, the insurance sector remains governed by its own stringent laws (e.g. insurance anti-discrimination rules), and TRAIGA defers to those regimes for any AI applications in that field. These carve-outs indicate that Texas aimed to avoid double-regulating industries that already have oversight, deferring to existing agency regimes and focusing instead on gaps specific to AI[^8].
Safe Harbors and Compliance Defenses
Reflecting its industry-friendly approach, the Texas AI Act incorporates several safe harbors and affirmative defenses to liability. These provisions are crucial for companies to note, as they can significantly mitigate enforcement risk if followed in good faith:
- Discovery Through Testing: If a company discovers a violation of the Act through its own proactive testing – including adversarial testing or “red-team” exercises – and presumably takes corrective action, then that violation cannot be used to impose liability. In essence, the law rewards organizations for actively probing and identifying flaws or biases in their AI. If you find a problem before the regulators do, you get a chance to fix it without penalty. (This encourages robust internal AI audits and testing programs[^6]).
- NIST AI Framework Compliance: A company that is in substantial compliance with a recognized AI risk management framework – specifically, the NIST Artificial Intelligence Risk Management Framework (RMF) (including its Generative AI Profile) or a similar widely accepted standard – has an affirmative defense against enforcement. This means if you can show you diligently followed best practices for AI governance and risk mitigation, that can shield you from fines or injunctions. Aligning AI processes with NIST guidelines (or ISO, etc.) will thus not only improve AI safety but also provide a legal safeguard under Texas law.
- Third-Party Misuse: TRAIGA stipulates that a company (AI developer/deployer) will not be held liable for prohibited AI outcomes caused solely by someone else’s misuse of its AI system. For example, if you provide a general AI tool and an unrelated bad actor uses it to generate disallowed content or commit unlawful discrimination without your intent or knowledge, your company isn’t automatically on the hook. The liability focuses on the party deploying AI with wrongful intent.
- Unintentional Violations and Cure: As discussed further below, the Act gives violators an opportunity to cure issues after notice, and there is even a statutory presumption that a person used reasonable care to comply. These features, while not traditional safe harbors, underscore that Texas is not looking to punish companies for honest mistakes or hidden biases if they are committed to remediation.
- Sectoral Exemptions: As noted, the law exempts certain regulated activities from specific provisions – e.g., an insurance company using AI is exempt from the AI anti-discrimination clause if it’s already complying with insurance discrimination laws; a bank using voice biometrics for account security isn’t subject to the biometric consent rule; and uses of biometric AI for fraud prevention or identity verification are largely permitted. These aren’t “blanket” safe harbors for those industries, but they prevent conflicts between TRAIGA and existing legal obligations.
Together, these safe harbor provisions send a clear message: Texas wants to encourage responsible AI practices rather than create a trap for the unwary. If companies are proactive – conducting risk assessments, following best practices, and policing their own AI for problems – the law will offer them protection. By contrast, willful bad actors (or those who ignore known issues) can still face stiff penalties. This calibrated approach appears intended to strike a balance between innovation and accountability[^11].
AI Sandbox for Innovation
One of the most novel features of TRAIGA is the establishment of an AI Regulatory Sandbox program – the first of its kind for AI at a U.S. state level. The sandbox is run by the Texas Department of Information Resources (DIR), in consultation with the new AI advisory council, and allows approved participants to test innovative AI systems in a controlled environment for up to 36 months. Companies (or other entities) must apply and get approval from DIR and any relevant regulatory agency to enter the sandbox. The application requires detailed information about the AI system to be tested, its intended use, an assessment of benefits and potential risks to consumers or public safety, and a plan for mitigating those risks. This ensures that only well-prepared, thought-out AI pilots are admitted.
Once in the sandbox, a participant may deploy and experiment with the AI system for up to 36 months without needing to obtain certain otherwise-required state licenses or regulatory approvals that might normally apply. During this period, many regulatory requirements can be waived or suspended for the participant to facilitate innovation. For example, a fintech startup testing a new AI-driven credit underwriting tool might get temporary relief from some state lending regulations. However, TRAIGA’s core prohibitions (Subchapter B) – the banned practices outlined earlier – cannot be waived even in the sandbox. In other words, the sandbox is not a free pass to engage in high-risk or harmful AI uses; it’s meant for thoughtful experimentation under oversight. If a participant does violate any of the fundamental prohibitions, they can still be subject to enforcement even while in the program.
The sandbox program includes safeguards to protect the public. Participants must submit quarterly reports to DIR, including performance metrics of the AI, updates on risk mitigation, and any consumer or stakeholder feedback received. This ongoing reporting lets the state monitor how the AI test is progressing. The DIR, along with the AI Council and any other relevant agency, can recommend removing a participant from the sandbox if the AI is found to pose undue risk to public safety or violate laws that weren’t waived. Additionally, regulatory immunity is limited: the Texas Attorney General and other agencies cannot pursue enforcement for violations of the specifically waived regulations during the sandbox testing period. But this immunity would not cover, for example, a violation of federal law or a non-waived state law.
Notably, the sandbox precludes punitive actions for good-faith tests – even the Attorney General may not bring charges for activities covered by the sandbox approval. This gives participants confidence to experiment without fear of immediate lawsuits, as long as they stay within the agreed parameters. The concept is akin to fintech sandboxes seen in other jurisdictions and the “regulatory sandbox” mandate in the EU AI Act (which will require EU countries to set up sandboxes by 2026)[^5][3]. Texas’s sandbox is an ambitious attempt to foster AI innovation in fields like healthcare, finance, education, and public services while maintaining oversight. Companies with cutting-edge AI ideas may find Texas an attractive proving ground, knowing that they can work with regulators to pilot new technologies in the sandbox program. Going forward, DIR must report annually to the legislature on the sandbox’s outcomes and may suggest regulatory changes to support AI innovation based on the program’s findings. This feedback loop could influence future Texas AI policy, making the sandbox not just a testing space but a learning mechanism for lawmakers as well.
Enforcement Mechanisms and Penalties
Enforcement Authority: TRAIGA vests exclusive enforcement authority in the Texas Attorney General (AG) for any violations of the Act. No other state or local agency can independently enforce these AI requirements (except in certain licensing contexts noted below), and the law explicitly creates no private right of action. In other words, individuals cannot sue companies under this Act; only the AG can bring enforcement actions. This centralized enforcement is intended to ensure uniform application and to prevent a flood of private litigation. The AG’s office is tasked with creating an online complaint portal for consumers to report suspected AI violations, and the AG can use civil investigative demands (subpoenas) to investigate complaints or potential non-compliance.
Notice and Cure: If the Attorney General believes a person or company has violated the AI law, the AG must provide written notice describing the specific violation and give the party 60 days to cure the issue before any lawsuit is filed. This “notice-and-cure” provision is significant. It provides companies a chance to fix problems (for example, disable a non-compliant AI feature or implement safeguards) without immediate penalties. If the company cures the violation within 60 days and notifies the AG with a written statement of the cure and the steps taken to prevent future issues, the AG cannot proceed with an enforcement action. Only if the violation is not cured (or is incurable) after the cure period can the AG sue. This mechanism encourages cooperation and remediation over punishment, reflecting a less adversarial approach than some other tech regulations[^7].
Civil Penalties: For violations that are not cured, TRAIGA imposes tiered civil penalties depending on the nature of the violation. Courts may assess fines as follows: (1) for each curable violation (or a breach of a provided cure statement), a penalty between $10,000 and $12,000; (2) for each uncurable violation, a penalty between $80,000 and $200,000; and (3) for any continuing violation, additional fines of $2,000 to $40,000 per day that the violation continues. These amounts, while substantial, are targeted mostly at serious or ongoing non-compliance. By comparison, they are lower than the massive penalties contemplated in the EU AI Act regime (which can reach tens of millions of euros or more for global companies)[^3]. Texas’s fines are more in line with typical state consumer protection penalties, indicating an effort to deter bad actors but not cripple businesses for first-time mistakes.
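For rough planning purposes, the tiered ranges above can be translated into a back-of-the-envelope exposure estimate. The sketch below is illustrative arithmetic only, using the statutory ranges quoted in this section; actual penalties are assessed by a court on the facts, and the function name and scenario are hypothetical.

```python
# Illustrative arithmetic only: rough exposure bounds implied by the tiered
# ranges quoted above. Not a compliance calculation.

CURABLE_RANGE = (10_000, 12_000)      # per curable violation or breached cure statement
UNCURABLE_RANGE = (80_000, 200_000)   # per uncurable violation
CONTINUING_RANGE = (2_000, 40_000)    # per day a violation continues

def exposure_estimate(curable: int, uncurable: int, continuing_days: int) -> tuple[int, int]:
    """Return (low, high) bounds implied by the statutory ranges."""
    low = (curable * CURABLE_RANGE[0]
           + uncurable * UNCURABLE_RANGE[0]
           + continuing_days * CONTINUING_RANGE[0])
    high = (curable * CURABLE_RANGE[1]
            + uncurable * UNCURABLE_RANGE[1]
            + continuing_days * CONTINUING_RANGE[1])
    return low, high

# Example: one uncured curable violation plus 30 days of continuing violation.
print(exposure_estimate(curable=1, uncurable=0, continuing_days=30))
# -> (70000, 1212000)
```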
In addition to fines, the AG can seek injunctions to stop the unlawful AI activity and can recover attorneys’ fees and investigative costs. The Act also creates a rebuttable presumption that the defendant “used reasonable care” to comply – effectively giving the benefit of the doubt to companies unless the state can show reckless disregard. A defendant who believes they are wrongly accused can even seek an expedited declaratory judgment or hearing to clarify that their conduct is not in violation, providing a mechanism to quickly clear compliant companies. Furthermore, if an AI system has not yet been deployed, the AG cannot bring an enforcement action for it. This last point aligns with the law’s spirit of encouraging testing and development (e.g., in the sandbox or pilot stage) without fear of penalties before a product is launched.
Additional Remedies for Regulated Professions: If a company or individual that is licensed by a state agency (for example, a professional license or a business certification) is found by the AG to have violated the AI law, the AG can recommend further disciplinary action to the relevant licensing agency. The state agency may then impose sanctions such as suspension or revocation of the license, probation, or an extra fine up to $100,000. This could come into play, for instance, if a licensed healthcare provider or financial institution egregiously violates TRAIGA – their professional license or charter could be at risk on top of the civil penalties. However, such agency action can only occur after the AG has won a case and proved a violation.
Overall, enforcement of TRAIGA will depend heavily on the priorities and resources of the Texas Attorney General’s office. Since there is no private litigation allowed, companies will not face class actions or individual lawsuits under this law[^7]. This centralized enforcement model can be a relief for businesses (reducing litigation exposure) but also means that compliance will likely be monitored through a combination of consumer complaints and proactive AG investigations. Companies should be mindful that the Texas AG (and possibly a dedicated unit or the new AI Council in an advisory role) will be watching for the most flagrant abuses – especially those harming consumers or violating rights – and those are likely to be the initial targets for enforcement actions.
Oversight and Governance Structure
Beyond setting rules and penalties, Texas is putting governance structures in place to guide AI policy. The law creates a Texas Artificial Intelligence Council, a state advisory body tasked with studying AI developments and making policy recommendations. The Council will have seven members, appointed by state leadership, with expertise in relevant areas (technology, ethics, law, etc.). It is administratively attached to the DIR (the state IT agency), which will support its operations. The Council’s role is primarily advisory and educational: it can publish reports on AI usage, compliance with laws, ethical issues, data privacy concerns, and potential liabilities related to AI in Texas. It is also charged with providing training and educational outreach to state and local government agencies on the use of AI systems. For example, the Council might develop best practice guidelines or training seminars for government employees deploying AI, to ensure they understand both the benefits and the risks.
Importantly, the AI Council has no regulatory or enforcement power – the Act explicitly prohibits it from issuing binding rules or interfering with other agencies. This limitation means the Council will function as a think tank or advisor, not a new regulator. Its advice could, however, influence future legislation or agency policies. By institutionalizing an AI advisory panel, Texas aims to keep pace with technological change and involve experts in guiding the state’s approach (somewhat analogous to federal advisory committees or the EU’s AI Office, albeit without enforcement authority[4]).
TRAIGA also integrates AI oversight into existing government processes. For instance, the law updates Texas’s Sunset review criteria (the periodic review of state agencies) to require evaluating each agency’s use of AI in its operations. When a Texas agency comes up for Sunset review, there must be an assessment of how it is using or considering AI, and how that impacts its mission. Additionally, state agencies will need to report information about their AI systems as part of their IT management duties – the DIR will collect data on agencies’ AI use and inventory of AI applications. These measures ensure that the state government itself remains transparent and accountable in its adoption of AI, leading by example.
In summary, the governance provisions of TRAIGA create a supportive infrastructure around the new AI rules: an expert council to advise and educate, and requirements for government self-audit of AI usage. This reflects a comprehensive strategy to not only regulate private actors but also to build the state’s internal capacity for responsible AI governance. Companies, especially those that may interact with the Council or participate in the sandbox, can view this as an opportunity to engage with policymakers and shape best practices going forward.
Outlook and Practical Implications for Businesses
TRAIGA’s enactment signals that companies operating in Texas (or serving Texas users) need to incorporate AI compliance into their legal and operational planning. Companies implementing AI should take proactive steps now to prepare for the law’s January 2026 effective date. Key practical implications and recommendations include:
- Inventory Your AI Systems and Uses: Begin by identifying and cataloging all AI and automated decision systems used by your organization that could impact Texas residents or are deployed in Texas. This inventory should include the purpose of each AI system and an initial risk assessment. Flag any use cases that might fall under TRAIGA’s prohibited categories (e.g. algorithms that could be perceived as discriminatory or any generative AI features that might inadvertently produce disallowed content). Early identification will help focus compliance efforts where they’re needed most; a minimal illustrative record format appears in the sketch following this list.
- Policy Review and Training: Review and update internal policies, procedures, and developer guidelines to address TRAIGA’s requirements. For example, ensure your AI development policies explicitly forbid building features that would violate the Act (such as bias in algorithms or manipulation of users). Implement bias mitigation and testing protocols – e.g. require documentation of datasets and fairness testing to demonstrate no intent to discriminate. Train your technical teams and business units about the new law so they understand these constraints and the importance of design controls. Contracts with vendors should also be revisited to require AI systems provided to you (or on your behalf) comply with TRAIGA’s standards. In short, incorporate AI compliance into your broader governance, risk, and compliance (GRC) framework. This may also involve aligning with frameworks like NIST AI RMF, which not only helps with compliance but could serve as a legal defense as discussed above.
- Implement Oversight and Testing (Use Safe Harbors to Your Advantage): Establish regular AI testing and auditing processes. This includes adversarial testing or “red teaming” your AI systems to find potential problematic behavior (e.g. could the system be tricked into hate speech or unlawful recommendations?). By doing so, you not only improve the system but also avail yourself of TRAIGA’s safe harbor for self-discovered issues. Maintain thorough documentation of these tests and any remedial actions. Similarly, adopting and documenting compliance with industry best practices (such as NIST’s AI guidelines or ISO 42001 for AI management) will put you in a strong position to assert the affirmative defense if ever challenged. In essence, rigorous internal oversight is now not just good practice but a defensive shield under Texas law.
- Prepare for Disclosure Obligations (Public Sector and Healthcare): If your company provides AI-powered services or products to Texas government agencies (as a vendor or contractor), be prepared to implement the required consumer disclosures. This may mean building user interface features that clearly label AI interactions or developing scripts and notices for chatbots that announce themselves as AI. Government procurement contracts will likely start including clauses about TRAIGA compliance – including the disclosure rule. Similarly, healthcare organizations employing AI (for instance, AI diagnostics or decision support tools in patient care) should create a process to inform patients about AI involvement, whether through consent forms, patient portals, or on-site notices. These disclosures should be reviewed by legal and compliance teams to ensure they meet the “clear and conspicuous” standard and are given at the correct time. By acting now, businesses can avoid scrambling to retrofit disclosure mechanisms later[^10].
- Consider the Sandbox for Innovative Projects: If you are developing a cutting-edge AI application that doesn’t neatly fit existing regulations (especially in sectors like fintech, telehealth, or edtech), evaluate the opportunity to participate in Texas’s AI sandbox program. The sandbox could provide a relatively low-risk environment to pilot your AI solution with oversight and temporary relief from certain rules. Work with counsel to prepare a strong sandbox application – outlining consumer benefits, how you’ll manage risks, and compliance with baseline requirements – to increase the chance of approval. While inside the sandbox, maintain diligent reporting and open communication with DIR regulators. Success in the sandbox might not only help refine your product but could also influence favorable regulatory changes down the line. Even if you don’t enter the program, staying informed about sandbox outcomes and guidance from the Texas AI Council can give insight into the state’s evolving expectations for AI safety.
- Monitor Legal Developments: The regulatory environment for AI is fast-moving. Companies should keep an eye on federal activity that could affect TRAIGA’s enforceability[^2] as well as new AI laws emerging in other states. The proposed federal moratorium on state AI laws was stripped from the 2025 budget reconciliation bill, but similar preemption efforts could resurface; if Congress were to enact one, compliance efforts may need to refocus on federal standards. In the meantime, more states (or Texas itself) could expand on these regulations. California, New York, and others are considering their own AI bills – which may impose additional or different obligations. Harmonizing compliance across jurisdictions will be a challenge, so staying plugged into legal updates (through counsel or industry groups) is essential. Consider also engaging in public comment or advocacy through trade associations to help shape balanced AI regulations.
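As a starting point for the inventory step described in the first recommendation above, the following is a hypothetical sketch of what a per-system record might capture. The field names are illustrative and are not drawn from the statute or any official guidance.

```python
# Hypothetical inventory schema for the AI-system inventory step above;
# field names are illustrative, not taken from TRAIGA or official guidance.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable business unit or team
    purpose: str                    # what the system does and for whom
    deployed_in_texas: bool         # offered to or affecting Texas residents?
    government_facing: bool         # used in services to a Texas agency?
    healthcare_use: bool            # used in delivering care to patients?
    uses_biometric_data: bool       # biometric data in training or identification?
    prohibited_category_flags: list[str] = field(default_factory=list)
    # e.g. "potential manipulation", "biometric identification",
    # "discrimination risk" -- anything flagged for legal review
    risk_notes: str = ""

inventory = [
    AISystemRecord(
        name="support-chatbot",
        owner="Customer Operations",
        purpose="Answers billing questions for consumers",
        deployed_in_texas=True,
        government_facing=False,
        healthcare_use=False,
        uses_biometric_data=False,
    ),
]
```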
By taking these steps, companies will not only ensure compliance with Texas’s new law but also demonstrate a commitment to responsible AI practices that is likely to serve them well with regulators, customers, and business partners. From a broader perspective, TRAIGA is an opportunity for forward-looking companies to distinguish themselves by building ethical, transparent, and well-governed AI systems while continuing to innovate and explore how AI can redefine their industries.
As always, organizations should consult with legal counsel to tailor compliance strategies to their specific AI use cases. Texas’s new law may be unique in its details, but it fits into a global trend of AI governance that no company can afford to ignore.
Endnotes:
[^1]: Texas is the third U.S. state to enact a comprehensive AI law, following Colorado[5] and Utah[6] in 2024. Each state’s approach differs – Colorado’s law (effective 2026) emphasizes risk assessments and transparency for high-impact AI, while Utah’s law centers on disclosure obligations for generative AI and a state AI “learning laboratory” program. Texas, by contrast, zeroes in on outright prohibitions of certain practices, aiming for an innovation-friendly balance.
[^2]: The federal initiative that could have substantially impacted TRAIGA has been resolved. The “One Big Beautiful Bill Act” (H.R. 1)[7] originally contained a provision that would have imposed a 10-year moratorium on state and local AI regulations, with one version conditioning federal broadband funding on states’ adoption of the ban. However, the Senate voted 99-1 to strip the AI moratorium from the bill, effectively killing the federal preemption effort. After a vote-a-rama lasting more than 24 hours, the Senate passed the bill on July 1, 2025, in a mostly party-line 51–50 vote, and the House voted 218–214 on July 3 to pass the final version. The President signed the bill into law on July 4, 2025. With the moratorium removed, TRAIGA and other state AI laws can proceed without federal interference.
[^3]: Comparative Penalties: Texas’s maximum AI violation fines (~$200k per uncurable violation) are significant but modest next to the EU AI Act’s penalties (which can reach €35 million or 7% of global annual turnover for the most serious infractions). This reflects the generally lighter-touch regulatory philosophy in Texas. Likewise, proving unlawful discrimination under TRAIGA requires a showing of intent, whereas the EU framework and some U.S. proposals focus on discriminatory impact regardless of intent. Businesses operating internationally will need to navigate these differing standards.
[^4]: Transparency in Other Jurisdictions: Texas limits AI disclosure duties to public agencies and healthcare, which is notably narrower than approaches elsewhere. For instance, Colorado’s AI law will require companies to inform consumers when AI is used in certain consequential decisions (e.g. those affecting someone’s legal or employment rights).[1] California is considering legislation (AB 331) that would mandate clear disclosure whenever a business provides an AI-driven product or service likely to be perceived as human.[2] Texas’s decision to exclude private-sector interactions from disclosure was likely to avoid overburdening businesses, but companies should be mindful that future laws (or even Texas’s own future amendments) may expand transparency requirements.
[^5]: Regulatory Sandboxes: Texas’s AI sandbox is a pioneering concept at the state level. No other U.S. state currently offers a dedicated AI sandbox program. The idea is inspired in part by fintech sandboxes (e.g., Arizona’s fintech sandbox in 2018) and global trends. Notably, the EU AI Act explicitly encourages innovation through sandboxes, requiring EU member states to set up at least one AI regulatory sandbox by 2026[3]. Texas is effectively acting ahead of the curve domestically. Companies accepted into the sandbox should treat it as a privilege – regulatory tolerance for experimentation – and ensure they meet all conditions to keep that trust.
[^6]: Safe Harbor via NIST Framework: The inclusion of a NIST AI Risk Management Framework safe harbor is a unique incentive. It parallels notions in cybersecurity law where adherence to frameworks like NIST or ISO can mitigate penalties after data breaches. By following NIST’s AI guidelines (which cover governance, mapping AI risks, measuring and managing those risks), companies create not only better AI outcomes but also a defensive evidence trail. In any enforcement inquiry, being able to show “substantial compliance” with NIST or equivalent standards could be a game-changer in arguing that the company took responsible steps (potentially invoking the presumption of reasonable care as well). We recommend documenting compliance efforts in detail – it may serve as a legal “insurance policy” under TRAIGA.
[^7]: No Private Lawsuits: TRAIGA’s ban on private enforcement aligns with the trend in state AI laws to centralize enforcement (Colorado does similarly[1]). This contrasts with certain privacy laws like Illinois’s Biometric Information Privacy Act (BIPA),[8] which allow individuals to sue for violations and have led to costly class actions. Texas clearly wanted to avoid opening the floodgates of litigation against AI developers. That said, companies shouldn’t be complacent – the Texas AG has strong investigative powers, and a single enforcement action could lead to substantial fines and injunctive relief. Moreover, lack of a private right of action in this AI law does not shield companies from other legal theories (e.g., negligence or product liability claims) if an AI system causes harm. In-house counsel should therefore view TRAIGA compliance as part of a broader risk management program, not simply a narrow regulatory checkbox.
[^8]: Industry Carve-outs: Texas acknowledged that certain industries are already regulated with respect to risks TRAIGA addresses. For example, insurance practices are heavily regulated to prevent unfair discrimination; thus, TRAIGA defers to existing insurance laws for AI-driven underwriting. Likewise, financial institutions have know-your-customer and fraud detection processes (often using voice biometrics or facial recognition) under banking laws – TRAIGA’s biometric provisions don’t impede those, provided institutions comply with banking regulations. And healthcare data use is governed by HIPAA, so the law exempts HIPAA-compliant activities from additional requirements. These carve-outs aim to avoid duplication or conflict. Companies in regulated sectors should primarily ensure their AI adheres to their sector’s rules, while also meeting TRAIGA’s general prohibitions (which largely echo fundamental legal and ethical norms).
[^9]: Criminal Law Overlap: The ban on AI-generated child sexual abuse material and sexual deepfakes of minors is a direct response to growing concerns about “deepfake” crimes. Texas Penal Code §43.26[9] already criminalizes child pornography, and §21.165[10] criminalizes the creation or distribution of certain deepfake videos (specifically, unauthorized pornographic deepfakes of real persons). TRAIGA complements these by targeting the tools – it stops developers from creating or distributing AI systems intended for such illicit content. Thus, even if a particular AI-generated image hasn’t been used yet to commit a crime, the act of providing the AI with the purpose to generate that image is itself unlawful. AI companies should have robust content moderation and use policies to prevent their technology from being turned toward these heinous ends. Expect close scrutiny from authorities on any AI capabilities that could be misused for exploitation of minors.
[^10]: Impact on Vendors and Contracts: Companies that sell AI solutions or services to Texas state agencies (or other public sector entities in Texas) will likely see new contract requirements flowing down from this law. For instance, government RFPs and contracts may require the vendor to warrant that the AI system complies with TRAIGA, that it can provide the necessary consumer disclosures, and that it does not include any prohibited functions (such as algorithmic bias or unlawful profiling). Vendors should be prepared to answer detailed questionnaires about their AI’s design and to possibly undergo assessments or certifications of compliance. In effect, TRAIGA could make AI ethics and compliance a competitive differentiator in winning business. Those who have built in transparency features and strong AI governance will have an edge. Private companies using third-party AI tools should also seek contractual assurances and indemnities from vendors regarding compliance with AI laws, as part of their vendor risk management.
[^11]: “Innovation-Friendly” Regulation: Observers note that the final version of TRAIGA is more business-friendly than earlier drafts, which originally contemplated a broader, EU-style regulatory scheme with risk tiers and extensive compliance obligations. During the legislative process, the bill was pared back to focus on egregious harms and to incorporate flexibility like sandboxes and safe harbors. Texas policymakers were evidently balancing economic development goals – attracting AI investment to Texas – with the need to address genuine dangers of AI. The result is a law that sets guardrails without unduly stifling innovation. Companies that engage constructively with this framework (e.g. by participating in the sandbox or sharing best practices via the Council) may find Texas to be a hospitable environment for AI development. It will be important to monitor how this “light-touch” model performs in practice: if it succeeds in curbing abuses without hindering growth, it could influence other U.S. states’ and even federal legislation to follow a similar path. Conversely, any high-profile AI mishaps in Texas could prompt calls to tighten the rules in the future. As of now, TRAIGA represents a significant first step in state-level AI governance, one that seeks to foster responsible AI rather than instill fear of it.
[1] Future of Privacy Forum. The Colorado Artificial Intelligence Act: Policy Brief. Colorado General Assembly, https://leg.colorado.gov/sites/default/files/images/fpf_legislation_policy_brief_the_colorado_ai_act_final.pdf
[2] Senator Josh Becker. “Governor Signs Landmark AI Transparency Bill Empowering Consumers to Identify AI-Generated Content.” Senate District 13, 19 Sept. 2024, https://sd13.senate.ca.gov/news/press-release/september-19-2024/governor-signs-landmark-ai-transparency-bill-empowering
[3] European Union. Artificial Intelligence Act: Article 57 – AI Regulatory Sandboxes. https://artificialintelligenceact.eu/article/57/
[4] European Commission. “Governance and Enforcement of the AI Act.” https://digital-strategy.ec.europa.eu/en/policies/ai-act-governance-and-enforcement
[5] Colorado General Assembly. SB24-205: Consumer Protections for Artificial Intelligence. https://leg.colorado.gov/bills/sb24-205
[6] Utah State Legislature. SB0149 Artificial Intelligence Policy Act (2024). https://le.utah.gov/~2024/bills/static/SB0149.html
[7] U.S. Congress. H.R.1 – One Big Beautiful Bill Act, 119th Congress (2025). Congress.gov, https://www.congress.gov/bill/119th-congress/house-bill/1/summary/00
[8] Illinois General Assembly. Public Act 103-0769 – Biometric Information Privacy Act. https://www.ilga.gov/Legislation/PublicActs/PrinterFriendly/103-0769
[9] Texas Legislature. Texas Penal Code Section 43.26. FindLaw, https://codes.findlaw.com/tx/penal-code/penal-sect-43-26/
[10] Texas Legislature. Texas Penal Code Section 21.165. Texas Public Law, https://texas.public.law/statutes/tex._penal_code_section_21.165
Looking for guidance on your AI implementation journey?
Connect with Ajay Mago or any member of EM3’s Artificial Intelligence & Machine Learning practice for professional support.

Ajay Mago, Managing Partner at Maxson Mago & Macaulay, LLP (EM3 Law LLP).
