Texas Just Passed a Major AI Law — Here's What It Means for Your Business

Governor Greg Abbott has signed the Texas Responsible Artificial Intelligence Governance Act (TRIAGA) into law, making Texas the second state (after Colorado) to pass a comprehensive AI statute applicable to both public and private actors. But unlike Colorado’s broad requirements, Texas kept it tight, focusing on a few specific, high-risk uses of AI rather than sweeping regulation.

If you’re working with facial authentication, security systems, or AI-based decision tools, this law should be on your radar. Below, we break down what’s in it, how it interacts with Texas’ biometric privacy law (CUBI), and why this might be the most startup-friendly AI legislation in the country.

At a Glance

  • Effective Date: January 1, 2026

  • Applies to: Government, businesses, individuals

  • Focus: Harmful behavior, biometric misuse, transparency

  • Key Update: Major clarification of biometric privacy law (CUBI)

  • Enforcement: Texas Attorney General only (no private lawsuits)

Legislative Background: A Targeted Response to Targeted Concerns

TRIAGA didn’t show up out of nowhere. It followed national conversations around AI misuse—deepfakes, discriminatory algorithms, and opaque government systems. But it also followed serious engagement from technologists and privacy advocates.

At the Texas Capitol, developers asked for clarity. Privacy organizations wanted meaningful guardrails. Legislative testimony and committee reports reflect a balance between those camps. Importantly, state legislators clarified during hearings that tools like facial authentication used for security—especially with notice—are not the focus of this law.

Legislative analysis, including the Texas House Research Organization’s report, emphasized avoiding the EU’s overreach while still creating enforceable rules. Testimony also led to a clearer carveout for security tools, which helped keep common applications, like facial recognition for building access, out of the penalty box.

The House Research Organization’s bill analysis highlighted TRIAGA’s intent: Limit harmful or rights-infringing AI but make room for tools that help Texans live and work more safely.

What TRIAGA Covers: Two Tracks, One Goal

TRIAGA regulates both the public and private sectors, but not identically. It outlines specific prohibited uses of AI that apply to all persons and adds a few extra restrictions for governmental entities.

Key Rules That Apply to Everyone

  • § 552.052: Using AI to manipulate someone into self-harm, harming others, or committing crimes

  • § 552.055: Using AI to intentionally violate someone's constitutional rights

  • § 552.056: Using AI for intentional unlawful discrimination

  • § 552.057: Using AI to create deepfake pornography or child sexual abuse material

These are targeted, outcome-focused prohibitions. If you're not using AI for harm, you're likely in the clear.

Extra Rules That Apply to Governmental Entities

  • § 552.051: Agencies must notify users when they interact with AI

  • § 552.053: No AI-based social scoring systems (e.g., ranking citizens to control access to services)

  • § 552.054: No biometric identification via AI if it violates rights or other laws

Let’s drill into that last one, because it's where confusion tends to live.

Biometrics: Permitted if Rights Are Respected

TRIAGA does not ban government use of biometrics, facial recognition, or facial authentication. It only restricts how governmental entities use those tools:

A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data… without the individual’s consent if the gathering would infringe on any right under the U.S. or Texas Constitution or any other law. (§ 552.054)

Plain English:

  • You can use AI with biometrics if it doesn’t violate rights and you’ve given notice or obtained consent where needed.

  • Uses for security, access control, and safety are permitted, especially if accompanied by a privacy policy or signage.

This is consistent with the legislative history, which emphasizes keeping high-value government tech uses intact while avoiding China-style surveillance.

Here’s a breakdown of how TRIAGA treats common government use cases:

  • Facial authentication at secure entry: Consent needed? ❌ No (only if it would violate rights). Permitted? ✅ Yes, with safeguards. Security use plus signage/policy likely satisfies TRIAGA.

  • Chatbot for public services: Consent needed? ❌ No. Permitted? ✅ Yes. Notice of the AI interaction is required under § 552.051, but there's no rights impact.

  • Mass surveillance without due process: Consent needed? ✅ Yes. Permitted? ❌ Likely prohibited. Violates rights if deployed without oversight.


So, facial authentication for facility access or security screening? That’s 100% permitted, especially with notice and safeguards in place.

The Subtlety of Section 552.054: Consent “If” Rights Are Violated

One of the most legally nuanced provisions is Section 552.054. It states:

A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or a state or federal law.

Translation: Government agencies may use AI for biometric identification without consent, unless that use infringes on rights under the U.S. Constitution, the Texas Constitution, or state or federal law.

Important: It does not reach other states' laws. TRIAGA limits this clause to Texas and federal law, so an out-of-state statute like Illinois' BIPA would not trigger it unless Texas adopted similar requirements.

This is a conditional restriction, not a categorical ban. It requires careful legal interpretation of whether a given use crosses a rights threshold (e.g., unreasonable search, due process violation, etc.).

The only enforcement authority, the Texas Attorney General, has broad discretion in how this is applied.

What About CUBI — Texas’ Biometric Privacy Law?

CUBI (the Capture or Use of Biometric Identifier Act) has existed since 2009. It says private entities can’t collect biometric data without consent. But it’s had gray areas, particularly what is and isn’t a “commercial purpose” triggering CUBI requirements. TRIAGA brings some clarity:

Security/Safety Exemption

Under a clarification added by TRIAGA, CUBI does not apply to biometric data used for a security or safety purpose.

So if you’re using facial authentication to protect access to facilities, systems, or physical spaces, CUBI likely doesn’t apply.

AI Training Carveout

CUBI also doesn’t apply to biometric data used to train an AI model, as long as the data isn’t being used to identify a real person in the process.

These two changes make CUBI easier to navigate for innovators and the organizations buying their products.

Are Government Entities Bound by CUBI? TRIAGA?

Generally, no. CUBI applies only to biometric data captured for a “commercial purpose.” State agencies, local governments, and public colleges and hospitals in Texas are governmental entities rather than private entities, so their activities typically aren't for a “commercial purpose” as CUBI contemplates.

So:

  • Private companies: Covered by CUBI? ✅ Yes. Regulated by TRIAGA? ✅ Yes (if applicable)

  • State agencies: Covered by CUBI? ❌ No. Regulated by TRIAGA? ✅ Yes

  • Local governments: Covered by CUBI? ❌ No. Regulated by TRIAGA? ✅ Yes

  • Public colleges/hospitals: Covered by CUBI? ❌ No (for Subchapter C). Regulated by TRIAGA? ❌ No under § 552.054

Enforcement and Judicial Clarity

Under TRIAGA, there is no private right of action. Enforcement is exclusive to the Texas Attorney General, and violators have 60 days to fix the problem before penalties kick in.

That matters. It means the law is meant to guide, not punish. And it gives companies a chance to learn and adapt.

Yes, courts may still have to interpret terms like “interact,” “publicly available,” or “social scoring.” But the plain language gives us a lot to work with.

Under Texas law, when a statute doesn’t define a term, courts typically look first to the plain language—the ordinary, everyday meaning—of the word. That’s why it’s important to be clear about how these key terms are likely to be interpreted:

  • Interact: A two-way exchange between a user and an AI system, like input/output via a chatbot or virtual agent.

  • Social Scoring: The use of AI to rank or assign value to individuals based on personal behavior or attributes outside of financial data. Often modeled on China's controversial social credit system.

  • Publicly Available Sources: Content that can be legally accessed online or in public spaces without special authorization, such as open social media photos or public security camera feeds.

Similarly, TRIAGA (passed as HB 149) doesn't define "security" or "safety," but based on plain language, legislative context, and analogy, those terms likely cover the following:

  • Facial authentication to enter a building or data center: Yes

  • AI-powered tailgating detection: Yes

  • Facial match for fraud prevention: Yes

  • In-store sentiment analysis for marketing: No

  • Demographic analytics from camera footage: No

These definitions help reduce legal ambiguity while providing compliance clarity for product and security teams.

No Portland-Style Bans in Texas

Remember when Portland banned private companies from using facial recognition—even for basic security? Yeah, Texas didn’t do that.

Thanks to a preemption clause in TRIAGA, local governments can’t pass their own AI laws that are stricter than the state’s:

“A political subdivision of this state may not adopt or enforce a law… that conflicts with or is more stringent than this subchapter.” (Texas Gov’t Code § 552.059)

That means:

  • ✅ No city-by-city patchwork

  • ✅ One consistent statewide standard

  • ✅ More legal clarity for developers and buyers

In other words: Texas won’t let cities go rogue on AI policy.

The Sandbox: Innovation Gets a Launchpad

TRIAGA doesn’t just restrict bad behavior—it encourages innovation with a regulatory sandbox. If you're developing new AI products, especially those that need real-world testing, this is a big deal.

How It Works:

  • Administered by the Texas Department of Information Resources (DIR)

  • You apply, describe your use case, and show public benefit

  • If approved, you get temporary regulatory relief to test your AI responsibly

  • You’re still under oversight and transparency rules

This is modeled on fintech and health sandboxes. Texas wants you to test smart ideas here, not somewhere else.

So if you’re building something new and important but don’t want to trip over unclear rules, this is the off-ramp you need.

Final Take: Texas Built a Law That Balances Innovation and Accountability

TRIAGA isn’t perfect, but it’s practical. If you’re developing or deploying AI in Texas, especially in facial authentication, identity security, or automation, this law:

✅ Defines what’s clearly off-limits
✅ Leaves space for practical tools
✅ Adds a sandbox for next-gen ideas
✅ Aligns CUBI with real-world development needs

Texas didn’t say “no” to AI. It said, “Do it right.”

Need help navigating TRIAGA, CUBI, or putting together a sandbox application? Unified Law works with AI and biometric companies across sectors. Let’s build responsibly—without red tape.

Contact us to learn more.