Sep 5, 2025

Don’t tell Texas—it already knows—but yesterday I was one CLE hour shy of my required hours.
So, I did what any overworked, tech-adjacent lawyer with a mild disdain for vendor booths would do: I fired up the CLE portal and started browsing the online options. No watered-down wine. No “networking” with someone trying to sell me redlines-as-a-service. Just me, a coffee, and Lawline.
Because I work in AI, privacy, and tech, I figured I’d check those categories.
Big mistake.
AI this. AI that. AI for breakfast. AI for bedtime.
I’m exhausted.
Not from AI itself—I love this stuff—but from content fatigue. So much of it is either vague philosophy (“keep humans in the loop!”), theoretical frameworks without connection to the real world, or practical takeaways that translate into more sticky notes on my monitor.
In my hunt for one last non-AI CLE hour, and with my 25th law school reunion around the corner, I thought this might be the year I crack the Rule Against Perpetuities. (Nope.)
Instead, predictably, I clicked on “State of the Union: AI Edition” by Alex Proctor, and there it was, right in the opening slides: “AI Exhaustion is Real.”
Finally. Someone said it.
We’re All Tired. But We Can’t Tap Out.
Here’s the thing: we don’t get to check out just because AI governance feels overwhelming.
Governance isn’t the problem. It’s the protection.
Oddly enough, it wasn’t the CLE that re-energized me. Alex led me down the rabbit hole of AI 2027—a forecasting paper that reads like science fiction but offers some real-world perspective. It models two futures:
1. The “High-Speed Race” Scenario
Where oversight collapses, global competition takes over, and safety gets sidelined.
2. The “Controlled Progress” Scenario
Where we actually govern the thing—slow enough to stay in control, fast enough to still lead.
First, let me say that I’m an optimist. As General Counsel of Alcatraz, I see daily how AI helps us work smarter and keeps the world safer. I’m not worried about how we use AI.
We have oversight. We think before we deploy. We’ve built in legal guardrails.
But what keeps me up at night is what happens when there aren’t any guardrails—and when companies (especially smaller ones) believe that governance isn’t their issue.
The Biggest Mistake: Thinking You’re Too Small to Worry
Let me say it clearly:
The biggest risk I see right now isn’t what companies are doing with AI—it’s what they’re not doing, because they think the laws don’t apply to them.
If you use ChatGPT in your workflow or to edit a LinkedIn article (cough cough)? You’re touching AI.
If your sales team is testing new AI tools on client data? You’re touching AI.
If you work in hiring, healthcare, finance, or anything consumer-facing? You’re already under the lens of emerging AI regulation.
This isn’t hypothetical.
The U.S. AI Action Plan—especially Pillar 2—lays out a very real and very practical roadmap:
- Document your tools
- Monitor how they’re used
- Embed oversight and risk-based safeguards
- Make transparency a default
It’s not a restriction. It’s a framework. And it will separate the companies that scale safely from the ones that make headlines for all the wrong reasons.
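To make those four items concrete, here is a minimal sketch of what “document your tools” and “monitor how they’re used” could look like on day one. It’s written as Python only because that keeps it compact; a spreadsheet or a shared doc does exactly the same job. The field names, the risk tiers, and both example entries are my own illustration, not language from the Action Plan.

```python
# Minimal, hypothetical sketch of an AI tool inventory.
# Field names and risk tiers are illustrative only; they are not taken
# from the AI Action Plan. A spreadsheet works just as well.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIToolRecord:
    name: str                    # e.g., "ChatGPT", an internal copilot, a vendor add-on
    owner: str                   # the person accountable for how it's used
    purpose: str                 # what the team actually does with it
    data_touched: list[str]      # client data? employee data? public info only?
    risk_tier: str               # "low" / "medium" / "high" -- your own scale
    human_review_required: bool  # is a person checking the output?
    last_reviewed: date          # when someone last looked at this entry


inventory = [
    AIToolRecord(
        name="ChatGPT",
        owner="General Counsel",
        purpose="Drafting and editing marketing copy",
        data_touched=["public information only"],
        risk_tier="low",
        human_review_required=True,
        last_reviewed=date(2025, 9, 1),
    ),
    AIToolRecord(
        name="Vendor sales assistant (hypothetical)",
        owner="Head of Sales",
        purpose="Summarizing prospect calls",
        data_touched=["client data"],
        risk_tier="medium",
        human_review_required=False,
        last_reviewed=date(2025, 8, 15),
    ),
]

# "Monitor how they're used": the simplest possible check --
# flag anything touching client data without a human in the loop.
for tool in inventory:
    if "client data" in tool.data_touched and not tool.human_review_required:
        print(f"Review needed: {tool.name}")
```

The format doesn’t matter. What matters is that the list exists, someone owns it, and someone looks at it on a schedule.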
I Know You’re Tired. So, Start Small.
Governance doesn’t have to be overwhelming. It just has to start.
So, if you’re reading this and thinking, “I know we should do something, but we don’t have time to build a whole AI program”—good news:
You don’t have to. Not all at once. Just take it little by little.
AI Governance: Start Here (Not Everywhere)
[Checklist graphic: AI Governance Starter Checklist]
One Step Beats No Step
You probably don’t need a Chief AI Officer.
You definitely don’t need a 50-slide deck on AI governance written by someone like me.
You need a plan. You need ownership. And you need to start—even if it’s messy, even if it’s just a Google Doc.
This is what we help teams with every day at Unified Law Group, PB LLC. We take the overwhelming, the abstract, the someday, and turn them into clear, usable guardrails that actually fit how you work.
Exhausted? Yes.
Inspired? Also, yes.
(Five stars, Alex.)