If you’re a law firm owner in 2026, you’re being asked to do something that would have felt reckless ten years ago: put more of your firm’s data into the cloud. More client information, more communications, more documents.
And now, we’re layering AI on top of it.
That’s not a small ask.
I recently sat down with Jonathan Watson, Clio’s CTO, to talk about exactly that tension: as AI becomes more embedded in the legal tech stack, what happens to security? What happens to privilege? And what should lawyers actually be doing right now to protect themselves?
Here’s the short version: security isn’t a feature. It’s a discipline. And in an AI world, it has to be the first line item, not the compliance checkbox at the end.
Security First. Not Security Eventually.
One of the things Jonathan emphasized is that, inside Clio, security isn’t something you “add.” It’s something you build around.
Every product, every acquisition, every new AI capability has to pass through the same gating principle: if customer and client data can’t be protected to a high standard, it doesn’t ship.
That’s not marketing language. That’s operational reality.
They run external audits. They run internal and external penetration tests. They have red teams trying to break systems and blue teams building them stronger. And when they acquire companies (like vLex or ShareDo), those systems get stress-tested and brought up to the same security standards before being fully integrated.
That’s the part most lawyers don’t see. But it’s the work that allows innovation to move forward without eroding trust.
And trust, in legal, is the whole ballgame.
AI Changes the Risk Profile (But Not the Responsibility)
The AI question is where things get interesting.
We’re no longer talking about a practice management tool that stores contacts and billing entries. We’re talking about:
Document classification
That’s deep integration.
So the obvious question becomes: how do you build AI on top of client data without compromising it?
According to Jonathan, the approach is cautious by design. Data is de-identified. Anonymized. Processed only after users opt in. And new use cases are reviewed by internal groups whose job is to challenge whether something is merely “fast” or actually “right.”
That can slow innovation down.
But here’s the reality: in legal tech, moving fast and breaking things isn’t a viable strategy.
Trust in this space is hard-earned and easily lost. And once you lose it, you don’t get it back.
Your Data Is Yours
There’s also a persistent fear among lawyers that AI systems are “training on my documents” to help other firms.
Jonathan was clear: that’s not happening. Firm data isn’t being used to power other firms’ drafting or workflows. If anything like that were ever launched, it would be explicit and opt-in, not silent or buried in fine print.
That matters.
Because the difference between “AI-assisted drafting within my firm” and “my data improving someone else’s work product” is huge.
And lawyers are right to care about that distinction.
Communications: The Next Frontier (and the Next Anxiety)
If documents are exciting, communications are nerve-wracking.
Bringing AI into client emails, call transcripts, or messaging threads triggers an instinctive privilege panic. Are we introducing a third party? Are we risking waiver?
Here’s the uncomfortable truth: most firms are already routing communications through cloud-based transcription systems. Many rely on third-party tools to record, store, and process communications.
AI doesn’t necessarily create a new category of risk; it often replaces human intermediaries with automated systems. In many cases, that can improve accuracy and reduce exposure.
It feels like a leap.
But often, it’s just stepping up a curb.
Quantum Computing Is Not Your Biggest Problem
At one point, I asked Jonathan about quantum computing, because if we’re going to panic, we might as well panic properly.
His response was practical: yes, companies are watching it. Yes, cryptography will evolve. But if you’re still using weak passwords, sharing accounts, or skipping multi-factor authentication, quantum isn’t your biggest threat.
That’s the piece lawyers need to hear.
We love debating edge-case technological futures while ignoring the very real vulnerabilities sitting in our inboxes today.
The Three Things Every Law Firm Should Do (Now)
If you do nothing else after reading this, do these three things:
1. Use a Password Manager
Stop reusing passwords. Stop storing them in browsers. Use something like 1Password and create strong, unique credentials for every service.
Yes, it feels uncomfortable to put everything in one place. No, that doesn’t make it less secure than using “Summer2024!” everywhere.
2. Turn On Multi-Factor Authentication (Especially for Email)
Email is the key to the kingdom. Most account compromises start with email access.
Turn on MFA for:
Practice management software
Everywhere.
3. Stop Sharing Accounts
Account sharing destroys audit trails and makes remediation exponentially harder.
If something goes wrong, you need to know who accessed what. Shared logins eliminate that visibility and increase your ethical exposure.
The Bigger Picture
AI isn’t optional anymore. It’s becoming foundational to how legal work gets done.
But AI without security is just acceleration toward risk.
The firms that will win in this next phase aren’t the ones chasing every shiny tool. They’re the ones building layered defenses, choosing partners who treat security as a discipline (not a certification), and tightening up their own internal practices.
You don’t need to understand quantum encryption.
You do need to stop using the same password for everything.
And you need to demand that your technology vendors think about security at least as obsessively as you think about your clients.
Because in the end, that’s what this is about: protecting trust in a profession that depends on it.