6 minute read
Updated Jan 30, 2026
SB 574 wouldn’t ban lawyers from using AI, but would clarify that existing professional duties apply when generative AI tools are used in legal practice, focusing on four key areas:
Protecting client confidentiality by prohibiting lawyers from entering confidential or nonpublic information into public AI systems.
Requiring lawyers to verify the accuracy of AI-generated content, correct hallucinated or erroneous output, and remain responsible for work produced with AI or by others on their behalf.
Preventing discriminatory or biased outcomes resulting from AI use that could unlawfully impact protected groups.
Requiring lawyers to consider whether disclosure is appropriate when AI is used to create public-facing content.
California’s Senate has passed SB 574, legislation that would transform existing bar guidance on AI into enforceable statutory requirements. The bill reflects a growing awareness among lawmakers that the rapid adoption of generative AI poses distinct risks in regulated professions such as law.
If the bill becomes law, lawyers could face sanctions for citing AI-hallucinated cases or mishandling client information through public AI systems.
The bill, introduced by State Senator Tom Umberg, responds to a nationwide concern: lawyers filing court documents containing fictitious case citations generated by AI tools that lack built-in verification and citation-checking workflows. According to a Bloomberg Law analysis, these AI-generated citation errors are appearing with increasing frequency in court filings across the country.
SB 574 will now advance to Assembly committees for hearings and a potential vote before lawmakers adjourn in late August. Here’s what you need to know.
What SB 574 requires: Four considerations for lawyers using AI
The bill doesn’t ban AI use in legal practice. Instead, it clarifies that existing professional obligations (confidentiality, competence, accuracy, and fairness) still apply when using AI tools.
The bill defines generative artificial intelligence as an “artificial intelligence system that can generate derived synthetic content, including text, images, video, and audio that emulates the structure and characteristics of the system’s training data.”
1. Client confidentiality and AI
SB 574 would prohibit lawyers from entering confidential, personally identifying, or nonpublic information into public AI systems.
What the bill says: Lawyers must ensure that “confidential, personal identifying, or other nonpublic information is not entered into a public generative artificial intelligence system.”
The bill doesn’t define what “public generative AI systems” are, but it does define personal identifying information to include:
Driver’s license numbers.
Dates of birth.
Social Security numbers.
National Crime Information and Criminal Identification numbers.
Addresses and phone numbers of parties, victims, witnesses, and court personnel.
Medical or psychiatric records.
Financial records.
Account numbers.
Any other content sealed by court order or deemed confidential by court rule or statute.
In practice, this means lawyers can’t copy-paste client emails, case files, or discovery materials into public AI platforms that don’t support attorney-client privilege.
2. AI citation verification
The bill requires lawyers to verify and correct AI-generated content before using it.
What the bill says: Lawyers must take reasonable steps to:
“Verify the accuracy of generative artificial intelligence material, including any material prepared on their behalf by others.”
“Correct any erroneous or hallucinated output in any material used by the lawyer.”
“Remove any biased, offensive, or harmful content in any generative artificial intelligence material used, including any material prepared on their behalf by others.”
This means lawyers must verify factual accuracy, identify hallucinations, maintain quality control over all AI-generated work, and remove problematic content, including work done by others on their behalf. AI may assist with the work, but responsibility remains with the lawyer.
3. Preventing AI bias in legal practice
The bill requires that AI use can’t result in discrimination against protected groups.
What the bill says: Ensure “the use of generative artificial intelligence does not unlawfully discriminate against or disparately impact individuals or communities based on age, ancestry, color, ethnicity, gender, gender expression, gender identity, genetic information, marital status, medical condition, military or veteran status, national origin, physical or mental disability, political affiliation, race, religion, sex, sexual orientation, socioeconomic status, and any other classification protected by federal or state law.”
This provision addresses concerns that AI systems can perpetuate biases from their underlying training data. Under SB 574, lawyers could be held accountable if they use AI known to produce discriminatory outcomes.
4. AI disclosure considerations
Lawyers must consider whether to disclose AI use when creating content for the public. While disclosure isn’t mandatory in all circumstances, firms may want to develop clear policies on when and how to inform the public about AI-generated content.
What the bill says: “The lawyer considers whether to disclose the use of generative artificial intelligence if it is used to create content provided to the public.”
Why California’s AI bill matters for lawyers nationwide
California isn’t alone in wrestling with AI’s role in legal practice. The bill acknowledges what many in the profession already know: not all AI tools are suitable for legal work. The California Bar has already issued guidance on AI use, but SB 574 would codify those principles into enforceable law, shifting from “should” to “must.”
Other states are likely to follow suit. The 2025 Legal Trends Report found that 79% of legal professionals use AI, and nearly half of them are using generic AI tools such as ChatGPT, Gemini, and Claude. As adoption grows, so does the need for practical guardrails that help firms use AI without compromising professional responsibilities, protecting both lawyers and clients.
How to use AI responsibly in legal practice
For many firms, the question now is what responsible AI use actually looks like day to day, and how to make intentional choices about where and how AI fits into their legal work.
If you’re evaluating your current AI setup or considering new tools, the following considerations can help guide safer, more practical adoption, regardless of whether SB 574 is enacted.
Use legal AI grounded in the law
General-purpose AI tools, while useful for brainstorming or general research, generate responses based on patterns in their training data rather than retrieving information from verified legal databases. That’s why hallucinated cases can appear in court filings when lawyers fail to verify citations. The AI doesn’t know it’s inventing citations because it doesn’t actually check case law.
Legal AI platforms such as Clio Work are built on a different foundation. They’re grounded in authenticated legal databases, with access to primary and secondary law in relevant jurisdictions. This significantly reduces the risk of hallucinations, which means lawyers can conduct research more efficiently and get results they can trust.
Choose AI tools that support professional oversight
Meeting verification obligations requires that lawyers have the means to review and confirm AI-generated work. With general-purpose AI, verification often means first figuring out whether a citation or assertion is real at all, tracking down sources from scratch and untangling potential hallucinations. That process is time-consuming and increases the risk that errors slip through.
AI built for legal workflows closes that gap. By providing direct access to verified source documents and enabling side-by-side comparison, legal AI shifts verification from a hunt for missing sources to a straightforward review, making it easier to meet oversight requirements without slowing down legal work.
Protect client information with AI built for confidentiality
Consumer AI tools, especially free or publicly available versions, operate under standard terms of service. This means your clients’ confidential data may be used to improve these models, as there are no contractual guarantees exempting client information. Ultimately, these platforms weren’t designed for attorney-client privilege because they weren’t designed for lawyers in the first place.
Instead, firms should look for AI platforms that offer contractual guarantees that data won’t be retained or used to train models, encryption at rest and in transit, SOC 2 or comparable security certifications, and integration with practice management systems.
When AI is built into legal software, client information stays within a controlled ecosystem under the lawyer’s control, and that data isn’t retained by the AI, nor is it shared or used for training. Confidentiality becomes a foundational feature.
Develop firm-wide AI policies
Selecting the right tools is only part of responsible AI adoption. Firms also need clear policies that define when and how AI can be used.
An effective AI policy should address which tools are approved for different types of work, what information can and can’t be entered into AI systems, and how to verify AI-generated content before use. It should also establish training requirements so everyone on the team understands both the capabilities and limitations of your AI tools.
Firms that prepare now, by evaluating current AI tools, documenting verification processes, and establishing clear policies, will find themselves better positioned as regulations like SB 574 continue to evolve.
Adopt AI responsibly with legal AI
Even if SB 574 doesn’t become law, the bill reflects a growing recognition among lawmakers that AI use in legal practice requires clear safeguards. The bill would codify requirements for verification, confidentiality protections when using public AI systems, and professional oversight.
While AI offers real benefits, the risks vary significantly depending on the type of tool being used. Public, consumer-focused AI platforms may be helpful for general tasks, but they can pose serious challenges for lawyers when client confidentiality and accuracy are at stake.
Legal-specific AI tools like Clio Work, in contrast, are built for the realities of legal practice. They’re designed to support attorney-client privilege, rely on verified sources, and support the level of review and accountability required of lawyers.
See how Clio Work helps lawyers use AI responsibly. Explore AI built for legal practice, with verified sources, built-in review tools, and safeguards designed to protect client confidentiality.
Book a Clio demo