In 1979, an IBM training presentation famously observed that a computer can never be held accountable, and therefore must never make a management decision.1
Assembly Bill 316 eliminates a defense that some AI developers have attempted to assert: that the AI system, not the company, caused the harm.2 The law reflects a straightforward corollary to that 1979 principle: if a computer cannot be held accountable, accountability must rest with the people and organizations that deploy it.
What the Law Prohibits
AB 316 adds a single provision to the California Civil Code: in a civil action alleging that an artificial intelligence system caused harm, a defendant that developed, modified, or used the system may not assert as a defense that the AI autonomously caused the harm.3
The prohibition is narrow but targeted. It forecloses arguments that an AI system's independent decision-making absolves the humans and organizations behind it. The provision responds to instances where defendants have attempted to characterize AI tools as separate actors—most notably, Air Canada's attempt to argue that its customer-service chatbot was a separate entity responsible for its own actions.4
What Remains Available
The law explicitly preserves other defenses. Defendants may still present evidence relevant to causation, foreseeability, and comparative fault.5 A plaintiff must still establish that the AI system caused the alleged harm—and that the harm was a foreseeable consequence of the defendant's conduct.
This distinction matters. AB 316 does not create strict liability for AI-related harms. It removes one specific argument—"the AI did it autonomously"—while leaving intact the traditional framework for establishing (or contesting) liability. Companies that implement reasonable safeguards, conduct appropriate testing, and maintain adequate documentation retain the ability to demonstrate they acted appropriately.
The Supply Chain Question
AB 316 applies to anyone who "developed, modified, or used" an AI system. This language encompasses the entire AI supply chain: the foundation model developer, the company that fine-tunes or customizes the model, the integrator that builds it into a product, and the enterprise that deploys it.
For organizations using third-party AI tools, this breadth has contract implications. Indemnification provisions, limitation of liability clauses, and warranty terms may warrant review. If an AI vendor's tool causes harm and the downstream deployer faces suit, AB 316 prevents the deployer from arguing that the AI acted on its own—even if the deployer had limited visibility into how the model operates.
Organizations may wish to consider how their vendor agreements allocate risk in this environment. Key questions include whether the vendor provides adequate documentation of system behavior, what indemnification obligations exist for AI-related claims, and how limitation of liability provisions interact with the inability to assert an autonomous-harm defense.
Documentation as Defense
If the "AI did it" argument is unavailable, the remaining defenses—causation, foreseeability, comparative fault—become more important. Demonstrating reasonable care in AI deployment may require documentation that many organizations do not currently maintain: records of testing and validation, monitoring logs showing system performance, evidence of human oversight in high-stakes decisions, and audit trails for model updates.
This documentation serves dual purposes. It supports compliance with emerging AI transparency requirements, and it supplies the evidentiary record a defendant will need to contest causation, foreseeability, and comparative fault if litigation arises.
Looking Ahead
For in-house counsel, the practical response is the same work that supports AI governance more broadly—vendor due diligence, deployment documentation, and clear allocation of responsibility in contracts. The 1979 principle endures, just inverted: because a computer can never be held accountable, those who deploy it may find that they are.
Footnotes
1 The principle traces to a 1979 IBM internal training presentation: "A computer can never be held accountable. Therefore a computer must never make a management decision."
2 Cal. Civ. Code § 1714.46, added by A.B. 316, 2025-2026 Reg. Sess. (Cal. 2025), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB316.
3 Id. at subd. (a).
4 Moffatt v. Air Canada, 2024 BCCRT 149 (Can.).
5 Cal. Civ. Code § 1714.46, subd. (b).
© Mondaq Ltd, 2026 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com, source