AI Governance in (and Beyond) Privacy: Regulatory Tensions in Automated Decision-Making, the Digital Authenticity Crisis, and Restrictions on Professional Use

I. Introduction

Artificial intelligence has fundamentally altered the relationship between individuals, organizations, and their data — and in doing so, has created one of the most complex and rapidly evolving compliance landscapes that privacy counsel have ever been asked to navigate. What was once a relatively straightforward concern — who collects your information and how they use it — has evolved into something far more consequential.

Today, AI systems do not merely collect and process personal data; they weaponize it, replicate it, and transform it into entirely new constructs that can impersonate, deceive, and harm in ways that existing legal frameworks were never designed to address. Data is now used to generate recommendations, shape consequential decisions, simulate identity, produce persuasive but false content, and alter the way professionals exercise judgment. In this new environment, privacy remains central, but “privacy by design” alone is no longer sufficient to encompass all data governance needs.

What is emerging instead is a broader law of AI accountability developing along at least three distinct but overlapping fronts. One front concerns automated decision-making and the use of AI to make or materially influence decisions in areas such as employment, housing, insurance, healthcare, and education. A second front concerns synthetic content, impersonation, and the growing instability of digital authenticity, which is fueling a proliferation of theft and eroding public trust. A third front concerns profession-specific governance, including the rules that govern how attorneys may use AI consistent with duties of confidentiality, competence, and supervision. Recent developments across U.S. jurisdictions show that this body of law is taking shape not through one comprehensive statute, but through a fragmented mix of privacy regulation, AI-specific regulations, court rules, ethics opinions, and institutional guidance.

The federal government has taken note, issuing a Legislative Framework in March 2026 that urges Congress to create minimum federal standards that would preempt the emerging patchwork of state laws. The Framework follows President Trump’s December 2025 Executive Order, which revoked President Biden’s prior Executive Order directing federal agencies to regulate generative AI. There has been no immediate action, however, creating continued confusion and a need for privacy counsel to address the full impact of AI on their companies through comprehensive, privately driven frameworks.

II. The Expanding State Law Patchwork on Automated Decision-Making and AI in the Consumer and Employee Context

In the absence of a comprehensive federal framework, states have moved to regulate automated decision-making systems that may affect consequential outcomes for individuals in areas such as employment, housing, credit, healthcare, insurance, and education. Lawmakers and regulators have focused on familiar themes of privacy governance: transparency, notice, and meaningful opportunities to contest or opt out of machine-assisted decisions, as well as additional concepts such as bias and human review. These frameworks reflect growing concern not simply with the collection of personal data, but with the use of data-driven systems to make or materially influence decisions that affect people’s lives.

The result is a rapidly expanding, deeply fragmented patchwork of state laws that varies significantly in scope, definitions, compliance obligations, and enforcement mechanisms. California’s landmark automated decisionmaking technology (ADMT) regulations under the California Consumer Privacy Act (CCPA), Colorado’s first-of-its-kind AI consumer protection act, and a growing set of employment-focused AI laws in jurisdictions including New York City and Illinois represent only the leading edge of what is becoming a nationwide wave of state-level AI regulation. For organizations deploying AI tools at scale, the challenge is not merely understanding each law in isolation but mapping the overlapping obligations they collectively impose, often without the benefit of final implementing rules or regulatory guidance.

A. California’s CCPA ADMT Regulations

In September 2025, the California Office of Administrative Law approved the California Privacy Protection Agency’s (CPPA) regulations on ADMT, risk assessments, and cybersecurity audits, marking the most significant expansion of the CCPA since its enactment. The regulations layer enterprise-level accountability requirements onto California’s existing consumer privacy framework. They define ADMT as any technology that processes personal information and uses computation to substantially replace human decision-making, with obligations triggered when ADMT drives a “significant decision” affecting a consumer’s finances, housing, education, employment, or healthcare.

By January 1, 2027, businesses using ADMT for significant decisions must provide pre-use notices describing the specific purpose, logic, and type of output the ADMT generates, along with information about the consumer’s right to opt out. Under the final rules, these ADMT disclosures can be integrated into a business’s existing California Notice at Collection – the privacy notice that CCPA-covered businesses are already required to provide to consumers at or before the point of data collection – rather than issued as standalone notices. Businesses must also offer opt-out mechanisms (or, where exceptions apply, a human appeal process) and respond to consumer requests about how ADMT was used in decisions affecting them.

These consumer-facing obligations are paired with a separate risk assessment requirement. Businesses engaged in high-risk processing activities, including but not limited to ADMT use for significant decisions, must conduct detailed written risk assessments and submit attestations and summary information to the CPPA on a phased schedule beginning April 1, 2028, with tiered deadlines based on annual revenue through 2030.

To prepare for these requirements, businesses should begin by inventorying all AI and ADMT tools currently in use and identifying what each tool does, what data it processes, and whether its output drives or materially influences a “significant decision.” Where it does, organizations must evaluate whether existing human review protocols satisfy the regulations’ “meaningful human involvement” standard, which requires the reviewer to understand, independently evaluate, and retain the authority to override the ADMT’s output. This is a higher bar than a rubber-stamp review, and many organizations may find that their current workflows fall short. Service provider agreements should also be updated to require cooperation with ADMT compliance, risk assessments, and cybersecurity audits.
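
By way of illustration, the following is a minimal sketch of how a compliance team might encode that three-part standard when auditing a review workflow. The data model, field names, and example tool are hypothetical and are not drawn from the regulations’ text:

```python
from dataclasses import dataclass

@dataclass
class ReviewWorkflow:
    """Hypothetical record of how humans review one ADMT's output."""
    tool_name: str
    reviewer_understands_logic: bool        # reviewer knows how the tool reaches its output
    reviewer_evaluates_independently: bool  # reviewer weighs the output against other evidence
    reviewer_can_override: bool             # reviewer has actual authority to reverse the output

def has_meaningful_human_involvement(wf: ReviewWorkflow) -> bool:
    """All three prongs must hold; a rubber-stamp review fails."""
    return (wf.reviewer_understands_logic
            and wf.reviewer_evaluates_independently
            and wf.reviewer_can_override)

# Example: recruiters can see a screening score but cannot reverse it.
screening = ReviewWorkflow("resume-screener", True, True, False)
print(has_meaningful_human_involvement(screening))  # False: this workflow likely falls short
```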

B. Colorado’s Comprehensive AI Consumer Protection Framework

The Colorado AI Act, signed in May 2024, is the first state law to establish a comprehensive, risk-based regulatory framework for artificial intelligence systems. While California’s ADMT regulations operate as an extension of an existing privacy law, granting consumers transparency and opt-out rights when AI drives certain decisions, Colorado’s framework is rooted in consumer protection and anti-discrimination principles. The Act targets “algorithmic discrimination” across consequential decisions in employment, housing, lending, healthcare, education, and insurance, and it regulates the full AI supply chain, imposing a “reasonable care” standard on both “developers” (the companies that build AI systems) and “deployers” (the businesses that use them). Developers must provide deployers with documentation sufficient to complete impact assessments and must publicly disclose known risks. Deployers must implement risk management programs, conduct impact assessments, provide consumers with notice and opt-out mechanisms, and report discovered algorithmic discrimination to the Colorado Attorney General within 90 days.

The Act has had a turbulent path to implementation, driven largely by industry concern that it could chill AI adoption and disproportionately affect smaller businesses, concerns amplified by the December 2025 Executive Order, which specifically identified Colorado’s law as an example of onerous state regulation. Previous attempts to amend the law culminated in a special session that produced only a five-month delay to June 30, 2026. A Governor’s working group, convened in part to address mounting pressure from both industry stakeholders and the federal administration, released a proposed amendment bill in March 2026, but as of this writing no substantive changes have been enacted, and the June 2026 effective date stands. The Colorado AG has not yet promulgated implementing rules, leaving organizations without detailed guidance on how to operationalize many of the Act’s requirements. Organizations deploying AI systems that touch Colorado residents should nonetheless begin inventorying high-risk AI tools and aligning governance frameworks with nationally recognized standards, such as the NIST AI Risk Management Framework, which the Act recognizes as part of demonstrating reasonable care.

C. Employment-Focused AI Laws: A Targeted but Growing Trend

Alongside these broader regulatory frameworks, a set of laws specifically targeting AI’s use in employment decisions has been gaining momentum as AI-powered hiring, performance management, and workforce analytics tools become more prevalent. New York City’s Local Law 144, in effect since 2023, requires employers using automated employment decision tools to conduct annual bias audits and provide notice to candidates and employees. Illinois HB 3773, effective January 1, 2026, amends the Illinois Human Rights Act to make it a civil rights violation to use AI that has a discriminatory effect on employees with respect to recruitment, hiring, promotion, discharge, or the terms and conditions of employment. The Illinois law also prohibits the use of ZIP codes as a proxy for protected classes and imposes notice obligations on employers deploying AI in these contexts. The Illinois Department of Human Rights has circulated draft implementing rules, but final guidance remains pending.

A critical feature of these employment-focused laws is their reliance on a disparate impact framework, meaning that an employer can be liable if an AI tool produces discriminatory outcomes even if the employer had no discriminatory purpose in deploying it. This tracks traditional employment discrimination law, but applying it to third-party AI systems raises practical questions about how employers should audit, validate, and monitor tools they did not build. A single AI recruiting or performance management tool deployed across multiple jurisdictions can trigger distinct notice, audit, impact assessment, and opt-out obligations in each.

III. Synthetic Content, Digital Authenticity, and Corporate Exposure

Generative AI has also created acute concerns around synthetic content, impersonation, and the broader erosion of digital authenticity, problems that affect individuals and businesses alike. GenAI has made it possible to fabricate voices, likenesses, communications, websites, and other indicia of identity with remarkable realism and at trivial cost. This environment creates significant new risks for businesses that go well beyond traditional data privacy and compliance concerns.

Data misuse may have occurred in training the generative AI models used to create these so-called “deepfakes” in the first place. But that is a relatively minor problem for privacy counsel compared with the proliferation of synthetic content created for the express purpose of further data theft. The creation of these instruments of deception goes largely unchecked by existing privacy frameworks, which focus on the protection of existing data. Much synthetic content online is generated from publicly available data that was obtained legitimately. Moreover, genAI models can memorize and reproduce patterns without retaining identifiable data in a conventional sense.

In other words, synthetic, genAI-driven outputs are problematic because of the new data they create, and how that data is used, in turn, to harvest proprietary information and assets. At the same time, deepfakes magnify these harms by creating ripple effects in the marketplace in the form of consumer confusion and erosion of trust. Existing legal frameworks only address these issues in fragments, providing post hoc relief in certain limited contexts without proactively regulating the root of the problem.

The stakes are especially high for intangible corporate assets. Trade secrets are uniquely vulnerable because they depend entirely on confidentiality and controlled access. Unlike patents or copyrights, trade secrets lose protection the moment secrecy is compromised, and deepfake-induced disclosures raise urgent questions about whether an organization’s security controls satisfy the “reasonable measures” standard required to maintain that protection. When deepfakes are used to impersonate executives, induce employees to disclose sensitive data, or authorize fraudulent transactions, the resulting damage will likely extend beyond the violation of any individual’s rights in name, image, and likeness. It can include permanent loss of trade secret status, erosion of IP value, regulatory exposure, and significant litigation risk. Deepfake breaches targeting source code repositories can diminish the commercial value of copyrighted software, compromise proprietary AI training data, and jeopardize patent protection for inventions not yet reduced to a filing. During mergers and acquisitions, such breaches can trigger due diligence disclosure obligations, reduce company valuation, and in extreme cases provide grounds for an acquirer to claim a material adverse event. A single deepfake incident can produce irreversible legal harm across multiple contexts simultaneously.

Harms are magnified even more when considered through the lens of trademark infringement. GenAI has enabled the creation of convincing counterfeit digital presences — websites, social media profiles, and customer service interfaces — that mimic the look, feel, and voice of legitimate businesses and individuals with remarkable fidelity. Bad actors deploy these synthetic environments as infrastructure for sophisticated fraud schemes involving fake investment platforms bearing the logos of established financial institutions and counterfeit e-commerce storefronts replicating trusted retailers.

Duplicating branded content to lend a gloss of legitimacy to what is actually fraudulent material blurs the perception of reality, resulting not only in financial harm and the loss of proprietary intangible assets but also in misinformation, consumer confusion, reputational injury, and the degradation of trust in digital environments themselves. AI-powered chatbots impersonating real company representatives to harvest credentials, financial information, and valuable intangible assets further undermine consumer trust in the marketplace.

Unfortunately, existing privacy, intellectual property, consumer protection, and unfair competition doctrines provide only partial and uneven remedies, exposing the limits of legal regimes built for a world in which falsity was harder to manufacture and easier to detect.

The federal Lanham Act, which governs trademark infringement, unfair competition, and false advertising, offers remedies to redress some of these harms. But often, irreparable harm to reputation has occurred before fraudulent material is discovered, and even after discovery, it may be difficult to take action against an anonymous infringer hiding online. Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices in commerce, is similarly reactive and increasingly constrained by the courts. Sector-specific data privacy laws, such as HIPAA, address only narrow contexts that largely exclude AI-generated harms. The American Data Privacy and Protection Act came closest to establishing a federal framework regarding digital content forgeries, but stalled in 2022 over preemption disputes. The federal TAKE IT DOWN Act became law in 2025, but is limited to nonconsensual intimate images, including AI-generated deepfakes.

State right-of-publicity laws reach some deepfake-specific harms related to the unauthorized use of name, image, or likeness, but most predate generative AI and vary widely in scope and enforceability. Tennessee’s ELVIS Act, which explicitly covers AI-generated replicas of voice and likeness, is a notable but isolated exception. The result is a regulatory vacuum that leaves organizations piecing together incomplete and inconsistent remedies against rapidly escalating harm.

IV. Professional Responsibility and Client Data Protection in the Era of Artificial Intelligence

Yet another area of concern involves professional responsibility and other regulated uses of AI, particularly where lawyers and similarly situated professionals deploy AI tools in the course of handling confidential and private personal information, advising clients, or exercising judgment on matters with legal or ethical significance. The compliance complexity described in this article creates not only significant obligations for clients but also heightened and unavoidable responsibilities for the attorneys advising them.

The implementation of generative AI in the legal field has led to a rise in inappropriate uses, as evidenced by a rapidly growing number of court sanctions. Against this backdrop, some states, among them California and New York, are considering legislation to regulate the use of generative AI by the legal profession. While a number of states have already adopted a variety of court rules, ethics opinions, and standing orders on the use of AI, the introduction of formal bills reflects the importance of these considerations. The proposed legislation echoes the themes found in the other AI and data privacy regulations discussed herein.

A. California Senate Bill 574 (Introduced February 20, 2025)

This bill would obligate an attorney who uses generative artificial intelligence to ensure: (1) that confidential, personal identifying, or other nonpublic information is not entered into a public generative AI system; (2) that the use of generative AI does not result in discrimination against a protected class; and (3) that reasonable steps are taken to verify the accuracy of generative AI material and to correct any erroneous or hallucinated output in any material used by the attorney. It further specifically prohibits arbitrators from delegating any part of their decision-making process to a generative AI tool.

SB 574 passed the California Senate earlier this year and will now proceed to Assembly committees for hearings and a potential floor vote before lawmakers adjourn in August.

B. NY State Senate Bill 2025-S2698 / Assembly Bill A8546 (2025-2026 Session)

This bill would require a certification to be submitted with any court filing produced using generative AI. The certification must state that a human has reviewed the source material and confirmed that any artificially generated content is accurate. The bill has not substantively advanced in several months, so it is unclear whether it will receive a full floor vote before the end of the legislative session.

C. A Patchwork of Court Rules, Ethics Opinions, and Standing Orders Attempts to Address Client Confidentiality Challenges in the Age of AI

Numerous courts across a variety of states – from California to Florida to New York, as well as many in between – have issued guidance with respect to the use of generative AI. While the specific requirements vary between jurisdictions, certain themes recur, chief among them confidentiality and the protection of client data.

As such, attorneys must confront the implications of their own AI tool usage. A lawyer who inputs clients’ personal or confidential information, privileged communications, or work product into an unsecured generative AI platform may violate duties of confidentiality regardless of intent. Even closed AI environments may present risks, depending on their setup. Law firms and legal departments that have not yet developed acceptable use policies governing these tools are exposed to risks with potentially serious consequences.

V. Privacy as a Floor, Not a Ceiling – Developing an AI Governance Strategy in an Evolving Legal and Technological Landscape

Taken together, these three fronts point to a common imperative: organizations should resist treating these requirements as isolated compliance exercises. Instead, they should build proactive, multi-layered AI governance programs capable of addressing the full spectrum of legal risk that generative AI presents.

AI Governance Committee. The trajectory of U.S. AI regulation is clear, even if details are still evolving. Organizations should start with governance by creating a cross-functional AI committee spanning legal, compliance, HR, IT, product, and procurement. Its first priority should be a complete inventory of all AI and AI-enabled tools: who owns them, how they are used, what data they process, and what decisions they affect. With that foundation, organizations can assess applicable regulatory regimes, identify gaps in notice, opt-out, and human oversight, and begin building the risk assessments and documentation increasingly required by state laws.
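
As a concrete starting point, the following is a minimal sketch of the kind of inventory record such a committee might maintain. The schema, decision categories, and tool names are illustrative assumptions, not requirements drawn from any statute:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIToolRecord:
    """Illustrative inventory entry for one AI or AI-enabled tool."""
    name: str
    business_owner: str               # accountable person or team
    vendor: str                       # "internal" if built in-house
    data_categories: List[str]        # e.g., "resume data", "consumer financial data"
    decisions_influenced: List[str]   # e.g., "hiring", "lending", or "none"
    jurisdictions: List[str]          # where affected individuals reside

# Decision areas that recur across the state laws discussed above.
SIGNIFICANT_DECISIONS = {"hiring", "housing", "lending", "healthcare", "education", "insurance"}

def needs_legal_review(tool: AIToolRecord) -> bool:
    """Flag tools whose outputs touch consequential decision areas."""
    return bool(SIGNIFICANT_DECISIONS & set(tool.decisions_influenced))

inventory = [
    AIToolRecord("resume-screener", "HR Ops", "VendorCo",
                 ["resume data"], ["hiring"], ["CA", "CO", "IL"]),
    AIToolRecord("marketing-copy-assistant", "Marketing", "internal",
                 ["none"], ["none"], ["US"]),
]
for tool in inventory:
    status = "assess and document" if needs_legal_review(tool) else "monitor"
    print(f"{tool.name}: {status}")
```

In practice, each flagged entry would feed into the notice, opt-out, and risk assessment workflows described in Part II.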

Digital Authentication & Monitoring. Digital authentication and both internal and external monitoring are foundational. Content provenance standards, such as the C2PA specification, provide guidelines for authenticating brand assets, and digital monitoring tools can scan the web, social media, and domain registries for unauthorized trademark use. Rapid response protocols – including pre-approved Digital Millennium Copyright Act takedown procedures, Uniform Domain Name Dispute-Resolution Policy complaint processes, and Lanham Act enforcement strategies – should be established before an incident occurs to mitigate harm.
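
For illustration only, here is a toy sketch of the kind of lookalike-domain screen a monitoring tool might run against newly registered domains. Commercial services use far richer signals (WHOIS data, content similarity, homoglyph detection); the brand label and domain feed here are placeholders:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

BRAND = "examplebank"  # placeholder brand label

def looks_suspicious(domain: str, threshold: int = 2) -> bool:
    """Flag registrations that embed or closely resemble the brand label."""
    label = domain.split(".")[0]
    if label == BRAND:
        return False
    return BRAND in label or edit_distance(label, BRAND) <= threshold

# In practice the feed would come from a zone file or registrar-monitoring service.
for domain in ["examp1ebank.com", "examplebank-support.net", "flowershop.org"]:
    if looks_suspicious(domain):
        print("review for possible UDRP complaint or takedown:", domain)
```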

Internal security protocols must be AI-aware, incorporating data loss prevention solutions that monitor and block transmission of sensitive information to unauthorized AI platforms and layered access governance frameworks that reduce the risk of data exfiltration. Where misappropriation is suspected, the Defend Trade Secrets Act’s ex parte seizure provisions provide a powerful but time-sensitive emergency remedy requiring immediate legal mobilization.
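
To make the DLP concept concrete, here is a toy sketch of an egress check that screens outbound text for sensitive patterns before it reaches an unapproved AI endpoint. The patterns, hostnames, and policy logic are illustrative stand-ins for a real policy engine, which would rely on fingerprinting, exact-match dictionaries, and classifiers:

```python
import re

# Illustrative detection patterns only.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidentiality-marking": re.compile(
        r"\b(confidential|attorney[- ]client privileged)\b", re.IGNORECASE),
}
UNAPPROVED_AI_HOSTS = {"public-llm.example.com"}  # placeholder hostname

def egress_allowed(destination_host: str, payload: str) -> bool:
    """Block payloads matching a sensitive pattern bound for an unapproved AI host."""
    if destination_host not in UNAPPROVED_AI_HOSTS:
        return True
    return not any(p.search(payload) for p in SENSITIVE_PATTERNS.values())

print(egress_allowed("public-llm.example.com",
                     "Summarize these CONFIDENTIAL merger terms ..."))   # False
print(egress_allowed("internal-llm.example.com", "Draft a meeting agenda"))  # True
```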

Employee Training. Employee training must address not only traditional privacy and security risks but also the evolving threats posed by deepfakes and generative AI. Employees must understand that inputting proprietary information into unauthorized AI platforms creates serious risks of inadvertent disclosure, model training on confidential content, and breach of organizational confidentiality obligations. They must also be trained to spot social engineering and phishing campaigns that use increasingly realistic deepfakes to seek access to corporate assets.

Vendor Agreements. Vendor agreements must be updated to prohibit AI vendors from using submitted content for model training, require data deletion upon request, and include indemnification provisions addressing liability for inadvertent third-party IP infringement.

VI. Conclusion

The modern law of artificial intelligence is developing in fragments. Privacy statutes, consumer protection rules, employment discrimination frameworks, professional conduct obligations, and intellectual property doctrines are all evolving at different speeds. That fragmentation can feel inefficient, even chaotic. Yet it is also producing a clear structural consequence. Organizations cannot rely on any single legal regime, whether privacy, cybersecurity, or professional ethics, to define the boundaries of responsible AI use. Compliance has become a cross-cutting exercise that touches technology, operations, legal judgment, and organizational culture at once.

What emerges from this landscape is a practical imperative: enterprise AI governance must be broader than privacy and more concrete than abstract ethics. Privacy law still provides the vocabulary of notice, consent, and data protection, and professional responsibility rules still supply the language of competence, confidentiality, and candor. But neither framework alone fully addresses the risks created by automated decision systems, synthetic content, and AI-assisted professional work. The common thread across jurisdictions is not a new doctrine, but a new expectation. Organizations are expected to understand what their AI systems do, where they obtain data, how their outputs are used, and who remains accountable when those outputs shape decisions, communications, or representations to others.

In that sense, fragmentation in the law is not merely a compliance burden; it is a design signal. Regulators are all pointing toward the same destination from different directions. They are insisting on governance structures that inventory AI tools, classify risk, verify outputs, protect confidential information, supervise vendors and personnel, and preserve meaningful human oversight over consequential decisions. These expectations are beginning to converge even where formal rules do not.

The organizations best positioned for the next phase of AI regulation will not be those that chase individual statutes or react to enforcement actions after the fact. They will be those that build durable governance systems capable of adapting as legal requirements evolve. In practice, that means treating AI not as a discrete technology problem or a narrow privacy issue, but as an enterprise risk domain that demands coordination across legal, compliance, technology, and operational leadership. An ideal framework will look beyond internal compliance measures and will also account for the reality of external threats posed by genAI-produced content. Ultimately, the central question of AI governance is not whether artificial intelligence will be used. That question has already been answered. The real question is whether institutions can preserve accountability, professional judgment, and trust in an environment where decision-making is increasingly automated, identity can be simulated, and information can be generated at scale. The law is moving toward that answer in pieces. Enterprise governance is the mechanism that allows those pieces to function as a coherent whole.