
Synthetic Stars, Real Stakes: What Every Brand Must Know About the AI Likeness Rules Reshaping Advertising Law

Adrian Thomas

14 Apr 2026

5 min read


Your brand just launched a campaign featuring a stunning digital influencer — charismatic, on-message, and available 24/7. But here’s the question your legal team is asking: Did you get permission? And did you tell anyone she isn’t real?

The age of synthetic spokespeople is no longer a futuristic concept — it’s a present-day marketing reality. AI-generated influencers, digital replicas of real celebrities, and algorithmically cloned voices are now woven into mainstream advertising campaigns. And where marketing innovation goes, the law is never far behind.

In 2026, a wave of new state laws, FTC enforcement guidance, and platform-level policies has fundamentally changed the compliance calculus for any brand using AI-generated or AI-altered human likenesses in advertising. The message from regulators is clear: transparency is no longer optional. And the consequences of getting it wrong — fines, litigation, and serious brand damage — are very real.

What Exactly Is an AI “Likeness”?

Before diving into the rules, it helps to understand precisely what triggers them. Under the emerging legal framework, a “likeness” is far broader than a photograph or a video clip. It encompasses a person’s face, body, voice, distinctive gestures, signature phrasing, and even a digital twin trained on that person’s public content.

This matters for brands working with AI tools, because even a synthetically generated persona that was inspired by or closely resembles a real individual — without ever naming them — can potentially trigger disclosure obligations and right-of-publicity claims.

KEY LEGAL DEFINITION: A “likeness” under current advertising law includes any face, voice, body, gestures, style transfer, or hybrid AI edit that a reasonable viewer could associate with — or mistake for — a real, identifiable person. Once a virtual persona is based on, trained on, or styled after an identifiable individual, both disclosure obligations and rights-clearance analysis are triggered.

The Legal Landscape in 2026

Federal: FTC Enforcement

At the federal level, the Federal Trade Commission has made its position unambiguous. Both sponsorship and AI involvement in content must be clearly disclosed. Virtual influencers are no longer exempt from endorsement rules, and the FTC’s rationale is straightforward: audiences have a right to know when content is both paid for and artificially generated. The agency views undisclosed synthetic endorsements as a deceptive practice under Section 5 of the FTC Act.

Critically, brands can no longer shift responsibility entirely onto creators or agencies. Influencer advertising rules now assign shared liability — brands, agencies, and influencers are all responsible for compliance, and if violations occur, regulators will examine what the brand did to prevent them.

“If AI changes who appears to speak, endorse, or perform — disclose it plainly, and secure permission before publishing.” — Emerging 2026 Standard, FTC & State Law Guidance

New York: A8887-B (Effective June 9, 2026)

New York’s law is one of the most specific and immediately actionable new rules brands must track. Effective June 9, 2026, it requires “conspicuous” labeling for any advertisement featuring a synthetic performer — that is, an AI-generated human likeness not depicting a real, identified person. The obligation runs to whoever produces or creates the content, meaning brands, agencies, and production companies are all in scope.

California: AB 1836 & AB 2602

California has been especially active on this front. AB 1836 addresses the digital likeness rights of deceased personalities, while AB 2602 governs the enforceability of digital replica contracts for living performers — operative since January 1, 2025. Brands working with talent on AI-generated or digitally replicated content must ensure their contracts explicitly address these provisions, or risk having those agreements invalidated.

California’s AI Transparency Act further requires providers to disclose when content is AI-generated and mandates the embedding of latent provenance data — timestamps, origin identifiers — in covered content.
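To make the provenance requirement concrete, here is a minimal sketch of what embedding origin data alongside a generated asset might look like. The field names (`generated_by`, `origin_id`, and so on) are illustrative assumptions, not the statutory schema — in practice, providers typically follow an industry standard such as C2PA rather than a hand-rolled JSON sidecar.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(asset_path: str, generator: str, origin_id: str) -> Path:
    """Write an illustrative provenance record next to a media asset.

    Field names are hypothetical; real deployments would use the schema
    required by statute or a standard such as C2PA.
    """
    asset = Path(asset_path)
    record = {
        "asset": asset.name,
        # A content hash ties the record to this exact file version.
        "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),
        "generated_by": generator,   # AI system that produced or altered the asset
        "origin_id": origin_id,      # provider-assigned origin identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    sidecar = asset.with_suffix(asset.suffix + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

The design point is that the record travels with the asset and is machine-readable, which is what "latent provenance data" demands: a human-visible caption alone would not satisfy it.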

Washington State: HB 1170 & HB 2225 (Signed March 2026)

Washington became the latest state to enter the arena when Governor Ferguson signed two landmark bills in March 2026. The legislation mandates that AI companies identify when images, video, or audio have been substantially modified or created using their systems. Notably, HB 2225 includes a private right of action — meaning individual consumers, not just regulators, can sue for violations. For brands, this dramatically expands exposure.

What Must Be Disclosed, and How?

The emerging standard across federal and state rules converges on a clear principle: if a reasonable viewer could believe a real influencer created, appeared in, or approved content — when AI actually generated or altered that impression — disclosure is required.

In practice, this means brands should not rely on vague phrases like “enhanced with technology.” Platform rules and regulators alike expect specific, readable disclosures that communicate AI’s role clearly. Guidelines suggest language such as “AI-generated likeness” or “synthetic voice recreation” appearing visibly on-screen or in caption copy. On audio-only placements, the disclosure should be spoken at an understandable pace and volume.
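A marketing-ops team could automate a first-pass check of caption copy against this standard. The sketch below is a simplified illustration: the "approved" and "vague" phrase lists are examples drawn from the guidance discussed above, not a regulator-published list, and final phrasing should always come from counsel.

```python
# Phrase lists are illustrative, not authoritative; counsel sets the real ones.
APPROVED_DISCLOSURES = (
    "ai-generated likeness",
    "synthetic voice recreation",
    "ai-generated content",
)
VAGUE_DISCLOSURES = (
    "enhanced with technology",
    "digitally assisted",
)

def check_caption_disclosure(caption: str) -> tuple[bool, str]:
    """Return (passes, reason) for a caption's AI-disclosure language."""
    text = caption.lower()
    if any(phrase in text for phrase in APPROVED_DISCLOSURES):
        return True, "specific AI disclosure present"
    if any(phrase in text for phrase in VAGUE_DISCLOSURES):
        return False, "disclosure too vague to satisfy the emerging standard"
    return False, "no AI disclosure found"
```

A check like this can gate publishing workflows so that no asset ships without specific disclosure language, though it is a backstop for review by humans, not a replacement for it.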

For advertisers with international reach, the stakes are even higher: the European Union’s AI Act (Article 50) requires machine-readable provenance marking for AI-generated content, with full enforcement beginning August 2, 2026.

The Brand Equity Argument for Transparency

Beyond legal compliance, there is a compelling brand strategy argument for getting ahead of these rules. Consumers are increasingly sophisticated about AI-generated content, and brands that disclose AI use proactively are beginning to distinguish themselves as trustworthy and credible. Brands that are later caught using undisclosed synthetic likenesses face not just regulatory penalties, but the far more damaging problem of eroded consumer trust.

Transparency, in this environment, is itself a brand asset. Companies that treat disclosure as a strategic communication — rather than a reluctant legal footnote — will be better positioned to scale synthetic content without the reputational risk.

Your 2026 AI Likeness Compliance Checklist

The following seven steps represent a practical baseline for any brand using AI-generated or AI-altered content in advertising:

  1. Define “AI-Generated Likeness” Internally. Draft a clear internal policy covering face, voice, body, gestures, style transfer, and hybrid edits. Every team member touching a campaign should know what triggers your obligations.
  2. Audit Your Current Campaigns. Review all active and in-production campaigns for AI use. Identify any synthetic performers, cloned voices, or digitally altered likenesses and assess whether existing disclosures are adequate.
  3. Secure Written Permissions. Obtain explicit written consent from any real individual whose likeness, voice, or persona is being used or approximated. Review California AB 2602 requirements for living performers specifically.
  4. Build Channel-Specific Disclosure Language. Work with legal counsel to develop disclosure templates for each platform and ad format — social posts, video, audio, OOH, and digital display — that meet the “conspicuous” standard.
  5. Update Vendor & Agency Contracts. Ensure that all production partners and agencies are contractually required to meet your AI disclosure standards. Include representations about data handling and rights management.
  6. Document Everything. Keep contracts, approvals, scripts, media plans, final assets, and evidence of rights clearance. Good records are your best protection if a platform, regulator, or creator later questions the campaign.
  7. Assign a Compliance Lead. Designate a legal or brand ops team member to own AI disclosure compliance, monitor new state legislation, and serve as the escalation point for all AI-generated content decisions.

The Bottom Line

The synthetic influencer is here to stay. AI’s ability to generate compelling, cost-effective brand personalities will only grow more sophisticated — and more persuasive. What is also growing, however, is the legal infrastructure designed to ensure that audiences are never deceived about what they’re seeing.

For brands, the message in 2026 is this: the rules are no longer aspirational. New York’s law takes effect in June. California’s transparency requirements are already in force. Washington’s private right of action is coming. The window to build compliant, future-ready AI advertising practices is now — and it is narrowing.

Secure permission. Label plainly. Document everything. And never let the realism of your synthetic content outrun the honesty of your brand.

Is Your Brand’s AI Strategy Legally Protected?

Our team advises brands and marketing teams on FTC compliance, right-of-publicity law, and AI advertising disclosure. Schedule a consultation to review your current campaigns and contracts.


Disclaimer: This article is provided for general informational and educational purposes only and does not constitute legal advice. No attorney-client relationship is formed by reading this content. Laws referenced reflect developments as of April 2026 and are subject to change. Consult qualified legal counsel regarding your specific situation.
