NY AI Disclosure Law for Ads: Disclosures Fail to Prevent Deception

Despite new laws mandating disclosure for AI-generated content, a survey of 1,601 participants found that labels indicating content was AI-generated did not significantly reduce its persuasive effect.

Leo Hartmann

April 24, 2026 · 3 min read


Despite new laws mandating disclosure for AI-generated content, a survey of 1,601 participants found that labels indicating content was AI-generated did not significantly affect attitude change toward policies or judgments of message accuracy. The research, published in PMC under the title "Labeling messages as AI-generated does not reduce their persuasiveness or mitigate their influence on public policy views," revealed that while AI-generated messages were highly persuasive, shifting public policy views by 9.74 percentage points on average, labeling failed to mitigate that persuasive power.

New York is enacting laws to mandate disclosure for AI-generated performers in advertising, but studies show such disclosures do not significantly alter consumer perception or persuasion. This creates a tension: legislative intent to inform clashes with the practical ineffectiveness of mandated labels.

Companies will face increased compliance costs for disclosures that may not achieve their intended effect. This regulatory landscape prioritizes optics over impact. New York's focus on AI content disclosure for advertising appears to be a legislative misfire, imposing burdens without protecting consumers from AI's subtle, yet powerful, persuasive influence.

New York's AI Content Disclosure Laws for Advertising

New York legislation mandates disclosure for AI-generated performers in advertising, a clear regulatory step for the evolving media sector. According to Reuters, the measure addresses the increasing commercial use of synthetic media. Specifically, two new laws, including S.8420-A/A.8887-B, require disclosure of AI-generated synthetic performers in advertisements, according to the governor's office. With this proactive stance, New York aims to establish transparency frameworks for AI content in advertising.

Advertisers and content creators now face increased compliance burdens. This regulatory intervention, while intended to safeguard consumers, places the onus firmly on businesses without guaranteed consumer benefit.

Why AI Content Disclosure May Not Affect Media Trust

A survey of 1,601 participants revealed that AI-generated messages about public policies were notably persuasive, shifting views by 9.74 percentage points on average, according to the PMC study "Labeling messages as AI-generated does not reduce their persuasiveness or mitigate their influence on public policy views."

Crucially, the study found that labels indicating AI-generated or human-generated content did not significantly affect participants' attitude change toward policies, judgments of message accuracy, or intentions to share. This directly challenges the assumption that labeling alone informs or protects consumers. New York's new AI disclosure laws for advertising amount to a legislative placebo, imposing compliance burdens without delivering on their core promise of shielding consumers from AI's persuasive power.

The study's finding that AI-generated messages influenced views by nearly 10 percentage points, even with disclosure, reveals an unmitigated persuasive force. New York's current mandates fail to address this, leaving consumers susceptible. The efficacy of these regulations in fostering greater media trust appears limited.

Expected Regulations for AI-Generated Content Disclosure

New York's regulatory efforts extend beyond advertising. Legislation S.8391/A.8882 requires consent from heirs or executors for commercial use of a deceased individual's likeness, according to the governor's office. This addresses posthumous rights in an era of AI replication.

The state also introduced the Responsible AI Safety and Education Act, which requires safety disclosures from major AI developers, according to the IAPP. Together, these laws represent a multifaceted effort to address ethical and safety concerns beyond content labeling, acknowledging AI's broad impact.

This broad approach to AI regulation—covering advertising disclosures, posthumous rights, and general AI safety—creates a fragmented strategy. Some legislative interventions will likely prove less effective than others, leading to inconsistent compliance and varied consumer understanding.

Ethical Implications of AI Content in Advertising

AI's regulatory landscape continues to evolve. New York State Assembly Bill A3411B passed its third reading in the Senate on March 9, 2026, according to JD Supra. This sustained legislative push confirms governmental interest in governing AI technologies, particularly the ethical implications of AI-generated content in advertising.

AI regulation is an ongoing process, but its success hinges on whether transparency measures truly empower consumers. The disconnect between legislative intent and the ineffectiveness of disclosure labels means New York addresses the symptom (AI presence) rather than AI's unmitigated persuasive impact.

Building genuine consumer trust demands more than mandated labels. It requires a deeper understanding of how AI influences perception and the development of effective tools to counter its subtle persuasive power. Without this, the compliance burden on advertisers for AI content disclosure in 2026 will likely yield minimal consumer protection benefits.

By the close of 2026, advertisers, along with firms advising them such as Dynamis LLP, will likely be navigating a complex web of AI content disclosure mandates that increase operational costs without fundamentally altering consumer susceptibility to AI's persuasive force.