New York Governor Kathy Hochul signed two distinct AI laws within eight days in December 2025, rapidly establishing a new regulatory framework for synthetic media. These laws aim to protect individual identity from unauthorized AI-generated likenesses, directly affecting content creators and advertisers. The rapid legislative pace reflects the urgency lawmakers attach to the growing presence of AI-generated content across digital platforms and traditional media.
New York is aggressively regulating AI in media and advertising, but its laws contain significant exemptions that complicate enforcement and industry adaptation. This tension risks creating a two-tiered system where large creative industries can exploit specific carve-outs for synthetic media, potentially undermining the very consumer protections these laws purport to establish.
Other states and nations are likely to follow New York's lead in AI regulation, creating a patchwork of laws that will challenge global content creators and advertisers. This evolving regulatory environment demands careful navigation from companies operating across jurisdictions.
New York's Bold Stance on AI Transparency
Governor Hochul signed S.8420-A/A.8887-B on December 11, 2025, and the Responsible Artificial Intelligence Safety and Education Act (the “RAISE Act”) on December 19, 2025, reports Alston & Bird. These laws, detailed by Skadden, Arps, Slate, Meagher & Flom LLP, mandate disclosure for AI-generated performers in advertising and expand publicity rights for deceased personalities. This rapid legislative action places New York at the forefront of AI regulation, aiming to protect both living and deceased individuals from unauthorized AI likenesses.
This legislative sprint reveals a deep concern for identity integrity as synthetic media replicates human appearance and voice with increasing fidelity. New York's swift regulatory push outpaces the industry's immediate adaptation capacity, likely forcing a rushed compliance scramble ahead of the January 1, 2026 effective date.
Navigating the Nuances and Exemptions
Advertisements for expressive works, such as movies, TV, and video games, are exempt from synthetic performer disclosure requirements if the AI use is consistent with its use in the original work itself, Reuters reports. This exemption creates a significant loophole, potentially allowing creative industries to bypass disclosure requirements for synthetic media framed as artistic expression. Such a carve-out implicitly acknowledges the unique challenges of regulating creative work but risks undermining the very consumer protection the laws aim to establish.
Furthermore, the RAISE Act specifically targets companies with more than USD 500 million in revenue for safety disclosures, according to IAPP. This revenue threshold suggests a strategic regulatory approach focused on major players, creating a two-tiered system. It implicitly greenlights smaller, potentially less-resourced AI developers to operate with fewer immediate regulatory burdens, which could foster innovation but also introduce unmonitored risks. While some AI legislation is already law, other significant regulatory efforts, such as a bill requiring generative AI systems to display inaccuracy notices, remain pending, according to JD Supra, further fragmenting the legal landscape.
Beyond Disclosure: Redefining Rights in the AI Era
New York legislation strengthens post-mortem publicity rights, Reuters reports, expanding protections for deceased personalities against unauthorized exploitation. Concurrently, the RAISE Act mandates safety disclosures from major AI developers, as noted by IAPP. These combined provisions establish a legal recognition of digital personhood and demand accountability from powerful AI entities, marking a societal shift toward defining and protecting individual identity in the digital realm. Assembly Bill A3411B, which passed its third reading in the Senate on March 9, 2025, further underscores the legislature's sustained focus on AI governance.
The Road Ahead for Industry and Regulation
The law mandating AI transparency in the film industry takes effect on January 1, 2026, according to IAPP. That date offers only a short grace period, and New York's aggressive legislative push in December 2025 will force a rapid re-evaluation of content creation workflows and legal compliance strategies across entertainment and advertising. The precedent suggests content creators and advertisers will navigate a complex, evolving patchwork of mandatory AI disclosure requirements across states and nations. If other states follow New York's lead, the industry will likely face a fragmented regulatory landscape by late 2027, demanding agile compliance from global content creators.