In 2024, Meta will begin requiring advertisers running political or issue ads on its platforms to disclose when their ads are “digitally created or altered” through the use of AI.

Facebook and Instagram ads about elections, politics and social issues will soon require the extra step, which advertisers will handle when they submit new ads.

Advertisers will need to make the disclosures when an ad “contains a photorealistic image or video, or realistic sounding audio” that falls into a handful of categories.

Meta’s new rules are designed to rein in deepfakes — digitally manipulated media designed to be misleading. The company will require disclosures on ads that were either created or manipulated to show a person doing or saying something they didn’t.

The other cases requiring disclosure include ads depicting photorealistic people who don’t exist, ads showing events that look realistic but never happened (including altered imagery from real-life events), and ads depicting “realistic event[s] that allegedly occurred” that are “not a true image, video, or audio recording of the event.”

Meta makes it clear that normal digital alterations like image sharpening, cropping and other basic adjustments don’t fall under the new disclosure policy. The information about digitally altered ads will be captured in Meta’s Ad Library, a searchable database that collects paid ads on the company’s platforms.

“Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” Nick Clegg, Meta’s President of Global Affairs, wrote in a press release.
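For those who want to dig into that data themselves, the Ad Library is also searchable programmatically via Meta’s Graph API. The sketch below is illustrative rather than official guidance: it assumes you hold a valid Ad Library API access token (the token string here is a placeholder), and the endpoint, parameter, and field names follow Meta’s published Ad Library API documentation.

```python
import requests

# Placeholder token -- Ad Library API access requires identity
# verification and a real user access token from Meta.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# Query the Graph API "ads_archive" endpoint for political/issue ads.
resp = requests.get(
    "https://graph.facebook.com/v18.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",  # political and issue ads only
        "ad_reached_countries": '["US"]',      # required parameter
        "search_terms": "election",            # free-text search
        "fields": "id,page_name,ad_creation_time,ad_creative_bodies",
        "limit": 25,
    },
    timeout=30,
)
resp.raise_for_status()

# The Graph API returns results under a "data" key.
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_creation_time"))
```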

The new disclosure policy for political and social issue ads follows news that Meta will place new limits on the kinds of ads its own generative AI tools can be used for.

Early last month, the company rolled out a suite of new AI tools designed for advertisers. The tools allow advertisers to quickly generate multiple versions of creative assets and easily adjust images to fit various aspect ratios, among other uses.

Those AI tools are now off-limits for campaigns related to politics, elections and social issues, as Reuters first reported. The company announced this week that it would also disallow the AI tools for ads on “potentially sensitive topics” across industries, including housing, employment, health, pharmaceuticals and financial services. Those are all areas where the company could easily get into regulatory hot water given the current attention on AI, or where Meta has already found itself in trouble, as in the case of discriminatory housing ads on Facebook.

Lawmakers were already scrutinizing the intersection of AI and political advertising. Earlier this year, Sen. Amy Klobuchar (D-MN) and Rep. Yvette Clarke (D-NY) introduced legislation that would require disclaimers on political ads altered or created using AI.

“Deceptive AI has the potential to upend our democracy, making voters question whether videos they are seeing of candidates are real or fake,” Klobuchar said of Meta’s new restrictions on its own in-house AI tools. “This decision by Meta is a step in the right direction, but we can’t rely on voluntary commitments alone.”

While Meta is putting some guardrails up around the use of AI in political and social issue ads, some platforms are happy to stay out of that business altogether. TikTok doesn’t wade into political advertising at all, banning any kind of paid political content across brand ads and paid branded content.
