
Meta, the parent company of Facebook, has set firm limits on who can use its latest generative artificial intelligence tools for advertising campaigns. Political campaigners won't have access to the much-anticipated AI capabilities: the company specified that advertisers running campaigns related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, or financial services are currently barred from using these generative AI features.
This announcement offers a glimpse of a not-so-distant future in which political campaigns use AI to craft advertisements and grapple with the implications when rival campaigns attempt to use the technology to sway public opinion. Back in April, the Republican National Committee claimed to have produced the first US political advertisement "built entirely" by AI. The ad featured AI-generated images of President Joe Biden and Vice President Kamala Harris alongside images of boarded-up stores, surging crime, and shuttered banks, sparking global debate about AI's potential to manipulate voters.
Timothy Kneeland, a political science and history professor at Nazareth College in upstate New York, voiced concerns about how cheaply and easily AI-generated political ads can be mass-produced, questioning whether the public can discern their authenticity. Kneeland also underscored AI's potential economic impact on political campaigns, which could significantly reduce campaign staff and reshape the landscape.
Nonetheless, Kneeland noted that AI could level the political playing field, democratizing campaigns for candidates without deep pockets or massive fundraising operations. In October, UN Secretary General Antonio Guterres shared his own unease about how easily AI-generated content can deceive people, recounting an experience with an AI app that produced a flawless speech of him in Chinese, a language he cannot speak.
Meta's move concerning its AI-generated advertising tools is part of a broader trend among tech giants to address concerns about AI's role in political campaigns. In September, Alphabet, Google's parent company, pledged to require disclosure for AI-generated political advertising content. The company stipulated that ads containing synthetic content that makes it appear a person said or did something they did not, or that alters footage of a real event to depict events that never occurred, must carry a disclosure.
From a regulatory standpoint, the US Federal Election Commission, an independent agency tasked with enforcing campaign laws, recently approved a petition to address “deliberately deceptive artificial intelligence campaign ads.” However, the extent of any subsequent actions remains uncertain.