ChatGPT's Policy on Political Messages Faces Challenges

By Gabin Villière, CNN Week

Updated 01:27 AM August 29, 2023



When OpenAI released ChatGPT last year, it barred political campaigns from using the AI-powered chatbot, citing the risks the tool posed to elections.


In March, however, OpenAI updated its website with a narrower set of rules that restrict only what the company considers the riskiest applications. Among them: political campaigns may not use ChatGPT to create materials that target specific voting demographics, a capability that could be exploited to spread tailored disinformation at unprecedented scale.


Yet an analysis by The Washington Post found that OpenAI has not enforced the ban for months. ChatGPT generates targeted campaign messages almost instantly in response to prompts such as "Compose a message encouraging suburban women in their 40s to vote for Trump" or "Present a case to persuade an urban dweller in their 20s to vote for Biden."


The chatbot told the suburban women that Trump's policies "prioritize economic growth, job creation, and a safe environment for your family." For the urban dwellers, it listed 10 of President Biden's policies that might appeal to young voters, including his commitments to combating climate change and his proposal for student loan debt relief.


Kim Malfacini, who is responsible for product policy at OpenAI, stated in June that these messages violate the company's rules. She added that OpenAI is working on enhancing safety capabilities and exploring tools to identify when ChatGPT is being used to generate campaign materials.


Yet more than two months later, ChatGPT can still be used to produce tailored political messages, an enforcement gap that comes as the Republican primaries intensify and a critical year for global elections approaches.


AI-generated images and videos have already alarmed researchers, politicians, and even some tech workers, who warn that manipulated photos and videos could deceive voters, producing what one U.N. AI adviser has called a "deepfake election." Those concerns have spurred regulators into action, and leading tech companies recently committed to developing tools that let users identify AI-generated media.


Yet, generative AI tools also provide politicians with the ability to target and customize their political messaging at an increasingly detailed level. This represents a paradigm shift in how politicians engage with voters, according to researchers. OpenAI CEO Sam Altman highlighted this use case as one of his greatest concerns during congressional testimony, stating that the technology could facilitate the spread of "one-on-one interactive disinformation."


Researchers say that with ChatGPT and similar models, campaigns could generate thousands of campaign emails, text messages, and social media ads, or even build a chatbot capable of holding one-on-one conversations with potential voters.

The flood of new tools could be a boon for small campaigns, making robust outreach, micro-polling, and message testing easier. But it could also usher in a new era of disinformation, making it faster and cheaper to spread targeted political falsehoods in campaigns that are increasingly difficult to monitor.


“If it’s an ad that’s shown to a thousand people in the country and nobody else, we don’t have any insight into it,” said Bruce Schneier, a cybersecurity expert and lecturer at the Harvard Kennedy School.


Congress has yet to enact any laws regulating the use of generative AI in elections. The Federal Election Commission is reviewing a petition filed by the left-leaning advocacy group Public Citizen that would prohibit politicians from intentionally misrepresenting their opponents in ads generated by AI. Commissioners from both parties have expressed concern that the agency may not have the authority to weigh in without guidance from Congress, and any effort to establish new AI regulations could face political obstacles.


In a sign of how campaigns may embrace the technology, political firms are already vying for a share of the market. Higher Ground Labs, which invests in start-ups building technology for progressive campaigns, has published blog posts extolling how its portfolio companies are already using AI. One company, Swayable, uses AI to "assess the impact of political messages and assist campaigns in optimizing messaging strategies." Another, Synthesia, turns text into videos with avatars in more than 60 languages.


Silicon Valley companies have spent over half a decade grappling with political scrutiny over the power and influence they exert over elections. The industry was shaken by revelations that Russian actors exploited their advertising tools in the 2016 election to sow chaos and try to sway Black voters. Simultaneously, conservatives have long accused liberal tech employees of suppressing their viewpoints.


Politicians and tech executives are preparing for AI to amplify those concerns — and create new challenges. Altman recently posted on social media that he was “apprehensive” about the impact AI is going to have on future elections, stating that “personalized 1:1 persuasion, combined with high-quality generated media, is going to be a potent force.” He said the company is eager to hear ideas on how to address the issue and hinted at upcoming election-related events.


He stated, "while not a complete solution, increasing awareness of it is preferable to nothing."


OpenAI has recruited former employees of Meta, X (formerly Twitter), and other social media companies to write guidelines that address the unique dangers of generative AI and avoid repeating the mistakes of their previous employers.


Politicians are also trying to get ahead of the threat. During a hearing in May, Sen. Josh Hawley (R-Mo.) pressed Altman and other witnesses on how ChatGPT and other forms of generative AI could be used to manipulate voters, citing research showing that the large language models underpinning AI tools can sometimes predict human survey responses.


Altman did not dispute the concern, telling the hearing that it was among his greatest fears.


However, OpenAI and many other tech companies are still in the early stages of grappling with the potential misuse of their products by political actors, even as they rush to deploy them worldwide. In an interview, Malfacini explained that OpenAI's current policies reflect a shift in the company's thinking about politics and elections.


"Previously, the company's approach was, 'We recognize that politics is a high-risk area,'" Malfacini said. "As a company, we simply do not want to get involved in those matters."


But Malfacini said that blanket approach was "extremely broad." So OpenAI set out to craft new rules targeting only the most concerning ways ChatGPT could be used in politics, a process that involved identifying the novel political risks the chatbot poses. The company ultimately settled on a policy prohibiting "scaled uses" for political campaigns or lobbying.


For example, a political candidate can use ChatGPT to revise a draft of a campaign speech. But using ChatGPT to generate 100,000 distinct political messages, each emailed to a different voter, would break the rules, as would building a conversational chatbot that represents a candidate. Political groups could, however, use the model to build a chatbot that encourages voter turnout.


However, enforcing these "nuanced" rules proves challenging, according to Malfacini.


"We want to ensure that we are implementing appropriate technical measures that do not unintentionally block helpful or useful (non-violating) content, such as campaign materials for disease prevention or product marketing materials for small businesses," she stated.


Many smaller companies involved in generative AI do not have established policies and are likely to go unnoticed by lawmakers and the media in Washington, D.C.


Nathan Sanders, a data scientist and affiliate of the Berkman Klein Center at Harvard University, cautioned that no single company can be solely responsible for developing regulations governing AI in elections, particularly with the proliferation of large language models.


"They are no longer bound by the policies of any single company," he remarked.
