Public Perception Unveiled: Majority of U.S. Adults Anticipate AI Contributions to Election Misinformation in 2024

"Public Concerns Escalate: Majority of U.S. Adults Fear AI's Role in Spreading Election Misinformation in 2024"

As the 2024 presidential election approaches, warnings that artificial intelligence (AI) tools could amplify misinformation have intensified. A recent poll conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy reveals that a significant majority of U.S. adults share these apprehensions. Nearly 6 in 10 adults (58%) believe that AI tools, which can micro-target political audiences, mass-produce persuasive messages, and swiftly generate realistic fake images and videos, will contribute to the spread of false and misleading information during the upcoming elections.

In contrast, only 6% of respondents believe that AI will reduce the spread of misinformation, while one-third say it won't make much of a difference. Reflecting on the impact of social media in the 2020 elections, 66-year-old Rosa Rangel of Fort Worth, Texas, a Democrat, says she expects AI to make things even worse in 2024, likening the situation to a pot "brewing over."

The poll further reveals that only 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard at least something about AI tools. Despite this limited familiarity, there is widespread consensus that candidates should refrain from using AI in their campaigns. A significant majority deems it detrimental for presidential candidates in 2024 to use AI to create false or misleading media for political ads (83%), edit or touch up photos or videos for political ads (66%), tailor political ads to individual voters (62%), or answer voters' questions via chatbot (56%).

This aversion to the use of AI in political campaigns is bipartisan, with majorities of both Republicans and Democrats expressing concerns. The sentiment extends to creating false images or videos (85% of Republicans and 90% of Democrats) and answering voter questions (56% of Republicans and 63% of Democrats). The bipartisan apprehension follows the deployment of AI in the Republican presidential primary, raising questions about the ethical implications and potential consequences of its use in future political campaigns.

"AI in Politics: Ethical Dilemmas Emerge as Deepfake Campaigns Spark Concerns"

The integration of artificial intelligence (AI) in political campaigns is sparking ethical concerns, particularly as the deployment of deepfake technology becomes more prevalent. In a notable instance, the Republican National Committee unveiled an entirely AI-generated ad in April, portraying a dystopian future if President Joe Biden is reelected. Featuring fake but convincingly realistic images of boarded-up storefronts, military patrols, and waves of immigrants causing panic, the ad disclosed its AI origin in small lettering.

Florida's Republican Governor Ron DeSantis similarly embraced AI in his campaign for the GOP nomination. An ad employed AI-generated images to depict former President Donald Trump embracing Dr. Anthony Fauci, the infectious disease specialist overseeing the COVID-19 response. Additionally, a super PAC supporting DeSantis utilized an AI voice-cloning tool to mimic Trump's voice in a social media post.

Critics, like 42-year-old Andie Near from Holland, Michigan, argue that politicians should campaign on their merits rather than exploiting AI to instill fear in voters. Having used AI tools for image retouching at a museum, Near believes that politicians leveraging technology to mislead can intensify the impact of conventional attack ads.

Thomas Besgen, a 21-year-old Republican college student from Connecticut, expresses moral opposition to campaigns employing deepfake sounds or imagery to manipulate a candidate's statements. Advocating for a ban on deepfake ads or, alternatively, mandatory labeling as AI-generated, Besgen underscores the need for ethical considerations in political discourse.

The Federal Election Commission is presently deliberating on a petition urging the regulation of AI-generated deepfakes in political ads leading up to the 2024 election. Despite reservations about AI's role in politics, Besgen remains enthusiastic about its potential for the economy and society. Actively using AI tools like ChatGPT to explore historical topics and brainstorm ideas, he acknowledges their value while advocating for responsible use. However, he stands among a minority, with just 5% of adults expressing a likelihood to use AI tools like ChatGPT to learn more about presidential candidates.

"Informative Landscape: Americans Lean on Traditional Sources Over AI Chatbots for Election Insights"

As the 2024 presidential election approaches, Americans are turning to familiar sources for information, with news media (46%), friends and family (29%), and social media (25%) topping the list, while AI chatbots lag far behind. Besgen, the Connecticut college student, reflects the prevailing sentiment, approaching AI-generated responses with caution and acknowledging the need to take them "with a grain of salt."

The skepticism extends to the credibility of information provided by AI chatbots: only 5% of respondents say they are extremely or very confident that such information is factual, while a significant majority (61%) lack confidence. This aligns with warnings from AI experts against relying on chatbots for information, given their propensity for generating inaccurate content.

Concerns about the misuse of AI-generated content in political ads prompt bipartisan openness to regulations. Approximately two-thirds of respondents support government intervention to ban AI-generated content featuring false or misleading images in political ads. A similar proportion advocates for technology companies to label all AI-generated content produced on their platforms.

President Biden's recent executive order aimed at guiding AI development and establishing safety and security standards underscores the growing recognition of the need for regulatory frameworks. The order directs the Commerce Department to issue guidance for labeling and watermarking AI-generated content, signaling a federal initiative to address the challenges posed by AI.

A shared responsibility narrative emerges, with 63% of respondents attributing a significant portion of responsibility to technology companies that create AI tools. Additionally, about half of the respondents emphasize the roles of the news media (53%), social media companies (52%), and the federal government (49%) in preventing AI-generated false or misleading information during the 2024 presidential elections. While Democrats lean slightly more towards assigning responsibility to social media companies, overall, there is general agreement across party lines regarding the shared responsibility of technology companies, the news media, and the federal government.

"Insights from the Public: Examining Perspectives on AI in Politics"

The survey, conducted from October 19 to 23, 2023, encompassed 1,017 adults, employing a sample derived from NORC's AmeriSpeak Panel, a probability-based approach designed to mirror the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points, providing a nuanced view of public sentiment regarding the intersection of artificial intelligence and political discourse.

Associated Press writer Linley Sanders in Washington, D.C., contributed valuable reporting to this article.

"In conclusion, the survey conducted from October 19 to 23, 2023, sheds light on the public's nuanced perspectives on the intersection of artificial intelligence and political dynamics. With a sample drawn from NORC's AmeriSpeak Panel, designed for comprehensive representation, the survey captures the sentiments of 1,017 adults across the United States. Notably, the findings reveal a cautious approach to AI-generated content in political campaigns, with concerns about misinformation and ethical considerations surfacing prominently. As the 2024 presidential election looms, the data underscore the importance of addressing public apprehensions and implementing responsible AI practices in the realm of politics. The insights gained from this survey contribute valuable information to ongoing discussions surrounding AI regulation, ethical use, and the evolving landscape of political communication. Appreciation is extended to all participants and contributors who have enriched our understanding of this critical intersection between technology and democracy."