Could AI Rig the Next Election?
Shane Harris – In the ongoing battle over election integrity, don’t forget about the rising influence of artificial intelligence (AI). While most Americans understand how legacy media and Big Tech biases can unfairly influence the outcome of elections, AI could be an even more powerful tool to tip the balance of power.
By now it is obvious that the search engines we once assumed were neutral are anything but. Google’s results are shaped by liberal nonprofits that score left-leaning news outlets as more “trustworthy” and right-leaning outlets as less trustworthy. These scores are then fed directly into the algorithm. The end result is that stories with a liberal slant consistently rise to the top of search queries while conservative sources are buried.
People also generally understand how social media companies use their institutional power to shape political outcomes. We are only five years removed from Twitter (now X) and Facebook banning The New York Post for accurately reporting on the Hunter Biden laptop scandal. The New York Times, The Washington Post, and virtually every other major outlet insisted that the story was Russian disinformation, and Big Tech was happy to aid in that censorship.
Only after the 2020 election was safely in the rearview mirror did those same outlets admit the story was true all along. We will never know with certainty, but it seems entirely plausible that if the scandal had been shared and investigated properly, the outcome of the election could have been different.
The question almost no one is asking is whether we could see a similar scenario play out with AI.
Every day, more Americans are relying on AI tools, most notably ChatGPT, for everything from a new holiday recipe to dating advice. A Pew study released this past June found that 34 percent of American adults have used ChatGPT, which is roughly double the share from 2023. At that pace, more than 70 percent of the country could be turning to ChatGPT for answers by the time the 2028 election arrives.
Some Americans are even becoming emotionally dependent on AI. ChatGPT parent company OpenAI is already facing lawsuits from families who say it contributed to loved ones taking their own lives. Even OpenAI has acknowledged concerns that its new voice mode could cause users to become emotionally attached to and dependent on the product.
AI adoption is even more pronounced among young people, who are still forming their political identities. Pew found that 58 percent of adults aged 18 to 29 have used ChatGPT. The chatbot debuted only three years ago, yet its use has already become pervasive in high school and college classrooms. It strains credulity to imagine that these same young people will not ask ChatGPT to guide their political choices as well.
On the surface, this may appear harmless – but that’s assuming AI provides neutral and unbiased information. After all, AI is a machine and not a person, so it supposedly cannot carry the same ideological baggage that voters now associate with the legacy press, right?
But we know beyond a shadow of a doubt that AI chatbots are anything but unbiased. A June 2025 study in the Journal of Economic Behavior & Organization concluded that ChatGPT-4's responses align more with left-wing political values than with the views of the average American. A Stanford study reached a similar conclusion, finding that even self-identified Democrats think ChatGPT and comparable models lean left. A third study, which examined more than two dozen AI models, found that every single one had a left-wing bias except for the one model that had been explicitly trained to lean right.
Moreover, the increasing public reliance on AI creates an enormous incentive for both political parties to shape what AI models say. Whoever controls the flow of information that an AI tool provides could influence millions of voters at once. That is dangerous in any scenario, but especially so when we already know that AI has exhibited bias.
Deepfake technology is also advancing at remarkable speed, and almost everyone has been fooled at least once by a convincing AI-generated video or image. It is not at all far-fetched to picture a scenario in which a cascade of deepfake videos is unleashed just days before a close election. In a world where a few thousand votes can determine the presidency, the consequences could be enormous.
This is not a matter that should divide Republicans and Democrats. Lawmakers in both parties have a responsibility to understand the risks posed by AI and to guard against the possibility that these tools could be weaponized to mislead voters. They must conduct serious oversight of AI companies and insist that safeguards are in place to prevent these platforms from becoming partisan megaphones.
Everyday Americans have a responsibility as well. Voters should educate themselves about how AI works and approach its political answers with the same skepticism they now apply to traditional media. Blind trust in any information source is dangerous, and AI should be no exception.
The rise of AI presents both opportunity and peril. If left unchecked, it could become the most powerful propaganda tool ever invented. If handled responsibly, it could instead become a valuable supplement to an informed citizenry. The choice is still ours, and the time to get it right is now.
SF Source AMAC Dec 2025