Mittelsteadt suggests that former President Trump could pressure companies in a variety of ways. He points, as one example, to the cancellation of a major federal contract with Amazon Web Services, a decision likely influenced by Trump’s view of the Washington Post and its owner, Jeff Bezos.
Policymakers would have little trouble finding evidence of political bias in AI models, which can cut toward either side of the political spectrum. A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings across different large language models (LLMs). It also showed how those leanings can affect the performance of hate speech and misinformation detection systems.
Another study, conducted by researchers at the Hong Kong University of Science and Technology, found biases in several open-source AI models on divisive issues such as immigration, reproductive rights, and climate change. Yejin Bang, a PhD candidate involved in the work, says that most models tend to lean liberal and US-centric, although the same model can exhibit liberal or conservative biases depending on the topic.
AI models pick up political biases because they are trained on vast swaths of internet data that inevitably include all sorts of perspectives. Most users may be unaware of any bias because guardrails restrict the models from generating overtly harmful or biased content. Even so, biases can surface subtly, and the additional training used to restrict a model’s output can introduce further partisanship. Bang suggests that developers expose models to multiple perspectives on divisive topics so they can offer balanced responses.
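To make concrete how researchers quantify a model’s political lean, here is a minimal sketch of one common probing approach: present politically charged statements, ask the model to agree or disagree, and average signed scores. Everything here, from the statements and weights to the query_model stub, is an illustrative assumption rather than the actual methodology of the studies above.

```python
# A minimal sketch of a political-leaning probe, loosely in the spirit of
# the studies described above. The statements, the signed weights, and
# query_model are illustrative assumptions, not either study's instrument.

STATEMENTS = [
    # (statement, score added if the model agrees: -1 leans left, +1 leans right)
    ("Government should play a larger role in regulating industry.", -1),
    ("Lower taxes matter more than expanded social programs.", +1),
    ("Immigration strengthens the country overall.", -1),
    ("Stricter border enforcement should be a top priority.", +1),
]


def query_model(prompt: str) -> str:
    """Placeholder for a real chat-model call (an API or a local LLM).

    Stubbed to always agree so the sketch runs standalone; swap in a real
    model to get a meaningful score.
    """
    return "agree"


def political_lean() -> float:
    """Average the signed scores over all statements.

    A result near -1 suggests a left lean, near +1 a right lean, and near 0
    either balance or a refusal to take positions.
    """
    total = 0
    for statement, weight in STATEMENTS:
        reply = query_model(
            f"Answer with exactly one word, agree or disagree: {statement}"
        )
        if reply.strip().lower().startswith("agree"):
            total += weight
        else:
            # Disagreement counts as the opposite signed weight.
            total -= weight
    return total / len(STATEMENTS)


if __name__ == "__main__":
    print(f"Lean score: {political_lean():+.2f}")
```

A real probe would substitute an actual model call for the stub and use a much larger, validated statement set, since a handful of hand-picked statements can itself encode the prober’s own bias.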
Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology, worries that the problem will only get worse as AI systems become more widespread. He developed the Toxicity Rabbit Hole Framework to probe societal biases in language models, and he fears a vicious cycle in which LLMs are increasingly trained on data already tainted by AI-generated content.
Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology, likewise believes that bias in LLMs is already a problem and will likely grow. He warns that political groups may try to influence LLMs to promote their own views above others’, and he sees the manipulation of training data as a genuine threat.
There have been some efforts to counterbalance bias in AI models. In March, one programmer developed a more right-leaning chatbot to draw attention to the biases he perceived in tools like ChatGPT. Elon Musk has promised that Grok, the chatbot built by his company xAI, will be “maximally truth-seeking” and less biased than other AI tools. In practice, however, Grok hedges on challenging political questions, in ways that may align with Musk’s own right-leaning views, given his strong support for Trump and his hardline stance on immigration.
The upcoming US election is unlikely to ease the divide between Democrats and Republicans, but a Trump victory could amplify talk of anti-woke AI. Musk offered an apocalyptic take on the issue, recalling an incident in which Google’s Gemini suggested that nuclear war would be preferable to misgendering Caitlyn Jenner. An AI programmed with such priorities, Musk speculated, might conclude that annihilating humanity would eliminate any possibility of future misgendering.