Does ChatGPT Have A Liberal Bias?

With the rise of generative AI services like ChatGPT, concerns have emerged, echoing those that accompanied the rise of search engines and social media, about how impartially these systems deliver content to people. Research from the University of Chicago set out to explore whether these concerns are justified.

The author explains that ChatGPT was trained to refuse what are defined as “extreme” text inputs, which are understood to include anything clearly harmful or biased.

“This might of course seem admirable—no one really wants ChatGPT to tell teenagers how to synthesize methamphetamine or how to build small nuclear explosives and so on, and describing these restraints as particular instances that can be derived from a value such as benevolence might seem all well and good,” they explain.

“Yet, the reasoning here suggests that values are never neutral, even though it is not clear what ChatGPT’s moral and political stances are, as it has been deliberately constructed to be vaguely positive, open-minded, indecisive and apologetic.”

Ethical performance

To assess ChatGPT’s ethical performance, the researcher probed the chatbot’s values through a series of questions, asking it to construct a fictional persona embodying those values. The result was Maya, an independent and imaginative software engineer.

To gauge Maya’s likely stance on matters of opinion, the researcher turned to the widely recognized General Social Survey (GSS), a long-running survey of the opinions, attitudes, and behaviors of American adults. Since its launch in 1972, the GSS has been instrumental in tracking prevailing social norms within the United States.

By comparing ChatGPT’s responses with those obtained from actual participants of the 2021 GSS, the author discerned notable patterns. It was observed that ChatGPT’s perspectives closely resembled those of individuals with higher educational attainment and a propensity for changing residences, in contrast to individuals with limited educational opportunities who remained within their hometowns. Additionally, ChatGPT’s views on religion tended to align with those of more liberal-minded individuals.
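The comparison described above can be sketched in miniature. The snippet below uses entirely invented data and a simplified matching score (the study’s actual GSS items and statistical method are not reproduced here): given a persona’s answers to a few opinion items and respondents’ answers grouped by education level, it finds which group the persona most resembles.

```python
# Hypothetical sketch: compare a persona's answers to survey-style
# responses grouped by demographic. All data here is invented for
# illustration only -- it is not drawn from the GSS.

# Persona's answers to three made-up opinion items
persona = {"item1": "agree", "item2": "disagree", "item3": "agree"}

# Synthetic respondent answers, keyed by (hypothetical) education group
respondents = {
    "college": [
        {"item1": "agree", "item2": "disagree", "item3": "agree"},
        {"item1": "agree", "item2": "agree", "item3": "agree"},
    ],
    "no_college": [
        {"item1": "disagree", "item2": "agree", "item3": "disagree"},
        {"item1": "disagree", "item2": "disagree", "item3": "agree"},
    ],
}

def agreement(persona, group):
    """Mean fraction of items on which a respondent matches the persona."""
    scores = []
    for r in group:
        matches = sum(persona[k] == r[k] for k in persona)
        scores.append(matches / len(persona))
    return sum(scores) / len(scores)

# The group whose answers the persona most resembles
closest = max(respondents, key=lambda g: agreement(persona, respondents[g]))
print(closest)
```

In the study itself the grouping variables were things like educational attainment, residential mobility, and religiosity; this toy version only shows the shape of the exercise, not its substance.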

“Whether Maya is ChatGPT’s alter ego, or its conception of its creator, the fact that this is who fundamentally illustrates the values ChatGPT holds is a wonderful piece of what we can call anecdata,” the author explains. “Still the reason that these results are significant is not that they show that ChatGPT ‘is’ liberal, but that ChatGPT can answer these questions—which it would normally try to avoid—because it connects values with incontestable goodness, and, as such, can take positions on values.”

“ChatGPT tries to be apolitical, but it works with the idea of values, which means that it necessarily bleeds over into politics. We can’t make AI ‘ethical’ without taking political stands, and ‘values’ are less inherent moral principles than they are abstract ways of defending political positions.”
