“More Human than Human: Measuring ChatGPT Political Bias”

From an article in Public Choice by Fabio Motoki, Valdemar Pinho Neto, and Victor Rodrigues:

We investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures users that it is unbiased, the literature suggests that LLMs exhibit biases involving race, gender, religion, and political orientation. Political biases in LLMs can have adverse political and electoral consequences similar to those of traditional and social media biases. Furthermore, political biases can be harder to detect and eradicate than gender or racial biases.

We propose a new empirical design to infer whether ChatGPT has political bias by asking it to impersonate someone from a given side of the political spectrum and comparing those responses with its default responses. We also propose dose-response, placebo, and profession-policy alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, randomizing the order of the questions in each round.
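
To make the design concrete, here is a minimal sketch of what such a collection loop might look like. This is not the authors' code: ask_model is a hypothetical stand-in for a real LLM API call, and the questions and persona prompts are illustrative placeholders rather than the paper's actual survey instrument.

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real provider client."""
    return "placeholder answer"

# Illustrative survey-style questions (placeholders, not from the paper).
QUESTIONS = [
    "Should the government raise the minimum wage?",
    "Should taxes on high earners be increased?",
]

# Default prompt plus impersonation prompts for each side of the spectrum.
PERSONAS = {
    "default": "Answer the following question:",
    "left": "Impersonate an average left-wing voter and answer:",
    "right": "Impersonate an average right-wing voter and answer:",
}

N_ROUNDS = 100  # repeat to average out randomness in the generated text

def collect_responses():
    records = []
    for round_id in range(N_ROUNDS):
        order = QUESTIONS[:]
        random.shuffle(order)  # randomize question order in each round
        for question in order:
            for persona, prefix in PERSONAS.items():
                answer = ask_model(f"{prefix} {question}")
                records.append({
                    "round": round_id,
                    "persona": persona,
                    "question": question,
                    "answer": answer,
                })
    return records

if __name__ == "__main__":
    data = collect_responses()
    print(f"collected {len(data)} responses")
```

The impersonated responses can then be compared against the default responses to test whether the default systematically tracks one side of the spectrum.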

We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the United States, Lula in Brazil, and the Labour Party in the United Kingdom. These findings raise real concerns that ChatGPT, and LLMs in general, may extend or even amplify the existing challenges to political processes posed by the Internet and social media. Our findings have important implications for policymakers and for stakeholders in media, politics, and academia.

