How Biased is ChatGPT When Discussing the Palestine-Israel Issue?

AI • Palestine
Nov 7, 2023
By: Sakina Mohamed
Graphics: Adib Zainal Abidin
Web Design: Ummul Syuhaida Othman
KUALA LUMPUR, November 7 (Bernama) -- ChatGPT has about 180.5 million users worldwide, but not all of them are aware that it is prone to artificial intelligence (AI) bias.
AI bias is the tendency of algorithms to reflect human biases. AI systems learn from the data they are trained on, and ChatGPT is no different: it learns from large datasets of text drawn from the internet, known as training data.
This training data can contain human biases or unfair correlations, causing the system to give answers that adversely affect certain groups, including discrimination on the basis of gender, race, nationality or religion.
Many of these datasets are scraped from platforms where content is predominantly in English and where the majority of users may hold strong biases on a given issue.
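To see how skewed data can translate into skewed answers, consider a deliberately simplified sketch in Python. This is not how ChatGPT is actually trained; the tiny corpus, topic labels and words below are invented purely for illustration. A model that merely counts which words appear alongside which topics will reproduce whatever imbalance exists in its training text.

```python
# Illustrative sketch only: a toy "model" that learns word-topic associations
# by counting. The corpus below is invented and deliberately one-sided.
from collections import Counter, defaultdict

# Toy training corpus: more documents about topic A than topic B,
# and each topic's documents carry a one-sided tone.
corpus = [
    ("topic_a", "peaceful resolution praised by community"),
    ("topic_a", "leaders celebrated for historic agreement"),
    ("topic_a", "citizens hopeful after talks"),
    ("topic_b", "violence condemned after clashes"),
]

# "Training": count how often each word appears with each topic.
word_topic_counts = defaultdict(Counter)
for topic, text in corpus:
    for word in text.split():
        word_topic_counts[word][topic] += 1

def most_associated_topic(word: str) -> str:
    """Return the topic a word is most associated with in the training data."""
    counts = word_topic_counts.get(word)
    if not counts:
        return "unknown"
    return counts.most_common(1)[0][0]

# Because the corpus is imbalanced and one-sided, the model "learns" that
# positive words belong to topic A and negative words to topic B. That
# reflects the data it was given, not reality.
print(most_associated_topic("hopeful"))   # topic_a
print(most_associated_topic("violence"))  # topic_b
```

Large language models are vastly more sophisticated than this counting exercise, but the underlying principle is the same: whatever imbalances exist in the text they are trained on can surface in the answers they give.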
In addition, the training data of the current version of ChatGPT, GPT-3.5, only extends to January 2022. As such, it may not reflect changing worldviews on the Palestine-Israel issue.
As of September 2023, the ChatGPT website had generated over 1.5 billion visits. The majority of its users are between 18 and 34 years old, and many are students.
While ChatGPT has immense potential for good, users need to be aware of these biases and other limitations of large language models. Otherwise, they risk spreading misinformation and amplifying harm.
(This information is correct as of November 2023)
-- BERNAMA
This explainer can also be viewed on our Instagram page.