Categories USA News

OpenAI wants to stop ChatGPT from validating users’ political views

“ChatGPT shouldn’t have political bias in any direction.”

That’s OpenAI’s stated goal in a new research paper released Thursday about measuring and reducing political bias in its AI models. The company says that “people use ChatGPT as a tool to learn and explore ideas” and argues “that only works if they trust ChatGPT to be objective.”

But a closer reading of OpenAI’s paper reveals something different from what the company’s framing of objectivity suggests. The company never actually defines what it means by “bias.” And its evaluation axes show that it’s focused on stopping ChatGPT from several behaviors: acting like it has personal political opinions, amplifying users’ emotional political language, and providing one-sided coverage of contested topics.

Read full article

Comments

More From Author

You May Also Like

In these difficult times, Farragut’s Public League title in 16-inch softball is source of pride

Operation Midway Blitz, the federal immigration crackdown that began in September, has disrupted daily life…

‘Yo tengo papeles’: Teacher detained by immigration agents in North Center day care

Federal immigration agents entered a North Center day care and arrested a teacher Wednesday before…

“You must be held accountable”: Mamdani gears up for ICE showdown

Zohran Mamdani sent a message to Immigration and Customs Enforcement agents mere hours after handily…