OpenAI’s new model is better at reasoning and, occasionally, deceiving

Illustration by Cath Virginia / The Verge | Photos by Getty Images

In the weeks leading up to the release of OpenAI’s newest “reasoning” model, o1, independent AI safety research firm Apollo found a notable issue: the model produced incorrect outputs in a new way. Or, to put it more colloquially, it lied.

Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model’s chain of thought — a feature that’s supposed to mimic how humans break down complex...

