OpenAI’s new model, called o1, appears to think and ponder as you use it. But is it thinking? Or pondering? And what does it mean if it is? Would that make it worth the risks, which appear greater and more plausible than ever? How do you balance the risk of destroying humanity against the possibility of improving it? This is the thing about artificial intelligence: it has a nasty penchant for getting all existential on you.