As the fervor around generative AI grows, critics have called on the creators of the tech to take steps to mitigate its potentially harmful effects. Text-generating AI in particular has gotten a lot of attention, and with good reason: students could use it to plagiarize, content farms could use it to spam and bad actors could use it to spread misinformation.
OpenAI bowed to pressure several weeks ago, releasing a classifier tool that attempts to distinguish between human-written and synthetic text. But it’s not particularly accurate; OpenAI estimates that it misses 74% of AI-generated text.
In the absence of a reliable way to spot text originating from an AI, a cottage industry of detector services has sprung up. GPTZero, developed by a Princeton University student, claims to use criteria including "perplexity" to determine whether text is AI-generated.
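For context, "perplexity" measures how predictable a passage is to a language model: the lower the score, the more confidently the model can guess each next word, and machine-generated text tends to read as more predictable than human writing. Below is a minimal sketch of the idea, scoring text with the open source GPT-2 model via Hugging Face's transformers library; the model choice and the cutoff are illustrative assumptions, not GPTZero's actual method.

```python
# A sketch of perplexity-based detection, not GPTZero's actual code.
# Assumes `pip install torch transformers`; GPT-2 and the cutoff are stand-ins.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the inputs, the model returns the mean
        # cross-entropy loss; exponentiating it gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Illustrative cutoff only: low perplexity is a weak hint that a language
# model produced the text, not proof.
THRESHOLD = 50.0
ppl = perplexity("The quick brown fox jumps over the lazy dog.")
print(f"perplexity = {ppl:.1f} ->",
      "flagged as AI-like" if ppl < THRESHOLD else "reads as human")
```

Single scores like this are easy to fool, which is why GPTZero also cites signals such as "burstiness," the variation in perplexity from sentence to sentence, and why detectors built on them so often misfire.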