With the advent of AI-generated content, especially text, there are a number of services that claim to tell you whether what you’re reading was, well, generated rather than written by a person. One of them was AI Classifier, made by OpenAI, the creator of ChatGPT. Somewhat ironically, the company has now decided to shut the tool down. And the reason behind the decision? Its “low rate of accuracy”.
This was revealed in an update to the blog post that originally announced the AI Classifier. The update was quietly added too, noting that the tool was taken offline last week, on 20 July. That said, OpenAI says it has “made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated”.
To be fair, OpenAI did mention up front that the AI Classifier had plenty of limitations from the get-go. Not only was the tool deemed “very unreliable” by its own maker for texts below 1,000 characters, but even longer ones were sometimes mislabelled. Lightly edited AI-generated text could also evade detection by the tool.
With all that in mind, OpenAI shutting down the AI Classifier can probably be chalked up to the tool not showing significant improvement since its introduction. At this stage, it’s probably easier to just check the citations in a piece of text to figure out whether it was written by a person after some research, or generated by a tool. While humans are prone to misinterpreting references at times, at least most professionals in the field don’t make up their own sources.
(Source: OpenAI)