OpenAI has confirmed that it has built a watermarking method and a detection tool for ChatGPT-generated text. In fact, the firm revealed that both have been ready for roughly a year, but they have yet to roll out due to divided opinions within the company.
First reported by The Wall Street Journal (WSJ) and later confirmed by OpenAI itself in a recently published blog post, the detection tool is said to identify ChatGPT-generated text with a high degree of accuracy. It does this by checking for the aforementioned watermark, which is embedded in the text as it is generated.
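OpenAI has not disclosed how its watermark actually works, but publicly available research gives a rough idea of what text watermarking tends to look like. The Python sketch below illustrates a simple "green list" scheme of the kind described in academic papers, purely as a conceptual example; the secret key, the word-pair splitting and the 50/50 green split are all assumptions made for illustration, not OpenAI's actual implementation.

```python
import hashlib
import math

SECRET_KEY = "example-key"  # hypothetical secret shared by the generator and the detector


def is_green(prev_token: str, token: str) -> bool:
    # Deterministically mark roughly half of all possible next tokens as "green",
    # keyed on the secret and the previous token.
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def watermark_score(text: str) -> float:
    # Detection: count how many consecutive word pairs land on the green list.
    # Text generated to favour green tokens scores well above the ~50% expected
    # by chance; ordinary human writing scores near zero.
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(a, b) for a, b in pairs)
    expected = len(pairs) * 0.5
    std_dev = math.sqrt(len(pairs) * 0.25)
    return (greens - expected) / std_dev  # z-score: high means "likely watermarked"
```

In a scheme like this, the generating model would nudge its sampling toward "green" choices at each step, leaving a statistical fingerprint that the detector can measure without changing how the text reads.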
However, the company notes that the system has its limitations. It is said to be less effective when the text has been reworded by another generative model, run through a translation tool, or altered with workarounds such as asking ChatGPT to insert a special character between every word and then deleting that character afterwards.
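Continuing the illustrative scheme above, the special-character trick works because a statistical watermark lives in the exact sequence of tokens the model produced; strip out the inserted character and the detector is looking at a different sequence. Again, this is only a toy demonstration, not OpenAI's system.

```python
# Hypothetical watermarked output where the model was told to put "@" after every word.
raw_output = "The@ quick@ brown@ fox@ jumps@ over@ the@ lazy@ dog@"
cleaned = raw_output.replace("@", "")

print(raw_output.split())  # ['The@', 'quick@', 'brown@', ...] - the sequence that carried the bias
print(cleaned.split())     # ['The', 'quick', 'brown', ...]   - a different sequence entirely
# The statistical signal the detector relies on does not survive the edit.
```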
In the same blog post, OpenAI explained that neither tool has been released, partly because surveys suggested ChatGPT users might use the chatbot less if watermarking were introduced. The company added that the move could also stigmatise the use of AI as a “useful writing tool for non-native English speakers”. Despite this, WSJ reports that some of the firm’s employees believe watermarking is the best way forward.
Apart from watermarking, OpenAI says it is also exploring other potential ways to identify ChatGPT-generated text, including classifiers and embedded metadata, both of which are still in the early stages of development.
(Source: WSJ / OpenAI official blog)