OpenAI Introduced a Tool for Detecting Text Generated by Artificial Intelligence

OpenAI has released a tool intended to determine whether a given text was generated using artificial intelligence (for example, ChatGPT) or written by a person.

However, according to the developers themselves, this tool is “not entirely reliable” and correctly identifies text written by AI in only 26% of cases.

Information security specialists, by the way, said that Russian Cybercriminals Seek Access to OpenAI ChatGPT.

Let me remind you that we also wrote that Researchers Created a TickTock Device to Detect Wiretapping, and that Apple Introduces Lockdown Mode to Protect against Spying.

The use of AI in general, and ChatGPT in particular, is currently a serious concern for teachers at many educational institutions.

For example, according to Business Insider, ChatGPT is already banned in school districts in New York, Seattle, Los Angeles, and Baltimore, and NYU professors recently told students on the first day of class that using ChatGPT without explicit permission would amount to plagiarism, with all the ensuing consequences.

The New York Times even writes that some teachers now require students to hand in handwritten papers, while others, by contrast, are trying to incorporate ChatGPT into their classes, for example by analysing AI responses with students.

While the CEO of OpenAI, the company that created the ChatGPT language model, recently stated in an interview that “generative text is something we all need to adapt to,” as happened with calculators in the past, it has already become clear that in many areas there is a great need for tools to detect AI-created content.

Last week, 22-year-old student Edward Tian introduced just such a detector of ChatGPT-generated content, named GPTZeroX.

You can paste a piece of text or upload a document into the application, and it will estimate how much of the text was written by AI, highlighting the corresponding sentences.

According to Tian, the app was wrong less than 2% of the time when tested on a dataset of BBC news articles and AI-generated articles produced from the same prompts. It seems this approach allowed the student to create a more reliable tool than the solution the OpenAI developers themselves have now presented.

In a blog post, the OpenAI team immediately warns that their “classifier is not entirely reliable.” The developers say that the tool correctly identifies only 26% of AI-written text, while human-written texts are mistakenly labelled as AI-generated in 9% of cases (that is, false positives). In addition, the tool does not work well on languages other than English or on short texts. That said, the reliability of the classifier should increase as the length of the input text increases.
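
To make these figures concrete, here is a minimal, hypothetical Python sketch of how such a detection rate and false-positive rate could be measured on a labelled evaluation set. The classify() function is an assumed stand-in for any AI-text detector (it is not OpenAI's actual API), and the way the samples are labelled is purely illustrative.

    # Hypothetical sketch: measuring the kind of figures OpenAI reports
    # (a ~26% detection rate on AI-written text and a ~9% false-positive
    # rate on human-written text) against a labelled evaluation set.
    def evaluate(samples, classify):
        """samples: list of (text, is_ai_written) pairs.
        classify: stand-in detector returning True if it flags the text as AI-generated."""
        ai_total = ai_flagged = 0        # AI-written texts and how many were caught
        human_total = human_flagged = 0  # human texts and how many were wrongly flagged

        for text, is_ai_written in samples:
            flagged = classify(text)
            if is_ai_written:
                ai_total += 1
                ai_flagged += flagged
            else:
                human_total += 1
                human_flagged += flagged

        detection_rate = ai_flagged / ai_total              # share of AI text correctly identified
        false_positive_rate = human_flagged / human_total   # share of human text mislabelled as AI
        return detection_rate, false_positive_rate

In other words, the 26% figure describes how much AI-written text the classifier actually catches, while the 9% figure describes how often it wrongly accuses a human author.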

OpenAI promises that the tool will improve over time, but for now the developers welcome feedback from teachers, parents, students, educational service providers, journalists, and other people whose work ChatGPT can significantly affect.

About The Author
Vladimir Krasnogolovy