AI chatbots are undoubtedly powerful and useful tools. However, the ability to distinguish between human-generated and AI-generated content is becoming a prominent issue.
In response, tools like ZeroGPT have emerged, promising to distinguish AI-generated text from human writing. But do they actually work?
Let’s take a closer look at AI detection tools and see whether they can really tell the difference.
Testing AI Detection Tools
They say the proof of the pudding is in the eating, so let’s run some tests and see just how effective these tools are. It’s impossible to test every tool, so we’re focusing on one of the most popular: ZeroGPT.
For material, we thought it would be quite fun to give ChatGPT a crack at writing an intro for this article and then compare it against the “human-generated” intro:
Test One: Comparing a Human-Written and an AI-Generated Article Intro
The first thing we did was get ChatGPT to generate an introduction. We entered the title and gave it no further information. For the record, we used GPT-3.5 for the test.
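If you want to reproduce this step yourself, here’s a minimal sketch using OpenAI’s Python SDK. Note that we used the ChatGPT web interface for the test; the API call below is an assumed equivalent, and the prompt wording is our own.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask GPT-3.5 to write an intro from nothing but the article title,
# mirroring the test above. The prompt phrasing is illustrative.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Write an introduction for an article titled: "
                "'Skepticism Grows Over ZeroGPT & Detection Tools'"
            ),
        }
    ],
)

intro_text = response.choices[0].message.content
print(intro_text)
```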
We then copied the text and pasted it into ZeroGPT. As you can see, the results were less than stellar.
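If you’d rather script the detection step too, the sketch below posts text to ZeroGPT over HTTP. The endpoint URL, payload field, and response shape here are assumptions rather than a documented contract, so check ZeroGPT’s own API documentation before relying on them.

```python
import requests

# NOTE: the endpoint and payload field below are assumptions;
# consult ZeroGPT's API documentation for the current contract.
ZEROGPT_URL = "https://api.zerogpt.com/api/detect/detectText"

def detect_ai_text(text: str) -> dict:
    """Submit text to ZeroGPT and return its raw JSON verdict."""
    response = requests.post(
        ZEROGPT_URL,
        json={"input_text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# e.g. check the GPT-3.5 intro generated in the previous sketch
result = detect_ai_text(intro_text)
print(result)
```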
An inauspicious start, but it does illustrate just how convincing AI chatbots can be. To complete the test, we let ZeroGPT analyze a human-created draft intro.
At least it got this part right. Overall, though, ZeroGPT failed this round: it flagged part of the AI-generated introduction as suspect, but it fell well short of identifying the text as machine-written.