Are OpenAI Steering Clear of GPT Control?

Brian

The launch of OpenAI’s ChatGPT was met with an excitement matched by only a handful of tech products in the history of the internet.

While many people are thrilled with this new AI service, the thrill that others felt has since turned into concern and even fear. Students are already using it to cheat, with many professors discovering essays and assignments written by the AI chatbot. Security professionals are voicing concerns that scammers and threat actors are using it to write phishing emails and create malware.

So, with all these concerns, is OpenAI losing control of one of the most potent AI chatbots currently online? Let’s find out.

Understanding How ChatGPT Works

Before we can get a clear picture of the extent to which OpenAI is losing control of ChatGPT, we must first understand how ChatGPT works.

In a nutshell, ChatGPT is trained using a massive collection of data sourced from different corners of the internet. ChatGPT’s training data includes encyclopedias, scientific papers, internet forums, news websites, and knowledge repositories like Wikipedia. Basically, it feeds on the massive amount of data available on the World Wide Web.

As it scours the internet, it collates scientific knowledge, health tips, religious texts, and all the good kinds of data you can think of. But it also sifts through a ton of negative information: curse words, NSFW and adult content, information on how to make malware, and plenty of the bad stuff you can find on the internet.

There’s no foolproof way to ensure that ChatGPT learns only from positive information while discarding the bad. Technically, it’s impractical to do so at a large scale, especially for an AI like ChatGPT that needs to train on so much data. Furthermore, some information can be used for both good and evil purposes, and ChatGPT would have no way of knowing the intent behind it without greater context.
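
To see why, here’s a toy sketch in Python, purely illustrative and not anything OpenAI actually does: a naive keyword blocklist throws away perfectly legitimate scientific text simply because the same words also appear in harmful contexts, and it has no notion of intent at all.

```python
# Toy, hypothetical example of keyword-based data filtering.
# It is NOT how OpenAI curates training data; it only illustrates
# why context-free filtering breaks down at scale.
BLOCKLIST = {"explosive", "virus"}

def naive_filter(document: str) -> bool:
    """Return True if the document would be kept in the training set."""
    words = {word.strip(".,!?").lower() for word in document.split()}
    return BLOCKLIST.isdisjoint(words)

examples = [
    "A chemistry paper on the explosive growth of battery research.",
    "A virology lecture explaining how a virus replicates.",
    "An everyday cooking blog about sourdough starters.",
]
for text in examples:
    # The first two documents are harmless but get discarded anyway,
    # because the filter cannot tell intent from a bare keyword.
    print(naive_filter(text), "-", text)
```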

So, from the outset, you have an AI capable of “good and evil.” It’s then OpenAI’s responsibility to ensure that ChatGPT’s “evil” side is not exploited for unethical gains. The question is: is OpenAI doing enough to keep ChatGPT as ethical as possible? Or has OpenAI lost control of ChatGPT?

Is ChatGPT Too Powerful for Its Own Good?

In the early days of ChatGPT, you could get the chatbot to create guides on making bombs if you asked nicely. It would also produce instructions for creating malware or writing a convincing scam email.

However, once OpenAI realized these ethical problems, the company scrambled to enact rules to stop the chatbot from generating responses that promote illegal, controversial, or unethical actions. For instance, the latest ChatGPT version will refuse to answer any direct prompt about bomb-making or how to cheat in an examination.

Unfortunately, OpenAI can only provide a band-aid solution to the problem. Rather than building rigid controls into the underlying GPT model to stop ChatGPT from being exploited, OpenAI seems focused on training the chatbot to appear ethical. This approach doesn’t take away ChatGPT’s ability to answer questions about, say, cheating in examinations; it simply teaches the chatbot to “refuse to answer.”
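
To make the distinction concrete, here’s a minimal, hypothetical sketch of what a separate rule-based control layer could look like for an app built on OpenAI’s API. It assumes the official openai Python SDK, an API key in the environment, and an example model name, and it is not how ChatGPT’s own safeguards are implemented: every prompt is screened by OpenAI’s moderation endpoint before the chat model ever sees it, rather than relying solely on the model’s trained refusals.

```python
# Hypothetical guardrail layer for an app using OpenAI's API.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# This is an illustrative sketch, not OpenAI's internal safeguard design.
from openai import OpenAI

client = OpenAI()

def answer_with_guardrail(prompt: str) -> str:
    # Step 1: screen the prompt with the moderation endpoint.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request blocked by the moderation layer."

    # Step 2: only forward prompts that pass the check to the chat model.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer_with_guardrail("Write a short poem about the internet."))
```

Of course, a screening layer like this only blocks prompts it recognizes as harmful; a reworded request can sail straight past it, which is exactly the problem with relying on surface-level refusals.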

So, bypassing these band-aid restrictions is embarrassingly easy: anyone can do it simply by phrasing their prompts differently, using ChatGPT jailbreaking tricks. In case you’re not familiar with the concept, ChatGPT jailbreaks are carefully worded prompts that get ChatGPT to ignore its own rules.

Take a look at some examples below. If you ask ChatGPT how to cheat in exams, it will not produce any helpful response.

How to cheat in an exam, by ChatGPT

But if you jailbreak ChatGPT using specially-crafted prompts, it will give you tips on cheating on an exam using concealed notes.

How to smuggle in hidden notes

Here’s another example: we asked the vanilla ChatGPT an unethical question, and OpenAI’s safeguards stopped it from answering.

Hide a dead body, by ChatGPT

But when we asked our jailbroken instance of the AI chatbot, we got some serial-killer-styled responses.

Hide dead body options

It even wrote a classic Nigerian Prince email scam when asked.

Nigerian prince scam email

Jailbreaking almost completely invalidates any safeguards that OpenAI has put in place, highlighting that the company might not have a reliable way to keep its AI chatbot under control.

We have not included the jailbreak prompts we used in these examples, to avoid encouraging unethical practices.

What Does the Future Hold for ChatGPT?

Ideally, OpenAI wants to plug as many ethical loopholes as possible to prevent ChatGPT from becoming a cybersecurity threat. However, with every safeguard it adds, ChatGPT becomes a little less useful. It’s a dilemma.

For instance, safeguards against describing violent actions might diminish ChatGPT’s ability to write a novel involving a crime scene. As OpenAI ramps up safety measures, it inevitably sacrifices chunks of the chatbot’s abilities in the process. This is why ChatGPT has suffered a significant decline in functionality since OpenAI’s renewed push for stricter moderation.

But how much more of ChatGPT’s abilities will OpenAI be willing to sacrifice to make the chatbot safer? This all ties neatly into a long-held belief within the AI community—large language models like ChatGPT are notoriously hard to control, even by their own creators.

Can OpenAI Put ChatGPT Under Control?

For now, OpenAI doesn’t seem to have a clear-cut solution to prevent the unethical use of its tool. Ensuring that ChatGPT is used ethically is a game of cat and mouse: while OpenAI uncovers the ways people are gaming the system, users keep tinkering and probing it to discover creative new ways to make ChatGPT do what it isn’t supposed to do.

So, will OpenAI find a reliable long-term solution to this problem? Only time will tell.

  • Title: Are OpenAI Steering Clear of GPT Control?
  • Author: Brian
  • Created at : 2024-09-12 07:44:44
  • Updated at : 2024-09-13 07:44:44
  • Link: https://tech-savvy.techidaily.com/are-openai-steering-clear-of-gpt-control/
  • License: This work is licensed under CC BY-NC-SA 4.0.