Exploring the Dark Side of Language Models on Code

Brian

ChatGPT has made some major ripples in the tech space, with people using it for just about everything. This AI-powered chatbot can write stories, provide information, translate text, and so much more. But could the capabilities of ChatGPT be leveraged for darker purposes? Can ChatGPT be used to write malware, and, if so, what does this mean for our cybersecurity?

Can ChatGPT Write Malware?

ChatGPT works from the prompts you give it. So, if a user asked it to write malware, would it comply?

ChatGPT has indeed proven capable of writing malware. In January 2023, a story emerged about cybercriminals using ChatGPT to create malicious programs. A hacking forum user had uploaded a post about a Python-based infostealer they had written using ChatGPT. Infostealers are Trojan malware programs specifically designed to exfiltrate valuable data.

This is undoubtedly concerning, as ChatGPT is now incredibly popular and very easy to use.

A Recorded Future report stated that ChatGPT “lowers the barrier to entry for threat actors with limited programming abilities or technical skills”, essentially making it easier to conduct cybercrime. The report further noted that ChatGPT “can produce effective results with just an elementary level of understanding in the fundamentals of cybersecurity and computer science.”

What’s more, Recorded Future reported that ChatGPT can also help in various other forms of cybercrime, including “social engineering, disinformation, phishing, malvertising, and money-making schemes.”

Giving beginner cybercriminals the ability to have malware written for them opens the door to many would-be attackers and essentially automates parts of the malware creation process.

So is ChatGPT a cybersecurity threat? The answer, sadly, is yes.

ChatGPT’s Limitations

While ChatGPT will produce malicious code in response to certain prompts, it only seems able to write very simple malware at the time of writing. The aforementioned Recorded Future report noted that the ChatGPT-generated malware found on illicit platforms was “buggy but functional”.

That’s not to say there are no risks here. ChatGPT can still give cybercriminals an easier route to scamming and attacking victims, and may help hackers discover new avenues alongside their usual methods.

What’s more, there’s no knowing whether ChatGPT, or another AI-powered chatbot, will one day evolve to the point where sophisticated malware can be produced. This is the beginning, not the end.

Are Hackers Using ChatGPT?

It’s clear that ChatGPT is already being abused by malicious actors, but the threat is currently quite mild. However, as AI advances, we may see more sophisticated chatbots used by cybercriminals to build far more dangerous malware. Unfortunately, only time will tell whether ChatGPT plays a role in the biggest cyberattacks.

  • Title: Exploring the Dark Side of Language Models on Code
  • Author: Brian
  • Created at: 2024-08-18 10:04:29
  • Updated at: 2024-08-19 10:04:29
  • Link: https://tech-savvy.techidaily.com/exploring-the-dark-side-of-language-models-on-code/
  • License: This work is licensed under CC BY-NC-SA 4.0.