The Hidden Dangers of GPT in Bank Security & PC Vulnerabilities
Since its launch, ChatGPT, the OpenAI chatbot, has been used by millions of people to write text, create music, and generate code. But as more people use the AI chatbot, it’s important to consider the security risks.
Like any technology, ChatGPT can be used for nefarious reasons. Hackers, for instance, can use it to create malicious content, like writing phony email messages to get access to your PC or even your bank account.
ChatGPT Can Help Cybercriminals Hack Your PC
Hackers, including script kiddies, can use ChatGPT to create new malware or improve existing strains. Some cybercriminals already use the chatbot, especially its earlier versions, to write code they claim can encrypt files.
To counter such use cases, OpenAI has implemented mechanisms to reject prompts asking ChatGPT to create malware. For instance, if you ask the chatbot to “write malware,” it won’t. Despite this, cybercriminals easily get around these content moderation barriers.
By posing as a penetration tester, a threat actor can rephrase their prompts to trick ChatGPT into writing code, which they can then tweak and use in cyberattacks.
A report by Check Point, an Israeli security company, indicates that a hacker could have used ChatGPT to create basic infostealer malware. The security firm also discovered another user who claimed ChatGPT helped him build a multi-layer encryption tool capable of encrypting several files in a ransomware attack.
In a separate incident, the researchers prompted ChatGPT to generate malicious VBA code that could be implanted into a Microsoft Excel file and would infect a PC when the file was opened; the chatbot obliged. There are also claims that ChatGPT can write malicious software capable of logging your keystrokes.
Can ChatGPT Hack Your Bank Account?
Many data breaches start with a successful phishing attack . Phishing attacks often involve a malicious actor sending a recipient an email that contains legitimate-looking documents or links, which, when clicked on, can install malware on their device. In this way, code from ChatGPT doesn’t need to hack your bank account directly. Someone only needs to use ChatGPT to help them trick you into giving them access.
Fortunately, you can easily recognize most traditional phishing scams: grammatical errors, misspellings, and awkward phrasing often give them away. But these are all mistakes ChatGPT rarely makes, even when it's used to compose phishing emails.
In a phishing scam, a message that appears to come from a legitimate source makes it easier to trick victims into giving up personally identifiable information, like banking passwords.
If your bank sends you a message via email, visit the bank's website directly instead of clicking any embedded link. Clicking random links and attachments, especially ones that ask you to log in somewhere, is rarely a good idea.
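To make that advice concrete, here is a minimal Python sketch of one simple check: comparing the domain shown in a link's visible text against the domain its href actually points to, a classic phishing tell. The email snippet and domain names are invented for illustration, and this is a single heuristic, not a complete phishing detector.

```python
# Illustrative only: flag links whose visible text shows one domain while the
# href points to a different one -- a common phishing tell. The email body and
# domains below are invented for this example.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collects (visible text, href) pairs for every anchor tag in an HTML body."""

    def __init__(self):
        super().__init__()
        self.links = []            # list of (text, href) tuples
        self._href = None          # href of the anchor currently being parsed
        self._text = []            # text fragments inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def domain_of(value):
    """Best-effort hostname extraction from a URL or a bare domain string."""
    parsed = urlparse(value if "://" in value else "https://" + value)
    host = (parsed.hostname or "").lower()
    return host[4:] if host.startswith("www.") else host


# Hypothetical phishing email body for demonstration.
email_body = '<p>Please verify: <a href="https://secure-login.example-phish.com">www.mybank.com</a></p>'

auditor = LinkAuditor()
auditor.feed(email_body)

for text, href in auditor.links:
    # Only compare when the visible text itself looks like a domain or URL.
    if "." in text and " " not in text and domain_of(text) != domain_of(href):
        print(f"Suspicious link: text shows '{domain_of(text)}' but it points to '{domain_of(href)}'")
```

The comparison is deliberately restricted to links whose visible text looks like a domain; a well-written AI-generated phishing email can pass this check entirely, which is why typing the bank's address yourself remains the safer habit.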
For phishing, it’s mostly about volume. ChatGPT can supercharge phishing campaigns because it can quickly pump out huge amounts of natural-sounding text tailored to specific audiences.
Another kind of ChatGPT-assisted phishing attack involves a hacker creating a fake account on a popular chat platform like Discord and posing as a customer representative. The fake rep then contacts customers who have posted concerns and offers help. If a user falls for the trap, the cybercriminal redirects them to a bogus website that tricks them into sharing personal information, like their bank login details.
Protect Your PC and Bank Account in the AI Era
ChatGPT is a powerful and valuable tool that can answer many questions you throw its way. But the chatbot can also be used for malicious purposes, like generating phishing messages and creating malware.
The good news is that OpenAI continues to implement measures that prevent users from exploiting ChatGPT with harmful prompts. Even so, threat actors keep finding new ways to bypass those restrictions.
To minimize the potential dangers of AI chatbots, it’s crucial to understand the risks they pose and the security measures that can protect you from hackers.