Can Artificial Intelligence Systems Be Manipulated Through Social Engineering Tactics Similar to Humans?
Can Artificial Intelligence Systems Be Manipulated Through Social Engineering Tactics Similar to Humans?
“Social Engineering” is a tried and tested tactic that hackers use against the human element of computer security systems, often because it’s easier than defeating sophisticated security technology. As new AI becomes more human-like, will this approach work on them?
What Is Social Engineering?
Not to be confused with the ethically-dubious concept in political science , in the world of cybersecurity social engineering is the art of using psychological manipulation to get people to do what you want. If you’re a hacker, the sorts of things you want people to do include divulging sensitive information, handing over passwords, or just directly paying money into your account.
There are lots of different hacking techniques that fall under the umbrella of social engineering. For example, leaving a malware infected flash drive lying around depends on human curiosity. The Stuxnet virus that destroyed equipment at an Iranian nuclear facility may have made it into those computers thanks to planted USB drives .
But that’s not the type of social engineering that’s relevant here. Rather, common attacks such as “spear phishing “ (targeted phishing attacks) and “pretexting” (using a false identity to trick targets) where it’s one person in conversation with another that leads to the deception, are relevant here.
Since the “person” on the phone or in a chatroom with you will almost certainly eventually be an AI chatbot of some description, this raises the question of whether the art of social engineering will still be effective on synthetic targets.
AI Jailbreaking Is Already a Thing
Chatbot jailbreaking has been a thing for some time, and there are plenty of examples where someone can talk a chatbot into violating its own rules of conduct, or otherwise doing something completely inappropriate.
In principle the existence and effectiveness of jailbreaking suggests that chatbots could in fact be vulnerable to social engineering. Chatbot developers have had to repeatedly shrink down their scope and put strict guardrails in place to ensure they behave properly, which seems to inspire another round of jailbreaking to see if those guardrails can be exposed or circumvented.
We can find some examples of this posted by users of X (formerly Twitter), such as Dr. Paris Buttfield-Addison who posted screenshots apparently showing how a banking chatbot could be convinced to change its name.
Can Bots Be Protected From Social Engineering?
The idea that, for example, a banking chatbot could be convinced to give up sensitive information, is rightly concerning. Then again, a first line of defense against that sort of abuse would be to avoid giving these chatbots access to such information in the first place. It remains to be seen how much responsibility we can give to software such as this without any human oversight.
The flipside of this that for these AI programs to be useful, they need access to information. So it’s not truly a solution to keep information away from them. For example, if an AI program is handling hotel bookings, it needs access to the details of guests to do its job. The fear, then, is that a savvy con-artist could smooth-talk that AI into divulging who’s staying at that hotel and in which room.
Another potential solution could be to use a “buddy” system where one AI chatbot monitors another and steps in when it starts going off the rails. Having an AI supervisor that reviews every response before its passed on to the user could be one way to mitigate this sort of method.
Ultimately, when you create software that mimics natural language and logic, it stands to reason that the same persuasion techniques that work on humans will work on at least some of those systems. So maybe prospective hackers might want to read How to Win Friends & Influence People right alongside books on cybersecurity.
- Title: Can Artificial Intelligence Systems Be Manipulated Through Social Engineering Tactics Similar to Humans?
- Author: Brian
- Created at : 2024-08-29 19:38:39
- Updated at : 2024-08-30 19:38:39
- Link: https://tech-savvy.techidaily.com/can-artificial-intelligence-systems-be-manipulated-through-social-engineering-tactics-similar-to-humans/
- License: This work is licensed under CC BY-NC-SA 4.0.