Introduction
Good security exists to maintain a network’s confidentiality, integrity, and availability. IT service providers work around the clock to identify and head off vulnerabilities and threats, and, when that’s not enough, to contain damage before it becomes catastrophic. One of those threats is social engineering. There is very little that can be done on the technology side to prevent it, because it attacks points outside of the network itself: your employees, and you. A social engineering attack can hand over the keys to the kingdom, circumventing hardware firewalls, passwords, and just about everything else. I’ll be summarizing this paper, which covers the psychology involved.
First, some background.
Social engineering is very similar to traditional fraud: it deceives people into giving out information or access, and many of the tools are the same, which you can read about below. Without a good security policy in place, the tools your organization uses to run smoothly – like authority, a chain of command, and uniforms or badges – can be turned inside-out to trick people. I’ve written about good policy, and the paper linked above has its own prescriptions. Here are seven tools social engineers use, and why they work according to psychology.
1. Strong Affect
Affect is what psychologists call emotion. A strong affect can be used to override people’s logical centers – hence the phrase “blinded by rage” and the numerous people you’ve encountered who are too afraid or excited to act rationally. A hacker can call someone on the phone and tell them, “You have won a million dollars, give me your credit card number and I’ll wire you the money,” or “I am with Microsoft, and we have detected that your PC is infected with a virus.” The second example also relies on authority, which comes up again in point 6.
A strong affect interferes with the logical brain: you can’t make rational counterarguments or evaluate claims properly while you’re in that state. Hackers are also fond of catching you by surprise, by calling very early or very late, or by using emotionally charged material.
2. Overloading
Have you ever been the victim of fast talk? A salesman can trick you into signing on the dotted line by presenting a lot of information very quickly, and a hacker will do the same. You don’t have the time or the presence of mind to challenge a premise if it’s presented quickly and sandwiched between truisms. Effectively, people can be forced to mentally shut down if you make them process too much. A related phenomenon is analysis paralysis: given too many options, you can’t decide on any of them. That’s why restaurant menus have been shrinking over time, with exceptions like The Cheesecake Factory and other places that put emphasis on quantity and options.
Arguing from an unexpected perspective can also cause one to feel overloaded: you spend so much effort trying to understand the new perspective that you can’t engage logically with the argument itself. The principle of overloading comes down to limiting your ability to scrutinize and process information. If you’ve ever seen a professional debate, there’s a chance you’ve seen a “Gish Gallop” take place: one contestant throws out argument after argument without regard for their opponent, the opponent inevitably misses something, and that omission is used as a “gotcha”.
3. Reciprocation
It’s only polite to give back when people give you a gift. Hackers take advantage of this principle, offering you some small tidbit and expecting to be paid back. The tidbit can even be a mere promise of something, and that’s how they get you: it works even if you never requested anything from them. They promise you something, so you promise something in return. The problem is, you made your promise in good faith, and they didn’t.
One insidious tactic, known as reverse social engineering, is for a hacker to harm your system in some way and then call you up asking if you need help with your computers. This takes advantage of reciprocity even though you never asked for their help, and they’re the ones who caused the problem. You don’t know that, of course, so they are elevated in your eyes: you feel indebted to them because they came in your hour of need. This is obviously an ideal situation for the attacker.
Behavioral experiments have shown that when two people disagree, if one yields on a point, no matter how small, the other feels compelled to do the same. A hacker makes several requests, yields on one, and the target yields on another in turn. This works in corporate environments, too: there is an unwritten bartering system between employees and departments that an attacker can tap into. That system is invaluable to an employee who wants to succeed, and just as valuable to an attacker who takes part in bad faith.
4. Deceptive Relationships
Hey, everyone likes friends. But sometimes people want you to think you’re friends when they only want to take advantage of you. For example, well-known hacker Kevin Mitnick conned someone by sharing information and technology while badmouthing “Kevin Mitnick”; the target, of course, didn’t know he was talking to Kevin Mitnick himself. Another example is an attack on AOL. Someone called in and talked to technical support for over an hour, and over that time the hacker mentioned that their car was for sale. The tech provided his email address, and when he opened the email from the hacker, the system was compromised and a backdoor was installed.
Another way a hacker can quickly form a relationship is by making it seem that they have a lot in common with their target. Believing that someone is similar to you provides a strong incentive to treat them favorably. People use commonality all the time when forming relationships; consider how much easier it is to make friends at church or at your workplace than with strangers off the street.
5. Diffusion of Responsibility and Moral Duty
If someone believes they will be held accountable, they become more conscious of what they are doing. So a hacker makes targets feel that giving out information won’t be their fault. Moral duty is a common trigger, too: the target is made to feel that they are saving another employee or helping the company. In effect, the target is led to believe it is their moral duty to perform the requested action, and that they won’t be held personally responsible if anything bad happens.
6. Authority
People respond to authority. In one study, nurses were instructed over the phone to administer dangerously high doses of a medication that patients weren’t supposed to receive in the first place. The orders were said to come from a physician the nurses had, of course, never met, and they should never have been carried out: they went against the Hippocratic oath and hospital policy. Yet 95% of the nurses started to comply and had to be intercepted. In a lot of cases, all a hacker has to do is tell the target that they are acting on their boss’s behalf.
An environment where authority can never be questioned is a security risk, but that’s not to say employees should be allowed to do whatever they want. What you want is an environment where orders from above are verified. A call-back procedure, as detailed in Anatomy of a Social Engineering Attack, also linked above, is instrumental.
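To make the idea concrete, here is a minimal sketch of what a call-back check might look like if it were written down as code. The directory, names, and numbers are all hypothetical and not from the paper; the only point being illustrated is the principle behind a call-back procedure: never ring back the number the requester gives you, always look the person up in records you already trust.

```python
# Hypothetical sketch of a call-back verification step.
# The directory entries below are invented for illustration; a real
# procedure would use the organization's own HR/IT records.

TRUSTED_DIRECTORY = {
    "alice.smith": "+1-555-0100",  # numbers come from internal records,
    "bob.jones": "+1-555-0101",    # never from the request itself
}

def callback_number(claimed_identity: str, number_given_by_caller: str) -> str:
    """Return the number to call back, always taken from the trusted directory."""
    trusted = TRUSTED_DIRECTORY.get(claimed_identity)
    if trusted is None:
        raise ValueError(f"{claimed_identity!r} is not in the directory; refuse the request")
    if trusted != number_given_by_caller:
        # A mismatch is a red flag, but even a match proves nothing:
        # the caller may simply know the real number. Call the trusted one.
        print(f"Warning: caller-supplied number differs from directory entry for {claimed_identity}")
    return trusted

# Example: a caller claiming to be "alice.smith" asks to be called back at a new number.
print(callback_number("alice.smith", "+1-555-0199"))  # -> +1-555-0100, with a warning
```

The design choice worth noticing is that the caller-supplied number is never dialed; it is only compared against the record, and the verification call always goes to the number the organization already had on file.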
7. Integrity and Consistency
Everyone wants to have integrity. We want to follow through on what we promise, even when those commitments weren’t entirely wise. This tendency is so strong that we’ll follow through even for coworkers we may not particularly like. A related quirk is that people judge others’ honesty by their own: the more honest you are, the more you assume others are, too. If a hacker got hold of a vacation schedule, they could impersonate a coworker who’s away and have a target fulfill a “request” that was never made.
Conclusion
I hope you enjoyed this writeup. If you’d like to learn about defending against these tactics, please read “Multi-Level Defense Against Social Engineering”. It is not a very long read, but its second half is very useful if you are concerned about your vulnerability to social engineering attacks, and you should be.