It is undeniable that social networks are an intrinsic part of everyday life for most people. The possibilities for interacting with different individuals and organizations are vast: one can maintain relationships with people far away or strike up entirely new acquaintances. Nevertheless, interactions with strangers carry some risk, including the risk of deception.
Impersonation is not an unusual practice online, especially among individuals who aim to exploit the victim's life and connections or to damage their public image. A related activity is the creation of "screens" in the form of fictitious individuals that conceal the true identity of the perpetrator. In today's technological landscape, these deceptions are frequently automated in the form of social botnets.
Social botnets, a variant of botnets (networks of infected computers under the control of a single attacking party), use social media platforms to create a network of interlinked fake profiles that attack victims in a coordinated way. Deceivers may create a multitude of dummy profiles by hand or use specially designed programs to create and clone their false personas.
In spite of the sophistication of their methods, identifying a social botnet in the wild may not be complicated. According to cybersecurity expert James Foster, these entities very likely act following these patterns:
- Hashtag hijacking. By appropriating organization-specific hashtags, bots distribute spam or malicious links that subsequently appear in the organization’s circles and news feeds, effectively focusing the attack on that group.
- Trend-jacking/watering hole. Attackers pick the top trends of the day to disseminate the attack to as broad an audience as possible.
- Spray and pray. Spray and pray involves posting as many links as possible, expecting to get only a click or two on each. These bots will often still intersperse odd or programmatically generated text-based posts, simply to fly under the social network’s Terms of Service radar.
- Retweet storm. One clear indicator of malicious botnet activity is a post that is instantly reposted or retweeted by thousands of other bot accounts. The original posting account is generally flagged and banned, but the reposts and retweets remain. The parent account, known as the martyr bot, sacrifices itself to spread the attack (a minimal detection sketch for this burst pattern follows the list).
- Click/Like farming. Follower inflation, a seedy marketing strategy designed to make a page or conversation look more popular than it really is.
- Phishing. Cybercriminals pose as a trusted source to trick victims into giving up sensitive information. They send out thousands of messages on social media platforms and a few unsuspecting users will wind up being snared.
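As a rough illustration of the retweet-storm signature described above, the following Python sketch flags a post whose retweets arrive in an abnormally tight burst right after publication. The input format (a list of UNIX timestamps) and the threshold values are assumptions made purely for illustration, not part of any platform's API.

```python
from typing import Sequence


def looks_like_retweet_storm(
    original_ts: float,
    retweet_ts: Sequence[float],
    window_seconds: float = 60.0,
    burst_threshold: int = 500,
) -> bool:
    """Return True when too many retweets land within window_seconds
    of the original post, a pattern typical of coordinated bot accounts."""
    burst = [t for t in retweet_ts if 0 <= t - original_ts <= window_seconds]
    return len(burst) >= burst_threshold


# Example with synthetic timestamps: 800 retweets in the first 30 seconds.
original = 1_700_000_000.0
retweets = [original + (i % 30) for i in range(800)]
print(looks_like_retweet_storm(original, retweets))  # True
```

In practice the window and threshold would have to be tuned per platform, but the idea is the same: organic engagement spreads out over time, while coordinated bots fire almost simultaneously.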
As seen, social botnets are characterized by repetitive patterns based on spamming and an abundance of shared content. By analyzing this behavior, it is not difficult to detect their presence; a minimal heuristic sketch is given below.
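To make that claim concrete, here is a small sketch of such a behavioural analysis: it scores an account by how repetitive its posts are and how link-heavy they are. The function name, weighting, and input format are hypothetical choices for illustration only, not a reference implementation.

```python
from collections import Counter
from typing import List


def bot_likelihood(posts: List[str], link_marker: str = "http") -> float:
    """Crude score in [0, 1]: high when an account keeps repeating the
    same text and most of its posts carry links (spam-like behaviour)."""
    if not posts:
        return 0.0
    most_repeated = Counter(posts).most_common(1)[0][1]
    duplicate_ratio = most_repeated / len(posts)
    link_ratio = sum(link_marker in p for p in posts) / len(posts)
    return 0.5 * duplicate_ratio + 0.5 * link_ratio


# Example: an account posting the same link-bearing message over and over.
spammy_account = ["Great deal! http://example.com/win"] * 40 + ["hello"] * 2
print(round(bot_likelihood(spammy_account), 2))  # close to 1.0
```

Real detection systems combine many more signals (account age, posting cadence, follower graphs), but even simple repetition and link ratios already separate most spray-and-pray bots from ordinary users.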
For more information: