There are people in the world who believed at one time, and perhaps still believe, that computer viruses, like viruses in nature, develop spontaneously to plague the computing world without human intervention and despite preventive measures. Certainly there are times when this explanation might seem preferable; we would rather attribute to factors beyond our control the constant siege of our personal computers by nuisance programs and malware. It would be comforting to think that random snippets of code generating unintended consequences were to blame, rather than malicious, even malevolent human beings bent on everything from fraud and theft to the online equivalent of vandalism. But people, always people, are involved, inevitably and invariably. This is the weak link in any technological process or system.
There is no better example of this attitude than the recent revelation that the popular Twitter “spambot,” @horse_ebooks, is not a “bot” at all, but the result of human input. You may not have heard the term spambot before, but you’ve seen the handiwork of one (and probably several). A spambot is any automated program or code that generates text online while masquerading as a human being. Usually, the purpose of a spambot is to spread links to sales sites, gaming sites, or viruses. Spambot programs regularly register at Internet forums and leave comments in the interactive fields of blogs, news sites, bulletin boards, comment threads and so on.
You know those annoying challenges you regularly complete when joining a site online, performing certain searches, or even logging in to your bank account? Whether you’re typing in a randomly generated number or interpreting distorted text to supply the “secret code,” you’re experiencing CAPTCHA. This is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart,” a term trademarked by Carnegie Mellon and dating back to 2000. It’s a means of proving that a given registrant or user is, in fact, a thinking, breathing human being, and not an algorithm running in an online sweatshop in some boiler room or basement in Russia or China.
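Under the hood, the idea is simple: the server invents a code only a human should be able to read back. Here is a minimal sketch of that challenge-and-verify loop in Python; the function names are hypothetical, and a real CAPTCHA would also render the code as a distorted image and expire it after one use.

```python
import secrets
import string

def new_challenge(length=6):
    """Generate a random code for the server to store and render as distorted text."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify(stored_code, user_input):
    """Pass only if the visitor transcribed the distorted code correctly."""
    return secrets.compare_digest(stored_code.upper(), user_input.strip().upper())

code = new_challenge()
print(verify(code, code.lower()))  # a human's correct (if lowercase) transcription passes
print(verify(code, "WRONG!"))      # a bot's blind guess fails
```

The point of the distortion step, which this sketch omits, is that transcribing the image is cheap for a person and (at least historically) expensive for a program.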
A typical WordPress installation, for example, will be inundated with fake subscriber registrations shortly after it goes live. Once registered, that army of spambots will start leaving nonsense comments and strange, almost stream-of-consciousness messages that almost always link back to somewhere else. Usually, that “somewhere else” is a site loaded with malware. Infected machines pass on that code and thus propagate the malicious program.
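Because spambot comments almost always exist to carry a link, even a crude filter can catch many of them. The sketch below is an illustrative heuristic only, not WordPress’s or any real plugin’s actual logic: it flags comments that are stuffed with links or that contain little text besides a link.

```python
import re

LINK_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def looks_like_spam(comment, max_links=2):
    """Crude heuristic: too many links, or a link with almost no real text around it."""
    links = LINK_RE.findall(comment)
    if len(links) > max_links:
        return True
    remaining = LINK_RE.sub("", comment).strip()
    return bool(links) and len(remaining) < 15

print(looks_like_spam("Great post! http://example.com/page"))   # link plus filler text
print(looks_like_spam("I disagree with the third paragraph."))  # ordinary comment
```

Real spam filters combine many such signals (sender reputation, posting rate, content analysis), which is why the arms race between filter authors and spambot operators never ends.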
On the microblogging site Twitter, bots are usually entertaining, not malicious. Many of them are triggered by keywords. Tweet the word “Robocop,” for example, and the bot @for_a_dollar will respond with a quote from the original science fiction film (a reboot of which is imminent). Tweet the term “Inconceivable!” and the account @iaminigomontoya will provide what its creator calls a “classic response” inspired by the movie “The Princess Bride.” The account @Emeraq responds instantly to tweets involving “Ghostbusters,” “2001: A Space Odyssey,” and “Better Off Dead,” in addition to “The Princess Bride.”
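The trigger logic behind such bots is little more than a lookup table. A minimal sketch, assuming a hypothetical trigger list (the keywords and canned replies here are illustrative, not the real bots’ data, and a real bot would listen for tweets through Twitter’s API):

```python
import random

# Hypothetical trigger table in the spirit of @iaminigomontoya and friends.
TRIGGERS = {
    "inconceivable": [
        "You keep using that word. I do not think it means what you think it means.",
    ],
    "robocop": [
        "Dead or alive, you're coming with me.",
    ],
}

def reply_to(tweet_text):
    """Return a canned reply if the tweet contains a trigger keyword, else None."""
    lowered = tweet_text.lower()
    for keyword, replies in TRIGGERS.items():
        if keyword in lowered:
            return random.choice(replies)
    return None

print(reply_to("Inconceivable!"))
print(reply_to("nice weather today"))  # None: no trigger matched
```

That fixed table of responses is exactly why arguing with such a bot goes nowhere, as the next paragraph describes.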
There have been instances in which users not hip to the idea have gotten into arguments with Twitter bots. This can be both hilarious and painful to watch; the typical Twitter bot is extremely limited in its responses. To argue with something that is the online text equivalent of a Magic 8-Ball requires either profound stupidity or remarkable persistence. Yet people can and do accomplish this.
Given how integral to the online experience spambots (and other automated simulacra of human input) have become, parody seems logical. So it was that @horse_ebooks, supposedly an automated program spitting out random, often hilarious pieces of terrible online literature, developed an impressive fan base on Twitter. When creator Jacob Bakkila admitted that the account was performance art, many were disappointed.
Bakkila acquired the account from its Russian owner in 2011 and then began deliberately imitating a spambot’s behavior. He would tweet links to the e-books the account was originally set up to sell, interspersed with random and often absurd messages he found and selected through his own searches. To the account’s fans, learning this was a letdown. Rather than a brilliantly awkward computer program, they were following an artist whose goal was to appear brilliantly awkward. What started as an in-joke among fellow users became a joke on those users, awareness of which spoiled the effect.
Spambots, however, are not always amusing, nor are they always (or even often) benign. Security Watch’s Neil Rubenking reported this week on the resurgence of a spambot dubbed “SpamSoldier,” malicious code that targets Android mobile devices. That malicious code has been resurrected deliberately by some user or group of users. SpamSoldier distributes link messages that, when clicked, install code that takes remote orders from SpamSoldier’s “command and control” servers, turning your phone into a wireless zombie that both disseminates the infection and endangers your personal data.
Therein lies the central message and warning inherent to all spambots, viruses, and malware. The code is not a living, breathing entity, regardless of how well it simulates interactivity. You can argue with a spambot, but it isn’t really talking to you. Your computer or phone may “catch” a virus, but this isn’t like getting sick in the “meatspace” of real life. The code infecting your machine was created by a human being and inflicted on the Internet by human action.
Popular Science recently shut down the comment feature of its site, effectively ending the interactivity most marketing professionals will tell you is of great benefit to an online enterprise. While it mentioned spambots, Popular Science’s decision was based primarily on the “aggressively negative” comments of “trolls” – human beings intent on disrupting the site.
Popular Science has learned a lesson we all should take to heart when it comes to the online world. Spambots and malware can be dealt with using software – but the human element behind all online activity is inescapable. It isn’t the “bots” who are the problem. It is the bad actors behind those programs who are, and always will be, to blame.