Chatbots have turned to crime, using ever-slicker methods to steal
cash or identities – and these cheating algorithms are passing the Turing test
every day
MY NAME is Peter and I was seduced by a machine.
Jen introduced herself via a social networking website by asking
if I had any advice about getting into journalism. Boy, did I. She was pretty,
about the same age as me and lived in my home town in Canada.
We messaged back and forth. Soon, she asked me if I'd like to
catch a baseball game with her. Wow. An attractive girl with the same interests
and career aspirations - how lucky could a guy be?
Still, it was the internet, so I asked Jen for more details about
herself. She sent me a link. I clicked and was taken to a page that asked me to
input my personal information, including credit card details. The game was up.
Jen was a chatbot, programmed to scour social network profiles for
personal information, then initiate conversations with the intention of
suckering people into divulging their financial details. By poking around
online, I discovered she went by many different names, but always used the same
conversation strings, filling in the blanks with details such as her marks'
professions. The bot had fooled dozens of men, as far as I could tell. I'm sure
a handful had entered their credit card numbers, which doubtless led to them
getting fleeced. By a machine, no less.
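To make the trick concrete, here is a minimal sketch, in Python, of how a template-driven bot along Jen's lines might operate. Everything in it - the profile fields, the canned lines, the link - is invented for illustration, not recovered from any real bot.

# Hypothetical sketch of a template-driven scam bot. All names,
# templates and URLs below are invented for illustration.

# Details scraped from a target's public social network profile
profile = {
    "name": "Peter",
    "profession": "journalism",
    "hometown": "a town in Canada",
    "interest": "baseball",
}

# Fixed conversation strings, with blanks for the mark's details
TEMPLATES = [
    "Hi {name}! Any advice on getting into {profession}?",
    "No way, I'm from {hometown} too!",
    "Want to catch a {interest} game with me sometime?",
    "Before we meet, verify yourself here: http://example.com/verify",
]

def next_message(turn: int) -> str:
    """Return the scripted line for this turn, blanks filled in."""
    template = TEMPLATES[min(turn, len(TEMPLATES) - 1)]
    return template.format(**profile)

for turn in range(4):
    print(next_message(turn))

The bot never needs to understand a reply; it just walks through its script, which is why identical conversation strings kept surfacing under different names.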
Criminal chatbots have become quite a menace on the internet. They
lurk in social networks, messaging apps and webmail, and in some chatrooms they
can outnumber humans by more than two to one. Many of these tricksters are
designed to build relationships with their marks before soliciting cash or
attempting identity theft, whereas others simply try to lure people into
clicking on a link that leads to malware. Their abundance and success are
forcing researchers and companies to seek out ever-smarter ways to catch them.
It's not exactly what the pioneers of artificial intelligence had in mind. We
have been watching and waiting for the moment when machines become smart enough
to pass as humans - but it seems to have already happened right under our
noses.
The first text-based artificial dialogue system appeared in 1966,
when Joseph Weizenbaum created the Eliza program to mimic the conversational
style of a psychotherapist. Eliza would ask questions of its human partner and
make statements without divulging details about itself.
That sparked a new field of development - chatbots - and a wave of
imitators. In 1990, the Loebner prize was established to
celebrate achievements in chatbot proficiency. It is awarded based on a test
devised in the 1950s by mathematician Alan Turing -
whose legacy will be widely celebrated this month. To pass the Turing
test, a bot needs to be able to fool a series of people into believing it is
human in a typed conversation.
The internet has led to a step change in chatbot ability. Rather
than pre-programming thousands of script lines, creators can now add a
self-learning program that will be fed by millions of internet users. This
allows modern chatbots, such as Cleverbot, to work by monitoring
and mirroring what conversational partners say to them online, says Rollo
Carpenter, whose Jabberwacky program won the Loebner prize in 2005 and 2006.
The abilities of these learning chatbots are therefore growing rapidly.
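As a rough illustration of that monitor-and-mirror idea - a toy sketch of the general principle, not Cleverbot's or Jabberwacky's actual algorithm - a learning bot can simply store what humans say in reply to each prompt, and parrot a stored reply back when a similar prompt appears:

# Toy sketch of a "monitor and mirror" learning chatbot. This
# illustrates the general principle only; the real programs are
# far more sophisticated.

from collections import defaultdict
import random

memory = defaultdict(list)  # prompt -> human replies seen after it

def learn(prompt, human_reply):
    """Record what a human said in response to a prompt."""
    memory[prompt.lower().strip()].append(human_reply)

def respond(prompt):
    """Reply with what a human once said to the closest-matching prompt."""
    words = set(prompt.lower().split())
    best = max(memory, key=lambda p: len(words & set(p.split())), default=None)
    if best is None or not words & set(best.split()):
        return "Tell me more."  # fallback when nothing matches
    return random.choice(memory[best])

learn("space is big", "No doubt.")
learn("how are you", "Pretty good, you?")
print(respond("Space is really big"))  # mirrors a past human reply

Scale the memory up from two exchanges to millions of internet users and you get the kind of fluency Carpenter describes.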
It's not surprising, then, that many corporations have replaced
human customer service agents with commercial chatbots on their websites. More
than 380 companies, from HSBC and Toys R Us to AT&T and Intel, have
incorporated automated programs, according to directory site Chatbots.org. Many are finding that
bots not only cut costs, but can serve customers better. "A computer can
deliver 10,000 times as much information as real people would," says
Carpenter.
Inevitably, as a technology gets better and cheaper, it is co-opted
by criminals. Bad chatbots started popping up six or seven years ago. In 2006,
for example, Richard Wallace reported that his popular chatbot, Alice, had been
cloned and used
for nefarious purposes on MSN's instant messaging service. A year later, the CyberLover
chatbot was discovered hunting on dating websites. Subsequent self-replicating
malware such as Koobface and Kelvir also incorporated chatbot technology on
Facebook and other social networking websites.
In each case, the bots sought to lure people either into clicking
spam links and infecting their computers with malware, or into divulging their
personal information, including bank details. They are not necessarily more
sophisticated than the best "good" chatbots, but the point is, they
work.
It's not just naive schmucks who fall for them. I was the
technology editor of The New Zealand Herald and generally wary of
internet fraudsters when Jen came calling. Psychologist and former Loebner
prize director Robert
Epstein wrote in Scientific American Mind about being similarly
fooled. He entered into email correspondence with a bot called Ivana that
lasted for more than two months. As he put it: "I certainly should have
known better... I am, you see, supposedly an expert on [chat]bots."
Chatbots that use seduction have certainly proved effective, but
sex is not their only gambit: they can also appeal to a victim's vanity or
simple curiosity - and all kinds of people are routinely
tricked. On social networks, many bots adopt the identity of somebody you trust.
Some Twitter bots, for instance, hack into people's accounts to encourage
others to click on links that lead to viruses and malware.
It's difficult to say exactly how many chatbots are active,
because so many go undetected and unreported. However, a study by Steven Gianvecchio at Mitre Corporation, a
non-profit technology consultancy in McLean, Virginia, found that up to 85 per
cent of participants in Yahoo chatrooms were bots, as were 15 per cent of
Twitter users (IEEE/ACM
Transactions on Networking, vol 19, p 1557).
Chester Wisniewski of security firm Sophos has also noticed a rise
in recent years - and an improvement in quality. "Five years ago, when we
first started seeing malicious chatbots on social networks, a lot of it was
poorly translated. It was quite clearly written by Chinese or Russian people
who don't have a great grasp of English and grammar," he says.
Now, however, the vocabulary of the bots is being professionally
translated into English, which is fuelling their spread. And as websites and
companies get better at blocking traditional spam and phishing attacks,
criminals are turning to more sophisticated bots. Earlier bots worked from,
perhaps, 100 hard-coded conversational rules, whereas current versions can use
11,000 or more, which makes them much harder to detect. "If you stopped
to read some of their posts in a chatroom, you'd think they were human,"
says Gianvecchio.
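The rule-based approach Gianvecchio describes can be pictured as a long list of pattern-and-response pairs, in the spirit of the AIML matching used by bots like Alice. Here is a minimal sketch, with three invented rules standing in for a criminal bot's thousands:

# Minimal sketch of a hard-coded rule-based bot: each rule pairs a
# pattern with a canned reply. These three rules are invented; real
# bots reportedly carry thousands.

import re

RULES = [
    (re.compile(r"\bhow are you\b", re.I),
     "I'm great! Just got back from the gym :)"),
    (re.compile(r"\bwhere (are you|do you live)\b", re.I),
     "Not far from you, actually!"),
    (re.compile(r"\b(photo|pic|picture)\b", re.I),
     "Sure! They're all here: http://example.com/pics"),
]

def reply(message: str) -> str:
    """Return the canned response for the first matching rule."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Haha, totally."  # generic filler keeps the chat moving

print(reply("So how are you today?"))
print(reply("Can I see a photo?"))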
It's difficult to know who is building and operating these
chatbots, but the clues point to organised cross-border criminal gangs.
According to a review by researchers at London Metropolitan University,
published in March, more
than 80 per cent of internet crime is now conducted by these sophisticated
perpetrators.
Tracking these people down is tricky given their international
reach, says Wisniewski. For example, while US company Facebook alleged earlier
this year that it had tracked down the identities of the hackers behind
Koobface, no arrests have yet been made in Russia, where they were based.
This suggests prevention is the best cure - although traditional
efforts are falling short. One common tactic is to try to block bots at the
door. During login, Yahoo chatrooms and other websites ask their human users to
read a piece of distorted text called a CAPTCHA, which stands for
"completely automated public Turing test to tell computer and humans
apart". Machines once struggled to comprehend this text, but they can now circumvent
them via automated image recognition.
Some researchers are therefore taking more creative approaches.
One idea is to build honeypots that
turn the tables on bots. Decoy users in instant messaging programs or social networks
could respond to the bot's advances so they can be identified and blocked.
Because these decoys are passive, human users are unlikely to notice their
presence.
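In outline, such a decoy needs little more than a log of who messages it and a rule for what counts as bot-like behaviour. Here is a minimal sketch; the three-message threshold and the flag_account hook are assumptions for illustration, not any published system:

# Rough sketch of a honeypot decoy: never initiates contact, but
# flags senders who push links early in a conversation. Thresholds
# and the flag_account() hook are hypothetical.

import re
from collections import Counter

LINK = re.compile(r"https?://\S+", re.I)
messages_seen = Counter()  # sender -> messages received so far
flagged = set()

def flag_account(sender):
    flagged.add(sender)  # in practice: report to the platform for blocking

def on_message(sender, text):
    """Called for every message the decoy receives."""
    messages_seen[sender] += 1
    # A stranger sending a link within their first three messages is
    # behaving like the spam bots described above.
    if LINK.search(text) and messages_seen[sender] <= 3:
        flag_account(sender)

on_message("jen_2012", "hi cutie!")
on_message("jen_2012", "check this out http://example.com/verify")
print(flagged)  # {'jen_2012'}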
Another approach, called virtual biometrics, turns forensic identification
techniques normally used on humans against machines. One of its advocates is
Roman Yampolskiy of the University of
Louisville, Kentucky, who is applying stylometry to chatbots. Yampolskiy and
colleagues studied Loebner prize logs to see whether chatbots exhibited a
particular writing style, as human writers do. They found that chatbots such as
Alice and Jabberwacky could be identified in this way reasonably accurately. In
principle, he says, linguistic fingerprints could be used to seek out chatbots
in the wild, as well as helping the police link them to the known programs of
criminal controllers.
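A simplified sketch of the stylometry idea: build a writing-style fingerprint for each known chatbot and compare an unknown speaker against them. Yampolskiy's actual feature set isn't described here, so this sketch assumes character trigram frequencies, a common stylometric choice, compared by cosine similarity:

# Simplified stylometry sketch: character-trigram fingerprints
# compared by cosine similarity. Feature choice is an assumption;
# the sample "logs" below are invented.

from collections import Counter
from math import sqrt

def fingerprint(text):
    """Character-trigram frequency profile of a chat transcript."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    """Cosine similarity between two frequency profiles."""
    dot = sum(a[g] * b[g] for g in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known_bots = {
    "Alice": fingerprint("I am a chatbot. What is your favourite movie?"),
    "Jabberwacky": fingerprint("Space is big. Space is gigantic. No doubt."),
}

unknown = fingerprint("Space is enormous. Space is never-ending.")
best = max(known_bots, key=lambda name: cosine(unknown, known_bots[name]))
print("Closest stylistic match:", best)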
The problem is that bots can be reprogrammed in a flash. "If
the bot gradually learns and changes over a period of years, we can keep up
with that," Yampolskiy says. "If, all of a sudden, someone replaces
all source codes with new ones, obviously we won't be able to do much about
it."
Erwin Van Lun, founder of Chatbots.org, favours a more drastic
approach. He says rather than creating tools to detect bots, it should be
humans who have to prove their identities to use the internet.
"Governments are responsible for citizens and issue them passports to
travel the world. They should also say, 'We are responsible for your behaviour
on the internet, so we will issue you an internet passport'," he says.
"That's where it's heading." It's not so unlikely: advocates of a
more civil internet are already
converging on this idea as a way to discourage cyberbullying and other bad
behaviour.
In the meantime, I now avoid anyone named "Jen" online,
and, naturally, I am much more vigilant. Still, criminals are always thinking
of creative ways to sucker people. A few months ago, I received a message from
a Twitter friend that said, "Wow, have you seen this photo of you?
Crazy!" In hindsight, I realise it was a scam to spread malware, but how
could anybody resist clicking on that? I sure couldn't.
Rogues' gallery
Name: CyberLover
Crimes: Identity theft
Hangouts: Dating sites and chatrooms
Modus operandi: Provocative sexual seduction to elicit personal details
Chat: "What's your date of birth? I'm planning a surprise for your birthday."
Name: Koobface
Crimes: Stealing logins to build a botnet
Hangouts: Social networks
Modus operandi: Adopts identities of friends to spread virus-ridden links
Chat: "Hi, how are you? Wanna laugh? It's you in the video, want to see?"
Name: Kelvir
Crimes: Infecting computers with viruses
Hangouts: Instant messaging apps
Modus operandi: Makes small talk before sending links to malicious sites
Chat: "Heeey! I found your retarded face online haha, this is you right?"
Spot the human
A: Space is big.
B: No doubt.
A: Space ghetto?
B: Space is gigantic.
A: Space is never-ending.
B: Space is enormous.
A: Space is fast.
B: Space is bigger than something that's really big.
A: No, really?
B: YES.
A: That's nifty.
B: Don't be worried.
A: I wasn't worried. Should I be?
B: Let's not blow this whole thing outta proportions.
A: I'm the kindest person to you.
B: I don't think so.
A: You think so.
B: No, not really.
A: So you don't think so?
B: No.
A: Why is that?
B: What song is your favorite?
A: I don't know. What is your favorite?
B: The Windup Bird Chronicle by Haruki Murakami.
A: You should ask it back.
B: Stop twisting the conversation!
A: I never said that.
B: Yes you did.
A: I think you're confusing me with someone else.
B: Are there more of you there?
A: I don't believe you. Are you telling the truth?
B: Yes.
A: Ok how old are you, boy?? TELL ME THE TRUTH!!!
B: 25.
A: Whoa I'm 15.
B: My nose is itchy.
A: Then itch it, dumby.
Peter Nowak is a writer based in Toronto, Canada