Philip K. Dick’s 1968 science fiction novel “Do Androids Dream of Electric Sheep?” explored the philosophical question of what it means to be human (Dick, 1968). This raises the related question of how to define intelligence as it applies to a machine. In 1950, Alan Turing proposed a methodology to determine whether a machine can be considered intelligent, or can “think like a human” (Turing, 1950). The Turing Test, or imitation game, has endured as a de facto standard for measurably determining whether a machine can be considered intelligent. Because it relies on text-based interactions, the Turing Test assesses only “verbal” intelligence. It was the first attempt to determine quantitatively whether a machine could think by defining tangible performance metrics. However, some consider the Turing Test a behavioral test, while others regard it as an inductive test (Saygin, Cicekli, & Akman, 2000). One recent interpretation replaces the question “can a computer program think?” with “on the average, after n minutes or m questions, is the interrogator’s probability of correctly identifying the subjects not significantly greater than 50 percent?” (Sing, Wong, Fung, & Depickere, 2006). Regardless, the interactions are judged by human perception: if enough users could not tell the difference between a human and a computer, Turing held that the computer could be said to think.
The Turing test has served as an impetus for the development of natural language systems (Saygin et al., 2000). In fact, Weizenbaum created ELIZA, the first natural language system, in reaction to the Turing test (Weizenbaum, 1966). ELIZA is considered one of the first chatbots. Chatbots (short for chat robots) are also known as machine conversation systems, virtual agents, dialog systems, chatterbots, and conversational agents (De Angeli & Brahnam, 2008; Kerly, Hall, & Bull). The term chatterbot originated with one of the first bots, Julia, in a MUD (De Angeli & Brahnam, 2008). While some chatbots include visual and audio capabilities, interaction with the user is focused mainly on the exchange of text (input and responses).
Human nature has further driven interest in developing natural language systems such as chatbots. Humans want to interact with computers in a social manner, similar to the way they interact with other humans (De Angeli & Brahnam, 2008). An important area of research in human-computer interaction is improving the capability of computers to communicate with humans using natural language (HCIwebsite, 2009). According to Zadrozny, Budzikowska, Chai, Kambhatla, Levesque, and Nicolov (2000), humans want to use their own language (natural language) to communicate with computers. As Graham-Rowe (2005) observes, “it seems that we anthropomorphise technology, and consciously or unconsciously attribute feelings and intentions to robot pets, chatbots, or even cars.” The significance of a natural language interface is that users can interact with the computer in a way that is familiar and comfortable. Natural language researchers attempt to build artifacts that enable users to relate to the computer on a personal level.
The viability of the Turing test has been contested in recent years by artificial intelligence (AI) researchers. Over the years, AI researchers’ thinking about thinking machines has shifted toward different concepts of “intelligence,” such as considering a spectrum of AI strength (De Angeli & Brahnam, 2008). At one extreme, a strong AI would be an application that is human-like in intelligence, in that it is a self-thinking machine (Searle, 1980). At the other end of the spectrum, a weak AI would be more of a helper application capable of imitating intelligence. Current chatbots are usually considered weak AIs because they imitate intelligence rather than possess it (L’Abbate, Thiel, & Kamps, 2005). However, despite subsequent criticism, the Turing test has served a pivotal role in the evolution of chatbots.