Some researchers are designing chatbots with the intent of integrating more natural, human-like interactions (De Angeli & Brahnam, 2008). Specifically, the developer may program certain responses that sacrifice accuracy but confer more human-like traits (Sing et al., 2006). For example, if a chatbot is presented with a math problem, it could wait, as if thinking about the problem, or even give the wrong answer. The user could interpret this as a human trait (failing or making a mistake), but in reality it is a trick built into the programming. Researchers have also attempted to add a sense of humor, through the use of self-contained jokes, to their version of the chatbot ALICE (De Boni, Richardson, & Hurling, 2008). One could envision a scenario where a user asks a question and the chatbot replies: “Jim, I’m not a machine, I’m a doctor!”, “I’m on my break, come back later.”, or “Are you sure you want me to check for this? I know it’s a waste of time,” which gives a jocular feel to the conversation.
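To make the idea of programmed imperfection concrete, the sketch below shows one way a response layer might delay an answer and occasionally return a plausibly wrong one. The function name and parameters are hypothetical illustrations, not taken from the cited studies.

```python
import random
import time

def humanlike_answer(a, b, error_rate=0.1, delay_range=(0.5, 2.0), rng=None):
    """Answer a + b, but pause as if 'thinking' and sometimes
    make an off-by-one slip, mimicking human fallibility."""
    rng = rng or random.Random()
    time.sleep(rng.uniform(*delay_range))   # simulated thinking time
    answer = a + b
    if rng.random() < error_rate:           # deliberate, human-like mistake
        answer += rng.choice([-1, 1])
    return answer
```

The deliberate error and delay trade accuracy for believability, exactly the trade-off the studies above describe.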
Attempts to improve human-like characteristics include personality matching and relationship maintenance (De Boni et al., 2008; Abbatista, Degemmis, Licchelli, Lops, Semeraro, & Zambetta, 2002). One group used shallow parsing and statistical natural language processing methods to change their chatbot’s style of interaction according to personality matching (De Boni et al., 2008). For example, the chatbot could take on a submissive role when the user is portraying a dominant one. However, the authors found that users’ feelings about this style of interaction were mixed. Two different groups have developed chatbots that build upon previous interactions with the same user (De Boni et al., 2008; Abbatista et al., 2002). If a chatbot could incorporate the ability to “recognize” a logged-in user and interact accordingly, the chatbot would be seen to be engaging in relationship maintenance (Abbatista et al., 2002). The chatbot would use a stored user profile to make recommendations in a proactive fashion (Abbatista et al., 2002). Both studies found that familiarity with a user produced a more positive view of the chatbot’s interaction with that user.
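A minimal sketch of relationship maintenance along these lines might keep a per-user profile of past topics and greet a returning user proactively. The class and function names here are illustrative assumptions, not the cited systems’ actual designs.

```python
from collections import Counter

class UserProfile:
    """Minimal per-user memory: topics the user has raised before."""
    def __init__(self):
        self.topic_counts = Counter()

    def record(self, topic):
        self.topic_counts[topic] += 1

    def favourite_topic(self):
        # Most frequently discussed topic, or None for a new user
        return self.topic_counts.most_common(1)[0][0] if self.topic_counts else None

def greet(user_id, profiles):
    """Greet a returning user proactively, drawing on stored history."""
    profile = profiles.setdefault(user_id, UserProfile())
    topic = profile.favourite_topic()
    if topic:
        return f"Welcome back! Last time we talked about {topic} - any news?"
    return "Hello! What would you like to talk about?"
```

Recognizing a logged-in user and recalling prior conversations is what lets the chatbot make proactive, personalized recommendations.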
Improving the chatbot’s ability to shift conversational topics is another approach to improving its human-like quality. Chat between humans is dynamic, and a single utterance can become the focus of conversation (Montero & Araki, 2005). A subsequent utterance (considered a catastrophe) may change the topic, shifting the focus of the conversation. One study attempted to model human chat by relating utterances and changing topics more naturally (Montero & Araki, 2005). Based on ALICE, the chatbot’s database had categories to match patterns (user input and a template for the chatbot’s reply). The authors programmed the chatbot to ask a question when it did not know a reply, trying to shift topics smoothly. Their methods included a data-mining tool called KeyGraph to identify relationships between terms in a document, in particular co-occurrence relationships of both high-probability and low-probability events. Utterances were broken down into words; visual results showed clusters of interrelated sentences, and links showed the shift in topics during chat. The researchers added critical categories by making the chatbot interject intelligent questions. Finally, they tested their chatbot’s performance by having one user interact with it and then give an opinion of the conversation. Their results showed that the user rated the experience as better after the ability to shift topics was added. Although the study’s results were subjective, this type of modeling could be a viable approach for a chatbot to control conversation flow.
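The core of this approach can be sketched with a crude word co-occurrence count, a simplified stand-in for KeyGraph rather than the tool itself: strongly linked word pairs suggest a shared topic cluster, while an utterance sharing nothing with the previous one marks a topic shift (a "catastrophe"). The functions and the shift heuristic below are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(utterances):
    """Count how often word pairs appear within the same utterance.
    High-count pairs approximate the interrelated clusters KeyGraph
    visualizes; low-count pairs are rarer, bridging links."""
    pair_counts = Counter()
    for utterance in utterances:
        words = set(utterance.lower().split())
        for pair in combinations(sorted(words), 2):
            pair_counts[pair] += 1
    return pair_counts

def topic_shift(prev_utterance, new_utterance):
    """Flag a topic shift when two utterances share no words at all."""
    return not (set(prev_utterance.lower().split())
                & set(new_utterance.lower().split()))
```

When a shift is detected, the chatbot could interject a question, as the study's authors did, to steer the conversation smoothly onto the new topic.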
Another attempt to add a proactive conversational element to a chatbot used mixed-initiative interaction (L’Abbate et al., 2005). Mixed initiative refers to a flexible interaction strategy in which each participant contributes to the task what it does best. The agent can show initiative in conversation by introducing a change in conversation flow to improve believability. This approach was tested in Virtual Insurance Risk Management (VIRMA), an online risk management tool. Keeping with the rule-based approach, the authors were able to create user profiles and reduce the number of question-and-answer options. They tested the tool by having users interact with the Risk Manager Online (RMO) user interface. From their results, the authors concluded that their approach made conversations more domain specific and improved the quality of the human–chatbot interaction.
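A mixed-initiative turn can be sketched as follows: answer when a rule matches, otherwise take the initiative and steer the dialogue with a domain question. The keyword rules and questions below are hypothetical placeholders in the spirit of a rule-based insurance tool, not VIRMA's actual rule base.

```python
def mixed_initiative_reply(user_input, pending_questions):
    """One mixed-initiative turn: reply from the rule base if possible,
    otherwise take the initiative with the next on-domain question."""
    rules = {  # hypothetical domain rules
        "fire": "Fire risks are covered under section 2 of your policy.",
        "theft": "Theft coverage depends on your declared inventory.",
    }
    for keyword, answer in rules.items():
        if keyword in user_input.lower():
            return answer            # user holds the initiative
    if pending_questions:
        return pending_questions.pop(0)  # chatbot takes the initiative
    return "Could you tell me more about your situation?"
```

Letting the chatbot inject its own questions when the user's input falls outside the rules is what keeps the conversation domain specific.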
Therefore, approaches such as programming a chatbot to mimic human behavior (for example, humor), shifting conversation topics in certain situations, and adding conversation memory to the chatbot’s design may confer more humanity on a chatbot. Perhaps combining a number of these tricks could increase the feeling of communicating with a human conversationalist.