My Resume

Cyber Intelligence Visionary


I am a decorated war veteran (17 years of service) who has worked as a member of the Cyber and Threat Intelligence community for over 25 years. Current positions held: Army Reserve Signal Officer (Cyber Warrior) in the only Army Cyber Reserve Brigade of its kind, under ARCYBER working with INSCOM; and Senior Big Data Computer Scientist at the Defense Information Systems Agency, working where cyber intersects with intelligence (big data).

  • Formulated strategic plans for cyber security and threat management for multiple organizations.
  • Excel at building and maintaining partnerships with industry and military with a proven ability to achieve results for complex cross-agency initiatives.

Areas of expertise: Enterprise architecture, big data, cyber and threat management.

Certifications: SANS GSLC; Joint Network Attack Course N-9 Cyber Operational Planner; ISO 27000-2; DoD Incident Handling.

Industry experience: Financial, Defense, Technology and Intelligence.


Acceptable Use Policy (AUP)


Use of the Services (Website) is subject to this acceptable use policy (“AUP”). If not defined here, capitalized terms have the meaning stated in the applicable contract (“EULA”) between Open Business Foundation Member, or other authorized user (“You”) and the Website Owner.

You agree not to, and not to allow Your End Users, Employees, Contractors, or third parties to use the Services:

  • to claim ownership over Intellectual Property;
  • to violate, or encourage the violation of, the legal rights of others;
  • for any unlawful, invasive, infringing, defamatory, or fraudulent purpose;
  • to intentionally use destructive or deceptive practices that are against the OBF charter or the spirit for which it was written;
  • to interfere with the use of the Services, or the equipment used to provide the Services, by members or other authorized users;
  • to alter, disable, interfere with or circumvent any aspect of the Services;
  • to reverse-engineer the Services or Processes;
  • to use the Services, or a component of the Services, in a manner not authorized by the Website Owner.

Your failure to comply with the AUP may result in suspension or termination, or both, of the Services pursuant to the Agreement.



Privacy Policy

Website Visitors – Like most website operators, the Website Owner collects non-personally-identifying information of the sort that web browsers and servers typically make available, such as the browser type, language preference, referring site, and the date and time of each visitor request. The purpose in collecting non-personally-identifying information is to better understand how visitors use the website. From time to time, the Website Owner may release non-personally-identifying information in the aggregate, e.g., by publishing a report on trends in the usage of the website. The Website Owner also collects potentially personally-identifying information like Internet Protocol (IP) addresses, but does not use such information to identify its visitors and does not disclose such information, other than under the same circumstances that it uses and discloses personally-identifying information, as described below.

Gathering of Personally-Identifying Information – Certain visitors to the website choose to interact with it in ways that require the Website Owner to gather personally-identifying information. The amount and type of information gathered depends on the nature of the interaction. For example, we ask visitors who use our forums to provide a username and email address. In each case, the Website Owner collects such information only insofar as is necessary or appropriate to fulfill the purpose of the visitor's interaction, and does not disclose personally-identifying information other than as described below. Visitors can always refuse to supply personally-identifying information, with the caveat that it may prevent them from engaging in certain website-related activities.

Aggregated Statistics – The Website Owner may collect statistics about the behavior of visitors to the website, but does not disclose personally-identifying information other than as described below.

Protection of Certain Personally-Identifying Information – The Website Owner discloses potentially personally-identifying and personally-identifying information only to those of its employees, contractors, and affiliated organizations that (i) need to know that information in order to process it on the Website Owner's behalf or to provide services available at the website, and (ii) have agreed not to disclose it to others. Some of those employees, contractors, and affiliated organizations may be located outside of your home country; by using the website, you consent to the transfer of such information to them. The Website Owner will not rent or sell potentially personally-identifying and personally-identifying information to anyone. Other than to its employees, contractors, and affiliated organizations, as described above, the Website Owner discloses potentially personally-identifying and personally-identifying information only when required to do so by law, or when it believes in good faith that disclosure is reasonably necessary to protect the property or rights of the Website Owner, third parties, or the public at large. If you are a registered user of the website and have supplied your email address, the Website Owner may occasionally send you an email to tell you about new features, solicit your feedback, or just keep you up to date. We primarily use our blog to communicate this type of information, so we expect to keep this type of email to a minimum. If you send us a request (for example via a support email or via one of our feedback mechanisms), we reserve the right to publish it in order to help us clarify or respond to your request or to help us support other users. The Website Owner takes all measures reasonably necessary to protect against the unauthorized access, use, alteration, or destruction of potentially personally-identifying and personally-identifying information.

Cookies – A cookie is a string of information that a website stores on a visitor's computer, and that the visitor's browser provides to the website each time the visitor returns. The Website Owner uses cookies to help identify and track visitors, their usage of the website, and their website access preferences. Visitors who do not wish to have cookies placed on their computers should set their browsers to refuse cookies before using the website, with the drawback that certain features of the website may not function properly without the aid of cookies.

Privacy Policy Changes – Although most changes are likely to be minor, the Website Owner may change its Privacy Policy from time to time, at its sole discretion. Visitors are encouraged to check this page frequently for any changes. Your continued use of this site after any change in this Privacy Policy will constitute your acceptance of such change.

270 INC

From the Cyber Front


Let me break down the L33t for you and take the time to officially thank you. Especially, I would like to thank all of the n00bs out there that make my job easy. You know who you are: all those n00bs using your computers without taking the time to learn how to protect your systems from me. You think that it's not such a big deal to pay your bills, do your banking, and send email with all that personal information without taking proper precautions. Please, keep on doing that. You are keeping me in Hot Pockets and all the Red Bull I can drink.

My only goal is to infect your system with one of my DarkBots and sell your zombified system to one of my many clients. I'm making mad bank, usually pulling in around $800 per 10,000 bots, and with no shortage of systems on the Webs, I can create pools of Bots and sell them to clients whenever I want. There is no way I'm looking for a real job as long as you keep helping me; just keep doing what you are doing.

I'm really not that bad hacker guy you read about. I don't try to steal bank accounts or identities; I only set the stage. My clients are the ones that do the real damage: they use your zombified computers to commit all types of crimes. They launch spam attacks, attack websites, conduct phishing attacks, spread computer viruses, launch DoS attacks, download pornography, steal your personal information, or blackmail you into paying a fee to remove a bit of nasty code. I just get my bank on and go, always trying to stay low and under the radar.

I'll let you in on my BotNet secret: my choice of attack is pure simplicity; it's downright poetry. I just use a little social engineering via instant messages and emails. It's so easy: after I infect your machine, I lure your friends in with a "check this out" link. And since the message came from you, most of your friends will take the bait. Bam! Another zombie for my BotNet herd. ChaChing! I typically don't need to do anything special to exploit the vulnerabilities I find on your system; I simply let you place my Bots on your system for me.

Most of you never have any indication that you are a Zombie. You go about your day doing your normal computing activities without a clue. You blame Microsoft for your computer running like a slug, or Comcast for being slow, but in reality my clients are eating up your processing power and your Internet bandwidth. Better yet, my smart clients won't bother your system while you are using it. They will wait for low system usage before taking it over to do their bidding.

You could look for symptoms such as "a suddenly slow broadband connection, an unresponsive mouse or keyboard, excessive hard drive activity, or bounce notifications from people you've never tried to contact". Besides, I'm not really hurting anyone anyway. Please, for your sake, don't bother trying to look. Think about it: what could you do anyway? And if by chance you were able to stop me this time, it would only be temporary, and I would be back. The next time it would be personal; you would be interfering with my greenage, and I would need to make an example of you.

Cyber Stats:

  • Spam by botnet shows that 40% was sent from the Grum botnet and 12% from Cutwail botnet.
  • More than 50% of all seasonal spam has been sent from just two botnets: Grum and Cutwail.
  • Almost 10% of all broadband users in the US will be infected with a botnet.
  • Email will continue to be the primary social engineering tool for botnet attacks in 2010.
  • The Nuwar botnet uses social engineering, a fake postcard or electronic card, to infect its victims.

The Dark Underside of the Internet


In this series of articles, you will be exposed to the dark side of the Internet. Through interviews, independent research, and real-world examples, you will experience the dangers of the World Wide Web. Travel with me down the "rabbit hole" as we explore the dark side of the Internet. Let me set the stage: before sharing stories about who is trying to steal your information and how they do it, I need to introduce you to our company of virtual-world villains.

Most users have no idea how dangerous it can be to navigate the Internet or how easy it is for bad guys to steal personal information. One simple exploit example: a Bot Herder uses automated techniques to locate a computer, your computer, on the Internet and install a simple bot(1). Once that happens, the Bot Herder is able to control your system at will; it becomes a zombie. Most people will not even be aware that it happened until they log into their credit card account and find out that they have been buying cell phones in Mexico.

Read more about “The Dark Side of the Internet” @ 270INC.

Meet the Cast of Characters

  • DarkBot is a BotMaster from parts unknown; she is on the forefront of the cyber war. A college dropout, in her spare time she likes to listen to stolen thrash metal MP3s, collect Hello Kitty figurines, and burn pirated copies of Disney movies, which she shares with the rest of the world on LimeWire.

(1) BotNet, according to Wikipedia, is jargon for a collection of robots, commonly referred to as bots.


Website Disclaimer and Intellectual Property Ownership Statement


This website is owned and operated by Robert C. Whetsel (Owner). The Terms of Use ('Privacy | Copyright | Legal Notice | Acceptable Use Policy | End User's License Agreement') govern your use of this website and its content. By using the website, you automatically accept the Terms associated with this site. The Owner may change the Terms at any time, and such changes will take immediate effect. By your continued use of the website thereafter, you agree to be bound by such changes. You should visit the Terms of Use page from time to time to review the then-current terms.

This website contains information that may be privileged or confidential and is the property of Robert C. Whetsel. It is intended only for use on this site, as defined in the Terms of Use. You are not authorized to read, print, retain, copy, disseminate, distribute, or use any part without permission.

Terms of use: Privacy | Copyright | Legal Notice | Acceptable Use Policy | End User’s License Agreement

The views are my own and do not reflect the opinion of DISA or the Department of Defense.


Improving Chatbot Accuracy -Draft


Researchers have employed a variety of methods to improve their chatbots' accuracy. Most current chatbots use a dialog management module and a knowledge base of rules, which they use with templates to match user input. Improving chatbot accuracy may be accomplished by expanding the chatbot knowledge base, improving upon the standard rule-based conversation method for chatbots, or using alternatives to the standard rule-based conversation method altogether.
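The rule-plus-template matching described above can be sketched in a few lines. The patterns, wildcard syntax, and replies below are illustrative assumptions, not taken from any particular chatbot:

```python
import re

# Toy knowledge base: (regex pattern, reply template) rules matched in order.
RULES = [
    (r"\bhello\b|\bhi\b", "Hello! How can I help you?"),
    (r"my name is (\w+)", "Nice to meet you, {0}."),
    (r"\bbye\b", "Goodbye!"),
]

DEFAULT_REPLY = "I'm not sure I understand. Can you rephrase?"

def reply(user_input: str) -> str:
    """Return the template of the first rule whose pattern matches."""
    text = user_input.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Captured groups fill template placeholders, e.g. a name.
            return template.format(*match.groups())
    return DEFAULT_REPLY
```

The hard-coding drawback the text goes on to describe is visible even here: every behavior lives in a hand-written entry of `RULES`.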

Expanding the knowledge base from which a chatbot may learn can improve accuracy. A drawback to the creation of chatbot knowledge bases is that it is usually accomplished through hand coding, or hard-coding, which is time- and labor-intensive (?). The time taken to program, test, and refine a chatbot's knowledge base can be extensive. Furthermore, hard-coding the rules into a knowledge base may be restricted by the style and ability of the programmer. However, if chatbots could be programmed to learn from an established knowledge base themselves, this would reduce the time needed to create large knowledge bases and potentially improve the accuracy of responses given to the user.

One researcher automated chatbot learning by using language modeling to train chatbots in three different languages (Abu Shawar & Atwell, 2005). The authors created a model of a language to build a corresponding chatbot, and through a Java application they translated text-based corpora into AIML. They approached the problem using two different machine learning techniques: the first-word and most-significant-word approaches. Armed with this solution, they retrained their Alicebot with transcripts of human dialog and compared human-chatbot dialog to human-human dialog. A frequency list (of the words occurring in the most utterances) was built, patterns (the number of times a word appears with another word) and templates were established, and patterns and templates were rearranged to build AIML files. The authors concluded that this machine learning approach, based on transcripts of human dialog, enabled their chatbot to converse in different styles and languages.
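As a rough illustration of the "first word" idea (the actual algorithm in Abu Shawar & Atwell's work differs in its details), one might group transcript replies by the first word of the preceding utterance and keep the most frequent reply as that pattern's template:

```python
from collections import Counter, defaultdict

def learn_first_word_rules(dialog_pairs):
    """Build first-word -> template rules from transcript (utterance, reply)
    pairs. For each distinct first word, the most frequent observed reply
    becomes the template. A simplified sketch, not the authors' code.
    """
    replies_by_first_word = defaultdict(Counter)
    for utterance, reply in dialog_pairs:
        words = utterance.lower().split()
        if words:
            replies_by_first_word[words[0]][reply] += 1
    # Keep the single most common reply per first word.
    return {word: counts.most_common(1)[0][0]
            for word, counts in replies_by_first_word.items()}

# Tiny invented transcript for illustration.
pairs = [
    ("where are you from", "I am from the lab."),
    ("where do you live", "I am from the lab."),
    ("where were you born", "In a computer."),
    ("how are you", "Fine, thanks."),
]
rules = learn_first_word_rules(pairs)
```

Here `rules["where"]` maps to the reply seen most often after "where"-initial utterances, which is the sense in which the pattern set is learned rather than hand-coded.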

Another research group used autonomic knowledge base creation and machine learning approaches to increase the efficiency of building their chatbot knowledge base. They presented an approach for extracting a chatbot knowledge base as <thread-title, reply> pairs from a corpus (an online discussion forum) (Huang, Zhou, & Yang, 2007). They normalized and cleaned the corpus by removing irrelevant replies, used humans to rate the training set, and employed author identity to grade the quality of a reply. The researchers applied a machine learning approach to a specific domain of large repositories of archived threads and reply records from online discussion forums. Their results showed that they were able to extract 2,000 pairs in two minutes. Given the speed of their results, the authors considered their approach to constructing a knowledge base for a chatbot to be superior to the "by-hand" approach.
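A minimal sketch of the pair-extraction-and-cleaning step might look like the following. The data shape and the length-based filtering thresholds are assumptions for illustration, not the paper's actual cleaning criteria:

```python
def extract_kb_pairs(threads, min_reply_words=3, max_reply_words=50):
    """Harvest <thread-title, reply> pairs from forum threads.

    `threads` is assumed to be a list of dicts like
    {"title": str, "replies": [str, ...]}. Replies that are very short
    or very long are dropped as "irrelevant" -- a stand-in for the
    richer quality grading described above.
    """
    pairs = []
    for thread in threads:
        title = thread["title"].strip()
        for reply in thread["replies"]:
            n_words = len(reply.split())
            if min_reply_words <= n_words <= max_reply_words:
                pairs.append((title, reply.strip()))
    return pairs

# Invented example thread; the "+1" reply is filtered out as noise.
threads = [
    {"title": "How do I clean a botnet infection?",
     "replies": ["Boot into safe mode and run a full scan.", "+1"]},
]
pairs = extract_kb_pairs(threads)
```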

Yet another group has also focused on automatic chatbot knowledge base creation (Wu, Wang, Weisheng, & Li, 2008). They used a classification model based on rough-set assembly theory to process incomplete data, as applied to a system model data analysis on a controlled set. This data analysis was coupled to ensemble machine learning algorithms, and was based on related replies and IR-related replies. The authors also concluded that their approach of automatic acquisition of the knowledge for a chatbot was effective.

Others have tried to improve the current rule-based approach to learning for chatbots (Pilato, Vassallo, Augello, Vasile, & Gaglio, 2005). The researchers used latent semantic analysis (LSA) to attempt to predict replies in conversations with humans. They built their chatbots on Alicebot's software, generating one generic and three domain-specific chatbots, and used a corpus of 850 documents from the Internet to create the knowledge base. They used LSA-based algorithms to analyze the corpus and place it into a matrix, which in turn constituted the knowledge base. Upon interaction with a user, the first interactions would be directed towards the general-knowledge (generic) chatbot, which held the "index" of what the domain-specific chatbots contained within their knowledge bases. This generic chatbot would direct the user to each domain-specific chatbot as deemed appropriate (rules as programmed by the developers). The user then interacted with the domain-specific chatbot until its limit of knowledge was reached, at which point the user was directed back to the generic chatbot by default. The authors concluded that LSA allowed them to overcome restrictions of the traditional chatbot rule-based approach.
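A toy version of the LSA step described above: build a term-document count matrix, reduce it with a truncated SVD, and retrieve the document closest to a query in the reduced space. The corpus, the rank `k`, and the fold-in details are illustrative assumptions, not the cited system's configuration:

```python
import numpy as np

# Tiny invented corpus with two rough topics (animals vs. finance).
docs = [
    "the cat sat on the mat",
    "dogs chase cats in the yard",
    "stock prices rose on the market",
    "investors watch the stock market",
]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix: rows are terms, columns are documents.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = Vt[:k].T  # each row: one document in the k-dim latent space

def most_similar(query: str) -> int:
    """Index of the document most similar to the query (cosine similarity)."""
    q = np.array([query.split().count(w) for w in vocab], dtype=float)
    q_vec = np.diag(1.0 / s[:k]) @ U[:, :k].T @ q  # standard LSA fold-in
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q_vec) + 1e-12))
    return int(np.argmax(sims))
```

With two clear topic clusters and k = 2, queries about animals land on the first pair of documents and finance queries on the second pair.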

Recently, some chatbot researchers have deviated from the standard rule-based conversation approach used by most chatbots. For example, one approach involves the automatic generation of a trivial dialog phrase database through the use of genetic algorithms (Montero & Araki, 2007). The authors evaluated the phrase database for correctness using N-gram analysis to generate statistical data, which was then analyzed to observe the system's performance. Their approach differs from traditional AIML rule-based natural language interfaces in that they used phrases and associations. They acknowledged that creating a hand-crafted knowledge base (a dialog corpus) is highly time-consuming and difficult. They generated pairs of phrases, applying the fitness function of a genetic algorithm to indicate a well-generated phrase, and then evaluated whether each phrase was acceptable, using the ratio of acceptable phrases to well-generated phrases as their metric.
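A hedged sketch of an N-gram-based fitness check in this spirit: score a candidate phrase by the fraction of its bigrams seen in a reference corpus, and call it "well-generated" above a threshold. The corpus, the bigram scoring, and the threshold are invented for illustration and are not the authors' actual fitness function:

```python
from collections import Counter

# Tiny invented reference corpus of known-good phrases.
corpus = [
    "how are you today",
    "i am fine thank you",
    "how is the weather today",
]
seen_bigrams = Counter(
    bg
    for sentence in corpus
    for bg in zip(sentence.split(), sentence.split()[1:]))

def fitness(phrase: str) -> float:
    """Fraction of the phrase's bigrams that occur in the corpus."""
    words = phrase.split()
    bigrams = list(zip(words, words[1:]))
    if not bigrams:
        return 0.0
    return sum(bg in seen_bigrams for bg in bigrams) / len(bigrams)

def well_generated(phrase: str, threshold: float = 0.5) -> bool:
    """Threshold check standing in for a genetic algorithm's fitness gate."""
    return fitness(phrase) >= threshold
```

In a genetic-algorithm setting, a score like `fitness` would guide selection, while a separate (e.g. human) judgment decides acceptability; the ratio of the two counts gives the evaluation metric mentioned above.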

Another deviation from the standard rule-based conversation approach is the use of "semantic-driven interaction in an intelligent system that uses natural language dialog to explain concepts" (Pirrone, Pilato, Rizzo, & Russo, 2007). By means of a 3-tier system, their study attempted to utilize latent semantic analysis (LSA) to cluster documents in a vector space, creating relationships between different words within that vector space (Pirrone). The architecture contained business, presentation, and data tiers that consisted of two main parts: an assessment component and a supply component. The researchers used the CYC knowledge base in this application to integrate the ontology with other parts of their application. The authors believe that their ontology helped to create relationships (context), with the vectors acting as a self-organizing map. They considered their approach to improve the efficiency of their chatbot.

One group has deviated from the standard rule-based approach by combining an expanded knowledge base with machine learning and a neural network (Sing, Fung, Wong, & Depickere, 2007). They crafted an artificial intelligent neural-network identity (AINI) based on understanding and reasoning about natural language, rather than on pattern-matching like other chatbots. AINI is especially unique in that it uses a top-down natural language query in a multi-layered natural language query engine. It uses an AIML engine, natural language understanding and reasoning, FAQ metadata, and a spellchecker in a three-tier layered architecture with a MySQL back-end. AINI can parse input, which goes into a natural language reasoning module where sentences are separated into words. Keywords are then chosen and a document retrieval from the Web is executed. Information is extracted from the Web, answers are ranked using advanced reasoning, and a requisite response is given. The authors concluded that their agent performed comparably to ELIZA and ALICE using this new methodology. Given that their chatbot takes a different approach from the rule-based pattern matching used by ELIZA and ALICE, it is understandable that they would compare its performance to these common chatbots. However, the authors plan to improve their chatbot's performance further by adding more machine learning algorithms and expanding the chatbot's knowledge base. Their expectation is that their chatbot's performance will surpass that of ELIZA and ALICE.
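The top-down, multi-layered query idea can be sketched as a chain of answer sources tried in order of confidence, falling through to the next layer when one has no answer. The layer contents and replies here are placeholders, not AINI's actual components:

```python
def aiml_layer(q):
    """Highest-confidence layer: exact hand-written rules."""
    rules = {"hello": "Hi there!"}
    return rules.get(q.lower().strip())

def faq_layer(q):
    """Second layer: FAQ metadata lookup (toy dictionary here)."""
    faq = {"what is aini": "AINI is a layered conversational agent."}
    return faq.get(q.lower().strip())

def web_layer(q):
    """Lowest layer: stand-in for web document retrieval and ranking."""
    return None  # a real system would search and rank documents

LAYERS = [aiml_layer, faq_layer, web_layer]

def answer(question: str) -> str:
    """Return the first layer's answer, else a default reply."""
    for layer in LAYERS:
        result = layer(question)
        if result is not None:
            return result
    return "I don't know yet."
```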


Improving Human-Like Qualities of Chatbots -Draft


Some researchers are designing chatbots with the intent of integrating more natural, human-like interactions (De Angeli & Brahnam, 2008). Specifically, the developer may program certain responses that sacrifice accuracy but confer more human-like traits (Sing et al., 2006). For example, if a chatbot is presented with a math problem, it could wait, as if to imitate thinking about the problem, or even give the wrong answer. This could be interpreted by the user as a human trait (failing or making a mistake), but in reality, it is a trick built into the programming. Researchers have attempted to add a sense of humor through the use of self-contained jokes in their version of the chatbot ALICE (De Boni, Richardson, & Hurling, 2008). One could envision a scenario where a user asks a question and the chatbot replies: "Jim, I'm not a machine, I'm a doctor!", "I'm on my break, come back later." or "Are you sure you want me to check for this? I know it's a waste of time", which gives a jocular feel to the conversation.
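A toy illustration of such a "human-like trick": occasionally deflect with a quip, or pause as if thinking, before answering a simple math question. The quips, probability, and delay are made up for illustration and are not drawn from the cited studies:

```python
import random
import time

QUIPS = [
    "I'm not a machine, I'm a doctor!",
    "I'm on my break, come back later.",
]

def humanlike_answer(a, b, deflect_prob=0.3, rng=None):
    """Answer a + b, but sometimes pause or crack a joke instead.

    `rng` is injectable so behavior can be made deterministic in tests.
    """
    rng = rng or random.Random()
    if rng.random() < deflect_prob:
        return rng.choice(QUIPS)       # humorous deflection
    time.sleep(rng.uniform(0.0, 0.2))  # feign "thinking" before answering
    return str(a + b)
```

The point of the sketch is the trade-off the text describes: the deflection branch deliberately sacrifices responsiveness and accuracy in exchange for a more human feel.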

Attempts to improve human-like characteristics include personality matching and relationship maintenance (De Boni et al., 2008; Abbattista, Degemmis, Licchelli, Lops, Semeraro, & Zambetta, 2002). One group used shallow parsing and statistical natural language processing methods to change the style of interaction of their chatbot according to personality matching (De Boni et al., 2008). For example, the chatbot could take on a submissive role when the user is portraying a dominant one. However, the authors found that users' feelings were mixed about the chatbot using this style of interaction. Two different groups have developed chatbots that build upon previous interactions with the same user (De Boni et al., 2008; Abbattista et al., 2002). If a chatbot could incorporate the ability to "recognize" a logged-in user and interact accordingly, the chatbot would be seen to be engaging in relationship maintenance (Abbattista et al., 2002). The chatbot would use a stored user profile and make recommendations in a proactive fashion (Abbattista et al., 2002). Both studies found that familiarity with a user did produce a more positive view of the chatbot's interaction with that user.

Improving the chatbot's ability to shift conversational topics is another approach to improving its human-like quality. Chat between humans is dynamic, and a single utterance can become the focus of conversation (Montero – Enhancing computer). A subsequent utterance (considered a catastrophe) may change the topic, shifting the focus of the conversation. One study attempted to model human chat by relating utterances and changing topics more naturally (Montero & Araki, 2005). Based on ALICE, the chatbot database had categories to match patterns (user input and a template for the chatbot reply). The authors programmed the chatbot to ask a question if it did not know a reply, trying to shift topic smoothly. Their methods included using a data-mining tool called KeyGraph to identify relationships between terms in a document, in particular co-occurrence relationships of both high-probability and low-probability events. Utterances were broken down into words, visual results showed clusters of interrelated sentences, and links showed the shift in topics during chat. The researchers added critical categories by making the chatbot interject intelligent questions. Finally, they tested their chatbot's performance by having one user interact with it, after which the user gave an opinion of the conversation. Their results showed that the user rated the experience as better after the chatbot gained the ability to shift topics. Although the study results were subjective, using this type of modeling could be a viable approach for a chatbot to control conversation flow.
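A minimal sketch of co-occurrence counting plus a naive topic-shift check in this spirit. KeyGraph itself is considerably more sophisticated; everything here, including the word-overlap heuristic, is a simplified assumption:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(utterances):
    """Count how often each pair of terms appears in the same utterance."""
    counts = Counter()
    for utt in utterances:
        words = sorted(set(utt.lower().split()))
        counts.update(combinations(words, 2))
    return counts

def is_topic_shift(prev_utt, new_utt):
    """Flag a 'catastrophe': the new utterance shares no words with the last."""
    return not (set(prev_utt.lower().split()) & set(new_utt.lower().split()))
```

A chatbot could use a signal like `is_topic_shift` to decide when to interject a question and steer the conversation, as in the study above.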

Another attempt to add a proactive conversational element to a chatbot used mixed-initiative interaction (L'Abbate et al., 2005). Mixed initiative refers to a flexible interaction strategy in which each participant contributes to the task what it does best. The agent can show initiative in conversation by introducing a change in conversation flow to improve believability. This approach was tested in Virtual Insurance Risk Management (VIRMA), an online risk management tool. Keeping with the rule-based approach, the authors were able to create user profiles and reduce the number of question-and-answer options. They tested the tool by having users interact with the Risk Manager Online (RMO) user interface. From their results, the authors concluded that their approach enabled the conversations to be more domain-specific and improved the quality of the human-chatbot interaction.

Therefore, using approaches such as programming a chatbot to mimic human behavior like humor or shift conversations in certain situations, and adding conversation memory to the chatbot design, may confer more humanity to a chatbot. Perhaps combining a number of these tricks could increase the feeling of communicating with a human conversationalist.


Building Better Chatbots -Draft


The major goal of building a better chatbot is to improve its interactions with humans. A wide variety of techniques, including developing different architectures, incorporating quirks or tricks, and using different machine learning approaches, have been used to improve either chatbot accuracy or human-like qualities. First, some of the ways researchers are trying to improve the human-like qualities of chatbots will be discussed, followed by some of the methods by which researchers are trying to improve chatbot accuracy.


How do Humans View Chatbots? -Draft


Since chatbots have moved into the public realm, there has been interest in evaluating how users interact with them. For example, after analyzing conversations with the chatbot Jabberwacky, one study found that the topics and styles of conversation were broad (De Angeli & Brahnam, 2008). Users displayed attitudes ranging from nice to nasty and derogatory, and often switched style and personality during conversation with the chatbot. In one experiment, users were found to abuse chatbots for longer than they would abuse another human. Some of the reactions did not appear to be driven by specific reasons.

Other studies have found specific reasons for user disenchantment with chatbots. One group conducted an observational study with an ELIZA-style chatbot, using a systematic but subjective evaluation (Kirakowski, O'Donnell, & Yiu, 2007). Fourteen college students interacted with the chatbot for 3 minutes, after which participants were given a copy of their conversation transcript and asked to identify unnatural examples. Essentially, they evaluated the human-chatbot interaction as compared to human-human interaction, identifying general differences in interaction and specific discrepancies: "maintenance of themes, failure to respond to a question, appropriately responding to social cues (questions), use of formal or colloquial language, greetings and personality, offers a cue, phrases delivered at inappropriate times, damage control" (Kirakowski et al., 2007). Other users may find a tool unsatisfactory because it did not answer questions accurately (Abu Shawar & Atwell, 2005).

The specific reasons cited in these studies provide a focus for researchers' efforts to improve human-chatbot interaction. The following section presents different ways in which developers are attempting to build better chatbots.