Why build a Chatbot in the first place?

In the timeline of chatbot history, 2020 will be remembered as the year when chatbots made their presence hard to ignore. One of the most talked-about and viewed commercials of Super Bowl 2020 was the one for Amazon Alexa starring Ellen DeGeneres and Portia de Rossi. When a company pays millions of dollars for a 30-second advertisement slot, there is usually more than one reason behind it. And it is not Amazon alone: look around and almost all the tech giants, like Google, IBM, Facebook and Microsoft, are investing heavily in the research and development of their chatbots and chatbot platforms/frameworks.

AI chatbots are probably one of the most promising use-cases of Natural Language Processing (NLP) today, and they mark a significant point in the technical evolution of question-answering systems. Even though chatbots have existed since the 1960s (read more about "ELIZA", the first chatbot, here), the recent increase in the use of messaging platforms like WhatsApp, Facebook Messenger, Slack, Twitter and Telegram, both for work and personal messaging, gives companies a big opportunity to simplify and deliver their services through a new user interface. At the same time, chatbots reduce customer-support costs and provide a branding opportunity as well.

Why use RASA framework to Build an AI Chatbot?

Before looking at the reasons for using the RASA framework, let's first understand what it is. RASA Open Source is a framework for building AI chatbots (text- or voice-based). It includes the following components:

  1. RASA NLU (Natural Language Understanding)

This part of the framework is the tool/library for intent classification and entity extraction from the query text. The extracted intents and entities then drive response retrieval and the composition of the utterance text.

You can build simple/minimal AI chatbots using just this component. This is usually the case when building an AI chatbot to respond to FAQs, simple retrieval queries, etc. In fact, we will be using it to code our chatbot in this blog too.

2. RASA Core 

This component is the dialogue engine for the framework and helps in building more complex AI assistants that are capable of handling context (previous queries and responses in the conversation) while responding. Though RASA recommends using both NLU and Core, they can be used independently of each other. We will learn how to build a truly conversational chatbot using this component in our next blog in this series. 

3. Channels and Integrations

These components let you connect and deploy your bot on popular messaging platforms; you can read more about them here. They help developers focus on the bot's functionality rather than on the plumbing required to deploy it in the real world. RASA X is a toolset that takes a bot developed with RASA Open Source to the next level. It is free but closed-source, and available to all developers. You can read more about the success stories and use-cases that have leveraged this platform in this link; it will give you a much better understanding of the possibilities in this area.

The RASA open-source framework fits best when you can't, or don't want to, upload your data to an external service. You can build, deploy and host the implementation internally, which keeps the chatbot and the related data more secure. It also gives you better control and flexibility when deploying your chatbot in production. Additionally, it is open source and free, which makes it a go-to choice for building chatbots. In this blog, we will focus on building a secure chatbot using just RASA NLU. You can read more about the basics here.

Tutorial: Building “Trippy” the Travel Agency Chatbot

In our use-case, we will build a chatbot called "Trippy" which will be capable of interacting with customers and doing the following:

  1. Greet the customer.
  2. Give information about flights or trains from a given source to a given destination.
  3. Show upcoming itineraries for a user. 

For simplicity, we will assume that we have some backend services that can connect to the database and do the following (sketched as dummy stubs after this list):

  • Search for flights or trains given a source and a destination
  • Retrieve upcoming itineraries for a particular user (by using some unique identifier). 
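To keep the tutorial self-contained, here is a minimal sketch of those backend services as Python stubs. The function names and dummy data are assumptions made for this blog; in a real system each function would query a database or an internal API.

# Hypothetical stand-ins for the backend services described above.
def search_flights(source, destination):
    """Return available flights between source and destination (dummy data)."""
    return [f"Flight AI-101: {source} -> {destination}",
            f"Flight 6E-202: {source} -> {destination}"]

def search_trains(source, destination):
    """Return available trains between source and destination (dummy data)."""
    return [f"Train 12627: {source} -> {destination}"]

def find_itineraries(user_id):
    """Return upcoming itineraries for a unique user identifier (dummy data)."""
    return [f"Upcoming trip for user {user_id}: Bangalore -> Mumbai"]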

Before we jump on to installations and coding, there are four important concepts which are critical to what we are going to do next. 

Query – The sentence/text typed or spoken by the end user.

Intents – An intent describes what the user wants to do. In our case, it can be "search flights", "search trains" or "retrieve bookings/itineraries". Identifying the intent is usually a text classification problem. You can read more about classification techniques here at our blog.

Entities – These are pieces of information identified in the query. Here is an example to understand the concepts better:

Query: Find me flights from Bangalore to Mumbai. 

Intent: “search_flights”

Entities: ‘source’: ‘Bangalore’, ‘destination’: ‘Mumbai’

Query: Find me trains from Delhi to Mumbai. 

Intent: “search_trains”

Entities: ‘source’: ‘Delhi’, ‘destination’: ‘Mumbai’

Utterance – The text that the chatbot responds with, i.e. the response. The sketch below shows how these concepts appear in a parsed query.
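When we later parse a query with RASA NLU, the interpreter returns a Python dict that carries exactly these pieces. The shape below is illustrative; the confidence value and character offsets are example numbers, not real output.

# Illustrative parse result for "Find me flights from Bangalore to Mumbai."
{
    "text": "Find me flights from Bangalore to Mumbai.",
    "intent": {"name": "search_flights", "confidence": 0.92},
    "entities": [
        {"entity": "source", "value": "Bangalore", "start": 21, "end": 30},
        {"entity": "destination", "value": "Mumbai", "start": 34, "end": 40}
    ]
}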

Installations & Setup of AI Chatbot

To create the bot, we need to install Python, RASA NLU and the spaCy language models, along with a few dependencies. It is a good idea to create a separate virtual environment to keep the installation clean and in one place. I will be using Conda for the setup and installation; you are free to use virtualenv instead.

# Create virtual environment with Python 3.6
conda create --name chatbot_env python=3.6

# Activate the environment
conda activate chatbot_env

# Install RASA NLU with spaCy support
pip install rasa_nlu
pip install rasa_nlu[spacy]
python -m spacy download en_core_web_md
python -m spacy link en_core_web_md en --force

# Create project folder/structure
mkdir trippy
cd trippy/
mkdir data
cd data
Creating a Training Data File

There are different formats in which you can provide the training data. However, Markdown is the easiest Rasa NLU format to create and read. You can read more about the training data formats here.

# Create the training data file in the data folder

touch nlu.md

## intent:greet
- Hi
- hi
- Hello
- Hey
- Hey There

## intent:thanks
- thanks
- thank you
- thank you so much

## intent:bye
- bye
- bye bye
- see you later
- catch you later
- bbye

## intent:search_flights
- find flights from [bangalore](source) to [mumbai](destination)
- get me flights from [bangalore](source) to [mumbai](destination)
- what are flights from [bangalore](source) to [mumbai](destination)
- fetch me flights from [bangalore](source) to [mumbai](destination)
- find flights from [lucknow](source) to [delhi](destination)
- get me flights from [lucknow](source) to [delhi](destination)
- what are flights from [lucknow](source) to [delhi](destination)
- fetch me flights from [lucknow](source) to [delhi](destination)

## intent:search_trains
- find trains from [bangalore](source) to [mumbai](destination)
- get me trains from [bangalore](source) to [mumbai](destination)
- what are trains from [bangalore](source) to [mumbai](destination)
- fetch me trains from [bangalore](source) to [mumbai](destination)
- find trains from [lucknow](source) to [delhi](destination)
- get me trains from [lucknow](source) to [delhi](destination)
- what are trains from [lucknow](source) to [delhi](destination)
- fetch me trains from [lucknow](source) to [delhi](destination)

## intent:find_itineraries
- get me my bookings
- show me my bookings
- find my upcoming trips
- fetch my upcoming trips

The training data format for Rasa NLU is usually structured in four parts:

  • Common examples
  • Synonyms
  • Regex
  • Lookup tables

Only the common-examples part is mandatory; you can include the other parts as well to refine the model's predictions (a short sketch of the optional parts follows below). For a toy dataset it is possible to write the file manually, but for real-life applications you can generate your training data file using tools like Chatito and Tracy.
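For illustration, here is a hedged sketch of what the optional parts look like in the Rasa NLU Markdown format. The synonym values, the regex and the lookup file path are made up for this example:

## synonym:bangalore
- bengaluru
- blr

## regex:pnr
- [0-9]{10}

## lookup:city
data/cities.txt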

Creating the Model Configuration File

The next step is to configure the pipeline through which the input query/data flows and where the intent classification and entity extraction take place. You can read more about the supported pipeline components and configurations here. The config file is a .yml file and should be located in the base folder of the project, which is the "trippy" directory in our case. A minimal example follows.
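As a minimal sketch, a config.yml using the pre-built spaCy pipeline template could look like the snippet below. The pipeline choice is an assumption for this tutorial; pick whichever pipeline suits your data and language.

# config.yml - minimal pipeline configuration using the spaCy template
language: "en"
pipeline: "spacy_sklearn"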

Training & Testing the model

Though Rasa provides an excellent command-line interface, we will use a Python file to train and test the model, as that helps in integrating the chatbot further down the line and also keeps the commands in one place. We will also test whether the model can figure out source/destination entities that are not present in the training data. We will create a file rasa_train.py in the base folder for this.

import logging
import pprint
from rasa_nlu import config
from rasa_nlu.training_data import load_data  
from rasa_nlu.model import Interpreter, Trainer  
from rasa_nlu.test import run_evaluation  

logfile = "rasa_trippy.log"  

# set logging level  
logging.basicConfig(filename=logfile, level=logging.DEBUG)

# load the training data
train_data = load_data("./data/nlu.md") 

# create Trainer object using the config file to define the pipeline
trainer = Trainer(config.load("config.yml"))  

# train the model
trainer.train(train_data)  

# persist the model to store it for future use
model_directory = trainer.persist("./models/nlu", fixed_model_name="current")  

# load the model from the file
interpreter = Interpreter.load("./models/nlu/default/current")  

# perform a few tests, including a source/destination pair not seen in training
pprint.pprint(interpreter.parse("hey there"))
pprint.pprint(interpreter.parse("find trains from bangalore to mumbai"))
pprint.pprint(interpreter.parse("find flights from chennai to kolkata"))  # unseen cities
# perform a complete evaluation 
run_evaluation("./data/nlu.md", model_directory)  

Along with the other outputs, you will be able to see the report on the intent evaluation.


The outputs from the interpreter are plain Python dicts. You can easily integrate them with a bunch of if-elif-else conditions based on the intent name, plus a few helper functions, to return appropriate responses to the user, as sketched below. However, writing and maintaining if-elif-else conditions to handle different intents and cases quickly becomes cumbersome; a much better way to deal with this is to use the RASA Core components.
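Here is a minimal sketch of that dispatch logic. It assumes the hypothetical backend stubs (search_flights, search_trains, find_itineraries) defined earlier in this blog and an interpreter loaded as shown above.

# A simple if-elif-else dispatcher over the parsed intent and entities.
def handle_query(interpreter, query, user_id=None):
    result = interpreter.parse(query)
    intent = result["intent"]["name"]
    # Collapse the entity list into a simple {entity_name: value} mapping.
    entities = {e["entity"]: e["value"] for e in result.get("entities", [])}
    if intent == "greet":
        return "Hello! How can I help you with your travel plans?"
    elif intent == "search_flights":
        return search_flights(entities.get("source"), entities.get("destination"))
    elif intent == "search_trains":
        return search_trains(entities.get("source"), entities.get("destination"))
    elif intent == "find_itineraries":
        return find_itineraries(user_id)
    elif intent == "thanks":
        return "You're welcome!"
    elif intent == "bye":
        return "Goodbye! Have a great trip."
    else:
        return "Sorry, I didn't understand that."

# Example usage:
# print(handle_query(interpreter, "find flights from bangalore to mumbai"))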

We have successfully created a bot that can handle basic natural language. In the next blog in this series, we will discuss how to take this to the next level and add dialogue capabilities.

To fully explore and utilize the pipeline components, it is important to have a deeper understanding of classification techniques and Natural Language Processing. Springboard's courses on machine learning provide excellent learning opportunities in NLP and ML, with a 1:1 mentoring-led, project-driven approach and a job guarantee.