Basic intuition of Conversational Question Answering Systems (CQA)

Researchers have been working to develop intelligent dialogue systems that not only match or surpass a human's ability to carry out an interactive conversation but can also answer questions on a variety of topics. LaMDA, short for "Language Model for Dialogue Applications" and Google's latest research breakthrough, adds pieces to one of the most tantalizing sections of that puzzle: conversation. LaMDA is open-ended by design, which means a conversation can start on one specific topic and end up somewhere completely different. Several dialogue agents, such as Amazon Alexa, Apple Siri, and Microsoft Cortana, have already been deployed successfully. In this blog, I want to cover the main building blocks of a conversational question answering system.

“We’re no longer teaching people how to communicate with systems, we’re teaching systems to communicate with people.”

The field of conversational AI can be divided into three different dialog systems:

Task-Oriented Dialogue System: This dialogue system performs tasks on the user's behalf, such as making a restaurant reservation or scheduling an event.

Chat-Oriented Dialogue System: This system needs to carry out a natural and interactive conversation with the users.

QA Dialogue System: This dialogue system is responsible for providing clear and concise answers to users' questions based on information drawn from different data sources, such as text documents or knowledge bases.

In the sections that follow, I will focus on the QA Dialogue System.

Types of Conversational AI

Question answering in general involves accessing different data sources to find the correct answer to a given question. Since CQA is a sub-category of QA, CQA models can be categorized on the basis of the data domain, types of questions, types of data sources, and types of systems.

Categorization of CQA on the basis of i) data domains, ii) types of questions, iii) types of data sources, and iv) types of systems

The above figure shows the details of each category of a CQA system. We will put the most emphasis on the Conversational Machine Reading Comprehension system and the CoQA dataset. In my upcoming blogs, I will explain Open-Domain CQA in more detail, currently a very active topic of state-of-the-art QA research in NLP.

Conversational Machine Reading Comprehension (CMRC)

Most machine reading comprehension work is based on single-turn QA, which is unlike real-world applications where users ask multiple questions in turns. For instance, a user might ask, "Who is Elon Musk?" and, based on the answer received, might further ask, "Where did he study?" and "What was he famous for?". It is easy for a human to understand that "he" in the follow-up questions refers to "Elon Musk" from the first question. But for a machine, comprehending the context poses a set of challenges, such as how to resolve coreferences against the conversation history — that is, answering the current question based on what the model understands from the previous questions and answers.
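One common way to give a model access to the conversation history is simply to prepend the previous question-answer turns to the current question before encoding it. Here is a minimal sketch of that idea — the `Q:`/`A:` turn format and the two-turn window are illustrative choices, not any particular model's actual encoding:

```python
def build_input(history, question, max_turns=2):
    """Prepend the last few (question, answer) turns to the current
    question so the model has the context to resolve words like "he"."""
    turns = [f"Q: {q} A: {a}" for q, a in history[-max_turns:]]
    turns.append(f"Q: {question}")
    return " ".join(turns)

history = [("Who is Elon Musk?", "An entrepreneur and CEO of Tesla and SpaceX.")]
print(build_input(history, "Where did he study?"))
# Q: Who is Elon Musk? A: An entrepreneur and CEO of Tesla and SpaceX. Q: Where did he study?
```

With the first turn in view, an encoder at least has a chance of linking "he" back to "Elon Musk"; without it, the follow-up question is unanswerable.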

Text-based QA agents, also referred to as CMRC agents, are most commonly found in mobile phones and search engines (like Google, Bing, etc.), where concise and direct answers are provided to users.

One reason for the rapid growth of the CQA field is the advent of large-scale conversational datasets for machine comprehension. In the next section, we will discuss CMRC datasets.

Datasets for Conversational Machine Reading Comprehension

Generally, there are three types of CMRC datasets based on the type of answers they provide:

Multiple-choice option datasets provide text-based multiple-choice questions and expect the model to identify the right answer out of the available options. Examples: MCTest, MCScript, etc.

Descriptive answer datasets allow answers to be in any free-form text. Examples: MS Marco and Narrative QA.

Span prediction datasets require the model to extract the correct answer span from the given source passage. Examples: CoQA, SQuAD, TriviaQA, etc.
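To make the span-prediction idea concrete: such a model outputs a start and an end position, and the answer is simply the corresponding slice of the source passage. A toy sketch (the passage and indices are made up for illustration):

```python
def extract_span(passage, start, end):
    """A span-prediction model outputs (start, end) positions;
    the answer is the corresponding slice of the passage."""
    return passage[start:end]

passage = "CoQA was introduced by Siva Reddy, Danqi Chen, and Christopher Manning."
# A model predicting start=23, end=33 yields the span below.
print(extract_span(passage, 23, 33))
# Siva Reddy
```

The hard part, of course, is training the model to predict good start/end positions; the extraction step itself is trivial.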

Let’s get our hands on the CoQA dataset.

CoQA (Conversational Question Answering)

CoQA (Conversational Question Answering) was introduced by Siva Reddy, Danqi Chen, and Christopher Manning with three objectives in mind.

The first is to capture the nature of questions in human conversations: in this dataset, every question except the first one depends on the previous conversation history.
The second is that CoQA allows free-form answers while providing a text span from the given passage as a rationale for each answer.
The third is that CoQA enables the development of CQA systems across multiple domains.

The CoQA dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage.
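To see what "free-form answers with highlighted evidence" looks like on disk, here is an illustrative record mirroring the layout of the released CoQA JSON. The field names (`story`, `questions`, `answers`, `input_text`, `span_text`, `turn_id`) follow the dataset's format as I understand it; the passage and turns themselves are made up for this example:

```python
# One CoQA-style record: a passage ("story") with aligned question
# and answer turns. Each answer has free-form text ("input_text")
# plus a rationale span from the passage ("span_text").
record = {
    "source": "wikipedia",
    "story": ("Elon Musk is an entrepreneur. "
              "He studied at the University of Pennsylvania."),
    "questions": [
        {"turn_id": 1, "input_text": "Who is Elon Musk?"},
        {"turn_id": 2, "input_text": "Where did he study?"},
    ],
    "answers": [
        {"turn_id": 1, "input_text": "an entrepreneur",
         "span_text": "Elon Musk is an entrepreneur"},
        {"turn_id": 2, "input_text": "the University of Pennsylvania",
         "span_text": "He studied at the University of Pennsylvania"},
    ],
}

for q, a in zip(record["questions"], record["answers"]):
    print(q["input_text"], "->", a["input_text"])
```

Note that the second question only makes sense given the first turn — exactly the conversational dependence CoQA was built to test.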

A conversation from the CoQA dataset.

Interested? What input parameters does a CMRC model consider, and how does the model use those parameters to reply with the relevant answer?

CoQA can be modeled as a conversational response generation problem or as a reading comprehension problem, and a baseline is evaluated for each modeling type. In either case, given a passage p, the conversation history {q1, a1, . . . , qi−1, ai−1}, and a question qi, the task is to predict the answer ai. Here the model uses the information from the gold answers a1, a2, . . . , ai−1 to predict ai.
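The evaluation loop this describes can be sketched as follows. Note the key detail: the history fed to the model at turn i contains the *gold* answers, not the model's own earlier predictions. The `dummy_model` stand-in is purely illustrative — a real model would be a trained comprehension or generation system:

```python
def answer_conversation(model, passage, questions, gold_answers):
    """For each turn i, predict a_i from the passage p, the gold history
    {q_1, a_1, ..., q_{i-1}, a_{i-1}}, and the current question q_i."""
    predictions = []
    for i, q in enumerate(questions):
        history = list(zip(questions[:i], gold_answers[:i]))
        predictions.append(model(passage, history, q))
    return predictions

# A stand-in "model" that just reports how much context it received.
def dummy_model(passage, history, question):
    return f"turn {len(history) + 1}: {question}"

preds = answer_conversation(
    dummy_model,
    "Elon Musk studied at the University of Pennsylvania.",
    ["Who is Elon Musk?", "Where did he study?"],
    ["An entrepreneur.", "The University of Pennsylvania."],
)
print(preds)
# ['turn 1: Who is Elon Musk?', 'turn 2: Where did he study?']
```

Conditioning on gold history keeps turns independent for evaluation; at inference time a deployed system would have to condition on its own (possibly wrong) previous answers instead.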

Let’s briefly discuss DrQA in order to learn how documents and spans are retrieved from a provided context source.

DrQA: an open-domain reading comprehension question answering system

DrQA combines the challenges of both large-scale open-domain QA and machine comprehension of text. To answer any question, the system must first retrieve the few relevant articles from among more than 5 million Wikipedia articles and then scan them carefully to identify the answer. The authors of the DrQA paper termed this setting machine reading at scale (MRS).

DrQA was developed to evaluate MRS by applying a single open-domain system to multiple existing QA datasets. DrQA, a system for question answering from Wikipedia, is composed of:

Document Retriever: a module using bigram hashing and TF-IDF matching, designed to efficiently return a subset of relevant articles for a given question.

Document Reader: a multi-layer recurrent neural network machine comprehension model trained to detect answer spans in the few returned documents.
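The retrieval idea can be approximated in a few lines of plain Python: score each document by the TF-IDF weight of the unigrams and bigrams it shares with the question, and return the top-scoring ones. This is only a sketch of the matching step — the real DrQA retriever additionally hashes bigrams into a fixed-size space for efficiency, which is omitted here, and the toy documents are made up:

```python
import math
import re
from collections import Counter

# A toy document collection standing in for Wikipedia articles.
docs = [
    "Elon Musk founded SpaceX and leads Tesla.",
    "The Eiffel Tower is located in Paris.",
    "Python is a popular programming language.",
]

def ngrams(text):
    """Lowercased unigrams and bigrams of a text."""
    toks = re.findall(r"[a-z0-9]+", text.lower())
    return toks + [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]

doc_counts = [Counter(ngrams(d)) for d in docs]
df = Counter()                      # document frequency of each n-gram
for counts in doc_counts:
    df.update(set(counts))
N = len(docs)

def tfidf(counts):
    """Term-frequency times smoothed inverse document frequency."""
    return {t: c * math.log((N + 1) / (df[t] + 1)) for t, c in counts.items()}

doc_vecs = [tfidf(c) for c in doc_counts]

def retrieve(question, k=1):
    """Rank documents by TF-IDF dot product with the question."""
    q_vec = tfidf(Counter(ngrams(question)))
    scores = [sum(q_vec[t] * v.get(t, 0.0) for t in q_vec) for v in doc_vecs]
    ranked = sorted(range(N), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in ranked[:k]]

print(retrieve("Who founded SpaceX?"))
# ['Elon Musk founded SpaceX and leads Tesla.']
```

The retrieved subset is then handed to the Document Reader, which scans only those few documents for the answer span instead of all 5 million articles.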

An overview of the DrQA question answering system is shown below; the image is taken from the original DrQA paper 🙂

An overview of our question answering system DrQA.

We will discuss Open-Domain Conversational QA in more detail in my upcoming blogs. Only a basic overview of DrQA is given here; for more information, please refer to the original paper.

Future works & Challenges in CQA

Current CQA benchmarks mainly focus on resource-rich languages such as English, so there is plenty of room for improvement in conversational question answering for low-resource languages. Multilingual and few-shot cross-lingual transfer learning remain necessary areas of research in CQA.

Also, present systems are not yet capable of continuing question answering while switching topics across different articles.

Thanks for reading this article! Share your reviews and ideas with me. If you have any doubts or run into an error, let me know in the comments. Don’t forget to give me a clap 👏 🙂




References

[3] Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open-Domain Questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.

[4] Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A Conversational Question Answering Challenge. Transactions of the Association for Computational Linguistics, 7:249–266.



Basic intuition of Conversational Question Answering Systems (CQA) was originally published in AR/VR Journey: Augmented & Virtual Reality Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.
