Research/Blog
Meeting Minutes from AI Lab Hands-On Workshop on Saturday 19th Oct in Bengaluru
- October 24, 2019
- Posted by: vsinghal
- Category: Deep Learning, Natural Language Processing
#CellStratAILab #disrupt4.0 #WeCreateAISuperstars
Last Saturday, our AI Lab Team Leads Abdul Azeez and Indrajit Singh conducted a superb workshop on developing Question-Answer systems (QAS) with Natural Language Processing (NLP).
![](http://www.cellstrat.com/wp-content/uploads/2019/10/Collage-2.jpg)
Question Answering systems are going to change the world. Understanding human language interactively, accurately and quickly is the need of the century, and there are many real-time applications that can be built using QAS.
Workshop Details :
A common misconception is that chatbots and Q&A systems are the same or similar. In reality they are not: apart from both taking text as input and producing text as output, they follow completely different approaches, methods, algorithms and models.
Primarily, QA systems are built to take a human-language question and produce a proper answer for it. Sometimes a specific question is asked; sometimes the question is open-ended. Recent research has tackled even more difficult question types.
In this workshop, the SQuAD dataset (Stanford Question Answering Dataset) was used to train the QA model. SQuAD consists of over 100,000 question-answer pairs drawn from context paragraphs found on Wikipedia. Each entry consists of a question, a context paragraph, and an answer span within that paragraph.
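To make the data format concrete, here is a minimal sketch (with a made-up example record, not taken from the actual dataset) of how a SQuAD-style entry stores the answer as a character span inside the context paragraph:

```python
# A SQuAD-style record: the answer is stored as its text plus a
# character offset ("answer_start") into the context paragraph.
record = {
    "context": "The workshop was held in Bengaluru on 19th October 2019.",
    "question": "Where was the workshop held?",
    "answers": [{"text": "Bengaluru", "answer_start": 25}],
}

def extract_answer(rec):
    """Recover the answer text from the context using the stored span."""
    ans = rec["answers"][0]
    start = ans["answer_start"]
    return rec["context"][start:start + len(ans["text"])]

print(extract_answer(record))  # -> Bengaluru
```

A model trained on this data predicts the start and end positions of the span, rather than generating free-form text.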
GloVe word embeddings were also used. GloVe is a set of pre-trained embeddings trained on Wikipedia 2014 and Gigaword 5.
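As an illustration, the pre-trained GloVe files are plain text with one word per line followed by its vector components. A minimal loader sketch (the file name below is just the usual naming convention, assumed here):

```python
def parse_glove_line(line):
    """Parse one line of a GloVe text file: 'word v1 v2 ... vd'."""
    parts = line.rstrip().split(" ")
    return parts[0], [float(x) for x in parts[1:]]

# Typical usage (file name assumed):
# embeddings = dict(parse_glove_line(l)
#                   for l in open("glove.6B.100d.txt", encoding="utf-8"))

word, vec = parse_glove_line("king 0.1 -0.2 0.3")
print(word, len(vec))  # -> king 3
```

Each question and context token is then looked up in this dictionary before being fed to the encoder.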
(Research papers referenced in this article – 2749028 and 2761899 by Stanford University).
![](http://www.cellstrat.com/wp-content/uploads/2019/10/QA-example-rajpurkar.github.io-qa-and-squad.png)
https://rajpurkar.github.io/mlx/qa-and-squad/
The QA model uses a sequence-to-sequence Encoder-Decoder architecture (bi-directional LSTM), augmented with a Pointer Network, a simple question coattention mechanism, and a full coattention encoder for improved accuracy. With the coattention encoder, performance reaches 65.036% F1 and 53.205% EM on the SQuAD test set.
In the baseline model, we feed the question embeddings into one encoder LSTM to obtain the final hidden state “q”. We then feed the context embeddings into another encoder LSTM (initialised with hidden state “q”) to obtain the final hidden state “c” and context vectors c1′, c2′, …, cn′. Together, this encoder pair acts as the “Reader”, which reads both the question and the paragraph context.
In the Decoder, an LSTM operates on the inputs [(c1′, q), (c2′, q), …, (cn′, q)] to produce outputs, which are passed through a softmax to classify each context token as the start of the answer, the end of the answer, or neither.
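The decoder's classification step can be sketched as follows (a pure-Python toy, with made-up logits standing in for the decoder LSTM outputs): each context token gets three logits that are softmaxed into probabilities for start / end / neither, and the predicted answer span runs from the most probable start token to the most probable end token.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_span(token_logits):
    """token_logits[i] = [start_logit, end_logit, neither_logit] for token i.
    Returns (start_index, end_index) of the predicted answer span."""
    probs = [softmax(l) for l in token_logits]
    start = max(range(len(probs)), key=lambda i: probs[i][0])
    end = max(range(len(probs)), key=lambda i: probs[i][1])
    return start, end

# Toy decoder outputs for a 4-token context:
logits = [
    [0.1, 0.0, 2.0],   # token 0: likely "neither"
    [3.0, 0.2, 0.1],   # token 1: likely answer start
    [0.2, 2.5, 0.3],   # token 2: likely answer end
    [0.0, 0.1, 1.8],   # token 3: likely "neither"
]
print(predict_span(logits))  # -> (1, 2)
```

In a real implementation the start and end positions are usually chosen jointly (with end ≥ start), which is where the Pointer Network logic mentioned above comes in.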
![](http://www.cellstrat.com/wp-content/uploads/2019/10/QA-model-Research-paper-2761899-by-web.stanfordedu.png)
https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1174/reports/2761899.pdf
![](http://www.cellstrat.com/wp-content/uploads/2019/10/Network-Architecture-semanticscholars.org_.png)
https://www.semanticscholar.org/paper/Assignment-4-%3A-Question-Answering-on-the-SQuAD-with-Creus-Costa-Hwang/5927c2170fb03f9f34ef886c65fdf2cc6ec34089/figure/1
The final model gave fairly accurate answers to the questions asked, with good F1 and EM (exact-match) scores.
Come and explore world-class AI research and development in our AI Lab. Attend our AI Lab meetup this Saturday (26th Oct ’19) in BLR:
BLR AI Lab (Saturday 26th Oct):
Register: https://www.meetup.com/Disrupt-4-0/events/vcqljryznbjc/
Topic: Fake Video Detection, Monte Carlo Simulation
Location: WeWork, Embassy Tech Village, ORR, BLR
Presenters: Jani Basha, Pushparaj M., Atmabit Pattanaik
See you this weekend for the AI Lab meetup! Let’s disrupt with AI, big time!
Questions? Call me at +91-9742800566!
Best Regards,
Vivek Singhal
Co-Founder & Chief Data Scientist, CellStrat
91-9742800566