
2-s2.0-85059948625


Comparative Study of Topology and Feature Variants for Non-Task-Oriented Chatbot using Sequence to Sequence Learning

Dzakwan G.a, Purwarianti A.a

a School of Electrical Engineering and Informatics, Bandung Institute of Technology, Bandung, Indonesia

Abstract

© 2018 IEEE. Sequence to sequence learning is a recent approach to language generation systems such as chatbots and machine translation. It takes advantage of two recurrent neural networks, an encoder and a decoder, as an end-to-end mapping tool that generatively builds an output from a given input. In this paper, we look for the combination of topology and feature that yields the highest score on the automatic evaluation metric BLEU, with a non-task-oriented chatbot as the case study. The topologies used in the experiment are RNN, GRU, and LSTM, along with two modifications: a bidirectional encoder and an attention-based decoder. The features used are a word-based feature and a character-based feature. The experiment is conducted on the Papaya English dialogue dataset, from which ten thousand conversation pairs are selected as training data and one thousand pairs as testing data. The results show that the bidirectional LSTM encoder with an attention-based decoder and the word-based feature produced the highest cumulative BLEU-4 score among all topologies, 0.31.

Author keywords

attention-based decoder, bidirectional encoder, character-based feature, Chatbot, Sequence learning, word-based feature

Indexed keywords

attention-based decoder, bidirectional encoder, character-based feature, non-task-oriented chatbot, recurrent neural network, sequence to sequence learning, word-based feature

Funding details

This research is partially funded by the Rinna Microsoft collaboration research project titled "Image Descriptions with Sentiments in Indonesian Language".

DOI

https://doi.org/10.1109/ICAICTA.2018.8541285
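The cumulative BLEU-4 score used to rank the topologies can be sketched as follows. This is a minimal toy implementation of the standard BLEU formula (clipped n-gram precisions combined by geometric mean, uniform weights, brevity penalty), with simple add-one smoothing so short sentences do not score zero; it is illustrative only and is not the exact evaluation script used in the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Cumulative BLEU for a single reference/candidate pair of token lists."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped count: each candidate n-gram credits at most its
        # number of occurrences in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing (an illustrative choice) keeps log() defined
        # when an n-gram order has no matches.
        log_precisions.append(math.log((clipped + 1) / (total + 1)))
    # Brevity penalty discourages overly short candidates.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

For example, `bleu("i am fine thank you".split(), "i am fine".split())` returns a value between 0 and 1, penalized both for the missing 4-gram match and for brevity, while an exact match scores 1.0.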