Incremental sentence compression using LSTM recurrent networks
Sakti S. (a), Ilham F. (a,b), Neubig G. (a), Toda T. (a,c), Purwarianti A. (b), Nakamura S. (a)
a Graduate School of Information Science, Nara Institute of Science and Technology, Japan
b School of Electrical Engineering and Informatics, Bandung Institute of Technology, Indonesia
c Information Technology Center, Nagoya University, Japan
Abstract

© 2015 IEEE. Many current sentence compression techniques attempt to produce a shortened form of a sentence by relying on syntactic structure such as dependency tree representations. While the performance of sentence compression has been improving, these approaches require a full parse of the sentence before compression can be performed, making it difficult to compress in real time. In this paper, we examine the possibility of performing incremental sentence compression using long short-term memory (LSTM) recurrent neural networks (RNNs). The decision of whether to remove a word is made at each time step, without waiting for the end of the sentence. Various RNN parameters are investigated, including the number of layers and the network connections. Furthermore, we propose a pretraining method in which the network is first trained as an autoencoder. Experimental results reveal that our method obtains compression rates similar to human references and better accuracy than state-of-the-art tree transduction models.

Author keywords

Compression rates, Long short-term memory, Network connection, Recurrent networks, Recurrent neural network (RNN), Sentence compression, State of the art, Syntactic structure

Indexed keywords

Long short-term memory, Recurrent neural network, Sentence compression

DOI

https://doi.org/10.1109/ASRU.2015.7404802
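The abstract above describes making a keep/delete decision for each word as it arrives, rather than after a full parse. The following is a minimal sketch of that idea, assuming PyTorch; the class name, hyperparameters, and toy input are illustrative assumptions and this is not the authors' implementation (in particular, the autoencoder pretraining step is omitted).

```python
# Hypothetical sketch: a unidirectional LSTM that emits a keep/delete label for
# each incoming word, so compression can proceed incrementally without waiting
# for the end of the sentence or a syntactic parse.
import torch
import torch.nn as nn

class IncrementalCompressor(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # num_layers stands in for the kind of RNN parameter the paper investigates.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.classify = nn.Linear(hidden_dim, 2)  # per-word label: 0 = delete, 1 = keep

    def forward(self, word_ids, state=None):
        # word_ids: (batch, time). Each position's label depends only on the
        # words seen so far, which is what makes the decision incremental.
        emb = self.embed(word_ids)
        out, state = self.lstm(emb, state)
        return self.classify(out), state

# Usage: feed words one at a time and decide immediately whether to keep each.
model = IncrementalCompressor(vocab_size=10000)
state = None
compressed = []
for word_id, word in [(12, "the"), (845, "very"), (77, "quick"), (301, "fox")]:
    logits, state = model(torch.tensor([[word_id]]), state)
    if logits.argmax(dim=-1).item() == 1:
        compressed.append(word)
print(" ".join(compressed))
```

In such a setup, training would minimize a per-word classification loss against reference compressions; the paper additionally pretrains the network as an autoencoder before this supervised stage.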