


Summarizing Indonesian news articles using Graph Convolutional Network

Garmastewira G., Khodra M.L.

School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia

Abstract

© 2010, Universiti Utara Malaysia Press.

Multi-document summarization transforms a set of related documents into a concise summary. Existing Indonesian news article summarizers do not take the relationships between sentences into account and depend heavily on Indonesian language tools and resources. This study employed a Graph Convolutional Network (GCN), which accepts a word embedding sequence and a sentence relationship graph as input, for Indonesian news article summarization. The system comprises four main components: preprocessing, graph construction, sentence scoring, and sentence selection. The sentence scoring component is a neural network that uses a Recurrent Neural Network (RNN) and a GCN to produce a score for every sentence. Three different representations were used for the sentence relationship graph. The sentence selection component then generates a summary with one of two techniques: greedily choosing the sentences with the highest scores, or applying the Maximum Marginal Relevance (MMR) technique. The evaluation showed that the GCN summarizer with the Personalized Discourse Graph (PDG) representation achieved the best results, with an average ROUGE-2 recall of 0.370 for 100-word summaries and 0.378 for 200-word summaries. Greedy sentence selection gave better results for 100-word summaries, while MMR performed better for 200-word summaries.

Indexed keywords

Graph Convolutional Network, Personalized discourse graph, ROUGE-2, Summarization

Funding details

This work was supported by P3MI Grant 2018.

DOI

https://doi.org/10.32890/jict2019.18.3.6
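The abstract describes the sentence scoring component as an RNN over each sentence's word-embedding sequence combined with a GCN over the sentence relationship graph. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that architecture, and the layer sizes, GRU encoder, number of GCN layers, and symmetric adjacency normalisation are all assumptions made for the example.

```python
# Minimal sketch of an RNN + GCN sentence scorer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: (num_sentences, num_sentences) weighted sentence relationship graph.
        # Symmetrically normalise the adjacency, a common GCN convention (assumption here).
        deg = adj.sum(dim=1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(adj_norm @ h))

class GCNSentenceScorer(nn.Module):
    def __init__(self, emb_dim=300, hid_dim=200, gcn_layers=2):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.gcn = nn.ModuleList([GCNLayer(hid_dim, hid_dim) for _ in range(gcn_layers)])
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, word_embeddings, adj):
        # word_embeddings: (num_sentences, max_words, emb_dim) word-embedding sequences.
        _, last_hidden = self.rnn(word_embeddings)   # (1, num_sentences, hid_dim)
        h = last_hidden.squeeze(0)                   # one vector per sentence
        for layer in self.gcn:                       # propagate over the sentence graph
            h = layer(h, adj)
        return self.score(h).squeeze(-1)             # one salience score per sentence
```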
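The two sentence-selection strategies compared in the study can be sketched as follows. This is an illustrative implementation, not the one evaluated in the paper: the lambda weight, cosine-similarity redundancy measure, and the use of the scorer's output as the relevance term in MMR are assumptions for the example.

```python
# Greedy selection vs. Maximum Marginal Relevance (MMR) selection (illustrative sketch).
import numpy as np

def greedy_select(scores, sentences, word_limit):
    # Take sentences in descending score order while they still fit the word budget.
    order = np.argsort(scores)[::-1]
    summary, length = [], 0
    for i in order:
        words = len(sentences[i].split())
        if length + words <= word_limit:
            summary.append(i)
            length += words
    return summary

def mmr_select(scores, sent_vectors, sentences, word_limit, lam=0.7):
    # Trade salience against redundancy with already-selected sentences.
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    candidates = list(range(len(sentences)))
    summary, length = [], 0
    while candidates:
        def mmr_value(i):
            redundancy = max((cosine(sent_vectors[i], sent_vectors[j]) for j in summary),
                             default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_value)
        words = len(sentences[best].split())
        if length + words <= word_limit:
            summary.append(best)
            length += words
        candidates.remove(best)
    return summary
```

Both routines return sentence indices; joining the selected sentences in their original document order would produce the 100-word or 200-word summaries whose scores are reported in the abstract.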
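ROUGE-2 recall, the reported evaluation metric, is the fraction of the reference summary's bigrams that also appear in the system summary. The snippet below is a minimal single-reference sketch; the study's evaluation would normally rely on a full ROUGE toolkit with its own tokenisation.

```python
# ROUGE-2 recall for a single reference summary (simplified sketch).
from collections import Counter

def rouge2_recall(system_summary, reference_summary):
    def bigrams(text):
        tokens = text.lower().split()
        return Counter(zip(tokens, tokens[1:]))
    sys_bg, ref_bg = bigrams(system_summary), bigrams(reference_summary)
    overlap = sum((sys_bg & ref_bg).values())        # clipped bigram overlap
    return overlap / max(sum(ref_bg.values()), 1)    # divided by reference bigram count
```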