
Scopus EID: 2-s2.0-85087085843


Semantic relation detection based on multi-task learning and cross-lingual-view embedding

Sholikah R.W. (a), Arifin A.Z. (a), Fatichah C. (a), Purwarianti A. (b)

(a) Informatics Department, Faculty of Information and Communication Technology, Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia
(b) School of Informatics and Electrical Engineering, Institut Teknologi Bandung, Bandung, Indonesia

Abstract

© 2020, Intelligent Network and Systems Society. Automatic semantic relation extraction is an important task in NLP. Various methods have been developed using either pattern-based or distributional approaches. However, existing research focuses on single-task modeling without considering the possibility of generalization across tasks. Moreover, existing methods use only one view, that of the task language, as the input representation, which may lack features, especially for languages classified as low-resource. Therefore, in this paper we propose a framework for semantic relation classification based on a multi-task architecture and cross-lingual-view embedding. The framework has two main stages: data augmentation based on pseudo-parallel corpora, and a multi-task architecture with cross-lingual-view embedding. Extensive experiments on the proposed framework have been conducted. The results show that using a rich-resource language in cross-lingual-view embedding can support low-resource languages, yielding an accuracy of 85.8% and an F1-score of 87.6%. The comparison results also show that the proposed model outperforms the state of the art.

Indexed keywords

Cross-lingual-view embedding, Distributional approach, Multi-task learning, Semantic relation

Funding details

This work was supported by the Ministry of Research, Technology and Higher Education of the Republic of Indonesia under the PMDSU program, which enabled this joint research with Hiroshima University.

DOI

https://doi.org/10.22266/IJIES2020.0630.04
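The abstract only summarizes the architecture, so the following is a minimal, hypothetical sketch of the core idea it describes: embeddings from two language views (a low-resource task language and a rich-resource support language) are concatenated, passed through a shared encoder, and scored by separate heads, one per task. All names, dimensions, and the toy network itself are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
D_VIEW = 8     # per-language-view embedding size
D_HIDDEN = 16  # shared encoder width
N_REL = 4      # head 1: number of semantic relation classes
N_AUX = 2      # head 2: an auxiliary task's label count

# Randomly initialized parameters of a tiny two-head network.
W_shared = rng.normal(size=(2 * D_VIEW, D_HIDDEN))
W_rel = rng.normal(size=(D_HIDDEN, N_REL))
W_aux = rng.normal(size=(D_HIDDEN, N_AUX))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(view_low, view_rich):
    """Concatenate the low- and rich-resource-language views,
    encode once, then score both tasks from the shared state."""
    x = np.concatenate([view_low, view_rich])      # cross-lingual-view input
    h = np.tanh(x @ W_shared)                      # shared encoder
    return softmax(h @ W_rel), softmax(h @ W_aux)  # one output per task

rel_probs, aux_probs = forward(rng.normal(size=D_VIEW),
                               rng.normal(size=D_VIEW))
print(rel_probs.shape, aux_probs.shape)  # (4,) (2,)
```

In a real multi-task setup the shared encoder is trained on the losses of all heads jointly, which is how the auxiliary task and the rich-resource view can regularize the low-resource relation classifier.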