
Scopus EID: 2-s2.0-84961119874


Automatic extraction phonetically rich and balanced verses for speaker-dependent quranic speech recognition system

Yuwan, R.ᵃ; Lestari, D.P.ᵃ

ᵃ School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung, Indonesia

Abstract

© Springer Science+Business Media Singapore 2016. This paper discusses how to collect phonetically rich and balanced verses as a speech corpus for a Quranic speech recognition system. The Quranic phonology was analyzed based on the qira'a of 'Asim in the riwaya of Hafs in order to transform the Arabic text of the Holy Quran into alphabetical symbols (QScript) that represent all possible sounds produced when the Holy Quran is read. All verses of the Holy Quran were checked to select a verse set that meets the criteria of a phonetically rich and balanced corpus. The selected set contains 180 of the 6,236 verses in the Quran, and the statistical similarity between its phoneme distribution and the phoneme distribution of the whole Quran is 0.9998. To determine the effect of using this corpus, an early speaker-dependent Quranic speech recognition system was developed with CMU Sphinx. MFCC was used for feature extraction, the acoustic model was a tri-phone HMM with 3 emitting states, and the language model was a word-based N-gram. The system was trained on recitations from 3 speakers and obtained a recognition accuracy of 97.47%.

Author keywords

Acoustic model, Automatic speech recognition, Phonetically rich and balanced Quranic corpus, Quran phonology, Statistical language modeling

Indexed keywords

Acoustic model, Phonetically rich and balanced Quranic corpus, Quran phonology, Quranic automatic speech recognition, Statistical language model

DOI

https://doi.org/10.1007/978-981-10-0515-2_5
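
Note: the paper's exact QScript transcription rules and selection algorithm are not given on this page. The following is a minimal sketch, assuming each verse is already available as a list of QScript phoneme symbols, of a greedy verse-selection loop that keeps the selected subset's phoneme distribution close to the whole-corpus distribution; cosine similarity is used here as an illustrative similarity measure and the helper names (phoneme_distribution, select_verses) are hypothetical, not from the paper.

from collections import Counter
from math import sqrt

def phoneme_distribution(verses):
    # Relative frequency of each phoneme symbol over a list of verses,
    # where each verse is a list of QScript phoneme symbols.
    counts = Counter(p for verse in verses for p in verse)
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def cosine_similarity(dist_a, dist_b):
    # Cosine similarity between two phoneme-frequency dictionaries.
    phonemes = set(dist_a) | set(dist_b)
    dot = sum(dist_a.get(p, 0.0) * dist_b.get(p, 0.0) for p in phonemes)
    norm_a = sqrt(sum(v * v for v in dist_a.values()))
    norm_b = sqrt(sum(v * v for v in dist_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_verses(all_verses, target_size=180):
    # Greedily add the verse that best improves the match between the
    # selected subset's phoneme distribution and the whole-corpus one.
    target = phoneme_distribution(all_verses)
    selected, remaining = [], list(all_verses)
    while len(selected) < target_size and remaining:
        best = max(
            remaining,
            key=lambda v: cosine_similarity(
                phoneme_distribution(selected + [v]), target
            ),
        )
        selected.append(best)
        remaining.remove(best)
    return selected, cosine_similarity(phoneme_distribution(selected), target)

This exhaustive greedy pass is quadratic in the number of verses, so over all 6,236 verses it is slow but workable for a one-off corpus-design step; it is only meant to illustrate the "phonetically rich and balanced" selection idea, not to reproduce the reported 0.9998 similarity.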