
Scopus ID: 2-s2.0-80054029299


A comparative study of feature ranking methods as dimension reduction technique in Genome-Wide Association Study

Ayuningtyas C.H.a, Putri Saptawati G.A.a, Mengko T.L.E.R.a

a School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Indonesia

Abstract

In recent years, the Genome-Wide Association Study (GWAS) has been performed by many scientists around the world to find associations between the genetic profiles of different individuals and the risk of developing certain diseases. GWAS is performed on Single Nucleotide Polymorphism (SNP) data, which represents the genotypes of two groups of individuals: a case group with the disease and a control group without it. The very high dimensionality of SNP data poses challenges in analyzing GWAS results. This issue can be tackled by feature ranking, which removes non-relevant features to reduce the dimension of the original data. This work compares several feature ranking methods, including the chi-square statistic, information gain, recursive feature elimination, and the Relief algorithm, by analyzing the performance of different learning machines combined with each ranking method. The highest performance is obtained by combining recursive feature elimination with a linear SVM, while the worst performance is shown by the Relief algorithm. The experiments show that the classifiers generally benefit from feature selection, but that the highest-ranked features do not necessarily yield the best classification.

© 2011 IEEE.

Author keywords

Chi-square statistics, Comparative studies, Control groups, Dimension reduction, Dimension reduction techniques, Feature ranking, Genome-wide association, GWAS, High-dimensional, Information gain, Learning machines, Linear SVM, Recursive feature elimination, Relief algorithm, Single-nucleotide polymorphisms, SNP

Indexed keywords

classification, dimension reduction, feature ranking, feature selection, GWAS, SNP

DOI

https://doi.org/10.1109/ICEEI.2011.6021621
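The pipeline the abstract describes (rank SNP features, keep the top-ranked subset, then classify case vs. control) can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic genotype-like data, not the authors' actual dataset or code; it shows only two of the four compared rankers (the chi-square filter and recursive feature elimination with a linear SVM, the combination reported as best-performing), since Relief is not part of scikit-learn.

```python
# Sketch of the abstract's pipeline: rank SNP features, keep the top k,
# then evaluate a classifier on the reduced data.
# Synthetic genotypes coded as minor-allele counts (0/1/2) -- illustrative only.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 1000
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # genotypes
y = rng.integers(0, 2, size=n_samples)  # 1 = case group, 0 = control group

# Filter-style ranking: chi-square statistic per SNP, keep the 50 highest.
chi2_top = SelectKBest(chi2, k=50).fit_transform(X, y)

# Wrapper-style ranking: recursive feature elimination driven by a linear SVM.
rfe = RFE(LinearSVC(dual=False), n_features_to_select=50, step=0.25)
rfe_top = rfe.fit_transform(X, y)

# Compare classifier performance on each reduced feature set.
for name, Xk in [("chi-square", chi2_top), ("RFE + linear SVM", rfe_top)]:
    acc = cross_val_score(LinearSVC(dual=False), Xk, y, cv=5).mean()
    print(f"{name}: {Xk.shape[1]} features, CV accuracy {acc:.2f}")
```

In practice the ranking step should be fitted inside each cross-validation fold (e.g. via a `Pipeline`) to avoid selection bias; it is done on the full data above only to keep the sketch short.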