Parallelized k-means clustering by exploiting instruction level parallelism at low occupancy
Prahara A. (a), Ismi D.P. (a), Kistijantoro A.I. (b), Khodra M.L. (b)
a Department of Informatics, Faculty of Industrial Technology, Universitas Ahmad Dahlan, Indonesia
b Department of Informatics, School of Electrical Engineering and Informatics, ITB, Bandung, Indonesia
Abstract
© 2017 IEEE. Clustering is a technique for grouping data into a defined number of clusters, and k-means is the most well-known and widely used clustering algorithm. As data volumes grow, the need for high performance computing (HPC) to perform data clustering is rising. One solution with a modest budget but high efficiency is to utilize the highly parallel architecture of the Graphics Processing Unit (GPU). In this research, the k-means clustering algorithm is implemented on the GPU and optimized by exploiting instruction level parallelism (ILP) at low occupancy. ILP in k-means is achieved by issuing a number of independent instructions per thread, e.g. when calculating distances or summing the data in each cluster. By loading more work into each thread at lower occupancy, higher utilization can be achieved. Experiments on clustering several datasets show that the proposed method runs k-means several times faster than other parallelized k-means implementations and a CPU implementation of k-means.

Author keywords
CUDA, Graphics Processing Unit (GPU), High performance computing (HPC), Instruction level parallelism, K-means clustering, K-means clustering algorithm, low occupancy, Number of clusters

Indexed keywords
CUDA, instruction level parallelism, k-means clustering, low occupancy, parallel computing

DOI
https://doi.org/10.1109/ICITISEE.2017.8285516
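To illustrate the idea described in the abstract, the following is a minimal CUDA sketch of the label-assignment step only, in which each thread processes several independent points so that their distance computations can overlap at low occupancy. The kernel name, POINTS_PER_THREAD value, problem sizes, and launch configuration are illustrative assumptions, not the authors' implementation; the centroid-update (per-cluster summation) step is not shown.

#include <cuda_runtime.h>
#include <cfloat>

#define POINTS_PER_THREAD 4   // independent work items per thread (source of ILP)

__global__ void assignLabelsILP(const float *data, const float *centroids,
                                int *labels, int n, int dim, int k)
{
    // Each thread owns POINTS_PER_THREAD consecutive points, so their distance
    // computations are independent and can be kept in flight simultaneously.
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * POINTS_PER_THREAD;
    for (int p = 0; p < POINTS_PER_THREAD; ++p) {
        int i = base + p;
        if (i >= n) return;                        // remaining points are also out of range
        float bestDist = FLT_MAX;
        int bestCluster = 0;
        for (int c = 0; c < k; ++c) {
            float dist = 0.0f;
            for (int d = 0; d < dim; ++d) {
                float diff = data[i * dim + d] - centroids[c * dim + d];
                dist += diff * diff;               // squared Euclidean distance
            }
            if (dist < bestDist) { bestDist = dist; bestCluster = c; }
        }
        labels[i] = bestCluster;
    }
}

int main()
{
    const int n = 1 << 16, dim = 8, k = 4;         // toy sizes for illustration
    float *d_data, *d_centroids; int *d_labels;
    cudaMalloc(&d_data, n * dim * sizeof(float));
    cudaMalloc(&d_centroids, k * dim * sizeof(float));
    cudaMalloc(&d_labels, n * sizeof(int));
    cudaMemset(d_data, 0, n * dim * sizeof(float));        // placeholder data
    cudaMemset(d_centroids, 0, k * dim * sizeof(float));   // placeholder centroids

    // Fewer, "fatter" threads: the grid shrinks by POINTS_PER_THREAD, trading
    // occupancy for more independent instructions per thread.
    int threadsPerBlock = 128;
    int threadsNeeded = (n + POINTS_PER_THREAD - 1) / POINTS_PER_THREAD;
    int blocks = (threadsNeeded + threadsPerBlock - 1) / threadsPerBlock;
    assignLabelsILP<<<blocks, threadsPerBlock>>>(d_data, d_centroids, d_labels, n, dim, k);
    cudaDeviceSynchronize();

    cudaFree(d_data); cudaFree(d_centroids); cudaFree(d_labels);
    return 0;
}

In this sketch the launch uses n / POINTS_PER_THREAD threads instead of n, so occupancy drops while each thread carries several independent distance computations, which is the trade-off the paper describes.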