
Scopus EID: 2-s2.0-85017143440


Classification method for prediction of human activity using stereo camera

Alfuadi R., Mutijarsa K.

Electrical and Information Department, Bandung Institute of Technology, Bandung, Indonesia

Abstract

Human activity can be tracked in order to recognize the type of activity being performed. Human motion recognition is a widely discussed topic within gesture recognition, and activity recognition is one way to recognize human activity. Three basic positions underlie most activities: standing, sitting, and lying down. These three positions can indicate whether a person in a room is performing an activity or not. The Kinect camera can recognize gestures of the human body and classify parts of the body into several skeleton points. This study used three machine-learning classification methods, SVM, MLP, and Naive Bayes, applied to 10 body skeleton joints obtained from the Kinect camera. SVM and MLP performed excellently in activity recognition, achieving accuracies of 99.23% and 98.8%, respectively, while Naive Bayes performed reasonably well at 73.43%. In terms of speed, Naive Bayes was fastest at 0.02 seconds, followed by SVM at 0.52 seconds and MLP at 16.75 seconds. Future work is expected to apply this research to smart home and smart care systems by adding more types of activities.

Author keywords

Activity recognition, Classification methods, Human activities, Human gesture recognition, Human motion recognition, Kinect cameras, Skeleton joints, Stereo cameras

Indexed keywords

Activity recognition, Classification method, Human gesture recognition, Kinect camera

DOI

https://doi.org/10.1109/ISEMANTIC.2016.7873809
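The abstract describes comparing three classifiers (SVM, MLP, Naive Bayes) on features derived from 10 Kinect skeleton joints to distinguish standing, sitting, and lying down. The paper does not specify an implementation, so the following is only a minimal sketch under assumptions: scikit-learn as the library, joint (x, y, z) coordinates flattened into a 30-value feature vector, placeholder random data in place of the study's dataset, and arbitrary hyperparameters.

```python
# Illustrative sketch only: library choice (scikit-learn), feature layout,
# hyperparameters, and the placeholder data are assumptions, not the
# authors' implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical dataset: each sample holds the (x, y, z) coordinates of the
# 10 skeleton joints used in the study, flattened to 30 features; labels
# are the three basic positions: 0 = standing, 1 = sitting, 2 = lying down.
X = np.random.rand(600, 30)        # placeholder joint coordinates
y = np.random.randint(0, 3, 600)   # placeholder activity labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# The three classification methods compared in the paper.
classifiers = {
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000),
    "Naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.4f}")
```

On real skeleton data, this kind of comparison would report both accuracy and training/prediction time per classifier, which is how the paper contrasts the accuracy of SVM and MLP with the speed of Naive Bayes.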