Facial Features Extraction Based on Distance and Area of Points for Expression Recognition
Rusydi M.I.a, Hadelina R.a, Samuel O.W.b, Setiawan A.W.c, MacHbub C.c
a Universitas Andalas Padang, Department of Electrical Engineering, Faculty of Engineering, Indonesia
b Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
c School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung, Indonesia
Abstract

© 2019 IEEE. Facial expression is a means of non-verbal communication from which an individual's emotional state can be decoded. Facial expression recognition has been applied in various fields and has become an increasingly active research area in recent years. A critically important aspect of facial expression recognition is the feature extraction process. Hence, this paper presents a new facial feature extraction method for expression detection. The proposed method computes the distances and areas formed by two or three facial points provided by Kinect v2; these computations yield the facial features. Features that can potentially distinguish the happiness, disgust, surprise, and anger expressions are then selected. The extraction process produced a total of 6 facial features from the 12 points located around the mouth, eyebrows, and cheeks. The facial features were then applied as inputs to an artificial neural network model built for expression prediction.
The overall results show that the proposed method achieved a 75% success rate in correctly predicting the participants' expressions.

Author keywords

Artificial neural network modeling, face, Facial expression recognition, Facial feature extraction, Kinect, Non-verbal communication, recognition

Indexed keywords

face, Kinect, recognition

DOI

https://doi.org/10.1109/ACIRS.2019.8936005
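The abstract describes features built from the distance between two facial points and the area spanned by three. A minimal sketch of those two computations is below; the landmark names and coordinates are illustrative assumptions, not the paper's actual 12 Kinect v2 points or pairings:

```python
import math

def euclidean_distance(p, q):
    """Straight-line distance between two 3-D facial points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def triangle_area(p, q, r):
    """Area of the triangle spanned by three 3-D facial points:
    half the magnitude of the cross product of two edge vectors."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * math.sqrt(sum(c * c for c in cross))

# Hypothetical landmark coordinates (placeholders, not real Kinect data):
mouth_left = (0.0, 0.0, 0.0)
mouth_right = (3.0, 4.0, 0.0)
brow_inner = (0.0, 3.0, 0.0)

mouth_width = euclidean_distance(mouth_left, mouth_right)
mouth_brow_area = triangle_area(mouth_left, mouth_right, brow_inner)
print(mouth_width, mouth_brow_area)
```

Features of this kind are then concatenated into a fixed-length vector and fed to the neural network classifier.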