
Scopus ID: 2-s2.0-85090236582


Validity and inter-rater reliability of postural analysis among new raters

Widyanti, A. (a)

(a) Department of Industrial Engineering, Institut Teknologi Bandung, Bandung, Indonesia

Abstract

© 2020. Work posture analysis is crucial for observing and reducing work-related musculoskeletal symptoms in the workplace. However, in developing countries, new raters are commonly assigned to conduct postural analysis to save on cost. This study aims to examine the validity and inter-rater reliability (defined as the degree of agreement among different raters) among new raters of three commonly used work posture analysis methods: Rapid Upper Limb Assessment (RULA), Rapid Entire Body Assessment (REBA), and the Ovako Working Posture Analysis System (OWAS). Fifty industrial engineering students who had received prior training in the use of the methods, divided into five groups, participated voluntarily in this study by observing ten different working postures in five different industries: the tofu, military equipment manufacturing, automotive maintenance and service, cracker, and milk-processing industries. One ergonomics expert also observed the working postures. Validity was assessed from the correlation between the new raters' ratings and the ratings of the ergonomics expert. Inter-rater reliability within each group was calculated using the percentage of agreement and the kappa value. The results show high validity of RULA, REBA, and OWAS among new raters, and the differences in inter-rater reliability of new raters among RULA, REBA, and OWAS are not significant.
The implications of the results are discussed.

Indexed keywords

Inter-rater reliability, New raters, OWAS, REBA, RULA, Validity

DOI

https://doi.org/10.37268/MJPHM/VOL.20/NO.SPECIAL1/ART.707
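The abstract states that inter-rater reliability within each group was calculated as the percentage of agreement and the kappa value. A minimal sketch of both statistics for two raters is given below; the rating data are hypothetical (the paper's actual scores are not reproduced here), and kappa is computed as Cohen's kappa from the two raters' marginal distributions.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of observed postures on which two raters give the same score."""
    assert len(r1) == len(r2) and len(r1) > 0
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement,
    where chance agreement is estimated from each rater's marginal counts."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    categories = set(r1) | set(r2)
    pe = sum(c1[k] * c2[k] for k in categories) / (n * n)  # expected by chance
    if pe == 1.0:
        return 1.0  # degenerate case: both raters use a single category
    return (po - pe) / (1 - pe)

# Hypothetical RULA action-level scores (1-4) from two new raters
# observing the same ten working postures:
rater_a = [2, 3, 3, 4, 2, 1, 3, 4, 2, 3]
rater_b = [2, 3, 4, 4, 2, 1, 3, 3, 2, 3]

print(percent_agreement(rater_a, rater_b))  # 0.8
print(cohens_kappa(rater_a, rater_b))       # about 0.71
```

Kappa is lower than raw percent agreement because it discounts the agreement the two raters would reach by chance given how often each uses each score; values above roughly 0.6 are conventionally read as substantial agreement.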