
Table 2 Intra- and inter-rater agreement

From: Single leg squat performance in physically and non-physically active individuals: a cross-sectional study

 

|   | Weighted kappa coefficient | 95% CI, lower limit | 95% CI, upper limit | Standard error |
|---|---|---|---|---|
| INTER-RATER RELIABILITY among six raters |   |   |   |   |
| Generalized weighted kappa – Time point 0 | 0.34 | 0.28 | 0.41 | 0.04 |
| Generalized weighted kappa – Time point 1 | 0.31 | 0.23 | 0.38 | 0.04 |
| Generalized weighted kappa – Time point 2 | 0.30 | 0.23 | 0.37 | 0.04 |
| INTER-RATER RELIABILITY between each pair of raters |   |   |   |   |
| Rater 1 vs Rater 2 | 0.57 | 0.40 | 0.74 | 0.09 |
| Rater 1 vs Rater 3 | 0.53 | 0.35 | 0.70 | 0.09 |
| Rater 1 vs Rater 4 | 0.49 | 0.33 | 0.66 | 0.09 |
| Rater 1 vs Rater 5 | 0.48 | 0.28 | 0.67 | 0.10 |
| Rater 1 vs Rater 6 | 0.41 | 0.26 | 0.57 | 0.08 |
| Rater 2 vs Rater 3 | 0.61 | 0.46 | 0.76 | 0.08 |
| Rater 2 vs Rater 4 | 0.55 | 0.40 | 0.70 | 0.08 |
| Rater 2 vs Rater 5 | 0.31 | 0.13 | 0.49 | 0.09 |
| Rater 2 vs Rater 6 | 0.24 | 0.09 | 0.39 | 0.08 |
| Rater 3 vs Rater 4 | 0.44 | 0.28 | 0.60 | 0.08 |
| Rater 3 vs Rater 5 | 0.30 | 0.13 | 0.47 | 0.09 |
| Rater 3 vs Rater 6 | 0.31 | 0.16 | 0.46 | 0.08 |
| Rater 4 vs Rater 5 | 0.35 | 0.18 | 0.53 | 0.09 |
| Rater 4 vs Rater 6 | 0.47 | 0.31 | 0.63 | 0.08 |
| Rater 5 vs Rater 6 | 0.46 | 0.30 | 0.62 | 0.08 |
| INTRA-RATER RELIABILITY |   |   |   |   |
| Rater 1 |   |   |   |   |
| Trial 1 vs trial 2 | 0.58 | 0.39 | 0.76 | 0.09 |
| Trial 1 vs trial 3 | 0.68 | 0.51 | 0.86 | 0.09 |
| Trial 2 vs trial 3 | 0.58 | 0.40 | 0.76 | 0.09 |
| Generalized weighted kappa | 0.61 | 0.51 | 0.72 | 0.05 |
| Rater 2 |   |   |   |   |
| Trial 1 vs trial 2 | 0.90^a | 0.81 | 0.98 | 0.04 |
| Trial 1 vs trial 3 | 0.85^a | 0.75 | 0.96 | 0.05 |
| Trial 2 vs trial 3 | 0.88^a | 0.78 | 0.97 | 0.05 |
| Generalized weighted kappa | 0.85^a | 0.79 | 0.92 | 0.03 |
| Rater 3 |   |   |   |   |
| Trial 1 vs trial 2 | 0.56 | 0.39 | 0.72 | 0.08 |
| Trial 1 vs trial 3 | 0.50 | 0.34 | 0.66 | 0.08 |
| Trial 2 vs trial 3 | 0.77^a | 0.64 | 0.90 | 0.07 |
| Generalized weighted kappa | 0.55 | 0.44 | 0.65 | 0.05 |
| Rater 4 |   |   |   |   |
| Trial 1 vs trial 2 | 0.77^a | 0.66 | 0.89 | 0.06 |
| Trial 1 vs trial 3 | 0.90^a | 0.82 | 0.99 | 0.04 |
| Trial 2 vs trial 3 | 0.83^a | 0.73 | 0.94 | 0.05 |
| Generalized weighted kappa | 0.80^a | 0.73 | 0.87 | 0.04 |
| Rater 5 |   |   |   |   |
| Trial 1 vs trial 2 | 0.73 | 0.58 | 0.77 | 0.07 |
| Trial 1 vs trial 3 | 0.71 | 0.57 | 0.86 | 0.07 |
| Trial 2 vs trial 3 | 0.78^a | 0.64 | 0.92 | 0.07 |
| Generalized weighted kappa | 0.70 | 0.61 | 0.79 | 0.05 |
| Rater 6 |   |   |   |   |
| Trial 1 vs trial 2 | 0.86^a | 0.78 | 0.95 | 0.05 |
| Trial 1 vs trial 3 | 0.81^a | 0.70 | 0.93 | 0.06 |
| Trial 2 vs trial 3 | 0.91^a | 0.82 | 1.00 | 0.05 |
| Generalized weighted kappa | 0.83^a | 0.77 | 0.90 | 0.03 |

  1. The weighted kappa scores were interpreted as follows: 81% and higher, excellent agreement; 61% to 80%, substantial agreement; 41% to 60%, moderate agreement; below 40%, poor to fair agreement [Landis JR, Koch GG. A one-way components of variance model for categorical data. Biometrics 1977].
  2. ^a Intra-rater reliability of ordinal scale measures considered adequate for clinical use (≥ 0.75) [Portney and Watkins, 2009. Foundations of clinical research: applications to practice. 3rd edn. Upper Saddle River, NJ: Pearson/Prentice Hall].
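
For readers who wish to reproduce this kind of agreement analysis, the sketch below is a minimal Python example, not the study's analysis code: it computes a pairwise linearly weighted kappa with scikit-learn's `cohen_kappa_score` and maps the result to the agreement categories from footnote 1. The rater names and ordinal scores are hypothetical, and the multi-rater "generalized weighted kappa" reported in the table would require a dedicated multi-rater estimator that this pairwise sketch does not cover.

```python
# Minimal sketch (not the study's analysis code): pairwise linearly weighted kappa
# for two raters scoring the same trials on an ordinal scale, interpreted with the
# categories from footnote 1 (Landis and Koch, 1977).
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal single-leg-squat ratings (e.g. 0 = good, 1 = fair, 2 = poor)
# assigned by two raters to the same ten trials.
rater_1 = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
rater_2 = [0, 1, 2, 2, 0, 1, 1, 2, 1, 1]

kappa = cohen_kappa_score(rater_1, rater_2, weights="linear")

def agreement_label(k: float) -> str:
    """Map a weighted kappa value to the categories used in footnote 1."""
    if k >= 0.81:
        return "excellent agreement"
    if k >= 0.61:
        return "substantial agreement"
    if k >= 0.41:
        return "moderate agreement"
    return "poor to fair agreement"

print(f"Weighted kappa = {kappa:.2f} ({agreement_label(kappa)})")
```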