Table 1

Kappa values evaluating inter-rater reliability (agreement) between human coders and supervised three-topic classification models

Column headers give the train percentage; each cell is the mean kappa (± standard deviation).

| Topics  | 0.2          | 0.4          | 0.6          | 0.8          |
|---------|--------------|--------------|--------------|--------------|
| 1, 2, 4 | 0.67 (±0.01) | 0.70 (±0.02) | 0.68 (±0.03) | 0.70 (±0.04) |
| 1, 3, 5 | 0.66 (±0.02) | 0.66 (±0.02) | 0.67 (±0.05) | 0.67 (±0.06) |
| 1, 4, 5 | 0.67 (±0.02) | 0.71 (±0.02) | 0.70 (±0.02) | 0.72 (±0.02) |
| 2, 4, 5 | 0.67 (±0.04) | 0.69 (±0.03) | 0.71 (±0.03) | 0.72 (±0.06) |
| 2, 5, 7 | 0.68 (±0.02) | 0.70 (±0.01) | 0.71 (±0.02) | 0.70 (±0.02) |
  • Topic labels are: (1) distrust, (2) media messaging, (3) trusted sources of information, (4) personal medical concerns, (5) family concerns, (6) societal concerns, (7) barriers to recommendations, (8) no worries, (9) other.
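For context, the following is a minimal sketch of how kappa values like those in Table 1 can be produced: train a supervised classifier on a fraction of the human-coded responses, predict labels for the held-out remainder, and compute Cohen's kappa between the model's predictions and the human codes. This is an illustration under stated assumptions, not the authors' pipeline; the TF-IDF/logistic-regression classifier and the input names (`texts`, `human_labels`) are hypothetical placeholders.

```python
# Sketch only: the classifier choice and data-loading names are assumptions,
# not the method used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def kappa_for_train_fraction(texts, human_labels, train_frac, seed=0):
    """Train on `train_frac` of the coded data; return Cohen's kappa
    between model predictions and human codes on the held-out rest."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, human_labels, train_size=train_frac,
        stratify=human_labels, random_state=seed)
    model = make_pipeline(TfidfVectorizer(),
                          LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    return cohen_kappa_score(y_test, model.predict(X_test))

# Hypothetical usage over the train percentages shown in Table 1;
# repeating with different seeds would yield the mean ± SD per cell.
# for frac in (0.2, 0.4, 0.6, 0.8):
#     print(frac, kappa_for_train_fraction(texts, human_labels, frac))
```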