
Evidence-based suggestions on inter-rater reliability


Quirkdecay509


 

Hello,

I figured this was a good place to ask a question I don't necessarily want to bring to my PI, at least not quite yet.

I am conducting an inductive thematic analysis (a qualitative study) in which 3 raters assign statements from a large body of text to different themes or categories. The plan is to identify the statements on which 2 or all 3 raters agree, weight those matches, and then compare that level of agreement against the ratings where the raters did not correspond.
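To make that concrete, here is a rough sketch of the kind of match-counting I have in mind, assuming for the moment that each rater assigns exactly one theme per utterance (the rater names, utterances, and theme labels are all invented for illustration):

```python
from collections import Counter

# Hypothetical data: each rater assigns one theme label per utterance.
ratings = {
    "rater_1": ["coping", "support", "coping", "stigma", "support"],
    "rater_2": ["coping", "support", "stigma", "stigma", "coping"],
    "rater_3": ["coping", "coping", "coping", "stigma", "support"],
}

# For each utterance, record how many raters gave the most common label:
# 3 = unanimous, 2 = majority agreement, 1 = no agreement at all.
agreement_levels = []
for labels in zip(*ratings.values()):
    most_common_count = Counter(labels).most_common(1)[0][1]
    agreement_levels.append(most_common_count)

print(Counter(agreement_levels))  # e.g. Counter({2: 3, 3: 2}) for the data above
```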

What is the best way to calculate IRR or agreement among raters in this kind of study? My PI has mentioned that she is familiar with intra-class correlations. I've also researched Cohen's kappa, but I'm not sure which is appropriate.
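In case it helps show where my thinking is, here is a rough sketch of how I picture computing pairwise Cohen's kappa for the 3 raters, again with made-up data and again assuming one theme per utterance; I'm using scikit-learn's cohen_kappa_score, which as I understand it is defined only for pairs of raters:

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Made-up labels: one theme per utterance per rater.
ratings = {
    "rater_1": ["coping", "support", "coping", "stigma", "support"],
    "rater_2": ["coping", "support", "stigma", "stigma", "coping"],
    "rater_3": ["coping", "coping", "coping", "stigma", "support"],
}

# Cohen's kappa compares two raters at a time, so compute it for each pair;
# whether and how to summarize across pairs is part of what I'm asking about.
for (name_a, labels_a), (name_b, labels_b) in combinations(ratings.items(), 2):
    kappa = cohen_kappa_score(labels_a, labels_b)
    print(f"{name_a} vs {name_b}: kappa = {kappa:.2f}")
```

I'm not sure whether this pairwise approach (or ICC, or something else entirely) is the right fit for nominal theme assignments, which is really the heart of my question.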

Also, what would be the best way to handle the following: if a rater assigned the same utterance to more than one theme (believing both themes applied), how should agreement be handled, given that there are then duplicate ratings for a single utterance?

If you have any empirical references in support of your suggestion(s), please let me know.

Thanks for your help!

An anonymous master's student.
