# Measures of diagnostic agreement


## Two raters: Cohen's kappa

## More than two raters: Fleiss' kappa
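Fleiss' kappa is mentioned but not illustrated below; a minimal sketch with hypothetical ratings from three raters, assuming the `irr` package is installed, could look like this:

```r
library(irr)  # provides kappam.fleiss()

# Hypothetical data: 6 subjects rated "si"/"no" by 3 raters
ratings <- data.frame(
  r1 = c("si", "no", "no", "si", "si", "no"),
  r2 = c("si", "si", "no", "si", "si", "no"),
  r3 = c("si", "no", "no", "si", "no", "no")
)

kappam.fleiss(ratings)  # chance-corrected agreement among all three raters
```

Unlike `kappa2()`, which accepts exactly two columns, `kappam.fleiss()` takes one column per rater and any number of raters.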

## Diagnosis on a discrete nominal scale

library(irr)  # kappa2() and icc() come from the irr package

a <- c("si", "no", "no", "si", "si", "no")
b <- c("si", "si", "si", "si", "si", "no")
df <- data.frame(a, b)
kappa2(df, "unweighted")  # Cohen's kappa without weighting

## Diagnosis on a discrete ordinal scale

a <- c("a", "a", "b", "c", "c", "c")
b <- c("a", "b", "b", "c", "c", "c")
df <- data.frame(a,b)

### a = b and a = c?

kappa2(df, "linear")  # linear weighting

### a = b and a ≠ c

kappa2(df, "squared")  # quadratic weighting

## Diagnosis on a continuous scale

icc(df, model = "twoway", type = "agreement")  # df must contain numeric ratings, one column per rater

* Should only the subjects be treated as random effects (the "oneway" model, the default), or are both subjects and raters randomly sampled from a larger pool ("twoway" model)?
* If differences in the raters' mean ratings are of interest, interrater "agreement" should be computed instead of "consistency" (the default).
* If the unit of analysis is the mean of several ratings, unit should be changed to "average". In most cases, however, single ratings are used (unit = "single", the default).
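Putting those options together, a self-contained sketch with hypothetical numeric scores from two raters, again assuming the `irr` package, could look like this:

```r
library(irr)  # provides icc()

# Hypothetical continuous ratings: 6 subjects scored by 2 raters
scores <- data.frame(
  rater1 = c(9.1, 8.0, 7.5, 6.8, 9.4, 5.9),
  rater2 = c(8.9, 8.2, 7.1, 7.0, 9.6, 6.1)
)

# Raters treated as randomly sampled (twoway), absolute agreement,
# single ratings as the unit of analysis
icc(scores, model = "twoway", type = "agreement", unit = "single")
```

Switching `type` to `"consistency"` would ignore systematic differences between the raters' means, and `unit = "average"` would estimate the reliability of the mean of both ratings rather than of a single rating.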
inferencia_estadistica/acuerdo_diagnostico.txt · Last modified: 2017/12/21 06:00 by