Content Validity Ratio (CVR)

From GM-RKB

A Content Validity Ratio (CVR) is a ratio that measures the content validity of a variable, based on agreement among expert raters that the variable's items are essential.



References

2021

  • (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Content_validity#Measurement Retrieved:2021-12-4.
    • One widely used method of measuring content validity was developed by C. H. Lawshe. It is essentially a method for gauging agreement among raters or judges regarding how essential a particular item is. In an article on pre-employment testing, Lawshe proposed that each of the subject matter expert (SME) raters on the judging panel respond to the following question for each item: "Is the skill or knowledge measured by this item 'essential,' 'useful, but not essential,' or 'not necessary' to the performance of the job?" According to Lawshe, if more than half the panelists indicate that an item is essential, that item has at least some content validity. Greater levels of content validity exist as larger numbers of panelists agree that a particular item is essential. Using these assumptions, Lawshe developed a formula termed the content validity ratio: [math]\displaystyle{ CVR = (n_e - N/2)/(N/2) }[/math] where [math]\displaystyle{ CVR= }[/math] content validity ratio, [math]\displaystyle{ n_e= }[/math] number of SME panelists indicating "essential", and [math]\displaystyle{ N= }[/math] total number of SME panelists. This formula yields values ranging from +1 to -1; positive values indicate that at least half the SMEs rated the item as essential. The mean CVR across items may be used as an indicator of overall test content validity.

      Lawshe provided a table of critical values for the CVR by which a test evaluator could determine, for a pool of SMEs of a given size, the size of a calculated CVR necessary to exceed chance expectation. This table had been calculated for Lawshe by his friend, Lowell Schipper. Close examination of the published table reveals an anomaly: the critical value for the CVR increases monotonically from the case of 40 SMEs (minimum value = .29) to the case of 9 SMEs (minimum value = .78), only to drop unexpectedly at the case of 8 SMEs (minimum value = .75) before hitting its ceiling at the case of 7 SMEs (minimum value = .99). Yet applying the formula to 8 raters, 7 "essential" ratings and 1 other rating yield a CVR of exactly .75; if .75 were not the critical value, then all 8 of 8 raters would need to rate the item essential, yielding a CVR of 1.00. A critical value of 1.00 for 8 raters would, however, itself violate the table's ascending order, since a "perfect" CVR would be required at 8 raters but not at panel sizes either larger or smaller. Whether this departure from the table's otherwise monotonic progression was due to a calculation error on Schipper's part or an error in typing or typesetting is unclear. Wilson and colleagues, seeking to correct the error, found no explanation in Lawshe's writings nor any publication by Schipper describing how the table of critical values was computed. They determined that the Schipper values were close approximations to the normal approximation to the binomial distribution. By comparing Schipper's values to newly calculated binomial values, they also found that Lawshe and Schipper had erroneously labeled their published table as representing a one-tailed test when in fact the values mirrored the binomial values for a two-tailed test.
Wilson and colleagues published a recalculation of critical values for the content validity ratio providing critical values in unit steps at multiple alpha levels.
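The formula and the critical-value discussion above can be sketched in Python. This is a minimal illustration using an exact two-tailed binomial test (null hypothesis: each rater marks "essential" with probability 1/2), not Wilson and colleagues' exact procedure; the function names and the default alpha of .05 are assumptions for the example.

```python
from math import comb

def cvr(n_essential, n_panelists):
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

def critical_cvr(n_panelists, alpha=0.05):
    """Smallest CVR that exceeds chance under an exact two-tailed
    binomial test with p = 1/2. Returns None if even unanimous
    agreement is not significant for this panel size."""
    for n_e in range(n_panelists // 2 + 1, n_panelists + 1):
        # Upper-tail probability P(X >= n_e) for X ~ Binomial(N, 1/2)
        tail = sum(comb(n_panelists, k)
                   for k in range(n_e, n_panelists + 1)) / 2 ** n_panelists
        if tail <= alpha / 2:  # two-tailed test at level alpha
            return cvr(n_e, n_panelists)
    return None

# 7 of 8 raters marking an item essential gives the CVR of .75
# discussed above:
print(cvr(7, 8))                   # 0.75
print(critical_cvr(8))             # 1.0
print(round(critical_cvr(9), 2))   # 0.78
```

Under these assumptions the exact binomial calculation reproduces Schipper's .78 for 9 SMEs, while for 8 SMEs only unanimous agreement (CVR = 1.00) clears a two-tailed .05 criterion, which is consistent with the anomaly the passage describes at the 8-rater entry.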