
Inter-rater Reliability of Ward Rating Scales

Published online by Cambridge University Press: 29 January 2018

John N. Hall*
Affiliation: University Department of Psychiatry, 15 Hyde Terrace, Leeds, LS2 9LT

Extract

Psychiatrists, psychologists, and nursing staff are increasingly making direct observations and ratings of ward behaviour. Characteristically, a nurse may be asked to complete a multi-item rating scale on a group of patients during the course of a drug trial. Several factors bear on the choice of an appropriate scale for a particular purpose. Among them are the number of points per item, which determines the item's sensitivity to change, and the total number of items in the scale, which affects the time taken to complete it and hence the frequency of rating that can be permitted in an assessment schedule.
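Inter-rater agreement on such categorical ratings is conventionally indexed by Cohen's kappa (Cohen, 1960), which corrects raw percentage agreement for the agreement expected by chance alone. The following Python sketch is purely illustrative and is not taken from the paper; the two nurses' ratings are hypothetical data.

```python
# Minimal sketch: Cohen's (1960) kappa for two raters scoring the same
# patients on one rating-scale item. All ratings below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over nominal categories."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)

    # Observed proportion of exact agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_obs - p_exp) / (1 - p_exp)

# Two nurses rate ten patients on a hypothetical 3-point item (0, 1, 2).
nurse_1 = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
nurse_2 = [0, 1, 2, 0, 0, 2, 1, 2, 0, 2]
print(f"kappa = {cohens_kappa(nurse_1, nurse_2):.2f}")  # kappa = 0.71
```

For multi-point ordinal items of the kind discussed here, weighted kappa (Cohen, 1968) extends this idea by giving partial credit to near-misses rather than counting only exact agreement.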

Type: Research Article
Copyright © Royal College of Psychiatrists, 1974

