Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability

Title Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability
Author Catts, Stanley V.; Frost, Aaron D. J.; O'Toole, Brian I.; Carr, Vaughan J.; Lewin, Terry; Neil, Amanda L.; Harris, Meredith G.; Evans, Russell W.; Crissman, Belinda Rochelle; Eadie, Kathy
Journal Name Australian and New Zealand Journal of Psychiatry
Year Published 2011
Place of publication United Kingdom
Publisher Informa Healthcare
Abstract
Aim: Clinical practice improvement carried out in a quality assurance framework relies on routinely collected data using clinical indicators. Herein we describe the development, minimum training requirements, and inter-rater agreement of indicators that were used in an Australian multi-site evaluation of the effectiveness of early psychosis (EP) teams.
Methods: Surveys of clinician opinion and face-to-face consensus-building meetings were used to select and conceptually define indicators. Operationalization of definitions was achieved by iterative refinement until clinicians could be quickly trained to code indicators reliably. Calculation of percentage agreement with expert consensus coding was based on ratings of paper-based clinical vignettes embedded in a 2-h clinician training package.
Results: Consensually agreed-upon conceptual definitions for seven clinical indicators judged most relevant to evaluating EP teams were operationalized for ease of training. Brief training enabled typical clinicians to code indicators with acceptable percentage agreement (60% to 86%). For indicators of suicide risk, psychosocial function, and family functioning, this level of agreement was only possible with less precise ‘broad range’ expert consensus scores. Estimated kappa values indicated fair to good inter-rater reliability (kappa > 0.65). Inspection of contingency tables (coding category by health service) and modal scores across services suggested consistent, unbiased coding across services.
Conclusions: Clinicians are able to agree upon what information is essential to routinely evaluate clinical practice. Simple indicators of this information can be designed, and coding rules can be reliably applied to written vignettes after brief training. The real-world feasibility of the indicators remains to be tested in field trials.
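The abstract reports chance-corrected agreement as estimated kappa values alongside raw percentage agreement. As an illustrative aside only (not the authors' code, data, or exact estimation procedure), the Python sketch below shows the standard Cohen's kappa computation for one rater's codes scored against expert consensus codes; the category labels and ratings are hypothetical.

    # Illustrative sketch: Cohen's kappa for two sets of categorical codes.
    # All names and data here are hypothetical, for demonstration only.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters' codes of the same items."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed agreement: proportion of items coded identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement from each rater's marginal category frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical vignette codes: clinician ratings vs. expert consensus.
    clinician = ["low", "high", "high", "low", "medium", "high", "low", "medium"]
    expert    = ["low", "high", "medium", "low", "medium", "high", "low", "high"]
    print(f"kappa = {cohens_kappa(clinician, expert):.2f}")  # prints kappa = 0.62

Here raw percentage agreement is 75%, but kappa is lower (about 0.62) because some of that agreement would be expected by chance given the raters' marginal frequencies; this is why the abstract reports kappa in addition to percentage agreement.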
Peer Reviewed Yes
Published Yes
Volume 45
Issue Number 1
Page from 63
Page to 75
ISSN 0004-8674
Date Accessioned 2011-07-06
Date Available 2015-06-01T23:35:36Z
Language en_US
Research Centre Key Centre for Ethics, Law, Justice and Governance
Faculty Arts, Education and Law
Subject Health, Clinical and Counselling Psychology
Publication Type Journal Articles (Refereed Article)
Publication Type Code c1
