- Title
- Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability
- Creator
- Catts, Stanley V.; Frost, Aaron D. J.; O'Toole, Brian I.; Carr, Vaughan J.; Lewin, Terry; Neil, Amanda L.; Harris, Meredith G.; Evans, Russell W.; Crissman, Belinda R.; Eadie, Kathy
- Relation
- Australian and New Zealand Journal of Psychiatry, Vol. 45, Issue 1, pp. 63-75
- Publisher Link
- http://dx.doi.org/10.3109/00048674.2010.524621
- Publisher
- Informa Healthcare
- Resource Type
- journal article
- Date
- 2011
- Description
- Clinical practice improvement carried out in a quality assurance framework relies on routinely collected data using clinical indicators. Herein we describe the development, minimum training requirements, and inter-rater agreement of indicators that were used in an Australian multi-site evaluation of the effectiveness of early psychosis (EP) teams. Surveys of clinician opinion and face-to-face consensus-building meetings were used to select and conceptually define indicators. Operationalization of the definitions was achieved by iterative refinement until clinicians could be quickly trained to code the indicators reliably. Percentage agreement with expert consensus coding was calculated from ratings of paper-based clinical vignettes embedded in a 2-hour clinician training package. Consensually agreed conceptual definitions for seven clinical indicators judged most relevant to evaluating EP teams were operationalized for ease of training. Brief training enabled typical clinicians to code the indicators with acceptable percentage agreement (60% to 86%). For indicators of suicide risk, psychosocial function, and family functioning, this level of agreement was possible only with less precise ‘broad range’ expert consensus scores. Estimated kappa values indicated fair to good inter-rater reliability (kappa > 0.65). Inspection of contingency tables (coding category by health service) and modal scores suggested consistent, unbiased coding across services. Clinicians are able to agree on what information is essential to routinely evaluate clinical practice. Simple indicators of this information can be designed, and the coding rules can be applied reliably to written vignettes after brief training. The real-world feasibility of the indicators remains to be tested in field trials.
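The reliability figures in the abstract rest on two standard statistics: simple percentage agreement and Cohen's kappa. The sketch below is not taken from the article; the ratings, category codes, and function names are hypothetical, and it only illustrates how these two measures are conventionally computed for paired categorical codes.

```python
# Minimal sketch (hypothetical data, not from the study): percentage agreement
# and Cohen's kappa for two sets of categorical indicator codes.
from collections import Counter

def percentage_agreement(rater_a, rater_b):
    """Proportion of items on which the two raters assign the same code."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percentage_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Expected agreement if both raters coded independently at their
    # observed marginal rates.
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical indicator codes (e.g. 0 = low, 1 = moderate, 2 = high risk)
clinician = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
expert    = [0, 1, 2, 2, 0, 2, 1, 0, 0, 2]
print(f"agreement = {percentage_agreement(clinician, expert):.0%}")
print(f"kappa     = {cohens_kappa(clinician, expert):.2f}")
```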
- Subject
- programme evaluation; first episode; schizophrenia; quality; practice improvement
- Identifier
- http://hdl.handle.net/1959.13/1036392
- Identifier
- uon:13271
- Identifier
- ISSN:0004-8674
- Language
- eng
- Reviewed