
Accountability of specialist child and adolescent mental health services

Published online by Cambridge University Press:  02 January 2018

Elena M. Garralda*
Affiliation:
Academic Unit of Child and Adolescent Psychiatry, Imperial College London, St Mary's Campus, Norfolk Place, London W2 1PG, UK. Email: e.garralda@imperial.ac.uk

Summary

Outcome auditing of specialist child and adolescent mental health services (CAMHS) is now well under way internationally. There is, however, debate about objectives and tools. A case is made for the achievable goal of enhancing service accountability through user satisfaction information and clinician-rated contextualised measures of improvements in symptoms and impairment.

Type
Editorials
Copyright © Royal College of Psychiatrists, 2009

The increasing costs of healthcare have heightened the importance of assessing efficacy and service cost-effectiveness. Over the past two decades there has been a drive by health commissioning agencies to promote service auditing and the measurement of health outcomes. This is especially important for comparatively new and evolving services such as child and adolescent mental health services (CAMHS), which still need to make the case for their raison d'être in low- and middle-income countries lacking mental health policies specifically relevant to children and adolescents.1

The audit imperative and CAMHS

In high-income countries, CAMHS have risen to the challenge of outcome measurement. A major breakthrough was the development of a dedicated measure, the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA), which addresses both symptom improvement and reduced impairment following specialist CAMHS use.2 This measure has been thoroughly researched internationally and found to be fit for purpose and user-friendly, a good proxy measure for diagnosis, valid for use by specialist and in-patient CAMHS working within a multidisciplinary framework, with excellent national and international interrater reliability, and congruent with parent and referrer outcome ratings.3,4 It is sensitive to change and, when complemented by parental and referrer satisfaction scores, provides a comprehensive outcome summary of CAMHS use.3 It has documented substantial improvement in children's symptoms and impairment following CAMHS use and, in the process, has provided average change scores that can be used as yardsticks to compare performance across units. Alongside HoNOSCA, other instruments with established validity, reliability and sensitivity to change, such as generic parent-completed epidemiological screening questionnaires and clinician-reported impairment scales (e.g. the Strengths and Difficulties Questionnaire and the Children's Global Assessment Scale), have gained popularity among CAMHS, as have a variety of disorder-specific instruments.3,5

Beyond CAMHS audit tools

The availability of adequate tools is only one step towards outcome measurement. Implementation needs to take account of the practice and policy context and the interlocking influences of government initiatives. In the UK these include New Ways of Working, which aims to enable all clinicians to extend their roles and work effectively in teams, thus making outcome measurement widely relevant across different professions (www.newwaysofworking.org.uk), and quality assurance mechanisms such as the Quality Improvement Network for Multi-agency CAMHS and the Quality Network for In-Patient CAMHS, which develop and apply standards for specialist CAMHS, including outcome measurement, through a system of self- and external peer review (www.rcpsych.ac.uk/clinicalservicestandards/centreforqualityimprovement.aspx).

As CAMHS outcome measurement becomes more widespread5,6 and service purchaser requirements become more explicit, renewed attention has focused on the actual purpose and objectives of outcome measurement, and on the advantages and disadvantages of ‘dedicated’ outcome measures compared with ‘all-purpose’ screening instruments, which are more likely to take a dimensional approach not driven by the presence of symptoms or disorders. The use of generic as opposed to disorder-specific measures, which may be more appropriate for specialist clinics such as those for children with obsessive–compulsive disorder, has also been debated, and the extent to which existing outcome measures are efficient for children with intellectual disabilities needs to be tested further.7 There are also differences of opinion about the appropriateness of relying primarily on clinicians, as opposed to users (parent, child, teacher and referrer), as symptom reporters for outcome measurement.

Furthermore, a number of important implementation issues have arisen, including the best approach to documenting outcomes for children and young people seen for assessment only, for those whose management and/or treatment extends over many months or years, for work that primarily addresses parental and family concerns rather than psychopathology in the child, and for work done in bridging posts (‘tier two’ CAMHS) between specialist CAMHS and primary care. This has resulted in variation in the measures recommended and implemented across services. Although research projects and audits in more self-contained services such as in-patient units obtain good returns and, therefore, reliable results,3,8 implementation in routine clinical practice tends to be marred by small returns.

There is, nevertheless, a move afoot to introduce uniformity to the outcome measurement process in CAMHS as in other health services. What audit objectives and tools are appropriate for this task? Three possible alternatives – measuring service efficacy, league tables, and enhancing service and clinician accountability – will be addressed here.

Possible objectives of CAMHS audit

Measuring service efficacy

An approach under consideration would involve measuring symptom change following CAMHS use in a particular service or area and contrasting it against expected changes over time in a comparable non-referred population. In effect, the objective here is the measurement of CAMHS efficacy, which belongs more in the realm of hypothesis-driven research than audit. This requires a rigorous research design, careful sample and instrument selection and description, and multiple measures of clinical change, taking into account the range of possible clinical and attitudinal confounding factors influencing referral and outcomes. It demands substantially higher return rates and greater analytic expertise than may be expected from clinical audit, as well as some knowledge of the interventions provided. The development of a single tool able to audit change meaningfully across different primary and specialist services seems, moreover, implausible. Epidemiological research comparing referred and non-referred samples has generally failed to show differences in outcome and has highlighted the methodological flaws inherent in this approach.9

League tables

A different objective for CAMHS outcome audit would be issuing league tables to guide service purchasers and prospective patients. Nevertheless, high and acceptable data returns are again a central requirement of league tables. Even if fully representative data were obtainable and clinical improvement were judged against published standards, account would still need to be taken of confounding contextual or complexity factors, not least initial problem severity, since higher initial symptom scores generally predict greater change and improvement. As an illustration of the influence of complexity, reduced HoNOSCA change and improvement3,10 has been reported in children with intellectual disability attending generic out-patient CAMHS when compared with other attenders – possibly suggesting that it would be desirable to develop specialist CAMHS with a special remit in these areas – but not in pre-adolescent in-patient psychiatric units, which may be more attuned to their needs. Similarly, parental attitudes towards CAMHS contact have been found not to predict outcome in the community, but they do predict outcome in in-patient units.3,8

Service accountability

If measuring service efficacy and the use of league tables are, on current evidence, unrealistic and premature goals, an achievable objective of outcome auditing is to enhance service accountability. This is intrinsic to the audit process and deliverable, provided that the expectations of users and clinicians are realistic and the process is adequately supported administratively and technically. It can be met by: (a) obtaining information on user satisfaction; and (b) reporting symptoms and impairment at clinic intake and discharge, together with brief measures of context and case complexity, as well as of the service use process.

The appropriateness of enquiring about user satisfaction is self-evident and, moreover, applicable across services with different levels of care, whether primary, bridging/tier two or specialist CAMHS. Although a small and biased response rate is to be expected, the onus would be on services to show that: (a) all eligible users over a defined and uniform period of time have been approached; (b) returns are consistent with those of other units with comparable clienteles; and (c) acceptable user satisfaction levels have been obtained in line with published comparable data.3

What about symptom change? Who should be entrusted with this: clinicians, service users or referrers? The leading consideration here is which procedure is most likely to obtain fuller returns and more representative information. For specialist generic CAMHS, there is much to be said for this being primarily a task for the clinician. First, although it is unrealistic to expect high, representative return rates from parents and referrers, the same does not apply to clinicians, provided – and this cannot be overemphasised – that the demands on clinician training and time are minimal and that appropriate information technology and administrative support is available. Second, CAMHS clinician accounts represent a professionally informed summary statement of problems as reported by different informants such as parents, children, teachers and clinicians, and are therefore preferable to reports from single informants; administratively, this is also a more parsimonious process than obtaining and numerically aggregating three (parent, child, teacher) or more individual reports. Third, the use of appropriate composite measures, covering the range of symptoms seen in specialist care, can help ensure a degree of uniformity in both intake and outcome data collection among clinicians from different backgrounds and contribute towards a sense of both personal and collective accountability for service outcomes. Fourth, dedicated, user-friendly and quick-to-complete clinician CAMHS measures are available, with good validity and interrater reliability, as well as congruence with parental and referrer reporting of symptoms and/or symptom change.

Clinical service effectiveness

Ultimately, of course, outcome auditing provides only a small snapshot of clinical effectiveness and service quality; the latter will depend to a large extent on the availability of good clinical assessment and management skills, and on the use of well-implemented evidence-based treatments within an adequately administered and managed service. Auditing outcomes can nevertheless contribute to enhancing service accountability through the technically and administratively supported acquisition of contextualised information on user satisfaction and symptom/impairment change.

References

1 Skuse, D. Child and adolescent psychiatry services in low- and middle-income countries. Int Psychiatry 2008; 5: 80–1.
2 Gowers, S, Bailey-Rogers, SJ, Shore, A, Levine, W. The Health of the Nation Outcome Scales for Child and Adolescent Mental Health (HoNOSCA). Child Psychol Psychiatry Rev 2000; 5: 50–6.
3 Garralda, ME, Yates, P, Higginson, I. Child and adolescent mental health service use: HoNOSCA as an outcome measure. Br J Psychiatry 2000; 177: 52–8.
4 Hanssen-Bauer, K, Gowers, S, Aalen, OO, Bilenberg, N, Brann, P, Garralda, ME, et al. Cross-national reliability of clinician-rated outcome measures in child and adolescent mental health services. Adm Policy Ment Health 2007; 34: 513–8.
5 Johnston, C, Gowers, S. Routine outcome measurement: a survey of UK child and adolescent mental health services. Child Adolesc Ment Health 2005; 10: 133–9.
6 Ford, T, Tingay, K, Wolpert, M. CORC's survey of routine outcome monitoring and national CAMHS dataset developments: a response to Johnston and Gowers. Child Adolesc Ment Health 2006; 11: 50–2.
7 Lee, W, Jones, L, Goodman, R, Heyman, I. Broad outcome measures may underestimate effectiveness: an instrument comparison study. Child Adolesc Ment Health 2005; 10: 143–4.
8 Garralda, ME, Rose, G, Dawson, R. Measuring outcomes in a child psychiatry in-patient unit. J Children's Services 2009; in press.
9 Zwaanswijk, M, Verhaak, PFM, van der Ende, J, Bensing, JM, Verhulst, FC. Change in children's emotional and behavioural problems over a one-year period. Eur Child Adolesc Psychiatry 2006; 15: 127–31.
10 Andrade, AR, Lambert, W, Bickman, L. Dose effect in child psychotherapy: outcomes associated with negligible treatment. J Am Acad Child Adolesc Psychiatry 2000; 39: 161–8.