
Evidence-based mental health policy: A critical appraisal

Published online by Cambridge University Press:  02 January 2018

Brian Cooper*
Affiliation:
Section of Old Age Psychiatry, Institute of Psychiatry, London SE5 8AF, UK. E-mail: spjubco@iop.kcl.ac.uk

Abstract

Background

Arguments for and against evidence-based psychiatry have mostly centred on its value for clinical practice and teaching. Now, however, use of the same paradigm in evaluating health care has generated new problems.

Aims

To outline the development of evidence-based health care; to summarise the main critiques of this approach; to review the evidence now being employed to evaluate mental health care; and to consider how the evidence base might be improved.

Method

The following sources were monitored: publications on evidence-based psychiatry and health care since 1990; reports of randomised trials and meta-analytic reviews to the end of 2002; and official British publications on mental health policy.

Results

Although evidence-based health care is now being promulgated as a rational basis for mental health planning in Britain, its contributions to service evaluation have been distinctly modest. Only 10% of clinical trials and meta-analyses have been focused on effectiveness of services, and many reviews proved inconclusive.

Conclusions

The current evidence-based approach is overly reliant on meta-analytic reviews, and is more applicable to specific treatments than to the care agencies that control their delivery. A much broader evidence base is called for, extending to studies in primary health care and the evaluation of preventive techniques.

Type
Review article
Copyright
Copyright © 2003 The Royal College of Psychiatrists 

In the ongoing debate over evidence-based psychiatry, most of the arguments pro and contra have centred around the relevance of this strategy for clinical practice and teaching. In Britain, however, analogous concepts are now being applied to service planning, as part of the government initiative to move health care from a system founded in clinical knowledge and authority to one based on systematic research (Higgit & Fonagy, 2002). Thus, the National Service Framework for Mental Health (Department of Health, 1999) draws on a ‘synthesis of evidence’ from research findings, rated on a five-point ordinal scale according to their inferential power (see Appendix). A closer scrutiny of the evidence in question seems due, and should comprehend clinical trial results as well as service evaluations, since the former are now being used increasingly to control population access to new forms of treatment via the National Health Service (NHS) approval and purchasing systems (McKee & Clarke, 1995).

The aims of this review are fourfold: first, to outline the development of the ‘evidence-based’ project; second, to summarise growing clinical and social criticism of this approach; third, to examine the research evidence on which British mental health policy currently relies; and finally, to consider how this evidence base might in future be improved, in terms of balance and coverage. To achieve these aims the following sources were monitored:

  (a) publications on evidence-based medicine, psychiatry and health care listed on the main international databases or contained in specialist journals since 1990;

  (b) randomised controlled trials, and meta-analytic reviews of such trials, reported in the journal Evidence-Based Mental Health, the Cochrane Library review abstracts or British electronic databases, up to the end of 2002;

  (c) British official publications concerned with mental health policy and planning.

DEVELOPMENT OF THE CONCEPT

Although evidence-based medicine (EBM) grew out of ‘clinical epidemiology’ (Sackett et al, 1985), the emphasis was placed at first on clinical rather than public health issues. Evidence-based medicine was characterised as ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’ (Sackett et al, 1996), and since ‘evidence’ is here taken to mean the results of systematic research, as opposed to expert opinion, this definition, stripped of its question-begging adjectives, comes down to the use of research findings as a basis for individual case management. The recommended procedure consists of four distinct steps:

  (a) formulation of a clear medical question, based on the presenting condition;

  (b) search of the medical literature for relevant evidence;

  (c) critical appraisal of such evidence;

  (d) application of the evidence deemed valid and useful, in order to reach a clinical decision (Rosenberg & Donald, 1995).
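For readers who find a schematic helpful, the four steps can be expressed as a simple, purely illustrative pipeline. The Python sketch below is not part of any published EBM tool: the function names and the toy in-memory ‘library’ are assumptions introduced solely for illustration, and a real search would of course query Medline, the Cochrane Library or similar databases.

```python
# Illustrative sketch of the four-step evidence-based procedure described above.
# All names and the toy evidence records are hypothetical; a real workflow would
# query bibliographic databases rather than an in-memory list.
from dataclasses import dataclass

@dataclass
class Evidence:
    citation: str
    design: str        # e.g. "RCT", "cohort", "expert opinion"
    relevant: bool     # does it address the clinical question?
    valid: bool        # judged methodologically sound on appraisal

def formulate_question(condition: str, intervention: str, outcome: str) -> str:
    # Step (a): a clear, answerable question built from the presenting problem
    return f"In patients with {condition}, does {intervention} improve {outcome}?"

def search_literature(question: str, library: list) -> list:
    # Step (b): retrieve candidate evidence (here, a toy in-memory 'library')
    return [e for e in library if e.relevant]

def appraise(candidates: list) -> list:
    # Step (c): critical appraisal - keep only studies judged valid, preferring
    # stronger designs
    order = {"RCT": 0, "cohort": 1, "expert opinion": 2}
    return sorted((e for e in candidates if e.valid),
                  key=lambda e: order.get(e.design, 3))

def decide(question: str, appraised: list) -> str:
    # Step (d): apply the best surviving evidence; clinical judgement still
    # mediates the final choice
    if not appraised:
        return f"No valid evidence found for: {question}"
    best = appraised[0]
    return f"Decision for '{question}' informed by {best.design}: {best.citation}"

library = [
    Evidence("Trial A (1998)", "RCT", relevant=True, valid=True),
    Evidence("Case series B (1995)", "expert opinion", relevant=True, valid=False),
]
q = formulate_question("major depression", "cognitive-behavioural therapy", "remission")
print(decide(q, appraise(search_literature(q, library))))
```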

Soon, however, the original aim of guidance in the individual case was widened to include other objectives. As Sackett (1995) explained, ‘We call the new approach evidence-based medicine when applied by individual clinicians to individual patients, and evidence-based health care when applied by public health professionals... to groups of patients and populations’. Systematic research, in other words, should help to determine both individual treatment and the broad range of health care provision within which illnesses are diagnosed and treatments delivered. This expansion of aims brings the ‘evidence-based’ project into direct contact with epidemiology and health services research, but at the same time raises new problems.

The enlarged concept was, in fact, already embodied in the international Cochrane Collaboration, a principal database for EBM, which was set up to prepare, maintain and disseminate systematic, updated reviews of randomised controlled trials of health care, or (when these were not available) reviews of the most reliable evidence from other sources (Chalmers, 1993). In the new era of the World Wide Web the Collaboration won rapid acclaim, being hailed by one commentator as an enterprise that rivalled the human genome project in its potential implications for modern medicine (Naylor, 1995). Within 10 years its membership grew to more than 6000 researchers in 15 countries around the world, with groups covering some 50 medical topics (http://www.update-software.com/collaboration/), while in Britain its reviews were recognised as an official part of the government's research and development programme on health care.

In psychiatry the evidence-based paradigm had immediate appeal. No branch of medicine has had greater experience of new remedies being greeted with enthusiasm, only to be abandoned later as ineffective or even disastrous. Psychiatrists, it was argued, need such a discipline to ensure that their clinical decisions are based upon accurate, up-to-date information (Geddes & Harrison, 1997). The case was urged for a register of randomised controlled trials (Adams & Gelder, 1994). The American Psychiatric Association (1994) claimed that their revised classification system, DSM–IV, had been developed along EBM lines; the Royal College of Psychiatrists introduced a ‘critical review of evidence’ paper into their membership examination (Brown & Wilkinson, 2000), and a new journal, Evidence-Based Mental Health, was launched. Evidence-based psychiatry had, in a word, arrived.

CLINICAL AND SOCIAL CRITICISM

Criticism in psychiatry, as elsewhere in medicine, has been directed less at the aims and aspirations of EBM than at the means proposed for achieving them and the claims made by some of its advocates. Most concern has been voiced by clinicians, worried about the implied downgrading of their experience and skills (Berk & Janet, 1999; Williams & Garner, 2002), and has been based on four main arguments.

First, EBM has been apostrophised as ‘old French wine with a new Canadian label’ (Rangachari, 1997), since even in the 19th century the fact-gathering method pioneered in France by Pierre Louis was influencing leading medical teachers in Europe and North America. Randomised, double-blind controlled trials were introduced into clinical medicine in the late 1940s, and were soon taken up in psychiatry, where they were used to evaluate phenothiazines and antidepressants, and led to the abandonment of deep insulin coma therapy (Tansella, 2002). Viewed in historical perspective, the only new development is an ability to gather and collate published research findings quickly by means of the internet.

Second, the EBM paradigm is seen as oversimplifying complex problems and offering only limited help in the grey zones of medicine, where scientific evidence is incomplete or conflicting (Naylor, 1995). Application to the individual patient should always be mediated by clinical judgement, but here the EBM literature provides little guidance (Williams & Garner, 2002).

Third, in adopting the randomised controlled trial as a ‘gold standard’, common failings in the application of this method were disregarded. The more carefully patients are selected, the less readily can clinical trial results be generalised to populations. Trial periods are usually short, and outcome measures may be highly artificial (Thornley & Adams, 1999). Sponsorship of drug trials by the pharmaceutical industry may introduce selective bias in publication (Angell, 2000). Pragmatic trials, in which ‘real life’ questions are addressed in ‘real life’ settings, could address some of these concerns (Hotopf et al, 1999), but the current trend is in the opposite direction: towards multi-centre trials run by contract research and site management organisations, in which the participating clinician has neither a clear overview of patient outcomes, nor any control over data analysis and reporting (Bodenheimer, 2000). In the wider health care context, randomised clinical trials may in any case have a less central role. Their usefulness in evaluating socially complex interventions has been questioned (Wolff, 2001), and it seems hardly feasible that they should be applied to all innovations in health care (Feinstein, 1995). In practice, service evaluation may call for a hierarchy of different research methods, from simple medical auditing upwards.

Fourth, the EBM approach is overly reliant on meta-analytic reviews, which are likely to prove reliable only where trials are similar in design, sampling, treatment regimen and outcome measures (Egger et al, 1998). Moreover, meta-analytic reviews cannot by themselves promote original research or open up new avenues, and may actually divert attention from the search for causal factors. This technique, in the words of Feinstein (1995), ‘concentrates on a part of the scientific domain that is already reasonably well lit, while ignoring the much larger domain that lies either in darkness or in deceptive glitters.’
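To make the objection concrete, the sketch below shows a minimal fixed-effect (inverse-variance) pooling of hypothetical trial results, together with Cochran's Q and the I² statistic that quantify the between-trial heterogeneity on which the reliability of a pooled estimate depends. The effect sizes are invented for illustration and are not drawn from any review discussed here.

```python
# Minimal illustrative fixed-effect meta-analysis (inverse-variance pooling).
# The effect sizes and standard errors below are invented for illustration;
# they do not come from any of the reviews discussed in the text.
import math

# (effect estimate, standard error) for each hypothetical trial,
# e.g. standardised mean differences
trials = [(-0.40, 0.15), (-0.10, 0.20), (-0.55, 0.25), (0.05, 0.18)]

weights = [1.0 / se ** 2 for _, se in trials]          # inverse-variance weights
pooled = sum(w * y for (y, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Cochran's Q and the I^2 statistic quantify between-trial heterogeneity:
# a high I^2 warns that a single pooled estimate may be misleading.
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(trials, weights))
df = len(trials) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")
print(f"Cochran's Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

When the trials differ in design, sampling or outcome measures, the heterogeneity statistics rise and the apparent precision of the pooled estimate becomes, in Egger et al's phrase, spurious.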

Many of these objections would be met if EBM ceased to be reified, and its useful content simply became absorbed into good clinical practice and teaching. However, as soon as one moves from individual treatment to what Sackett calls ‘evidence-based health care’ (EBHC), the questions extend to policy issues. Thus, in Britain the National Institute for Clinical Excellence (Rawlins, 1999) relies on systematic reviewing to decide which new therapies shall be made available under the NHS, but has neither powers to supervise international drug company research, nor resources to conduct independent trials.

These are basically issues in gauging effectiveness. One must remember, however, that effectiveness is not the only criterion of health care. Cochrane himself acknowledged this, albeit in a rather muddled fashion: ‘In my brief assessment of the preventive and therapeutic side of the NHS,’ he declared, ‘I have used effectiveness and efficiency as my yardsticks. These are not really applicable to the “care” side, so I have chosen “equality” as my main yardstick in this area, although it does of course, apply to the therapy as well’ (Cochrane, 1972: p. 70).

The question of ‘equality’ – meaning equity in health care provision – was dealt with cursorily in Cochrane's monograph, and has been largely ignored by the Cochrane Collaboration. This failure to address the project's underlying ethos has also attracted criticism. Tudor Hart (1997), for example, has pointed out that current notions of EBHC appear to be based on a utilitarian model in which health is regarded as a commodity, health care as a mode of production of value, measurable in purely economic terms, and health care provision (including medical consultation) as a set of transactions based on market relationships between ‘purchasers’ (or commissioners), ‘providers’ and ‘consumers’. Services based on such a desocialised transactional model will tend to become increasingly bureaucratic, and to subordinate patients' needs to managerial goals.

PRESENT STATUS OF THE MENTAL HEALTH EVIDENCE BASE

These various critiques cannot yet be countered by a demonstration of practical value, since, as Higgit & Fonagy (2002) have pointed out, there is surprisingly little information on the benefits to clinical work of an evidence-based perspective. In respect of health services research one can, however, assess how far the reviewing system has provided clear guidance on the relative effectiveness of different service structures and strategies of care. Information on the British scene can be obtained from the Cochrane Collaboration, the journal Evidence-Based Mental Health and a number of other sources.

At the end of 2002, five of the Cochrane Collaboration's 49 cross-disciplinary review groups were focused on the mental health field (schizophrenia; depression, anxiety and neurosis; drugs and alcohol; dementia and cognitive impairment; developmental, psychosocial and learning problems). A sixth group, considering tobacco addiction, is relevant in that it deals with a condition included in the standard psychiatric classifications, but in practice concentrates on smoking cessation or prevention, and the public health implications more generally. The five central groups had completed a total of 166 reviews of mental health interventions, based as far as possible on systematic meta-analysis of published randomised or quasi-random controlled trials, and a further 101 were in progress. Of the 166 completed reviews, 148 (89.2%) dealt with relatively specific treatment methods or other interventions, directed at individuals or in some instances nuclear families (pharmacotherapy alone 114; physical techniques 6; psychological therapies 23; other methods 5), and the remaining 18 (10.8%) looked at different forms of service provision for populations, and were thus directly relevant to the concept of EBHC.

Closer examination of the second category, as summarised on the Cochrane Library website (www.update-software.com/abstracts), shows that the 18 completed reviews (15 of which were conducted by the Schizophrenia Group) were concerned with various aspects of specialist care provision, mostly for people with severe mental illness, and compared various forms of innovative care with what the summaries usually refer to as ‘standard care’ (Table 1). In five of these reviews no study was found that met the inclusion criteria, and hence no conclusion could be drawn; a further eight found on analysis no difference in outcome between trial and comparison groups; and five reported significant advantages for the trial groups. Evidence-Based Mental Health provides a similar picture: of a total of 96 meta-analytic reviews summarised in the 20 quarterly issues to the end of 2002, the great majority (88) focused on clinical trials of medical or psychological treatments, and only 8 looked at aspects of care provision, with varying conclusions.

Table 1 Cochrane Collaboration reviews of randomised controlled trials (RCTs) of mental health care programmes (n=18)

| Type of service under review | Comparison group | Trials included (n) | Outcome measures | Assessment of effectiveness |
| --- | --- | --- | --- | --- |
| Assertive community treatment for severe mental disorders | (1) Standard community care; (2) hospital-based rehabilitation; (3) case management | Unstated | Accommodation status; employment; patient satisfaction; mental state; social function | Trial groups generally better on administrative measures; no clinical difference |
| Case management for severe mental disorders | Standard community care | Unstated | Continuing contact; hospital admissions; clinical and social status; costs | No significant advantage found |
| Community mental health teams for severe mental illness with personality disorders | Non-team standard care | Unstated | Admission rates; in-patient stay duration; overall clinical outcome, etc. | Advantage in treatment acceptance; possibly reduced admissions and suicide risk |
| Crisis intervention for severe mental illness | Standard care | 5 | Mental state; hospital admissions; family burden | Small advantages to home care, but no conclusion possible on crisis intervention model |
| Day centres for severe mental illness | Standard care | 0 | Clinical status; burden on carers | No relevant RCT found; no conclusion |
| Day hospital v. out-patient care for psychiatric disorder | Psychiatric out-patient treatment | Unstated | Clinical status; social functioning; relative costs | Day treatment superior in one trial only; otherwise no difference found |
| Planned short hospital stay for severe mental illness | Long stay or standard care | 5 | Successful discharge; readmissions | No disadvantage found to short-stay policy |
| Life skills programmes for chronic mental illnesses | Standard care | 2 | Social disability measures (unspecified) | Data sparse; no clear effect demonstrated |
| Patient-held clinical information for people with psychotic illness | Standard care | 0 | Unstated | No study meeting inclusion criteria; no conclusion |
| Prompts to encourage appointment attendance by people with serious mental illness | Standard appointment management | 3 | Attendance records | In two trials, text-based prompts increased attendance; telephone prompts ineffective |
| Reducing seclusion and restraint in serious mental illness | Standard care | 0 | Prevention or reduction of aggression | No study met inclusion criteria; no conclusion |
| Supported housing for people with severe mental disorders | Outreach support or standard care | 0 | Mental status, social functioning, etc. | No study met inclusion criteria; no conclusion |
| Token economy for schizophrenia | Standard care | 3 | Change in negative symptoms | No usable data on target symptoms or behaviour |
| Pre-vocational training and supported employment for those with severe mental illness | Standard care | 18 | Employment records | Supported employment more effective than pre-vocational training or standard care |
| Substance misuse treatment for people with severe mental illness | Standard care | 6 | Relapse rates, violence, social functioning, etc. | No clear advantage for any treatment programme |
| Primary prevention for alcohol misuse in young people | Existing school and youth services | 56 | ‘Alcohol outcomes’, including related violence and crime | Inconclusive: no evidence of short- or medium-term effects |
| Multi-disciplinary team care for delirium in elderly people with cognitive impairment | Usual care | 0 | Prolonged stay, complications, etc. | No study met inclusion criteria; no conclusion |
| Day care for preschool children | No preschool day care | 82 | Behavioural development, school achievement, etc. | Beneficial effects on child development and school success |

Meta-analytic reviews thus appear to have contributed relatively little to service evaluation in this field, their findings to date projecting mostly onto the original, clinical concept of EBM rather than directly onto EBHC. This impression is confirmed by other relevant websites, such as those of the Centre for Evidence-based Mental Health at Oxford (www.cebmh.com), the NHS Centre for Reviews and Dissemination at York (www.york.ac.uk/inst/crd/ebhc.htm) and Health Bulletin Wales (hebw.uwcm.ac.uk/mental/chapter5.htm), which digest and annotate the review evidence in user-friendly fashion. The most influential conclusions to date are those that stress the advantages of assertive community treatment and community mental health teams in the care of those with severe mental illness, although these effects may diminish as the comparator of ‘standard care’ itself changes (Burns & Catty, 2002).

A neglected issue concerns the national origins of health service research included in the systematic reviews, since conflicting results may emerge from countries with different health care infrastructures. This information cannot be derived for the above-cited reviews from their publicly available summaries, but one can obtain an overview from individual studies reported in Evidence-Based Mental Health, which provides good coverage of the field. The quarterly issues to the end of 2002 contained reports on 181 intervention trials, of which 152 (84%) dealt with treatments targeted on individuals or nuclear families, and 29 (16%) with health care provision and delivery. Twenty-nine of the former group (19.1%) and 8 of the latter group (27.6%) were conducted by researchers in the UK, compared with 123 (80.9%) and 21 (72.4%) respectively in other countries – predominantly the USA. The question of how well mental health care models travel is thus highly relevant (Burns, 2000).

REDEFINING THE EVIDENCE BASE

With continuing government support these imbalances might be corrected and the contribution of health services research increased. In addition, however, the evidence base needs to be extended in two directions.

Population-based evaluative research

The effectiveness of a mental health service, like that of any health service, should be gauged by how successfully it meets relevant needs for care in the population it serves. Thirty years ago, Wing (1972) summed up this concept of evaluation in the following set of questions.

  (a) How many people are in contact with the existing mental health services, and what are the trends in contact?

  (b) What are the care needs of these people and their families?

  (c) Are the existing services meeting these needs effectively?

  (d) How many other people not in contact with services have similar needs?

  (e) What service innovations are required to cater for the unmet needs?

  (f) When service changes are introduced, do they in fact reduce unmet needs?

The first of these questions calls for improved service statistics, the second for standardised instruments to assess individuals' care needs and the third for reliable outcome measures – all now official concerns (Wing et al, 1998; Slade et al, 1999; Department of Health, 2001). To answer the last three questions, however, one requires information on the numbers and types of unreferred cases present in the area population. In terms of the well-known model of Goldberg & Huxley (1992), the current evidence base is derived mainly from levels 4 and 5 (specialist referral, admission and treatment), and draws little on levels 1–3 (population morbidity, primary care contacts and general practitioner diagnosis). One report noted that expenditure on primary care research in general accounted for only 7% of the health service research and development budget, and that the research base of most primary care professions was minimal (Campbell et al, 1999).
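The point can be illustrated with a toy calculation in the spirit of the Goldberg & Huxley pathway levels. The starting prevalence and the filter pass-rates below are hypothetical round numbers chosen for illustration, not estimates taken from their monograph.

```python
# Illustrative filter model in the spirit of Goldberg & Huxley's pathway levels.
# The community prevalence and filter pass-rates are hypothetical round numbers.
levels = [
    ("Level 1: morbidity in the community", 1.00),
    ("Level 2: attend primary care",        0.80),   # proportion passing each filter
    ("Level 3: recognised by the GP",       0.50),
    ("Level 4: referred to specialist services", 0.15),
    ("Level 5: admitted to in-patient care",     0.25),
]

n = 250.0   # hypothetical cases per 1000 population per year
for label, pass_rate in levels:
    n *= pass_rate
    print(f"{label:45s} {n:6.1f} per 1000/year")
# Most evidence cited in policy documents comes from the bottom of this funnel
# (levels 4-5), i.e. a small, heavily filtered fraction of all cases.
```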

Evidence bearing on prevention

Primary care research greatly widens the net, but by itself is still insufficient. If it is to provide guidance on public health policy as well as on individual treatment, the evidence base must include data on untreated cases, on variations in morbidity with the strength of suspected risk exposures, and on the scope for preventive action afforded by both high-risk group and whole population strategies (Rose, 1985).

The National Service Framework (Department of Health, 1999) considers preventive action only under the heading of ‘mental health promotion’, a term it largely equates with measures aimed at reducing susceptibility and increasing resilience, whether by educational, medical or psychosocial programmes, both in high-risk groups (e.g. ethnic minority groups, and people who are long-term unemployed, homeless, substance misusers or prisoners) and in people at a vulnerable stage (e.g. pregnant women and preschool children). Here the randomised trial might be the appropriate strategy, provided members of the defined target group can be identified and assessed, an intervention package has been assembled and outcome measures are available.

Over the past two decades testing of this approach on a number of diverse target groups has indicated that community support, particularly when based on schools or primary health care teams, can be effective (NHS Centre for Reviews and Dissemination, 1997). Table 2, which summarises more recent British studies focused on people exposed to defined traumatic or stressful life events, supports this broad conclusion. Whereas single-session ‘psychological debriefing’ appeared to be ineffective or even harmful, time-limited psychosocial support and training proved beneficial in three of these projects. The same methodology has been applied to programmes both of secondary prevention based on early diagnosis and treatment (Lewis et al, 2002), and of tertiary prevention based on containment of long-term disability (Wykes et al, 1998).

Table 2 Prevention of psychiatric disorder in defined high-risk groups: randomised controlled trials in the UK, 1997-2002

| Study | Risk group | Sample size (n) | Intervention package | Follow-up period | Outcome measures | Results |
| --- | --- | --- | --- | --- | --- | --- |
| Proudfoot et al (1997) | Long-term unemployment | 289 | CBT | 3–4 months | GHQ, job-finding success | 34% v. 13% in full-time work (P<0.001); slight increase in GHQ score change |
| Bisson et al (1997) | Burn trauma victims | 133 | Psychological debriefing | 13 months | PTSD frequency | 26% v. 9% diagnosis-positive at follow-up; trend to worse functional outcome |
| Rose et al (1999) | Violent crime victims | 157 | ‘Education’ and psychological debriefing | 11 months | PTSD frequency, IES, BDI | No inter-group difference in diagnosis or score changes (variance and covariance analyses) |
| Mayou et al (2000) | Road traffic accident victims | 106 | Psychological debriefing | 3 years | IES, BSI | Significantly worse outcome for intervention group on both scales (P<0.05) |
| Elliott et al (2000) | Vulnerable pregnant women | 99 | Antenatal preparation groups | 3 months postnatally | EPDS, PSE | Intervention reduced depression among first-time mothers, on both EPDS (P<0.01) and PSE (P<0.05) |
| Brugha et al (2000) | Vulnerable women in first pregnancy | 292 | Antenatal preparation groups | 3 months postnatally | EPDS, GHQ, SCAN | No significant difference in depression on any scale |
| MacArthur et al (2002) | Pregnant women | 36 general practices (2064 women) | Community postnatal care | 4 months postnatally | EPDS, symptom checklist | Intervention significantly reduced depression risk: EPDS case rating, odds ratio 0.57 |

Interventions aimed not at reducing vulnerability in defined subgroups, but rather at reducing risk exposures across whole populations, constitute a more radical approach (Rose, 1985). Here comparisons must be made between contrasting areas or service populations, and as a rule randomisation will not be feasible. If an environmental risk factor is known to have relatively specific toxic or other harmful properties, observational studies alone might be considered sufficient evidence to justify decisions on public health policy. A case in point is the gradual accumulation of evidence from field studies to show that raised body lead levels caused by environmental pollution can affect children's cognitive abilities and behaviour, even at blood concentrations below 10 μg/dl (Lamphear et al, 2000): findings which since the 1980s have led in Western countries to protective measures, including the removal of lead additives from petrol (Royal Commission on Environmental Pollution, 1983). The potential economic gains from reduction of children's exposure to lead have been estimated at from $110 billion to $139 billion annually in the USA (Grosse et al, 2002).

Risk factors in the social environment are less specific, and their psychiatric sequelae may include delayed long-term or even intergenerational effects. In this context, prevention is more likely to occur as a byproduct of measures to reduce physical disease and disability, as one result of health promotion programmes, or as a consequence of socio-economic reforms targeted on, for example, unemployment, poverty, social deprivation and ethnic conflicts.

Unemployment provides a good example, having consistently emerged as a major risk factor for suicide. In the 1970s and 1980s, growth of mass unemployment in western Europe went hand in hand with a rising frequency of male suicide, especially in younger age groups. Data for the 14-year period 1974–1988 show a median increase in male suicide rates across European countries of 42%, and a median rank correlation with the preceding year's unemployment rates of 0.86 (Pritchard, 1992). In England and Wales unemployed status at the 1981 national census was predictive of suicide in the following decade, with an odds ratio of 2.6 (95% CI 2.0–3.4; Lewis & Sloggett, 1998), while cohort studies in Sweden (Johansson & Sundqvist, 1997), Italy (Preti & Miotto, 1999) and the USA (Kpsowa, 2001) likewise reported a doubling or trebling of suicide hazard among unemployed people of both genders. Suicidal behaviour forms part of a broad spectrum of health risks associated with unemployment (Wadsworth et al, 1999; Bartley & Plewis, 2002), and although the main impetus for national job creation schemes is bound to be economic rather than medical, it is important that, as and when such programmes are implemented, their effects on both mental and physical health should be monitored.
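For readers unfamiliar with the arithmetic, the sketch below shows how an odds ratio and its 95% confidence interval of this kind are derived from a 2×2 exposure–outcome table, using the standard Woolf approximation for the standard error of the log odds ratio. The cell counts are hypothetical and are not taken from the record-linkage study cited above.

```python
# How an odds ratio such as the 2.6 (95% CI 2.0-3.4) quoted above is derived
# from a 2x2 exposure-outcome table. The cell counts here are hypothetical and
# not taken from Lewis & Sloggett's record-linkage data.
import math

a, b = 120, 49_880      # unemployed: suicides, no suicide
c, d = 400, 449_600     # employed:   suicides, no suicide

odds_ratio = (a * d) / (b * c)
# Standard error of log(OR) via the usual Woolf formula
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```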

On the scale used in the National Service Framework to grade effectiveness, observational studies cannot be rated higher than IV and non-randomised intervention studies higher than III. A complementary scale for grading evidence, appropriate for use in the preventive field, could be based on principles of epidemiology and not necessarily rely on meta-analytic reviews (see Appendix).
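As a purely illustrative restatement of that constraint, the short sketch below maps study designs onto the highest National Service Framework evidence type they can attain (see Appendix); it is not an official scoring instrument, and the design labels are simplified.

```python
# Illustration of the point above: under the National Service Framework scale
# (see Appendix), a study's design caps the evidence grade it can attain.
# This mapping simply restates that scale; it is not an official scoring tool.
NSF_CEILING = {
    "systematic review incl. >=1 RCT": "Type I",
    "randomised controlled trial":     "Type II",
    "non-randomised intervention":     "Type III",
    "observational study":             "Type IV",
    "expert opinion":                  "Type V",
}

def highest_possible_grade(design: str) -> str:
    """Return the best NSF evidence type a study of this design can reach."""
    return NSF_CEILING.get(design, "unclassified")

for design in ("observational study", "non-randomised intervention"):
    print(f"{design}: at best {highest_possible_grade(design)}")
```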

DISCUSSION

Evidence-based medicine represents in essence a rebirth of the ‘numerical’ method, controversy over whose relative merits vis-à-vis clinical medicine and laboratory science stretches back over nearly two centuries (Swales, 2000). Today this debate seems to us sterile – the real need being, as Swales remarks, for a synthesis based upon respect for all three methodologies. Clearly, if the ‘evidence-based’ approach is to realise its full potential as an integral component of medical practice and teaching, ways must be found to assess therapeutic effectiveness that use more pragmatic terms of reference, while at the same time maintaining independence from commercial and political pressures. However, even if one accepts that in future EBM will indeed rest on firmer foundations, should then the same paradigm be applied to the evaluation of health care provision? Is EBHC, in other words, primarily a convenient tool for politicians and managers, or can it be made to serve as a heuristically useful concept?

Questions of this kind are directly relevant to mental health research and policy. Organised psychiatry is naturally keen to attract a good share of health care resources to the speciality, and has learned how to play the political game. It has been pointed out that the most important way in which psychiatrists are likely to influence government policies is by committing themselves to health services research, and that governments today are influenced by well-designed clinical trials, particularly those concerned with cost-effectiveness (Kendell, 1999). As a simple statement of realpolitik this can hardly be faulted. As a summary of the case for evidence-based psychiatry, however, it is seriously incomplete. Certainly, our professional bodies should seek to influence political decisions, but not only to secure better resources for the speciality. Research here, as in other fields of medicine, has to address matters of public health: to find out how many and which people in a population are suffering from psychiatric disorders, to establish where they are most thickly congregated, to identify the pathogenic forces that damage people's mental health or hold back their recovery and to demonstrate how these can be mitigated. Should not all this be seen as a part – perhaps even the most important part – of the evidence needed for rational mental health policies?

Official awareness of such concerns has found expression in the setting up of a National Institute for Mental Health in England, which ‘will ensure the development of evidence-based mental health services and take fully into account the wider issues of social inclusion and the development of the communities in which people live and work’ (Department of Health, 2001). Recognition of the problem is in itself an encouraging development, but the practical consequences must be awaited. Advances are now required on three fronts.

First, research methodology. The national information strategy has already yielded both a National Survey of Patients and a second National Psychiatric Morbidity Survey (Department of Health, 1999). Descriptive prevalence surveys are, however, of low predictive power and provide as a rule no more than type IV evidence (see Appendix). Ensuring that mental health policies for the 21st century are better informed will call for research that is hypothesis-driven and employs controlled analytic designs. In the health care field, randomised controlled trials can sometimes be based on clusters, for example of practice patients (Crudace et al, 2003), whereas in other situations non-randomised case–control, cohort or area comparative studies may be appropriate.
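As an illustration of the cluster design, the sketch below allocates whole general practices, rather than individual patients, to the trial arms. The practice names and list sizes are invented, and the closing comment notes why the analysis must allow for within-practice clustering.

```python
# Sketch of cluster randomisation: whole general practices, rather than
# individual patients, are allocated to the trial arms. Practice names and
# list sizes are hypothetical.
import random

practices = {                       # practice -> number of registered patients
    "Practice A": 5200, "Practice B": 7400, "Practice C": 3100,
    "Practice D": 6800, "Practice E": 4500, "Practice F": 5900,
}

rng = random.Random(42)             # fixed seed so the allocation is reproducible
names = sorted(practices)
rng.shuffle(names)
half = len(names) // 2
arms = {"intervention": names[:half], "control": names[half:]}

for arm, members in arms.items():
    patients = sum(practices[p] for p in members)
    print(f"{arm:12s}: {', '.join(sorted(members))}  ({patients} patients)")

# Because patients within a practice are more alike than patients in different
# practices, the analysis must allow for this clustering (e.g. by inflating the
# required sample size by a design effect).
```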

Second, the administrative network. Effective cooperation between academic departments, health authorities and funding bodies will be necessary to implement the research agenda, to collate and disseminate the resulting evidence, and to incorporate it into professional standards and training. Government policy-making in the mental health field can at times seem uncoordinated and confusing, and there is a question as to how far it will be improved by creating more semi-autonomous bodies with no clear connections to the existing system (Lelliott, 2002). Given this background, it seems crucial that the role and authority of the newly created National Institute should be clearly defined.

Finally, research and legal coercion. Politicians and service planners may continue to promulgate evidence-based health care as a tool of cost-effectiveness, yet under pressure quickly turn to coercive measures for which there is no firm evidence. A striking case in point is that of recent government proposals to extend powers of compulsory detention and treatment over ‘high-risk patients’ (Department of Health, 2000), which have aroused professional concern about their untested human and financial resource implications and are regarded by many clinicians as probably unworkable (Royal College of Psychiatrists, 2002). It is in just such highly emotive areas of medical care that policy decisions are most likely to be misguided and the need for independent research is correspondingly strong.

APPENDIX

Grading the evidence

Rating scale used as a measure of effectiveness in the National Service Framework on Mental Health (Department of Health, 1999):

Type I evidence – at least one good systematic review, including at least one randomised controlled trial.

Type II evidence – at least one good randomised, controlled trial.

Type III evidence – at least one well-designed intervention study without randomisation.

Type IV evidence – at least one well-designed observational study.

Type V evidence – expert opinion, including the opinion of service users and carers.

Proposed scale for rating evidence from epidemiological and preventive research in the mental health field:

Type I evidence: effectiveness of public health action. In replicated studies, measures that diminish population exposure to an identified risk factor are followed by a reduction of psychiatric morbidity in the study population, relative to a comparison population.

Type II evidence: differential incidence in population cohorts. Psychiatric incidence rates differ consistently between population cohorts, in accordance with known differences in levels of risk exposure.

Type III evidence: association of illness onset with risk exposure. Onset of new cases of psychiatric disorder in a population is consistently found to be associated with preceding exposure to a suspected risk factor.

Type IV evidence: direct association of illness prevalence with level of risk exposure. Exposure to a suspected risk factor is consistently found to be higher among diagnosed psychiatric cases than among matched controls drawn from the same population.

Type V evidence: ‘ecological’ association between illness prevalence and risk indicators. Area rates of psychiatric morbidity are consistently found to vary with levels of risk exposure as shown by relevant administrative indices.

Clinical Implications and Limitations

CLINICAL IMPLICATIONS

  1. If evidence-based medicine is to realise its full potential in clinical practice and teaching, ways must be found to assess therapeutic effectiveness pragmatically while remaining independent of commercial and political pressures.

  2. Extending the evidence-based approach to health care evaluation in the UK will have important effects on clinical services through National Health Service (NHS) approval and commissioning mechanisms.

  3. This shift also implies that the evidence base should be widened to include mental illness in primary health care and research on preventive psychiatry.

LIMITATIONS

  1. Most systematic reviews in the evidence-based medicine framework have been focused on clinical trials of individual patient treatment. Corresponding reviews of health service provision are still scanty.

  2. Reported reviews of service provision have relied heavily on studies of health care systems that may not be directly relevant to the NHS.

  3. There is as yet no approved system for grading evidence from epidemiological and preventive research analogous to that recommended for clinical treatment.

References

Adams, C. & Gelder, M. (1994) The case for establishing a register of randomised controlled trials of mental health care. A widely accessible register will minimise bias for those reviewing care. British Journal of Psychiatry, 164, 433–436.
American Psychiatric Association (1994) Diagnostic and Statistical Manual of Mental Disorders (4th edn) (DSM–IV). Washington, DC: APA.
Angell, M. (2000) Is academic medicine for sale? (editorial). New England Journal of Medicine, 342, 1516–1518.
Bartley, M. & Plewis, J. (2002) Accumulated labour market disadvantage and limiting long-term illness: data from the 1971–1991 Office for National Statistics' Longitudinal Study. International Journal of Epidemiology, 31, 336–341.
Berk, M. & Janet, M. L. (1999) Evidence-based psychiatric practice: doctrine or trap? Journal of Evaluation in Clinical Practice, 5, 97–101.
Bisson, J. I., Jenkins, P. L., Alexander, J., et al (1997) Randomised controlled trial of psychological debriefing for victims of acute burn trauma. British Journal of Psychiatry, 171, 78–81.
Bodenheimer, T. (2000) Uneasy alliance: clinical investigators and the pharmaceutical industry. New England Journal of Medicine, 342, 1539–1544.
Brown, T. & Wilkinson, G. (eds) (2000) Clinical Reviews in Psychiatry (2nd edn). London: Gaskell.
Brugha, T. S., Wheatley, S., Taub, N. A., et al (2000) Pragmatic randomised trial of antenatal intervention to prevent post-natal depression by reducing psychosocial risk factors. Psychological Medicine, 30, 1273–1281.
Burns, T. (2000) Models of community treatments in schizophrenia: do they travel? Acta Psychiatrica Scandinavica, 407 (suppl. 102), 11–14.
Burns, T. & Catty, J. (2002) Mental health policy and evidence. Potential and pitfalls. Psychiatric Bulletin, 26, 324–327.
Campbell, S. M., Roland, M. O., Bentley, E., et al (1999) Research capacity in UK primary care. British Journal of General Practice, 49, 967–970.
Chalmers, I. (1993) The Cochrane Collaboration: preparing, maintaining and disseminating systematic reviews of the effects of health care. Annals of the New York Academy of Sciences, 703, 156–163.
Cochrane, A. L. (1972) Effectiveness and Efficiency. Random Reflections on Health Services. London: Oxford University Press for the Nuffield Provincial Hospitals Trust.
Crudace, T., Evans, J., Harrison, G., et al (2003) Impact of the ICD–10 Primary Health Care (PHC) diagnostic and management guidelines for mental disorders on detection and outcome in primary care: cluster randomised controlled trial. British Journal of Psychiatry, 182, 20–30.
Department of Health (1999) National Service Framework for Mental Health: Modern Standards and Service Models. London: Department of Health.
Department of Health (2000) Reforming the Mental Health Act (Cm 5016-I, 5016-II). London: Stationery Office.
Department of Health (2001) The National Institute for Mental Health in England: Role and Function. London: Department of Health.
Egger, M., Schneider, M. & Davey-Smith, G. (1998) Spurious precision? Meta-analysis of observational studies. BMJ, 316, 140–144.
Elliott, S. A., Leverton, T. J., Sanjack, M., et al (2000) Promoting mental health after childbirth: a controlled trial of primary prevention of postnatal depression. British Journal of Clinical Psychology, 39, 223–241.
Feinstein, A. R. (1995) Meta-analysis: statistical alchemy for the 21st century. Journal of Clinical Epidemiology, 48, 71–79.
Geddes, J. R. & Harrison, P. J. (1997) Closing the gap between research and practice. British Journal of Psychiatry, 171, 220–225.
Goldberg, D. & Huxley, P. (1992) Common Mental Disorders: A Biosocial Model. London: Routledge.
Grosse, S. D., Matte, T. D., Schwartz, J., et al (2002) Economic gains resulting from the reduction in children's exposure to lead in the United States. Environmental Health Perspectives, 110, 563–569.
Higgit, A. & Fonagy, P. (2002) Reading about clinical effectiveness. British Journal of Psychiatry, 181, 170–174.
Hotopf, M., Churchill, R. & Lewis, G. (1999) Pragmatic randomised controlled trials in psychiatry. British Journal of Psychiatry, 175, 217–223.
Johansson, S. E. & Sundqvist, J. (1997) Unemployment as an important risk factor for suicide in contemporary Sweden: an 11-year follow up study of a cross-sectional sample of 37 789 people. Public Health, 111, 41–45.
Kendell, R. E. (1999) Influencing the Department of Health. Psychiatric Bulletin, 23, 321–323.
Kpsowa, A. J. (2001) Unemployment and suicide: a cohort study of social factors predicting suicide in the US National Longitudinal Mortality Study. Psychological Medicine, 3, 127–138.
Lamphear, B. P., Dietrich, K., Aninger, P., et al (2000) Cognitive deficits associated with blood lead concentrations <10 μg/dl in US children and adolescents. Public Health Reports, 115, 521–529.
Lelliott, P. (2002) The National Institute for Mental Health in England. Psychiatric Bulletin, 26, 321–324.
Lewis, G. & Sloggett, A. (1998) Suicide, deprivation and unemployment: record linkage study. BMJ, 317, 1283–1286.
Lewis, S., Tarrier, N., Haddock, G., et al (2002) Randomised controlled trial of cognitive-behavioural therapy in early schizophrenia: acute-phase outcomes. British Journal of Psychiatry, 181 (suppl. 43), s91–s97.
Macarthur, C., Winter, H. R., Bick, D. E., et al (2002) Effects of redesigned postnatal care on women's health 4 months after birth: a cluster randomised controlled trial. Lancet, 359, 378–385.
Mayou, R. A., Ehlers, A. & Hobbs, M. (2000) Psychological debriefing for road traffic accident victims. Three year follow-up of a randomised controlled trial. British Journal of Psychiatry, 176, 589–593.
McKee, M. & Clarke, A. (1995) Guidelines, enthusiasm, uncertainty, and the limits to purchasing. BMJ, 310, 101–104.
Naylor, C. D. (1995) Grey zones of clinical practice: some limits to evidence-based medicine. Lancet, 345, 840–842.
NHS Centre for Reviews and Dissemination (1997) Mental health promotion in high-risk groups. Effective Health Care Bulletin, 3, 1–11.
Preti, A. & Miotto, P. (1999) Suicide and unemployment in Italy, 1982–1994. Journal of Epidemiology and Community Health, 53, 694–701.
Pritchard, C. (1992) Is there a link between suicide in young men and unemployment? A comparison of the UK with other European Community countries. British Journal of Psychiatry, 160, 750–756.
Proudfoot, J., Guest, D., Carson, J., et al (1997) Effect of cognitive-behavioural training on job-finding among long-term unemployed people. Lancet, 350, 96–100.
Rangachari, P. K. (1997) Evidence-based medicine: old French wine with a new Canadian label. Journal of the Royal Society of Medicine, 90, 280–284.
Rawlins, M. (1999) In pursuit of quality: the National Institute for Clinical Excellence. Lancet, 353, 1079–1082.
Rose, G. (1985) Sick individuals and sick populations. International Journal of Epidemiology, 14, 32–38.
Rose, S., Brewin, C. R., Andrews, B., et al (1999) A randomised controlled trial of individual psychological debriefing for victims of violent crime. Psychological Medicine, 29, 793–799.
Rosenberg, W. & Donald, A. (1995) Evidence-based medicine: an approach to clinical problem-solving. BMJ, 310, 1122–1126.
Royal College of Psychiatrists (2002) Reform of the Mental Health Act 1983. Response to the Draft Mental Health Bill (www.rcpsych.ac.uk/College/parliament/MHBill.htm). London: RCP.
Royal Commission on Environmental Pollution (1983) Lead in the Environment: Ninth Report. London: HMSO.
Sackett, D. L. (1995) Applying overviews and meta-analyses at the bedside. Journal of Clinical Epidemiology, 48, 61–66.
Sackett, D. L., Haynes, R. B. & Tugwell, P. (1985) Clinical Epidemiology: A Basic Science for Clinical Medicine. Boston: Little, Brown.
Sackett, D. L., Rosenberg, W., Gray, J., et al (1996) Evidence-based medicine: what it is and what it isn't. BMJ, 312, 71–72.
Slade, M., Beck, A., Bindman, J., et al (1999) Routine clinical outcome measures for patients with severe mental illness: CANSAS and HoNOS. British Journal of Psychiatry, 174, 432–434.
Swales, J. (2000) The troublesome search for evidence: three cultures in need of integration. Journal of the Royal Society of Medicine, 93, 402–407.
Tansella, M. (2002) The scientific evaluation of mental health treatments: an historical perspective. Evidence-Based Mental Health, 5, 4–5.
Thornley, B. & Adams, C. (1999) Content and quality of 2000 controlled trials in schizophrenia over 50 years. BMJ, 317, 1181–1184.
Tudor Hart, J. (1997) What evidence do we need for evidence-based medicine? Cochrane Lecture, 1997. Journal of Epidemiology and Community Health, 51, 623–629.
Wadsworth, M. E., Montgomery, S. M. & Bartley, M. J. (1999) The persisting effect of unemployment on health and social well-being in men early in working life. Social Science and Medicine, 48, 1491–1499.
Williams, D. D. R. & Garner, J. (2002) The case against the ‘evidence’: a different perspective on evidence-based medicine. British Journal of Psychiatry, 180, 8–12.
Wing, J. K. (1972) Principles of evaluation. In Evaluating a Community Psychiatric Service (eds Wing, J. K. & Hailey, A.), pp. 11–39. London: Oxford University Press for Nuffield Provincial Hospitals Trust.
Wing, J. K., Beevor, A. S., Curtis, R. H., et al (1998) Health of the Nation Outcome Scales (HoNOS): research and development. British Journal of Psychiatry, 172, 11–18.
Wolff, N. (2001) Randomised trials of socially complex intervention: promise or peril? Journal of Health Services and Research Policy, 6, 123–126.
Wykes, T., Leese, M., Taylor, R., et al (1998) Effects of community services on disability and symptoms. British Journal of Psychiatry, 173, 385–390.