
Fwd: Conflicts of Interest in Medical Research

SSRI-Research, JustSayNo wrote:

CONFLICTS OF INTEREST

Presented by Vera Hassner Sharav

14th TRI-SERVICE CLINICAL INVESTIGATION SYMPOSIUM

Sponsored by
THE U.S. ARMY MEDICAL DEPARTMENT
and
THE HENRY M. JACKSON FOUNDATION FOR THE ADVANCEMENT OF MILITARY MEDICINE

May 5-7, 2002
http://www.researchprotection.org/testimonypresentations/armymeddept.html

 

 

The cornerstone of public trust in medical research is the integrity of academic institutions and the expectation that universities, which rely on public funding, have a responsibility to serve the public good. Financial conflicts of interest affect millions of American people: those who are subjects of clinical trials testing new drugs and those who are prescribed drugs after their approval. Yet the leadership paid little attention to the issue until a stream of tragic and unseemly public revelations shook public trust in academic research.

 

In January 2002, the Association of American Medical Colleges (AAMC) approved a report by its task force stating: "Financial conflicts of interest of clinical investigators... [is] the single issue that poses the greatest threat to maintaining public trust in biomedical research."[1] The report did not address institutional conflicts of interest, which create a culture that collides with the humanist tradition.

 

Physicians reading the current issue of JAMA[2] will be startled to learn that a team of Harvard University professors is advising physicians NOT to prescribe new drugs to their patients because their safety has not been established, despite FDA approval. Adverse drug reactions,[3] they acknowledge, are the leading cause of death in the U.S. They analyzed the 25-year record of drug label changes (between 1975 and 1999) as they appeared in the Physicians' Desk Reference and found that 548 new drugs were approved during that period. Of these, 20% required subsequent black box warnings about life-threatening drug reactions; half of these adverse effects were detected within 2 years, while others took much longer. Sixteen drugs had to be withdrawn from the market because they were lethal.

 

The JAMA report provides a basis for evaluating the value and relevance of clinical trial findings for clinical care. It also provides a basis for measuring the FDA's performance as gatekeeper in preventing hazardous drugs from reaching the market. The authors found that clinical trials are underpowered to detect uncommon but potentially lethal drug reactions. Their design, biased selection, short duration, and the accelerated approval process almost ensure that severe risks go undetected during clinical trials. The JAMA report validates the findings of a Pulitzer Prize-winning investigative report in the Los Angeles Times by David Willman.[4]
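The arithmetic behind "underpowered" is worth making explicit. The sketch below uses the statistical rule of three, which is my addition for illustration rather than part of the JAMA analysis: if zero cases of a reaction are observed among n trial subjects, the true event rate can still be as high as roughly 3/n at about 95% confidence.

# Illustrative sketch (my addition): the "rule of three" shows why a trial
# that sees zero cases of a reaction among n subjects cannot rule out a
# true event rate of up to about 3/n (95% confidence).

def max_undetected_rate(n_subjects: int) -> float:
    """Approximate 95% upper bound on an event rate when zero events are
    observed among n_subjects (rule of three)."""
    return 3.0 / n_subjects

for n in (500, 3000, 10000):
    rate = max_undetected_rate(n)
    print(f"{n:>6} subjects, zero events seen -> rate could still be "
          f"~1 in {round(1 / rate):,}")

# A reaction striking 1 in 10,000 users would typically go unseen in a
# 3,000-subject program, yet harms thousands once millions take the drug.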

 

Willman uncovered evidence demonstrating the adverse consequences of the 1992 Prescription Drug User Fee Act (PDUFA), the law that brought industry money and industry influence to the FDA. The approval process for new drugs was accelerated, and the percentage of drugs approved by the FDA increased from 60% at the beginning of the decade to 80% by the end of the 1990s. Willman reported that the FDA was the last to withdraw several drugs that had been banned by European health agencies. There was a concomitant precipitous rise in the approval of lethal drugs: between January 1993 and December 2000, seven deadly drugs were brought to market only to be withdrawn after they had been linked to at least 1,002 deaths.[5] In a follow-up article in August 2001,[6] Willman reported that the list of lethal drugs withdrawn since September 1997 had jumped to a dozen; 9 had been approved after 1993.

 

None of the drugs were for life-threatening conditions: one was a diet pill, another was for heartburn, another was an antibiotic that proved more dangerous than existing antibiotics. The approval of these drugs illustrates the collision between corporate interests and the public interest. Corporate interests revolve around maximizing profits through the marketing of new, expensive drugs, and those interests collide with public safety. The FDA's "expert advisory panels" demonstrate the agency's loss of independence: most advisory panel members have undisclosed financial ties to the manufacturer whose drugs they recommend for FDA approval.[4]

 

Corporate influence in academia:[7]

 

Until 1980 a firewall existed separating industry and academia, to ensure that academic pursuits were independent of commercial influence. When the Bayh-Dole Act of 1980 encouraged "technology transfer," that firewall was removed, allowing federally funded universities to patent and license inventions developed by faculty members. Researchers and institutions were free to enter into ventures and partnerships with biotechnology and pharmaceutical companies, and they did. It is estimated that of the $55 billion to $60 billion spent by the biomedical industry on research and development, large companies spend one fifth at universities and small companies spend one half.[8] With the flow of corporate money came corporate influence and control. The culture within academic institutions changed: business ethics swept aside the moral framework within which academia had functioned. Gone were such niceties as intellectual freedom and a free and open exchange of ideas; so was full disclosure of research findings. Gone was the culture of social responsibility, or a social conscience. Finally, the absence of independent, third-party review has put the integrity of the process and the quality of the products in jeopardy.

 

 

 

The investigative series in the Seattle Times[9] provides insight into that changed culture at the Fred Hutchinson Cancer Center during the mid-1980s. The copiously documented series examined the conduct of research and patient care in two cancer trials. It illustrates how a new entrepreneurial culture in medicine encouraged doctors to push the limits beyond what can be considered ethical research, subjecting patients to unjustifiable risks and increased suffering. At the Hutch, a physician with a conscience who clearly did not embrace the new entrepreneurial ethos blew the whistle.

 

It has been said, "Doctors fear drug companies like bookies fear the mob."[10]

 

Researchers whose findings collide with corporate interests are finding out that academic freedom is no longer operational. Two high-profile examples from our Canadian neighbors illustrate that researchers can face intimidation by both corporate sponsors and university administrators. In 1996 Dr. Nancy Olivieri[11] found that a generic drug for thalassemia, manufactured by Apotex, the sponsor of the trials, failed to sustain long-term efficacy. Dr. Olivieri informed Apotex and the chair of the institution's research ethics board (REB) and moved to inform patients in the clinical trials of the risk, as is her ethical obligation. Apotex terminated the two trials and warned Olivieri of legal consequences if she informed patients or anyone else. Apotex, meanwhile, had reportedly contributed $13 million to the University of Toronto.

 

When Olivieri attempted to publish her findings, Apotex threatened to sue her for breach of confidentiality. The University failed to defend Olivieri and the principles of research ethics or academic freedom. The University threatened to dismiss her, initiated a biased inquiry, and knowingly relied on false accusations by company-funded investigators, all of which were later discredited by an independent investigation by the Canadian Association of University Teachers.[11] Olivieri's publication of her negative findings was delayed for two years. The case is a dramatic illustration of conflicts of interest and the collision between corporate interests and the right of research subjects to be informed of any identified risks, as required by the principle of informed consent.

 

 

 

Another example of the clash between academic freedom and corporate interests, again involving retribution by the University of Toronto, involves Dr. David Healy,[12] a prominent psychopharmacologist and historian of psychiatry at the University of Wales. Healy had been hired to head the Mood Disorder Program at the University's Center for Addiction and Mental Health. The program is reported to get 52% of its funding from corporate sources, and the Center received $1.5 million from Eli Lilly. After Healy criticized the drug industry in an article published by The Hastings Center, Eli Lilly withdrew its $25,000 contribution to Hastings. When Healy delivered a paper expressing his concern about the risk of suicide in some patients taking antidepressant drugs such as Prozac, the University rescinded his appointment. Academic freedom is but one casualty of corporate influence.

 

As Marcia Angell correctly observed[13] in her last editorial in the NEJM, corporate influence in medicine is ubiquitous, extending far beyond individual physician-researchers: it determines what research is conducted, how it is done, and the way it is reported. Short-term corporate goals take priority over society's long-term needs. Under corporate influence, more research is done comparing trivial differences between one drug and another, and less research is done to gain knowledge about the causes of disease.

 

The pharmaceutical industry spends $15 billion[14] to buy the loyalty of health care providers and allied professionals: educators, investigators, and non-profit organizations. Drug companies shower physicians with gifts, honoraria, and global junkets, and provide fees for patient referrals to clinical trials. They endow academic chairs and programs and provide grants, stock equity, and patent royalty fees to researchers and institutions; even publication attribution is controlled by sponsoring companies. They make contributions to professional associations and patient advocacy groups, and sponsor their conferences.

 

The American Medical Association sells the rights to its "physicians' master file," with its detailed personal and professional information on every doctor practicing in the United States, to dozens of pharmaceutical companies for $20 million.[15] That database provides drug marketers with invaluable information. Journals and the media profit from drug advertising income. Such financial inducements assure industry a fraternity of loyal allies, among them journal editors, who protect their own interests and those of their corporate benefactors. For example, the British journal The Lancet reported that the editor of the British Journal of Psychiatry had published a favorable review of a drug while he was receiving an annual fee of 2,000 British pounds from the drug's manufacturer.[16] Although clinical research is highly competitive, the interdependent, collaborative network of stakeholders tightly controls a self-administered, opaque oversight system.

 

The pharmaceutical industry also buys political influence in Congress and the administration. Public Citizen[17] reported that there are 625 pharmaceutical industry-paid lobbyists in Washington, one for every congressman. The industry spent $262 million on political influence in the 1999-2000 election, more than any other industry. This influence ensures the industry profit-enhancing legislation and reduced regulation. After the 1992 Prescription Drug User Fee Act (PDUFA), which precipitated fast-track drug approval, Congress passed the 1997 FDA Modernization Act, providing industry with a huge financial incentive: a six-month patent extension for drugs tested in children. These legislative initiatives are a financial bonanza for the drug industry, translating into billions of dollars in revenues; a six-month patent extension can generate as much as $900 million for a single drug.[18]

 

However, the accelerated pace of research and of the drug approval process has taken an enormous toll in human casualties. Adverse drug reactions are the leading cause of death in the United States; women and the elderly are at special risk.[19] The LA Times revealed that between September 1997 and September 1998, nearly 20 million Americans took at least one of the harmful drugs the FDA had been forced to withdraw.[4] A comparison of the FDA's 25-year drug approval-withdrawal record analyzed by Lasser et al.[2] in JAMA with the LA Times analysis of the FDA's recent five-year record raises alarms: 16 drugs withdrawn within 25 years, 12 within five. Most of those withdrawn drugs had been approved after 1993. The LA Times noted, "never before has the FDA overseen the withdrawals of so many drugs in such a short time."

 

Since 1994, reports in the press have described ethical violations that undermined the safety of subjects in clinical trials, causing some to die when they might have lived.[20] The violations occurred because a culture of expediency had replaced a culture of personal moral responsibility. Systemic ethical violations were revealed at the nation's leading research centers,[21] including Duke, the University of Pennsylvania, New York Cornell Medical Center, Johns Hopkins, Fred Hutchinson, NIMH, the University of Maryland, and Harvard in China. The evidence demonstrates that the problem is not merely a few rogue investigators; the problem is an entrenched, insular system and weak federal oversight.[22] The federal Office of Protection from Research Risks (now OHRP) was forced (temporarily) to shut down clinical trials at some of the nation's most prestigious institutions.[23]

 

In September 2000, near the end of her term as Secretary of HHS, Donna Shalala acknowledged in the NEJM: "I did not expect, or want, to complete my tenure . . . by raising questions about the safety of patients in clinical research. However, recent developments leave me little choice. . ."[24] Unfortunately, the only initiative taken was to reorganize the federal oversight agency (now OHRP) under a new director who believes that education and a collaborative system of voluntary accreditation will repair the damage.[25] I disagree. Ethical violations such as failure to disclose risks and to protect the welfare of patient-subjects are the result of conflicts of interest, not poor education.

 

An example of complicity by government officials who provide a shield of secrecy while claiming "transparency": On February 7, 2002, the Alliance for Human Research Protection[26] requested a copy of current proposals that have been received by the Secretary of HHS in accordance with Section 407 of the federal regulations (45 CFR 46, Subpart D). Subpart D protects children, who are incapable of exercising the right to informed consent, from experiments involving greater than minimal risk if there is no potential benefit to them. However, Section 407 provides an appeal process to the Secretary. The regulation stipulates that nontherapeutic research with no potential direct benefit to the child may be permitted if the Secretary, after consultation with "a panel of experts in pertinent disciplines and following an opportunity for public review and comment," finds that "the research presents a reasonable opportunity to further the understanding, prevention or alleviation of a serious problem affecting the health or welfare of children."

 

Our request was denied with the following statement:[27] "Release of information would interfere with the agency's deliberative and decision-making processes. Further, each researcher has a commercial and privacy interest in the release of any information." A similar reason was given for denying disclosure of the list of experts: "Release of expert identities associated with the review of individual protocols would interfere with the agency's deliberative and decision-making process and have a chilling effect on the ability of the agency to obtain frank and candid opinions from its reviewers." This is an example of federal officials attempting to block public access to information guaranteed under federal regulation.

 

The role IRBs and bioethicists have in this enterprise:

 

Ostensibly, IRBs were established to serve as gatekeepers to protect human subjects. But lacking independence, they actually function as facilitators for the accrual of grant monies by their parent institutions. It is not surprising, therefore, that IRBs have failed to protect research subjects from harmful experiments or to weed out research that fails to meet scientific justification. Specifically, what conclusion is one to draw from the fact that 90% of the protocols approved by the IRB at the NIMH apparently failed to meet ethical and/or scientific justification? Following an investigative series in The Boston Globe[28] in 1998, the director of NIMH ordered an independent evaluation of all 89 clinical trials at the Institute. The result: 29 were suspended at once, and an additional 50 protocols were put on probation for lack of scientific justification; that's 79 out of 89.[29]

 

In " Pharma Buys a Conscience, " Dr. Carl Elliott, [30] director of the

Bioethics Center, Minnesota, (who happens to be a physician) is an

insightful critical examination of bioethics. Elliott criticizes his

colleagues who have been seduced by corporate financial incentives. He

points out how conflicts of interest have undermined the professional

integrity of bioethics. He lists ethics consultants and their

corporate

benefactors,[31] as well as what he calls, " corporate-academic dating

services " that match academic " experts " with businesses seeking

expertise.

He notes that corporate money and corporate influence is so

entrenched at

university medical centers that overt threats need not be explicitly

made,

everyone knows what's expected. Bioethicists are in demand because

they lend

the appearance of legitimacy to corporate ventures. Therefore,

corporations

funnel money to bioethics centers, and pay bioethicists retainers to

serve

on their advisory boards. But, as Elliott points out, " The problem

with

ethics consultants is that they look like watchdogs but can be used

like

show dogs. "

 

Indeed, bioethicists have lent the seal of legitimacy to highly questionable, if not outright unethical, research. Their corporate affiliations are not publicly disclosed when they render opinions in the media, on IRBs, or on government advisory panels. An institutionalized veil of secrecy shields academics who sit on government-appointed advisory panels. While their recommendations affect public policy, those recommendations may also serve the financial interests of the corporations that pay them.

 

In 1997, I testified before the National Bioethics Advisory Commission (NBAC) about financial conflicts of interest, betrayal of trust, and the undue influence of drug companies in medicine. I pointed out that physicians who accept large payments to refer patients for clinical trials testing the safety and efficacy of new products are breaching medical ethics. The Wall Street Journal, for example, reported that doctors with academic affiliations have been paid as much as $30,000 per patient per drug trial[32] in schizophrenia and Alzheimer's studies.

 

Following the testimonies, Dr. Harold Shapiro, chair of the NBAC and President of Princeton, indicated that the NBAC would not focus on financial arrangements of research investigators because, "after all, this is a capitalist country." Dr. Shapiro neglected to mention that he was drawing a salary from Dow Chemical Company, on whose advisory board he sat.[30] Such publicly undisclosed personal financial arrangements by academics who sit on public policy advisory boards are not at all unusual. The public is under the illusion that so-called "expert advisory panels" are independent and render objective, disinterested recommendations. The public does not suspect that these panelists from academia have financial ties to biochemical companies and, therefore, conflicts of interest. No one is held accountable for formulating public policy recommendations that serve an undisclosed self-interest.

 

What chance does a vulnerable individual patient have as an outsider confronting a fraternity of insiders, all of whom have something to gain from his participation as a subject? The system serves its stakeholders. Revelations about the system's failure to protect human subjects from preventable harm have come to light not because of any internal safety mechanisms, but as a result of information provided by conscientious whistleblowers and investigative press reports.

 

Following are my "dirty dozen" corrupt research review practices that undermine both the safety of human subjects and the integrity of research findings:

 

1. Efficacy by design: washout/placebo; unequal dose comparison = bias.

2. Subject selection bias: younger, healthier subjects than those likely to be prescribed the treatment; randomization criteria; recruitment coercion.

3. Assessment of risk/benefit: entirely subjective; it depends on who is assessing.

4. IRB evaluation and approval process: vote without examination of the protocol; intimidation; IRB shopping.

5. Misleading disclosure documents = Uninformed Consent.
Non-disclosure: there's no benefit; newly identified risks = Uninformed Consent.

6. Suppressing adverse event reports: "don't ask, don't tell."

7. Interpretation of findings: "efficacy in expert hands is not the same as clinical effectiveness."[33]

8. Biased advisory panels: FDA panels recommend drugs that kill.
Bioethics ethics: conscience for hire; professional guidelines, recommendations.

9. Corrupted published data: suppression of negative findings; ghost authorship.

11. Complicit government oversight officials fail to enforce, preferring to redefine the standards: Who is a human subject? What's a condition? Can children's assent be called consent?

12. Using patients as laboratory animals in symptom provocation, relapse-inducing experiments.

 

Case 1: Placebo design - ethics vs. financial stakes

 

Corporate influence begins with the protocol design and subject selection. For example, unequal dosage comparisons will elicit different side effects that may skew the results. Selective inclusion criteria can effectively hide adverse side effects that will later be revealed in clinical practice. Drug "washout" followed by placebo allows sponsors to manipulate the conditions under which a new drug is tested. Specifically, by making patients very sick during washout, the efficacy of the new drug is likely to be inflated. Such manipulations may explain why a drug's efficacy in clinical trials is usually not matched under normal clinical conditions.
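To make that mechanism concrete, here is a minimal simulation sketch with invented numbers (my illustration, not data from any trial). It assumes the active drug also relieves the withdrawal-induced worsening while the placebo arm stays destabilized, so the measured drug-placebo difference grows with the severity of the washout.

# Hypothetical illustration only: invented numbers showing how a drug
# "washout" before randomization can inflate the measured drug-placebo gap.
import random

random.seed(0)
N = 1000
TRUE_DRUG_EFFECT = 2.0   # genuine symptom-score improvement vs. placebo

def drug_placebo_gap(washout_worsening: float) -> float:
    """Mean endpoint difference (placebo minus drug); higher scores = sicker.
    Assumption: the drug relieves the washout-induced worsening, placebo does not."""
    drug, placebo = [], []
    for _ in range(N):
        stable = random.gauss(25, 4)   # symptom severity before washout
        placebo.append(stable + washout_worsening + random.gauss(0, 2))
        drug.append(stable - TRUE_DRUG_EFFECT + random.gauss(0, 2))
    return sum(placebo) / N - sum(drug) / N

print("apparent advantage, no washout worsening :", round(drug_placebo_gap(0.0), 1))  # ~2
print("apparent advantage, washout worsens 4 pts:", round(drug_placebo_gap(4.0), 1))  # ~6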

 

The use of placebo-controlled trials in patients for whose condition an effective treatment exists has been the subject of heated debate. The FDA has been severely criticized for its placebo-control policy because it undermines patients' best interests in violation of the Declaration of Helsinki. Of particular concern is the risk of suicide in severely depressed or psychotic patients, who are at increased risk when their condition is destabilized by drug "washout" and placebo. They are at risk whether the drugs are an effective treatment or not, because psychotropic drugs are associated with severe withdrawal symptoms.

 

Carl Elliott described his battle with the university's IRB when he challenged placebo-controlled trials: "Tables were pounded. Faces turned scarlet. Blood pressures soared. Yet the IRB continued to approve many of the trials, over my objections and those of other members of the committee. The hospital administration eventually dissolved the IRB and reconstituted it with new membership."[26] Elliott explains that the reason for the explosive reaction was that "everyone's interests were involved," not just the sponsoring drug company's. These trials generated huge income for the hospital and investigators alike, some earning between $500,000 and $1 million a year.

 

Case 2: Biased Clinical Guidelines:

 

An investigative report by Jeanne Lenzer[33] in the British Medical Journal (March 2002) sheds light on the underlying factors that led the American Heart Association to "definitely recommend" a treatment that could cost more lives than the disease itself. In August 2000 the Heart Association promoted alteplase (tPA), manufactured by Genentech, as a treatment for "brain attack." The Association upgraded its recommendation of tPA for stroke, placing it in the class I category. It did so despite the fact that most controlled trials showed that such thrombolytics increase mortality rates in acute ischaemic stroke. In its annual report it described tPA as follows: "A clot-busting drug that helped revolutionize heart attack treatment, tPA holds enormous potential for the treatment of ischemic stroke, which accounts for 70 to 80 percent of all strokes. It is estimated that tPA could be used in 400,000 stroke cases per year to save lives, reduce disability and reverse paralysis."[33]

 

The Heart Association made its bold recommendation on the basis of a single controlled clinical trial conducted by the National Institute of Neurological Disorders and Stroke (NINDS). Six other randomized studies reached the opposite conclusion. Lenzer reported the following: the NINDS study design ensured a favorable finding for tPA because the patients selected to receive tPA had milder stroke scores at baseline than the patients selected for the placebo arm, who had worse strokes. Furthermore, only one fifth of those initially diagnosed were found to have had a stroke. This, of course, put those non-stroke patients at increased risk of harm with no potential benefit. Two observational studies reached opposite conclusions; the Cleveland study found that twice as many patients given tPA died compared with those who were not given it.

 

Most suspicious of all, however, is the refusal by NINDS to reveal the raw data for that single trial. Lenzer's request under the Freedom of Information Act was rejected. Furthermore, the company vigorously opposes a head-to-head study comparing alteplase to streptokinase for myocardial infarction. Dr. Elliott Grossbard, a Genentech scientist, provided the company's position: "We don't know how another trial would turn out. [Another study] may be good for America, but it wasn't going to be a good thing for us."[34]

 

The panel of experts who wrote the Heart Association's Clinical Practice Guideline recommending tPA failed to mention the catastrophic results from the Cleveland study. According to the BMJ article, eight of the nine expert panel members had financial ties to the manufacturer, Genentech. Dr. Jerome Hoffman, the single panel member who did not have ties to Genentech, wrote a dissenting opinion that was not even acknowledged by the panel. Hoffman questioned the tPA endorsement in a BMJ article, charging that the NINDS findings were artificially manipulated to exclude 95% of stroke patients.[35]

 

Lenzer reported that Genentech had contributed over $11 million to the Heart Association and had also paid $2.5 million to build the Heart Association a new headquarters. Only after these financial conflicts of interest became public knowledge did the Heart Association revise its class I recommendation and withdraw statements that tPA "saves lives."

 

The Heart Association is hardly unique: a recent report in JAMA[36] (2002) found that 87% of the authors who wrote treatment practice guidelines in all fields of medicine had financial ties to the pharmaceutical industry. In 1998 the NEJM found that 96% of medical journal authors whose findings were favorable to a product had financial ties to the manufacturer.[37] As questions have been raised about the value of mammography and other cancer screening recommendations, one grows suspicious that most highly publicized screening campaigns are launched by stakeholders with financial interests in the business. Their recommendations may turn out to be hazardous to public health.

 

Case 3: Subject selection bias - antidepressant drug trials

 

Dr. Thomas Laughren, head of the FDA's psychiatric drug division, made the following concessions at a Houston conference (2000): "there is a certain amount of myth" in the claimed efficacy of psychotropic drugs, which have shown only a marginal effect above placebo. "We don't know how effective they are, only that in clinical trials, they demonstrated somewhat greater efficacy than placebo." He then acknowledged: "there isn't any standard for what effect size is required to get a psychotropic drug on the market. We have never, in my experience, not approved a drug because of a finding that the effect size is too marginal."[38]

 

To obtain even a marginal effect above placebo, 60% to 85% of the patients who are most likely to be prescribed antidepressant drugs are excluded by the eligibility criteria. That is the finding of a Brown University analysis[39] of 31 antidepressant trials published from 1994 to 1998. Only 15 percent of 346 depressed patients who were evaluated in a Rhode Island hospital psychiatric clinic would have met the eligibility requirements of a standard drug trial. Such a selection process inevitably skews the results, thereby invalidating the published findings and claims about the efficacy of antidepressants. Zimmerman expressed concern: "If antidepressants are, in fact, not effective for some of these large subgroups of depressed individuals, their prescription incurs an unjustifiable exposure of risks and side effects, and alternative treatments need to be considered."

 

I would also argue that if the patients in clinical trials don't resemble the patients who are later prescribed these drugs, what relevance do the trials have for clinical care?

 

Case 4: Antidepressant drug efficacy hype:

 

A report in the April 10, 2002 issue of JAMA describes a major government-sponsored,[40] 12-site, controlled clinical trial, conducted by prominent psychopharmacologists, comparing sertraline (Zoloft), Hypericum perforatum (St. John's wort), and placebo. The investigators acknowledged:

 

" An increasing number of studies have failed to show a difference

between

active antidepressants and placebo. Many of the presumed factors

underlying

this phenomenon were carefully attended to in this study, e.g,

adherence to

quality control by rater training, treatment adherence monitoring,

inclusion

of experienced investigators, and carefully defined entry criteria.

Despite

all of this, sertraline failed to separate from placebo on the two

primary

outcome measures "

 

Between December 1998 and June 2000, 340 adult outpatients with major depression and a baseline total score of at least 20 on the Hamilton Depression Scale (HAM-D) were recruited and randomly assigned to receive St. John's wort (900 to 1,500 mg), Zoloft (50 to 100 mg), or placebo for 8 weeks. Responders at week 8 could continue blinded treatment for another 18 weeks. The results of the trial state: "on the 2 primary outcome measures, neither [Zoloft] nor [St. John's wort] was significantly different from placebo." Full response occurred in 31.9% of the placebo-treated patients vs. 23.9% of the St. John's wort-treated patients and 24.8% of the Zoloft-treated patients.
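A back-of-the-envelope check (my own sketch, not the trial's published analysis) shows why response rates in this range do not "separate": assuming roughly equal allocation of the 340 patients, about 113 per arm, differences of 7 to 8 percentage points are well within chance.

# Rough sketch, not the trial's actual analysis: two-proportion z-tests on the
# reported full-response rates, assuming ~113 patients per arm (340 / 3).
from math import sqrt, erf

def two_proportion_p(p1: float, p2: float, n1: int, n2: int) -> float:
    """Two-sided p-value for the difference between two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # 2 * (1 - Phi(z))

n = 340 // 3   # assumed equal allocation across the three arms
print("placebo 31.9% vs Zoloft 24.8%:          p ~", round(two_proportion_p(0.319, 0.248, n, n), 2))
print("placebo 31.9% vs St. John's wort 23.9%: p ~", round(two_proportion_p(0.319, 0.239, n, n), 2))
# Both p-values land far above 0.05; neither active arm separates from placebo,
# and the placebo arm's point estimate is actually the highest.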

 

Clearly, a dual dilemma faces those who are invested in promoting psychopharmacology: if they admit that the drugs don't really work, then placebo-controlled trials are ethically justified. However, absent a demonstrable benefit of the drugs, it is unethical to expose patients to the known side effects and the potential long-term risks of harm. But such an acknowledgement would undercut the financial interests of the pharmaceutical industry and all of the stakeholders who depend on corporate largesse. The prominent psychiatrists, whose names are too numerous to be listed at the head of the JAMA article, found a way to spin the negative results of the trial. In their conclusion they ignore their own findings, namely, that neither the antidepressant drug Zoloft nor St. John's wort was more effective than placebo. Indeed, placebo may have an edge. In their conclusion the investigators p

--- End forwarded message ---
