Medical Journals Mislead Trusting Public

Worrisome ailment in medicine: Misleading journal articles

By Anna Wilde Mathews, The Wall Street Journal

Doctors and patients who rely on articles in prestigious medical journals for information about drugs have a problem: The articles don't always tell the full story.

Some omit key findings of trials about a drug's safety and efficacy, or inconvenient details about how a trial's design changed partway through. A study published in the Journal of the American Medical Association last year reviewed 122 medical-journal articles and found that 65 percent of findings on harmful effects weren't completely reported. It also found gaps in half the findings on how well treatments worked.

The problem calls into question whether journals can play the role of gatekeeper in an era when articles are increasingly used as marketing tools. Editors have "found themselves playing a game of research hide-and-seek," says Jeffrey Drazen, editor in chief of the New England Journal of Medicine. They have "had experiences where authors tried to pitch it, where they were telling you the good news and not the bad news."

Now some top journals are cracking down. This year BMJ, a leading British medical journal, started demanding that everyone who submits an article also submit the original study design plans, so that peer reviewers can see whether the authors changed the goalposts when publishing the study. It's weighing whether to make the plans and the reviewers' comments public.

" We want to make sure that we're not misleading the public, " says Kamran

Abbasi, deputy editor of BMJ.

 

JAMA and other top journals are also asking authors more frequently for their original study designs. Editors at JAMA sometimes call in independent statisticians to review the results. Several medical journals are also requiring that sponsors of clinical trials starting this July or later register details such as goals and size in a public database if they wish the results to be published.

The measures reflect a growing body of research about discrepancies between journal articles and the full results of the studies behind them. Journal editors are also responding to the escalating debate in Washington on ensuring drug side effects are properly disclosed. In the wake of the withdrawal of Merck & Co.'s painkiller Vioxx over cardiovascular side effects, some legislators are calling for tougher safety scrutiny of drugs on the market.

The JAMA study last year said articles often cherry-picked strong results to report, even if those results were in a different area from the one the study was designed to test. Typically, scientists set up clinical trials to answer one or two primary questions -- for example, whether a drug reduces the risk of heart attack and stroke. These are called the primary outcomes. The JAMA study found that 62 percent of trials had at least one primary outcome that was changed, added or omitted.

" It was a shock to find that what we thought was the most reliable

information wasn't, " says Douglas Altman, an author of the JAMA study and

director of the Center for Statistics in Medicine in Oxford, England.

 

One well-publicized dispute over data interpretation came in 2000, when a JAMA article said Pfizer Inc.'s painkiller Celebrex minimized damage to the stomach compared with older drugs. It later emerged that the authors used only six months of data even though they had some data extending to 12 months. When all the results were included, Celebrex didn't look markedly better than its rivals. (Separately, a study last year suggested Celebrex might increase heart risk.) Today, the 2000 article is part of a shareholder suit alleging that Pfizer misled investors about its drug.

" We were burned very badly, " says Catherine DeAngelis, JAMA's editor in

chief. She says the journal got tougher as a result. Today, " the single

thing we change most often is the conclusion, " she says. " It comes in as,

'This product is the greatest thing,' and we say, 'Under these

circumstances, in this population, this medication seems to control a, b and

c.' "

 

In an email, a Pfizer spokeswoman said the authors of the Celebrex study "presented the data in the manner that they felt was the most meaningful." Many patients discontinued the treatment after six months, "making analysis of data beyond six months difficult," she said.

Journal editors rarely see the complete design and outcome of the studies summarized in articles submitted for publication. A typical article is perhaps six or seven pages long, even when the research behind it took years and involved thousands of patients. Peer reviewers -- other scientists who work voluntarily to review articles before they are published -- also see only the brief article. They might fail to notice suspicious omissions and changes in focus, or, if they do, lack the time or inclination to follow them up.

The system relies, in essence, on a scholarly honor system. "Science depends on trust," says Drummond Rennie, a professor at the University of California, San Francisco, and deputy editor of JAMA. "But if you have trust, you're going to be fooled. You can't have a policeman in every lab."

JAMA has tightened scrutiny of articles that are co-authored by academic researchers and industry scientists. To prevent the industry authors from slanting the data, JAMA demands that the academic authors, like corporate chief executives under Sarbanes-Oxley requirements, sign statements attesting that they have taken part in the data analysis and stand by their findings.

" I want an academician to put his or her reputation on the line, and that of

the institution, " says Dr. DeAngelis, the JAMA editor in chief.

 

Disputes are rarely clear-cut. Scientists may legitimately disagree whether an article that leaves out a certain figure is deceptive or merely reflects the fact that no several-page summary of thousands of pages of data can be comprehensive.

As part of a bid for Food and Drug Administration approval to sell the anticholesterol drug Mevacor without a doctor's prescription, Merck and partner Johnson & Johnson set up mock drugstores and solicited customers through advertising. The store shelves were lined with products including over-the-counter Mevacor. A label on the drug instructed potential users to take it only if they met several conditions, such as having moderately high cholesterol and at least one risk factor for heart disease. The idea was to simulate the real-life circumstances under which the pills would be sold.

An article summarizing the results of the experiment in the November 2004 issue of the American Journal of Cardiology said about two-thirds of the people who decided to try the drug met the conditions or came close. The authors, who worked for Merck and J&J, said the study's full results made a "compelling case" that Mevacor was suitable to be sold over the counter.

In reviewing the case, the FDA highlighted another figure, one that never appeared in the article: Just 10 percent of the people who took the drug fully met the label's conditions. The others included in the two-thirds figure met many of the conditions but not all. After hearing a presentation by agency officials, an FDA advisory committee in January voted to reject the drug companies' request.

Edwin Hemwall, a vice president at the Merck-J&J joint venture that wants to sell the drug over the counter, says the label was conservatively written and the two-thirds figure accurately captured the percentage of users who were right for the drug. It included people who had been advised by their doctor to take Mevacor and some who were a year younger than the minimum ages on the label (45 for men, 55 for women). "We felt that that really represented, from a global perspective, the type of person who fit the label," Dr. Hemwall says. The FDA, he says, "went very, very strictly by the label."

The journal's editor, William Roberts of Baylor University, said he didn't remember details of the article and couldn't comment.

Some editors say it's impossible to sift through thousands of pages of raw data to check a paper's fairness, given the cost and the demands on reviewers' time. Ultimately that's the job of the FDA, says David Pisetsky, a Duke University professor who is the editor of Arthritis & Rheumatism. "People have to be realistic about what a journal can do," he says.

Alan Goldhammer, an associate vice president at the Pharmaceutical Research and Manufacturers of America, the top drug-industry trade group, says some of the new scrutiny unfairly singles out drug companies -- for example, by forcing them to get academic scientists to sign off on their work. "When is enough enough?" asks Mr. Goldhammer. "Why are our submitted articles different from all other submitted articles?"

They're different, journal editors contend, because of their potential use in marketing. In 2003, an article in the Journal of the American Geriatrics Society concluded that the Alzheimer's drug Aricept helps elderly people live at home longer. It cited "significant delays" in the date when people who took Aricept entered a nursing home. Pfizer and Eisai Co., the drug's co-marketers, ran ads in medical publications that cited the study and said the drug "helps keep patients in the community for more than five years."

Critics, in letters to the journal, called the study "seriously flawed" and "misleading." They suggested that its design tended to weight the Aricept group with the most compliant patients and those with the most social support -- making it unclear whether their superior results had anything to do with the drug's effects. Those in the non-Aricept group included people who refused the companies' offer of free ongoing treatment with the drug and some who dropped out of an earlier clinical trial of it.

The study acknowledged the possibility of "selection bias" between the groups, but suggested this wasn't a fatal flaw. David Geldmacher, the article's lead author and a professor at the University of Virginia, says the study results were "meaningful" and the two groups were "comparable to a reasonable standard." Thomas Yoshikawa, editor in chief of the journal, says the article was "topical and relevant" and its science "reasonably good." The Pfizer spokeswoman said in an email that the company "stands by our advertising and the results of this study," adding that they are consistent with a different study published this January in the same journal.

A 2001 analysis in JAMA found that side effects were adequately reported in only 39 percent of nearly 200 articles surveyed. The median space devoted to safety concerns was roughly a third of a page -- about the same as the authors' names and affiliations.

In 2001, the New England Journal of Medicine published an article about the Eli Lilly & Co. drug Xigris for sepsis, a body-wide response to infection that is often fatal. The article described Xigris as effective in a broad spectrum of patients. But four consultants to an FDA advisory committee later published a commentary in the journal, saying the FDA's analysis showed the least-sick patients got no benefit and suffered side effects. The FDA approved the drug only for sicker patients.

William Macias, a Lilly official, and Gordon Bernard, a Vanderbilt University professor who was the lead author of the article, say the authors used a different statistical analysis than the FDA, and their method showed no meaningful differences between the subgroups. Dr. Drazen, the New England Journal's editor in chief, defends the article, saying the main point was to tell readers the overall results.

" One solution to this is to publish the raw data " that emerge from a trial,

says Dr. Abbasi of BMJ, the British journal. " The way things are going in

terms of openness, you can't rule it out. "

 

 

Moving Goalposts

The main goal of a study may change between the original design and final publication. What some authors do, according to a sampling of articles:

Describe original main goal as secondary: 34 percent
Fail to disclose original goal: 26 percent
Turn original secondary goal into main goal: 19 percent
Create new main goal: 17 percent

Source: "Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials," JAMA, May 26, 2004