
Paradoxical results in AP/TCM Research


Hi All, & Ken,

 

Ken wrote [re differing / paradoxical results in TCM/AP research]:

> Phil, Thanks for bringing this up. I'm curious to know your take

> on what lies beneath such findings.

 

I prefix these comments with a statement: we had good friends home for dinner tonight, and I have imbibed Chilean red wine in copious amounts.

 

As a professional researcher for circa 40 years, I can say that ALL "scientific hypotheses" have 3 possible outcomes:

(a) supported by the experimental work;
(b) not supported (neither for, nor against) by the experiments;
(c) refuted by the experimental results.

If the pre-thesis hypotheses are based on common sense, or a well-researched pre-thesis literature review, the odds of outcomes (a), (b) and (c) are circa 65, 25, and 10%, respectively.

 

Biology is not a science with outcome predictions as exact as in the physical sciences.

 

Paradox reigns in biology more so than in routine physics / chemistry, because so many factors (often unrecognised by the researcher(s) before the trial(s)) can influence / bias the outcomes of the trials.

 

So it is no surprise to me that different research groups, or even DIFFERENT studies by the SAME research group, can generate different conclusions in so-called randomised controlled trials (RCTs).

 

Ideally, in the design of RCTs, the various trial groups are randomised. Before final assignment to the various treatments, the structure of each group is checked to ensure that individual factors that might bias the results (such as breed, age, sex, reproductive status, health history, etc.) are randomised across all groups.

 

The problem with this concept is that scientists do not KNOW (in advance) ALL of the factors that may influence the outcomes. Therefore the experimental groups are not properly randomised pre-trial!
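To make that checking/balancing step concrete, here is a minimal sketch in Python (the factor names, subjects and arm labels are invented for illustration, not taken from any actual trial): subjects are grouped into strata by the prognostic factors the researchers have recorded, and each stratum is then split across the arms. Any factor that was never recorded, such as an unrecognised TCM pattern, simply cannot be balanced this way, which is exactly the problem described above.

import random
from collections import defaultdict

def stratified_assignment(subjects, factors, arms=("real AP", "sham AP"), seed=1):
    """Balance the *recorded* factors across arms by assigning within strata."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for subj in subjects:
        strata[tuple(subj[f] for f in factors)].append(subj)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)                              # random order within the stratum
        for i, subj in enumerate(members):
            assignment[subj["id"]] = arms[i % len(arms)]  # alternate arms within the stratum
    return assignment

# Hypothetical subjects: sex and age band are recorded, so they will be balanced;
# an unrecorded TCM pattern is left entirely to chance.
subjects = [
    {"id": 1, "sex": "F", "age_band": "40-60"},
    {"id": 2, "sex": "M", "age_band": "40-60"},
    {"id": 3, "sex": "F", "age_band": "60+"},
    {"id": 4, "sex": "M", "age_band": "60+"},
]
print(stratified_assignment(subjects, factors=("sex", "age_band")))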

 

This makes it very difficult (if not impossible) to draw sound conclusions from the results. For example, in a trial of AP/CHM on chronic asthma, if the researchers are not cognisant of the TCM concept of "KI Not Grasping LU Qi" as a factor in asthma, and a pre-trial TCM Dx has not been made, the "placebo" or "sham-AP" group could have 80% of its subjects FREE of that Dx, with the "active" (real) AP group having 80% of its subjects with that more serious Dx.

 

It follows that a comparison of "sham AP" (in the less seriously affected group) could have similar clinical outcomes to "expert AP" in the more seriously affected group!

 

The (MISTAKEN) conclusion could be that "real AP" was no better than "sham AP".

BOTTOM LINE: unwary researchers might conclude that "real AP" had no benefit over "sham AP".
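To see how easily that can happen, here is a toy simulation in Python; every number in it (the 80/20 split, the response scores, the size of the AP effect) is invented purely for illustration and is not drawn from any real trial. Subjects with the more serious pattern respond poorly, real AP adds a genuine benefit, and the only thing that changes between the two runs is whether the pattern is balanced across the arms.

import random

rng = random.Random(1)

def outcome(has_pattern, real_ap):
    base = 30 if has_pattern else 60         # the serious pattern lowers the response score
    effect = 20 if real_ap else 0            # genuine benefit of real AP, same for everyone
    return base + effect + rng.gauss(0, 10)  # measurement noise

def mean(xs):
    return sum(xs) / len(xs)

n = 200

# Badly "randomised" arms: 80% of the real-AP group has the serious pattern,
# but only 20% of the sham group does (the scenario described above).
real = [outcome(rng.random() < 0.8, True) for _ in range(n)]
sham = [outcome(rng.random() < 0.2, False) for _ in range(n)]
print(f"unbalanced arms: real AP {mean(real):.1f} vs sham AP {mean(sham):.1f}")

# Properly balanced arms: the pattern is spread 50/50 across both groups.
real = [outcome(rng.random() < 0.5, True) for _ in range(n)]
sham = [outcome(rng.random() < 0.5, False) for _ in range(n)]
print(f"balanced arms:   real AP {mean(real):.1f} vs sham AP {mean(sham):.1f}")

In the unbalanced run the two arm means come out nearly identical, even though real AP is genuinely 20 points better for every individual subject; only the balanced run shows the effect.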

 

> Are you familiar with the details of the studies design and

> conduct? Are there factors at work here that lead to such

> findings? Ken

 

No! I have not looked at the full text (small print) of those papers.

 

But, from personal experience, I KNOW that AP can confer significant clinical benefit to many patients (esp. humans) with chronic LU disease!

 

My problem (and the problem for the AP / TCM community) is that I cannot PROVE that to the satisfaction of skeptical colleagues!

 

If one looks at the TOTAL number of Medline "hits" on TCM (AP, herbal, tuina, moxa, etc), they are a FLEABITE (?60,000) in comparison to the number of papers on "conventional" (allopathic) medicine (?15,000,000).

 

THAT is our problem! We simply do NOT have enough high-quality published research on TCM. And I do not hold my breath while I wait for governments or users to FUND such research.

 

Meanwhile, we are in "Limbo". We (who use TCM daily) KNOW that it is a very good system when used in well-selected cases, but few scientists / conventional practitioners believe us when we say so!

 

 

Best regards,

 

Email: <

 

WORK : Teagasc, c/o 1 Esker Lawns, Lucan, Dublin, Ireland

Mobile: 353-; [in the Republic: 0]

 

HOME : 1 Esker Lawns, Lucan, Dublin, Ireland

Tel : 353-; [in the Republic: 0]

WWW : http://homepage.eircom.net/~progers/searchap.htm

 

Chinese Proverb: "Man who says it can't be done, should not interrupt man doing it."



On 24/10/2004, at 11:55 AM, wrote:

> The (MISTAKEN) conclusion could be that " real AP " was no better
> than " sham AP " .

Hi Phil,

 

This approach does not produce "mistaken" results, IMO, due to poor randomisation; it is far more incompetent than that. What it produces is results that are not testing TCM at all. Without syndrome differentiation, such experimental designs are just WM diagnosis with cookbook herbs or acupuncture as the therapy.

 

Unfortunately, the scientific community take such "experiments" at face value, assuming they are actually testing TCM; this is the joke, the cause of many contradictory results, and what we must change.

 

Put simply, we must first test real TCM (i.e. patterns as the basis of individualised treatments) before any experimental findings have value.

 

Best Wishes,

 

Steve


Hi Again,

 

IMO, designing an experiment to test TCM using a western disease diagnosis without TCM pattern differentiation is analogous to testing tou teng (head pain/ache) with a WM drug without separating subjects who have sinusitis, stroke, brain tumours, migraines, etc.

 

The results of such a research design would hold no respectability and would never be funded, as the design is incompetent; sadly, the same cannot be said for much of current TCM research.

 

Best Wishes,

 

Steve

 


There are many things wrong with the current 'gold standard' of research, the RCT:

* Migration of symptoms (or side effects) is not given any importance in WM RCTs.
* AP tested against its WM rival is unheard of. Why?
* No pre-trial TCM diagnosis is used, thus randomisation is flawed, as Phil correctly points out.
* Sham point selection is conducted without any real knowledge of AP, although new sham point selection techniques have improved in the RCTs I've looked at.
* The controversial 'cookbook' approach is often adopted, which isn't TCM at all. On top of that, a cookbook selection of points may be further reduced, excluding vital acupoints in the treatment of that syndrome. In Phil's example this may mean the exclusion of the Kidney points in patients who have kidney-deficiency asthma.
* The majority of research done into TCM is actually undertaken by WM docs, therefore a high degree of bias is set against TCM. We practitioners ourselves need to get up off our ass and do it!
* The use of stats in RCTs is the crux of the matter. These can often be bent, like a good accountant managing a company's books. Stats are only as good as the way they're used. Therefore a universal statistical model should be incorporated into all research, rather than the dozens we have at the moment, which, when used cleverly, can often get you a statistically significant outcome (see the sketch below).
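On that last point, here is a toy illustration in Python (it assumes NumPy and SciPy are installed; all of the data are pure random noise, invented for illustration) of why an unconstrained choice among many outcome measures or tests will sooner or later hand someone a "statistically significant" result:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_outcomes = 20                        # e.g. 20 different outcome measures on the same trial
false_positives = []
for i in range(n_outcomes):
    treated = rng.normal(0, 1, 30)     # no true treatment effect at all
    control = rng.normal(0, 1, 30)
    p = stats.ttest_ind(treated, control).pvalue
    if p < 0.05:
        false_positives.append((i, round(float(p), 3)))

print(f"'significant' results out of {n_outcomes} pure-noise comparisons: {false_positives}")
# With 20 independent looks, about one crosses p < 0.05 by chance alone.
# A single pre-registered primary outcome, or a multiplicity correction
# (e.g. Bonferroni: require p < 0.05 / 20), guards against exactly this.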

 

My penny for the thought jar.

 

Attilio

 

 


Hi All, & Steve & Lars,

 

Phil wrote:

> ... a comparison of " sham AP " (in the less-seriously affected

> group) could have similar clinical outcomes to " expert AP " in the

> more seriously affected group! The (MISTAKEN) conclusion could be

> that " real AP " was no better than " " sham AP " .

 

Steve replied:

> Hi Phil, This approach does not produce " mistaken " results IMO due

> to poor randomisation; it is far more incompetent than that. What

> it produces is results that are not testing TCM at all.

 

Steve, the RESULTS are not mistaken - results are results, nothing more, nothing less. It is the CONCLUSIONS (interpretation of the relevance of the results) that are mistaken if the trial is flawed from the outset!

 

But I agree with your main point:

> Without syndrome differentiation, such experimental designs are

> just WM diagnosis with cookbook herbs or acupuncture as the

> therapy. Unfortunately, the scientific community take such

> " experiments " at face value assuming they are actually testing TCM,

> this is the joke, the cause of many contradictory results and what

> we must change. Put simply, we must first test real TCM (ie.

> patterns as the basis of individualised treatments) before any

> experimental findings have value.

 

> ...IMO, designing an experiment to test TCM using a western disease

> diagnosis without TCM pattern differentiation is analogous to

> testing Tou Teng (head pain/ache) by a WM drug without separating

> subjects who have sinusitis, stroke, brain tumours, migraines etc.

> The results of such a research design would hold no respectability

> and never be funded as the design is incompetent; sadly the same

> can not be said for much of current TCM research. Best Wishes, Steve

 

I agree that the BEST test of TCM in complex clinical problems would be to include Differential Dx in the design and randomisation of the treatment groups, and to compare relevant TCM Differential Tx with state-of-the-art relevant WM Tx.

 

That said, many experiments have compared "Cookbook AP" with WM treatments and found significant benefits from Cookbook AP. For example:

 

PC06-Neiguan in nausea/emesis

GV26-Renzhong/shuigou in emergencies, shock, apnoea, etc

BL67-Zhiyin in foetal malposition

 

http://www.jcm.co.uk/news/News67.phtml has: ACUPUNCTURE & OSTEOARTHRITIS OF THE KNEE. 44 patients with advanced osteoarthritis of the knee who were awaiting total knee joint replacements were given AP either on the most affected knee only, or on both knees. Points needled were SP09-Yinlingquan, SP10-Xuehai, ST34-Liangqiu, ST36-Zusanli and LI04-Hegu. Results showed a significant reduction in symptoms in both groups with no significant difference between the two groups, suggesting that unilateral AP is as effective as bilateral AP for this condition. (Acupuncture in Medicine 2001;19(1):15-18).

 

Lars wrote that it could be useful to include sham AP as a control. I replied that it is unwise to do that, because many trials failed to show a difference in outcome of true v sham AP. However, several studies DID show a significant difference in favour of "real AP", as compared with "sham AP".

 

For example, http://www.jcm.co.uk/news/News67.phtml has:

Another study compared true AP with sham AP in the treatment of lumbar disc protrusion, both before and after surgery. The true AP group were needled at points BL25-Dachangshu, GB30-Huantiao, GB34-Yanglingquan, BL40-Weizhong, BL23-Shenshu, BL26-Guanyuanshu, GB31-Fengshi, BL36-Chengfu and BL62-Shenmai (listed in order of frequency of use) with Deqi obtained. The sham group were needled at nearby non-points with no Deqi produced. True AP resulted in significant pain relief compared to the sham AP. (Am J Chin Med 2000;28:25-33). A further study compared AP with physiotherapy in the treatment of lumbar and pelvic pain in pregnancy, with the AP group improving to a greater degree than the physiotherapy group in terms of pain and disability. (Acta Obstet Gynecol Scand 2000;79:331-5).

 

 

Best regards,

 

Email: <

 

WORK : Teagasc, c/o 1 Esker Lawns, Lucan, Dublin, Ireland

Mobile: 353-; [in the Republic: 0]

 

HOME : 1 Esker Lawns, Lucan, Dublin, Ireland

Tel : 353-; [in the Republic: 0]

WWW : http://homepage.eircom.net/~progers/searchap.htm

 

Chinese Proverb: "Man who says it can't be done, should not interrupt man doing it."


On 25/10/2004, at 2:09 AM, wrote:

 

> Steve, the RESULTS are not mistaken - results are results, nothing
> more, nothing less. It is the CONCLUSIONS (interpretation of the
> relevance of the results) that are mistaken if the trial is flawed
> from the outset!

 

Hi Phil,

 

Yes, you are correct - my bad. I did mean to say conclusions rather than results in the first sentence. However, I also think that in many cases the results do not represent what the trial aimed to measure, i.e. the results are not an accurate measure of the TCM therapy tested, due to terrible trial design.

 

Best Wishes,

 

Steve

