Confessions of a Quackbuster

This blog deals with healthcare consumer protection, and is therefore about quackery, healthfraud, chiropractic, and other forms of so-Called "Alternative" Medicine (sCAM).

Monday, March 28, 2005

Why Double-Blind Studies? - Steven Bratman, M.D.

Before you Believe Anything you Read
About Alternative Medicine, Read this:

I once took alternative medicine on faith. For decades, I practiced it on patients and myself and my family, and assumed that pretty much all of it worked. Then I learned about double-blind studies, and it was like a tornado blowing down a house of cards. I discovered that I, like most people who love alternative medicine, had made a huge (though understandable) mistake.

I had thought it was possible to know whether a treatment worked by trying it. I had also thought I could trust tradition, anecdote, and authority. I now see otherwise. The insights of the double-blind trial have cut through my wishful thinking and idealism, and turned me into a hard-nosed skeptic. Show me the double-blind studies, and I'll pay attention. Otherwise, so far as I'm concerned, it's little more than hot air.

Warning: This isn't an easy subject. But if you read this through, and think about it, you will never look at alternative medicine (or any form of medicine) the same way again.


Why Double-Blind Studies?


Although most people have heard of double-blind studies, few recognize their true significance. It's not that double-blind studies are hard to understand; rather, it's that their consequences are difficult to accept. Why? Because double-blind studies tell us that we can't trust our direct personal experience. This isn't easy to swallow, but it's nonetheless true.

The insights provided by double-blind studies have been particularly disturbing for alternative medicine. Most alternative medicine methods are grounded in tradition, common sense, anecdote, and testimonial. On the surface, these seem like perfectly good sources of information. However, double-blind studies have shown us otherwise. We now know that a host of "confounding factors" can easily create a kind of optical illusion, causing the appearance of efficacy where none in fact exists. The double-blind study is thus much more than a requirement for absolute proof of efficacy (as is commonly supposed) — it is a necessity for knowing almost anything about whether a treatment really works.

What is a Double-Blind Study?

In a randomized double-blind, placebo-controlled trial of a medical treatment, some of the participants are given the treatment, others are given fake treatment (placebo), and neither the researchers nor the participants know which is which until the study ends (they are thus both “blind”). The assignment of participants to treatment or placebo is done randomly, perhaps by flipping a coin (hence, “randomized”).
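
To make the mechanics concrete, here is a minimal sketch in Python of how random assignment works. The participant IDs and the simple coin-flip allocation are purely illustrative assumptions on my part, not a description of how any particular trial is run.

```python
import random

def randomize(participant_ids, seed=None):
    """Assign each participant to 'treatment' or 'placebo' by coin flip.

    In a real double-blind trial, the resulting table is held by a third
    party; neither participants nor the evaluating researchers see it
    until the study is unblinded at the end.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "placebo"]) for pid in participant_ids}

# Six hypothetical participants, known to the blinded staff only by code.
assignment = randomize(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
print(assignment)
```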

Why Double-Blind Studies?

The experience of the last forty years has shown that, for most types of treatments, only a randomized double-blind, placebo-controlled study can properly answer the question: “Does Treatment A benefit Condition B?” To explain why, I will work backwards, and illustrate the problems that occur if we attempt to answer this question any other way.

Common sense tells us that we can tell if a treatment works by simply trying it. Does it help me? Does it help my aunt? If so, it’s effective. If not, it doesn’t work.

Right? Unfortunately, no, that's not right. Medical conditions are an area of life in which direct, common sense observations aren't reliable at all. The insights brought to us by double-blind studies have shown medical researchers that they can't trust their own eyes. The reason why: a horde of confounding factors.

The Rogue’s Gallery: Eight Confounding Factors

Subtle influences called “confounding factors” can create the illusion that ineffective treatments are actually effective. It is because of these confounding factors that so many worthless medical treatments have endured for centuries. Think of the practice of "bleeding," slitting a vein to drain blood. Some of the most intelligent people in our history were sure that bleeding was a necessity, and the medical literature of past centuries is full of testimonials to the marvelous effect of this "medical necessity."

Today, though, it's clear that bleeding is not helpful, and was no doubt responsible for killing a great many people. Why did this ridiculous treatment method survive so long? Because, as I said, you can't trust your own eyes. People were sure they saw benefits through bleeding, but all they saw were confounding factors, such as:

· The Placebo Effect

· The Re-interpretation Effect

· Observer Bias

· Selection Bias

· Natural Course of the Illness

· Regression to the Mean

· The Study Effect

· Statistical Illusions

A full discussion of these confounders would take a book, but I'll give a brief introduction here.

The Placebo Effect

The placebo effect is the process by which the power of suggestion actually causes symptoms to improve. The original research that identified the placebo effect had some serious errors in it,29 but there is little doubt that some conditions are quite responsive to placebo treatment, such as menopausal hot flashes,5 symptoms of prostate enlargement,8 and many types of pain.16 While it's often reported that only 30% of people respond to placebo, this number has no foundation, and, in fact, the response rate seen in some of the conditions I just listed reaches as high as 70%.

The placebo effect almost always comes as a surprise to those who experience it. Both doctors and patients are fooled. For example, surgeons used to think that arthroscopic surgery for knee arthritis really worked, and hundreds of thousands of such surgeries were performed every year. Then a study came out showing that fake surgery produces just as satisfactory and long-lasting benefits as the real thing.7 Surgeons were shocked and chagrined to find that people given the fake surgery (unbeknownst to them) were so pleased with the results that they said they would happily recommend the treatment to others!

People generally get angry if you tell them their benefits might be due to placebo. However, examples abound to show just how possible this really is. I'll give a few here.

In a double-blind, placebo-controlled study of 30 people with carpal tunnel syndrome, use of a static magnet produced dramatic and enduring benefits, but so did use of fake magnets.34

In a study of 321 people with low back pain, chiropractic manipulation was quite helpful, but no more helpful than giving patients an educational booklet on low back pain.35

In a randomized, controlled trial of 67 people with hip pain, acupuncture produced significant benefits, but no greater benefits than placing needles in random locations.33

And in a randomized, controlled trial of 177 people with neck pain, fake laser acupuncture proved to be more effective than massage.32

Note that these studies do not actually disprove the tested therapies. The study sizes might have simply been too small to detect a modest benefit. What they do show, however, is that comparison to placebo treatment is essential: without such comparison, any random form of treatment, no matter how worthless in itself, is likely to appear to be effective.

Beyond the Placebo Effect

At least the placebo effect produces a real benefit. Many, many other illusions can create the impression of benefit although no benefit has occurred at all. In this section I discuss a few of these more insidious confounders.

Even when a fake treatment doesn’t actually improve symptoms, people may re-interpret their symptoms and experience them as less severe. For example, if I give you a drug that I say will make you cough less frequently, you will very likely experience yourself as coughing less frequently, even if your actual rate of coughing doesn’t change. In other words, you will re-interpret your symptoms to perceive them as less severe. (This effect seems to have been the primary reason why people use over-the-counter cough syrups -- surprising as it may seem, current evidence suggests that they are not effective, even though people have relied upon them for decades.10)

Observer bias is a similar phenomenon, but it affects doctors rather than patients. If doctors believe that they are giving a patient an effective drug, and they interview that patient, they will observe improvements, even if there are no improvements. For a classic example of this, consider the results of a study that tested the effectiveness of a new treatment regimen for multiple sclerosis by comparing it against placebo treatment.9 This was a double-blind study, and therefore the physicians whose job it was to evaluate the results were kept in the dark about which study participants were receiving real and which were receiving fake treatment (they were "blinded"). However, the experimenters introduced an interesting wrinkle: they allowed a few physicians to know for certain which patients were receiving treatment (they were "unblinded").

The results were a bit appalling. The unblinded physicians were much more likely to "observe" that the treatment worked compared to the impartial blinded physicians. In other words, the unblinded physicians hallucinated a benefit because they expected to see one! (I call this appalling because of what it says about so-called "professional objectivity." It implies that the considered opinion of a practicing physician may be highly unreliable when it is based on professional experience rather than double-blind studies.)

The term selection bias refers to the fact that if researchers are allowed to choose who gets a real treatment and who doesn't, rather than assigning them randomly, it is very likely that they will unconsciously pick people in such a way that the treatment will look better. For reasons that aren't clear, this effect is so huge that it can multiply the apparent benefit of a treatment by seven times, and turn a useless treatment into an apparently useful one.3,4 This is why double-blind studies must be "randomized."
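
To see how large this effect can be, here is a toy simulation, written in Python with made-up numbers, of a treatment that does nothing at all. When the "researcher" gives it preferentially to patients with a better outlook, the treated group recovers far more often; when a coin flip decides, the illusion disappears.

```python
import math
import random

random.seed(0)

# 10,000 hypothetical patients, each with an underlying prognosis score.
patients = [random.gauss(0, 1) for _ in range(10_000)]

def recovers(prognosis):
    # Recovery depends only on prognosis; the "treatment" has no effect at all.
    return random.random() < 1 / (1 + math.exp(-prognosis))

# Biased allocation: better-prognosis patients get the (useless) treatment.
treated_biased = [p for p in patients if p > 0]
control_biased = [p for p in patients if p <= 0]

# Randomized allocation: a coin flip decides, regardless of prognosis.
treated_rand, control_rand = [], []
for p in patients:
    (treated_rand if random.random() < 0.5 else control_rand).append(p)

def rate(group):
    return sum(recovers(p) for p in group) / len(group)

print(f"Biased:     treated {rate(treated_biased):.2f} vs control {rate(control_biased):.2f}")
print(f"Randomized: treated {rate(treated_rand):.2f} vs control {rate(control_rand):.2f}")
```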

Many diseases will get better on their own, as part of their natural course. Any treatment given at the beginning of such an illness will seem to work, and a doctor using such a treatment will experience what is called the illusion of agency, the sense of having helped even though the outcome would have been the same regardless. A good example is neck or back pain: most episodes of these conditions go away with time, regardless of treatment, and so any treatment at all will seem to be effective.

Regression to the mean is like natural course, but a bit trickier. It's based on the fact that even for conditions that do not go away on their own, the severity of the condition tends to fluctuate. Blood pressure is a good example. For many people, blood pressure levels wax and wane throughout the day, and from week to week. Suppose a person's average blood pressure is 140/90, but occasionally gets as high as 170/110. If such a person happens to get tested at a moment when his blood pressure is high, he may be seen as needing treatment. However, if he happens to be nearer his average blood pressure, or even lower, he won't be seen as needing treatment. In other words, doctors will tend to treat people when they are at their worst, not when they are at their best. By the laws of statistics, after a while, a person is more likely to be near his average blood pressure than his worst blood pressure, regardless of what treatment (if any) is used. This will appear to be an improvement, though in fact it's only natural fluctuation.
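
The following small simulation, again in Python with invented figures, shows the effect: if we "enroll" only people whose single blood-pressure reading happens to be high, their average reading at a later visit will be lower even though no treatment of any kind was given.

```python
import random

random.seed(1)

# Each hypothetical patient has a stable average systolic pressure;
# any single reading adds day-to-day noise. No treatment is ever given.
true_averages = [random.gauss(140, 10) for _ in range(100_000)]

def reading(avg):
    return avg + random.gauss(0, 15)

# "Enroll" only people whose first reading exceeds 160.
first_visits = [(avg, reading(avg)) for avg in true_averages]
enrolled = [(avg, first) for avg, first in first_visits if first > 160]

mean_first = sum(first for _, first in enrolled) / len(enrolled)
mean_later = sum(reading(avg) for avg, _ in enrolled) / len(enrolled)

print(f"Average reading at enrollment: {mean_first:.1f}")
print(f"Average reading at follow-up:  {mean_later:.1f}  (no treatment given)")
```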

The study effect (also called the Hawthorne effect) refers to the fact that people enrolled in a study tend to take better care of themselves, and may improve for this reason rather than because of anything specific to the treatment under study. This is a surprisingly powerful influence. If you enroll someone in a trial of a new drug for reducing cholesterol, and then you give them a placebo, their cholesterol levels are likely to fall significantly. Why? Presumably, they begin to take better care of themselves, by eating better, exercising more, etc. Again, double-blinding and a placebo group are necessary, because otherwise this confounding factor can cause the illusion of specific benefit where none exists.

Finally, illusions caused by the nature of statistics are very common. There are many kinds of these, and so I’ll give them a section of their own.

Statistical Illusions

Suppose you've invented a truly lousy treatment that fails almost all the time, but helps one in a hundred people. If you give such a nearly worthless treatment to 100,000 people, you’ll get a thousand testimonials, and the treatment will sound great.

Suppose you give someone a treatment said to enhance their mental function, and then you use twenty different methods of testing mental function. By the law of averages, improvements will be seen on some of these measurements, even if the treatment doesn’t actually work. If you’re a supplement manufacturer, you can use these results to support the sales of your product, even though in fact the results are merely due to the way statistics work, and not any mind-stimulating effect of your product. (In order to validly test the mind-enhancing power of a supplement, you have to restrict yourself to at most a couple of ways of testing benefit).
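
The arithmetic behind this is easy to check. If each of the twenty tests has a conventional 5% chance of coming up "positive" by luck alone, and the tests are independent (both assumptions on my part, since the essay doesn't specify them), the chance that at least one test looks positive despite a worthless treatment is about 64%:

```python
alpha = 0.05   # chance that any single test is "positive" by luck alone
tests = 20     # twenty different measurements of mental function

p_any_false_positive = 1 - (1 - alpha) ** tests
print(f"{p_any_false_positive:.2f}")   # roughly 0.64
```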

Suppose you give 1000 people a treatment to see if it prevents heart disease, and you don't find any benefit. This frustrates you, so you begin to study the data closely. Lo and behold, you discover that there is less lung cancer among people receiving the treatment. Have you made a new discovery? Possibly, but probably not. Again by the law of averages, if you allow yourself to dredge the data, you are guaranteed to find improvements in some condition or other, simply by statistical accident.

Perhaps the trickiest statistical illusion of all relates to what are called observational studies. This is such an important topic, that again I’ll break for a new heading.

Observational Studies

In observational studies, researchers don't actually give people any treatment. Instead, they simply observe a vast number of people. For example, in the Nurses' Health Study, almost 100,000 nurses have been extensively surveyed for many years, in an attempt to find connections between various lifestyle habits and illnesses. Researchers have found, for example, that nurses who consume more fruits and vegetables have less cancer. Such a finding is often taken to indicate that fruits and vegetables prevent cancer, but this would not be a correct inference. Here's why:

All we know from such a study is that high intake of fruits and vegetables is associated with less cancer, not that it causes less cancer. People who eat more fruits and vegetables may have other healthy habits as well, even ones we don’t know anything about, and they could be the cause of the benefit, not the fruits and vegetables.
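
A toy simulation makes the point. In the model below, written in Python with invented probabilities, a hidden "healthy lifestyle" factor drives both vegetable intake and lower disease risk, while the vegetables themselves do nothing; an observational study of this population would still see a strong association.

```python
import random

random.seed(2)

n = 100_000
records = []
for _ in range(n):
    healthy_lifestyle = random.random() < 0.5
    eats_vegetables = random.random() < (0.8 if healthy_lifestyle else 0.3)
    gets_disease = random.random() < (0.05 if healthy_lifestyle else 0.15)
    records.append((eats_vegetables, gets_disease))

def disease_rate(veg):
    group = [d for v, d in records if v == veg]
    return sum(group) / len(group)

print(f"Disease rate, high-vegetable group: {disease_rate(True):.3f}")
print(f"Disease rate, low-vegetable group:  {disease_rate(False):.3f}")
# The vegetable eaters look protected, yet in this model vegetables have
# zero causal effect; the hidden lifestyle factor explains the whole gap.
```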

This may sound like a purely academic issue, but it's not. Researchers looking at observational studies noticed that menopausal women who take hormone replacement therapy (HRT) have as much as 50% less heart disease than women who do not use HRT. This finding, along with a number of very logical arguments tending to show that estrogen should prevent heart disease, led doctors to recommend that all menopausal women take estrogen. Even as late as 2001, many doctors were still saying that taking estrogen was the single most important way an older woman could protect her heart.

However, this was a terrible mistake. Observational studies don't show cause and effect, and it was possible that women who happened to use HRT were healthier in other ways, and that it was those unknown other factors that led to lower heart disease rates, and not the HRT. Doctors pooh-poohed this objection (showing that even doctors often fail to understand the need for double-blind studies) and said that it was perfectly obvious HRT helped. However, when a double-blind, placebo-controlled study was done to verify what everyone "knew" was true, it turned out that HRT actually causes heart disease, rather than preventing it.6 It also increases the risk of breast cancer. In other words, placing trust in observational studies led to the deaths of many, many women. This is not, as I say, an academic issue.

In hindsight, it appears that women who happen to use HRT are healthier because they tend to be in a higher socioeconomic class, have better access to healthcare, and take better care of themselves. However, it is also possible that the spurious association between HRT use and reduced heart disease is due to some other factor that we have not even identified. The bottom line is that observational studies don't prove anything, and they can lead to conclusions that are exactly backwards.

This is a lesson that the news media seems unable to understand. It constantly reports the results of observational studies as proof of cause and effect. For example, it has been observed that people who consume a moderate amount of alcohol have less heart disease than those who consume either no alcohol or too much alcohol. But, contrary to what you may have heard, this doesn't mean that alcohol prevents heart disease! It is very likely that people who are moderate in their alcohol consumption are different in a variety of ways from people who are either teetotalers or abusers, and it is those differences, and not the alcohol per se, that cause the benefit. Maybe, for example, they are moderate in general, and that makes them healthier. The fact is, we don't know.

Similarly, it has been observed that people who consume a diet high in antioxidants have less cancer and heart disease. However, once more this does NOT mean that antioxidants prevent heart disease and cancer. In fact, when the antioxidants vitamin E and beta-carotene were studied in gigantic double-blind studies as possible cancer- or heart-disease-preventive treatments, vitamin E didn’t work (except, possibly, for prostate cancer) and beta-carotene actually made things worse!17-28 (One can pick holes in these studies, and proponents of antioxidants frequently do, but the fact is that we still lack direct double-blind evidence to indicate that antioxidants truly provide any of the benefits claimed for them. The only evidence that does exist is directly analogous to that which falsely "proved" that HRT prevents heart disease!)

Double-Blind Studies, and Nothing but Double-Blind Studies

All of the information I’ve just presented has accumulated over the last several decades. After coming to a great many false conclusions based on other forms of research, medical researchers have finally come to realize that without doing double-blind studies on a treatment it’s generally impossible to know whether it works. It doesn’t matter if the treatment has a long history of traditional use -- in medicine, tradition is very often dead wrong. It doesn’t matter if doctors or patients think it works -- doctors and patients are almost sure to observe benefits even if the treatment used is fake. And it doesn’t matter if observational trials show that people who do X have less of Y. Guesses made on the basis of this kind of bad evidence may be worse than useless: they may actually cause harm rather than benefit.

To make matters even more difficult, double-blind studies are not all created alike. There are a number of pitfalls in designing, performing and reporting such studies, and for this reason some double-blind studies deserve more credence than others. Double-blind studies from certain countries, such as China and Russia, must always be taken with a grain of salt, because historical evidence suggests a pattern of systematic bias in those countries.31 Studies that enroll few people, or last for only a short time, generally prove little. And unless more than one independent laboratory has found corroborating results, there's always the chance of bias or outright fraud. Thus, a treatment can only be considered proven effective when there have been several double-blind studies enrolling 200 or more people, performed by separate researchers, conducted according to the highest standards (as measured by a study rating scale called the "Jadad scale"), carried out at respected institutions, and published in peer-reviewed journals. Weaker evidence provides, at best, a hint of effectiveness, very likely to be disproved when better studies are done.

While a number of herbs and supplements have reached, or nearly reached, the level of solid proof, most alternative therapies have not.* Again, this isn't an idealistic, ivory-tower standard useful only for academia: it's a necessity. Treatments that have not been evaluated in double-blind studies are so much hot air. Except in the rare cases when a treatment is overwhelmingly and almost instantly effective (a so-called "high effect-size" treatment), there is simply no other way to know whether it works at all besides going through the trouble and expense of double-blind trials.

Evidence-Based Medicine

The double-blind study has caused a revolution even in conventional medicine. Many old beliefs have been tossed out when double-blind studies were finally done. It’s been discovered, for example, that (as noted earlier) over-the-counter cough syrups don’t work,10 that immediate antibiotic treatment for ear infections is probably not necessary or even helpful in most cases,11-15, 30 and that cartilage scraping for knee arthritis is no better than placebo7 (but, as noted above, placebo is very effective!).

The understanding that medicine must be grounded in double-blind studies is called the “evidence-based medicine” movement, and it is the same movement that informs AltMedConsult’s approach to alternative medicine. According to evidence-based medicine, if a treatment has not been properly studied, it should not be advocated as an effective treatment.

This is true whether the treatment is an Indonesian herb or a well-accepted medical technique. Certain aspects of conventional medicine have scarcely been studied at all, and for that reason are just as unproven as the latest herb from the rain forest. For example, traction, a common physical therapy treatment for back pain, has never been properly studied, and therefore does not belong in evidence-based medicine.2

However, conventional medicine, at least, has a certain reluctance to offer unproven treatments. Alternative medicine, up until recently, has taken the opposite approach: offering a profusion of treatments without the slightest shred of double-blind support. Most of these, I'm afraid to say, aren't going to stand up when proper studies are done, no matter how many testimonials (and plausible supporting arguments) they have now. Some will prove effective, though, and many already have.

For a comprehensive description of current double-blind studies regarding alternative medicine, see a product I helped develop: The TNP Natural Health Encyclopedia (The Natural Pharmacist). For a discussion of special issues relevant to herbal medicine, and why different samples of the same herb may have different efficacy, see Herbs and Supplements: Label Inaccuracy and Deeper Problems. Finally, for a list of the particular European standardized herbal extracts that have been tested in double-blind trials, see European Herbal Brands Tested in Double-Blind Trials, and their US Equivalents.

— Steven Bratman, M.D.

*To be fair, for some types of treatment, such as chiropractic, acupuncture, physical therapy and surgery, it isn't possible to design a true double-blind study: the practitioner will inevitably know whether a real or a fake treatment has been applied. In such cases, most researchers settle for a "single-blind" design, in which the study participants (and the people who evaluate them to see whether they've responded to therapy) are kept blind, but not the practitioners of the therapy. The problem with such single-blind studies, though, is that practitioners may convey enthusiasm when providing a real treatment and a lack of enthusiasm when applying a fake one. The former might act as a better placebo than the latter, and thereby produce results that really have nothing to do with the treatment itself. To get around this, Kerry Kamer, D.O., has suggested using actors trained to provide fake treatment with confidence and enthusiasm, but, so far as I know, this has not yet been tried.


References

1. Devereaux PJ, Yusuf S. The evolution of the randomized controlled trial and its role in evidence-based decision making. J Intern Med. 2003;254:105-13.

2. Harte AA, Baxter GD, Gracey JH. The efficacy of traction for back pain: a systematic review of randomized controlled trials. Arch Phys Med Rehabil. 2003;84:1542-53.

3. Kramer MS. Randomized trials and public health interventions: time to end the scientific double standard. Clin Perinatol. 2003;30:351-61.

4. Kunz R, Oxman AD. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ. 1998;317:1185-90.

5. MacLennan A, Lester S, Moore V. Oral estrogen replacement therapy versus placebo for hot flushes: a systematic review. Climacteric. 2001;4:58-74.

6. Manson JE, Hsia J, Johnson KC, et al. Women's Health Initiative Investigators. Estrogen plus progestin and the risk of coronary heart disease. N Engl J Med. 2003;349:523-34.

7. Moseley JB, O'Malley K, Petersen NJ, et al. A controlled trial of arthroscopic surgery for osteoarthritis of the knee. N Engl J Med. 2002;347:81-8.

8. Nickel JC. Placebo therapy of benign prostatic hyperplasia: a 25-month study. Canadian PROSPECT Study Group. Br J Urol. 1998;81:383-387.

9. Noseworthy JH, Ebers GC, Vandervoort MK, et al. The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial. Neurology. 2001;57:S31-5.

10. Schroeder K, Fahey T. Over-the-counter medications for acute cough in children and adults in ambulatory settings. Cochrane Database Syst Rev. 2001;CD001831.

11. Damoiseaux RA, van Balen FA, Hoes AW, et al. Primary care based randomized, double blind trial of amoxicillin versus placebo for acute otitis media in children aged under 2 years. BMJ. 2000;320:350–354.

12. Rosenfeld RM, Vertrees JE, Carr J, et al. Clinical efficacy of antimicrobial drugs for acute otitis media: metaanalysis of 5400 children from thirty-three randomized trials. J Pediatr. 1994;124:355–367.

13. Del Mar C, Glasziou P, Hayem M. Are antibiotics indicated as initial treatment for children with acute otitis media? A meta-analysis. BMJ. 1997;314:1526–1529.

14. Little P, Gould C, Williamson I, et al. Pragmatic randomised controlled trial of two prescribing strategies for childhood acute otitis media. BMJ. 2001;322:336–342.

15. Alho O-P, Laara E, Oja H. What is the natural history of recurrent acute otitis media in infancy? J Fam Pract. 1996;43:258–264.

16. Solomon S. A review of mechanisms of response to pain therapy: why voodoo works. Headache. 2002;42:656-62.

17. Clarke R, Armitage J. Antioxidant vitamins and risk of cardiovascular disease. Review of large-scale randomised trials. Cardiovasc Drugs Ther. 2002;16:411-5.

18. Moyad MA. Selenium and vitamin E supplements for prostate cancer: evidence or embellishment? Urology. 2002;59(Suppl 1):9-19.

19. Heinonen OP, Albanes D, Virtamo J, et al. Prostate cancer and supplementation with alpha-tocopherol and beta-carotene: incidence and mortality in a controlled trial. J Natl Cancer Inst. 1998;90:440–446.

20. Albanes D, Heinonen OP, Huttunen JK, et al. Effects of alpha-tocopherol and beta-carotene supplements on cancer incidence in the Alpha-Tocopherol Beta-Carotene Cancer Prevention Study. Am J Clin Nutr. 1995;62(suppl):1427S–1430S.

21. Omenn GS, Goodman GE, Thornquist MD, et al. Effects of a combination of beta carotene and vitamin A on lung cancer and cardiovascular disease. N Engl J Med. 1996;334:1150–1155.

22. Hargreaves DF, Potten CS, Harding C, et al. Two-week dietary soy supplementation has an estrogenic effect on normal premenopausal breast. J Clin Endocrinol Metab. 1999;84:4017-4024.

23. Frieling UM, Schaumberg DA, Kupper TS, et al. A randomized, 12-year primary-prevention trial of beta carotene supplementation for nonmelanoma skin cancer in the physicians' health study. Arch Dermatol. 2000;136:179–184.

24. Malila N, Taylor PR, Virtanen MJ, et al. Effects of alpha-tocopherol and beta-carotene supplementation on gastric cancer incidence in male smokers (ATBC Study, Finland). Cancer Causes Control. 2002;13:617-623.

25. Virtamo J, Edwards BK, Virtanen M, et al. Effects of supplemental alpha-tocopherol and beta-carotene on urinary tract cancer: incidence and mortality in a controlled trial (Finland). Cancer Causes Control. 2000;11:933-939.

26. Heart Protection Study Collaborative Group. MRC/BHF Heart Protection Study of antioxidant vitamin supplementation in 20,536 high-risk individuals: a randomised placebo-controlled trial. Lancet. 2002;360:23-33.

27. Albanes D, Heinonen OP, Huttunen JK, et al. Effects of alpha-tocopherol and beta-carotene supplements on cancer incidence in the Alpha-Tocopherol Beta-Carotene Cancer Prevention Study. Am J Clin Nutr. 1995;62(suppl):1427S–1430S.

28. Lee IM, Cook NR, Manson JE, et al. Beta-carotene supplementation and incidence of cancer and cardiovascular disease: the Women's Health Study. J Natl Cancer Inst. 1999;91:2102–2106.

29. Hrobjartsson A, Gotzsche PC. Is the placebo powerless? An analysis of clinical trials comparing placebo with no treatment. N Engl J Med. 2001;344:1594-1602.

30. Pappas DE, Owen Hendley J. Otitis media. A scholarly review of the evidence. Minerva Pediatr. 2003;55:407-14.

31. Vickers A, Goyal N, Harland R, et al. Do certain countries produce only positive results? A systematic review of controlled trials. Control Clin Trials. 1998;19:159-166.

32. Irnich D, Behrens N, Molzen H, et al. Randomised trial of acupuncture compared with conventional massage and sham laser acupuncture for treatment of chronic neck pain. BMJ. 2001;322:1–6.

33. Fink M, Karst M, Wippermann B, et al. Non-specific effects of traditional Chinese acupuncture in osteoarthritis of the hip: a randomized controlled trial. Complement Ther Med. 2001;9:82–88.

34. Carter R, Hall T, Aspy CB, et al. Effectiveness of magnet therapy for treatment of wrist pain attributed to carpal tunnel syndrome. J Fam Pract. 2002;51:38–40. However, identical benefits were seen among those given fake magnets.

35. Cherkin DC, Deyo RA, Battie M, et al. A comparison of physical therapy, chiropractic manipulation, and provision of an educational booklet for the treatment of patients with low back pain. N Engl J Med. 1998;339:1021–1029.

36. Williams JM, Getty D. Effect of levels of exercise on psychological mood states, physical fitness, and plasma beta-endorphin. Percept Mot Skills. 1986 Dec;63(3):1099-105.



©2003 Steven Bratman, M.D.


(Reproduced here by permission - PL)


Please visit Dr. Bratman's excellent website:

Alternative Medicine Consulting Services