Most people with a little experience in homeopathy have no doubt that these medicines work, though inevitably they will have some family members, friends, neighbors, and physicians who will be skeptical about it. One way to deal with these people's skepticism is to become familiar with research on the efficacy of homeopathic medicines.
There is actually considerably more laboratory and clinical research on homeopathic medicine than most people realize. That said, it must also be recognized that more research is certainly needed, not simply to answer the questions of skeptics but to help homeopaths optimize their use of these powerful natural medicines.
Some skeptics insist that research on homeopathy is mandatory since the exceptionally small doses used do not make sense and there is no known mechanism of action for these drugs. While it is true that homeopaths presently do not know precisely how the homeopathic microdoses work, there are some compelling theories about their mechanism of action (see the discussion in Chapter 1, "The Wisdom and Wonder of Small Doses").
More important, there is compelling evidence that they do work, as this chapter will show. And although homeopaths may not understand how their medicines work, keep in mind that leading contemporary pharmacologists readily acknowledge that there are many commonly prescribed drugs today, including aspirin and certain antibiotics, whose mechanism of action remains unknown; this gap in knowledge has not stopped physicians from prescribing them.
Many conventional physicians express doubt about the efficacy of homeopathy, asserting that they will "believe it when they see it." It may be more appropriate for them to acknowledge that they will "see it when they believe it." This is not meant as a criticism of conventional physicians as much as of conventional medical thinking. The biomedical paradigm has narrowed the view of, the thinking about, and the practice of medicine to the treatment of specific disease entities with supposedly symptom-specific drugs and procedures. An integral aspect of this approach to medicine is the assumption that the larger the dose of a drug, the stronger its effects will be. While this seems to make sense on the surface, knowledgeable physicians and pharmacologists know that it isn't true.
There is a recognized principle in pharmacology called the "biphasic response of drugs."1 Rather than a drug simply having increased effects as its dose becomes larger, research has consistently shown that exceedingly small doses of a substance will have the opposite effects of large doses.
The two phases of a drug's action (thus the name "biphasic") are dose-dependent. For instance, it is widely recognized that normal medical doses of atropine block the parasympathetic nerves, causing mucous membranes to dry up, while exceedingly small doses of atropine cause increased secretions to mucous membranes.
This pharmacological principle was concurrently discovered in the 1870s by two separate researchers, Hugo Schulz, a conventional scientist, and Rudolf Arndt, a psychiatrist and homeopath. Initially called the Arndt-Schulz law, this principle is still widely recognized, as witnessed by the fact that it is commonly listed in medical dictionaries under the definition of "law."
More specifically, these researchers discovered that weak stimuli accelerate physiological activity, medium stimuli inhibit physiological activity, and strong stimuli halt physiological activity. For example, very weak concentrations of iodine, bromine, mercuric chloride, and arsenious acid will stimulate yeast growth, medium doses of these substances will inhibit yeast growth, and large doses will kill the yeast.
In the 1920s, conventional scientists who tested and verified this biphasic response termed the phenomenon "hormesis," and dozens of studies were published in a wide variety of fields to confirm this biological principle.2
In the past two decades there has been a resurgence of interest in this pharmacological law, and now hundreds of studies in numerous areas of scientific investigation have verified it.3 Because these studies have been performed by conventional scientists who are typically unfamiliar with homeopathic medicine, they have not tested, or even considered testing, the ultra-high dilutions commonly used in homeopathy. However, their research has consistently shown such significant effects from small microdoses that even the researchers themselves express confusion and surprise.
Reference to this research on the Arndt-Schulz law and hormesis is important for validating homeopathic research because it demonstrates the evidence for the important biphasic responses and microdose effects that lie at the heart of homeopathy. This research is readily available to physicians and scientists yet is often ignored or not understood.
The amount of research on homeopathic medicines is growing, and it is becoming increasingly difficult to ignore these studies, because they are now appearing in many of the most respected medical and scientific journals in the world. This chapter is not meant to be exhaustive (that would require a book or two of its own). It will include many of the best studies, most of which have been published in conventional medical and scientific journals.
Some of the studies are discussed because of the impressive results they showed, and others are included for their implications for better understanding homeopathy and the healing process. The review of research is not simply to provide evidence of the efficacy of homeopathic medicine but also to enlighten readers on how to evaluate homeopathic research, whether positive or negative results are obtained.
To best understand the remaining part of this chapter, some definitions are helpful:
- Double-blind trials
refer to experiments in which neither the experimenter nor the subjects know whether a specific treatment was prescribed or a placebo (a fake medicine that looks and tastes like the real homeopathic medicine).
- Randomized trials
are those in which subjects of an experiment are randomly placed either in treatment groups or in placebo groups. The researchers attempt to place people with similar characteristics in equal numbers in treatment and placebo groups.
- Crossover studies
refer to experiments in which half of the subjects of a study are given a placebo during one phase of a study and then given the active treatment during the second phase, while the other half begin with the active treatment and then receive the placebo during the second phase. Crossover studies sometimes do not test a placebo and instead compare one type of treatment with another type of treatment.
Modern research is designed to evaluate the results of a therapy as compared to a placebo and/or another therapy. This type of study is valuable because many patients respond very well to placebos, and because this "treatment" is so safe and inexpensive, it is generally assumed that "real treatments" should have considerably better results than the placebo. One should note that placebo effects can be significant, and clinically, these effects can be very positive (some people think of them as a type of self-healing).
Double-blinding an experiment is important to research because experimenters tend to treat people who are getting the real treatment differently or better than those given a placebo, thus throwing off the results of the experiment. Research is randomized so that those people treated with the real medicine and those treated with the placebo are as similar as possible, making a comparison between real treatment and placebo treatment more accurate. Crossover studies allow researchers to compare the separate effects of a placebo and a treatment on all subjects in an experiment.
Statistics obviously are an important part of research. A treatment is considered better than a placebo if the results, according to statistical analysis, have no more than a 5% probability of occurring by chance (conventionally written as P&lt;.05). A study with a small number of patients (for example, 30 or fewer) must show a large difference between treatment and nontreatment groups to be statistically significant. A study with a large number of patients (for example, several hundred) needs only a small but consistent difference to achieve similar statistical significance. This information is provided so that readers will know that all the studies described in this chapter are statistically significant, except when otherwise noted.
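The trade-off between sample size and effect size described above can be sketched with a simple one-sided exact binomial test. This is a simplified stand-in for the statistical analyses actually used in clinical trials, and the patient counts below are hypothetical, chosen only to illustrate the principle:

```python
from math import comb

def binomial_tail(successes: int, n: int, p: float = 0.5) -> float:
    """One-sided probability of seeing at least `successes` improvements
    out of `n` patients by chance alone, if each patient independently
    improves with probability p (here 50%, i.e., no real treatment effect)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical small study: 20 of 30 patients improve (a 67% response rate).
p_small = binomial_tail(20, 30)

# Hypothetical large study: 165 of 300 patients improve (only 55%).
p_large = binomial_tail(165, 300)

# Both fall near the conventional 5% threshold: the small study needed a
# large difference (67% vs. 50%), while the large study reached a similar
# level of significance with a much smaller but consistent difference.
print(f"small study (20/30):   p = {p_small:.3f}")
print(f"large study (165/300): p = {p_large:.3f}")
```

The point is not the particular numbers but the pattern: as the number of patients grows, an ever smaller departure from chance becomes statistically detectable.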
People are often confused by research, not only because it can be overly technical but because some studies show that a therapy works while other studies show that it doesn't. To address this problem, researchers have developed a technique called the "meta-analysis," a systematic review of a body of research that evaluates the overall results of the experiments.
In 1991, three professors of medicine from the Netherlands, none of them homeopaths, performed a meta-analysis of 25 years of clinical studies using homeopathic medicines and published their results in the British Medical Journal.4 This meta-analysis covered 107 controlled trials, of which 81 showed that homeopathic medicines were effective, 24 showed they were ineffective, and 2 were inconclusive.
The professors concluded, "The amount of positive results came as a surprise to us." Specifically, they found that:
- 13 of 19 trials showed successful treatment of respiratory infections
- 6 of 7 trials showed positive results in treating other infections
- 5 of 7 trials showed improvement in diseases of the digestive system
- 5 of 5 showed successful treatment of hay fever
- 5 of 7 showed faster recovery after abdominal surgery
- 4 of 6 promoted healing in treating rheumatological disease
- 18 of 20 showed benefit in addressing pain or trauma
- 8 of 10 showed positive results in relieving mental or psychological problems
- 13 of 15 showed benefit from miscellaneous diagnoses
Despite the high percentage of studies that provided evidence of success with homeopathic medicine, most of these studies were flawed in some way or another. Still, the researchers found 22 high-caliber studies, 15 of which showed that homeopathic medicines were effective. Of further interest, they found that 11 of the best 15 studies showed efficacy of these natural medicines, suggesting that the better designed and performed the studies were, the higher the likelihood that the medicines were found to be effective. Although people unfamiliar with research may be surprised to learn that most of the studies on homeopathy were flawed in one significant way or another,5 research in conventional medicine during the past 25 years has had a similar percentage of flawed studies.
With this knowledge, the researchers of the meta-analysis on homeopathy concluded, "The evidence presented in this review would probably be sufficient for establishing homeopathy as a regular treatment for certain indications."
There are different types of homeopathic clinical research: some studies provide individualization of remedies, which is the hallmark of the homeopathic methodology; some give a commonly prescribed remedy to all people with a similar ailment; and some give a combination of homeopathic medicines to people with a similar condition. While one can perform good research using any of these methods, there are certain issues that researchers must be aware of and sensitive to in order to obtain the best objective results.
For instance, if a study does not individualize a homeopathic medicine to people suffering from a specific ailment, and the results show no difference between those given this remedy and those given a placebo, the study does not disprove homeopathy. It simply proves that this one remedy is not effective in treating every person suffering from that ailment, each of whom may have a unique pattern of symptoms that requires an individual prescription.
In describing specifics of the following studies using homeopathic medicines, differentiation has been made between studies that allowed for individualization of medicines and those that did not.