Vegetarianism can lead to heart disease and cancer
By Sarah Knapton 2:38 PM Wednesday Mar 30, 2016
Long-term vegetarianism can lead to genetic mutations that raise the risk of heart disease and cancer, scientists have found.
Populations who have had a primarily vegetarian diet for generations were found to be far more likely to carry DNA that makes them susceptible to inflammation.
Scientists in the US believe that the mutation occurred to make it easier for vegetarians to absorb essential fatty acids from plants.
But it has the knock-on effect of boosting the production of arachidonic acid, which is known to increase inflammatory disease and cancer. When coupled with a diet high in vegetable oils - such as sunflower oil - the mutated gene quickly turns fatty acids into dangerous arachidonic acid.
The finding may help explain previous research which found vegetarian populations are nearly 40 per cent more likely to suffer colorectal cancer than meat eaters, a finding that has puzzled doctors because eating red meat is known to raise the risk.
Researchers from Cornell University in the US compared hundreds of genomes from a primarily vegetarian population in Pune, India, with those of traditional meat-eating people in Kansas and found a significant genetic difference.
"Those whose ancestry derives from vegetarians are more likely to carry genetics that more rapidly metabolise plant fatty acids," said Tom Brenna, Professor of Human Nutrition at Cornell.
"In such individuals, vegetable oils will be converted to the more pro-inflammatory arachidonic acid, increasing the risk for chronic inflammation that is implicated in the development of heart disease, and exacerbates cancer. The mutation appeared in the human genome long ago, and has been passed down through the human family."
To make the problem worse, the mutation also hinders the production of beneficial omega-3 fatty acids, which are protective against heart disease. Although it may not have mattered when the mutation first developed, since the industrial revolution there has been a major shift in diets away from omega-3 fats - found in fish and nuts - towards less healthy omega-6 fats - found in vegetable oils.
"Changes in the dietary Omega 6 to Omega 3 balance may contribute to the increase in chronic disease seen in some developing countries," added Dr Brenna. "The message for vegetarians is simple. Use vegetable oils that are low in omega-6 linoleic acid such as olive oil." (I would add that avoidance of PUFAs in general is a good strategy, replacing them in the diet with saturated fat from quality sources).
The mutation is called rs66698963 and is found in the FADS2 gene, which controls the production of fatty acids in the body.
Previous studies have shown that vegetarianism and veganism can lead to problems with fertility by lowering sperm counts. (Nature doesn’t want to perpetuate weakness. Thus, anytime one’s vitality sinks below the level where one can positively contribute to the gene pool, the ability to procreate is compromised or lost entirely).
Separate research from Harvard University also found that a diet high in fruit and vegetables may impact fertility because men are consuming high quantities of pesticides.
Many vegetarians also struggle to get enough protein, iron, vitamin D, vitamin B12 and calcium which are essential for health. One study found that vegetarians had approximately five percent lower bone-mineral density (BMD) than non-vegetarians.
However other research suggests vegetarianism lowers the risk of diabetes, stroke and obesity.
The new research was published in the journal Molecular Biology and Evolution.
Positive selection on a regulatory insertion-deletion polymorphism in FADS2 influences apparent endogenous synthesis of arachidonic acid
Long chain polyunsaturated fatty acids (LCPUFA) are bioactive components of membrane phospholipids and serve as substrates for signaling molecules. LCPUFA can be obtained directly from animal foods or synthesized endogenously from 18 carbon precursors via the FADS2 coded enzyme. Vegans rely almost exclusively on endogenous synthesis to generate LCPUFA and we hypothesized that an adaptive genetic polymorphism would confer advantage. The rs66698963 polymorphism, a 22 bp insertion-deletion within FADS2, is associated with basal FADS1 expression, and coordinated induction of FADS1 and FADS2 in vitro. Here we determined rs66698963 genotype frequencies from 234 individuals of a primarily vegetarian Indian population and 311 individuals from the U.S. A much higher I/I genotype frequency was found in Indians (68%) than in the U.S. (18%). Analysis using 1000 Genomes Project data confirmed our observation, revealing a global I/I genotype of 70% in South Asians, 53% in Africans, 29% in East Asians, and 17% in Europeans. Tests based on population divergence, site frequency spectrum and long-range haplotype consistently point to positive selection encompassing rs66698963 in South Asian, African and some East Asian populations. Basal plasma phospholipid arachidonic acid status was 8% greater in I/I compared to D/D individuals. The biochemical pathway product-precursor difference, arachidonic acid minus linoleic acid, was 31% and 13% greater for I/I and I/D compared to D/D, respectively. Our study is consistent with previous in vitro data suggesting that the insertion allele enhances n-6 LCPUFA synthesis and may confer an adaptive advantage in South Asians because of the traditional plant-based diet practice.
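The abstract reports I/I genotype frequencies rather than allele frequencies. As a rough back-of-the-envelope illustration, and assuming Hardy-Weinberg proportions (my assumption, not something stated in the paper), the frequency of the insertion (I) allele can be estimated as the square root of the I/I homozygote frequency:

```python
import math

# I/I genotype frequencies for rs66698963 reported in the abstract
ii_genotype_freq = {
    "Pune, India cohort": 0.68,
    "U.S. cohort": 0.18,
    "South Asians (1000 Genomes)": 0.70,
    "Africans (1000 Genomes)": 0.53,
    "East Asians (1000 Genomes)": 0.29,
    "Europeans (1000 Genomes)": 0.17,
}

for population, f_ii in ii_genotype_freq.items():
    # Under Hardy-Weinberg equilibrium, freq(I/I) = p^2,
    # so the insertion-allele frequency p = sqrt(freq(I/I)).
    p_insertion = math.sqrt(f_ii)
    print(f"{population}: estimated I allele frequency ~ {p_insertion:.2f}")
```

On these numbers the insertion allele would be roughly twice as common in the Indian cohort as in the U.S. one, consistent with the selection signal the authors describe; the real paper reports genotype counts directly, so this is only a sanity-check sketch.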
That the FDA considers carrageenan safe—all I can say is WOW! Then it must be. After all, their track record is exemplary! Aspartame, artificial colors, Olestra…No harm there. And it’s gotta be tough to get their approval. I explore this subject here:
Yet I wonder—how many products have they “approved” only to issue a recall later?
Fen-Phen—24 yrs on the market despite causing heart valve disease and other pulmonary problems.
DES—37 yrs on the market when, in 1971, it was connected to a rare tumor that kept appearing in the daughters of women who had taken it. The FDA banned DES prescriptions only for women, because no such problems had been found in men.
Baycol—4 yrs on the market and reportedly responsible for more than 100,000 deaths.
Vioxx—5 yrs on the market and then found to be responsible for increased risk of heart attack and stroke.
Read more: The Ten Worst Drug Recalls In The History Of The FDA – 24/7 Wall St. http://247wallst.com/2010/12/10/the-ten-worst-drug-recalls-in-the-history-of-the-fda/
But back to Carrageenan—maybe some of the below reading from the FDA’s own website would interest you:
There it clearly states that “significant abnormalities appear to be induced in the anaphase figures of human embryonic lung cells in tissue culture at dosages that are slightly above average daily human intake. It is of further concern that parenterally administered carrageenan is reported to inhibit the activity of complement, exert cytotoxic effects on macrophages, suppress delayed hypersensitivity reactions in some tuberculin sensitive animals, activate factors causing procoagulant activity in human blood platelets, increase vascular permeability, and liberate kinin in vitro, all of which point to the possibility of the generation of toxic effects that could cause adverse responses following the oral consumption of carrageenan if, during pregnancy or in the presence of infectious challenge or metabolic disorder, appropriate amounts of carrageenan should be absorbed from the gastrointestinal tract.”
So that my readers may read Dr. Tobacman’s position, you can find a 2012 letter along with attached studies here: http://www.cornucopia.org/DrTobacmanComment_toNOSB.pdf
Or the following studies by others:
Pathol Biol (Paris). 1979 Dec;27(10):615-26. [Biological and pharmacological effects of carrageenan (author’s transl)]. [Article in French] Roch-Arveiller M, Giroud JP. Carrageenan is a sulfated polysaccharide that has been extensively used as an emulsifier and thickening agent in the food industry, for its ability to induce acute inflammation in pharmacology, and for its selectively toxic effect on macrophages in immunology. Carrageenan is a complex substance which displays various biological properties. The authors have shown the extent of these actions and reviewed the latest investigations on this subject.
J Allergy Clin Immunol. 1995 May;95(5 Pt 1):933-6. Anaphylaxis to carrageenan: a pseudo-latex allergy. Tarlo SM, Dolovich J, Listgarten C. Toronto Hospital, Western Division, Ontario, Canada. BACKGROUND: Anaphylactic reactions during a barium enema have been attributed to allergy to latex on the barium enema device. The observation of anaphylaxis during barium enema without latex exposure or latex allergy led to the performance of an allergy skin test to the barium enema solution. METHODS: Individual components of the barium enema solution were obtained for double-blind skin testing. A RAST to identify specific IgE antibodies to the skin test active agent was established. RESULTS: Carrageenan, a component of the barium enema solution, produced positive reactions to allergy skin test and RAST. Gastrointestinal symptoms for which the patient was being investigated by the barium enema subsequently disappeared with a diet free of carrageenan. CONCLUSIONS: Carrageenan is a previously unreported cause of anaphylaxis during barium enema. It is an allergen widely distributed in common foods and potentially could account for some symptoms related to milk products or baby formula.
Food Addit Contam. 1989 Oct-Dec;6(4):425-36. Intestinal uptake and immunological effects of carrageenan–current concepts. Nicklin S, Miller K. Carrageenans are a group of high molecular weight sulphated polygalactans which find extensive use in the food industry as thickening, gelling and protein-suspending agents. Although there is no evidence to suggest that the persorption of small amounts of carrageenans across the intestinal barrier poses an acute toxic hazard, they are known to be biologically active in a number of physiological systems and extended oral administration in laboratory animals has been shown to modify both in vivo and in vitro immune competence. Whereas this effect could be attributed to carrageenan having a selective toxic effect on antigen-processing macrophages, additional studies suggest that macrophages can also influence immune responses by the timed release of immunoregulatory mediators. Evidence in support of this comes from in vitro studies which demonstrate that carrageenan-treated macrophages can, depending on conditions and time of administration, release either stimulatory or inhibitory factors. The former is known to be the immunostimulatory agent interleukin 1 (IL-1). The inhibitory factor, which is produced at an early stage following exposure to non-toxic doses of carrageenans, has yet to be formally identified but it is believed to be a prostaglandin because of its specific mode of action and short biological half-life. At present it is impossible to relate these studies to the human situation. Although it is established that carrageenans can cross the intestinal barrier of experimental animals, there is no evidence to suggest that the limited uptake that may occur in man in any way interferes with normal immune competence. Nevertheless, increased exposure may occur in the neonate during weaning, and adults and children following allergic reactions and episodes of gastrointestinal disease. 
Further studies under such conditions now seem warranted in order to elucidate the possible immunological consequences which may be associated with enhanced uptake of carrageenans in vulnerable groups.
Environ Health Perspect. 2001 October; 109(10): 983–994. Review of harmful gastrointestinal effects of carrageenan in animal experiments. J K Tobacman In this article I review the association between exposure to carrageenan and the occurrence of colonic ulcerations and gastrointestinal neoplasms in animal models. Although the International Agency for Research on Cancer in 1982 identified sufficient evidence for the carcinogenicity of degraded carrageenan in animals to regard it as posing a carcinogenic risk to humans, carrageenan is still used widely as a thickener, stabilizer, and texturizer in a variety of processed foods prevalent in the Western diet. I reviewed experimental data pertaining to carrageenan’s effects with particular attention to the occurrence of ulcerations and neoplasms in association with exposure to carrageenan. In addition, I reviewed from established sources mechanisms for production of degraded carrageenan from undegraded or native carrageenan and data with regard to carrageenan intake. Review of these data demonstrated that exposure to undegraded as well as to degraded carrageenan was associated with the occurrence of intestinal ulcerations and neoplasms. This association may be attributed to contamination of undegraded carrageenan by components of low molecular weight, spontaneous metabolism of undegraded carrageenan by acid hydrolysis under conditions of normal digestion, or the interactions with intestinal bacteria. Although in 1972, the U.S. Food and Drug Administration considered restricting dietary carrageenan to an average molecular weight > 100,000, this resolution did not prevail, and no subsequent regulation has restricted use. Because of the acknowledged carcinogenic properties of degraded carrageenan in animal models and the cancer-promoting effects of undegraded carrageenan in experimental models, the widespread use of carrageenan in the Western diet should be reconsidered.
Food Chem Toxicol. 1990 Dec;28(12):807-11. The effects of carrageenan on drug-metabolizing enzyme system activities in the guinea-pig. Pintauro SJ, Gilbert SW. Carrageenans are seaweed extracts comprising high molecular weight sulphated polygalactosides. They are used in foods at concentrations of up to 2.5% as thickening and gelling agents. When degraded to lower molecular weight forms, they have been shown to induce ulcerative colitis and colon cancer in laboratory animals. Furthermore, undegraded carrageenan (CG) has been shown to promote azoxymethane and methylnitrosourea initiated carcinogenesis, but the promotion mechanism is unclear. To determine if this mechanism involves alterations of tissue drug-metabolizing enzyme system (DMES) activities, six groups of five guinea-pigs each were administered 0.2% kappa undegraded, 0.2% i undegraded, 1% kappa degraded or 1% i degraded CG, or control solutions in the drinking-water for 8 wk. Microsomal and cytosolic DMES activities of the liver, small intestine and colon were determined. The kappa undegraded CG group exhibited significant (P less than 0.05) increases in small intestine cytochrome P-450 levels and benzo[a]pyrene hydroxylase activities. These data suggest that undegraded CG may selectively induce DMES activities in the small intestine mucosa.
J Pharm Pharmacol. 1989 Jun;41(6):423-6. Rapid production of ulcerative disease of the colon in newly-weaned guinea-pigs by degraded carrageenan. Marcus AJ, Marcus SN, Marcus R, Watt J. In a dose-response study, degraded carrageenan (Eucheuma spinosum) was supplied in the drinking fluid at 1.2 and 3% concentrations over two weeks to young adult guinea-pigs. Ulceration of the large bowel was produced in 100% of animals, the severity and extent of damage probably being dose-related. In a time-course study, 3% degraded carrageenan solution supplied to newly-weaned guinea-pigs produced in 100% of animals ulceration in the caecum by four days and in the ascending colon by seven days. The onset of ulceration occurred as early as the second day. This model is convenient and economic for the screening of drugs of potential therapeutic value in human ulcerative colitis.
Gut. 1971 Feb;12(2):164-71. Carrageenan-induced ulceration of the large intestine in the guinea pig. Watt J, Marcus R. A 5% aqueous solution of degraded carrageenan derived from the red seaweed Eucheuma spinosum was fed to guinea pigs in their drinking water over a period of 20-45 days. Occult blood in the faeces and multiple ulcers in the caecum, colon and rectum occurred in 100% of animals by the 30th day. The clinical and pathological features bear a close resemblance to human ulcerative colitis. The method provides a simple experimental model for the study of various aspects of the pathology of ulcerative lesions in the large intestine as well as the effects of therapeutic agents.
Int J Exp Pathol. 1992 Aug;73(4):515-26. The pre-ulcerative phase of carrageenan-induced colonic ulceration in the guinea-pig. Marcus SN, Marcus AJ, Marcus R, Ewen SW, Watt J. The pre-ulcerative phase of carrageenan-induced colonic ulceration was investigated in guinea-pigs supplied 3% degraded carrageenan as an aqueous solution as drinking fluid for 2 or 3 days during which no ulceration of the bowel was observed with the naked eye or dissecting microscope. Mucosal microscopic changes, from caecum to rectum, were multifocal and included cellular infiltrates, dilatation of glands, crypt abscesses, micro-ulcers and sulphated polysaccharide in the lamina propria. Sulphated polysaccharide was also demonstrated histologically for the first time within the surface epithelium and showed ultrastructural features similar to carrageenan. The results indicate that colonic epithelium in the guinea-pig is capable of macromolecular absorption. Carrageenan, a highly active polyanionic electrolyte, within the surface epithelial cells is most likely a primary factor in the breakdown of mucosal integrity. Macromolecular absorption causing enteropathy of the large bowel is a new pathophysiological concept which may have implications in man, particularly in the pathology of large bowel disease.
Methods Achiev Exp Pathol. 1975;7:56-71. Experimental ulcerative disease of the colon. Watt J, Marcus R. The oral administration to guinea-pigs of an aqueous solution of carrageenan derived from the red seaweed, Eucheuma spinosum, provides a useful, readily available experimental model for the study of ulcerative disease of the colon. Two types of ulcerative disease can be produced within a 4-6 week period, viz., ulceration localised mainly to the caecum by using 1% undegraded carrageenan in the drinking fluid, and extensive ulceration involving caecum, colon, and rectum by using 5% degraded carrageenan. Ulceration is probably due to the local action of carrageenan in the bowel.
J Natl Cancer Inst. 1977 Apr;58(4):1171-2. Promotion of incidence of adenovirus type 12 transplantable tumors by carrageenan, a specific antimacrophage agent. Lotzová E, Richie ER. Carrageenan, a sulfated polygalactose with known macrophage-toxic properties, was used to ascertain the role of macrophages in resistance to adenovirus type 12 transplantable tumors. A single ip injection of 5 or 10 mg carrageenan led to increased incidence and more rapid growth of tumors in C3H mice. Carrageenan was most effective if given 1 day before tumor inoculation; the effectiveness decreased with increasing intervals before or after tumor cell injection. The macrophage stabilizer poly-2-vinylpyridine N-oxide injected sc (150 mg/kg) 1 day before carrageenan was given reduced the incidence of tumors. These data lend further support to the importance of macrophages in tumor immunity.
Biomedicine. 1975 Sep;22(5):387-92. Involvement of macrophages in genetic resistance to bone marrow grafts. Studies with two specific antimacrophage agents, carrageenan and silica. Lotzova E, Gallagher MT, Trentin JJ. Carrageenans and silica, agents toxic for macrophages, were used in this study to examine the role of macrophages in resistance of irradiated mice to inbred parental and rat bone marrow grafts. Administration of 2.5 mg of carrageenans or 2.5-5 mg of silica particles intravenously to prospective graft recipients resulted in a prompt abrogation of hybrid and xenogeneic resistance. The macrophage stabilizer poly-2-vinylpyridine N-oxide (PVNO) injected subcutaneously in the dose of 150 mg/kg, 24 hr before silica prevented or reduced the suppression of resistance. PVNO, however, did not antagonize the suppression of resistance by carrageenan, horse anti-mouse thymocyte serum and cyclophosphamide. These results suggest that a) a macrophage subpopulation is involved in marrow graft rejection by irradiated mice; b) carrageenan and silica apparently act on macrophages by different mechanisms; and c) horse anti-mouse thymocyte serum and cyclophosphamide may act on cells other than macrophages, or act on macrophages by a different mechanism than silica, in suppressing resistance to bone marrow transplantation.
Agents Actions. 1981 May;11(3):265-73. Carrageenan: a review of its effects on the immune system. Thomson AW, Fowler EF. Carrageenans (kappa, lambda and iota) are sulphated polysaccharides isolated from marine algae that can markedly suppress immune responses both in vivo and in vitro. Impairment of complement activity and humoral responses to T-dependent antigens, depression of cell-mediated immunity, prolongation of graft survival and potentiation of tumour growth by carrageenans have been reported. The mechanism responsible for carrageenan-induced immune suppression is believed to be its selective cytopathic effect on macrophages. This property of carrageenan has led to its adoption as a tool for analysing the role of these cells in the induction and expression of immune reactivity. Systemic administration of carrageenan may, however, induce disseminated intravascular coagulation and inflict damage on both the liver and kidney. This is an important consideration in the interpretation of the effects of carrageenan in vivo and precludes its use as a clinical immune suppressant.
Biomedicine. 1978 May-Jun;28(3):148-52. Carrageenan and the immune response. Thomson AW. Since the biological effects of carrageenan were reviewed in 1972 by Di Rosa it has become clear from a large number of reports that this algal polysaccharide markedly influences immune responses. Profound suppression of immunity evidenced by impaired antibody production, graft rejection, delayed hypersensitivity and anti-tumour immunity, has been observed in carrageenan-treated animals and the immunodepressive ability of carrageenan confirmed by in vitro studies. Efforts at analysis of carrageenan-induced immune suppression have focussed on the selective cytotoxic effect of this agent on mononuclear phagocytes. Faith in the ability of carrageenan to eliminate those cells has led to its use in examination of the role played by mononuclear phagocytes in various aspects of immune reactivity. This review documents and discusses the effects of carrageenan on immune responses and assesses the value of carrageenan as a useful tool in both current and future work aimed at broadening our knowledge of mechanisms underlying immune reactions.
Biomedicine. 1976 May;24(2):102-6. Evaluation of carrageenan as an immunosuppressive agent and mediator of intravascular coagulation. Thomson AW, Wilson AR, Cruickshank WJ, Horne CH. Carrageenan suppressed antibody responses to SRBC in mice and rats, measured in terms of splenic IgM PFC production. The effect, in mice, was dependent on dose and on the temporal relationship between treatment and antigen administration. Carrageenan was found to alter the time course of the PFC response and also to produce disseminated intravascular coagulation. Some correlation between the observed effects and the use of chemically distinct carrageenans was found. The possible mode of action of carrageenan is discussed in the light of these, and other findings.
J Pathol. 1980 Sep;132(1):63-79. Histological and ultrastructural changes following carrageenan injection in the mouse. Fowler EF, Simpson JG, Thomson AW. Mice were injected intravenously with either uncharacterised potassium carrageenan or purified iota carrageenan and tissue was examined by light and electron microscopy 1 hr and 24 hr later. The survival of animals injected with these carrageenans was monitored over a 6-month period. Histological examination of liver and kidney was carried out on animals which died during this time and in the surviving mice at 28 weeks. Histological and ultrastructural evidence of disseminated intravascular coagulation was observed within 24 hr of carrageenan injection. The changes were more severe in animals given potassium carrageenan. Electron-microscopic examination of liver revealed carrageenan within membrane-bound vacuoles in Kupffer cells. These cells were largely unaffected by phagocytosis of iota carrageenan but uptake of potassium carrageenan resulted in marked ultrastructural changes and occasional damage to adjacent hepatocytes. Mice given potassium carrageenan had the poorer long-term survival and many animals in this group showed chronic renal damage with features which suggested obstructive nephropathy. A smaller proportion of mice injected with iota carrageenan displayed similar changes. There was no evidence of long-term hepatotoxicity in either group although both types of carrageenan persisted within liver macrophages for at least 6 months after injection.
Am J Pathol. 1971 Aug;64(2):387-404. Spectrum and possible mechanism of carrageenan cytotoxicity. Catanzaro PJ, Schwartz HJ, Graham RC Jr. Carrageenan, a sulfated polygalactose which suppresses established delayed hypersensitivity in vivo, is shown to be cytotoxic to macrophages but not to lymphocytes in vitro. This cytotoxicity depends on the carrageenan concentration and degree of lysosomal differentiation but is independent of serum. Survival of macrophages in the presence of carrageenan can be enhanced temporarily by corticosteroids. Ultrastructural studies reveal that carrageenan is readily taken up by macrophages and stored in lysosomes, which subsequently swell and rupture, apparently resulting in cell death. The presence of corticosteroids temporarily retards lysosome swelling. It is suggested that carrageenan may exert its cytotoxic effect by causing osmotic rupture of lysosomes. The possible immunologic significance of these findings is discussed.
Cancer Lett. 1978 Mar;4(3):171-6. Induction by degraded carrageenan of colorectal tumors in rats. Ashi KW, Inagaki T, Fujimoto Y, Fukuda Y. Degraded carrageenan derived from the red seaweed Eucheuma spinosum was given to Sprague-Dawley rats through the diet, in drinking water or by stomach tube for up to 24 months. Carrageenan-induced squamous cell carcinomas, adenocarcinomas and adenomas in the colorectum were observed. Some rats had metastases to the regional lymph nodes of squamous cell carcinomas. These results show that degraded carrageenan is carcinogenic to the colorectum of the rat.
Toxicol Lett. 1981 Jun-Jul;8(4-5):207-12. Effect of degraded carrageenan on the intestine in germfree rats. Hirono I, Sumi Y, Kuhara K, Miyakawa M. The role of intestinal bacterial flora in display of the effect of degraded carrageenan was investigated by feeding 9 germfree and 12 conventional female Wistar rats on diet containing 10% carrageenan for 63 days. Animals were sacrificed 7, 20, 35, and 63 days after the start of feeding and histological changes induced by carrageenan were studied. The germfree rats showed mucosal lesions, such as macrophage aggregates, erosion, and squamous metaplasia of the large intestine, and these lesions were more extensive than those in the conventional rats. Therefore, it was concluded that bacterial flora are not essential for display of the biological effects of degraded carrageenan.
Food Chem Toxicol. 1987 Feb;25(2):113-8. Intestinal permeability changes in rodents: a possible mechanism for degraded carrageenan-induced colitis. Delahunty T, Recher L, Hollander D. Rats and guinea-pigs were treated with degraded carrageenan (50 g/litre in the drinking-water) and their intestinal permeability was studied at weekly intervals over the last 4 wk of the test period by determining the recovery of orally administered tracer doses of [3H]polyethylene glycol (PEG-900) or D-[3H]mannitol in 16-hr urine collections. A freely diffusible dye, Azure A, was administered simultaneously to compensate for non-intestinal factors that could modify renal excretion. Animals were killed after a total treatment period of 5 months for rats and 6 wk for guinea-pigs. After 3 wk of carrageenan treatment, excretion of PEG-900 (expressed as a ratio of the Azure A excretion) in guinea-pigs showed a statistically significant increase over that in the control group. At autopsy, the caeca showed numerous macroscopically visible erosions of the entire mucosal surface and histological examination showed ulcerations largely in the mucosa with abscesses in the crypts. Although no such histological changes were seen in the intestines of the treated rats, even after 5 months, a statistically significant increase in PEG-900 excretion was again found compared with the control group. This increase did not occur when deoxycholate was administered with the carrageenan solution. No effect of carrageenan treatment on mucosal permeability to D-[3H]mannitol was demonstrated in either species. The results suggest that degraded carrageenan-induced colitis could be a result of increased intestinal permeability, since ingestion of this polysaccharide by rats increased PEG-900 absorption without causing mucosal damage.
Cancer Detect Prev. 1981;4(1-4):129-34. Harmful effects of carrageenan fed to animals. Watt J, Marcus R. An increased number of reports have appeared in the literature describing the harmful effects of degraded and undegraded carrageenan supplied to several animal species in their diet or drinking fluid. The harmful effects include foetal toxicity, teratogenicity, birth defects, pulmonary lesions, hepatomegaly, prolonged storage in Kupffer cells, ulcerative disease of the large bowel with hyperplastic, metaplastic, and polypoidal mucosal changes, enhancement of neoplasia by carcinogens, and, more ominously, colorectal carcinoma. Degraded carrageenan as a drug or food additive has been restricted in the United States by the FDA, but undegraded carrageenan is still widely used throughout the world as a food additive. Its harmful effects in animals are almost certainly associated with its degradation during passage through the gastrointestinal tract. There is a need for extreme caution in the use of carrageenan or carrageenan-like products as food additives in our diet, and particularly in slimming recipes.
Food and Cosmetics Toxicology Volume 14, Issue 2, 1976, Pages 85-93. Carrageenan: The effect of molecular weight and polymer type on its uptake, excretion and degradation in animals. K.A. Pittman, L. Golberg, F. Coulston. A variety of ι-, κ- and λ-carrageenans was given to guinea-pigs, monkeys and rats, either in the drinking-water, by gavage or in the diet. Faecal and liver samples were examined qualitatively by gel electrophoresis, to determine any changes in the apparent molecular weight of carrageenans after administration. Quantitative measurements of carrageenans were carried out on samples of liver and urine. That there was little or no absorption of carrageenans of high molecular weight was evidenced by the absence of carrageenan from the livers of guinea-pigs or rats or from the urine of guinea-pigs or monkeys. By contrast, substantial amounts of carrageenan were found in the livers of guinea-pigs and rats given low-molecular-weight carrageenans (Mn ⩽ 40,000). Intermediate amounts of carrageenan were found in livers of animals given carrageenans ranging in Mn between 40,000 and 150,000. Urinary excretion of carrageenan was limited to low-molecular-weight material (Mn ⩽ 20,000). Qualitative and quantitative evidence indicated that there was an upper limit to the size of carrageenan molecules absorbed, but estimates of this upper limit ranged from 10,000 to 85,000 depending upon the analytical approach. Absorption of carrageenan from the drinking-water may differ qualitatively from absorption from the diet. Analysis of faecal samples by gel electrophoresis showed that degradation of high-molecular-weight carrageenan had occurred, either in the gut or in the faeces.
Cancer Letters Volume 14, Issue 3, December 1981, Pages 267-272 A study on carcinogenesis induced by degraded carrageenan arising from squamous metaplasia of the rat colorectum Yasuyuki Oohashi, Tomonori Ishioka, Kazuo Wakabayashi, Noriyuki Kuwabara We have undertaken studies on carcinogenesis arising from precancerous lesions, such as squamous metaplasia and ulcerative lesions of the rat colorectum, after termination of degraded carrageenan administration. Rates of tumor incidence in groups that were given a 10% diet of degraded carrageenan for 2, 6 and 9 months were 5 rats out of 39 (12.8%), 8 out of 42 (19.0%) and 17 out of 42 (40.5%), respectively. The colorectal squamous metaplasia persisted in all rats and progressed irreversibly. Degraded carrageenan was deposited not only in the colorectal propria mucosa, but also in the other reticuloendothelial organs. These results show that, even with short-term degraded carrageenan administration, degraded carrageenan is carcinogenic to the colorectum of the rat after a prolonged period.
Cancer Res. 1997 Jul 15;57(14):2823-6. Filament disassembly and loss of mammary myoepithelial cells after exposure to lambda-carrageenan. Tobacman JK. Carrageenans are naturally occurring sulfated polysaccharides, widely used in commercial food preparation to improve the texture of processed foods. Because of their ubiquity in the diet and their observed preneoplastic effects in intestinal cells, their impact on human mammary myoepithelial cells in tissue culture was studied. At concentrations as low as 0.00014%, lambda-carrageenan was associated with disassembly of filaments with reduced immunostaining for vimentin, alpha-smooth muscle-specific actin, and gelsolin; increased staining for cytokeratin 14; and cell death. The absence of mammary myoepithelial cells is associated with invasive mammary malignancy; hence, the destruction of these cells in tissue culture by a low concentration of a widely used food additive suggests a dietary mechanism for mammary carcinogenesis not considered previously.
Acta Pathol Microbiol Scand A. 1980 May;88(3):135-41. Stereomicroscopic and histologic changes in the colon of guinea pigs fed degraded carrageenan. Olsen PS, Poulsen SS. A colitis-like state was induced in guinea pigs fed degraded carrageenan orally. By means of a combined semimacroscopic and histologic technique the course of the disease was followed during 28 days. The changes were primarily seen and became most prominent in the caecum. The first lesions were observed following 24 hours of treatment as small rounded foci initially with degenerative changes and inflammation in the surface epithelium, later forming superficial focal ulcerations. Ulcerative changes gradually progressed during the experiment, forming linear and later large, geographical ulcerations. Topographically the ulcerative process was strongly related to the larger submucosal vessels. Nonulcerated parts of the mucosa were not changed until following 7-14 days of treatment. The mucosa became bulging, granulated and finally villus-like. Accumulation of macrophages was found under the surface epithelium after 7-17 days. Possible pathogenetic mechanisms are discussed, especially the development of the early lesions and the significance of the macrophages.
Teratology. 1981 Apr;23(2):273-8. Teratogenic effect of lambda-carrageenan on the chick embryo. Monis B, Rovasio RA. Carrageenans are widely used as food additives. Thus, it seemed of interest to test their possible teratogenic action. For this purpose, 530 chick eggs were injected in the yolk sac with 0.1 ml of a solution of 0.1% lambda-carrageenan in 0.9% sodium chloride. As controls, 286 eggs were injected with 0.1 ml of 0.9% sodium chloride. In addition, 284 eggs received no treatment. After incubation for 48–50 hours at 39 degrees C, embryos were fixed, cleared, and observed with a stereoscopic microscope. The frequency of abnormal embryos in the group receiving lambda-carrageenan was higher than in the controls (p less than 0.04). Partial duplication of the body, abnormal flexures of the trunk, anencephaly, a severely malformed brain, thickening of the neural tube wall, an irregular neural tube lumen with segmentary occlusion and a reduction in crown-rump length and number of somites were distinctly seen in the lambda-carrageenan-injected group. Moreover, the average number of anomalies per embryo in the lambda-carrageenan-injected group was nearly twice that in the controls. Present data indicate that lambda-carrageenan has teratogenic effects on early stages of the development of the chick embryo.
Let me know if I can provide more reading material for you. In the meantime, I would strongly urge you to stop listening to your government while ignoring what your body is trying to tell you. Paying attention here just may save your health.
The more health claims made about a food, the worse it is for you.
In the case of the dairy industry, the above statement is nothing less than spot on (shameless plug, I'll admit). Sick cows fed everything under the sun other than the grass they were designed to eat are not the ideal source for your dairy consumption. And if the producers opt to then pasteurize or homogenize or in some shape or form bastardize that dairy, then what was once an incredibly healthy source of nutrition soon becomes udderly unrecognizable as a food.
However, organic, grass-fed cows (and sheep, goats, etc) raised the way nature intended can produce quality dairy products which are extremely beneficial for health. Sufficient quantities of bio-available calcium (i.e. animal sources for those of us who aren't ruminant herbivores) keep parathyroid hormone low while also increasing the likelihood of tryptophan converting to niacin rather than serotonin (and that should make the health conscious happy). It also helps maintain a favorable calcium to phosphorous ratio in the diet, without which blood pressure, inflammation, and even tumor growth are often increased. Calcium also downregulates the production of adrenalin. Oh--and it's involved in muscle contraction and is an essential component in the electrical conduction system of the heart, too.
But does dairy cause mucus production?
Quite simply--yes. In those who are sensitive to it, dairy can create an immune response in the body...just like any and every food someone eats that isn't conducive to their specific digestive capabilities.
As one study from the journal Medical Hypotheses states:
Excessive milk consumption has a long association with increased respiratory tract mucus production and asthma. Such an association cannot be explained using a conventional allergic paradigm and there is limited medical evidence showing causality. In the human colon, β-casomorphin-7 (β-CM-7), an exorphin derived from the breakdown of A1 milk, stimulates mucus production from gut MUC5AC glands. In the presence of inflammation similar mucus overproduction from respiratory tract MUC5AC glands characterises many respiratory tract diseases. β-CM-7 from the blood stream could stimulate the production and secretion of mucus from these respiratory glands. Such a hypothesis could be tested in vitro using quantitative RT-PCR to show that the addition of β-CM-7 into an incubation medium of respiratory goblet cells elicits an increase in MUC5AC mRNA and by identifying β-CM-7 in the blood of asthmatic patients. This association may not necessarily be simply cause and effect as the person has to be consuming A1 milk, β-CM-7 must pass into the systemic circulation and the tissues have to be actively inflamed. These prerequisites could explain why only a subgroup of the population, who have increased respiratory tract mucus production, find that many of their symptoms, including asthma, improve on a dairy elimination diet.
So I guess it depends on who you are and what you've done to your digestive capacity via nutrition and lifestyle practices. Let me 'splain:
Oftentimes an inability to digest/assimilate dairy begins secondary to another offending agent. Gluten is a prime suspect. Alcohol and medicinal drugs are also common culprits. Anything which is a stress to the biological system has the potential to inflame the gut wall (for more on this subject, see the Seesaw of Sickness in my book, Spot On: Nutrition found here: http://triumphtraining.com/collections/books/products/spot-on-nutrition). This microtrauma to the intestine causes tiny holes to form, allowing food particles to pass into the bloodstream undigested. The body then creates antibodies to that particular food, potentially causing you to have an immune response to whatever you’re eating. Additionally, the constant inflammation causes what's termed villous atrophy. Lining the wall of your intestines, you have little finger-like projections called villi. These, in turn, have tiny little microvilli covering them--you have about 200 million per square millimeter. The job of the microvilli is to help you assimilate nutrition from your food by producing various enzymes. One of these enzymes, in the case of our current discussion, is lactase--the enzyme you need to do anything with the lactose in dairy. No microvilli equals limited lactase (and other digestive enzymes) which limits your ability to consume dairy without suffering ill effects.
Of course, raw dairy typically comes with the exact enzymes one needs to safely and effectively consume it. But pasteurization destroys all those enzymes along with most if not all of the heat-sensitive nutrients. This is one reason why folks who are "lactose intolerant" often do fine when eating raw dairy. These people also typically fare better with full fat dairy instead of skim or low fat versions which will have more lactose per serving than the unadulterated milk products. Sheep and goat dairy are often better tolerated than dairy from cows (ask if you wanna know why); and still others are sensitive to the form--doing fine with hard cheeses or yogurt yet having trouble with milk. Lastly, quantity and frequency of exposure are also factors which need to be considered in regards to how a person reacts to dairy. While the healthy digestive system should be able to handle just about anything that's thrown at it or in it (up to a point), the sad truth is most of us have done such damage to ourselves that we need to do some serious rehab of the gut wall and our health in general before we're able to eat whatever we want.
And, personally, I think that level of consciousness--even if forced upon us due to digestive complaints or otherwise--is actually a blessing. It's a painful signal that we're moving in the wrong direction, and it's time to redirect.
For those who need their academic mind satisfied, I've included a couple of studies below. N = 1, however, so I suggest you experiment on you to find what works best for your specific biochemistry.
http://www.ncbi.nlm.nih.gov/pubmed/2154152
In the first of three studies investigating the widely held belief that "milk produces mucus," 60 volunteers were challenged with rhinovirus-2, and daily respiratory symptoms and milk and dairy product intake records were kept over a 10-day period. Nasal secretion weights were obtained by weighing tissues collected and sealed immediately after use. Information was obtained on 51 subjects, yielding 510 person-days of observation. Subjects consumed zero to 11 glasses of milk per day (mean, 2.7; SE, 0.08), and secretion weights ranged from zero to 30.4 g/day (mean, 1.1; SE, 0.1). In response to an initial questionnaire, 27.5% reported the practice of reducing intake of milk or dairy products with a cold or named milk or dairy products as bad for colds. Of the latter group, 80% stated the reason as "producing more mucus/phlegm." Milk and dairy product intake was not associated with an increase in upper or lower respiratory tract symptoms of congestion or nasal secretion weight. A trend was observed for cough, when present, to be loose with increasing milk and dairy product intake; however, this effect was not statistically significant at the 5% level. Those who believe "milk makes mucus" or reduce milk intake with colds reported significantly more cough and congestion symptoms, but they did not produce higher levels of nasal secretions. We conclude that no statistically significant overall association can be detected between milk and dairy product intake and symptoms of mucus production in healthy adults, either asymptomatic or symptomatic, with rhinovirus infection.
http://www.ncbi.nlm.nih.gov/pubmed/16373954
There is a belief among some members of the public that the consumption of milk and dairy products increases the production of mucus in the respiratory system. Therefore, some who believe in this effect renounce drinking milk. According to Australian studies, subjects perceived some parameters of mucus production to change after consumption of milk and soy-based beverages, but these effects were not specific to cows' milk because the soy-based milk drink with similar sensory characteristics produced the same changes. In individuals inoculated with the common cold virus, milk intake was not associated with increased nasal secretions, symptoms of cough, nose symptoms or congestion. Nevertheless, individuals who believe in the mucus and milk theory report more respiratory symptoms after drinking milk. In some types of alternative medicine, people with bronchial asthma, a chronic inflammatory disease of the lower respiratory tract, are advised not to eat so-called mucus-forming foods, especially all kinds of dairy products. According to different investigations the consumption of milk does not seem to exacerbate the symptoms of asthma and a relationship between milk consumption and the occurrence of asthma cannot be established. However, there are a few cases documented in which people with a cow's milk allergy presented with asthma-like symptoms.
Original Source found here: http://www.smh.com.au/comment/not-eating-read-meat-wont-save-the-planet-20151214-glmxly
Not eating red meat won’t save the planet
Asa Wahlquist Published: December 14, 2015 - 9:00PM
It sounds so easy: stop eating red meat to lower greenhouse gas emissions. But nature is far more complicated than that.
There are three critical questions you need to ask before cutting beef and lamb out of your diet for environmental reasons: what will happen to the grasslands that cattle and sheep graze; how will alternate protein be produced; and what will the greenhouse consequences of that production be?
About 60 per cent of the world's agricultural land is grasslands, land that is too poor and too dry to be cropped. In Australia, about 70 per cent of the country is grassland. The only way food can be produced from grasslands is by grazing ruminants. Most mammals cannot digest grass, but ruminants have special stomachs filled with grass-digesting bacteria. The problem is those bacteria produce methane, which the ruminant burps out.
Methane is a potent greenhouse gas with a rating 25 times that of carbon dioxide over 100 years, although it has a lifetime of 9 to 12 years in the atmosphere.
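The "rating 25 times that of carbon dioxide" figure is a 100-year global warming potential (GWP100), which converts a mass of methane into a carbon-dioxide-equivalent mass by simple multiplication. A minimal sketch of that arithmetic follows; the per-animal emission figure used in the example is a hypothetical number chosen only to illustrate the calculation, not a value from the article.

```python
# GWP100 conversion: CO2-equivalent mass = CH4 mass x GWP100.
# The factor of 25 is the figure quoted in the article.
GWP100_METHANE = 25  # kg CO2e per kg CH4, over a 100-year horizon

def co2_equivalent(methane_kg: float, gwp: float = GWP100_METHANE) -> float:
    """Convert a mass of methane (kg) to its CO2-equivalent mass (kg)."""
    return methane_kg * gwp

# Illustrative only: a hypothetical animal burping 100 kg of CH4 per year
annual_ch4_kg = 100.0
print(co2_equivalent(annual_ch4_kg))  # 2500.0 kg CO2e per year
```

Note that a GWP is horizon-dependent: because methane decays in the atmosphere within roughly a decade, its multiplier is much larger over 20 years than over the 100-year window used here.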
The experience worldwide is that if cattle are removed from grasslands, the original ruminants re-establish themselves, or ferals invade.
In Australia the main ferals are goats, as well as camels in drier regions. Contrary to popular belief, kangaroos do produce methane, although the actual quantities, and their alternate pathways for digesting cellulose from grass, are the subject of ongoing research. Even termites produce methane: they are responsible for about three per cent of Australia's greenhouse gas emissions.
What if everyone did go vegetarian and the grasslands were not grazed at all? In Australia, they would most likely burn. Bushfire accounts for about 3 per cent of Australia's net greenhouse gas emissions.
The argument overseas focuses largely on the huge quantities of grain fed to livestock that could otherwise be consumed by humans, a practice that is indefensible on environmental grounds. In Australia, most cattle and all sheep are grassfed. Dairy cattle are usually given supplementary feed, which is mostly forage or hay with some grain.
If you decide not to eat meat, where are you going to get your protein, and what are the greenhouse gas consequences? Soy beans, chickpeas, lentils - all the high-protein legumes - are crops that are grown on cleared land, land that is ploughed, fertilised, planted, irrigated and harvested by greenhouse-gas producing machines.
Australia is at its limit of land that can be cleared for cropping, and is in the process of reducing irrigation in its food bowl, the Murray-Darling basin. And talking of irrigation, under Australian conditions soybeans need almost as much water as cotton. Australia produces roughly 15 per cent of the soybeans that it consumes, although much of that is used in stock feed.
Pigs and chickens are monogastric and as a result produce a small fraction, per kilo, of the methane produced by ruminants. Unlike cattle they cannot live on grass. In traditional farm situations they were fed on crop residues and waste, but now significant quantities of grains are grown to feed them.
Meat protein substitutes, ranging from tofu to synthetic meat, are all highly processed and that means more greenhouse gas production.
Estimating methane production is a tricky business. There are a number of figures for the percentage of greenhouse gas emissions agriculture is responsible for, and they are getting better. On Monday, the CSIRO announced methane emissions from Australian cattle were actually 24 per cent lower than previously thought.
Critics of meat consumption like to compare ruminant-produced methane with transport emissions. But fossil fuels are releasing carbon that was sequestered hundreds of millions of years ago that will never be replaced. The methane burped by a cow comes from carbon sequestered in the grass during the last growing season. If that grass keeps growing, or produces seedlings, carbon will be sequestered again next season.
There is no comparison: burning fossil fuels is a one-way street. The methane produced by ruminants is a natural part of an ancient life cycle.
Asa Wahlquist is a rural journalist.
Original Source found here: http://mobile.nytimes.com/2013/12/17/health/a-lifelong-fight-against-trans-fat.html?_r=0
By MELANIE WARNER
December 16, 2013
In 1957, a fledgling nutrition scientist at the University of Illinois persuaded a hospital to give him samples of arteries from patients who had died of heart attacks.
When he analyzed them, he made a startling discovery. Not surprisingly, the diseased arteries were filled with fat — but it was a specific kind of fat. The artificial fatty acids called trans fats, which come from the hydrogen-treated oils used in processed foods like margarine, had crowded out other types of fatty acids.
The scientist, Fred Kummerow, followed up with a study that found troubling amounts of artery-clogging plaque in pigs given a diet heavy in artificial fats. He became a pioneer of trans-fat research, one of the first scientists to assert a link between heart disease and processed foods.
Now, Dr. Kummerow (KOO-mer-ow) is still active at age 99, living a few blocks from the university, where he runs a small laboratory. And he continues to come to contrarian conclusions about fat and heart disease.
In the past two years, he has published four papers in peer-reviewed scientific journals, two of them devoted to another major culprit he has singled out as responsible for atherosclerosis, or the hardening of the arteries: an excess of polyunsaturated vegetable oils like soybean, corn and sunflower — exactly the types of fats Americans have been urged to consume for the past several decades.
The problem, he says, is not LDL, the “bad cholesterol” widely considered to be the major cause of heart disease. What matters is whether the cholesterol and fat residing in those LDL particles have been oxidized. (Technically, LDL is not cholesterol, but particles containing cholesterol, along with fatty acids and protein.)
“Cholesterol has nothing to do with heart disease, except if it’s oxidized,” Dr. Kummerow said. Oxidation is a chemical process that happens widely in the body, contributing to aging and the development of degenerative and chronic diseases. Dr. Kummerow contends that the high temperatures used in commercial frying cause inherently unstable polyunsaturated oils to oxidize, and that these oxidized fatty acids become a destructive part of LDL particles. Even when not oxidized by frying, soybean and corn oils can oxidize inside the body.
If true, the hypothesis might explain why studies have found that half of all heart disease patients have normal or low levels of LDL.
“You can have fine levels of LDL and still be in trouble if a lot of that LDL is oxidized,” Dr. Kummerow said.
This leads him to a controversial conclusion: that the saturated fat in butter, cheese and meats does not contribute to the clogging of arteries — and in fact is beneficial in moderate amounts in the context of a healthy diet (lots of fruits, vegetables, whole grains and other fresh, unprocessed foods).
His own diet attests to that. Along with fruits, vegetables and whole grains, he eats red meat several times a week and drinks whole milk daily.
He cannot remember the last time he ate anything deep-fried. He has never used margarine, and instead scrambles eggs in butter every morning. He calls eggs one of nature’s most perfect foods, something he has been preaching since the 1970s, when the consumption of cholesterol-laden eggs was thought to be a one-way ticket to heart disease.
“Eggs have all of the nine amino acids you need to build cells, plus important vitamins and minerals,” he said. “It’s crazy to just eat egg whites. Not a good practice at all.”
Dr. Robert H. Eckel, an endocrinologist and former president of the American Heart Association, agreed that oxidized LDL was far worse than nonoxidized LDL in terms of creating plaque.
But he disputed Dr. Kummerow’s contention that saturated fats are benign and that polyunsaturated vegetable oils promote heart disease. “There are studies that clearly show a substitution of saturated fats with polyunsaturated fats leads to a reduction in cardiovascular disease,” said Dr. Eckel, a professor at the University of Colorado.
“Oxidation is something that consumers can detect,” he said. “Therefore, it is in everyone’s best interest to control it.”
The long arc of Fred Kummerow’s life and career illustrates the frustratingly slow pace of science and the ways in which scientific conformity can hinder the search for answers. Born in Germany just after World War I broke out, he moved to Milwaukee with his family when he was 9. His father, who worked at a cement block factory, did not have the money to send him to college, so Dr. Kummerow worked full time at a drug distribution company while attending the University of Wisconsin in the evenings. After he earned a Ph.D. in biochemistry, his first job was at Clemson University in South Carolina, where he helped prevent thousands of deaths in the South from pellagra, a disease resulting from a deficiency of vitamin B3.
His early research on trans fats was “resoundingly criticized and dismissed,” said Dr. Walter Willett, the chairman of the nutrition department at the Harvard School of Public Health, who credited Dr. Kummerow with prompting his desire to include trans fats in the Nurses’ Health Study. A 1993 finding from that study, which showed a direct link between the consumption of foods containing trans fats and heart disease in women, was a turning point in scientific and medical thinking about trans fats.
“He had great difficulty getting funding because the heart disease prevention world strongly resisted the idea that trans fats were the problem,” Dr. Willett continued. “In their view, saturated fats were the big culprit in heart disease. Anything else was a distraction from that.”
At an age when life itself is an accomplishment, Dr. Kummerow said he had no intention of stepping away from the work that has consumed him for six decades. He continues to work from home and talks daily to the two scientists who work in his lab, which receives funding from the Weston A. Price Foundation.
His wife of 70 years, Amy, died last year at age 94 from Parkinson’s disease; he has three children, three grandchildren and a great-grandchild.
He takes no medications, and his mind shows no sign of aging: He has an encyclopedic recall for names, dates and, more impressive, complex scientific concepts. After his muscles became inflamed from a blood pressure drug that he has since stopped taking, he started using a wheelchair combined with a walker.
His most significant health problem, appropriately enough, was an artery blockage at age 89 — probably a result of the inevitable effects of aging, not diet.
Bypass surgery took care of the blockage, and the fact that he now has an artery from his arm running into his heart has made him even more determined to keep working. Heart disease remains the leading cause of death for Americans, and he would like to stick around to continue funding research that will help change that.
“What I really want is to see trans fats gone finally,” he said, “and for people to eat better and have a more accurate understanding of what really causes heart disease.”
This pilot study aimed to determine if an elemental diet could be used to treat patients with active rheumatoid arthritis and to compare its effect to that of oral prednisolone.
Thirty patients with active rheumatoid arthritis were randomly allocated to 2 weeks of treatment with an elemental diet (n = 21) or oral prednisolone 15 mg/day (n = 9). Assessments of duration of early morning stiffness (EMS), pain on a 10 cm visual analog scale (VAS), the Ritchie articular index (RAI), swollen joint score, the Stanford Health Assessment Questionnaire, global patient and physician assessment, body weight, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) and haemoglobin, were made at 0, 2, 4 and 6 weeks.
All clinical parameters improved in both groups (p<0.05) except the swollen joint score in the elemental diet group. An improvement of greater than 20% in EMS, VAS and RAI occurred in 72% of the elemental diet group and 78% of the prednisolone group. ESR, CRP and haemoglobin improved in the steroid group only (p<0.05).
An elemental diet for 2 weeks resulted in a clinical improvement in patients with active rheumatoid arthritis, and was as effective as a course of oral prednisolone 15 mg daily in improving subjective clinical parameters. This study supports the concept that rheumatoid arthritis may be a reaction to a food antigen(s) and that the disease process starts within the intestine.
Original Source: http://www.ncbi.nlm.nih.gov/pubmed/17308218
From the CDC website (http://www.cdc.gov/flu/protect/vaccine/thimerosal.htm):
Do the 2014-2015 seasonal flu vaccines contain thimerosal?
The Food and Drug Administration (FDA) has approved several formulations of the seasonal flu vaccine, including multi-dose vials and single-dose units. (See Table of Approved Influenza Vaccines for the U.S. 2014–2015 Season.) Since seasonal influenza vaccine is produced in large quantities for annual vaccination campaigns, some of the vaccine is produced in multi-dose vials, and contains thimerosal to safeguard against possible contamination of the vial once it is opened.
I am pro vaccine. I had all of my six children vaccinated. I believe that vaccines save millions of lives. So let me explain why I oppose the current wave of state legislation to remove parents’ right to choose whether to give their children particular vaccines.
Vaccines are big business. Pharma is a trillion dollar industry (1) with vaccines accounting for $25 billion in annual sales. (2) CDC’s decision to add a vaccine to the schedule can guarantee its manufacturer millions of customers and billions in revenue (3) with minimal advertising or marketing costs and complete immunity from lawsuits. High stakes and the seamless marriage between Big Pharma and government agencies have spawned an opaque and crooked regulatory system. Merck, one of America’s leading vaccine outfits, is currently under criminal investigation for fraudulently deceiving FDA regulators about the effectiveness of its MMR vaccine. Two whistleblowers say Merck ginned up sham studies to maintain Merck’s MMR monopoly. (4)
Big money has fueled the exponential expansion of CDC’s vaccine schedule since 1988, when Congress’ grant of immunity from lawsuits (5) suddenly transformed vaccines into paydirt. CDC recommended five pediatric vaccines when I was a boy in 1954. Today’s children cannot attend school without at least 56 doses of 14 vaccines by the time they’re 18. (6)
An insatiable pharmaceutical industry has 271 new vaccines under development in CDC’s bureaucratic pipeline (7) in hopes of boosting vaccine revenues to $100 billion by 2025. (8) The industry’s principal spokesperson, Dr. Paul Offit, says that he believes children can take as many as 10,000 vaccines. (9)
Public health may not be the sole driver of CDC decisions to mandate new vaccines. Four scathing federal studies by Congress, (10) the US Senate, (11) the HHS Inspector General, (12) and the HHS Office of Research Integrity, (13) paint CDC as a cesspool of corruption, mismanagement and dysfunction with alarming conflicts of interest suborning its research, regulatory and policymaking functions. CDC rules allow vaccine industry profiteers like Dr. Offit to serve on advisory boards that add new vaccines to the schedule. In a typical example, Offit in 1999 sat on the CDC’s vaccine advisory committee and voted to add the rotavirus vaccine to CDC’s schedule paving the way for him to make a fortune on his own rotavirus vaccine. (14) Offit and his business partners sold the royalties to his rotavirus vaccine patent to Merck in 2006 for $182 million. (15) Offit told Newsweek, “It was like winning the lottery!” (16) A 2009 HHS Inspector General’s report found that as many as 97% of the individuals who sit on the CDC advisory boards that approve vaccines may have similar conflicts. In addition to lucrative business partnerships with Merck, Offit holds a $1.5 million research chair, funded by Merck, at Children’s Hospital in Philadelphia. (17) From this industry sinecure, he broadcasts vaccine industry propaganda and annually publishes books urging unlimited vaccinations and vilifying safe-vaccine advocates. (18) (19)
The corruption has also poisoned CDC’s immunization safety office, the research arm that tests vaccines for safety and efficacy. In August 2014, seventeen-year CDC veteran Dr. William Thompson, author of the principal study cited by CDC to exculpate mercury-preserved vaccines from the autism link, invoked whistleblower protection and turned extensive agency files over to Congress. (20) Thompson, who is still employed at CDC, says that for the past decade his superiors have pressured him and his fellow scientists to lie and manipulate data about the safety of the mercury-based preservative, thimerosal, to conceal its causative link to a suite of brain injuries, including autism. (21) (22)
Thimerosal is 50% ethylmercury, which is far more toxic and persistent in the brain than the highly regulated methylmercury in fish. (23) Hundreds of peer-reviewed studies by leading government and university scientists show that thimerosal is a devastating brain poison linked to neurological disorders now epidemic in American children. My book, Thimerosal: Let the Science Speak, (24) is a summary of these studies, (25) which CDC and its credulous journalists swear don’t exist. Although Thompson’s CDC and vaccine industry colleagues have created nine patently fraudulent and thoroughly discredited epidemiological studies to defend thimerosal, no published study shows thimerosal to be safe. (26)
The common canard that US autism rates rose after drug makers removed most thimerosal from pediatric vaccines in 2003 is wrong. That same year, CDC added flu shots containing massive doses of thimerosal to the pediatric schedule. (27) As a result, children today can get nearly as much mercury exposure as children did from all pediatric vaccines combined in the decade prior to 2003. (28) Worse, thimerosal, for the first time, is being given to pregnant women in flu shots. (29) Furthermore, CDC’s current autism numbers are for children born in 2002, when kids were still getting thimerosal in their pediatric vaccines. The best science suggests that thimerosal’s complete removal from vaccines is likely to prompt a significant decline in autism. For example, a 2013 CDC study in JAMA Pediatrics shows a 33% drop in autism spectrum disorder in Denmark following the 1992 removal of thimerosal from Danish vaccines. (30) That paper is among 37 peer reviewed studies linking thimerosal to the autism epidemic. (31)
Thimerosal has precipitated a journalistic as well as a public health crisis. Big Pharma pumps over $3.5 billion annually into TV, newspapers and other advertising, targeting news departments, which have become vehicles for pharmaceutical sales and propaganda platforms for the industry. Television and print outlets feature spokespeople like Dr. Offit – without identifying their industry ties – while censoring criticisms of vaccine safety and excluding the voices of informed vaccine safety advocates. Busy journalists parrot the deceptive talking points dispensed by government and pharma officials rather than reading the science themselves. Unable to argue the science, they bully, pillory and demonize vaccine safety advocates as “anti-vax”, “anti-science” and far worse. The unwillingness of the press to scrutinize CDC has emboldened both industry and agency to follow the lowest paths of easy profit and bureaucratic preservation.
The measles scare was classic disaster capitalism, with media outlets dutifully stoking public hysteria on editorial pages and throughout the 24-hour broadcast cycle. With Dr. Offit leading the charge, CDC, drug makers and industry-funded front groups parlayed a garden-variety annual measles outbreak into a national tidal wave of state legislation to ban religious and philosophical vaccine exemptions. The national media frenzy over 159 measles cases (32) left little room for attention to the autism cataclysm, which has debilitated 1 million American children since the epidemic began in 1989, (33) with 27,000 new cases annually. CDC refuses to call autism an “epidemic”. In defiance of hard science and common sense, CDC and Offit have launched a denial campaign to gull reporters into believing the autism plague is an illusion created by better diagnosis. (34) (35) (36)
Big Pharma is among the nation’s largest political donors, giving $31 million last year to national political candidates. (37) It spends more on political lobbying than any other industry, $3.0 billion from 1998 to 2014 (38) – double the amount spent by oil and gas and four times as much as defense and aerospace lobbyists. (39) By February, state legislators in 36 states were pushing through over one hundred new laws to end philosophical and religious vaccine exemptions. Many of those state lawmakers are also on the industry payroll. (40) You can see how much money bill sponsors from your state took from Big Pharma at http://www.maplight.org.
Normally plaintiffs’ tort lawyers would provide a powerful check and balance to keep vaccines safe and effective and regulators and policymakers honest. But Pharma’s dirty money has brought the industry immunity from lawsuits for vaccine injury no matter how dangerous the product. An obliging Congress disposed of the Seventh Amendment right to jury trial making it impossible for vaccine injured plaintiffs to sue pharmaceutical companies for selling unsafe vaccines. (41) That’s right! No Class Actions. No discovery. No depositions and little financial incentive for the industry to make vaccines safer.
Vaccine industry money has neutralized virtually all of the checks and balances that once stood between a rapacious pharmaceutical industry and our children. With the research, regulatory and policymaking agencies captured, the courts closed to the public, the lawyers disarmed, the politicians on retainer and the media subverted, there is no one left to stand between a greedy industry and vulnerable children, except parents. Now Big Pharma’s game plan is to remove parental informed consent rights from that equation and force vaccine hesitant parents to inject their children with potentially risky vaccines which the Supreme Court has called “unavoidably unsafe”. (42)
Ending exemptions is premature until we have a functioning regulatory agency and a transparent process. The best way to ensure full vaccine coverage is for the vaccine program to win back public trust by ending its corrupt financial ties with a profit-making industry.
To educate yourselves about CDC corruption and the truth about vaccine science, download the movie Trace Amounts (43) and insist your legislators watch it before voting on any of these bills.
(26) Methodological Issues and Evidence of Malfeasance in Research Purporting to Show Thimerosal in Vaccines is Safe, Dr. Brian Hooker et al., 2014. www.hindawi.com/journals/bmri/2014/247218 (Critique of six epidemiological studies cited by CDC to exculpate thimerosal)
(35) http://www.ncbi.nlm.nih.gov/pubmed/19234401 (In contrast to the unsupported statement by Gerber and Offit, this paper from the UC Davis MIND Institute shows true increases in autism levels, far beyond those possible through “broadening diagnostic criteria and increased awareness”)
(40) http://maplight.org/california/legislator/1397-richard-pan (This link gives an example of industry donations to state lawmakers, i.e., the campaign funding profile of Dr. Richard Pan, a state senator who is heavily supported by the health care industry and is leading the charge to remove parental vaccine consent rights in California)
Abstract
Background: Epidemiological and animal-based studies have suggested that prenatal and postnatal fluoride exposure has adverse effects on neurodevelopment. The aim of this study was to examine the relationship between exposure to fluoridated water and Attention-Deficit Hyperactivity Disorder (ADHD) prevalence among children and adolescents in the United States.
Methods: Data on ADHD prevalence among 4-17-year-olds collected in 2003, 2007 and 2011 as part of the National Survey of Children’s Health, and state water fluoridation prevalence from the Centers for Disease Control and Prevention (CDC) collected between 1992 and 2008, were utilized.
Results: State prevalence of artificial water fluoridation in 1992 significantly positively predicted state prevalence of ADHD in 2003, 2007 and 2011, even after controlling for socioeconomic status. A multivariate regression analysis showed that, after socioeconomic status was controlled, each 1% increase in artificial fluoridation prevalence in 1992 was associated with approximately 67,000 to 131,000 additional ADHD diagnoses from 2003 to 2011. Overall state water fluoridation prevalence (not distinguishing between fluoridation types) was also significantly positively correlated with state prevalence of ADHD for all but one year examined.
Conclusions: Parents reported higher rates of medically-diagnosed ADHD in their children in states in which a greater proportion of people receive fluoridated water from public water supplies. The relationship between fluoride exposure and ADHD warrants future study.
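The ecological regression described in the Results can be sketched as follows. The numbers below are invented for illustration and are not the study's data; the study also translated prevalence differences into diagnosis counts, which this sketch omits:

```python
import numpy as np

# Hypothetical state-level data (NOT the study's actual numbers):
# fluoridation prevalence in 1992 (%), median income (SES proxy, $1000s),
# and parent-reported ADHD prevalence (%).
fluoridation = np.array([20.0, 35.0, 50.0, 65.0, 80.0, 95.0])
income       = np.array([55.0, 48.0, 52.0, 45.0, 50.0, 47.0])
adhd         = np.array([ 6.1,  7.0,  7.8,  9.1,  9.5, 10.6])

# Multivariate linear regression: ADHD ~ intercept + fluoridation + income,
# i.e., the fluoridation effect "after socioeconomic status was controlled".
X = np.column_stack([np.ones_like(fluoridation), fluoridation, income])
coef, *_ = np.linalg.lstsq(X, adhd, rcond=None)
intercept, b_fluoride, b_income = coef

# b_fluoride is the change in ADHD prevalence (percentage points) per 1%
# increase in fluoridation prevalence, holding the SES proxy constant.
print(f"slope per 1% fluoridation: {b_fluoride:.3f}")
```

A positive `b_fluoride` with a significant test statistic is what the abstract's "significantly positively predicted" claim amounts to; note that such ecological correlations cannot establish individual-level causation.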
What are your thoughts on calorie counting? I've always been against it and understand the 'a calorie doesn't equal a calorie' ideology from Paul Chek. Though all I ever see nutritionists go on about for weight loss is ensuring you are in a calorie deficit, which I've always thought causes many more hormonal problems long term. Interested in your thoughts on the whole matter, even though the question is rather vague. Thanks.
I speak more about this in my latest book (http://triumphtraining.com/collections/books/products/spot-on-nutrition), but I think it's a slippery slope. Most foods worth eating don't have a label. And the USDA allows for a 20% margin of error in both calories and nutrition in a food. So while I think it can prove insightful when people track what and how much they actually consume in the short term, it's not like the human digestive system is the same as a combustion engine. As I say in my book, "the impact a given amount of food has on a person's physiology is predicated less on the total calories in that food and more on the total of what that person has done to themselves via nutrition and lifestyle choices." My experience has shown that many people don't eat enough. But since they're so metabolically damaged for the reasons mentioned in my book and otherwise, their scales and their health both move in the wrong direction.
Need some advice: 3rd week in a row, since changing from chemicals on the ride to organic snacks and water/honey/salt, that I've had difficult cycle rides. Started out great today but burnt up with leg and hip cramps at mile 40; by mile 50, very little leg power.
My Response With His Answers:
What was breakfast? Freshly made organic fruit smoothie with gelatin and milk, half a bagel with peanut butter
What was dinner the night before? Lasagna
What was dinner 2 nights before? Steak, baked potato, zucchini
We could really look at the whole week, but those are the most important meals.
Are you pre-ex stretching? Not for a long period but focused on legs, arms, back, stomach
Have you been fit on the bike? Matt Cole fit me when I bought my Parlee from him in early 2014
Where were the cramps exactly? Started in thighs, then hamstrings, and worked its way to hips
How much time before your pre-ride meal and the start of the ride? 45 minutes
How soon do you eat once you start? Drank a bottle every hour with water, salt, honey; started eating buffalo bites and organic blocks within 45 minutes of the ride
Is VW suffering, too, or is it just you? Just me, thank God, VW is rocking it on this nutrition and on the bike/run
--I'd nix the bagel or at least the PB--the PUFA content in it inhibits the body's ability to utilize glucose (stored and brought in at dinner and breakfast) as well as O2, and it's also pro-inflammatory.
--noodles are gluten, I assume, and sub-optimal for those with opposable thumbs.
--beautiful--more/less what I'd do with some fruit or quality dessert at the end!
--good--just use reps before.
--most folks (especially triathletes) are quad dominant. Learning to utilize the hammies will help so the quads don't overwork, then get tight, then force the hammies to overwork, and then the whole kinetic chain falls apart.
--you may find that you do better with a longer time period between the end of the last feeding and the start of training. As soon as exercise begins, blood flow to the digestive system is compromised and any calories/nutrition you could have derived from the food in the stomach becomes unavailable (or less so, depending on intensity and several other factors).
--looks pretty good. If the organic blocks are homemade, you should be fine. If commercial, look out for sunflower oil or another PUFA which would inhibit glucose utilization.
--good to know, and I'm glad (but not surprised). Now we just need to dial you in. And I hope my responses help.
In contrast to the current belief that cholesterol reduction with statins decreases atherosclerosis, we present a perspective that statins may be causative in coronary artery calcification and can function as mitochondrial toxins that impair muscle function in the heart and blood vessels through the depletion of coenzyme Q10 and 'heme A', and thereby ATP generation. Statins inhibit the synthesis of vitamin K2, the cofactor for matrix Gla-protein activation, which in turn protects arteries from calcification. Statins inhibit the biosynthesis of selenium-containing proteins, one of which is glutathione peroxidase, serving to suppress peroxidative stress. An impairment of selenoprotein biosynthesis may be a factor in congestive heart failure, reminiscent of the dilated cardiomyopathies seen with selenium deficiency. Thus, the epidemic of heart failure and atherosclerosis that plagues the modern world may paradoxically be aggravated by the pervasive use of statin drugs. We propose that current statin treatment guidelines be critically reevaluated.
Original source: http://www.ncbi.nlm.nih.gov/pubmed/25655639
Fluoridation May Not Prevent Cavities, Scientific Review Shows BY DOUGLAS MAIN 6/29/15
Water fluoridation, which first began in 1945 in Grand Rapids, Michigan, and expanded nationwide over the years, has always been controversial. Those opposed to the process have argued—and a growing number of studies have suggested—that the chemical may present a number of health risks, for example interfering with the endocrine system and increasing the risk of impaired brain function; two studies in the last few months, for example, have linked fluoridation to ADHD and underactive thyroid. Others argue against water fluoridation on ethical grounds, saying the process forces people to consume a substance they may not know is there—or that they’d rather avoid.
Despite concerns about safety and ethics, many are content to continue fluoridation because of its purported benefit: that it reduces tooth decay. The Centers for Disease Control and Prevention’s Division of Oral Health, the main government body responsible for the process, says it’s “safe and effective.”
You might think, then, that fluoridated water's efficacy as a cavity preventer would be proven beyond a reasonable doubt. But new research suggests that assumption is dramatically misguided; while using fluoridated toothpaste has been proven to be good for oral health, consuming fluoridated water may have no positive impact.
The Cochrane Collaboration, a group of doctors and researchers known for their comprehensive reviews—which are widely regarded as the gold standard of scientific rigor in assessing effectiveness of public health policies—recently set out to find out if fluoridation reduces cavities. They reviewed every study done on fluoridation that they could find, and then winnowed down the collection to only the most comprehensive, well-designed and reliable papers. Then they analyzed these studies’ results, and published their conclusion in a review earlier this month.
The review identified only three studies since 1975—of sufficient quality to be included—that addressed the effectiveness of fluoridation on tooth decay in the population at large. These papers determined that fluoridation does not reduce cavities to a statistically significant degree in permanent teeth, says study co-author Anne-Marie Glenny, a health science researcher at Manchester University in the United Kingdom. The authors found only seven other studies worthy of inclusion dating prior to 1975.
The authors also found only two studies since 1975 that looked at the effectiveness of reducing cavities in baby teeth, and found fluoridation to have no statistically significant impact here, either.
The scientists also found “insufficient evidence” that fluoridation reduces tooth decay in adults (children excluded).
“From the review, we’re unable to determine whether water fluoridation has an impact on caries levels in adults,” Glenny says. (“Tooth decay,” “cavities” and “caries” all mean the same thing: breakdown of enamel by mouth-dwelling microbes.)
“Frankly, this is pretty shocking,” says Thomas Zoeller, a scientist at UMass-Amherst uninvolved in the work. “This study does not support the use of fluoride in drinking water.” Trevor Sheldon concurred. Sheldon is the dean of the Hull York Medical School in the United Kingdom and led the advisory board that conducted a systematic review of water fluoridation in 2000, which came to conclusions similar to the Cochrane review’s. The lack of good evidence of effectiveness shocked him. “I had assumed because of everything I’d heard that water fluoridation reduces cavities, but I was completely amazed by the lack of evidence,” he says. “My prior view was completely reversed.”
“There’s really hardly any evidence” the practice works, Sheldon adds. “And if anything there may be some evidence the other way.” One 2001 study covered in the Cochrane review of two neighboring British Columbia communities found that when fluoridation was stopped in one city, cavity prevalence actually went down slightly amongst schoolchildren, while cavity rates in the fluoridated community remained stable.
Overall the review suggests that stopping fluoridation would be unlikely to increase the risk of tooth decay, says Kathleen Thiessen, a senior scientist at the Oak Ridge Center for Risk Analysis, which does human health risk assessments of environmental contaminants.
“The sad story is that very little has been done in recent years to ensure that fluoridation is still needed [or] to ensure that adverse effects do not happen,” says Dr. Philippe Grandjean, an environmental health researcher and physician at Harvard University.
The scientists also couldn’t find enough evidence to support the oft-repeated notion that fluoridation reduces dental health disparities among different socioeconomic groups, which the CDC and others use as a rationale for fluoridating water.
“The fact that there is insufficient information to determine whether fluoridation reduces social inequalities in dental health is troublesome given that this is often cited as a reason for fluoridating water,” say Christine Till and Ashley Malin, researchers at Toronto’s York University.
Studies that attest to the effectiveness of fluoridation were generally done before the widespread usage of fluoride-containing dental products like rinses and toothpastes in the 1970s and later, according to the recent Cochrane study. So while it may have once made sense to add fluoride to water, it no longer appears to be necessary or useful, Thiessen says.
It has also become clear in the last 15 years that fluoride primarily acts topically, according to the CDC. It reacts with the surface of the tooth enamel, making it more resistant to acids excreted by bacteria. Thus, there's no good reason to swallow fluoride and subject every tissue of your body to it, Thiessen says.
Another 2009 review by the Cochrane group clearly shows that fluoride toothpaste prevents cavities, serving as a useful counterpoint to fluoridation’s uncertain benefits.
Across all nine studies included in the review looking at caries reductions in children's permanent choppers, there was evidence linking fluoridation to a 26 percent decline in the prevalence of decayed, missing or filled permanent teeth. But the researchers say they have serious doubts about the validity of this number. They write: “We have limited confidence in the size of this effect due to the high risk of bias within the studies and the lack of contemporary evidence.” Six of the nine studies were from before 1975, before fluoride toothpaste was widely available.
The review also found fluoridation was associated with a 14 percent increase in the number of children without any cavities. But more than two-thirds of the studies showing this took place more than 40 years ago and are not of high quality.
Nearly all these papers were flawed in significant ways. For example, 70 percent of the cavity-reducing studies made no effort to control for important confounding factors such as dietary sources of fluoride other than tap water, diet in general (like how much sugar they consumed) or ethnicity.
When it comes to fluoridation research, even the best studies are not high quality. Although this was already well-established, it doesn't seem to be well-known.
“I couldn’t believe the low quality of the research” on fluoridation, Sheldon says.
The data suggest that toothpaste, besides other preventative measures like dental sealants, flossing and avoiding sugar, are the real drivers in the decline of tooth decay in the past few decades, Thiessen says. Indeed, cavity rates have declined by similar amounts in countries with and without fluoridation.
Meanwhile, dental health leaves much to be desired in widely fluoridated America: About 60 percent of American teenagers have had cavities, and 15 percent have untreated tooth decay.
One thing the review definitively concluded: Fluoridation causes fluorosis.
This condition occurs when fluoride interferes with the cells that produce enamel, creating white flecks on the teeth. On average, about 12 percent of people in fluoridated areas have fluorosis bad enough that it qualifies as an “aesthetic concern,” according to the review. According to Sheldon, that’s a “huge number.” A total of 40 percent of people in fluoridated areas have some level of fluorosis, though the majority of these cases are likely unnoticeable to the average person.
In a smaller percentage of cases, fluorosis can be severe enough to cause structural damage, brown stains and mottling to the tooth.
Sheldon says that if fluoridation were to be submitted anew for approval today, “nobody would even think about it” due to the shoddy evidence of effectiveness and obvious downside of fluorosis.
There is also a definite issue of inequality when it comes to fluorosis. Blacks and Mexican-Americans have higher rates of both moderate and severe forms of the condition. Blacks also have higher rates overall: as of 2004, 58 percent of African-Americans had fluorosis, compared to 36 percent of whites, and the condition is becoming more common.
The Cochrane review concerned itself only with oral health. It didn’t address other health problems associated with fluoride, which Grandjean says need to be researched.
Many of the Cochrane study’s conclusions conflict with statements by the CDC, the American Dental Association and others that maintain fluoridation is safe and effective. The ADA, for example, maintains on its website that “thousands of studies” support fluoridation’s effectiveness—which is directly contradicted by the Cochrane findings. The ADA didn’t immediately respond to requests for comment.
The CDC remains undeterred. “Nothing in the Cochrane review” reduces the government’s “confidence in water fluoridation as a valuable tool to prevent tooth decay in children as well as adults,” says Barbara Gooch, a dental researcher with CDC’s Division of Oral Health.
The CDC and others “are somehow suspending disbelief,” Sheldon says. They are “all in the mindset that this is a really good thing, and just not accepting that they might be wrong.” Sheldon and others suggest pro-fluoridation beliefs are entrenched and will not easily change, despite the poor data quality and lack of evidence from the past 40 years.
Derek Richards, the editor of the journal Evidence-Based Dentistry (published by the prestigious Nature group) concedes that “we haven’t got any current evidence” that fluoridation reduces cavities, “so we don’t know how much it’s reducing tooth decay at the moment,” he says. “But I have no qualms about that.” Richards reasons that because fluoridation may help reduce cavities in those who don’t use toothpaste or take other preventative measures, including many in lower socioeconomic groups, it’s likely still useful. He also argues that there’s no conclusive evidence of harm from fluoridation (other than fluorosis), so he doesn’t see a large downside.
But most scientists interviewed for this article don’t necessarily think fluoridation’s uncertain benefits justify its continuation without more stringent evidence, and argue for more research into the matter.
“When you have a public health intervention that’s applied to everybody, the burden of evidence to know that people are likely to benefit and not to be harmed is much higher, since people can’t choose,” Sheldon says. Everybody drinks water, after all, mostly from the tap. “Public health bodies need to have the courage to look at this review,” says Sheldon, “and be honest enough to say that this needs to be reconsidered.”
Original Source found here: http://www.newsweek.com/fluoridation-may-not-prevent-cavities-huge-study-shows-348251
A major epidemiological registry-based study from Aarhus University and Aarhus University Hospital indicates that Parkinson’s disease begins in the gastrointestinal tract; the study is the largest in the field so far.
The chronic neurodegenerative Parkinson’s disease affects an increasing number of people. However, scientists still do not know why some people develop Parkinson’s disease. Now researchers from Aarhus University and Aarhus University Hospital have taken an important step towards a better understanding of the disease.
New research indicates that Parkinson’s disease may begin in the gastrointestinal tract and spread through the vagus nerve to the brain.
“We have conducted a registry study of almost 15,000 patients who have had the vagus nerve in their stomach severed. Between approximately 1970-1995 this procedure was a very common method of ulcer treatment. If it really is correct that Parkinson’s starts in the gut and spreads through the vagus nerve, then these vagotomy patients should naturally be protected against developing Parkinson’s disease,” explains postdoc at Aarhus University Elisabeth Svensson on the hypothesis behind the study.
A hypothesis that turned out to be correct:
“Our study shows that patients who have had the entire vagus nerve severed were protected against Parkinson’s disease. Their risk was halved after 20 years. However, patients who had only had a small part of the vagus nerve severed were not protected. This also fits the hypothesis that the disease process is strongly dependent on a fully or partially intact vagus nerve to be able to reach and affect the brain,” she says.
The research project has just been published in the internationally recognised journal Annals of Neurology.
The first clinical examination
The research has presented strong evidence that Parkinson’s disease begins in the gastrointestinal tract and spreads via the vagus nerve to the brain. Many patients have also suffered from gastrointestinal symptoms before the Parkinson’s diagnosis is made.
“Patients with Parkinson’s disease are often constipated many years before they receive the diagnosis, which may be an early marker of the link between neurologic and gastroenterologic pathology related to the vagus nerve,” says Elisabeth Svensson.
Previous hypotheses about the relationship between Parkinson’s and the vagus nerve have led to animal studies and cell studies in the field. However, the current study is the first and largest epidemiological study in humans.
The research project is an important piece of the puzzle in terms of the causes of the disease. In the future the researchers expect to be able to use the new knowledge to identify risk factors for Parkinson’s disease and thus prevent the disease.
“Now that we have found an association between the vagus nerve and the development of Parkinson’s disease, it is important to carry out research into the factors that may trigger this neurological degeneration, so that we can prevent the development of the disease. To be able to do this will naturally be a major breakthrough,” says Elisabeth Svensson.
About this Parkinson’s disease research
Parkinson’s disease is a chronic and neurodegenerative disease which affects approx. 1 out of every 1,000 people.
The first signs of the disease most often appear between the ages of 50 and 60.
The researchers carried out a registry study involving 14,883 patients who had undergone a vagotomy.
The research project was supported by the Danish Parkinson’s Disease Association and PROCRIN (Program for Clinical Research Infrastructure).
Funding The research was funded by the Danish Parkinson’s Disease Association.
Source: Elisabeth Svensson – Aarhus University. Image credit: Gray’s Anatomy (public domain). Original Research: Abstract for “Vagotomy and subsequent risk of Parkinson’s disease” by Elisabeth Svensson PhD, Erzsébet Horváth-Puhó PhD, Reimar W Thomsen PhD, Jens Christian Djurhuus DMSc, Lars Pedersen PhD, Per Borghammer DMSc and Henrik Toft Sørensen DMSc in Annals of Neurology. Published online June 2015 doi:10.1002/ana.24448
Vagotomy and subsequent risk of Parkinson’s disease
Objectives: Parkinson’s disease (PD) may be caused by an enteric neurotropic pathogen entering the brain through the vagal nerve, a process that may take over 20 years. We investigated the risk of PD in patients who underwent vagotomy, and hypothesized that truncal vagotomy is associated with a protective effect, while super-selective vagotomy has a minor effect.
Methods: We constructed cohorts of all patients in Denmark who underwent vagotomy during 1977-1995 and a matched general population cohort, by linking Danish registries. We used Cox regression to compute hazard ratios (HRs) for PD and corresponding 95% confidence intervals [CIs], adjusting for potential confounders.
Results: Risk of PD was decreased in patients who underwent truncal vagotomy (HR = 0.85, 95% CI: 0.56–1.27; follow-up of >20 years: HR = 0.58, 95% CI: 0.28–1.20) compared to super-selective vagotomy. Risk of PD was also decreased following truncal vagotomy when compared to the general population cohort (overall adjusted HR = 0.85, 95% CI: 0.63–1.14; follow-up >20 years: adjusted HR = 0.53, 95% CI: 0.28–0.99). In patients who underwent super-selective vagotomy, risk of PD was similar to the general population (HR = 1.09, 95% CI: 0.84–1.43; follow-up of >20 years: HR = 1.16, 95% CI: 0.80–1.70). The statistical precision of the risk estimates was limited. Results were consistent after external adjustment for unmeasured confounding by smoking.
Interpretation: Full truncal vagotomy is associated with a decreased risk for subsequent PD, suggesting that the vagal nerve may be critically involved in the pathogenesis of PD.
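As a quick check on the abstract's numbers, the standard error of log(HR) can be back-calculated from a reported 95% confidence interval, and a z-statistic formed against the null hypothesis HR = 1. This is a standard textbook back-calculation, not something the paper itself reports:

```python
import math

def se_from_ci(lower: float, upper: float, z: float = 1.96) -> float:
    """Recover the standard error of log(HR) from a 95% CI on the HR."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Truncal vagotomy vs. general population, follow-up >20 years
# (figures taken from the abstract): HR = 0.53, 95% CI 0.28-0.99.
hr, lo, hi = 0.53, 0.28, 0.99
se = se_from_ci(lo, hi)

# z-statistic for testing HR = 1 (i.e., log(HR) = 0)
z_stat = math.log(hr) / se
print(f"SE(log HR) = {se:.3f}, z = {z_stat:.2f}")
```

The z-statistic lands just beyond the 1.96 threshold, consistent with the interval's upper bound sitting just under 1.0 and with the authors' caveat that the statistical precision of the estimates was limited.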
Br Med J. 1970 April 25; 2(5703): 203–209. Thromboembolic Disease and the Steroidal Content of Oral Contraceptives. A Report to the Committee on Safety of Drugs W. H. W. Inman, M. P. Vessey, Barbro Westerholm, and A. Engelund Reports of thromboembolism following the use of oral contraceptives received by drug safety committees in the United Kingdom, Sweden, and Denmark have been analysed to investigate possible differences in the risks associated with the various preparations. For this purpose the numbers of reports of thromboembolism attributed to each product were compared with the distribution that would have been expected from market research estimates of sales, assuming that all products carried the same risk.
A positive correlation was found between the dose of oestrogen and the risk of pulmonary embolism, deep vein thrombosis, cerebral thrombosis, and coronary thrombosis in the United Kingdom. A similar association was found for venous thrombosis and pulmonary embolism in Sweden and Denmark. No significant differences could be detected between sequential and combined preparations containing the same doses of oestrogen, nor between the two oestrogens, ethinyloestradiol and mestranol. Certain discrepancies in the data suggest that the dose of oestrogen may not be the only factor related to the risk of thromboembolism; thus there was a significant deficit of reports associated with the combination of mestranol 100 μg. with norethynodrel 2.5 mg. and a significant excess of reports associated with the combination of ethinyloestradiol 50 μg. with megestrol acetate 4 mg. An excess of reports also occurred with other combined preparations containing megestrol acetate. The data obtained in earlier epidemiological studies were re-examined and, though no trend was obvious in any one of them, the combined results showed an excess of cases of thromboembolism at the highest dose of oestrogen.
Lancet. 1976 Mar 6;1(7958):509-11. Oral contraceptives, antithrombin- III activity, and postoperative deep-vein thrombosis. Sagar S, Stamatakis JD, Thomas DP, Kakkar VV. Deep-vein thrombosis (D.V.T.) was detected by the fibrinogen-uptake test in six out of a total of thirty-one young women undergoing emergency abdominal surgery who gave a history of recent oral contraceptive intake. In contrast, no D.V.T. developed in nineteen similar patients who were not on oral contraceptives (P less than 0.01). Plasma-antithrombin-III activity was significantly lower preoperatively in patients taking oral contraceptives; postoperative D.V.T. subsequently developed in three out of five patients with preoperative antithrombin-III activity below 50%. In seventy-eight dental patients undergoing molar extraction, antithrombin-III activity was measured before, during, and after operation. Activity fell in all patients during operation, but the fall was significantly greater in women taking oral contraceptives (P less than 0.01). The intra-operative fall in antithrombin-III activity was prevented by a small preoperative dose of subcutaneous heparin.
Am J Obstet Gynecol. 1975 Jul 15;122(6):688-92. Conjugated estrogens and hypercoagulability. von Kaulla E, Droegemueller W, von Kaulla KN. A group of 11 menopausal women receiving 1.25 mg. of conjugated estrogens daily had coagulation tests to determine the development of hypercoagulability after taking 5 and 21 tablets. There was no essential change in thrombin generation or fibrinolytic activity as measured by euglobulin lysis time. There was a shift toward hypercoagulability in all three parameters of the thrombelastograms. The decrease of the antithrombin III activity was not as pronounced following the administration of conjugated estrogens as had been the change associated with oral contraceptives. Fibrin monomers were observed in some women during the first week of Premarin therapy.
Arch Pathol. 1970 Jan;89(1):1-8. Vascular lesions in women taking oral contraceptives. Irey NS, Manion WC, Taylor HB. Distinctive vascular lesions in association with thrombosis were found in arteries and veins in 20 relatively young women receiving oral contraceptives. These lesions were characterized by structural and histochemical changes in the intima and media. Occlusive thrombi were associated with relatively small, organized bases, the age of the latter measured in days to weeks. Nonocclusive and possibly earlier lesions were dominated by endothelial proliferation with minimal thrombus formation. It is postulated that this endothelial and intimal hyperplasia may be related to the steroids received and that it parallels similarly induced hyperplasias that have been found in cervical gland epithelium, in leiomyomas, and in a variety of mesenchymal derivatives under experimental conditions. Further control and experimental studies are required to clarify the possible relationship between these vascular lesions and oral contraceptives.
Br Med J. 1973 December 1; 4(5891): 507–512. Cryptogenic Cerebral Embolism in Women Taking Oral Contraceptives Karin Enzell and Gunnar Lindemalm Fourteen women taking oral contraceptives were admitted during a five-year period because of acute cerebrovascular lesions. A diagnosis of major cerebral embolism was established in four of them. No source of embolism was found, and thorough investigation failed to reveal any predisposing illness. Cerebral embolism was a probable diagnosis in several of the remaining 10 patients. A comparison was made with the strokes occurring in women not taking contraceptive pills in corresponding age groups.
Lancet. 1973 Jun 23;1(7817):1399-404. Oral contraceptives and venous thromboembolic disease, surgically confirmed gallbladder disease, and breast tumours. Report from the Boston Collaborative Drug Surveillance Programme. [No authors listed] A large survey of 24 hospitals was conducted to identify associations between commonly used drugs and various diseases. The results of 3 such studies–on venous thromboembolism, gall bladder disease, and breast tumors–are summarized in this article. Trained nurses in various hospital wards interviewed admissions, asking questions designed to determine smoking behavior, coffee and tea drinking, drug use, marital status, and parity and menopausal status, where appropriate. This report specifically centers on associations between oral contraceptive use and development of the 3 conditions under study. Women reported on in this portion of the study were aged 20-44 years. Compared with nonusers, the estimate of relative risk for thromboembolism in users was 11, and the estimated attack rate attributable to oral contraceptives was 60/100,000 users/year. For gall bladder disease (surgically confirmed) the corresponding relative risk estimate was 2.0, and the estimated annual attack rate was 79/100,000. The frequency of gall bladder disease in women under 35 years was significantly higher in oral contraceptive users of 6-12 months duration, compared with women who had taken the pills for longer periods. Breast cancer studies showed no evidence of a higher risk in oral contraceptive users relative to nonusers. In fact, a negative association between oral contraceptive use and breast tumors was found, and this was more pronounced in women with fibroadenoma of the breast. Most of the women surveyed for this report took low-dose estrogen formulations, but the role of dose to the above findings was not investigated. 
The finding of a positive correlation between the dose of oestrogen and the risk of coronary thrombosis is of special interest since previous studies have failed to provide clear evidence of a relationship between oral contraceptives and this condition.
Am J Obstet Gynecol. 1987 Oct;157(4 Pt 2):1042-8. Coagulation effects of oral contraception. Bonnar J. In Europe and North America, estrogen/progestogen oral contraception has been associated with an increase in venous thromboembolism, myocardial infarction, and stroke. These hazards are found mainly in smokers and in women over the age of 35. Venous thromboembolism appears to correlate with the estrogen dosage, and the arterial complications with both the estrogen and progestogen components. Blood coagulation and vascular thrombosis are intimately related. Estrogen/progestogen oral contraception affects blood clotting by increasing plasma fibrinogen and the activity of coagulation factors, especially factors VII and X; antithrombin III, the inhibitor of coagulation, is usually decreased. Platelet activity is also enhanced with acceleration of aggregation. These changes create a state of hypercoagulability that, to a large extent, appears to be counterbalanced by increased fibrinolytic activity. Studies of the oral contraceptives in current use show that the coagulation effects depend on the dosage of estrogen and the type of progestogen used in combination. Current research is aimed at finding the estrogen/progestogen formulations that induce the least changes in the coagulation system and other physiologic processes. In this respect, the new low-dose formulations are a major step forward and should reduce the risk of vascular thrombotic complications.
Lancet. 1980 May 24;1(8178):1097-101. Oral contraceptives and thromboembolic disease: effects of lowering oestrogen content. Böttiger LE, Boman G, Eklund G, Westerholm B. The introduction of low-oestrogen oral contraceptives in Sweden and the concomitant disappearance of high-dose preparations did not result in a lowering of the mortality of fertile women from thromboembolic disease. Morbidity due to thromboembolism seems to have fallen, and the number of thromboembolic incidents reported to the Swedish Adverse Drug Reaction Committee decreased dramatically. The decrease was due exclusively to a reduction in venous thromboembolic disease: the frequency of arterial complications (cerebral and coronary) remained constant.
Estrogen has many pro-clotting effects, and one of them is a decreased activity of vascular plasminogen activator. K. E. Miller and S. V. Pizzo, “Venous and arterial thromboembolic disease in women using oral contraceptives,” Am. J. Obst. Gyn. 144, 824, 1982. -Ray Peat, PhD
Am J Obstet Gynecol. 1982 Dec 1;144(7):824-7. Venous and arterial thromboembolic disease in women using oral contraceptives. Miller KE, Pizzo SV. Vascular plasminogen activator was measured by means of a new chromogenic assay in 24 women who had suffered from oral contraceptive-associated thrombotic disease and was compared to that in a control group of 78 premenopausal women. Vascular plasminogen activator levels were significantly reduced in the subjects who had venous thrombosis but not in the five women who had arterial thrombosis (0.04 +/- 0.03 versus 0.38 +/- 0.31, respectively) when compared to the levels in the control group (0.19 +/- 0.20). Since vascular activator levels distribute in a non-Gaussian manner, cases and controls were also stratified into deciles. Seventeen subjects who had suffered from venous thrombosis were stratified in the lowest three deciles, and two subjects, in the fourth and fifth deciles. Subjects who had suffered from arterial thrombosis were in the fourth or higher deciles. The conclusion is that, although there is a correlation between low levels of vascular plasminogen activator and venous thrombosis, no such correlation exists for arterial thrombosis.
"For women who had never used any hormonal birth control, about 3.7 out of 10,000 were diagnosed with a blood clot in a vein in a year's time. Being on an older-generation pill that contained an estrogen and the progestin hormone levonorgestrel roughly doubled that risk, to 7.5 women out of 10,000 followed for one year.
"Being on the newest kinds of pills, which contain the progestin hormones drospirenone, desogestrel, or gestodene along with estrogen, doubled the risk again, making it six to seven times as high as for women who weren't using hormonal forms of birth control." -Brenda Goodman, MA
"So what did the research show? The estrogen-plus-progestin arm, for women who still had a uterus, was stopped in 2002 when the research indicated an increased risk of breast cancer, heart disease, stroke, blood clots and urinary incontinence. Of interest, the risk of colorectal cancer went down, as did hip fractures.
The study of women taking only estrogen was stopped in 2004 when it was found that there was an increased risk of strokes and blood clots. The risk of breast cancer was uncertain, and there was no change in the risk of colorectal cancer. Hip fracture risk was decreased." -Debbie Jackson, PhD
The idea that red meat is a principal dietary culprit has pervaded our national conversation for decades. We have been led to believe that we’ve strayed from a more perfect, less meat-filled past. Most prominently, when Senator McGovern announced his Senate committee’s report, called Dietary Goals, at a press conference in 1977, he expressed a gloomy outlook about where the American diet was heading.
“Our diets have changed radically within the past 50 years,” he explained, “with great and often harmful effects on our health.” These were the “killer diseases,” said McGovern. The solution, he declared, was for Americans to return to the healthier, plant-based diet they once ate.
The justification for this idea, that our ancestors lived mainly on fruits, vegetables, and grains, comes mainly from the USDA “food disappearance data.” The “disappearance” of food is an approximation of supply; most of it is probably being eaten, but much is wasted, too. Experts therefore acknowledge that the disappearance numbers are merely rough estimates of consumption.
The data from the early 1900s, which is what McGovern and others used, are known to be especially poor. Among other things, these data accounted only for the meat, dairy, and other fresh foods shipped across state lines in those early years, so anything produced and eaten locally, such as meat from a cow or eggs from chickens, would not have been included.
And since farmers made up more than a quarter of all workers during these years, local foods must have amounted to quite a lot. Experts agree that these early availability data are not adequate for serious use, yet they cite the numbers anyway, because no other data are available. And for the years before 1900, there are no “scientific” data at all.
In the absence of scientific data, history can provide a picture of food consumption in the late-18th- to 19th-century in America.
Early American settlers were “indifferent” farmers, according to many accounts. They were fairly lazy in their efforts at both animal husbandry and agriculture, with “the grain fields, the meadows, the forests, the cattle, etc., treated with equal carelessness,” as one 18th-century Swedish visitor described—and there was little point in farming since meat was so readily available.
Settlers recorded the extraordinary abundance of wild turkeys, ducks, grouse, pheasant, and more. Migrating flocks of birds would darken the skies for days. The tasty Eskimo curlew was apparently so fat that it would burst upon falling to the earth, covering the ground with a sort of fatty meat paste. (New Englanders called this now-extinct species the “doughbird.”)
In the woods, there were bears (prized for their fat), raccoons, bobolinks, opossums, hares, and virtual thickets of deer—so much that the colonists didn’t even bother hunting elk, moose, or bison, since hauling and conserving so much meat was considered too great an effort. A European traveler describing his visit to a Southern plantation noted that the food included beef, veal, mutton, venison, turkeys, and geese, but he does not mention a single vegetable.
Infants were fed beef even before their teeth had grown in. The English novelist Anthony Trollope reported, during a trip to the United States in 1861, that Americans ate twice as much beef as did Englishmen. Charles Dickens, when he visited, wrote that “no breakfast was breakfast” without a T-bone steak. Apparently, starting a day on puffed wheat and low-fat milk—our “Breakfast of Champions!”—would not have been considered adequate even for a servant.
Indeed, for the first 250 years of American history, even the poor in the United States could afford meat or fish for every meal. The fact that the workers had so much access to meat was precisely why observers regarded the diet of the New World to be superior to that of the Old.
“I hold a family to be in a desperate way when the mother can see the bottom of the pork barrel,” says a frontier housewife in James Fenimore Cooper’s novel The Chainbearer.
In the book Putting Meat on the American Table, researcher Roger Horowitz scours the literature for data on how much meat Americans actually ate. A survey of 8,000 urban Americans in 1909 showed that the poorest among them ate 136 pounds a year, and the wealthiest more than 200 pounds.
A food budget published in the New York Tribune in 1851 allots two pounds of meat per day for a family of five. Even slaves at the turn of the 18th century were allocated an average of 150 pounds of meat a year. As Horowitz concludes, “These sources do give us some confidence in suggesting an average annual consumption of 150–200 pounds of meat per person in the nineteenth century.”
About 175 pounds of meat per person per year—compared to the roughly 100 pounds of meat per year that an average adult American eats today. And of that 100 pounds of meat, about half is poultry—chicken and turkey—whereas until the mid-20th century, chicken was considered a luxury meat, on the menu only for special occasions (chickens were valued mainly for their eggs).
Yet this drop in red meat consumption is the exact opposite of the picture we get from public authorities. A recent USDA report says that our consumption of meat is at a “record high,” and this impression is repeated in the media.
It implies that our health problems are associated with this rise in meat consumption, but these analyses are misleading because they lump together red meat and chicken into one category to show the growth of meat eating overall, when it’s just the chicken consumption that has gone up astronomically since the 1970s. The wider-lens picture is clearly that we eat far less red meat today than did our forefathers.
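The arithmetic behind that comparison can be made explicit. A small sketch using only the article's own round figures (the totals and the 50 percent poultry share are the article's approximations):

```python
# Rough red-meat comparison implied by the article's figures,
# all in pounds of meat per person per year (approximate).
historical_total = 175        # 19th-century average, nearly all red meat
modern_total = 100            # average adult American today
modern_poultry_share = 0.5    # "about half is poultry"

modern_red_meat = modern_total * (1 - modern_poultry_share)
ratio = historical_total / modern_red_meat

print(modern_red_meat)  # 50.0 lbs of red meat per year today
print(ratio)            # 3.5, i.e. roughly 3-4x more red meat historically
```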
Roger Horowitz, Putting Meat on the American Table (Baltimore, MD: Johns Hopkins University Press, 2000): 11–17; adapted from Carrie R. Daniel et al., "Trends in Meat Consumption in the USA".
Meanwhile, also contrary to our common impression, early Americans appeared to eat few vegetables. Leafy greens had short growing seasons and were ultimately considered not worth the effort. And before large supermarket chains started importing kiwis from Australia and avocados from Israel, a regular supply of fruits and vegetables could hardly have been possible in America outside the growing season. Even in the warmer months, fruit and salad were avoided, for fear of cholera. (Only with the Civil War did the canning industry flourish, and then only for a handful of vegetables, the most common of which were sweet corn, tomatoes, and peas.)
So it would be “incorrect to describe Americans as great eaters of either [fruits or vegetables],” wrote the historians Waverly Root and Richard de Rochemont. Although a vegetarian movement did establish itself in the United States by 1870, the general mistrust of these fresh foods, which spoiled so easily and could carry disease, did not dissipate until after World War I, with the advent of the home refrigerator. By these accounts, for the first 250 years of American history, the entire nation would have earned a failing grade according to our modern mainstream nutritional advice.
During all this time, however, heart disease was almost certainly rare. Reliable data from death certificates is not available, but other sources of information make a persuasive case against the widespread appearance of the disease before the early 1920s.
Austin Flint, the most authoritative expert on heart disease in the United States, scoured the country for reports of heart abnormalities in the mid-1800s, yet reported that he had seen very few cases, despite running a busy practice in New York City. Nor did William Osler, one of the founding professors of Johns Hopkins Hospital, report any cases of heart disease during the 1870s and eighties when working at Montreal General Hospital.
The first clinical description of coronary thrombosis came in 1912, and an authoritative textbook in 1915, Diseases of the Arteries including Angina Pectoris, makes no mention at all of coronary thrombosis. On the eve of World War I, the young Paul Dudley White, who later became President Eisenhower’s doctor, wrote that of his 700 male patients at Massachusetts General Hospital, only four reported chest pain, “even though there were plenty of them over 60 years of age then.”
About one fifth of the U.S. population was over 50 years old in 1900. This number would seem to refute the familiar argument that people formerly didn’t live long enough for heart disease to emerge as an observable problem. Simply put, there were some 10 million Americans of a prime age for having a heart attack at the turn of the 20th century, but heart attacks appeared not to have been a common problem.
Ironically—or perhaps tellingly—the heart disease “epidemic” began after a period of exceptionally reduced meat eating. The publication of The Jungle, Upton Sinclair’s fictionalized exposé of the meatpacking industry, caused meat sales in the United States to fall by half in 1906, and they did not revive for another 20 years.
In other words, meat eating went down just before coronary disease took off. Fat intake did rise during those years, from 1909 to 1961, when heart attacks surged, but this 12 percent increase in fat consumption was not due to a rise in animal fat. It was instead owing to an increase in the supply of vegetable oils, which had recently been invented.
Nevertheless, the idea that Americans once ate little meat and “mostly plants”—espoused by McGovern and a multitude of experts—continues to endure. And Americans have for decades now been instructed to go back to this earlier, “healthier” diet that seems, upon examination, never to have existed.
Original source found here: http://www.theatlantic.com/health/archive/2014/06/how-americans-used-to-eat/371895/
A new study suggests the long-held industry assumption that bisphenol-A breaks down safely in the human body is incorrect. Instead, researchers say, the body transforms the ubiquitous chemical additive into a compound that might spur obesity.
The study is the first to find that people’s bodies metabolize bisphenol-A (BPA) — a chemical found in most people and used in polycarbonate plastic, food cans and paper receipts — into something that impacts our cells and may make us fat. The research, from Health Canada, challenges an untested assumption that our liver metabolizes BPA into a form that doesn’t impact our health.
“This shows we can’t just say things like ‘because it’s a metabolite, it means it’s not active’,” said Laura Vandenberg, an assistant professor of environmental health at the University of Massachusetts Amherst who was not involved in the study. “You have to do a study.”
People are exposed to BPA throughout the day, mostly through diet, as it can leach from canned goods and plastic storage containers into food, but also through dust and water.
Within about 6 hours of exposure, our liver metabolizes about half the concentration. Most of that — about 80 to 90 percent — is converted into a metabolite called BPA-Glucuronide, which is eventually excreted.
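As a back-of-envelope illustration of those figures, one can assume simple first-order elimination with the article's roughly six-hour half-life and an 85 percent conversion rate (the midpoint of the 80-90 percent range); real BPA pharmacokinetics are more complex than this sketch:

```python
def remaining_fraction(hours, half_life_h=6.0):
    """Fraction of an initial BPA dose still unmetabolized after `hours`,
    under an assumed first-order (exponential) elimination model."""
    return 0.5 ** (hours / half_life_h)

def glucuronide_fraction(hours, conversion=0.85):
    """Cumulative fraction converted to BPA-Glucuronide, assuming 85% of
    metabolized BPA takes that route (midpoint of the article's 80-90%)."""
    return (1 - remaining_fraction(hours)) * conversion

print(remaining_fraction(6))    # 0.5   -> half the dose metabolized in ~6 h
print(glucuronide_fraction(6))  # 0.425 -> ~42% of the dose is BPA-Glucuronide
```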
The Health Canada researchers treated both mouse and human cells with BPA-Glucuronide. The treated cells had a “significant increase in lipid accumulation,” according to the study results. BPA-Glucuronide is “not an inactive metabolite as previously believed but is in fact biologically active,” the Health Canada authors wrote in the study published this week in Environmental Health Perspectives.
Not all cells will accumulate lipids, said Thomas Zoeller, a University of Massachusetts Amherst professor who was not involved in the study. Testing whether or not cells accumulate lipids is “a very simple way of demonstrating that cells are becoming fat cells,” he said.
“Hopefully this [study] stops us from making assumptions about endocrine disrupting chemicals in general,” he said.
The liver is our body's filter, but it doesn't always neutralize harmful compounds. “Metabolism’s purpose isn’t necessarily a cleaning process. The liver just takes nasty things and turns them into a form we can get out of our body,” Vandenberg said. BPA already has been linked to obesity in both human and animal studies. The associations are especially prevalent for children exposed while they’re developing.
Researchers believe BPA does so by mimicking estrogen hormones, but its metabolite doesn’t appear to do so. In figuring out why metabolized BPA appears to spur fat cells, Zoeller said, it’s possible that BPA-Glucuronide is “hitting certain receptors in cells”.
Health Canada researchers were only looking at this one possible health outcome. “There could be other [health] impacts,” Zoeller said.
In recent studies, BPA-Glucuronide has been found in human blood and urine at higher concentrations than plain BPA.
Industry representatives, however, argue the doses used were much higher than what would be found in people.
Steve Hentges, a spokesperson for the American Chemistry Council, which represents chemical manufacturers, said the concentrations at which the researchers saw increased fat cells were "thousands of times higher than the concentrations of BPA-Glucuronide that could be present in human blood from consumer exposure to BPA.
"There were no statistically significant observations at lower BPA-G concentrations, all of which are higher than human blood concentrations," he said in an emailed response.
Zoeller agreed the dose was high but said “the concentration is much less important than the fact that here is a group testing an assumption that’s uniformly been made.” Vandenberg said the range is not that far off from what has been found in some people’s blood.
The U.S. Food and Drug Administration is reviewing the Health Canada study but couldn’t comment before Environmental Health News’ deadline, said spokesperson Marianna Naum in an email.
The agency continues to study BPA and states on its website that federal research models “showed that BPA is rapidly metabolized and eliminated through feces and urine.” Health Canada, which was not able to provide interviews for this article, has maintained a similar stance to the U.S. FDA, stating on its website that it “has concluded that the current dietary exposure to BPA through food packaging uses is not expected to pose a health risk to the general population, including newborns and infants.”
However, the fact that Health Canada even conducted such a study is a big deal, Vandenberg said.
“Health Canada is a regulatory body and this is pretty forward thinking science,” she said. “Hopefully this is a bell that can ring for scientists working for other regulatory agencies.”
This article originally ran at Environmental Health News, a news source published by Environmental Health Sciences, a nonprofit media company.
Summary: In a study that seems to defy conventional dietary wisdom, scientists have found that adding high salt to a high-fat diet actually prevents weight gain in mice. The findings highlight the profound effect non-caloric dietary nutrients can have on energy balance and weight gain, and suggest that public health efforts to continue lowering sodium intake may have unexpected and unintended consequences.
In a study that seems to defy conventional dietary wisdom, University of Iowa scientists have found that adding high salt to a high-fat diet actually prevents weight gain in mice.
As exciting as this may sound to fast food lovers, the researchers caution that very high levels of dietary salt are associated with increased risk for cardiovascular disease in humans. Rather than suggest that a high salt diet is suddenly a good thing, the researchers say these findings really point to the profound effect non-caloric dietary nutrients can have on energy balance and weight gain.
"People focus on how much fat or sugar is in the food they eat, but [in our experiments] something that has nothing to do with caloric content -- sodium -- has an even bigger effect on weight gain," says Justin Grobe, PhD, assistant professor of pharmacology at the UI Carver College of Medicine and co-senior author of the study, which was published in the journal Scientific Reports on June 11.
The UI team started the study with the hypothesis that fat and salt, both being tasty to humans, would act together to increase food consumption and promote weight gain. They tested the idea by feeding groups of mice different diets: normal chow or high-fat chow with varying levels of salt (0.25 to 4 percent). To their surprise, the mice on the high-fat diet with the lowest salt gained the most weight, about 15 grams over 16 weeks, while animals on the high-fat, highest salt diet had low weight gain that was similar to the chow-fed mice, about 5 grams.
"We found out that our 'french fry' hypothesis was perfectly wrong," says Grobe, who also is a member of the Fraternal Order of Eagles Diabetes Research Center at the UI and a Fellow of the American Heart Association. "The findings also suggest that public health efforts to continue lowering sodium intake may have unexpected and unintended consequences."
To investigate why the high salt prevented weight gain, the researchers examined four key factors that influence energy balance in animals. On the energy input side, they ruled out changes in feeding behavior -- all the mice ate the same amount of calories regardless of the salt content in their diet. On the energy output side, there was no difference in resting metabolism or physical activity between the mice on different diets. In contrast, varying levels of salt had a significant effect on digestive efficiency -- the amount of fat from the diet that is absorbed by the body.
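The four-factor bookkeeping the researchers used can be sketched as a simple energy-balance calculation. The numbers below are hypothetical, chosen only to show how a change in digestive efficiency alone, with intake and expenditure held fixed, shifts the energy available for storage:

```python
def energy_stored(intake_kcal, absorption_efficiency, expenditure_kcal):
    """Net energy available for storage: calories eaten, scaled by the
    fraction actually absorbed by the gut, minus calories expended
    (resting metabolism plus activity). Illustrative bookkeeping only."""
    return intake_kcal * absorption_efficiency - expenditure_kcal

# Same daily intake and expenditure for both groups (as in the study);
# only the assumed absorption efficiency differs.
low_salt_surplus = energy_stored(12.0, 0.95, 10.0)   # ~1.4 kcal/day stored
high_salt_surplus = energy_stored(12.0, 0.85, 10.0)  # ~0.2 kcal/day stored

print(low_salt_surplus)
print(high_salt_surplus)
```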
"Our study shows that not all calories are created equal," says Michael Lutter, MD, PhD, co-senior study author and UI assistant professor of psychiatry. "Our findings, in conjunction with other studies, are showing that there is a wide range of dietary efficiency, or absorption of calories, in the population, and that may contribute to resistance or sensitivity to weight gain."
"This suppression of weight gain with increased sodium was due entirely to a reduced efficiency of the digestive tract to extract calories from the food that was consumed," explains Grobe.
It's possible that this finding explains the well-known digestive ill effects of certain fast foods that are high in both fat and salt, he adds.
Through his research on hypertension, Grobe knew that salt levels affect the activity of an enzyme called renin, which is a component in the renin-angiotensin system, a hormone system commonly targeted clinically to treat various cardiovascular diseases. The new study shows that angiotensin mediates the control of digestive efficiency by dietary sodium.
The clinical usefulness of reducing digestive efficiency for treating obesity has been proven by the drug orlistat, which is sold over-the-counter as Alli. The discovery that modulating the renin-angiotensin system also reduces digestive efficiency may lead to the development of new anti-obesity treatments.
Lutter, who also is an eating disorders specialist with UI Health Care, notes that another big implication of the findings is that we are just starting to understand the complex interactions between nutrients and how they affect calorie absorption. It is therefore important, he says, for scientists investigating the health effects of diet to analyze diets that are more complex than those currently used in animal experiments and that more accurately reflect normal eating behavior.
"Most importantly, these findings support continued and nuanced discussions of public policies regarding dietary nutrient recommendations," Grobe adds.
In addition to Grobe and Lutter, the UI research team included Benjamin Weidemann; Susan Voong; Fabiola Morales-Santiago; Michael Kahn; Jonathan Ni; Nicole Littlejohn; Kristin Claflin; Colin Burnett; and Nicole Pearson. The study was funded in part by grants from the National Heart, Lung and Blood Institute, the American Diabetes Association, and American Heart Association.
Original source found here: http://www.sciencedaily.com/releases/2015/06/150611114419.htm
Original study found here: http://www.nature.com/srep/2015/150611/srep11123/full/srep11123.html
To those who enjoy the pleasures of the dining table, the news may come as a relief: drastically cutting back on calories does not seem to lengthen lifespan in primates.
The verdict, from a 25-year study in rhesus monkeys fed 30% less than control animals, represents another setback for the notion that a simple, diet-triggered switch can slow ageing. Instead, the findings, published this week in Nature, suggest that genetics and dietary composition matter more for longevity than a simple calorie count.
“To think that a simple decrease in calories caused such a widespread change, that was remarkable,” says Don Ingram, a gerontologist at Louisiana State University in Baton Rouge, who designed the study almost three decades ago while at the National Institute on Aging (NIA) in Bethesda, Maryland.
When the NIA-funded monkey study began, however, studies of caloric restriction in short-lived animals were hinting at a connection. Experiments had shown that starvation made roundworms live longer. Other studies had shown that calorie-restricted rats kept their shiny coats and youthful vigour while their well-fed brethren grew slow and bald. And more recently, molecular studies had suggested that caloric restriction, or compounds that mimicked it, might trigger a cascade of changes in gene expression that had the net effect of slowing ageing.
In 2009, another study [2], which began in 1989 at the Wisconsin National Primate Research Center (WNPRC) in Madison, concluded that caloric restriction did extend life in rhesus monkeys. The investigators found that 13% of the dieting group died from age-related causes, compared with 37% of the control group.
One reason for that difference could be that the WNPRC monkeys were fed an unhealthy diet, which made the calorie-restricted monkeys seem healthier by comparison simply because they ate less of it. The WNPRC monkeys’ diets contained 28.5% sucrose, compared with 3.9% sucrose at the NIA. Meanwhile, the NIA meals included fish oil and antioxidants, whereas the WNPRC meals did not. Rick Weindruch, a gerontologist at the WNPRC who led the study, admits: “Overall, our diet was probably not as healthy.”
Further, the WNPRC control group probably ate more overall, because their meals were unlimited, whereas NIA monkeys were fed fixed amounts. As adults, control monkeys in the WNPRC study weighed more than their NIA counterparts. Overall, the WNPRC results might have reflected an unhealthy control group rather than a long-lived treatment group. “When we began these studies, the dogma was that a calorie is a calorie,” Ingram says. “I think it’s clear that the types of calories the monkeys ate made a profound difference.”
Researchers studying caloric restriction in mice have become accustomed to mixed results, which they attribute to genetic diversity among strains. Genetics probably explains part of the variation between the monkey studies, too, as the NIA monkeys were descended from lines from India and China, whereas the Wisconsin monkeys were all from India.
The molecular effects of caloric restriction have also turned out to be complicated. Using compounds such as resveratrol, found in red wine, scientists have triggered the stress response that caloric restriction activates, which shuts down non-vital processes in favour of those that ward off disease. But hopes that ageing could be delayed by targeting a single gene or protein in a single molecular pathway have faded, as researchers have learned that the key pathways vary according to the animal. “It may take us a decade to sort out longevity networks,” says David Sinclair, a geneticist at Harvard Medical School in Boston, Massachusetts.
Meanwhile, there is a dearth of evidence that caloric restriction slows ageing in humans. Observational studies have found that people of average weight tend to live longest [3]. Nir Barzilai, a gerontologist at Albert Einstein College of Medicine in New York, says that the centenarians he studies have led him to believe that genetics is more important than diet and lifestyle. “They’re a chubby bunch,” he says.
A more nuanced picture would suit Ingram, who enjoys an occasional feast of Louisiana crawfish. Ingram says that he looks forward to studies of how diet composition, rather than caloric intake, affects ageing. “Is the human lifespan fixed?” he asks. “I still don’t believe that for a minute.”
Impact of caloric restriction on health and survival in rhesus monkeys from the NIA study
Calorie restriction (CR), a reduction of 10–40% in intake of a nutritious diet, is often reported as the most robust non-genetic mechanism to extend lifespan and healthspan. CR is frequently used as a tool to understand mechanisms behind ageing and age-associated diseases. In addition to and independently of increasing lifespan, CR has been reported to delay or prevent the occurrence of many chronic diseases in a variety of animals. Beneficial effects of CR on outcomes such as immune function [1, 2], motor coordination [3] and resistance to sarcopenia [4] in rhesus monkeys have recently been reported. We report here that a CR regimen implemented in young and older age rhesus monkeys at the National Institute on Aging (NIA) has not improved survival outcomes. Our findings contrast with an ongoing study at the Wisconsin National Primate Research Center (WNPRC), which reported improved survival associated with 30% CR initiated in adult rhesus monkeys (7–14 years) [5] and a preliminary report with a small number of CR monkeys [6]. Over the years, both NIA and WNPRC have extensively documented beneficial health effects of CR in these two apparently parallel studies. The implications of the WNPRC findings were important as they extended CR findings beyond the laboratory rodent and to a long-lived primate. Our study suggests a separation between health effects, morbidity and mortality, and, similar to what has been shown in rodents [7, 8, 9], study design, husbandry and diet composition may strongly affect the life-prolonging effect of CR in a long-lived nonhuman primate.