Philip Alcabes discusses myths of health, disease and risk.

Childhood Obesity: NYC’s Little Lies, Big Self-Congratulation

There is very little evidence that obesity is harmful to young children.  So I have to ask why NYC’s Department of Health and Mental Hygiene feels so strongly that fat schoolchildren should be forced to slim down.  And why it’s so eager to congratulate itself today on its policing of eating behavior — see reports by WSJ, Bloomberg, CBS (with photos of fat kids!), Huffington, and many other sources.  Why would the city’s health agency lie in order to claim that its jihad against a not-very-convincing evil has been successful?

The subject is a report published by CDC today claiming that obesity among NYC schoolkids in grades K through 8 has decreased 5.5%.

The city’s health commissioner, Thomas A. Farley, has been true to the shades of history’s empty-headed warriors.  Farley announced that the drop in obesity prevalence is a “turning point in the obesity epidemic,” although it “does not by any means mark the end.”

A missed photo opp:  Dr. Farley standing on top of a fat child, holding up a sign reading, “Mission Accomplished.”

Farley is zealous about controlling people’s behavior and contemptuous of facts (nobody will ever accuse him of being an intellectual, either).  He blogs about his own work for the exclusive reading pleasure of Department of Health staffers.  This allows his staff to read the Farley-esque twist on truth.  One example for now:  in October of 2010, Farley’s blog exultantly told his staff that in 2009 the department had “immunized nearly 130,000 children [against flu] in more than 1,200 schools over a few months.”  Of course, health department employees are smart — many of them knew that the 2009 H1N1 vaccine Farley was talking about was a fiasco, far too late to make a difference, and aimed at an outbreak that was more of a whimper than a bang.

What about today’s “turning point” in the obesity war?  It’s worth noting that the supposed drop in obesity among NYC schoolkids is really just a very slight (1.2 percentage point) difference in the prevalence of obesity between 2006-7 and 2010-11.

A small absolute difference between two modest prevalences looks much larger when it is expressed as a proportion.  So the 1.2 percentage-point actual difference magically turns into the advertised 5.5% — the proportionate change.
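To make the arithmetic explicit, here is a minimal sketch (mine, not the Department’s) using the two prevalence figures discussed below: 21.9% in 2006-7 and 20.7% in 2010-11.

```python
# Minimal sketch of the arithmetic behind the headline number, using the two
# prevalences cited in the report (21.9% in 2006-7, 20.7% in 2010-11).
p_2006 = 21.9  # obesity prevalence (%), NYC schoolkids, 2006-7
p_2010 = 20.7  # obesity prevalence (%), NYC schoolkids, 2010-11

absolute_change = p_2006 - p_2010           # 1.2 percentage points
relative_change = absolute_change / p_2006  # ~0.055, i.e. ~5.5%

print(f"absolute change: {absolute_change:.1f} percentage points")
print(f"relative change: {relative_change:.1%}")  # the advertised figure
```

Same data, two very different-sounding numbers; the press materials chose the bigger one.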

But the false advertising gets worse:

1.  The prevalence of obesity in NYC was not measured multiple times on the same group of kids (to use epidemiology jargon:  this wasn’t a panel study).  Nobody observed fat children becoming less fat.  The city simply measured obesity prevalence each year on 5- to 14-year-olds who were in the school system.  So a high proportion of the 21.9% of kids who were labeled obese in 2006-7 would have been out of the age range for the 2010-11 assessment.

Plus, lots of kids leave the NYC school system after grade school (this has to do with the Bloomberg administration’s bizarre system for preventing children from attending local schools).  So even those children who haven’t aged out of the analysis by turning 15 would be absent from the data after a few years.  And there’s also natural immigration and emigration.

Did the 2006 fat kids get slimmer?  Nobody knows.  The 2006-7 obesity prevalence among NYC schoolkids (21.9%) and the 2010-11 prevalence (20.7%) describe largely different groups of children, so the two figures can’t be compared as if the same kids had been weighed twice.  If you were forced to compare the numbers anyway, you’d say there had been a slight change — not a 5.5% decline.  There’s the first lie.

2.  The second lie is a little more complicated.  Since there is no widely accepted functional definition of childhood obesity, children are labeled obese if their body-mass index (BMI) falls at or above the 95th percentile of a reference distribution of BMI for their age and sex.  That reference is based on an old-fashioned standard.  Fair enough.  But lots of distributions shift over time — SAT scores, human height, grades awarded at Ivy League colleges, and global average temperature, to name a few.

Sometimes the reason for an overall shift of this sort isn’t hard to specify (test prep, nutritional quality, relaxation of grading standards, generalized global warming, etc.).  But the main effect causing a shift in the distribution doesn’t explain why the few people in the upper reaches of the distribution are so far from the mean.  And to say that fewer children are now above the high-BMI cutoff than in 2006-7, and that the tendency of children to be fat is therefore declining, is a lot like claiming that because 2011 was cooler than 2009 and 2010, global temperatures are not really going up.  (For a sense of scale, see the back-of-the-envelope sketch after point 3.)

(Dr. Farley, I gather that statistics aren’t your strong suit, but surely when you witnessed that snowstorm we had this past October — an outlier if there ever was one — you didn’t conclude that the climate is actually getting colder, not hotter.  So what makes you think that a very tiny decrease in the proportion of kids with high BMIs means that the city’s kids are getting slimmer?)

3.  Claiming credit.  Attributing a minuscule change in the proportion of kids in the upper tail of the broad BMI distribution to the health agency’s own efforts requires self-congratulation so acrobatic as to stretch credulity.

Maybe there really has been some change in the city’s children since 2006.  Or in our food supply or buying habits.  Or in exercise habits.  But to claim both that such a change caused the tiny decline in schoolkid obesity prevalence and that the change was the result of the Health Department’s efforts — the exercising and the low-fat milk and the salad bars in the school cafeterias and so forth — is to commit the fallacy that René Dubos outlined (in his book Mirage of Health) more than 50 years ago:

When the tide is receding from the beach it is easy to have the illusion that one can empty the ocean by removing water with a pail.
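Here is the back-of-the-envelope sketch promised under point 2.  It is my own simplification, not anything in the CDC report: assume kids’ BMI z-scores are roughly normally distributed, and ask how far the whole distribution would have to slide to move the fraction above the fixed 95th-percentile cutoff from 21.9% to 20.7%.

```python
# Back-of-the-envelope sketch (my assumption, not the report's method):
# model kids' BMI z-scores as a normal distribution shifted upward relative
# to the reference that defines the 95th-percentile "obesity" cutoff, and ask
# how far the whole distribution must slide to change the obese fraction
# from 21.9% to 20.7%.
from scipy.stats import norm

cutoff = norm.ppf(0.95)  # the reference 95th-percentile cutoff (~1.645 SD)

def implied_shift(prevalence):
    """Mean shift (in SDs) at which `prevalence` of a unit normal exceeds the cutoff."""
    return cutoff - norm.ppf(1 - prevalence)

shift_2006 = implied_shift(0.219)  # ~0.87 SD above the reference mean
shift_2010 = implied_shift(0.207)  # ~0.83 SD above the reference mean

print(f"implied shift, 2006-7:  {shift_2006:.2f} SD")
print(f"implied shift, 2010-11: {shift_2010:.2f} SD")
print(f"difference:             {shift_2006 - shift_2010:.2f} SD")  # ~0.04 SD
```

Under that crude model, the entire advertised decline corresponds to the distribution shifting by roughly 0.04 standard deviations.  That is the scale of change being celebrated as a turning point.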

Is childhood obesity really a health problem?

It’s not crazy for health professionals to be concerned about body mass.  Obesity might be really bad for some people, and somewhat bad for many.

But those people are adults.  Why are health agencies like NYC’s so riled up about obesity in little children?

So far, there’s no strong evidence that obesity in younger children predicts any real harm later in life, other than being a fat adult.  With adults, several signs of impending debility are more commonly found in the obese than the non-obese, such as hardening of the arteries, fatty liver, sleep apnea, and diabetes.   And with adolescents, there’s some evidence that those who are obese develop similar warning signs.  But not younger kids.

A 2005 BMJ paper reported only social effects of early obesity in adulthood (being unemployed and being without a romantic partner).  Similarly, one cohort study carried out in Newcastle upon Tyne found little evidence that fat children became fat adults, and no evidence for predictors of illness in adulthood among those who had been overweight as children — although other studies have shown correlations between adolescent obesity and adult problems.

For kids below age 15, the most visible problem with obesity is that it occurs most commonly among the poor and dark-skinned.  This bothers the obesity warriors.  In fact, not only is obesity more common in African- and Hispanic-American children in NYC; even the slipshod standards of today’s report on NYC schoolkids can’t be manipulated to show that obesity is declining among these children.

As with all holy wars, from the Children’s Crusade through the U.S. invasion of Iraq, the warriors aren’t really concerned about principle.  Something about somebody got under their skin.

Here’s how I answer my own question:  I guess the obesity crusaders don’t like it when the children of the wealthy look like the children of the poor.  They think that white kids on the Upper East Side aren’t supposed to look like kids who live in the Bronx.

It isn’t about health, in other words.  It isn’t even about obesity.  The “childhood obesity epidemic” is about making sure society looks the way that the health crusaders want it to look.


W.H.O. and the Medical Industry

At EP-ology, Carl Phillips has a new post on the World Health Organization’s failure to care about suffering.   It’s worth reading — especially if you (still) believe that the WHO’s main aim is promoting health.

Phillips’s focus in that post is on a new WHO Atlas on headaches, and on the problem that headaches cause people to stay home from work or work less productively.  The agency estimates that, Europe-wide, the lost productivity from migraines alone is worth 155 billion euros each year.  It isn’t that you feel crummy when your head hurts, or that chronic headache makes your life miserable.  It’s that you might not perform your expected per-capita service to the expansion of wealth.

Here’s how EP-ology assesses the agency:

The WHO is not the humanitarian organization that many people might think it is.  It is a special-interest medical-industry-oriented organization with an emphasis on the interests of governments, not people.  Its emphasis on productivity in looking at headaches … ignores people’s welfare…

Now, I can’t agree with Phillips’s analysis that the WHO’s ethical system is either “communist” or “fascist.”  For self-described public health agencies like the WHO to be concerned primarily with productivity and the generation of wealth — and only secondarily, if at all, with suffering — has been a hallmark of capitalism since the British Parliament passed the world’s first Public Health Act in 1848.

In fact, the laws institutionalizing public health in Britain in the late 1840s were passed by the Whig (liberal, more or less) government of Lord John Russell.  Public health was a legacy of efforts not by the nascent socialist and communist movements, but by radical capitalists — who sought to secure a moderately hale labor force to serve British industry at little cost to the factory owners, and who aimed to blame individuals for their own misery.

But it’s impossible to disagree with the main point of Phillips’s post:  WHO’s aim is to serve industry.

As further evidence, consider this follow-up note on Tamiflu by Helen Epstein, published in the May 26th issue of NY Review of Books (I discussed Epstein’s main article in a post last month).  It seems more and more apparent that potential dangers of Tamiflu (oseltamivir) in children were ignored.  Epstein reports that

the risks of delirium and unconscious episodes were indeed significantly elevated in children who took Tamiflu, especially if they took the drug during the first day or so after influenza symptoms appeared….  If these results are confirmed, they are especially worrying, since the World Health Organization and the US Centers for Disease Control both recommend that Tamiflu be taken as soon as possible after symptoms appear.

I was not the only one unaware of this important study; neither, apparently, were the World Health Organization, the US Food and Drug Administration, and the US Centers for Disease Control. When I contacted these agencies in January and February 2011, their spokespeople assured me that there was no evidence that Tamiflu causes neuropsychiatric side effects in children. [emphasis added]

In the rush to move taxpayer monies into the hands of wealthy private corporations, the WHO (with CDC and other agencies) proclaimed a flu emergency in 2009.  And ignored evidence on possible dangers of the products they were touting as part of the “preparedness” response.

Nuclear Energy and Risk

Elizabeth Kolbert is a fine science writer.  Her explanations of the complicated mechanisms — geothermal, marine chemical, atmospheric, and so forth — underlying climate change are clear and compelling.

But I confess I’m no fan of her work.  Kolbert’s sky-is-falling! rhetoric is a little too florid, and her criticism of people who don’t act environmentally a little too pointed.

Yet, her short piece in this week’s New Yorker, “The Nuclear Risk,” is terrific.  It’s worth reading.   She gets at a central lesson of the radioactivity crisis that followed on the earthquake + tsunami disaster:  you can only plan for the disasters you’re able to conceive of.  The Japanese catastrophe, she writes

illustrates, so starkly and so tragically, [that] people have a hard time planning for events that they don’t want to imagine happening. But these are precisely the events that must be taken into account in a realistic assessment of risk. We’ve more or less pretended that our nuclear plants are safe, and so far we have got away with it. The Japanese have not.

That the nuclear crisis is supposedly under control now, or might be under control if some new problems are dealt with, doesn’t change the planning problem (and have a look at this blog post by Evan Osnos for a worrying take on what happens to people who are facing such a triplex disaster scenario).

Kolbert relates the problem of nuclear planning in the U.S. to corporate interference with regulatory agencies, quoting the Government Accountability Office’s finding that the Nuclear Regulatory Commission has based its policies

on what the industry considered reasonable and feasible to defend against rather than on an assessment of the terrorist threat itself.

It’s disturbing that industry and regulators are on intimate terms, but it isn’t exactly news — not in regard to energy policy, nor health policy (for example, consider the CDC’s Advisory Committee on Immunization Practices, which I wrote about a year ago).   The comfortable collusion between corporations and government agencies is an issue — but it’s not the most troubling lesson of the Japanese crisis.

Rather, the main event is the inevitability of unforeseen and unforeseeable disasters.  And the simple impossibility of making plans to avoid what can’t be imagined.

Which is where I part company with Kolbert.  Would better planning (or stricter regulation of industry) have avoided the near-catastrophic radioactive release at Fukushima Daiichi?  Yes, perhaps.  But nobody could have foreseen an earthquake of this magnitude, or infrastructure so destabilized by a tsunami as fast-moving and destructive as this one, or the double-punch effect occurring where it did and how it did.  There’s only so much you can plan for, because there’s only so much you can envisage.

And that’s the problem with the idea of planning to reduce risk.  You plan for what you know. Maybe you plan for something a little worse than what you’ve seen before — but even that is basically what you know, with a little juicing to make it livelier.   Even the pure-fantasy regulatory agency — the one with firewall immunity from influence by industry, perfectly competent engineering of its plans, and state-of-the-art technology — can’t foresee every eventuality.  Therefore, even the best planning won’t eliminate risk.

In the end, the question isn’t just how to keep the energy industry away from the regulators.   It’s how to live in a universe that isn’t completely predictable, no matter how good you think your “science” is.   And is ruled by random, implacable, and sometimes highly destructive nature.

USPHS Back in Bed with Big Pharma

Just in case you thought that the U.S. Public Health Service’s main interest is the public’s health:

Recently, Paul Sax reported at The Body on a plan to issue guidelines on the use of pre-exposure HIV prophylaxis (PrEP) using a combination of antiretroviral drugs, announced in the January 28 issue of CDC’s Morbidity and Mortality Weekly Report. The effect of issuing guidelines is to endorse the procedure, which will help enrich pharmaceutical companies — the first being Gilead, which makes Truvada (combination of tenofovir + emtricitabine).

Here’s the CDC’s rationale for issuing interim guidelines now, with formal guidelines to follow:

CDC and other U.S. Public Health Service (PHS) agencies have begun to develop PHS guidelines on the use of PrEP for MSM at high risk for HIV acquisition in the United States as part of a comprehensive set of HIV prevention services…  [W]ithout early guidance, various unsafe and potentially less effective PrEP-related practices could develop among health-care providers and MSM … [including]

1) use of other antiretrovirals than those so far proven safe for uninfected persons;

2) use of dosing schedules of unproven efficacy;

3) not screening for acute infection before beginning PrEP or long intervals without retesting for HIV infection; and

4) providing prescriptions without other HIV prevention support (e.g., condom access and risk-reduction counseling).

Translation:  if CDC or another USPHS agency doesn’t do something now, homosexual men might not buy as much medication as they could.

What’s the impetus for this guidance?  Results of the iPrEx study, which was supported by the National Institute of Allergy and Infectious Diseases at NIH, were published in the New England Journal of Medicine in December.  The study purported to show a 44% reduction in HIV incidence among men who have sex with men and who took Truvada prior to sexual exposure.  But the study was so deeply flawed, and the authors so cagey about their methods, that it’s impossible to conclude that Truvada makes any difference to the chances of acquiring HIV.

As the iPrEx trial’s logo implies, it was multinational, involving almost 2500 HIV-negative people who were male (at birth) and adjudged to be at high risk of acquiring HIV because of their pattern of sexual activity.  It involved sites in Peru, Brazil, Ecuador, South Africa, Thailand, and the U.S.  The comparison was between subjects taking Truvada and subjects taking a placebo.

The famous 44% reduction, however, was clearly not obtained in each site — and the authors don’t state which sites showed more effect.  More importantly, the reduced HIV incidence among those taking Truvada occurred only for a small subset of subjects who stayed on the drug for more than a year without becoming infected.  And it only lasted for about one additional year.

In other words, in the iPrEx study, people who took Truvada and remained HIV-negative for a year were slightly less likely to acquire HIV in the following year than were those who took placebo and remained HIV-negative.
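To see what a figure like “44%” means arithmetically, here is a minimal sketch using hypothetical counts (chosen to be roughly the scale of a trial like this one, but not the published iPrEx figures).

```python
# Hypothetical counts, used only to illustrate how a "44% reduction" is
# computed; these are NOT the iPrEx trial's actual published figures.
infections_truvada, n_truvada = 36, 1250
infections_placebo, n_placebo = 64, 1250

risk_truvada = infections_truvada / n_truvada  # ~2.9% over the study period
risk_placebo = infections_placebo / n_placebo  # ~5.1% over the study period

relative_reduction = 1 - risk_truvada / risk_placebo  # ~0.44, i.e. ~44%
absolute_reduction = risk_placebo - risk_truvada      # ~2.2 percentage points

print(f"relative risk reduction: {relative_reduction:.0%}")
print(f"absolute risk reduction: {absolute_reduction * 100:.1f} percentage points")
```

A relative reduction of that size can sit on top of an absolute difference of a couple of percentage points; the relative figure is the one that makes the headlines.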

Finally, even the small, second-year-only effect of Truvada is of questionable use to men in the U.S.  Because the study was based on men living in places with extremely high HIV prevalences — higher than those in much of the U.S. — and involved men having a large number of partners, it provided essentially no evidence for any utility in the U.S.

As other trials of pre-exposure chemoprophylaxis are going on now, other companies’ products are likely to be included in the final version of the CDC guidelines.  So more corporations can benefit from the largesse of the Public Health Service.

Condoms are very effective at interrupting HIV transmission.  Obviously, you have to use them (properly) in order to benefit from that effect.  And because people don’t like them very much, the argument goes, condom promotion is a poor public-health strategy.

But as a matter of guidance for men who have sex with men, in what way is it better for the USPHS to suggest Truvada, which has to be used consistently even when you’re not having sex, probably won’t take effect for a year or so, and even then will only give you a minor reduction in the chances of acquiring HIV — rather than condoms?

Answer:  it is if you’re trying to promote profits for the pharmaceutical industry.

Plague Did Not Begin in China. And Why Should Anyone Think It Did?

Nicholas Wade, the NY Times‘s science writer, jumps the gun with a story today asserting that plague began in China.  Maybe it’s understandable:  you don’t often get a front-page story if you’re a science reporter, so once in a while you take some shaky science and turn it into an international incident.

But understanding why the story is wrong means recognizing a weakness of science as it’s often practiced today.

Wade’s claim is based on two papers published this month.  A relatively well done study by Haensch et al. in PLoS Pathogens earlier in October tested human remains from well-identified plague pits — burial sites for medieval plague victims — in different parts of Europe.  Researchers amplified DNA sequences of the plague bacterium, Yersinia pestis, at specific genetic loci, and tested to see whether the DNA matched known sequences of contemporary Y. pestis genes.

The findings published in PLoS suggest that the Black Death and perhaps subsequent waves of plague in Europe were indeed caused by Y. pestis — which would tend to debunk the theory proposed by some British researchers that the Black Death was some kind of viral hemorrhagic fever outbreak.  And they suggest that there were at least two widely different Y. pestis strains involved in different parts of Europe.  Here’s a bit of the abstract:

[O]n the basis of 17 single nucleotide polymorphisms plus the absence of a deletion in glpD gene, our aDNA results identified two previously unknown but related clades of Y. pestis associated with distinct medieval mass graves. These findings suggest that plague was imported to Europe on two or more occasions, each following a distinct route.

The main weakness here is that DNA could not be amplified from all of the plague pits the researchers studied.  After using alternative means to test the DNA debris against contemporary gene sequences, the investigators concluded that the absence of genetic material reminiscent of one strain of Y. pestis was evidence that that strain was not in play in that part of Europe at the time.  Probably right, but it stretches the available evidence.

It’s a common mistake, alas.  To paraphrase Karl Popper:  just because you see DNA from white swans and don’t see any DNA from black swans doesn’t mean that black swans don’t exist.

Still, the PLoS paper is persuasive that more than one strain of the plague bacterium was circulating, and probably causing deaths, in the plague period in Europe.  Of course, it says nothing about China.

So where does the NYT reporter get his headline-grabbing story?  A paper to be published in Nature Genetics online (still embargoed at the time I’m writing, but a summary appears here) states that the sequences of plague DNA amplified from plague-pit remains, as well as from contemporary isolates, can be placed on a molecular clock because of the occurrence of unique mutations.  Winding the clock backward, the researchers conclude that the Ur-plague organism, ancestor of all Y. pestis, came from the Far East.

The molecular biology may be unimpeachable, but the inferences about history aren’t supportable by molecular evidence.  That might explain why they’re almost certainly wrong.

The problem (scientists, I hope you’re listening!) is that you may know very well what you know, but you can never know what you haven’t seen.  The phylogenetic tree has its roots in China.  Here is one proposed by some of the same authors in a 2004 PNAS paper:

In this set-up, isolates of Y. pestis from China seem closest to the primordial strains.

But of course, the molecular clock doesn’t take account of strains that are no longer extant.  Or of ones that haven’t been unearthed.  The contemporary researchers don’t see them (or don’t know how to look), so, as far as the analysis is concerned, they don’t exist.

It’s a bad mistake, inferentially.  And historically.  It’s where the NYT writer goes wrong.  Almost certainly, plague did not begin in China.  It began as an enzootic infection of small mammals in the uplands of central Asia.  This is the story convincingly relayed by William H. McNeill in Plagues and Peoples a generation ago, and none of the many accounts I’ve read since then has debunked it.

Plague would have had to begin in an ecosystem in which it could circulate at moderate transmission rates with little pathogenicity among small mammals (the natural host of the bacterium).  Exactly where it started remains open to question, but it was probably in the area that is now Turkestan/Uzbekistan.  With the development of trade between that region and China, intermixing of local (central-Asian) animals with caravan-accompanying rats would have allowed Y. pestis to adapt to the latter.

Quite possibly China was the source of the first human outbreaks of plague — because the river valleys of China were settled and agricultural (therefore offering feeding opportunities for rats as well as multiple opportunities for rat-human interaction) long before Europe was.  That fact probably accounts for the biologists’ (mistaken) belief that their early samples show that Y. pestis started out in China.

But plague began as — and remains — a disease of animals.  To acknowledge that human outbreaks in China preceded the human outbreaks in Europe (the Justinian plague that began in the mid-sixth century, the Black Death that began in the 1340s, and subsequent visitations) is not the same as saying that plague originated in China.

Which it didn’t.  Plague is an animal disease from Central Asia.  Plague’s long history is the usual one:  ecosystem change, trade, animal-human interactions, alterations in climate and economic conditions, and occasional opportunities for mass human illness.   (One world, one health.)

Above all, remember that science is only capable of drawing conclusions about what scientists can observe.  Don’t be taken in by hair-raising stories.  Even in the NY Times.