Thursday, May 27, 2010

Who's afraid of the distant Wolf?

If one can make up evolutionary must-be stories, then it must be that natural selection programmed humans to be short-term worriers. We simply seem unable to restrain our short-term self-interested motives for long-term ends. Maybe we don't really believe in religious long-term promises, or don't really care what happens to our descendants after we become ant food. But whatever the reason, as this story exemplifies, if the wolf isn't right at the door, we don't seem to want to lock it.

Climate change is a slow process, and must be inferred from either short-term direct measures or long-term indirect ones (deep ice cores, tree rings, and so on). Apparently Britons, who were very strong believers in the seriousness of climate change and ready to do something about it, are now waning in their belief and in their enthusiasm for appropriate political and behavioral change.

In part this may be due to 'climategate' and other evidence that the science has in some cases been misrepresented, or that its scientists seemed self-protective. But partly, if not mainly, it is likely due to our 30-second attention span, our boredom with the topic du jour, and the lack of much immediacy in the evidence.

Polar bears are, after all, there not here. The Northwest Passage is fine for container ships, but we'll never actually see it (except as a novelty entertainment from Google maps). All the same food is still at the grocery, it's cold outside in May, and I want to keep my many electronic devices on standby status to activate with my remote. Biofuels from corn apparently aren't all that good a solution, and they raise the price of food. And I like driving rather than walking round the corner for a bottle of milk. So it's easy to see no urgency in any of the green-believers' warnings.

So, if evolution gave us a cognitive ability to assess our surroundings beyond what we can immediately see, why are we not able to deal with long-term issues and override our emotional need for the next moment's entertainment or indulgence?

Maybe we're just doomed to live (or not) from crisis to crisis. Only when we can actually hear the cry of the wolf will we shut the door.

Wednesday, May 26, 2010

Making your bed and lying in it

Bulletin from the Boys are Different from Girls Bureau:
The news is in! Men lie more than women!

We're not talking about couch potatoes during the football season. No, a report just in from the Highly Precise Statistics department (of the British Science Museum) finds that men lie a whopping 3 times a day, while women lie only a measly 2. Lying to your Mum is one of the commonest offenses (assuming something as common as 3 lies a day still counts as an offense), while lying about how much you had to drink is right there in second place.

Gifts: Apparently when she says "It's just what I've always wanted," she doesn't mean it. This is reported as a lie, but probably it is a nice (if quietly patronizing) way of saying how much she appreciates the thought. Since most men wouldn't know a good gift from a white elephant, she's just being nice and shouldn't be blamed for her euphemism.

Our every move is under the scope of social survey researchers, who tirelessly scour the landscape for anything they can think of to get a grant to study. Now, according to the BBC story at least, we don't know how the surveyor knows who's lying to her and who isn't. Is admitting to lying about how much one drank a euphemism for not admitting who one was drinking with?

Westerners often say that in the very polite society of Japan, 'yes' means 'no'. Cultural conventions can blur the meaning of 'lie' in the context of western social surveys. If the lawyer for someone who's suing you for hitting their car writes "Dear Mr. X", is that 'dear' a lie? Perhaps the 'yours sincerely' is true (since he's sincerely demanding damages).

Of course in the daily mail we get letters with offers 'for you only!', and we know those are lies. But we excuse the advertising industry because they're paid intentional and habitual liars. And we excuse politicians because they're often not intelligent enough to know the difference--or they'd say they knew they were lying but that, if elected, they would do so much noble good for the public that the lies are justified.

And, as it turns out, according to yet more social scientists, toddlers who lie do better in life! We teach our kids to tell the truth, but perhaps they quickly cotton on to the fact that we don't follow our own precepts.

Nature or nurture? Does the Science Museum know or doesn't it?
Katie Maggs, associate medical curator at the Science Museum, says the jury's out as to whether lying is a result of our genes, evolution or our upbringing.
"Lying may seem to be an unavoidable part of human nature but it's an important part of social interaction," she says.
The museum in west London is launching a gallery called "Who am I?" which makes sense of brain science, genetics and human behaviour.

At least someone there thinks they can tell us what it all means! Unless they're lying....

Tuesday, May 25, 2010

Off limits? Says who?

The other day we did a post on darwinism and the societal consequences of its use and misuse. Only the nastiest and most self-serving person can deny that Darwin's ideas about the naturalness of inequity have been used at the cost of many people's lives. The eugenics movement, the Nazi regime, and Stalin's Soviet Union all, in their various ways, used versions of evolutionary theory to victimize millions.

A counter to that is that society has been willing to find reasons to brutalize other people since time immemorial, and science is not particularly guilty. Science may aid and abet, and scientists have an ongoing history of showing that they can be bought. But if evil politicians on the right or the left (Hitler or Stalin, for example) couldn't find justification for their actions in science, they'd likely have found some other excuse. After all, today's Islam and medieval Christianity (crusades, Inquisition) make the point. No culture is innocent.

So the argument goes that the real world is as it is, and our job as scientists is to discover and understand it. We're not responsible, so the usual (self-serving) story goes, for how our findings are used. Atomic physicists developed the atomic bomb, which one day might annihilate our civilizations, but after all they were just helping atoms do what they do naturally. Weren't they? World War II presented some particular complications to this story, but the eugenicists of the late 19th and early 20th century don't have such an excuse. They did not, after all, have to use Darwin to decide who was fit and who wasn't. In that case it was scientists, not just politicians, who did the damage.

The Nazi abuses were in many ways led by prominent doctors and anthropologists. So maybe the lesson of history is to pass laws restricting what science can actually study. Robert Proctor of Stanford (late lamented of Penn State) has coined the term 'agnotology' for the study of ignorance. He had a different context in mind, but we might say that there are things we could know, but shouldn't, for societal reasons.

Various best-selling popular science books have basically argued that molecular biology will eventually explain everything, including art and esthetics. Their hubris has potentially ominous overtones for anyone who has read any of the history of claims of genetic or Darwinian inherency. The current romance with impersonal science has led the poet and novelist Wendell Berry to say (in his book, The Way of Ignorance: And Other Essays) that there are two nuclei we humans should simply keep out of: the atom and the cell.
That view won't go down well with scientists, of course, who usually argue that stifling research is not only impossible but wrong. The truth must be known! But that, too, is of course transparently self-serving.

We currently have many real constraints on research. Universities have Institutional Review Boards -- largely in service to the institution, it's true, but they do set limits on research that are generally agreed on. You can't do experiments on people without telling them, as best you can, what you're up to. And there are limits to what you can do even if you inform people ahead of time. For example, you can't do a study of how long people can stand having their skin peeled off, or how long they can stay under water without drowning (the Nazis tried that). But neither can you expose people needlessly to x-rays, toxic chemicals, or pain, or publish private information, and so forth. If you call it biomedical research, you can do a lot to mice, but you can't outright torture them! So we certainly do have precedent that could allow us to decide, as a democracy, to forbid the study of, say, the genetics of race and IQ, or of many aspects of behavior that are clearly far more about social structures, like inequalities, than about genes.

Scientists bristle at any such suggestions, and of course there is the question of who should be privileged to decide on the exclusions and on what basis. But historically we in science are the privileged class that rakes in the grants and salaries but doesn't pay the social consequences when our findings are misused. Of course the majority of our work is either used for good or is useless. The question, and it's not an easy one, is how much of Mother Nature we should simply not attempt to understand, for societal reasons.

One thing that we can't legitimately do is to make the politically correct assertion that, yes, basically any disease has a genetic component and is fair game, but genes have nothing to do with behavior. If genes are major causal factors in all ethically 'safe' traits like disease, they will also be relevant, and perhaps comparably important, in socially sensitive traits. But, for reasons that we've discussed on MT many times, we think that the current determination to geneticize such traits--behaviors and disease alike--is not going to work out very well (though one can always, after the fact, point to successes and claim victory). But that's our view and is beside the point.

The point here is that putting some traits off limits would not be the same as declaring they aren't 'genetic'. It would be a decision that some things are simply judged to be more potentially harmful than good to learn about. Even the Constitution doesn't allow crying Fire! in a crowded theater, and for similar reasons it would be perfectly legitimate to say that there are areas in which scientists should not play with societal fire.

Monday, May 24, 2010

William James and Varieties of Religious Experience

On In Our Time on BBC Radio 4 two weeks ago (this post kept getting pre-empted by breaking news!), the host Melvyn Bragg and his guests, three philosophers, talked about William James' book, The Varieties of Religious Experience: A Study in Human Nature. James (left) was a psychologist, invited to Edinburgh in 1901 to deliver 20 lectures on "Natural Theology" in the prestigious Gifford lecture series, which he then published. He spoke, and then wrote, about religious experience from a personal point of view, which laid the groundwork for a new field, the psychology of religion.

'Personal' meant to James that what we choose to call 'religious', whether or not one experiences it in terms of an external being--'god' in some form or other--is about one's inner experience, not an outer set of truths or claims. James claimed that, upon surveying religious experience, what was shared was something like the feeling of being a part of some larger cosmos (our term), rather than a boost for one's ego, or the idea that one is a special creature of God.

James basically limited his thoughts to the religions of the west and, to some extent, of other civilizations, drawing on humanistic reports rather than an anthropological survey of pre-agricultural peoples. He was trying to explain the widespread nature of religious experience in psychological rather than theological terms.

James was originally trained as a biologist, and thought of himself as a Darwinian. But his arguments about religious experience were about the internal world we each experience (as far as one can determine it scientifically), and in a sense from a philosophical point of view. That is, he did not want to reduce it, or materialize it, in Darwinian terms, but to take it as something in itself, as experience.

To those who are aware, science has for a couple of centuries now been putting dogmatic religion on the run. Darwin helped, but industrialization and the successes of physical science were at least as important: the world could be manipulated and understood empirically, and that did not gibe with sacred texts.

Religion is far from being on the run in terms of its nature as a motivating social fact, often with tragic consequences, as various conflicts between peoples continue to show. These are based on religion as a rallying cry for tribal conflict, and presumably many who engage in religious conflict would say that they have had relevant religious experience. James would say, probably, that they were interpreting that experience as if it came from the external world--some actual deity out there. But his point, which still seems relevant, is that people still have the experience of being only a small part of a larger whole, a feeling that, in itself, is not dependent on any specific doctrinal explanation of what that part, and that whole, are.

James, a Harvard faculty member, was one of the founders of modern psychology, and wrestled in his other works with the nature of consciousness, which is a related issue. He was a pragmatist, avoiding grand theories of the mind and instead trying to understand what mental states were, and meant. He was quite nervous about giving his Gifford lectures to Presbyterian (often stern Presbyterian) Scotland, but his lectures were a major success. In the end, in the last lecture and last chapter of his book, he asks whether, after all, he'd come down on the side of there actually being an external, real 'God' of some form. And, perhaps in a concession to avoid too much controversy (maybe to stay out of trouble with his Harvard administrators, colleagues, and students?), he said he'd bet on the 'yes' side.

Today's arguments by Darwinians often try to explain why we have what we call religious experiences. Those arguments aim to materialize or geneticize the experience, to show what they assume--that such experiences are illusory rather than reflecting an externally real deity--and to provide a selective argument for why such experiences would have evolved. James was after something different, and influenced a century of psychological (and philosophical) thinking after his own time.

But his target was to explain the nature of the experience, not its evolutionary origins. Both have proved elusive targets for science.

Friday, May 21, 2010

Berkeley makes a bad call

John Hawks writes at john hawks blog about a decision by UC Berkeley to give incoming freshmen the chance to send in cheek swabs for DNA analysis of 3 genes involved in regulating the ability to metabolize alcohol, lactose and folates. The genetics professor spearheading the effort was quoted in the New York Times as saying
“The history of medical genetics has been the history of finding bad things,” he said. “But in the future, I think nutritional genomics is probably going to be the sweet spot.”
Hawks finds this whole thing very bizarre, and so do we. "I'm torn between 'colossally-bad-ideas' and 'university-auditions-for-big-brother'," he says. And,
In fact, there is no credible science that supports the idea that knowing your lactase persistence genotype, alcohol metabolic genotypes, or "folate" metabolic genotypes will improve health.
This information is useless. It's a total waste of money. It gives a highly misleading picture of genetics.
Yep. And UCB is already fiscally behind the 8-ball (maybe the Chancellor should be tested for the DRD receptor to cure his investment risk-taking behavior). Will the next thing be genetic ancestry tests to determine who should be admitted from various racial groups, and who may be hiding or over-stating their ancestry for preferential treatment?  

As Hawks says, the only real thing this will do is get kids used to genomic testing for just about anything.  

Fake life! Starting out, but not starting over

Frankenstein, move over!
Sooner or later it would be claimed that life had been created in the proverbial test tube, and a forthcoming article in Science (here's one news report, and the blurb in Science, from which the image here is taken) makes an expected start in this direction.

Craig Venter has synthesized a bacterial genome from scratch, so to speak, in the sense of taking extensive studies on the genes bacteria really need, stringing them together, and inserting them into a genetically empty bacterial cell. The resulting creature--surely there'll be a 'droid' kind of Sci-Fi name for it--behaved as its genes had been expected to dictate. We're sure that Madison Ave and Hollywood, our guardians of the truth, are already at work.

This is a high technological achievement that confirms various ideas that were already well established. It is not a conceptual advance in that sense. But it could truly suggest that we're starting out on an open-ended venture in the genetic engineering of microbial species. Many problems ranging from ecology to medicine to agriculture could be the beneficiaries.

But we should be careful not to let any hype-engines distort what has actually been achieved. This is not life starting over! This is creating life, but only creating it after it has evolved for 4 billion years. That's because only after that time have we got cells to work with and genes to screen to see what's needed. And what's needed is needed because of those 4 billion years of evolution. So this has nothing to do with the origin of life, and is not inventing life in some fundamentally new form.

Nonetheless, if this approach is flexible, it will allow cellular robots to be constructed for ad hoc purposes. The news story quotes the usual concerns about the potential dangers. What if these things escape from the lab and get out into the real world? Will they out-do anything Japanese horror films can cook up?

Nobody knows, of course, but at least previous similar concerns, over recombinant DNA, did not materialize. Much more likely than an unintentional escape are intentional military releases; various militaries are probably going to be looking intently at these results. Intentionally constructed bad bugs could be devastating, and nothing in human life suggests that it's unimaginable that they'll be produced. Could indefinite constraint exist, as it has--to date--since WWII with atomic weapons? Or are the genii, including the nuclear weapons genie, once out of the bottle inevitably uncontrollable?

At least, at this stage one can think of many positive possible uses of truly engineerable bacteria, and whatever else is to follow (like doing the same with more complex diploid cells).

Thursday, May 20, 2010

The ayes of Texas

Well, is there anything that Texans are too ashamed to do? Probably not. Judge fer yerself, podner:

An imminent vote on textbooks will tell the tale about what's true and what kiddies' li'l ol' innocent ears are ready for. Apparently the ayes in the Texas schoolbook commission are all in favor of propaganda that for transparent political crassness makes Soviet propaganda look downright amateurish.

Because Texas and California together dominate the school textbook market, decisions in these states about what to include in texts have long dominated the industry, and cannot be dismissed as just backwater doin's. If Texas sez Darwin wuz wrong, well then, podner, he was wrong! If there wuzn't any slav'ry in the US, then there wuzn't an' don' try sayin' ther wuz. Thomas Jefferson didn't exist. Capitalism is God's mandate (maybe Jesus said something to that effect in the Sermon on the Mount, though it seems the Message was rather the opposite). Senator McCarthy, a hero. And much, much more in the same vein.

Are we (or, at least, are they) in the age of science? Apparently not. A small group of us elitists, who have the disgusting characteristic of having had at least some sort of education, actually think truth in the factual world means something. We seem to be in the shrinking minority. Or perhaps our noble Lone Star friends are just worried about keeping the future-majority in check. One hates to look at world history to see what happened to other countries when they came under the sway of comparably enlightened people. It's happened many times before, and never had a good outcome.

We don't know how this will play out. Maybe enough parents, the literate ones in Texas, will say 'enough!' and will demand real education rather than propa-gation, and will stop or reverse the decision. Perhaps the textbook industry will show more spine than is in their bindings, and won't publish rubbish. Publish for the other states, with special Texas centerfolds (pinup pictures of local preachers) for the Lone Star students. Or maybe even local school districts south of the Pecos that are not controlled by cowpunchers will find a way to develop and use online texts with actual content. Or something sensible.

Even used teabags leftover from the Party rally would make better reading.

Actually, let the rest of us lie low. Because if both California and Texas switch to the new enlightened texts, it will be a bonanza for us! Our kids will no longer have to compete with a huge contingent of Texan and Californian kids for admission to colleges, and for jobs.

Well, we actually have friends in Texas (and California). Most can even spell their own names. So we don't want to make blanket characterizations. We lived in Texas for a long time. Anne got her doctorate there, and Ken was a faculty member at University of Texas (in Houston) for 13 years. So we know a bit whereof we speak.

On the serious side, anyone intelligent in Texas should be looking to relocate themselves or their businesses elsewhere, unless they have no kids to think of, or no employees to hire whom they might want to have a skill or two. And get away from the heat, humidity, hurricanes, cow pies, tumbleweed, and derelict oil-rigs to boot. If you teach at UT, state law says that you'll have to admit, and attempt to teach (or un-teach), the students trained by these fine new textbooks--yes, they'll fill your class!

So, let us know you're ready to bail, and we'll try to find academic job offers for you in the civilized part of the country. You can write to us directly (include a CV), and we'll see if we can help here at Penn State.

Otherwise, looks like you Texuns in deep doo-doo! Y'all hear?

Wednesday, May 19, 2010

R r r i n n g g! Who's Calling?

The study that we've all been waiting for for 10 years to tell us whether cell phones cause brain tumors is.....inconclusive! As reported in the International Journal of Epidemiology, the World Health Organization's International Agency for Research on Cancer (IARC) conducted a large case-control study of 13,000 people in 13 countries, with interviews of 2700 users with a type of brain tumor called glioma, another 2400 with meningioma, and 7500 controls with no cancer. They chose people most likely to have been the heaviest cell phone users in the last 5-10 years, people in urban areas and aged 30-59. Controls were matched on age (within 5 years), sex and region of residence within each country. (Phone image from freefoto.com.)

Subjects were asked to recall details about their cell phone usage over 10 years -- average call duration, use of hands-free devices and so on -- and the questionnaire also asked for information about socio-demographic variables (though they ultimately used educational level as a surrogate for socioeconomic status), occupational exposure to electromagnetic fields and ionizing radiation, medical history, medical exposure to x-rays or MRI, and smoking. Information about the location of the tumor was also asked.

The majority of subjects in this study were not heavy users of cell phones. Among meningioma cases, the prevalence of regular cell phone usage during a single year was 52%, versus 56% among controls; lifetime usage among the meningioma controls was about 75 hours, or about 2 hours a month.

The odds of having meningioma or glioma with regular cell phone use, compared with controls (those without cancer), was 0.79 (95% confidence interval: 0.68-0.91) for meningioma cases and 0.81 (CI: 0.70-0.94) for glioma, in the entire study population. The odds ratio (OR) varies somewhat by hours of usage, but is always below or around 1 (an odds ratio of 1 means that the test variable, in this case cell phone use, was not found to affect risk of disease). Taken literally, this suggests that risk of brain tumor was reduced by cell phone usage, though the researchers believe that some systematic bias, rather than an actual protective effect of cell phones, is more likely to explain the result. Age and sex seemed to make no difference to the results. An important point is that these OR values are specific to this particular sample, and are only as good as the sample itself was 'representative' (and of what).
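For readers who want to see where such numbers come from, here's a minimal sketch (with made-up counts for illustration -- not the study's actual data) of how an odds ratio and its 95% confidence interval are computed from a 2x2 table:

```python
import math

# Hypothetical counts, for illustration only -- not the Interphone data.
# Exposure = regular cell phone use; outcome = brain tumor.
exposed_cases, unexposed_cases = 1200, 1500        # cases with/without exposure
exposed_controls, unexposed_controls = 4200, 3300  # controls with/without exposure

# Odds ratio: odds of exposure among cases / odds of exposure among controls.
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# 95% CI via the standard normal approximation on the log odds ratio.
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                      1 / exposed_controls + 1 / unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
# An OR below 1 whose CI excludes 1 -- like the study's 0.79 (0.68-0.91) --
# taken literally says the exposure is associated with *lower* risk.
```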

The authors calculated hundreds of odds ratios, broken down by age, sex, location of tumor, SES, hours of phone usage, and so on, and some were greater than 1. They found, for example, that the OR for glioma for those with the greatest call time (over 1640 hours cumulative use) was greater than 1. That some ORs were greater than 1, even if there was no actual effect, is not surprising, since at a 95% confidence level 5% of the results are expected to be positive by chance alone; so whether or not they have found any real association is still open to question. The fact that heavy use was just the association the study was designed to test would argue for giving this finding more weight, but the OR didn't increase steadily with use, which argues against its being a conclusive result.
Our results include not only a disproportionately high number of ORs less than 1, but also a small number of elevated ORs. This could be taken to indicate an underlying lack of association with mobile phone use, systematic bias from one or more sources, a few random but essentially meaningless increased ORs, or a small effect detectable only in a subset of the data.
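To see why a scatter of nominally 'significant' ORs is expected even when nothing real is going on, here's a quick simulation sketch (ours, purely illustrative): generate a few hundred comparisons in a world with no true effect, and count how many clear the nominal 0.05 bar.

```python
import random

random.seed(1)

n_subgroups = 300  # roughly 'hundreds of odds ratios'
n_per_group = 200  # subjects per arm of each subgroup comparison

false_positives = 0
for _ in range(n_subgroups):
    # Null world: cases and controls have identical exposure probability.
    cases = sum(random.random() < 0.5 for _ in range(n_per_group))
    controls = sum(random.random() < 0.5 for _ in range(n_per_group))
    # Crude two-proportion z-test for a difference in exposure rates.
    p1, p2 = cases / n_per_group, controls / n_per_group
    pooled = (cases + controls) / (2 * n_per_group)
    se = (2 * pooled * (1 - pooled) / n_per_group) ** 0.5
    if se > 0 and abs(p1 - p2) / se > 1.96:  # two-sided p < 0.05
        false_positives += 1

print(f"{false_positives} of {n_subgroups} null comparisons came out 'significant'")
# Expect around 15 (5% of 300) -- no real effect required.
```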
This was a $24.4 million study -- a grant almost as big as a Verizon bill -- with 1/4 of the funding coming from cell phone industry sources, because Verizon or Vodafone really, truly want to know the truth (don't they?). They of course welcomed the lack of clear findings -- as though that meant definitely there was no risk. Go on, call someone and tell 'em the good news!

According to one story reporting the study:
The British-based GSM Association, which represents international cell phone firms, said IARC's findings echoed "the large body of existing research and many expert reviews that consistently conclude that there is no established health risk."
In fact, that wasn't the conclusion of the study (rather, it was that there may be a link between heavy usage and glioma, but that more research was needed -- a shocking surprise that they would ask for more grant funding).

So, apart from the clear conflict of interest in the industry funding of and reporting on the results of the study, do we actually know more than we did before the $24 million was spent, and if not, why not?

This was a retrospective study, meaning people were asked to recall behavior from years ago, which is unavoidably unreliable, particularly the farther back one goes. The authors themselves discuss a number of other reasons that the study could be faulty. One is selection bias: refusal to participate by some of the people contacted may be systematically related somehow to risk. Another is information bias: differential error in the reporting of past phone usage by cases or their proxies (who were interviewed when a subject with a brain tumor was too ill to answer questions, or had died) versus controls. Proxies would presumably not know as much about the subject's usage, and it could be that cases were more motivated to remember cell phone use if it could explain their disease.

Prospective studies, which would follow people from some starting point for a certain length of time and measure as many relevant factors as possible, would be a more accurate way to look at this question, and in fact a 30-year prospective study is apparently in the works. But cell phone technology is changing all the time, so whether results in 30 years will be relevant to then-current technology is rather doubtful.

In any case, the results of the studies done to date pretty convincingly show that the risk, if any, is very small, and this raises the major problem with prospective studies: they are very costly, their conclusions don't arrive until your grandchildren are nagging for their first cell phones, and unless they're huge they will only find a very few cases. That might mean we should dismiss the risk, and if the risk were just of hang-nails that would be a reasonable conclusion. But the risk is cancer, and as with other studies of cancer risk, notoriously including cancer risks from radiation, nobody thinks we should just zap away and not worry. So what to do?

There's an obvious, reasonable, and rational solution: simply to accept that there may be a small but serious risk, just as we know there are risks in driving cars, flying in planes, or getting dental x-rays. And here, a law can easily be passed requiring that headsets be part of any new mobiles. That would actually be good for Vodafone (not necessarily because of civic responsibility, but at least because there'd be more gear to sell) and would obviate the problem unambiguously. If there is a radiation risk to whatever's near the pocket where the phone is stored, it would be even more difficult to detect, since the tissue there is denser and thicker than at your ear.

These large studies use as their null hypothesis that cell phones are safe -- they entail no risk. Instead, they should use as the null the current best-estimate of the risk, and try to reject that hypothesis. This is a different kind of statistical approach (related to what is known as Bayesian analysis, in which you take what you think you know, and adjust your new findings accordingly).

Since there is evidential reason to think cell phones might be dangerous, but the risk (if any) is clearly small, the burden of proof in new studies should be to reject the hypothesis that the phones are dangerous, not that they're safe, and to estimate the maximum risk consistent with the data (if that null cannot be rejected). Another statistical issue is the probability ('p-value') used in evaluating the evidence. The nominal 0.05 value is rather liberal, since little that's convincing can be said about small risks at such a level. And since the risk (if any) is quite serious -- brain cancer -- the proper p-values should be very conservative: bending over backward, statistically, to say that one won't dismiss risk unless the evidence is very strong that there isn't any.
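To make that concrete, here's a sketch (ours, not anything the study itself did) of the reversed approach, using the published glioma numbers: recover the uncertainty from the reported confidence interval, report the maximum risk the data still allow, and test against a hypothesized harmful null instead of against 'perfectly safe'.

```python
import math

# Published glioma result for regular use: OR 0.81, 95% CI (0.70, 0.94).
or_hat, ci_low, ci_high = 0.81, 0.70, 0.94

# Recover the standard error of log(OR) from the reported 95% CI.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# One-sided 95% upper bound: the largest OR still consistent with the data.
upper_95 = math.exp(math.log(or_hat) + 1.645 * se)
print(f"Maximum OR consistent with the data: {upper_95:.2f}")

# Reversed burden of proof: can the data reject a hypothesized harmful null?
# (OR = 1.2 here is a made-up 'current best estimate of risk', for illustration.)
null_or = 1.2
z = (math.log(or_hat) - math.log(null_or)) / se
print(f"z-statistic against OR = {null_or}: {z:.1f}")  # strongly negative: reject
```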

Simply requiring head-sets would be a sound, safe, and sane public health policy. Headsets would also save many lives from accidents due to distraction, as in driving. Such a policy, like fluoridating toothpaste, would have benefits but not inconvenience anybody -- well, hardly anybody. It would greatly inconvenience the epidemiological establishment by depriving it of hundreds of millions of dollars of funds, to support their habit for decades, to prove that......cell phone risk is very small, if any.

Tuesday, May 18, 2010

Blame it on Darwin?

Ken took part in a panel discussion recently about Nazi medicine and eugenics. The three participants on the panel were a lawyer who has done work on the legalities of torture, a historian of Nazi Germany, and Ken, a geneticist with interest in ethical issues. The discussion was led by a moderator, also a historian of Nazi Germany. This was in conjunction with a traveling exhibit, "Deadly Medicine," (logo at left) from the US National Holocaust Museum, that had been here at Penn State, and most of the attendees had seen the exhibit.

This was the third in a series of discussions of the importance of eugenics in Nazi Germany. Previous speakers had talked about the extermination programs that started, with the full cooperation of German physicians, with the elimination of mentally or physically disabled infants and children, and expanded to include many other members of society.  The latter part of the history is well-known to most of us, but the acquiescence of the medical system is less so.

This third evening started with a discussion of the history of eugenics and how it came to motivate the Nazis. Ken traced it back to Darwin and the idea of survival of the fittest, which was quickly translated into the social arena (mainly not by Darwin but by others), reinforcing existing class-society ideas whereby the richest, smartest, most powerful are best for society, while having to maintain the poor and the ill is an endless resource burden on the stronger members. Not only did this burden seem unfair (to the rich and smart and powerful), but the cost of supporting the weak, the ill, the insane, and other 'undesirables' would be a permanent drag on a society that wished to be its best. Thus, in implementing government-driven eugenics (sterilizing and eventually murdering the 'undesirables'), the Nazis believed -- or rationalized -- that they were doing what was right for their country by culling the less fit. Germany would become the #1 country in the world. Deutschland uber alles!

Darwin's ideas, and a strict adherence to the view that human individual or 'racial' traits can be judged to have more or less value and, because of evolution, are inherent, led scientists to decide that whereas Nature had made those judgments in the past, we (the scientists) are the ones whose duty it is to make them in our scientific age. The historian on the panel agreed about the importance of the eugenics movement in Germany at the time, and added that for cultural reasons the medical system was especially well-situated (or unfortunately situated) to support the cleansing of society in this way. Doctors did a lot of the dirty work, signed off on even the worst horrors, and gave the stamp of respectability to much of what happened.  This history is well-known, but perhaps less so to the younger generation, which is why the 'Deadly Medicine' exhibit was brought to Penn State, and why there were three different events discussing its meaning.

In any event, the panel moderator finally noted that Darwin's name had been mentioned a number of times during the evening, and asked whether it was fair to conclude that Darwin should be blamed for the Holocaust. Darwinism is still used as a justification for making value judgments about human traits, including races, and for justifying inequality as a natural state of Nature. This is in fact an idea that has been widely promulgated by the extreme right wing in the US, and by Creationists. Ken's response was that blaming Darwin for what people did with his ideas would be like blaming Benjamin Franklin for the electric chair. This of course won't diminish the blame the right wing and religious fundamentalists bestow on Darwin for all of society's ills, because this is an ideological struggle, not one based on facts -- and of course Darwin's real sin was claiming that humans were not created by God in their present form.

But still, there are many lessons to be learned from the eugenic age for us in our own genetic age. Evolution, too, is an idea that is out there and can't be undone. We are not likely to repeat the same horrors of the original eugenics age, but new genetic data, and the belief that genes determine your nature, can easily be misused in our own new ways, and there is no guarantee that those ways will all be benign. Our society will face the issues related to this, such as confidentiality of genetic data, the use of such data in governmental monitoring of citizens, in policies related to insurance, and in many other ways. Many investigators are analyzing data on human variation in ways that are, perhaps unintentionally, almost identical to the categorical ways our species' variation was treated a century and more ago. Ken has a couple of papers in press that point this out.

So, whether remembering history actually discourages people from repeating it or not, we think it's incumbent upon practitioners of genetics and anthropology, which of course has its own entangled past with the Nazi regime, to know the history of their disciplines, and to be aware that it wasn't all pristine.

Monday, May 17, 2010

Cotton wars

A few weeks ago we commented on a story in the New York Times about increasing resistance to herbicides in weeds around the world, some of it because of plants genetically modified to resist glyphosate, or RoundUp, but all of it because of the widespread use of herbicides. And we've also written about the increasing resistance to pesticides because of genetically modified plants that produce a Bacillus thuringiensis (Bt) toxin that is lethal to many plant pests. A report in Nature this week (about a paper published in Science) describes the boon the use of Bt cotton in China has been to previously inconsequential cotton pests.

Bollworm moth larvae outbreaks were particularly destructive to cotton yields and profits in the early 1990s in China, so in response, the government approved the use of Bt cotton. Currently, according to the Nature story, more than 4 million hectares of the crop are under cultivation in China. And, as a result, previously minor pests, which were once outcompeted by the bollworm, have become major problems. The Nature piece reports:
Numbers of mirid bugs (insects of the Miridae family), previously only minor pests in northern China, have increased 12-fold since 1997, they found. "Mirids are now a main pest in the region," says [entomologist Kongming] Wu. "Their rise in abundance is associated with the scale of Bt cotton cultivation."
"Mirids can reduce cotton yields just as much as bollworms, up to 50% when no controlled," Wu adds. The insects are also emerging as a threat to crops such as green beans, cereals, vegetables and various fruits.
The rise of mirids has driven Chinese farmers back to pesticides — they are currently using about two-thirds as much as they did beforeBt cotton was introduced. As mirids develop resistance to the pesticides, Wu expects that farmers will soon spray as much as they ever did.
The Science paper is apparently the first study of the effect of GM plants on non-target pests.
Our work highlights a critical need to predict landscape-level impacts of transgenic crops on (potentially) pestiferous organisms in future ecological agricultural risk assessment. Such more comprehensive risk management may be crucial to help advance integrated pest management and ensure sustainability of transgenic technologies.
It might seem odd that an ecological perspective has been lacking when decisions are made about such things as the use of pesticides or herbicides in ecological systems, whether they are introduced into plants or sprayed on fields, but a narrow view of the problem and its solution is apparently the norm.  It is not clear (to us) why removing bollworms would make space for mirids. Was it that before, there were not seats at the cotton-table for them? Whatever the answer, it adds to the importance of ecological perspectives.

But this is true of much of science. Reducing a problem to a single cause-and-effect relationship is what our methods do best, whether it's determining the metabolic effect of a single nutrient in a single food, or of a single gene, or a parenting method or a teaching technique. The unintended consequences that can result from such approaches, such as the upsurge in non-target pests, or herbicide-resistant weeds, are increasingly well-documented -- and we certainly haven't discovered 'the' way to teach math.

And this is without even mentioning the obvious lesson, that artificial selection can have fast and sweeping consequences. And that when short-term gains (in the case of Bt cotton, profit) drive decision making, whatever we actually do know about long-term consequences may not even be factored in.

Perhaps, as industrial genetics is likely to argue, we are able to use technology to keep at least a half-step ahead. The definition of 'winning' in Pest vs Crop wars would be that, rather than any dream of a pest-free world. But as the human population and its demands grow, so likely will high-volume monocropping, and along with that more vulnerability. If the pace of biotechnology can be kept greater than that of artificial selection, then the current approach will work reasonably well, at least in the short term -- or that's what industrial genetics will argue. Maybe it's true -- until another ecological aspect, the toll on soil and water of monocropping, takes over.

Like any organism -- such as a human -- the human-made and human-affected world is made of many parts that exist and evolve together. Organisms can go extinct, and so can ecosystems. That is, they could change in ways to which we, who are responsible, cannot adapt. Whether that's in the near-term offing is of course highly debated. Whoever wins....

Friday, May 14, 2010

Science Illustrated

Strange Brew, April 29, 2010. A pithy commentary on, for our purposes, how science is dominated by pandering and special interest. And so much better illustrated than any way we've said it.

Anything to satisfy the reviewers!

Thursday, May 13, 2010

T-shirt Named Desire?

An example of a hypothesis cited often enough to have become accepted lore is the suggestion that one criterion by which humans select mates is dissimilarity at the major histocompatibility complex (MHC). The MHC is a region of immune genes that vary greatly within the species that have them (all jawed vertebrates), used in self/non-self recognition (and discovered because they're involved in tissue rejection in transplant surgery). The mate-selection idea is that offspring will be better equipped to combat infectious diseases with a broader spectrum of immune genes, and that, to ensure this, natural selection has built MHC dissimilarity into the way we choose mates.

The association between MHC dissimilarity and mate choice has been found at least in mice, as well as in humans. An apparent correlation between body odor and MHC genes (a correlation that is difficult to explain, but may have to do with the clustering of olfactory receptor genes near the MHC) has led to studies in which tests (in humans) were done by asking females to smell T-shirts worn by males for 2 nights, and to state which are more pleasant, and indeed which reminded them of current or former mates.

In one of the seminal studies (no pun intended?) on this subject, females not on oral contraceptives chose males with a very dissimilar MHC to their own, while women on the pill chose males that were much more similar to themselves. In another study, the preference for dissimilarity wasn't found in single women. Even so, this work has led to the founding of a company that will help you to find your perfect match -- based on science. (Though if it can really lead you to your perfect mate, why are they charging $1,995.95 for a lifetime membership -- shouldn't you need their services only once?)

But anyway, a new paper in PLoS Genetics by Derti et al. refutes all this.
A recent study by Chaix et al. sought signs of mate selection in the genotypes of HapMap Phase 2 (Hap2) parents by comparing the genetic relatedness of mated and unmated opposite-sex couples, both at the MHC locus and overall (using all common autosomal variants). Yoruban mates (N = 27 couples) were reported to be slightly more similar than expected (nominal two-sided P<0.001) overall, but no significant difference in MHC relatedness was detected between mates and non-mates. By contrast, European-American mates (N = 28 couples) did not differ significantly from non-mates in autosomal relatedness, but were less similar at the MHC locus than were non-mates (P = 0.015).
The latter result was interpreted as supporting a role for the MHC locus in mate choice, and outliers were excluded as an explanation. However, a visual comparison of mate and non-mate pairs suggests a weak effect that may derive from a few extreme pairs. Furthermore, adjusting the significance threshold for the fact that multiple hypotheses were tested (two sets of SNPs in each of two populations) would have rendered the results insignificant.
Thus, with minor modifications of the methodology and analysis (e.g., excluding one extreme outlier couple and correcting for multiple testing), Chaix et al.'s findings go from marginally significant to non-significant. Derti et al. further suggest:
If mates were found to differ from non-mates in MHC relatedness, in these or other populations, we note that this phenomenon need not stem from mate selection alone, particularly if only couples with children are considered. If offspring with certain MHC allele combinations survive preferentially, exclusion of mated couples without children could yield a non-random MHC similarity distribution amongst the remaining couples. This idea is supported by the increase in MHC heterozygosity of mouse embryos following viral infection of the parents.
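Stepping back to the multiple-testing point in the first quoted passage: the correction is simple arithmetic. Here's a minimal sketch (ours) using the numbers quoted above -- a nominal P of 0.015 and four hypotheses (two sets of SNPs in each of two populations):

```python
# Bonferroni correction for the four hypotheses Chaix et al. tested:
# two sets of SNPs (MHC and genome-wide) in each of two populations.
n_tests = 4
alpha = 0.05
p_mhc_european = 0.015  # the reported European-American MHC result

threshold = alpha / n_tests  # each test must clear alpha / n_tests
print(f"Corrected threshold: {threshold}")                 # 0.0125
print(f"Still significant? {p_mhc_european < threshold}")  # False

# Equivalently, the adjusted p-value is min(1, 0.015 * 4) = 0.06 > 0.05,
# so the 'marginally significant' result does not survive the correction.
```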
So, the seemingly solid relationship between sweat, MHC variation and mate choice is at least as rocky as the relationship between Blanche and Stanley in A Streetcar Named Desire. But why was it so seductive anyway? The idea that the kind of variation in MHC found in modern large out-bred populations existed, and was a strong selective force, when hominids lived in small bands is not even convincing on the face of it. And if the results of different studies have been so equivocal that single women and women on oral contraceptives don't show the preference for men with dissimilar MHC, then what does that say about early hominids? During most of human evolution, adult females were not often cycling, because they were malnourished, pregnant or lactating -- meaning that it's unlikely that their hormonal profiles looked reliably like those of the college students on whom the T-shirt tests were carried out.

We have written a lot about the seductiveness of selective scenarios. But this tenuous hypothesis has a questionable aroma about it, that of something too good (or too pat) to be true. Whatever the mechanism is, if it's actually there, and not just a manifestation of collegiate horniness, which is not at all certain, it may be incidental to the functions being tested, and may have evolved for entirely separate and unknown reasons. (Of course, it's possible that this whole story has been misunderstood -- it could be that this sweat/attraction thing is a remnant of the ancient admixture with Neandertals that everyone's talking about. That would explain why it's not found in Africans.)

Now whether nice arm muscles inside the T-shirt, or a tough-guy way of mumbling, can trigger your mating corpuscles, is a separate question. Brando does look pretty good in a t-shirt.

Wednesday, May 12, 2010

Who's the hottest doc?

Thanks to our friend Francesc Calafell, an evolutionary geneticist at Pompeu Fabra University in Barcelona, for alerting us to the important research reported in the British Medical Journal (here) by some of his countrymen. We are embarrassed not to have noticed this before, as it was published in 2006!

The burning question is whether physicians are shorter and less handsome than surgeons in one chosen study area, a hospital in Barcelona (though, surely generalizable to all of Catalonia and beyond), and if so, why. For the moment, this research was restricted to men around age 50, although the authors intend to expand their study within 10 years, when the number of female surgeons and physicians in the appropriate age range at their hospital is large enough to warrant such a project. At which time of course the burning question will be re-phrased to "who's hotter?"

Even with this limitation, the study touches on a number of the themes we consistently write about here. When reporting their findings, the authors suggest some correlations, but don't assume they necessarily explain causation -- we applaud this cautiousness. It turns out that not only are surgeons taller (though that finding is confounded by their tendency to wear clog-like shoes that artificially increase their height -- an attention to possible confounding variables that we can only heartily support) and handsomer (this may have to do with their habitual wearing of protective masks over their faces, and their spending so much time in oxygen-rich operating rooms), but they are also less frequently bald (which may be due to the head coverings they wear while operating). Note the welcome caution. And, "Male surgeons are taller and better looking than physicians, but whether these differences are genetic or environmental is unclear." There may be a hint of excessive genetic determinism there, but at least they are open to alternate explanations.

Though they are too cautious to say so (again, for which we applaud them), the obvious Darwinian explanation is that the genes 'for' surgery (HeartSurgery1, BrainSurgery 1 and 2, and Appendectomy331) make surgeons taller so they can wield scalpels (well, originally, hunting knives) better. The short-physician genes (SPG1, SPG2, and SPG3 so far have been discovered) gave their Galenic and pre-Galenic ancestors an advantage in stooping over delicately to monitor modest female patients' heartbeats with their short, demure, stiff wooden stethoscopes. Or to more deftly reach into their leech box. It will be interesting to see what the results for female MDs show in future, or what they evolved those traits for (let your imagination run wild!).

Another difference between surgeons and physicians commented on but left unexplained in the paper is that "Surgeons are the only doctors who practise what has been called "confidence based medicine," which is based on boldness." Cut first, ask questions later. By contrast, presumably physicians practise "by guess and By Golly!" medicine.*

Francesc brought this paper to our attention, and we know he did so with the appropriate level of disinterested scientific quest for knowledge -- he's not a physician or surgeon, so he has no vested interest in the truth of the HandsomeQuotient therein reported -- handsome as he is!

----------------------------------
*A table listing alternatives to 'evidence-based medicine' (from a BMJ paper: David Isaacs and Dominic Fitzgerald, "Seven alternatives to evidence based medicine," BMJ 1999;319:1618 (18 December)).

Tuesday, May 11, 2010

Every organism is unique, but we all become unique in the same way

We've just run across a 2004 Nature paper by André Pires-daSilva and Ralf Sommer about the evolution of signaling in animal development. This paper, of which we were not aware but should have been, is highly related to some main themes of our book The Mermaid's Tale, which deals with fundamental aspects of how life works including, but not focused on, how it evolved.

The authors write that only seven signaling pathways are responsible for most of the cell-cell interactions that control the development of a single cell into the organism it becomes. They are used repeatedly at every stage of development, and have been co-opted through evolutionary time in the development of new morphological traits and systems.


After millions of years of evolution, signalling pathways have evolved into complex networks of interactions. Surprisingly, genetic and biochemical studies revealed that only a few classes of signalling pathways are sufficient to pattern a wide variety of cells, tissues and morphologies. The specificity of these pathways is based on the history of the cell (referred to as the 'cell's competence'), the intensity of the signal and the cross-regulatory interactions with other signalling cascades.
These ubiquitous pathways are the Hedgehog, Wnt, transforming growth factor-beta, receptor tyrosine kinase, Notch, JAK/STAT and nuclear hormone pathways. How can the wide diversity of life around us be produced by so few ways for cells to communicate with each other?

Given the flexibility of signalling pathways, research in the past decade has concentrated on the question of how specificity is achieved in any signalling response. There is now clear evidence that the specificity of cellular responses can be achieved by at least five mechanisms, which in some cases act in combination, highlighting the network properties of signalling pathways in living cells.
First, the same receptor can activate different intracellular transducers in different tissues.
Second, differences in the kinetics of the ligand or receptor might generate distinct cellular outcomes.
Third, combinatorial activation by signalling pathways might result in the regulation of specific genes. Several signalling pathways can be integrated either at signalling proteins or at enhancers of target genes.
Fourth, cells that express distinct transcription factors might respond differently when exposed to the same signals.
Fifth, compartmentalization of the signal in the cell can contribute to specificity. The recruitment of components into protein complexes prevents cross signalling between unrelated signalling molecules or targets multifunctional molecules to specific functions.
The idea that a handful of networks can be responsible for most of the cellular 'decision-making' that is development is a beautiful example of core principles of life that we write about. The different ways that cells respond, the different developmental cascades that can be triggered by the same signaling networks, the interaction of different signaling pathways to trigger specific responses, and so forth all demonstrate the importance of modularity, signaling, contingency, sequestration and chance over and over again. How the components of these signaling pathways have evolved -- co-evolved -- is not yet well-understood, but it has to be that the interactions are tolerant of imprecision, and indeed, that tolerance (variation in the affinity of receptor/ligand binding) has been built into the system and leads to the evolutionary novelty.
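To make the combinatorial logic concrete, here's a toy calculation (ours, purely illustrative; the numbers of intensity levels and competence states are invented): even a handful of pathways, read out at a few intensities against different cellular histories, yields thousands of distinguishable states.

```python
# Toy model of combinatorial signaling specificity -- illustrative numbers only.
pathways = ["Hedgehog", "Wnt", "TGF-beta", "RTK", "Notch", "JAK/STAT",
            "nuclear hormone"]
intensity_levels = 3    # e.g. off / low / high (hypothetical)
competence_states = 10  # hypothetical distinct prior histories of a cell

# Each combination of pathway intensities is a potential 'code word'.
signal_codes = intensity_levels ** len(pathways)  # 3^7 = 2187

# The same code word can mean different things to differently competent cells.
distinct_outcomes = signal_codes * competence_states
print(f"{signal_codes} signal combinations x {competence_states} "
      f"competence states = {distinct_outcomes} possible cell states")
# The point is the combinatorics, not the exact numbers: a few shared
# pathways are enough to specify an enormous diversity of outcomes.
```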

The keys to this are the partial sequestration of an organism's components, so that local cells in different parts of the plant or animal can behave in different ways; their ability to sense and respond to their environment (by signalling); and the arbitrary combinatorial codes by which signalling systems -- like the ones discussed in the 2004 paper -- work. That the same systems can produce diverse organisms reflects the logic of development, that is, the relational principles by which life is organized. Notch signaling is about the code specified by Notch, its ligands and related proteins, in combination with other such systems--and not by any particular property of the Notch proteins per se.

Every organism is unique, but we all become unique in the same way. It is basically the open-ended use of these very simple processes involving a limited number of components that enables this essentially unlimited diversity of living Nature.

Monday, May 10, 2010

Kissing cousins

Well, the latest episode of Our European Cousins has aired. Svante Paabo, who, if anyone, knows how to play both sides of the street as long as there are cameras around, has now announced that 1-4% of the modern human genome is derived by admixture with Neandertals. In the past, he was comparably insistent in headlining that Neandertals had not admixed and were a dead-end lineage.

The paper reported in the news (on the BBC website, e.g.) appears in Science's new issue. Make no mistake, it's a good and important piece of work, long promised and finally arrived. It is a sequence of roughly the entire Neandertal genome compared to five available whole-genome sequences from modern humans. Getting and assembling anything close to a whole genome sequence from fragmentary bits in fossils, contaminated with DNA from other things such as bacteria in the earth where the individual fell thousands of years ago, is no easy task and Paabo's group has been one of the global leaders. Studies of ancient DNA are important because they provide direct evidence of the past, so where DNA is preserved it will remain valuable to sequence and interpret it.

One thing to note, which sounds like double-think but is not relevant to the points we want to make here, is that this Neandertal whole genome sequence is not the whole genome sequence of any one Neandertal. It is a composite, assembled from ancient DNA extracted from the remains of three different individuals. But 'the' human genome sequence online at GenBank is also a composite. Some technical issues are affected by this, but they aren't relevant here.

Whatever the details of the assembly, or whether variation among Neandertals was observed, the issue here is the origin of modern human sequences: did any of them descend directly from Neandertals, or were the Neandertals an entirely separate group (or even species) that split from the common human stock with no subsequent inter-breeding, so that we today have no direct Neandertal descent? Or was there some inter-group hanky-panky?

The new paper suggests that there was, but there are two major problems with how that finding is being treated. The 1-4% lies in segments that seem to have a different ancestry from the rest of the genome, being less divergent from us. The remainder diverges from us by about the amount you'd expect given the time since Neandertals and modern humans separated (calibrated against our common ancestry with chimpanzees).

The first problem is one we harp on regularly: playing to the media and exaggerating results. In this case, the exaggeration lay in the melodramatic, definitive way the admixture issue was presented. It suggests that interbreeding was something exotic or immoral, like a human mating with a chimp, rather than what at the time would have been routine mate choice among individuals from neighboring groups.

They would probably have coexisted in times when nobody moved very far, and would have differed from each other far less than, say, Africans and Europeans do today -- between whom mating is thankfully no longer a big deal in our society. In fact, the evidence reported is that this interbreeding occurred after both groups were part of the Eurasian population that had expanded out of Africa. In that sense, the groups may have diverged somewhat, come into contact again later, and become good neighbors for a while. Whatever happened way back then involves our ancestry, which is certainly interesting and worth knowing. But when the evidence is tentative, so should be the claims.

But there is a second and much more important problem. It is a subtle issue: in essence, whether or not any direct human genetic ancestry traces back through Neandertals has little bearing on how 'different' we are from them. In round numbers, here's why:

A copy of your genome and a copy of a chimp's (our nearest living relative) differ by about 2 to 5% in terms of DNA sequence. Two copies of the human genome today differ by about 0.1 to 2% depending on the comparison one makes.

We and chimps have been separated from our common ancestor for 7-10 million years. Corresponding to that, the paper shows that Neandertals differ from modern humans by only about 7% as much as we differ from chimps, which is about what you'd expect given that (regardless of admixture issues) the Neandertal split happened only after about 90-95% of the time since the human-chimp split had already passed.

By that time, basically everybody was human, and in turn that means that overall we are essentially as similar to Neandertals as we are to each other (crudely speaking, we're 95% closer to them than to chimps). And of course the vast majority of sequence differences generally, and hence in this case, will have little if any function. If humans are virtually identical to each other, then we are virtually identical to Neandertals whether there was any inter-mixing or not.
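
If you want the back-of-the-envelope version of that arithmetic, here it is in a few lines of Python. The split times are illustrative round numbers from the ranges above, not the paper's own estimates:

```python
# Back-of-the-envelope arithmetic for the 'how different are we' point.
# Split times are rough, illustrative values, not the paper's estimates.

human_chimp_split_mya = 8.0   # somewhere in the 7-10 million year range
neandertal_split_mya = 0.5    # roughly half a million years ago

# Under a simple molecular clock, sequence divergence scales with time:
relative_divergence = neandertal_split_mya / human_chimp_split_mya
print(f"Neandertal-modern divergence is ~{relative_divergence:.0%} "
      f"of the human-chimp divergence")        # ~6%

# Equivalently, the Neandertal lineage split off only after most of the
# time since the chimp split had already passed:
print(f"...i.e. after ~{1 - relative_divergence:.0%} of the time since "
      f"the human-chimp split")                # ~94%
```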

But consider how much functionally meaningful (as opposed to evolutionary-clock-meaningful) variation there is in modern humans around the world. Within our single species there's plenty of room for differences, and they can be important. They can protect you from the environment in very important ways (as skin pigmentation does in the tropics), they can protect you from disease (as immunological differences among us do), and there is a lot of variation in behavioral abilities of all sorts. As many diseases show, even a single DNA change can be lethal.

The point is that whatever important functional differences or similarities there were between us and Neanderthals need have nothing to do with whether there was any admixture between their populations and populations of our other direct ancestors. Natural selection will purge bad variation, favor sterling advantages, and ignore most of the rest wherever it comes from.

If there is a major functional difference between us and our burly cousins, it is to be found in the relevant genes, not in the score card (or dance card) of our overall sequence differences. And such variants could have existed in them then, but not in us now, even if there was inter-breeding.

This means that Dr Paabo is right to treat this as a story for publicity: its scientific impact is far less than its human interest value. To portray the inter-mixing question as an important one about human function is to misrepresent (or misunderstand?) how genes and evolution work. But to understand that takes more than a sound bite, and of course that means not many people will be interested.

At the same time, there's nothing wrong with trying to find out, especially from direct genetic data when it's available, what we can about our closest, if dearly departed, ancestors.

Friday, May 7, 2010

Does phlebotomy 'work'?


There's a discussion in a nice book about the history of Islamic science (Ehsan Masood, Science and Islam: A History) of a man named al-Razi, who in about AD 900 was said to have done a carefully controlled experiment to test whether phlebotomy (blood-letting) worked as a treatment for meningitis. Some patients were given the treatment and others were untreated 'controls'. Al-Razi found that the bloodletting worked, in that more of the treated patients than controls recovered.
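
For the record, here is how a modern reader might formalize al-Razi's comparison -- a sketch only, with invented counts, since his actual numbers don't survive and he of course had no notion of a significance test:

```python
# How a modern reader might judge al-Razi's comparison. The counts are
# invented for illustration -- his actual numbers don't survive -- and
# the test itself is an anachronism he never had.

from scipy.stats import fisher_exact

recovered_treated, died_treated = 14, 6   # hypothetical bled patients
recovered_control, died_control = 7, 13   # hypothetical controls

table = [[recovered_treated, died_treated],
         [recovered_control, died_control]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3f}")

# A small p-value would say the difference in recovery rates is unlikely
# to be chance -- but, as this post argues, it says nothing about *why*
# the treatment 'worked', or what 'working' should even mean.
```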

This therapy was part of the ancient and revered view of life upon which the classical medical approach codified by Galen was based: the humoral theory, which held that existence, and hence life and health, rests on a balance of four basic elements (earth, air, fire, water), corresponding in humans to blood, black bile, yellow bile, and phlegm.

Disease could then be explained as the state in which these humors are out of balance. Blood-letting was done when the patient was deemed to have an imbalance due to an excess of blood. Galenic medicine lasted for many centuries, and it was verboten even to question it. And why question it? It worked! That is, some patients got better, and belief in the system led everyone to accept its occasional success as supportive evidence (and indeed, it's possible that even when assessed by modern scientific standards, bloodletting may sometimes have done some good, as this story describes).

Why don't we accept it today? In fact, even al-Razi himself wrote a book casting doubt on the degree to which Galenic medicine was true. After all, we accept modern medicine even though it fails to cure everyone. We have to ask what causation really is. Placebos work, after all. If you know people are praying for you, it apparently works -- although prayer doesn't work if you don't know people are praying for you.

We dismiss that as 'only' psychological, even though the effect is purely physical and molecular, involving neurotransmitters that affect the behavior of other cells, such as those of the immune system and who knows what else, eventually leading to improvement in the disease. Blood-letting apparently has a measurable, replicable physiological rebound effect that makes people feel better a few hours later. We say these things don't really cure the disease, or that if they're 'just' psychosomatic they somehow don't count. But if the brain is a material rather than an immaterial structure, and the effect is thus material, why doesn't it count?

We want higher percentages of success. We want therapy to be direct rather than indirect. If the patient believes in the treatment, it may boost his immune system in some way, and so on. But somehow, targeting the true pathology indirectly, rather than through its proximate molecular cause, is not considered 'real' medicine.

But that's our own culturally derived way of defining medicine and its efficacy. It's similar with HIV and AIDS. As the South African government insisted for a decade or more, poverty is the true 'cause' of AIDS, not the virus; unfortunately, many thousands died as a result. Yet poverty is still causally associated with HIV infection. South Africa has finally accepted that HIV is also a cause of AIDS, and thousands or even millions of lives may now be saved as a result.

Empirically, the desired explanation can be chosen to be some net result -- 'cure', in the case of disease. Science in the west, at present, wants reductionist molecular explanations about proximate cause, and causes higher up the material chain get short shrift: poverty and poor education lead to poor neighborhoods with no good grocery stores, which lead to reliance on McFastFood, which leads to obesity, which leads to high blood pressure or blood glucose, which lead to retinal and peripheral neuropathy, which leads to blindness and loss of extremities. So what causes blindness? Even in our molecular, reductionist, technical age, diabetics still go blind.

There is no one answer. If removing poverty greatly reduced blindness, isn't poverty a cause? Or McBurgers? The prevailing view is that if we identify some ultimate cause -- the preferred target for many in science these days is your 'personalized' genome -- we will get to the 'real' cause and will then live forever. But the focus on genes is part and parcel of the structure of our current society.

Whether one approach to causation will ever, by itself, lead to miraculously high levels of efficacy nobody can say. Galenic physicians thought they had the ultimate answer. Collinsian medicine (Francis Collins, Director of NIH and the chief spokesperson for personalized genomic medicine) is having its day today. What about tomorrow?

The same kinds of questions arise in evolutionary and developmental biology. We've recently posted on phenogenetic drift -- the idea that essentially the same trait can come to rest on different genetic bases even while being conserved by natural selection -- which suggests that genes contribute to, but are not 'the' cause of, the trait. This is related to the entire concept of complex causation.

So was al-Razi right that phlebotomy cured meningitis? Perhaps it is inappropriate to ask whether Galenic medicine 'works'. It is more interesting, to us at least, to ask what we mean by 'works'.

[p.s. al-Razi, known in the west as Rhazes, wrote critically about Galenic medicine in a book called Doubts About Galen]

Thursday, May 6, 2010

Rounded up? No, the varmints got away!

It seems there really is no free lunch ... except perhaps for weeds. That's because it turns out that genetically modified crops don't defy evolution after all! The laws of Nature stand; they weren't overturned by agribusiness scientists.

A story in the New York Times this week tells of the increasing resistance of weeds to Roundup, or glyphosate, the weedkiller originally introduced by Monsanto but now sold by a number of other companies. The weedkiller was made for use with 'Roundup Ready' crops, grown from seeds genetically modified to be resistant to the herbicide. Many farmers who planted these seeds were very happy with how easily weeds could be controlled, and with the no-till agriculture, and the reduction in topsoil erosion, that this made possible. (There was, and still is, controversy over Roundup Ready crop yields, the safety of the chemical, Monsanto's legal right to insist that farmers can't save seed to plant the following year, and so on, but those issues are not for this post.)

But farmers are growing increasingly unhappy. Roundup resistant weeds -- superweeds -- were first found flexing their new-found muscles in Delaware, but now crop up all over the country (so to speak), with insidious and expensive effects.
To fight them, Mr. Anderson and farmers throughout the East, Midwest and South are being forced to spray fields with more toxic herbicides, pull weeds by hand and return to more labor-intensive methods like regular plowing.
“We’re back to where we were 20 years ago,” said Mr. Anderson, who will plow about one-third of his 3,000 acres of soybean fields this spring, more than he has in years. “We’re trying to find out what works.”
Farm experts say that such efforts could lead to higher food prices, lower crop yields, rising farm costs and more pollution of land and water.
“It is the single largest threat to production agriculture that we have ever seen,” said Andrew Wargo III, the president of the Arkansas Association of Conservation Districts.
Oddly enough, Monsanto originally promised that herbicide resistance would be insignificant. Why? They must have believed they were targeting so fundamental a vulnerability in the weedy pests that the weeds couldn't evolve a way around their assassin.

Monsanto is still saying that the problem is containable -- but they would, since they stand to lose a lot if farmers no longer have reason to buy Monsanto's Roundup Ready seeds. Did Monsanto think that, as one of the largest agricultural companies in the world, they could make evolution stand still? And of course it's not just glyphosate resistance that's the problem: the International Survey of Herbicide Resistant Weeds lists "347 Resistant Biotypes, 195 Species (115 dicots and 80 monocots) and over 340,000 fields". The additional problem with Roundup is that farmers (and Monsanto) have become dependent on GM seeds, and on the cultivation methods they've used to grow them.

The story is of course reminiscent of the increasingly widespread antibiotic resistance in bacteria, though there we had 50 good years, while with Roundup it was only a few decades -- neither even a blink of the eye in evolutionary terms, of course. But we've known for millennia that artificial selection is a fast and powerful force for change -- it was Darwin's very model for how natural selection works in the wild, after all. Farmers have chosen their best animals for breeding probably since animals were first domesticated 10,000 years ago.
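
If you doubt how fast 'fast' can be, a toy calculation makes the point. Here is a minimal haploid selection model in Python; the parameters are illustrative, not estimates for any real weed or bacterium:

```python
# Toy haploid selection model: how fast does a rare resistance allele
# spread when the selective agent (herbicide, antibiotic) is applied
# every generation? Parameters are illustrative, not measured values.

def generations_to_fixation(p0=1e-6, s=0.5, threshold=0.99):
    """Deterministic recursion: resistant fitness 1, susceptible 1 - s."""
    p, gen = p0, 0
    while p < threshold:
        p = p / (p + (1 - p) * (1 - s))   # allele frequency after selection
        gen += 1
    return gen

# Even starting from one in a million, strong selection takes the allele
# to near-fixation in a few dozen generations -- a blink for an annual weed:
for s in (0.1, 0.5, 0.9):
    print(f"s = {s}: ~{generations_to_fixation(s=s)} generations")
```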

So, it shouldn't be surprising that in effect artificially selecting for herbicide resistant weeds, or antibiotic resistant bacteria, is fast and effective as well. The idea that we might be headed for the last roundup is naive. Nope, the truth is, pardner, that the varmints are still out there, eluding all the posses we send after them.

Wednesday, May 5, 2010

First 2010 Misrepresentation of the Year Award Goes to Nature

The first 2010 Misrepresentation of the Year Award goes to Nature magazine for last week's cover story. The caption on the cover is "The MS Genome" and is accompanied by silhouettes of one person standing and her twin in a wheelchair. But is the story about 'the multiple sclerosis genome' (whatever that would be)? No.

Compare the authors' actual title to the cover's:
"Genome, epigenome and RNA sequences of monozygotic twins discordant for multiple sclerosis."

This is a representative, scientifically responsible title for the paper. There is a difference between genomes from MS patients and 'the MS genome.' What the authors report is the complete sequencing of the genomes of one discordant pair of identical twins (where one twin has MS and the other doesn't), and gene expression differences between three sets of discordant identical twins. They discuss issues of data accuracy and replicability, and they find no genetic signature that can explain the discordance.

  • Is the story well-done work? Yes, we're not experts but it seems to be
  • Is the science valid? Yes, it seems to be
  • Have the authors explained MS? No, but they don't claim to have done so
  • Have they found the MS genome? No and they don't claim that either
  • Have they identified which genome is 'the' MS genome? No
  • Have they shown that there is such a thing as 'the' MS genome? No
  • Is the story an important contribution to the understanding of MS? That's debatable
Genetic differences between the two twins in each pair were found, presumably the result of somatic mutations that occurred during or after embryogenesis, but none were associated with MS. Because MS has a curious relationship to climate and has sometimes been attributed to viral infection, sequences from each set of twins that weren't identifiable as human were aligned with viral genomes. That was a clever thing to try, and might have identified an MS 'infectome', but unfortunately no differences were found between the viral loads of affected and unaffected twins.
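
For the curious, the 'infectome' idea reduces to something like the following sketch -- a cartoon in Python, not the authors' actual pipeline (which would use real alignment tools), with hypothetical inputs:

```python
# Cartoon of the 'infectome' idea: take sequence reads that failed to
# map to the human reference and ask whether they look viral. This
# k-mer screen is a stand-in for real alignment software; the inputs
# (read lists, genome strings) are hypothetical.

K = 21  # k-mer length; a common choice for quick sequence screening

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_viral_index(viral_genomes):
    """Map each k-mer to the set of viruses it occurs in."""
    index = {}
    for name, genome in viral_genomes.items():
        for km in kmers(genome):
            index.setdefault(km, set()).add(name)
    return index

def screen_reads(unmapped_reads, index, min_hits=3):
    """Count reads sharing >= min_hits k-mers with each virus."""
    hits = {}
    for read in unmapped_reads:
        per_virus = {}
        for km in kmers(read):
            for virus in index.get(km, ()):
                per_virus[virus] = per_virus.get(virus, 0) + 1
        for virus, n in per_virus.items():
            if n >= min_hits:
                hits[virus] = hits.get(virus, 0) + 1
    return hits

# One would then compare hit counts between the affected and unaffected
# twins' reads; in the paper, no difference in viral load was found.
```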

The twins did not show replicable discordance at the immune system (HLA) genes, or at other genes, that previous mapping and other studies had identified as likely contributors to MS risk. The twins varied a lot in terms of DNA sequence (presumably from somatic mutation) and gene expression levels -- but not in a way that could be attributed to their MS discordance.

This is another installment in the Life is Complex Department. We said above that this may not be an important contribution, because there are so many possible reasons why the authors didn't find what they hoped to find. That would not be their fault, and indeed their work shows some of the many reasons why these kinds of data are problematic (sequencing errors, difficulty replicating expression levels, and so on). Perhaps they should have known this would be the case from other, similar kinds of studies (such as comparisons of genomes from cancer and normal tissue in the same person).

They restricted their gene expression comparisons to a very limited set of sites -- necessarily, given current sequencing methods -- which means that in no sense was this study exhaustive. It could be said that this was a premature use of new technology. But that's not our issue here!

The idea was a clever one, but perhaps built too much on hope against what we know about disease complexity. In general, one would be surprised if a clear result had been found, because these twins did not even have the suggested risk genotypes at those candidate genes, and because a genetic signature for MS has been so elusive before now.

Perhaps identical twins are not as good a comparison as they seem on the surface, since we know that stochastic and somatic changes can be responsible for many differences during life (that's what the cancer studies show).

Perhaps twins that did carry the suspected HLA or other risk genotypes, but who are nonetheless discordant for MS, would provide a more cogent comparison. Discordance in them would suggest that something else--perhaps environmental pathogen exposure or somatic mutation in other genes in one of the twins--was responsible.

The etiology of MS remains mysterious and this was one way to take a shot at a discovery.

But multiple sclerosis is no joking matter. The Nature cover is a kind of cruel disservice to those who suffer from MS, and to their loved ones, in suggesting that the genetic cause had been found, with the obvious innuendo that a cure is just around the corner. Why else the cover story in Nature?

This kind of misrepresentation is what diverts resources to inefficient or even lost causes. In fact, in no way has 'the' MS genome been found, nor is it defensible to suggest that there is any such singular thing. And only half the people studied were affected. Maybe the affected twin experienced somatic genetic changes, while the unaffected twin reflected the inherited 'resistant' or 'normal' genome of the pair?

It's not the authors but the journal that is responsible for the misrepresentation. The study was a shot in the dark, perhaps, but one that had no business as a cover story. We hope the research will not be discouraged as a result -- that is, that the authors' lack of positive findings does not lead to their work being dismissed as 'negative' if the bait-and-switch of the journal cover leaves hopeful readers deflated when they see the real story.

Nature has been quite successful in its attempt to match the quality standard of People magazine, and we assume that will continue, as they're pros and they know how to reach their market. Their naked savage and 'ancient' Khoisan genome cover of a couple of months ago ran this one a close second, and we wrote about it at the time. Though we're sure they would deserve it, we want to keep these Misrepresentation Awards to one per customer per year, so we hope Nature (but not People) will wait for its next one 'til 2011.