Monday, June 29, 2009

The Centrality of Cooperation in Life: a first installment

A recent story in the New York Times science section about the importance of cooperation in ant colonies reminded us that we've been focused on things like disease and genetic causation in our blog for a while now. So we thought it was time to get back to other things, such as the importance of cooperation in all of biology, not just in ants.

Cooperation was in the subtitle of our book, and for a reason. Ever since Darwin's Origin of Species, whose 150th anniversary is rightly being celebrated this year, there has been what we think is an excessive belief that competition is central to the nature of life. In biology, this ethos is largely about the way that Darwinian evolution--with its stress on competition among individuals within a population, and its genetically determined winners, losers, inherent goods and bads--has fit our industrial culture's worldview. It is a convenient way to rationalize, and hence justify, self-interested gain by a few against the many.

Nobody can deny that there is competition in life, in the sense that some individuals do better at reproducing than others. Species have their day, and fade as other species flourish. Competition is an important mechanism for biological change, and recognizing it was a brilliant insight of Darwin's, as well as of others in his time (Wallace's attention was more on group competition against environmental limitations than on individual competition). It helped demystify the origin and nature of life and its diversity.

However, it's not the whole truth about life. Instead, we think, a focus on competition draws disproportionate attention to the long-term historical aspects of life (even when the Darwinian explanation is accurate!) rather than to what we can easily see every day before our very eyes. What we see everywhere, every day, is mainly cooperation: among molecules within each of our cells, among cells within each individual, and among individuals. In a deep biological sense, if not in one that fits the value-loaded human word 'cooperation', even predator and prey must cooperate: both must be present for each to survive.

Classical evolutionary theory, and a lot of popular science writing based on it, assumes competition to be the fundamental force in life. But just as essential to life is the necessity for cooperation, at all times and at all these levels--among genes in genetic pathways, among organelles in cells, among cells and tissues and organs, and, in ecosystems, among organisms within and between species.

And, our focus on cooperation leads us to a different view of natural selection and its importance in evolution. As we say in The Mermaid's Tale, natural selection does happen, but it depends on a lot of if's. For example, if a species over-reproduces, and if there is variation in the next generation, and if some of that variation leads its bearers to do better in a given environment, and if that's due to the inherited genome, and if the environment remains stable for long enough that this variant is favored consistently, and if the favored forms reproduce successfully, as do their offspring, and if they produce more offspring than organisms without the favored variant, then these favored organisms may become more common, due to natural selection. That is, they will be better adapted to their environment. But, all these if's must co-occur for natural selection to be an important force in change. If they are sporadic, or varying in nature and intensity, then their relative importance diminishes in relation to other aspects of life, including chance. Indeed, distinguishing chance from natural selection is no simple task.
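
To make that last point concrete, here is a minimal simulation sketch in Python (the population size, selection strength, and all other parameters are invented for illustration), comparing a variant that evolves by pure chance (genetic drift) with one enjoying a small advantage that operates only sporadically--one of the 'if's failing part of the time:

```python
import random

def simulate(pop_size, generations, s=0.0, on_fraction=1.0, freq=0.05):
    """Track one variant's frequency under drift plus optional selection.

    s           -- selective advantage of the variant when selection is 'on'
    on_fraction -- fraction of generations in which selection operates at all,
                   a crude stand-in for an unstable environment
    """
    for _ in range(generations):
        s_now = s if random.random() < on_fraction else 0.0
        # Deterministic effect of selection (standard haploid model)...
        expected = freq * (1 + s_now) / (1 + freq * s_now)
        # ...then binomial sampling of the next generation: genetic drift
        count = sum(random.random() < expected for _ in range(pop_size))
        freq = count / pop_size
        if freq in (0.0, 1.0):  # variant lost, or fixed
            break
    return freq

random.seed(1)
drift = [simulate(500, 100) for _ in range(100)]
chosen = [simulate(500, 100, s=0.02, on_fraction=0.5) for _ in range(100)]
print("mean final frequency, drift only:         %.3f" % (sum(drift) / len(drift)))
print("mean final frequency, sporadic selection: %.3f" % (sum(chosen) / len(chosen)))
```

In runs like these the sporadically favored variant does only somewhat better on average, and any single trajectory can easily be mistaken for drift--which is precisely why separating the two statistically is so hard.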

In any case, to understand life in a deep sense one really has first to understand the nature of the countless cooperative interactions on which it is based. How those interactions change over long periods of time, across generations of cells, organisms, species, and ecosystems, is important. But the interactions, and how they organize life, come first.

Sunday, June 28, 2009

Can the grant system be overhauled?

Gina Kolata writes in today's New York Times that $105 billion has been spent on cancer research since 1971, when Richard Nixon declared 'war on cancer', but, measuring progress in death rates, we're not a whole lot better off now than we were then. Indeed, cancer is good business, and cancer specialty clinics are opening or expanding all over the country (and advertising about how good they are, to promote sales). Cancer treatments are long and costly to our health care system, so this is a serious problem, both economically and for anyone who has cancer.

Ms Kolata correctly attributes much of this to the conservative nature of the grant system. Researchers don't even apply for money for new ideas because they know the system doesn't reward innovation, so what does get funded is research that doesn't attempt to go beyond incremental progress. And researchers' careers, prestige, even salaries depend on grants.

Kolata's article is specifically about cancer, but the conservative nature of the grant system is true for all fields. It's partly because 'peer review'--the judging of grants by people who do similar work--keeps it that way. People can only evaluate what they already know. And it's partly because the system demands that the researcher demonstrate that the work can be done, which requires pilot data. And as with any large establishment, it learns how to protect and perpetuate its own interests.

It is not easy to say what to do about it. What kind of accountability for grant recipients would be appropriate? The research questions being asked are tough, so 'cures' cannot be promised in advance, and the more basic the research, the less clear the criteria for success can be. The idea of accountability is that if your research is paid for by a health institute it should make notable contributions to health, not just to journal articles or to the researcher's career. A candid observer could quickly eliminate a high fraction of grant applications on the grounds that, even if successful as promised, their contribution would be very minor, as Kolata illustrates. Perhaps there should be a penalty for making promises that aren't kept--at least, that could help make the system more honest.

Limits on the size or length of projects, or on an investigator's total grants, would help spread funds around. But what about the role, sometimes legitimate and sometimes mainly based on a love of technology, of very expensive equipment and approaches? Is there a way to identify less technically flashy, but perhaps more efficacious, work? It's easy to see that this can be true: lifestyle changes could prevent vastly more cancer than, say, identifying genetic susceptibility causes, yet we spend far more money on cancer genetics research than on environmental change.

Speaking of lifestyles, one cannot order up innovations the way one can order a burger with fries. Might there be 'meta' approaches that would increase the odds that someone, somewhere will make a key finding or have a penetrating idea? Would that more likely come from someone in a big lab working on long-term projects, or someone in a small lab working in relative obscurity?

Or is it OK to perpetuate the system, assuming good will come of it here and there, while meanwhile a lot of people are employed to manage big labs, run the experiments, collect data, make machinery and lab equipment, and sweep the floors in large lab buildings?

These reflections apply to much that is happening in the life (and other) sciences today. The incentives drive the system in particular directions, including fads and technologically rather than conceptually based approaches, and in that sense some things are studied while other approaches may not be considered (or funded, because they're out of the mainstream). An example relevant to our blog and work is the way that genetic determinism and, more broadly, a genome-centered focus, drives so much of life and health sciences. By no means irrelevant or all bad! But it is a gravitational force that pulls resources away from other areas that might be equally important.

Clearly major findings are generated by this way of doing science, even if false promises go unsanctioned (indeed, those making them usually continue to make them, and continue to be funded with major grants). Life sciences certainly do increase our knowledge, in many clearly important ways. Yet disease rates are not dropping in proportion to the grandiose promises.

Is there a solution to all this? Could the system be dramatically overhauled, with, say, research money being parceled out equally to anyone a university has deemed worthy of employment? Could the peer review system be changed, so that some non-experts sit on review panels, ensuring that the system doesn't simply perpetuate the insider network? Or would they not know enough to act independently? Universities encourage and reward grant success not because it allows important work to be done by their brilliant professors but because it brings prestige, score-counting, and 'overhead' money to campus--can universities' dependence on overhead, or faculty's dependence on grant-funded salary, be lessened? Is there a way to encourage and reward innovation?

Friday, June 26, 2009

Evolutionary psychology has had its day?

David Brooks had an interesting piece in the New York Times yesterday about evolutionary psychology. He says that evolutionary psychology has had a good run, but that it's time to recognize that organisms are much more adaptable than the field allows.

Adaptability is one of the basic principles of life that we propose in our book, but it has been out of favor in this era of genetic determinism. Genetic determinism has a long and sorry history as biological essentialism: a value-based view of what or who is good and what or who isn't, and of what we really are like despite what we may think (and an assault on free will as well). It is part of a cycle in human thought, one that included decades of eugenics and Nazism as justifications for the worst possible actions by some against others.

It's interesting to see this conservative columnist taking up this cause.

Eccentric atheist professors

Bridget Kendall interviews Marcus du Sautoy on the BBC World Service radio program, The Forum, this week, along with two other guests. A mathematician, du Sautoy now fills the post of "Simonyi Professor for the Public Understanding of Science" at Oxford, recently vacated by Richard Dawkins. He talks on the program about the intersection of music and mathematics.

When du Sautoy was appointed to the Professorship in March 2008, the media noted his many awards, his brilliance, his experience with the media, his eccentricity and taste for loud clothes and so on, but they almost invariably also noted that he, like Dawkins, is an 'avowed atheist'. This fact is included in his Wikipedia entry. But why is it considered to be relevant to his science?

Regular readers of this blog may remember our post objecting to the idea that Dawkins' atheism has anything to do with science. We continue to object. Perhaps eccentricity is truly a requirement for explaining science to the public, but du Sautoy's atheism can't have been informed by his work in mathematics, and his atheism can't inform his math, so why do we need to know? And why is it considered relevant to a 'public understanding of science'--or is it misleading?

Perhaps the idea is that, as an atheist, the Professor of the Public Understanding of Science will object to creationism taught in the schools, surely a worthy cause (because creationism isn't any form of science). But it certainly isn't only atheists who object to creationism--anyone who thinks that explaining the origins of the diversity of life on Earth must be grounded in observation rather than undemonstrable faith will object, and this includes many believers, even those accepting the Bible as moral and essential truth, but told in allegory and metaphor.

The atheist/believer divide may be real, but it's social, cultural and political, not scientific. For atheism to be scientific, there would have to be experiments to prove that 'God' does not exist, whereas science is about things that do exist and can be analyzed as such. As a mathematician, du Sautoy should know that conclusions follow from definitions, so how one defines 'God' must, for starters, be specified before one can be an 'atheist'. To show that some claim, such as a 6,000 year old Earth, is not consistent with the facts is a form of scientific statement. But that is about some particular story about God, not about God directly. Even further, the rules of mathematical deduction are themselves assumed, and it has been shown (by Gödel) that not even all mathematical truths can be proven to be true.

So, while a scientist may (as many or most do) believe that the empirical world is all there is, and may personally be an atheist (no matter how 'God' would be defined), that is a result of their personal experience and view, not their science itself. It is very misleading to think otherwise. To do his job, Marcus du Sautoy should perhaps ask how science could address such questions, if it can, and point out why current science is helpless to address the fundamental existence question as it has been posed. The fact that there is a long history of philosophers or even scientists (including Isaac Newton) who have tried to prove biblical truth through their idea of science doesn't change this. du Sautoy has many interesting things to say about mathematics, and his flamboyance may attract many readers to science and careers in science. But what he has to say about religion is not grounded in his scientific expertise, it's just his opinion.

Monday, June 22, 2009

The death of privacy: an anthropological perspective

For most of the world's creatures, now and always, life is a naked phenomenon. Organisms, their phenotypes and their behavior, live entirely in the open. Mating, selection, survival and so on all occur that way. Those were the conditions in which we too evolved as a species. So how did privacy become so important to us? We can imagine a scenario: human history at warp speed.

For tens of thousands of years, predominantly small ancestral bands of close kin made their living by hunting and gathering, dwelling around a camp (and, eventually, a campfire). Local groups moved around frequently, abandoning sites and finding new ones--for example, to follow food resources. All must have been public, and basically nothing private. That included the shared and basically equal nature of material possessions, as well as the nature of each person's physical and behavioral traits. Everyone was related to everyone else, in known (indeed, prescribed) ways. It was, perhaps, a gossiper's heaven, since everything was known about everyone by everybody.

Over thousands of years, especially after the advent of agriculture 10,000 years ago, large, permanently settled but no longer kin-based populations became our environment. Individuals increasingly lived in isolated nuclear (or perhaps 3-generation) families in separate dwellings. They acquired and could accumulate personal possessions, largely interacting with people unrelated to themselves in any known way. Close relatives knew a lot about each other's affairs, but less about others'. Society became more unequal, and people developed increasingly private lives, as we know them today.

Very large societies require administrative structures (governments) for protections of all sorts, and to avoid the chaos of conflicts of interest and personal conflicts. This includes the protection of individuals' privacy from intrusion by others (intrusions we name 'crime'). Industrial societies, at least in part because of the growing inequities they developed, came increasingly to recognize personal privacy, including ownership, as important or even fundamental.

Disputes that traditionally were settled by the families involved, or by a local strong-man, became society's business, with standardized codes of acceptable behavior and of sanctions for violations--that is, laws. Probably beginning with property, society protected individual possession as well as individual rights not to disclose possessions.

Humans build their emotions and belief systems around their ways of life. So, associated with these newly private traits were senses of outrage or embarrassment if the traits became known. Revelation could make a person vulnerable to social or material risks, by exposing weaknesses, deceits, greed, and the like.

Personal traits, including health, became largely private unless revealed by the individual or family, but physicians necessarily had to know. As a result, even in the Hippocratic oath, some 2400 years old, physicians promise that

"All that may come to my knowledge in the exercise of my profession or in daily commerce with men, which ought not to be spread abroad, I will keep secret and will never reveal."

There are many obvious reasons for this. The healthy could easily take advantage of the sick or even of health risks they know such people may face. Relatives would perforce know a lot, but could be trusted, at least comparatively. In modern times, the obvious reasons for health privacy include the possibility that disease may be used to discriminate against people in various ways (jobs, insurance, etc.), and this would go against the legalized sense of things-that-are-nobody-else's-business.

Medical genetic data on their surface can be viewed simply as another source of diagnostic or therapeutic information, like blood pressure or 'where does it hurt?' But there are several basic differences:
  1. Genetic data about a person are also informative about his/her relatives
  2. Genetic data can be informative about any of the person's characteristics, not just the currently-presenting disease, and including normal as well as disease traits
  3. Genetic data may be of predictive value about a person's future, in a way that vested interests can use to discriminate among people to their detriment and the gain of the discriminator (e.g., HMO, insurer, employer, pension plan). Even police and the military get into the act in many forensic, security, or other ways.

It is for this reason that many are concerned about privacy issues in relation to 'personalized medicine', which at its core means computerized storage and analysis of DNA sequence data for the purposes of assessing existing or potential phenotypes of the individual (not just a group).

There are many professional bioethicists thinking about this, as well as lawyers, journalists, legislators, and scientists. Indeed, we ourselves are happy to be helping train a graduate student who is both knowledgeable in modern genetics and a practicing lawyer. She should be unusually qualified to help as we negotiate among science, society, and the law.

A lot of worry is being expressed, some of it professional angst (making careers out of the issues), but it seems wholly inevitable that, barring some gross national trauma, personal DNA sequence data bases are going to proliferate. Unless we get bored with genetics, our technological age is in love with DNA and is widely embracing it, both for great profit and because of the accepted promise of major health advances.

Much of this debate is moot because the lid certainly cannot be kept on such a bottle. Data bases will grow, and become more shared, computerized, coordinated, public, and difficult to contain. Interpretation of all sorts will accompany that growth. More and more people will learn more and more about more and more people--or at least will think they have.

How far this goes nobody can know, but it may be most useful to think ahead and stop knotting one's stomach over the details of regulation. Let's instead consider something probably more useful: assume that everyone's complete DNA sequence and its interpretation are entirely public, and can be known to anybody who wants to look. Let's further make the au courant assumption that DNA is the deterministic causal blueprint for who and what each of us is.

Such changes are to a great extent likely to occur, and in a way they spell the death of much of the sense of privacy that we have lived with for the roughly 10,000 years since the dawn of settled agricultural societies.

Younger generations will be born into this system, after us grouchy old goats pass from the scene. For new generations this will just be how things are. The effects of such data, and how they're handled, will be worked out--fallibly, imperfectly and with abuses, as always in human affairs. But we will work them out! We know that, as humans, we can live publicly naked lives. That doesn't mean we can do it free of trauma, and history doesn't suggest we will. In a sense, in regard to this particular issue, in shedding our privacy clothes we'll be going back to our beginnings.

Friday, June 19, 2009

The cystic fibrosis gene 20 years later

There's a Genetics "News Focus" piece in today's Science magazine called "The Promise of a Cure: 20 Years and Counting", written by Jennifer Couzin-Frankel. We can't link to it, but we will describe it a bit because it relates to several of the issues we've written about ourselves this week.

The piece is about cystic fibrosis (CF), the gene for which was found 20 years ago, and announced with great excitement and hope for a cure. Gene therapy was the immediate focus, and many labs began the hunt for a method to deliver healthy copies of the gene into the lungs of people with CF. However, years into the search, it became apparent that gene therapy, at least as it was being tried, was not going to work, and research turned to better treatment and drug therapies. In the last 20 years, life expectancy for people with CF has risen by about 10 years, to 37. This isn't due to anything genetics has taught scientists, but to more aggressive and earlier treatment to keep lungs clear.

Even if the early promise of a cure hasn't yet been fulfilled, in a funny way, "science has benefited more from the CF gene than CF has benefited from the science," says John Riordan, a biochemist now at the University of North Carolina at Chapel Hill and one of the co-discoverers of the CF gene. Much has been learned about the genetics and physiology of CF, and new approaches to gene therapy may be on the horizon. Millions of dollars have been spent over the last decade or so on developing new drugs to treat the disease, and two are now in clinical trials. At least 1000 different mutations in the CF gene have been reported over 20 years; this isn't unusual, even for 'simple' single gene diseases. One of these mutations is quite common, while many have been seen only once; again, this is the usual case. As we wrote in our post on June 15, one of the speakers at the Bristol meeting described instances when it is useful to know a causative allele for determining therapy, and indeed both of the new drugs now in trials are targeted toward one of the known mutations for CF--one is meant to be helpful in patients with the most common mutation, while the other targets those with a mutation that explains the disease in only a few percent of patients.

Although we have never worked on cystic fibrosis, we well remember the excitement with which the finding of the CF gene was announced, and the belief that gene therapy was the next frontier. It was very sobering and discouraging to many when gene therapy turned out to be much trickier than anticipated (will stem cell therapy, so much hyped today, be equally recalcitrant?). Much has been learned about cystic fibrosis in the 20 years since the gene was identified, and it's still possible that genetics will contribute to treatment or even a cure. But, it's a cautionary tale.

The tale is also informative about the various strategies and approaches. The first step, at least as far as genetics goes, is to identify the causal gene(s). Once this is done, specific studies can be done focused on patients (and, where needed, comparative control individuals), to identify the spectrum of mutations at the gene(s) and their clinical effects. This makes it possible to focus molecular and cell biology on the gene's effects. Mapping or genomewide association studies contribute little more.

One difficulty in gene therapy is getting a good experimental animal model, and this was frustratingly true for CF; in future years, hopefully cell-culture rather than transgenic animal models will be developed (perhaps based on expression manipulation in stem cells of various kinds). This array of knowledge and tools can then lead to therapy that is 'genetic' in that it is directed against the genetic physiology, but is not necessarily genetic in terms of germ-line alterations to protect future generations, nor even necessarily related directly to the specific mutation, to DNA or mRNA, etc. It can 'simply' be an attack on the pathophysiology, which might take any number of forms.

The motivation of GWAS to identify previously unknown pathways falls somewhere in the middle ground. Mapping (in families, not association studies) found the CF gene, which showed that ion channel biology was involved. Once that happened, mapping studies were no longer needed, and new variation in the CFTR gene could be found by sequencing affected persons. Once a pathway has been found by mapping or any other method, the pathway can be studied directly.

A cogent contemporary question is when we can assume that most pathways have been identified by mapping. Pathways often involve many genes, any one of which could be mutated to have a major effect. If a variant is highly penetrant and viable, it should be findable in families, in which case 'linkage' analysis is the statistically best approach. Or, well-designed but not massively large genomewide association studies in appropriate samples could identify the offending pathway member.

So long as any gene in the network has mutations with detectable effects, the network is found, and its remaining members can then be studied molecularly and in affected people. One might expect that this would be the case for most important gene networks, and hence most diseases. In this sense, the new wave of very large genomewide association studies probably is not going to be all that useful, even if it certainly will make important findings now and then. It is here that the discussion of the relative value of investing in large GWAS should take place.

The Science article also points out both that, once causal genes are found, studying them probably can lead to effective treatments, and that this takes time. Media hyperbole doesn't help--or, put another way, isn't necessary for support to be given to research on genes that really are important. Likewise, not all avenues need to be followed up if they are very costly, once it becomes possible to focus on known causal genes. Existing resources can be more focused. And currently, there are hundreds of disease-related genes for which this could be the case.

Wednesday, June 17, 2009

A gene 'for' depression that isn't

A new study published in the Journal of the American Medical Association today, and receiving a lot of media attention (e.g., here and here) questions a long-held result of a gene 'for' depression (well, long for a genetic finding for a complex disease--first reported in Science in 2003). This finding was significant because before it was published (and since), psychiatric illnesses had been notoriously and frustratingly immune to genetic dissection. This result was convincing enough that people were encouraged to think that genes for behavior were now going to be findable.

The new study, by Neil Risch, Kathleen Merikangas and others (by no means skeptics about the possibility of explaining complex disease with genetics, though they have cautioned in the past that gene-by-environment interaction is important), is a 'meta-analysis' of 14 studies that attempted to replicate the original finding, but were less successful. Risch, Merikangas et al. reanalyzed the data and did not find an association between the serotonin transporter gene and depression, though they confirm that life events are significantly correlated with this mood disorder.

Regular readers won't be surprised to hear that we aren't surprised by these results. We are interested, though, in the first two sentences of the JAMA paper:

The successful statistical identification and independent replication of numerous genetic markers in association studies have confirmed the utility of the genome-wide approach for the detection of genetic markers for complex disorders. However, recent genome-wide association studies have also indicated that most common genetic risks, at least when studied individually, are modest in magnitude, with relative risks in the range of 1.3 or less.

It's almost required these days for a genetics paper to start out by proclaiming the success of GWAS--so, like most geneticists, these authors are in the GWAS camp... but then they aren't. Indeed, it's now becoming fashionable to want to have it both ways: GWAS have been a great success, but actually they haven't, so we need a lot more money to do other things--like whole genome sequencing on everyone. And that, in essence, is another form of GWAS, but on a grander scale, because one still must make statistical associations between variants and disease phenotypes.
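
As for those relative risks of 1.3 or less: a back-of-the-envelope calculation shows what such a number means for any one person. The baseline figures below are invented for illustration, not taken from the JAMA paper.

```python
# Illustrative arithmetic only: how much does a relative risk of 1.3
# actually move an individual's risk? (Baseline numbers are made up.)
overall_risk = 0.10   # lifetime risk of the disease in the whole population
carrier_freq = 0.30   # fraction of people carrying the 'risk' genotype
rr = 1.3              # relative risk of carriers versus non-carriers

# Solve for non-carrier risk so the weighted average equals overall_risk:
# overall_risk = carrier_freq * rr * r0 + (1 - carrier_freq) * r0
r0 = overall_risk / (carrier_freq * rr + (1 - carrier_freq))
r1 = rr * r0

print(f"risk if you lack the variant:  {r0:.1%}")   # about 9.2%
print(f"risk if you carry the variant: {r1:.1%}")   # about 11.9%
```

Carriers face roughly a 12% risk instead of roughly 9%; both groups will most likely never get the disease at all. That is why markers of this size, however statistically solid, do so little for 'personalized' prediction.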

The individual sequences of Watson and Venter that are on display (with others in the pipeline) already show thousands of previously unknown protein-changing variants, plus additional thousands of 'novel' SNPs of unknown function. So the push for huge studies of this type is still based on technophilic wing-and-prayer promises to a great extent. But nobody is willing to say: pull the plug on these approaches and try something more likely to identify meaningful causes, including meaningful genetic causes, of complex traits. Too many vested interests are at stake.

In response to an article we published a few years ago in the Int. J. of Epidemiology (2006 Jun;35(3):562-71), Merikangas basically said that discovery is serendipitous, and that even if our skepticism was justified the money should keep flowing because, eventually, something would be found. This is not a novel argument, and indeed goes back to the basic Baconian idea of induction: keep observing and the theory will emerge from the data. To us, in the current context, it's a self-interested argument of last resort.

But, this discussion is a road we've traveled down a number of times already in this blog. This new paper certainly won't deter researchers from continuing to search for genes 'for' complex diseases like depression, schizophrenia or autism, and neither will we. Biologically, these traits may not be particularly different from physical traits like obesity, diabetes, or cancer. But behavioral traits are different in two very important ways that are relevant to issues of science policy. First, they are exceedingly susceptible to cultural and environmental experience and effects; even when these interact with genetic variation, it is the cultural factors that are not only clearly preponderant but also most malleable. Second, they are socially sensitive, in the sense of their potential for real abuse. We can't forget history, which is rife with arguments about biological inherency that are used to discriminate against classes of people, be they nationalities or 'races'.

At some point, surely even the most dedicated gene hunters will acknowledge that this approach isn't working for complex disease, and will begin to rethink the problem. Network-based thinking--that is, treating systems of interacting genes as wholes--could be a way out, if the goal is therapy. But probably not if it means simply identifying every variant each person may have in the countless genes in relevant networks.

Tuesday, June 16, 2009

Genetics in clinical practice--the tail end of the spectrum?

One of the more interesting talks at the Bristol meeting, perhaps because it's an area we don't know much about, had to do with clinical applications of genetics. The question concerned when, where, and how genetic information can be useful in the clinic. But the issues go much deeper than that, and relate to main points in our 'Mermaid' book.

The idea presented was that genetic information can be useful, not so much for disease risk prediction--because genotypes have poor predictive power for 'complex' traits (that is, and somewhat circularly, those that are not due to a single, and hence predictive, gene!)--but in a diagnostic sense: genotyping can help determine which of many causes applies to, and hence guide therapy for, an individual case of a given disease.

An example given was MODY, or 'maturity onset diabetes of the young'. MODY runs in families and can mimic both type 1 and type 2 diabetes, though most often it presents as a mild form of T1D, but with patients continuing to make some insulin (T1D, or juvenile diabetes, is due to a failure to make insulin; T2D, or adult diabetes, is a failure of cells to respond to insulin). The speaker, Dr Andrew Hattersley, described one family in which each affected member was being treated differently, but once the causal gene was known, each member was treated more appropriately, and the disease was then better controlled. This was very good to hear, because all too often even knowing the causal gene can't inform treatment--Huntington's disease comes to mind.

Hattersley also discussed a gene that causes three forms of achondroplasia, or dwarfism, but the particular mutation in that gene determines whether an individual will be somewhat shorter than normal, or will be much more severely affected, and knowing the mutation during pregnancy helps to prepare the family and clinician.

He primarily discussed single gene disorders, although these are often disorders that can be caused by a number of different mutations, or even by different genes. In the instances described, knowing the specific mutation can make the difference between useful medical intervention and none. The idea is essentially that the 'same' disease is not really the same in different individuals if enough specificity is known, and that when a gene's effects are understood, knowing the causal gene is clinically relevant.

By contrast, all too often in human genetics identifying genes whose variation is statistically associated in at least some studies with the occurrence of disease contributes nothing toward treatment, because the statistical connection is too weak to be useful. That is the problem that is now widely recognized in regard to GWAS (see our many earlier posts) for complex diseases, so we were heartened to learn that this is not always the case. In some cases, knowing the physiological pathway is very useful.

The much broader relevance of these points is that they relate to genetic determinism: the degree and specificity to which genes determine phenotypes (the same applies to environmental factors). Only to the extent that genes determine traits in organisms can knowing the genotype be used to predict the trait. That has everything to do with views, accurate or hyped, about the usefulness of huge genotyping studies to health. It has to do with the use of genetics in social, behavioral, and other societal contexts. And it has to do with the origins of traits, and with evolution itself.

To the degree that natural selection molds and creates (produces) the traits we see in organisms--and that it does so is at the heart of Darwinian theory--genes must determine traits. That's because if genotypes do not determine the trait, selection for or against variation in the trait won't affect the frequency of genetic variation, and hence won't guide genetic evolution. Yet modern Darwinian evolutionary theory is entirely based on genetic determinism.

Selection acts on phenotypes, not genotypes. If the connection between genotype and phenotype is weak, selection only weakly affects the relative success ('fitness') of genotypes, and their frequency changes over the generations will largely be the result of chance, not selection.

When a mutation strongly affects a trait, and the trait is important to organisms' success, then Darwinian theory works just fine as advertised. But this is not so if traits are so genotypically complex that selection hardly affects any individual gene.
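
Quantitative genetics makes the point compactly in the 'breeder's equation', R = h²S: the response R of a trait to a selection differential S is scaled by the heritability h², the fraction of trait variance attributable to genetic variation. A tiny sketch, with arbitrary numbers:

```python
# The breeder's equation, R = h^2 * S: the cross-generation response (R)
# of a trait to a selection differential (S) scales with heritability (h^2).
# The numbers below are arbitrary, for illustration only.
def response(h2, S):
    return h2 * S

S = 2.0  # selected parents average 2 trait units above the population mean
for h2 in (0.8, 0.3, 0.05):
    print(f"heritability {h2:.2f}: offspring mean shifts by {response(h2, S):.2f} units")
```

As h² approaches zero--that is, as the genotype-phenotype connection weakens--even strong selection on the phenotype barely moves the underlying gene frequencies, and chance takes over their fate.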

There is a middle ground. If a trait, like a disease, is a strong reflection of a specific genotype, but many different genotypes can generate similar traits, then prediction can be weak, but once the trait has arisen it can be useful to determine which of those causes is responsible. From an evolutionary point of view, if each instance of a trait is due only to variation in a single gene, then selection can affect the frequency of that variation.

The key difference is that for complex traits the phenotype is thought to be the simultaneous result of variation at many different genes. There are so many combinations of genotypes that can generate similar phenotypes that one can't usefully predict the latter from the former--or, perhaps, even go back the other way, unless variation at only one or a few genes is responsible in a given instance.

Thus, prediction in development, medicine, or evolution all depend centrally on the degree to which genes individually, or in aggregate, determine the traits you bear in life. In the 'tail' end of the causal spectrum, where specific variants exert strong effects, things work out nicely.

Monday, June 15, 2009

The simple facts of life

Here we report on some reflections after our participation in a meeting on the 'New Genomics in Medicine and Public Health' held at the University of Bristol, UK. The talks were varied and interesting, including a talk about Mendel, reports of successful and unsuccessful genomewide association studies, plaudits for the UK Biobank, and discussion of clinical applications of genomic findings.

An important question these days, related to various methods in genetics and its role in medicine and public health, is how causally complex life really is--a question at the heart of most of the work reported at the meeting. Some normal traits as well as diseases clearly are genetic, in that their variation is clearly caused by variation in a single gene (or a small number of genes, in a way that's well understood). But others are less clear cut, as we've discussed here a number of times.

Vested interests of all sorts--including venal and careerist interests, but also strongly held scientific convictions--affect this area these days. One way to put the question is: "How causally complex is life?" Here the interest is mainly in genes, with environment getting some, but usually rather casual or minimal, attention, and the question boils down to how well phenotypes can be predicted from known or knowable genotypes. Sometimes this means using individual variation to predict individual disease risk--the major original purpose of GWAS (genome-wide association studies). Sometimes it means using natural variation as a tactic to identify genetic pathways that are responsible for some normal trait; the idea here is either that, when mutant, the pathway (or 'systems' or 'network') genes could lead to disease, and/or that these genes, when known, can be used as general preventive or therapeutic targets.

A commonly invoked motivation for human genetics work these days is that we will be able to implement 'personalized medicine': to predict disease or treatment, or to suggest preventive measures, based on each person's genotype. Many companies are promoting this, and the molecular genetics community is hyping it very heavily (here, there is no doubt of strong material vested interests, even if some actually believe it will work as advertised).

There are hundreds of diseases for which a, or often the, causative gene is known. Sickle cell anemia, Huntington disease, phenylketonuria (PKU), cystic fibrosis (CF), and muscular dystrophy (MD) are just a few examples. For these, predictive power already exists, though clinical application is not necessarily based on genotype. There are other examples where the latter is true, but these are generally rare in the population. Promising gene-based molecular therapy is in the works for CF, MD, and maybe even for some forms of inherited breast cancer (due to BRCA1/2 mutations). For these diseases, causation is clear even if there is substantial variation in risk, age of onset, or severity. Causation here is usually thought of as simple.

But for most common and/or chronic diseases, the story is far from clear, as we've mentioned in various earlier posts (and as is widely discussed in the literature). These traits usually have substantial heritability (i.e., familial risk--if a close family member is affected, your chance of getting the same disease is greater than that of a random member of the population to which you belong). That means that, unless we are somehow badly misunderstanding things, genetic variation plays a major role in risk (at least in current environments). Yet after many sophisticated, large studies, identified genes account for only a small fraction of the familial risk. The data suggest that many genes, say 'countless' genes, contribute substantial risk in aggregate, but individually their contribution is so small as to be unidentifiable by feasible (or cost-justifiable) studies. That would suggest that the disorder is caused by numerous combinations of huge numbers of individually weak, and rare, genetic variants. This is known classically as 'polygenic' inheritance, and if it's what's going on, things are very complex indeed.
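
How complex? A rough power calculation suggests why individually weak variants are unidentifiable in practice. Under the standard normal approximation, the sample size needed to detect a variant explaining a fraction q² of trait variance grows roughly as 1/q²; the effect sizes below are hypothetical.

```python
# Approximate sample size needed to detect a variant explaining a fraction
# q2 of trait variance: N ~ (z_alpha + z_beta)^2 / q2 (normal approximation).
# The effect sizes chosen below are hypothetical.
from scipy.stats import norm

alpha = 5e-8   # conventional genome-wide significance threshold
power = 0.80
z = norm.isf(alpha / 2) + norm.isf(1 - power)

for q2 in (0.01, 0.001, 0.0001):   # 1%, 0.1%, 0.01% of trait variance
    print(f"variant explaining {q2:.2%} of variance: N ~ {z**2 / q2:,.0f}")
```

A variant explaining a tenth of a percent of trait variation already demands tens of thousands of subjects, and a hundredth of a percent demands hundreds of thousands. Spread risk across hundreds of such variants and 'feasible' stops being the right word.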

Others, focused on the many clearly 'Mendelian' (single-gene) traits, simply don't believe life is that complicated. They suggest at least two other possibilities. One is that only a modest number of genes contribute, but most of the culpable alleles (sequence variants) are so rare and weak that genomewide association studies cannot pick them up. At such genes, there may be one or two strong, common alleles with high penetrance (when present, the disease usually occurs), and so they can be identified in family studies or GWAS. Those variants only account for a small fraction of the overall genetic contribution. But once the gene is known, we can sequence it in many patients and, lo and behold!, we find many other alleles that, some argue, contribute the rest of the observed familial risk.

There is some truth to this: we have done simulations to show that there can be high heritability but only a few contributing genes, for just this reason (heterogeneity of the frequencies and effects of existing alleles).
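
We won't reproduce those simulations here, but a toy version of the underlying idea is easy to sketch (every number in it is invented): a single gene carrying many individually rare alleles of varied effect can account for substantial heritability, even though no one allele is common enough for an association scan to see.

```python
import random

# Toy model of allelic heterogeneity: one gene, many rare alleles with
# varied effects. All frequencies and effect sizes are invented.
random.seed(2)
n_alleles = 50
freqs = [random.uniform(0.0005, 0.005) for _ in range(n_alleles)]  # all rare
effects = [random.gauss(0, 2.0) for _ in range(n_alleles)]         # varied

# Additive genetic variance contributed by each allele: 2p(1-p)a^2
var_each = [2 * p * (1 - p) * a * a for p, a in zip(freqs, effects)]
var_genetic = sum(var_each)
var_env = 1.0  # arbitrary environmental variance

print(f"heritability from this one gene: {var_genetic / (var_genetic + var_env):.2f}")
print(f"most common contributing allele: {max(freqs):.2%} frequency")
```

The gene as a whole can explain a sizable share of trait variance, yet the most common causal allele sits near half a percent frequency--invisible to a case-control scan, but findable by sequencing the gene in patients once the gene itself is known.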

Another possibility is that a modest number of genes have variants at rare, but not very rare, frequencies. These will be identified by the panoply of existing methods, and once that's done it will be possible to genotype everyone at these genes, identify each person's individual set of variants, and determine risk. These are called 'oligogenic' effects, because the number of genes involved is small rather than huge. This view acknowledges the current problem, but assumes it will go away with enough data--and, importantly, that business as usual is the right approach.

Presentations at the Bristol meeting, including Ken's, show clearly that causation is a spectrum: aggregates of very rare genetic effects, a larger but still small fraction of oligogenic effects, some major gene effects, and polygenic effects.

The question is: what do we do if this is true? Where is the practical limit below which attempts to identify all the genes are futile or not worth the investment? And is it likely that current attempts will at least identify the bulk of genetic effects and the networks involved, so that the disease can be eliminated in whole or at least in major part?

There is no single consensus in this area. Some are more skeptical than others. Some argue that knowing the genetic contributions to disease may make diagnosis more specific (the doctor can test for which gene is contributing to a given case), even if genotype-based prediction will remain a dream in the eyes of venture capitalists. Others, of a computophilic bent, believe that if enough computers are set to work on enough DNA sequence, the problem will, like infectious diseases, be solved. We won't be sick any more.

Time will tell where in the causal spectrum most traits lie. One thing we can be sure of, though: in this contentious area in which huge career, institutional, and commercial investments are at stake, in years to come, retrospective evaluation will always claim victory! Few will look back and say that we knew better than to make the level of investment in genetic causation that we are currently making.

Friday, June 12, 2009

Karl Marx Stares Down Herbert Spencer

We spent a couple of days in London on this trip (a post about the Genome and Public Health meeting we attended in Bristol will come in due course). On a drippy day we walked through Highgate Cemetery because we had read that several prominent people were buried there. It is an overgrown but esthetically very interesting place (representative photo to the left). The most prominent resident is undoubtedly Karl Marx. We don't know how he got there (though it's not a church burial ground, which would not be a very savory place for a strongly declared atheist!). In any case, he has a very nicely done and imposing (and rather non-proletarian!) monument (photo below)--to which he would surely object!

When we were there, a few others were gathered round to reflect upon the historically influential man's tomb. A British woman was singing the Internationale while her friend took her photo. A young American, wearing an old hippie-style US Army jacket, walked up and complimented her on her singing. He asked her if she would take his photo as he stood by the monument. She said yes, if he'd also sing the Internationale. He protested that he could only sing the American version, but as his new friend aimed the camera, he burst into a loud and passionate rendition of the communist anthem, forgetting not a word.

But there is a rather great irony in this tomb. Marx had originally been buried 200 meters away from his current location. The original site, a rather plain one, was not on a major walkway through the cemetery, and as Marx grew in fame, it, along with the surrounding burials, was being trampled by tourists. So after a fund was made available for the large new monument, the Marx family remains were moved. But just opposite the new location was the grave of another of the most noteworthy former Londoners interred in Highgate: Herbert Spencer.

The irony is that Marx was a strong advocate for both the potential for human improvement and the eventual evolution of an egalitarian human society, while Spencer was the advocate of what came to be called 'social Darwinism' and the person who coined the term 'survival of the fittest'--the exact opposite of Marx's view about human nature--and he justified social inequality as Nature's way, which society would oppose at its peril. To Marx, society should work towards the poor catching up with the privileged; to Spencer, the privileged should distance themselves from the poor.

The two may or may not have met--they were born at nearly the same time (1818 and 1820), but Marx died in '83 while Spencer lived until 1903. Spencer was English and of the middle class, while Marx was an immigrant living largely in poverty, often relying on the help of the politically left but wealthy mercantile class. Did those who chose the new burial site for Marx know of this juxtaposition? Wikipedia says not, but we don't know whether that's accurate.

The difference of views between Darwinian and Marxist social evolution could hardly be more marked, even though both men were historical materialists, holding that the evolution of organisms and of society, respectively, were the results of historical processes.

We don't know what Spencer thought of Marx, or even whether they referred to each other (that must be well-known to historians and should be easy to determine). But it's worth reflecting that both views were prominent and expressed at the same time, in the same place, and with the same facts at hand. It's not at all unusual for opposing or opposite views to be held by contemporaries who use the same facts, sometimes selectively, to advance their views. In this case, Marx and Spencer were both working in the 'Newtonian' era, in which it was believed that there were Laws of Nature that, if understood, could be used for the betterment of human society. This grew out of the so-called Enlightenment period, and tensions were greatly increased by the French Revolution, which, until it unraveled, threatened the world's existing non-egalitarian order. How could both authors, and others who allied with them or who expressed similar views, claim to be empirical and scientific and yet disagree so profoundly--and does this have any lessons for us today?

The answer to the latter question is that we'll never know until tomorrow, when someone can look back in retrospect and see how competing ideas worked out. But it is almost always true that there are such ideas. Even when a theory is widely or universally accepted, as evolution is, there are always debates about aspects of the theory at the edge of current understanding.

Darwin rested his case on short-term observations of current adaptations, comparative biology, artificial breeding experience, biogeography and geology, and extrapolation of these things into the deep past. He provided convincing evidence for the fact of common historical origins of life forms, and natural selection was a mechanism that would work in principle even if its long-term effects could not be directly observed. For Darwin, evolution had no direction, value, or ending point, but was a continuing process (except in industrial societies, which he thought in many ways had finally suspended the role of natural selection).

Marx also used comparative methods (with Engels, for example, using anthropological observations on world cultures, especially the 'primitive' pre-industrial ones that colonialists had observed around the world), and detailed analysis of the current economic system (capitalism). He predicted that social evolution would resolve class conflicts, and his views were taken to mean that the desired endpoint wasn't far away. For those dedicated to the theory, like Lenin, the endpoint was just a revolution away! Society would then, in a sense, suspend the inherent conflicts between ideas and their antitheses. For Marx, social evolution did have a direction, one that was inevitable because it was due to a natural Law, and it also had an end point.

Whether or not the juxtaposition of these great men's tombs was inadvertent, it certainly set up interesting contrasts for the cemetery to present to us on a drizzly walk.

Wednesday, June 3, 2009

Singing along

We heard a BBC discussion today about urban birds in Britain. An interviewee, a bird expert of some sort (we did not catch his name or profession, but he seemed to be a biologist), was describing the development of locally different dialects in urban areas of the country. The local birds recognize each others' 'language', but these are different among localities. When played the call of a bird of the same species, but with a different 'accent', birds didn't respond with nearly as much interest as when the call was of a bird from the same locale as the listener.

This is interesting, because there is so much stereotyping in popular culture, guidebooks, and the like, describing what 'the' so-and-so bird does. But local dialects have been detected over the years in all sorts of animals, domestic and wild, so the story here is not a great surprise. The idea is that in each city, birds become more distinct over time, so that they no longer recognize each other's songs.

The discussion then took a rather predictable evolutionary turn. If these birds continued to have locally differing dialects, then birds from different areas could no longer communicate to mate, and this would lead to the evolution of new species. Partly, this is simply a matter of our own--the scientists'--definition of what a species is. Here, there's not really a difference (that one can test) between 'don't mate' and 'can't mate', and in either case we declare the two groups to be different species.

This is classic, but rather superficial Darwinism, one of the issues we write and think a lot about. There is nothing wrong with the logic itself. Long-term isolation is likely eventually to lead to the accumulation of so much genetic difference that members of each area could no longer mate successfully.

But how long does this take? Probably hundreds, or thousands of generations (or more). Think about that in this context. How stable are urban areas in a place like Britain, relative to such long times? For birds, that would mean centuries or millennia (or more). Given the rapid change of urban landscapes, transportation, and environmental changes, the odds that simple local dialects, which have been observed to develop in a short period of time, would persist or remain sufficiently isolated for such lengths of time seem remote, even if certainly not impossible in principle.

Species can remain mating-compatible even after hundreds of thousands or even millions of years of separation. Humans (who have dialects if any species do!) inhabit the entire terrestrial globe, and at the end points (the tips of South America and Africa) have been isolated for around 100,000 years (5,000 generations at least), and are still fully mating-compatible.

Oversimplified evolutionary explanations are pat, often irrefutable because untestable, and the problem is that they cover over some of the more interesting questions about how things actually happen (one of the issues we raise in The Mermaid's Tale). Speciation may be due to the accumulation of large numbers of small genetic differences arising from local adaptation (behavioral, such as by mating calls, or otherwise); that was certainly Darwin's idea of what happened. But this need not be so. Small genetic changes in chromosomal compatibility can also lead to mating isolation (some of these are called hybrid sterility mutations), without any of the usual kinds of adaptations due to natural selection. And what happens over eons of time is unlikely, in most or at least many cases, to be easily extrapolated from what is observed in just a couple of generations.

It's for this reason that we caution against such simplifications. They give a semblance of understanding according to a widely, if often uncritically, accepted theory. But they are a reflection of scientific impatience that can obscure the facts that may be important for a deeper understanding.

Monday, June 1, 2009

Non Sequitur

Nice Non Sequitur cartoon this morning, but I can't get it to post here. It's worth a click.

Non Sequitur — UCLICK GoComics.com

Time off

We are going away for 10 days, and won't be posting as often as usual. Ken is giving a talk at the Colston Symposium on 'The New Genomics: Public Health, Social and Clinical Implications' at the University of Bristol in the UK, as part of the celebration of the 100th anniversary of the university. It's a meeting basically about the consequences of the biobanks and DNA sequence data that are going to pour forth. The question is not likely to be whether such data should be collected--that seems settled--but what the collection could mean. Whether deeper ethical issues than ones like confidentiality will be raised isn't clear, because most of the speakers are involved in the genetics research that is hungering for major sequence data resources.

Ken's talk will be about the question of whether 'complex' traits that have defied GWAS and other approaches--that is, that appear 'genetic' in various ways but for which only a fraction of cases have been given genetic explanations to date--are really that complex, or whether there might be other explanations. On balance, other factors may simplify the genetic basis of such traits, but they still appear to be elusively complex.

We'll try to post from there at least, and perhaps on other occasions along the way.