Friday, March 29, 2013

Gutenberg II and the Broadreach University

We're not sure, but it seems likely that the move from stone tablets to papyrus was important in the advance of civilization.  Scrolls were cumbersome, but not as bad to lug around as stacks of inscribed slabs.  Scrolls had to be written by hand, and that meant jobs.  And lots of people could share ideas across large distances, because copies were more readily available.  High civilization left a legacy on such rolls.

Johannes Gutenberg
We're also not historians, but we understand from what we've heard and read that very quickly after Johannes Gutenberg introduced moveable type to the printing industry in the mid-1400s, all hell broke loose.  Books became commercial commodities.  Trash of the pulp-novel sort, superficial analysis of this or that, and even Bibles became available to the hoi polloi.  People, even a lot of ordinary relatively smelly people, could learn to read and actually start to make some decisions for themselves.  The revered reverend didn't have all the answers or all the say.

Industrial and commercial life changed rapidly and dramatically.  Within a few centuries, millions upon millions of books had been printed, in all the countries of Europe, creating new industries, shops, markets, and ways to distribute knowledge.  There were best-selling authors across a spectrum of fields, languages, styles, and topics.  The class system began to erode, ordinary people could read without knowing Latin, and news could be distributed rapidly far and wide.

Even more interesting was that the spread of knowledge, though perhaps including a lot of chaff as well as grain, helped lead to the Renaissance and the Enlightenment, and the scientific revolution.  It led to huge changes in intellectual, technical, and aesthetic life.  New knowledge, even of a highly technical form, could spread rapidly and be discussed among colleagues.  Universities could be founded almost anywhere, not just at a few libraries with scrolls, and moved from lectures about texts to assignments of textbooks and library scholarship.

Of course, despite this widening out, there was still a class-based society, still wars and pestilences, bigotry and ignorance.  But a middle class, with commercial, political, and intellectual clout, developed.  A privileged few could no longer hoard knowledge as a form of secret power.  By and large, though, most of us would say that it was on the whole a very good thing.

That was Gutenberg, act I.  And we're now experiencing Gutenberg II!

Gutenberg II
One is always tempted to dub one's own time as Important.  Nobody wants to be living in a Dark Age.  We want Jesus' return to be imminent.  And as a result, whenever anything changes we are tempted to name a new Important Age, critical in human history.  So it is easy to name our time in human history as the Information Age, and pat ourselves on the back and say that we are transforming the world.  But it does seem important that computer electronics has greatly changed and hugely accelerated the storage and distribution of information, with fewer delays, wider dissemination, and more interaction than has ever been possible before.  It can be seen as just another step in the speed, scope, and dissemination of information.  And it is becoming even more global than printing ever was.

Internet Map; Wikimedia Commons

We have electronic storage of class lectures, pre-recorded as well as live online lectures, video and other ways to interact in scientific, scholarly, and aesthetic contexts, and more ways to comment, question, or discuss ideas.  Online classes are gaining huge global reach, and even traditional-style fixed-paper journals are going from killing trees to gobbling up electrons.  Journals can publish anything relevant whenever it arises, with unlimited ancillary information, and the ability of readers to comment, disagree, correct, or reinforce findings.  Students in universities and colleges are forgetting where the 'library' is--or perhaps even what it is.

Blogs like this, along with the networks of Facebook and Twitter, professional connectors of all sorts like LinkedIn and many others, daily weave the web of connections.  Students take lectures online, work together in groups, submit their work, and communicate with faculty electronically, at any time of day or night from anywhere in the world.  They chat, text, and tweet each other at a frenetic pace.

While societal changes are usually not point-events in time, and not even the printing press was an instantaneous invention, these changes clearly are happening, and accelerating.  What used to take generations, perhaps, now only takes decades or less.  As in the late Middle Ages, children born into the new era grow up with the new conditions as part of the only environment they know.  This is not change for them, it's reality.

So, ready or not, Welcome to Gutenberg II!


The universe of Broadreach information transfer
Gutenberg II is intercalating rapidly everywhere, but perhaps nowhere more than into college and university life.  What will, or should, we do as part of this sea change?  In our view, we should not just try nervously to adopt this or that aspect of the new age piecemeal, in a reactionary way, but should move energetically to embrace the reality as a whole.  Online courses, with students in increasing numbers, 24/7, and from around the world, are going to replace at least some on-site classes, and professors talking to a room half-full of students dozing or texting will be replaced by electronic communication of diverse and yet-to-settle-out forms.

Faculty are currently judged by a rather staid, bourgeois set of self-important categories--research, teaching, and service--with the mix varying depending on the institution's nature.  We have rather entrenched ways of evaluating (score-counting, really) each other's 'impact' in these categories.  Department Chairs and Deans judge performance and promote faculty on these kinds of grounds.  Of course, since we're middle class and need to protect our jobs, we have learned to game the system, to make it more legalistic--and that makes it more stodgy, rigid, and stultifying rather than stimulating energy and quality as much as it might.  But the grounds are shifting: in the predictable future, major careers will have a very different shape, and so they should.

We should shift from categorical 'impact' to unified 'broadreach' concepts to evaluate a faculty member's effectiveness and contributions.  The distinctions between research and teaching, and between professional and public communication, are blurring rapidly.  We need to create Broadreach University as the new local, national, and global reality, a fabric of synthesis in communication, learning, and developing new ideas and knowledge.  How we do this will have to be worked out.  Bean-counting evaluation criteria will have to change in some way because, as has always been true in human history, there will be chaff with relatively less grain.  People--faculty and students--will learn to game the system.  So will it be in Broadreach U.  But change will come and we have a chance, today, to shape it in appropriate ways for society.

Empty lecture hall; Wikimedia Commons
We are describing what is 'in the air' (or should we say 'ether') these days.  As a current example, there are predictions that journals as we know them in academic and scientific work are to become things of the past.  It isn't just that electrons are replacing paper (which replaced papyrus, which replaced stone slabs, which replaced oral tradition), but that the idea of a regularly scheduled, formally structured set of 'articles' is phasing into a dynamically unscheduled, modifiable, real-time, commentable way of communicating knowledge, one in which knowledge and opinion are going to be more openly mixed.  This is inevitable, just as open access publishing is inevitable.

One of the earliest, loudest, most eloquent voices advocating for open access is that of Michael Eisen, biologist and co-founder of the Public Library of Science.  He gave a talk in San Francisco on March 27 on OA, the transcript of which he reproduces on his blog, saying, among other things, 
Every year universities, governments and other organizations spend in excess of $10 billion dollars to buy back access to papers their researchers gave to journals for free, while most teachers, students, health care providers and members of the public are left out in the cold.
Even worse, the stranglehold existing journals have on academic publishing has stifled efforts to improve the ways scholars communicate with each other and the public. In an era when anyone can share anything with the entire world at the click of a button, the fact that it takes a typical paper nine months to be published should be a scandal. These delays matter – they slow down progress and in many cases literally cost lives.
And, on the same topic, Nature covers OA and the changing world of publishing extensively in this week's issue.  Potential career consequences of publishing in OA journals are covered here, for example; and from another piece, "the Web opens the workshop windows to disseminate scholarship as it happens, erasing the artificial distinction between process and product."

And it won't be just 'open access' journals that we'll read; 'open publishing' will be how we'll publish.  arXiv writ large.  Yes, deans and universities will need to come up with new measures of scholarly impact, but that's already being done.

Broadreach will differ widely among what are currently the traditional disciplines.  It will be broader and less technical, relevant to wider audiences, in the arts and in fields like anthropology than in more densely technical fields such as chemistry or mathematics.  But perhaps even the latter will reach less narrowly than we might expect.  Time will tell.  That's part of the fun.

One hears concerns that too many students don't attend class, don't do their homework, cheat, and graduate without having invested much effort or learned very much from tired faculty who don't really care in any uniform way.  Lectures may be boring or inscrutable.  Faculty game the system for their own career-building.  Departments use various means of manipulating course enrollments to obtain resources and attract paying students--they can view instructors as entertainers whose jobs depend on how popular the material they provide is.  Students manipulate their choices of classes and professors to game their grade point averages for getting jobs or graduate admissions, or to avoid what's too difficult.  Sadly, it's clear that this is all true.  But this isn't a description of Broadreach University.  It's true of today's universities!

Of course Broadreach will have its flaws, and in various ways it won't be as good as on-campus universities, or won't really replace on-campus U, and there will still be ways to game the system, cheat, slide and glide, and so on.  Perhaps a major, or even the major, gain may be that we can educate and stimulate the real thinkers among us to innovate and create, without the rather useless need to satisfy some curriculum for a formal degree: advanced degrees weren't needed by Shakespeare, Wordsworth, Darwin, Einstein....or Bill Gates.  Think how liberating Broadreach University could be for those who drop out when they've got what they need!

Whether the gaming or abuses will be more or less than they are today is rather moot.  BU is coming, whether anyone likes it or not.  What we should be doing--what we must do and will be doing, whether kicking and screaming or cheering and welcoming--is accepting the reality and doing our best to make it work.

Thursday, March 28, 2013

Just like pornography!

In a famous obscenity case, an infamous Supreme Court justice, Potter Stewart, said he couldn't define 'pornography' but he knew it when he saw it (that was before the internet, so he actually had to do some work to get his, um, exposure, so to speak).

There is something similar in relation to scientific explanations that really have transformative power:  when it happens, you may or may not be able to explain it in its details, but you recognize it.

Goya, The Nude Maja
Every day in evolutionary and biomedical genetics, we see a stream, indeed, a flood of reviews, overviews, commentaries, papers, blog posts, and op-eds promising progress on understanding biological complexity.  Papers with sentences like
"Here we develop a model that shows how considering [sequence, systems biology, epigenetics, copy number variation, evolution, new functional equations, neuroimaging, high throughput analysis, new 'omics' data, methylation, acetylation, ..... (pick your favorite)], major advances in understanding the biology of complex traits and diseases.  Our method ....."
But, then, where is all this promised progress?  It might not exactly be obscene, but are such bevies of claims just posturing and careerism, what we are forced to do to succeed in academic careers these days?  It may be apt to say so, because what we really see these days is incremental change, some of it progress but most of it trivial or useless, yet fed by the constant pressure for more Large Scale high-powered computational this or that.  And that leads to all the self-congratulation.  But it's as paradigm-shifting as Goya's Maja is pornography. 

If you think about the major advances that by most counts really were progress--the so-called revolutions or real (rather than self-flattering) paradigm shifts that have happened in science, from Galileo, to Darwin/Wallace, Einstein, the discovery of the nature of the DNA sequence, or continental drift--these changes were very similar:
1. Many diverse things that had been given separate, forced, or hand-waving explanations fell dramatically and quickly into place.
2. This was almost instantly recognized.
3. The new ideas were conceptually very simple.
The new theory may have involved some technical details, like fancy math or biochemistry and the like, but the ideas themselves were over-arching, synthesizing, and simplifying.

As Thomas Huxley famously proclaimed after learning Darwin's explanation of the mechanism for evolution:  "How extremely stupid not to have thought of that!"  

Once you see it, you realize its import.  In genetics and biomedicine today, people are always saying it....but we're not yet seeing it.

Wednesday, March 27, 2013

You gonna blog that?

Doris Mable Cochran (1898-1968), measuring a turtle shell

Readers of the MT might recognize folks like Briana Pobiner, Ken Miller, Lauri Lebo, Norman Johnson, Melissa Wilson Sayres, Donald Prothero, T. Ryan Gregory, Brian Switek, Josh Rosenau, Cara Santa Maria, and Bora Zivkovic.  Last weekend, along with maybe twice as many others, we took part in a "Catalysis" meeting about science communication at NESCent.  It was the first time I'd ever been in a room vocalizing while holding concurrent Twitter discussions--here's Brian's Storify of our tweets.

Just one among the flock of topics: scientists are encouraged to go public. So to help roll out the welcome mat, I thought I'd share my experience and perspective.

You might be thinking, is this something I have to do? Should I be tweeting, Facebooking, and blogging?

I'll go ahead and let Brian answer that one:

I agree. Which is why this post isn't a how-to for everyone.  But everyone, whether they go public or not, should know why-to.


Why I blog, Facebook, tweet, talk to journalists, ...

Socializing is fun.
It's common for me to feel squirrelly and defensive after interactions online, but the good far outweighs that.

Internet = People who know things and want to share them with  you.
I'm in a constant state of awe-filled gratitude when I'm reading my Twitter feed. This blog is a big part of my continuing education and I shudder to think how ignorant, behind the times, and understimulated I would be without this online MT family and without the online community on Facebook and Twitter. Sure the socializing is all mixed up, sure there are corgi pics sandwiching science news in my feeds, but how is that bad?

Writing, words, and wordplay are integral to my life.
It was 1983. I was six. Kids' Writes put out a call for poems. I sent one in and they read it on television. Here it is. My mom helped with the cummings-esque format and much more...
My Mind
My mind can
Think. And
my mind can help do
Art. And
my mind can help me to
Read. And
my mind can
Dream. And
my mind can help
Write. And
my mind can help me to
Learn. And
my mind can help me to do the
Right Things. And
my mind is good.
Fast forward to 2006.  I submitted an essay to the national This I Believe site.  Once you do that, and it passes some sort of editorial filter, it's there publicly.  So I emailed the link out to my friends and then put the link in my email signature.  I think that traffic got the attention of the editors, and they asked me to tweak the essay (adding all that personal stuff) and then read it for NPR's Weekend Edition.  I think it factored largely into why, when Ken and Anne were going to be traveling for a while in 2009 and were looking for a guest to keep the MT going while they were away, they asked me.  When they came back from their trip I stayed put.

Hardly writing about fieldbookish things in my field notebook. (thanks Rutger Jansma)

Internet = me too.
This blog provides me with motivation to think immediately and deeply through new discoveries and to reflect on old ideas too. What results is sometimes lecture or class notes for my courses, even large scale curricular changes or lesson plans. Sometimes it's old dialogue I've already journaled and pull up in the context of a new science discovery. Sometimes I just have to tell my story about a doctor and her vibrator. Sometimes it's just something I need to get off my chest, by thumping it a bit.

Many of my posts, and many of Ken's and Anne's, I assign to my students to read.  And I've even had two courses create blogs for voluntarily posting their experiences with 23andMe (here and here). I wouldn't have had the wherewithal to go through with the 23andMe curriculum if it wasn't for the MT and my online connections with geneticists.

Science is everybody's. And I get a kick out of helping with that.
I've had a big paper come out since joining the MT, and being able to blog about it not only got the word out to colleagues but helped me to put it in a context that other academics and educators, my students, and journalists could potentially appreciate.  Especially if they didn't have access to the article.

When that paper was accepted, we wrote up a press release and sent it to my university's press office (so the press officer wouldn't have to write one himself).  He sent it out through his channels, and when it caught people's attention I got phone interviews, like with Scott Hensley at NPR.  Some writers noticed my blog posts, which got quoted or sourced in some of the articles written online.  Most notably here.

Kate Clancy, a super duper online science communicator, was kind enough to suggest to editor Bora that this was fodder worth considering for the guest blog at Scientific American. So I was fortunate to get that opportunity too, especially since it was an immediate reaction to the public reaction to my research!

This is academic communication. 
It's legit. In fact, I instruct my students to use quality science blogs (and I don't mean just MT posts) as resources or to kick-start their brainstorming and their research for course projects. There are quality assessors compiling blogs and feeding them out to us through social media.
Not all blogs are created equal. The blogosphere is mature enough now that there are connoisseurs with discerning palates. You will not be served up junk to read. If you do good work, you will get noticed and your peers might come to rely on you, like with Ryan:
Writing publicly has perks.
This, all of this, is a writing lab and practice gym. [See the article "How social media improved writing."] Constantly reading and writing hones your craft and your voice. And it gets noticed. Thanks to friends who voted enough to get me considered and then thanks to the monkey-loving cosmologist judge, I won a prize for a post on the MT about my dissertation. And thanks to the gumption I grew by writing here, I now have an agent who's got my popular science book on reproductive consciousness up for auction right this second.

Increased visibility has perks.
It's a way to get who you are out there. I don't have a web page. This is what I call "my web page": a list of links to my writing. It's like a calling card. And it's handy when I want to quickly find a link and share it.

It's difficult to know how much of my public web behavior has caused the following, but it certainly hasn't hurt: invitations to join a research workshop and to participate in symposia at conferences, invitations to speak on college campuses, filming about the obstetric dilemma for the BBC show Horizon, filming about ape tail loss for an upcoming PBS program, and new friendships with potential scientific collaborators in anthropology and beyond.  It's a big part of how the folks at the Leakey Foundation heard about my 23andMe experience, which got them to ask me to come speak at the Cal Academy and to visit two schools to talk about science.  Tweeting about using my dogs to teach the scientific process got, indirectly through a new teacher friend, the attention of the Understanding Science folks, who are putting it on their iTunes U.

This web presence and online writing tags me as someone who engages in scientific outreach, which means that others kindly provide me with more opportunities to do outreach. This is why I was invited to the NESCent meeting that sparked this post, and am involved in absolutely amazing educational projects with the Smithsonian Human Origins crew.  These activities definitely do more for me and my teaching (and therefore my students) and for my scholarship than I do for the public but I hope that one day with enough experience that will flip around.

Some important points and tips...

In science, Verbal < Math?
This is a non-question. Writing science is still science.

You don't need to be a blogger to have a blog. 
You can do as much or little as you like on a blog and still get so much out of it. From day one, back in 2009, I've been convinced of this. And before that, I was only reading blogs and still getting so much out of this.

There are seemingly infinite uses for a blog.
If you read blogs or write on them, then you are constantly surprised by this already, but here's a nice use of the blog that you might not have seen applied yet.

Blogging need not be snarky or antagonistic. Although it can be. 
Duh.

Don't take blogging so personally, or do.
No one will read it. Or they will and won't comment. Or they will and won't share it. Or they will and will never tell you they read it, even if they liked it! So what? Take a page from Oprah Chopra and just do it for yourself if you want to.

For cultivating readership, be social. 
If you want people to read your blog (... you might not care, but if you do...), then you need to be on Facebook and Twitter too, posting your links, and links to others' blogs, and sharing news, networking and befriending other bloggers and writers and colleagues who are engaging in similar ways with each other and the public.

Twitter is not just for starf*ckers.
Contrary to my prior assumption, Twitter is not just for narcissistic, needy celebs building up masses of  "followers." However, Twitter really is for so many crazy a$$holes. I'm one. I tweet for my dog who discovered evolution and wrote a book about it @Elroybeefstu.


Downsides to going public

Negative value judgments from colleagues, especially senior colleagues.
Still happening.

Social media makes every day a conference day!
Social media also needs a sarcasm font.

It sucks up time.
Like right now.

Being a target for the trolls, pedants, haters, and grandstanders isn't so fun.
But that's obviously not going to stop us.

You could make a mistake. 
And you will.

You will be misunderstood and misinterpreted and quoted out of context.
Humans.

There is still no category for this in most P&T portfolios.  
It falls under "service" or "outreach." Here's a snippet from my Dean's annual review letter:


Next year I'll tally up my hit totals which are over 40,000 now. Then maybe she'll be even more intrigued... even though these are measly numbers compared to many other science blogs!

That she mentioned my blog activity is something, at least. But even if she hadn't, I'd still keep this up.

This is where and what science is now.




Tuesday, March 26, 2013

Evolution in a terrarium?


Our understanding of ecology and evolution has historically been highly stratified, with conceptual divisions and blinders between humans and the environment, between disciplines, and between substantive foci.

As a highly anthropocentric species, we long thought that the rules of nature somehow didn't apply to humans.  Arguably, even today, many who study human evolution do so without thinking about how evolution occurs in all of the other organisms on our planet.  To be fair, it is difficult to really delve into a subject without focusing on it – but a narrow approach can be dangerous too.  Furthermore, it's probably easier to see evolution occurring in your favorite organism if that organism doesn't have generation times as long as yours…

And this “us versus the rest of the world” type of thinking has extended both ways.  Not only do we evolve, but we are constantly modifying our environment.  We’ve probably been doing that for a very long time.  Given that most other organisms modify their environments – beavers build dams, ruminants alter landscapes by selectively grazing in certain areas – I would say that it’s silly to think that we haven’t been altering our landscapes for a long period of evolutionary time.

For a period of time in U.S. history, when manifest destiny and western expansion were the battle cry of the day, it wasn’t uncommon for people to arrive in environments, new to them, and to think of those places as pristine, untouched landscapes.  This is what much of our modern environmental movement grew out of.  We thought of these Western landscapes as things that hadn’t yet been marred by human hands, and we needed to keep them pristine for the next generation.  This was and is a flawed view of the relationship between humans and environment.




Long before Euro-Americans showed up in the American West there were native peoples living in these wide open spaces.  There were just a lot fewer of them when the new Americans arrived, because new pathogens had already moved across the landscape, at a speed that would have been impossible for the new Americans to match, and devastated much of the population.  These beautifully manicured landscapes weren't untouched.  They had been manipulated, lived in, and exploited for thousands of years prior to European arrival here.

The arrow goes the other way too.  We are modified by our environment.  Returning to my favorite subject (pathogens), the story of the sickle cell trait and malaria is a textbook example of how the environment, this time in the form of parasites spread by mosquitoes, has actually left an imprint on the human genome [1].  And considering that much of the evidence about the evolutionary history of falciparum malaria suggests a leap into humans around 10,000 years ago, humans may have started the ball rolling, so to speak, with the onset of agriculture.  The real story, however, is probably much more complex than the one(s) that anthropologists usually tell.

Humans began changing their environment to such an extent that it could have led to a population increase in a specific type (or types) of mosquito.  These mosquitoes were already in the environment, and would have already been capable of spreading a parasite that had also already been in the environment.  That parasite has recently been shown to occur at high prevalence in wild gorillas, meaning that agriculture wasn't necessary for it to exist in human populations, but several lines of evidence suggest that agriculture could have been sufficient for it to become a major factor in human populations [2].  Furthermore, malaria parasites have coevolved with their hosts, with a multitude of adaptations that allow them to survive our (and other organisms') immune responses [3].

Humans made a change to their landscape: agriculture.  That change may have reduced the risks associated with starvation, but also may have increased the population sizes of some pest species: mosquitoes.  And it wasn’t just any type of mosquito that would have been influenced, but a type or types which are capable of effectively spreading a parasite: falciparum malaria.  And that parasite has developed the ability to sexually reproduce in certain types of mosquitoes and to maintain population sizes within human hosts that don’t immediately kill everyone who is infected, therefore allowing the life cycle to continue. 

These coevolutionary relationships aren't even close to being limited to the malaria story.  For example, a non-trivial portion of the human genome is made up of endogenous retroviruses.  We've been shaping our environment, and it has been shaping us, for a very long time. 

This story isn’t just about pathogens though, because through domestication humans have transformed plants and animals too.  The list of such “domesticated” organisms is, like the list of pathogens with coevolutionary relationships with humans, too long to give proper space to here.  But think about the special relationships between humans, cats and dogs (they’ve arguably domesticated us).  Or between humans and yeast (see a nice blog post here), or with maize or potatoes, or cows and sheep.  The not so funny thing is, that while we know about all of these modifications that humans have made to the environment, and that the environment has made to humans, they almost always get studied in a highly stratified way.  As if each of these components occur by themselves, in a terrarium, not interacting with everything else. 

Plants get studied in botany departments or in schools of agriculture.  Insects are studied either by agricultural scientists (worried about what they'll do to plants) or by biomedical scientists (worried about what they'll do to people).  Biomedical scientists who focus on the actual pathogens, in my view, tend to ignore the human component.  (Of course, a lot of interventions are based on killing something in the epidemiological cycle, and most people don't want to take out the human host for the sake of the other components of that cycle.)  And finally, people who study human evolution are just as bad as (or worse than) everyone else on this list, mostly ignoring the major evolutionary pressures (disease and nutrition) that have undoubtedly shaped who we are so that we can instead focus on topics that are sexy (to humans, that is).  Even though we know that humans do not live in terrariums.  Or most of us don't, anyway.

References:
1. Kwiatkowski DP: How malaria has affected the human genome and what human genetics can teach us about malaria. American Journal of Human Genetics 2005, 77:171–190.

2. Liu W, Li Y, Learn GH, Rudicell RS, Robertson JD, Keele BF, Ndjango JBN, Sanz CM, Morgan DB, Locatelli S: Origin of the human malaria parasite Plasmodium falciparum in gorillas. Nature 2010, 467:420–425.

3. Evans AG, Wellems TE: Coevolutionary genetics of Plasmodium malaria parasites and their human hosts. Integrative and Comparative Biology 2002, 42:401–407.

  

Monday, March 25, 2013

HeLa cells and cavalier scientists

Last week came the news that the HeLa cell genome had been sequenced and published.  HeLa are cells that were taken from the tumor that eventually killed Henrietta Lacks 60 years ago, without her knowledge or consent, immortalized and used in thousands of labs around the world ever since.  They've been used for everything from significant medical research to teaching students to do lab work.  The story of Henrietta Lacks and HeLa cells was told by Rebecca Skloot in her best-selling book, The Immortal Life of Henrietta Lacks.

The sequence is, according to a story in Nature, "a mess." While this had been known to some extent from previous work, the research team
... confirmed that HeLa cells contain one extra version of most chromosomes, with up to five copies of some. Many genes were duplicated even more extensively, with four, five or six copies sometimes present, instead of the usual two. Furthermore, large segments of chromosome 11 and several other chromosomes were reshuffled like a deck of cards, drastically altering the arrangement of the genes.
Some of the chromosomal rearrangement was presumably because the cells were taken from a tumor, and other aberrations were probably introduced by 60 years of large-scale cell division in labs around the world.  Chromosomal duplications, losses, and rearrangements, along with other forms of mutation, are expected--and some may even have been in the cells originally obtained from Ms Lacks.  One caution this leads to is that no two labs using "HeLa" cells are using cells with exactly the same genome.  Is this important to know, and does it justify this sequencing effort?  Those are two separate questions.

The variation the sequencing documented doesn't mean that personal genetic information is not discernible from the data.  Even duplicated bits, as well as bits not duplicated, will have the detailed sequence (plus some additional mutations) that Ms Lacks carried, and alignment of these parts, which would be extensive, would make it possible to say many things about her.  She would carry sequence details known to be found in Africans but not Europeans (and, since she probably had some European admixture, a subset of clearly European-derived ancestral sequence as well -- African Americans have on average about 15% European ancestry, due to the rape of slaves, voluntary mixed mating and marriages, etc.).

Genetic variation related to some personal traits could easily be found if she carried it.  Variation related to skin or eye color, or to susceptibility to malaria, and more could easily be found if she had it.  Enough individual sequence would be available that, were her own sequence known (from before her cancer arose or the cells were distributed and grown in laboratories, or from the original biopsy if it still exists), it would be easy to align the sequence obtained in a lab and find the match.  From a technical point of view, there is no privacy with this kind of data.
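
To make that concrete, here is a minimal, hypothetical sketch of our own (emphatically not anything the HeLa researchers did): given genotype calls at a handful of variant sites from a lab sample and from a panel of reference individuals, simple concordance counting singles out the match.  The sites, genotypes, and names below are invented for illustration.

```python
# Hypothetical illustration: matching a lab-derived genotype profile to a panel
# of reference genotypes by simple concordance.  The variant sites, genotypes,
# and names are all invented for this sketch.

def concordance(sample, reference):
    """Fraction of shared variant sites at which the two genotypes agree."""
    shared = [site for site in sample if site in reference]
    if not shared:
        return 0.0
    agree = sum(1 for site in shared if sample[site] == reference[site])
    return agree / len(shared)

# Genotypes coded as counts of the non-reference allele (0, 1, or 2) at
# arbitrary "chromosome:position" sites.
lab_sample = {"1:10523": 2, "2:48831": 1, "7:99012": 0, "11:5020": 2}

reference_panel = {
    "person_A": {"1:10523": 2, "2:48831": 1, "7:99012": 0, "11:5020": 2},
    "person_B": {"1:10523": 0, "2:48831": 1, "7:99012": 2, "11:5020": 0},
    "person_C": {"1:10523": 1, "2:48831": 0, "7:99012": 0, "11:5020": 1},
}

for name, genotypes in reference_panel.items():
    print(name, round(concordance(lab_sample, genotypes), 2))

# With thousands of real variant sites rather than four, the true match stands
# out essentially without error, even if the lab cells carry extra mutations.
```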

In yesterday's New York Times, Skloot tells this new story, of the sequencing of the HeLa cells, again done without the consent of anyone involved.  And the story is as disturbing as the first violation of Henrietta Lacks' privacy.  Not least because so much of the major media that covered the story--Nature, Scientific American (which republished the Nature piece), The Atlantic, Science 2.0, and so forth--ignored the ethical issues.

The largest issue, of course, is that not only does the genome of these cells contain information about Henrietta Lacks, but she had five children, and they each have a 50% chance of having inherited any risk alleles she might have carried (for cancer, say).  They weren't asked whether they wanted to know their risk, and they weren't asked whether it was all right with them if that information was made public.  The data have now been taken off-line, but not before it was seen, and downloaded, by many.  

As Skloot points out in her piece in the NYT, nothing that was done was illegal--either in the sequencing of the cells or the publishing of the data, or when her cells were originally taken.   It's just that it wasn't right to publish the sequence, as the original use of the tumor wasn't right.  It's an example of the cavalier attitude of science.  The researchers had absolutely no business publishing the sequence, and it's astonishing that they did, given the widespread airing of the ethical issues surrounding the history of the HeLa cells.

And, as if to compound the felony, while we might on the surface think it honorable and exemplary that the investigators took the data offline, they are reported to have been seeking family permission to post it again.  But even just keeping the data to themselves is dishonorable from a human subjects point of view: neither Lacks, of course, as she died long ago, nor her family, as far as we know, consented for the lab even to do the sequencing in the first place.  The honorable thing, really, was not to have done it, but having now seen the issue raised, the data should be erased from all computers and entirely discarded.

Of course, life is complex and scientists have power (unlike the general public) and we'll get our way as a rule.  We'll find a post hoc way to justify, rectify, bully, bribe, cajole, or whatever to get to do what we want to do.  This doesn't make the specific Lacks sequence story special or in any way specially culpable, as similar kinds of activity are occurring widely.  Indeed, things are more subtle and complex than that....

We blogged back in 2010 about the HeLa story, not long after the Skloot book first came out.  We were a distinct minority (of maybe two) who believed that the book, while telling an important story, was too much a continuation of the invasion of privacy that began when Henrietta Lacks' tumor was taken from her by scientists.  Skloot has, we understand, set up a foundation to help the family and donates some proceeds from her book to it.  But she is making her career on the back of the Lacks family, whom she readily admits she essentially had to coerce into letting her tell their story, no matter that she also is trying in her way to help the family.

In a purer world, she would give all proceeds to the family or, better, would have realized during the process that the book should not be written.  Due to the arrogance of science, and journalism, this family's privacy has been violated over and over again.  Whether or not Skloot does, on net, help the family as well as herself, the ethical dilemmas persist: is it OK to make a book or movie, for profit, of any story so long as we try to compensate for the ethical violations in some way--even in cases where no such compensation would otherwise be available?  This is a question that anthropologists should ask themselves all the time, though too few do.  Here is where the ethical rubber meets the road--where there are no easy answers.

This new chapter of the story is more troublesome to many than was the book, it seems, presumably because it's much less ambiguous.  Everyone now seems to agree that this personal genetic information, that the family hadn't even asked to know themselves, shouldn't have been made public without their consent.  And this along with other stories shows how non-private DNA sequence may be, even if an actual name is not attached to it on a web page somewhere.  Apparently this hadn't occurred to the scientists who did the HeLa sequencing.  Nor had they thought of 'private' meaning not even known to investigators who should refrain from even peeking at it without explicit permission.  Another example of the cavalier attitude of too many scientists.  

Friday, March 22, 2013

Panic in the university!

You can almost feel the tension, the sheep-like feeling that we'd better hurry to join the herd; university presidents have to wear diapers to avoid wetting themselves during discussions of the looming threat.  What's the threat?

A commentary in last week's Nature on MOOCs, massive open online courses, raises some interesting questions about the state of higher education today.  MOOCs, the piece says, are transforming the modern university, even while it's still not clear how.  But everyone's jumping on the bandwagon, and the number of universities offering MOOCs is rapidly on the rise, as this figure from the Nature piece shows.


“In 25 years of observing higher education, I've never seen anything move this fast,” says Mitchell Stevens, a sociologist at Stanford and one of the leaders of an ongoing, campus-wide discussion series known as Education's Digital Future.
But we have been moving away from classroom-based education for a long time.  We've had online courses for years, though not free ones.  And, in the best universities, attendance in major lecture classes is not taken.  Students' knowledge is evaluated by exams, and a grade and diploma are certification that the recipient has demonstrated sufficient achievement according to the school's requirements and standards.

And has everyone forgotten Johannes Gutenberg?  To pass an exam, you can go to class or you can just read the quaint item known as a 'textbook' or other material.  Textbooks are the age-old paper-line (rather than on-line) means of education, in which the student can study the material anywhere, at any time.  When moveable-type printing was invented, there was similar panic in the streets about how this would put the intellectual elite out of power (because the hoi polloi might actually now be able to read and judge for themselves!).

We also took the next, electronic step more than a decade ago.  We don't mean ebooks, though those may become part of the story.  Rather, we have instituted what is now nearly universal in major universities: lecture notes (including videos etc.) are posted online on a course's web page.  Students can view or download these notes at any time, and from anywhere, sober or not, and the professor can update, modify, or supplement them at any time.  Some classes even give online exams.

For years, I also ran a kind of chat room for my large Human Genetics course, alongside the posted lecture notes, where students could discuss the material with each other, query the TA or professor, answer each other's questions, and so on.  The students were all in residence (except when off on joy-breaks that kept them from being in town and/or in class).  The usage?  Almost none, except right before exams.  The conclusion: this course had the major attributes of a MOOC as far as an individual student was concerned, plus the ability to actually go to class (very unfashionable!), so the poor usage may just indicate that the time wasn't ripe, the fad just hadn't started, or (being pre-meds) the students didn't want to help each other or actually learn anything.

That being said, it does seem that large lecture classes in particular have been online or the equivalent for a long time, and the current threat may make dinosaurs of them.  What is different is the growing fashion for large classes to be done remotely without in-person attendance.  Given my experience and the other precursors mentioned above, it isn't that long-distance or online learning is new, just newly 'hot'.

So far, the small class in which attendance is often checked, and the laboratory or seminar lab-meetings, are safe.  Here discussion is important, social networking and cooperative work take place, and effective question-and-answer interactions matter.

We now have online software, like Adobe Meeting, that can substitute for in-person meetings, including visual interaction, shared viewing of computer screens and material, and so on.  This is so much cheaper than flying from New York to London or New Orleans for a meeting that only the appeal of local restaurants or golf outings justifies the travel costs businesses and universities pay for these meetings.  There is still some benefit to schmoozing and deal-making, but if the current trend persists, the uniqueness of such opportunities for doing business will wane.

Even if it were no advance, universities will have to change, because students are coming up who have been online their whole lives, and because raw and relentless Google-avarice will, as with Amazon versus book stores, simply put the current status quo into the obsolete bin.  Of course, the elite will still have their 4-year playgrounds, and lesser universities will use their plush dorms, football and fraternity/sorority diversions, and the party-and-sex opportunities to draw large numbers of students to their campuses.

However, with non-stop football on television, bars and apartments everywhere, even these draws may not be enough.  Many universities, at least the good-to-better ones, may reconfigure, accepting the obsolescence of their large lecture halls, and increase their upper-level small-class and lab facilities, and become advanced-learning institutions, for perhaps fewer but also more advanced, skilled, and seriously interested students.

The elite may just have to find (and pay for) other ways to make sure their kids meet other elite families' kids, and retain the huge social advantage that they have always had, and that Harvard and Stanford have so willingly facilitated.

But then, Yahoo! just instituted a ban on telecommuting, which was once seen as a revolution in the workplace.  The reason is that people work better as a group when they work as a group, eye to eye, eating in the same (blah) cafeteria, and able to grumble and gossip when the boss is away at a meeting.  Synergy is key to success.  So it's hard to foresee what's going to happen with MOOCs. 

Thursday, March 21, 2013

Walk on the wild-type side

Wild!
Let's say I want to test the effect of gene A that I've modified somehow -- to mimic a human disease, say -- on a mouse (assuming, of course, that the results are then generalizable to people).  I put the modified gene into the zygotes of a laboratory mouse strain, let's say C57BL/6, and watch what happens as the animals develop.  Of course I need controls, unmodified mice, so that I can compare my transgenic modifications to the unmodified C57BL/6.  When I write my experiment up, I will refer to the 'mutant' (transgenic) mouse and the 'wild-type'.

Wild-type axolotl (Wikimedia)
Indeed, everybody everywhere in biology, it seems, refers to their manipulated or in some way conditional organism as the 'mutant' and the other, non-manipulated form as the 'wild type'.  This is so entrenched that often even seasoned professionals don't know, don't think about, or don't want to be forthcoming about what that means.

As judges, we just heard a student research presentation in a high-school science project competition that we wrote about on Tuesday, in which the student compared the effect of a modified protein to that of the wild-type.  This student was plunked into a fancy lab and clearly got this term from her mentors in the lab, without knowing where it came from (other than that the protein "was cloned into E. coli") or thinking about what that meant.  We would assert with nearly 100% confidence that the mentor never thought about it either.

But, who cares?  
Scientists, like other scholars and perhaps those in any profession, coin jargon to use in their work.  Sometimes a term is clear and stands for a specific thing, but says it more simply.  We can say, for example, that the frequency of a genetic variant in a population 'drifted' over time.  What we clearly mean is that "the allele frequency changed randomly over many generations because it did not cause an effect on the organism that carried it that affected that organism's survival or reproduction".  Just saying 'drifted' saves lots of words.  Properly used, such technical terminology is completely reasonable and useful.
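
As an aside, the meaning just spelled out is easy to show with a toy simulation of our own (a minimal Wright-Fisher-style sketch; the population size and starting frequency are arbitrary): a variant with no effect on survival or reproduction still changes in frequency, simply because each generation is a finite random sample of the one before.

```python
# Minimal Wright-Fisher-style sketch of genetic drift: a selectively neutral
# allele changes frequency only because each new generation of 2N gene copies
# is a finite random sample of the previous one.  Parameter values are arbitrary.
import random

def drift(start_freq=0.5, pop_size=100, generations=200, seed=1):
    random.seed(seed)
    freq = start_freq
    trajectory = [freq]
    for _ in range(generations):
        copies = 2 * pop_size                      # diploid population: 2N gene copies
        hits = sum(random.random() < freq for _ in range(copies))
        freq = hits / copies                       # new frequency, no selection involved
        trajectory.append(freq)
        if freq in (0.0, 1.0):                     # allele lost or fixed by chance alone
            break
    return trajectory

traj = drift()
print(f"After {len(traj) - 1} generations the allele frequency is {traj[-1]:.2f}")
```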

Sometimes we use terms to show off, to seem intelligent, insightful, or perceptive, or to give an aura of technical expertise.  The humanities--we're sorry to have to assert--typically wallow deeply in terms so contorted and arcane that even they don't seem to agree on their meanings.  That is, the jargon is obfuscatory rather than efficient or clarifying.
Mutant axolotl (reduced pigmentation); Wikimedia

But sometimes even in natural science, terminology becomes an obstacle rather than a shortcut.  Many analysts of science and the philosophy and history of science have noted how deeply we can be embedded in rhetoric which is not just shorthand, and not just showing off, but which carries connotations that actively affect how we think and what we do.  Technical terms can allow airy underpinnings to be taken as having substance.  Then it becomes quite dangerous to science (even if, perhaps, it helps get grants and your name in the paper)!

In our areas of genetics and evolutionary biology, as a rule, terms at least have somewhat of a precise meaning, even if referring to, say, the 'wild type' sounds like insider talk.  This particular term might seem to be absolutely no problem, since it's so simple.  But what is the 'wild type', even of a single gene?  Is it 'the' sequence of the gene found in the wild?  Almost never!  Why?  Because the 'same' gene will have varying sequences among individuals in a population, or in a species, or, even more so among species.

So to say you're comparing the risk of a variant to 'the' wild type is to make an assumption that there is such a type.  Do you mean the 'normal' variant?  If so, how do you define normal, and by what reason do you assume, much less actually think, that there is only one such type out there in the 'wild'?

Type specimen 'pitcher plant' (Wikimedia)
The term originated, we think, in early 20th-century experimental biology, when evolutionary theory had it that most genes had a good variant that was by far the most common in the population, and one or a very few, very rare, harmful 'mutant' forms.  Nature preferred the former, so it was the 'wild type,' and in a lab one would naturally use that form (and term) to refer to what was originally sampled from the wild, and so on.

However, we know that was a very poor understanding of evolution and of the nature of DNA sequence variation, and it was a hyper-Darwinian, stereotypical treatment of nature.  It was, we think, entirely analogous to the idea of the 'type specimen'.  That's the bedraggled stuffed lion in the Natural History museum, taken to represent all lions.  Of course, if we're careful in our thinking, we know that no other lion is quite like the Snagglepuss in the museum.  But science is not always careful--in part because the rat-race for status in the field encourages simpler, if not simplistic, thinking, in which we scientists routinely indulge.

If you're studying the mice in your laboratory, comparing the animals of the strain you tinkered with by introducing an altered gene to those in the strain that you did nothing to, then you're likely to call the latter the 'wild type'.  But that is preposterous!  There is nothing 'wild' or 'typical' about a laboratory mouse!  They are the result of, at the very least, many generations of inbreeding and cage maintenance and as the old quip goes, wouldn't survive for more than 5 minutes out in the, yes, wild!

Walk away from the wild-type side of thinking!
If we really want to understand Nature, we should wean ourselves from habits that can be very, if subtly, misleading.  We've often mentioned the related and comparably misleading ideas of genes or selection 'for' a given trait, or the sloppy and slippery assumption that every trait was produced by natural selection, or naming genes 'for' some trait (like 'bithorax' or 'presenilin' for genes that, when mutated in particular ways in particular individuals, can cause, respectively, a double thorax in insects or early-onset Alzheimer's disease in humans).  Or our routine reference to 'the' human genome--a sequence, as we've posted about before, that doesn't exist in any particular individual, never has, and never will.

We often have no need whatever for special terms, and we should stay away from them like the plague.  If we want a term, then instead of the stereotypical notion of a 'wild type', that can end up not being properly checked or used in interpreting results, we should use terms like 'reference', or 'baseline',  or 'control', or even, if we're careful, 'average'.  Those terms clearly state what you mean, without latent connotations that can affect your audience's thinking....or even your own.  You can choose an inapt referent, or the average may or may not be a good baseline, but at least the meaning should be clear and your conclusions then fairly discussed.

So to keep from being wildly misled, don't walk on the wild-type side!

Wednesday, March 20, 2013

Illness as big data problem? The bicameral mind


Supercomputer; Wikimedia
We attended a very interesting discussion of Alzheimer's disease the other day, by an historian of science.  The speaker gave his overview of the history of the disease, from 1906 when it was first described by Dr Alois Alzheimer to today, 107 years and billions of research dollars later.  After much discussion of what we've learned and what we still don't know, it seemed we pretty much all agreed that dementia is a tough problem, predicting who will get it is a tough problem, and while some progress has been made in understanding the early onset form of Alzheimer's, we've got a long way to go in the development of treatment for the disease, whether early or late onset.


And then a physicist in the room spoke up.  He didn't understand why people were so pessimistic about our ability to eventually understand the disease.  Or any disease.  He himself is certain that with the incredible computing power we've got now we'll be able to understand, and predict, them all. A few others in the room signed on to that, none of them physicists, but similarly optimistic.

A piece has just been published at Wired online (18 March 2013) that says much the same.  Richard Barker: "Illness just became another big data problem."  Barker describes the case of an infant born with a rare form of type 1 diabetes for whom the proper treatment is begun once the child is genotyped and his particular mutation identified. 
So diabetes isn't just diabetes: it's a cluster of diseases with different causes and different remedies. This story is just a glimpse of a quiet medical revolution: from defining diseases by the symptoms they cause or the part of the body affected, to the underlying molecular mechanism.
Further, if this child had been sequenced at birth, as our children, or grandchildren, or great-grandchildren will be, his illness would have been identified before he was even ill, and his treatment would have been started immediately.  This day is coming.

An anthropological question:  fad, fact, or cultural bias?
This all sounds very hopeful.  But history shows that in any given era there is a working model of how to do things, and basically everyone except mavericks follows suit.  We want to be in on things, to be doing what seems right, to have support from our fellows, and the comfort all of this brings.  We often stick to this even if there is no real evidence that it is working, or evidence that it may not be the best approach.  The long-lasting nature of Galenic (four-humours) medicine is one example.  Armies routinely train for the last war, and pay a price for it.  Religions, though supposedly based on ultimate divinely given truths, form sects and alter their doctrine.

At the same time, it may be--especially in science--that the current way, though imperfect, is the result of centuries of improvement and is the best we can do at any given time.  Certainly we'll not just quit because we know our methods have imperfections!  So, it is fair to ask: is illness now just a data crunching problem?

Well, we can pretty much eliminate infectious diseases right off the bat.  While there may be identifiable genetic susceptibilities that explain a small minority of resistance to some infectious diseases, these are by far overwhelmed by other factors that predispose people to infection, like poverty and bad luck.  That makes a lot of illness around the world not reducible to data crunching at the individual level.  There's certainly a lot of population-based data crunching that can model risk and explain it in populations, but no amount of sequencing of infants at birth will identify those who will be at highest risk come the next epidemic. 

Then there are complex diseases, like heart disease, type 2 diabetes, asthma, schizophrenia, autism, hypertension, stroke, or most cancers.  Sequencing can't now accurately predict who's at risk of most of these diseases and, although it's not fashionable to say so, many believe it never will, for reasons that we write about here all the time.  They're complex, they're polygenic, everyone has a different and unique genome and a different pathway to disease, and environmental factors are causal, and inherently unpredictable.  And so forth.
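
A toy simulation of our own makes the point (this is an illustration, not any published risk model; the 500 variants and 30% heritable share are arbitrary assumptions): even with every genotype known perfectly, a trait dominated by environment and chance leaves most individual variation unpredicted.

```python
# Toy illustration, not a published model: a trait influenced by many
# small-effect variants plus a large environmental component.  Even with every
# genotype known perfectly, individual prediction stays weak when environment
# and chance dominate.  All parameter values are arbitrary assumptions.
import random

random.seed(2013)
n_people, n_variants = 2000, 500
heritable_share = 0.3                 # assume genetics explains ~30% of trait variance

effects = [random.gauss(0, 1) for _ in range(n_variants)]
genetic = []
for _ in range(n_people):
    genotypes = [random.randint(0, 2) for _ in range(n_variants)]
    genetic.append(sum(g * e for g, e in zip(genotypes, effects)))

# Scale environmental noise so the genetic score explains ~30% of the variance.
mean_g = sum(genetic) / n_people
var_g = sum((g - mean_g) ** 2 for g in genetic) / n_people
env_sd = (var_g * (1 - heritable_share) / heritable_share) ** 0.5
phenotype = [g + random.gauss(0, env_sd) for g in genetic]

# Correlation between the 'perfect' genetic score and the trait.
mean_p = sum(phenotype) / n_people
var_p = sum((p - mean_p) ** 2 for p in phenotype) / n_people
cov = sum((g - mean_g) * (p - mean_p) for g, p in zip(genetic, phenotype)) / n_people
r = cov / (var_g ** 0.5 * var_p ** 0.5)
print(f"Correlation of a perfect genetic score with the trait: {r:.2f}")
# r is roughly sqrt(0.3), about 0.55, so most individual variation goes unpredicted.
```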

So now we've pretty much eliminated the big causes of death around the world from the illness-as-big-data-problem model.  What's left?

Mendelian diseases, like the form of diabetes Barker described, or the thousands of other, primarily very rare and usually pediatric, diseases that really are genetic, many of which are now, or eventually will be, identifiable and usually (but not always) predictable with genetic data.  But many such diseases and disorders are themselves very complex -- cystic fibrosis is a well-studied and well-characterized example, with over 1000 different implicated alleles, all in the same gene, identified to date.

Studies of unexplained genetic disorders -- that is, disorders with familial risk in which some cases have an identified gene -- have about a 25% success rate for identifying a causal mutation.  Granted, by now it's probably the toughest, rarest disorders that are left to explain -- and/or those without effective enough advocacy groups to have lobbied for funding; but some of these will be explained, while others will not, because there can be numerous pathways to the same phenotype, and what's causal for one individual won't explain it in another.  This is something we know very well already.

Late spring wildflowers; Wikimedia
That complexity isn't always reducible is an idea approached from a completely different angle in a beautiful piece in Aeon Magazine on March 19, by Olivia Laing.  Now a writer, she describes her one-time life as an herbalist, trained in the ways of western medicine to understand the molecular properties of the herbs she prescribed for sick patients, and her growing discomfort with the idea that it was all reducible to molecules.

She tells the story of how it had been believed that Neanderthals buried their dead with flowers, or may even have used flowers medicinally, based on pollen finds in caves in which skeletons were found.  It was a beautiful idea, she writes, except that it was probably wrong.  The pollen was more likely blown in by the wind, or carried in on a rodent's fur.
I confess to finding this story pleasing, not disappointing. It exposes the depths of our fantasies about people and plants, showing how pattern-driven we are, and how addicted to stories. At the same time, it reveals the baffling complexity of the natural world, its resistance to understanding. No matter what meanings we ascribe to them, plants maintain their mystery. I might not handle them daily anymore, but I like to think of them growing in the fields and waysides of the world: rue and cockspur, nettle and rosemary, rising from the soil year after year to spell out a code we may not ever completely crack.

The problem of the bicameral mind
It has often been said that humans have a 'bicameral' mind: one half devoted to particular things, with analytic functions, the other to more overall or holistic impressions.  Whether or not this is literally accurate, in science we can trace two analogous main kinds of thinking about Nature back through history.

One is the qualitative, enumerative, reductionist, particularistic view of causation: causes are individual forces or 'things', and the job of science is to identify them and how they work.  To do that, we have to isolate them, and to do that we must reduce our observations to the most rudimentary level--such as molecules, where the causes actually operate.  We do this through experimental designs and so on.  We do it because complexity obscures individual causes.  This is the gene-mapping approach, the number-crunching idea that we'll just overwhelm Nature with our computers and force it to yield its individual causes.  Mendelian genetics has been an exemplar of this worldview since the turn of the 20th century.  It assumes great regularity in Nature, and that the scale of our data and so on is all that prevents us from identifying causes.

The other view is quantitative, and basically holds that Nature works through complex interactions at higher levels than rudimentary forces.  A building cannot be understood by enumerating its bricks, beams, and wires, even if those things are needed and in that rudimentary sense 'cause' or explain the building.  Interactions of very complex forms are instead viewed by quantitative minds as the organizing principles, or higher-level 'causes', that we need to understand.  Quantitative genetics, as separate from Mendelian genetics, has always been a major component of biology, and was basically Darwin's view of evolution.  It treats genetic and evolutionary causation in aggregate, without attempting to enumerate its components.  This view today -- currently held by a small minority, because it's under siege from the reductionist majority -- argues that computer crunching is not what we need: what we need is more innovative thinking about how complex causation works and/or is to be understood and manipulated.
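
To make the contrast concrete, here is a minimal sketch -- our own illustration, not anything from the post or from any particular study -- of a simulated trait built from many tiny-effect variants: in aggregate the genetic values account for about half the trait's variation, yet no single locus 'explains' the trait the way a classical Mendelian gene would.  The numbers (1000 loci, 5000 individuals, the effect sizes) are purely illustrative assumptions.

```python
# Illustrative sketch: aggregate (quantitative) vs single-locus (Mendelian) views.
# All parameters below are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_loci = 5000, 1000

freqs = rng.uniform(0.05, 0.95, n_loci)                   # allele frequencies
genotypes = rng.binomial(2, freqs, size=(n_ind, n_loci))  # 0/1/2 allele counts per person
effects = rng.normal(0, 1, n_loci)                        # many tiny per-locus effects

genetic_value = genotypes @ effects                       # each person's aggregate genetic value
noise = rng.normal(0, genetic_value.std(), n_ind)         # 'environment', same variance as genetics
trait = genetic_value + noise

# In aggregate, genetics accounts for roughly half the trait variance...
h2 = np.var(genetic_value) / np.var(trait)
# ...but even the single most informative locus explains only a sliver of it.
r2 = max(np.corrcoef(genotypes[:, j], trait)[0, 1] ** 2 for j in range(n_loci))
print(f"aggregate genetic share of variance ~ {h2:.2f}; best single locus ~ {r2:.4f}")
```

This is, of course, only a cartoon of the quantitative-genetic view, but it shows why enumerating loci one at a time can miss causation that is perfectly real in aggregate.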

This qualitative/quantitative dichotomization is an oversimplified characterization, and we don't mean to suggest that the world is just a contest between opposites that will ultimately resolve (that's a view of philosophers such as Hegel, and thinkers like Marx).  Still, it reflects widespread differences in world views--of communication between our two brain hemispheres, one might say.

There are attractions and strengths to these different points of view, and to their intermediates.  How, or whether, they will resolve remains to be seen.

Tuesday, March 19, 2013

Scientists in the making -- democracy or plutocracy?

Ken and I are just back from a morning judging high school life science projects, something we've done for years.  It's always fun, and always interesting -- but the same issue always arises.

Elementary school science project; Wikimedia
It's great to see what kinds of questions high school students choose to explore, from extremely timely and relevant to just interesting for their own sake.  One year someone did an excellent job producing biofuel from algae, explaining the fossil fuel problem and how he thinks it can be solved.  Someone else investigated white nose syndrome in bats for two years in a row -- among many other interesting projects. 

The teacher who has managed the whole event for many years told us this morning that a student who presented at this competition one year went on to be a national winner.  The guy had wondered what happens to the rubber that gets abraded off tires, and its effect on plants along the sides of highways.  He'd heard the question bandied about on Car Talk, where no one knew the answer, so he decided to answer it himself.

Original vs set-up science projects
This was a student who crossed The Great Divide.  There are always kids who think up their own project, and either get some help from their science teacher at school and use equipment in a school lab or set things up in their own basement with the proverbial, innovative use of baling wire, a piece of garden hose, and cardboard boxes, so to speak.  These are always fun presentations to see, even if the question is a bit naive or the results not earth-shattering.  These are self-motivated, thinking students working with a high school level of knowledge and technology.  But they must gaze across that Divide when it comes to competition for recognition.

On the other side of the Divide are the kids whose parents are academics, scientists running their own labs, or who know scientists running their own labs, or kids who are interested enough in science and motivated enough to find an internship in an academic or commercial lab where they work after school or during the summer.  These students may or may not be as self-driven as the DIYers, or it may be that their ambitious parents are driving them, or both.  Their projects are always much more professionally done, with state-of-the-art equipment and fancier analyses.  They know how to work the literature, follow complex directions, use the right jargon and so on, and sometimes they actually seem to understand what they are doing, both in terms of the question and the procedures.  But you always have to wonder how much of it they thought of and carried out themselves.

Some of this you can figure out during the question and answer period.  Often, but not always, the student's knowledge about the area is deep but not terribly wide -- which of course is understandable for a high school student.  Sometimes the enthusiasm is deeper than the knowledge.  Almost always he or she is clearly a cog in a bigger machine.  The student may have cribbed (which doesn't mean plagiarized!) technical descriptions from journal articles, or may know lots of terms and equipment procedures but not have really thought out the problem very deeply, and in most cases the problem was handed to them by the lab director, a post-doc, or their parents.

The projects these students do are sometimes on the level of advanced graduate students or post-docs -- like one this morning.  They may address important questions.  They may lead to the student being a co-author on a publication in a major journal.  The student may have learned everything, but even if not, s/he learned a lot.  In some ways this is very good, and a contribution to future science talent.

But this is a mixed story from various points of view, in terms of science infrastructure.

Will those who get into major universities be the ones with the best ideas in the future?
It seems unfair to judge those who did their project themselves from start to finish against those who had so much help.  Should there be two distinct categories, a two-tiered system?  Basement science vs. laboratory science?  Well, that's unfair, too, because it may create an elite and a second-class set of students.  And in fact basement science isn't necessarily inferior even if it is less technologically elaborate, and basement scientists don't necessarily understand their project less well than the students who work in major laboratories with their parents' or others' help.  To wit, the rubber-on-the-road prize winner of a few years ago.

Are those who do the basement work more likely to become like Bill Gates, and the privileged students more likely to be cogs in the science-industry wheel?  Even if that were so, is the army of those doing routine, if exotically technological, science any less important to society, even if they are the ones who got an insider's leg up?

This raises the issue of the point of a science fair.  It's to reward and encourage kids who excel in science, yes, but it's also to encourage a love of science and to teach kids that they, too, can be scientists, starting in their basement, today.

And the teachers?
There is another important issue: the dearth of really capable science teachers who can stimulate and advise students at a serious level.  Even if society were to deal somehow with the disadvantage of basement science, more knowledgeable and better-trained teachers could play a pivotal role in students' success.

If we had more, better science teachers, then students whose father or mother doesn't work at Penn or Roche Pharmaceuticals could still acquire a sound K-12 science education and compete for slots in prestigious universities.  Of course, part of what's afoot here is competition among parents to give their children an edge, and this is part of our non-egalitarian society.  What is relevant is whether being more egalitarian would lead to better scientists, or just scientists from less privileged homes.  That is a hard question to answer.  But it does seem safe to argue that more, better-trained teachers would lead to more, better-trained students, regardless of privilege levels, and to more students doing high-level thinking about science as they enter college.

Monday, March 18, 2013

The "If"s of Natural Selection

We write a lot about genetic determinism here on MT because unfortunately it's everywhere, but we probably don't turn enough of our attention to its corollary, the assumed certainty that traits are here because of adaptive natural selection.  Everyone knows about 'survival of the fittest' and that therefore traits are here because they served a purpose -- often, if not usually, treated as if a specific purpose -- in our evolutionary past.  Even those who clearly recognize that selection, when it occurs, is usually highly probabilistic still talk the talk of determinism, as if the adaptationist assumption were latent in their thinking.

Bushy eyebrows (Darwin's)
It's easy to make up a story about how and why a trait evolved.  That's because if you assume everything must have an adaptive explanation in order to be here, then what is here must have such an explanation; as scientists, of course, it's up to us to say what that is.  Thus, we've got bushy eyebrows to shield our eyes from the sun; East Africans are fast runners because they were cattle thieves and had to run fast in order to survive; and, as a story that appeared just last week in the Proceedings of the Royal Society B has it, Neandertals had larger eye sockets so as to see better in the long, dark northern nights, and so were out-competed by our more successful ancestors who, with smaller eyes, could devote more of their cortex to higher thought processes, specifically those required for social organization.

But it's fair to say that these kinds of explanations are usually Just-So stories: made up because there is no direct evidence about the distant past, perhaps even plausible, but untestable.  We've said enough times to get into trouble that really, the most robust selection story we have is probably that of malaria and sickle cell (and other protective anemias): that these traits evolved as protection against malaria around 10,000 years ago.  Other stories running close behind are lactose tolerance and skin color -- early human adults were all lactose intolerant until various groups domesticated dairy animals and adults began to subsist on dairy products, and skin color lightened as humans moved north because of the need for vitamin D.

But even those stories leak a bit.  Remember that for a trait to evolve by natural selection, those with the trait had to have more children than those without, for many, many generations.  But it's at least a bit forced to argue that lactose intolerance systematically lowers the number of children its carriers have.  It may cause occasional or even frequent discomfort, but it's rarely lethal, and people report becoming accustomed to it.  It is argued that in times of food shortage, adults can gain nutrition by drinking milk.  But in times of drought, what are the cows drinking?  And if there's inadequate food for agricultural humans who kept cattle, why would there be cattle food?  Wouldn't the grass have died too?  Whether these or other arguments are true, the issues are not given very close consideration.  So even if there is a lot of good-looking circumstantial evidence, should we really conclude definitively that milk drinking was a strong selective force in the not so distant past?

Skin color lightened because of the need to make vitamin D in northern climes, where sunlight isn't as strong for much of the year?  But we're able to store vitamin D for months at a time, so we probably don't need to make it all year round.  Plus, estimates of required vitamin D levels differ wildly.  Further, darker-skinned people tend to have lower vitamin D levels than lighter-skinned, on average, yet they also have fewer bone breaks, a marker of bone density and a serious consequence of inadequate vitamin D.  Could the link between vitamin D and bone mineralization be more complex than we realize, or could there be an additional mineralization pathway, as yet unidentified?

Even if we were to grant that these issues can be resolved and the adaptive stories are correct, it is important to note that of all our traits, and our many thousands of functional genomic elements, there are precious few stories of such genetic adaptation that have persuasive documentation.  This is consistent with a much less deterministic view of adaptation, especially if one looks at the gene level.

Normally, estimates of fitness differences -- very, very difficult to measure directly even in the present, unless human activity like antibiotic or herbicide resistance is involved -- suggest that the advantage conferred by the better allele at a gene, even under rather strong selective pressure, is only about 1%.  If the pressure were continuous and deterministically systematic, a 1% advantage would indeed lead the 'good' allele to replace the 'bad' one.  But the advantage amounts to this: if I carry the good allele, I have 100 children while my bad-allele-carrying neighbor has a mere 99!  A difference that small is not even testable in most natural human populations, which lived in demes far too small.
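
As a rough, back-of-the-envelope illustration of why such a small advantage is so hard to detect or test, here is a toy Wright-Fisher-style sketch -- our own construction, not anything from the post or from The Mermaid's Tale -- in which a single new copy of an allele with a 1% advantage is introduced into a small deme, over and over.  The deme size, number of replicates, and other parameters are illustrative assumptions.

```python
# Toy Wright-Fisher model (illustrative assumptions throughout): a single new
# copy of an allele with a 1% fitness advantage (s = 0.01) in a deme of N = 250.
import numpy as np

rng = np.random.default_rng(1)

def favored_allele_fixes(N=250, s=0.01, max_gens=20000):
    """Simulate one deme; return True if the favored allele fixes, False if lost."""
    p = 1.0 / (2 * N)                            # start as one new copy among 2N
    for _ in range(max_gens):
        p = p * (1 + s) / (1 + s * p)            # deterministic selection step
        p = rng.binomial(2 * N, p) / (2 * N)     # drift: resample 2N gene copies
        if p == 0.0:
            return False                         # lost to chance
        if p == 1.0:
            return True                          # fixed despite the odds
    return p > 0.5                               # unresolved; call it by majority

trials = 2000
fixed = sum(favored_allele_fixes() for _ in range(trials))
print(f"favored allele fixed in {fixed} of {trials} demes")
# Typically only a few percent of these introductions fix (close to the classic
# ~2s approximation); the rest are simply lost to chance, 1% advantage and all.
```

In other words, even a real, persistent 1% edge behaves far less like a Newtonian force and far more like a weak bias buffeted by luck.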

In our book, The Mermaid's Tale, we described natural selection this way:
Natural selection means the systematic differential reproductive success of competing organisms. The idea is simple: if a species over-reproduces so that not all individuals in the next generation can go on to successfully reproduce, and if there is variation in form among that species, and if some forms of an organism do better in a particular environment than other forms, and if the reason for this is included in their heritable genome, and if the environment remains stable long enough over time for this form to be favored persistently, and if the favorable forms are also lucky enough to produce offspring who go on to reproduce, and if they produce more offspring than their competition, then those forms can become ever more common over time at the expense of their competition. If all these contingencies do occur, indeed co-occur, then the more prolific life form will become more suited—better adapted—to the environment in question. If the forms are sequestered from each other by some mating barrier, then they would diverge over time, and this was the explanation Darwin and Wallace proposed for the origin as well as specialization of species.

This reasoning is beyond doubt, and is essentially what Darwin and Wallace were suggesting.  But it hinges on the many ifs.  Clearly, natural selection is always possible, and often important, sometimes overridingly so.  At the same time, it has been too easy to assume the ifs.  When the selective differences are small, or highly variable over time, selection is not as much like a systematic force of nature as its usual image suggests.  A force is forever, and it has both strength and direction.  Instead, and aside from the importance of chance, it is more accurate and realistic to view natural selection as more nuanced, and as only one of many contributing ways in which life's success is determined.
A force is also infinitesimally divisible (and this, a kind of Newtonian-force model, was explicitly Darwin's idea), but there is far too much chance affecting survival and fertility for selection to be that kind of force in nature, at least as a rule.

We are all too enamored of simple explanations.  We are happy when we learn that this gene is 'for' that trait, and that trait evolved 'for' this purpose.  But that is sloppy thinking, fundamentally inaccurate, and not good science, despite its appeal to media looking for dramatic stories and simple dog-eat-dog explanations, and despite its being a widespread image of life in many health and life sciences.