Science = Discovery + Synthesis?

“I see no other escape from this dilemma (lest our true aim be lost forever) than that some of us should venture to embark on a synthesis of facts and theories, albeit with second hand and incomplete knowledge of some of them—and at the risk of making fools of ourselves.”

— Erwin Schrödinger

Andrew and I are currently battling to write a Primer for PLoS Biology. Primers are short, informal pieces that put a forthcoming paper into context. (In our case, it’s a difficult-to-penetrate forthcoming paper, which is a Herculean mix of simple and complex mathematical models, in vitro ecological experiments, and whole genome sequencing.) When he asked me about writing this piece with him, Andrew pointed out, “You don’t need more papers like this on your cv.”

He’s right. My cv is a near-even split of original research and reviews/opinions. Looking at just my first-author publications, original research accounts for only 40% of them.

Over the course of applying for fellowships and jobs, I’ve worried about this. A reviewer of my failed NERC fellowship application stated that he’d like to see my “big picture thinking rebalanced into more original outputs”. It’s worth noting here, though, that these comments didn’t keep me from being interviewed and aren’t the reason that I didn’t get the fellowship. Indeed, during the interview I wasn’t even asked about this aspect of my publication record. Shame, since I had prepared a nice justification of it: science is about both discovery and synthesis. What good are a bunch of independent discoveries if no one weaves them together into a bigger picture?

But careers in science aren’t generally built on being enthusiastic about other people’s research over doing one’s own. At least, I don’t think they are. (Are they?!) So, this is something that I still worry I will have to defend at some point in the future.

The only discovery I seem to be making is that I don’t really have a flair for discovery. Hopefully I’ll have a flair for picking students who do.

Evidence based hiccup cures

I feel like I get the hiccups more than most people. I don’t actually know if this is true, but it feels that way. And one thing I’ve learned in my years of hiccuping* is that everyone has their own version of a hiccup cure.

A non-comprehensive list of hiccup cures that people have told me includes:

1. Hold your breath.
2. Take a shot of vinegar.
3. Drink from the wrong side of the glass.
4. Stand front forward with your arms up against a wall.
5. Drink from a glass of water with a butter knife in it (this is my mother’s).
6. Eat a spoonful of sugar.
7. Come up with a list of ten famous bald men.

And of course, there’s the always popular “scare the person with hiccups” solution. After Nicole attempted this technique last night, without a lot of success**, and Laura tried to convince me to eat a spoonful of cinnamon, again without a lot of success, I was thinking maybe we should look into evidence based hiccup cures.

After sleeping on it though, I’ve decided I’m not actually that interested in knowing what hiccup cures work. As a scientist, my first instinct is to look for the data and 99%*** of the time that is the right instinct. But honestly, sometimes it’s more fun not to know. Because my hiccups generally resolve themselves, I continue to pick up outlandish and totally bogus hiccup cures, and if you really want to know how to cure hiccups there’s always PubMed.

* Not consecutive of course, which would be terrible and quite possibly drive me to homicide.

** Maybe because she attempted to scare me by telling me I was pregnant, which doesn’t have quite the same effect as jumping out at someone and saying “boo.”

*** Citation needed.

A Favorite Moment

Most of the formative experiences of my adult life occurred while doing fieldwork in other countries. A majority of that time was spent alone. This time spent alone has left me with many memories and feelings, both good and bad, that I continue to sift through and make sense of.  I thought I would share one of my favorite moments.

It was 5 am. This seems like a terribly early time to go for a run, but in Tanzania the sun has already come up, and the back-up generator turned off at 8 the night before, so you’ve been in bed for ages when the 5 am sun wakes you.

I was running down a red dirt road that cut through thick forest, and I was already wet with perspiration. Even at that early hour the air was hot and thick. I ran past little houses constructed from wood and mud while women in brightly colored kangas watched me from their doorways. I huffed my good morning. They nodded theirs. Eventually the road narrowed and I was alone.

My beloved iPod had died the night before, so I was able to hear the bicycle coming up behind me. The chains jingled and the rusty gears creaked. A little boy pedaled past me. He was probably 10, and he had to stand to pedal the bike, which was far too big for him. It was loaded down with large bundles whose contents were obscured by the plastic tightly bound around them with rope. As I was marveling at the little boy’s heavy load, he was apparently marveling at the crazy blond woman running in the morning heat.

A cacophony of metal on metal rang out as he rode straight into a ditch. His oversized bike fell over. And then there was a pause. I swear that in that moment there were not even birds calling. Our eyes met. On his face was a hint of embarrassment, but mostly terror. Anyone who has seen me try to run in heat knows how terrifying I can appear. It is a big, blotchy Norwegian mess. I’m sure I looked truly scary.

I stopped running and approached him.  I gave my best smile.  I used my best Swahili to soothe him.  It didn’t help. He didn’t move. He was terrified.

Moving slowly and gently, I started to lift the bike, and only then realized how unbelievably heavy it was. The boy joined in. Still, we struggled to balance it so that he could resume his tenuous position.

That is when it happened. They appeared like magic. A dozen tiny hands belonging to a half dozen small Tanzanian children who had been watching us. Together we righted the bike and held it in place so that the little boy could reposition himself at the handlebars. Together we jogged alongside him to help him get some momentum, and then we all let go and he glided away. He turned around and flashed a huge smile.

We all stood together for a moment watching him disappear, and then, without a word, they crept back to their yard, and I started my jog again. I can’t quite put my finger on why, but in that moment I knew the world was a good place, and whenever I think of it I smile.

Getting paid to learn

A couple of weeks ago I woke up more bright eyed than usual for a Saturday morning and, feeling rather virtuous, headed out to a public lecture on “Races, faces, and human genetic diversity”. This was a fascinating talk, which left me boring others with tidbits of information for weeks. For example, did you know that the police are starting to use visual profiles of criminals based on DNA found at crime scenes?

However, while the talk was interesting, what really amazed me was the audience. It was a grim, rainy morning, yet the spacious lecture hall was full. I’m sure there were a few students hoping to improve their grades, or faculty supporting a colleague; however, the vast majority of the audience appeared to be from the general community. This phenomenon is not unique to State College. During my PhD I ran the Edinburgh branch of Café Scientifique, an organisation coordinating public talks on science in informal settings, which has now spread around the world. Guest lists to events would fill in a matter of hours, with long reserve lists of people disappointed they wouldn’t fit into our venues to hear talks and discuss topics ranging from animal behaviour to string theory. It didn’t seem to matter how many events we ran; the enthusiasm, and the requests for more, never dried up.

So what makes people want to give up their valuable spare time to learn about science? For that matter, why is David Attenborough one of the best-loved celebrities in the UK (and around the world, surely?), with his series on natural history reliably pulling in huge TV audiences? I don’t have an answer, but my best guess is that we all have an inherent interest in trying to understand the world around us. For most scientists, and I imagine non-scientists, knowing a little bit about why the landscapes we see or the animals around us are the way they are only adds to the enjoyment of observing them. Similarly, we spend a lot of time preoccupied with our own bodies, so understanding how they work, why we get sick, or how other organisms are living within them is intrinsically fascinating. As scientists, our job is to think about the questions that interest us and try to find answers, as well as to learn as much as we can about what is already known. Which all makes me feel pretty lucky to do this for a living.

Wanted: Certified snow cave builder

Given my relative proximity to the ground, it might surprise you to know that the track race I ran most often in high school was the 100-meter high hurdles. It wasn’t because I won or anything (For real, I’m sure I looked like a slow-motion action sequence compared to everyone else); there are just not a lot of people who want to run and jump in the same race. It was something I loved to do, regardless of my talent for it, and it was fun to learn, even if it is not something that one might classify as a “life skill.”

I was pleasantly surprised, then, when my willingness to run and jump in the same race actually came in handy for a second time in my life. In college track, they have a race called the steeplechase, which involves running around the track and jumping over five barriers, one of which has a water pit on the other side, during each circuit. Shockingly, not many people are willing to do this either, so I scored all kinds of points for my team! After all of the times I finished seconds behind everyone else in high school (it’s only a hundred meters, guys, that’s a big difference!), I was tickled that something that I learned in the past helped me so much in the future.

I mention this because I am currently teaching the lab portion of a course called “Populations and Communities,” and our first big unit has been on the genetic structure of Eastern Fence Lizard populations. I am once again tickled that something I learned long ago that didn’t seem like it would ever be useful again (i.e., the microsatellite analyses that I did for my senior thesis in college) is so very useful now. I can get ready for class in a few hours, instead of frustratingly many. I can give students helpful hints about calling alleles and sound like I have at least more than half of a clue what I’m talking about. (This is key with students, the fraction of a clue that you sound like you have.) It’s really great! And it just makes me happy to think about the good old days at Juniata College, rocking out to Kelly Clarkson and chopping up fish fins with razor blades, and also feel…vindicated? Glad that the time I spent learning something has been tangibly useful, anyway.

Have you ever gotten to shock and amaze yourself or others by whipping out a random skill from your past?

“Red Queening” the literature

My father likes to tell stories. Some are better than others, but one that stuck with me was from when he was a physics major in college. He recalls being at the university library and browsing the physics research section. Back then (at least as he tells it), there was an annual publication that contained every physics paper from the previous year (apparently the term is omnibus (Mideo personal communication, Feb. 15, 2013)). As he looked over the bookshelf, he was shocked by how quickly the thickness of the omnibuses increased over the years, ultimately splitting into two volumes, then three, then five. He recalls thinking, how could anyone keep up anymore? Project forward twenty or so years from when my father was in college, and people could still simply get away with reading the tables of contents of the few journals most relevant to them. But now, with so many new and general journals and so many publications, that seems inadequate.

Keeping up with the new literature is, in my opinion, one of the toughest challenges in science. It is also critical to doing good science. On top of that, I find it terribly embarrassing when I am talking with someone about my project, he or she asks me if I have seen the new paper by so-and-so, and I have to say “no” and have the person explain the paper to me.

My current method for keeping up with new publications is to use the ‘alerts’ feature on Google Scholar. Based on keywords, I receive an email about twice weekly with the new Google Scholar entries most relevant to me. I have been generally happy with it so far, having used it for about three months. About half of the papers (~50-100 per week) are relevant to me, which feels like an impressive ratio of relevant to irrelevant papers. My big fear, however, is that I am missing many important things. Unknown unknowns. So what I am looking for are alternative methods of finding relevant papers so that I can estimate how much I am missing. How do you keep up with the literature?
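One back-of-the-envelope way to put a number on those unknown unknowns (my own suggestion, not something from the post, and it assumes the two search methods sample the relevant literature independently, which is generous) is a Lincoln-Petersen capture-recapture calculation on two overlapping search methods:

```python
# Hypothetical sketch: if two literature-search methods (e.g. Google Scholar
# alerts and a manual journal table-of-contents scan) sample the relevant
# literature independently, the classic Lincoln-Petersen capture-recapture
# estimator gives a rough estimate of the total number of relevant papers,
# and hence how many both methods missed.

def lincoln_petersen(n_method_a, n_method_b, n_overlap):
    """Estimate total relevant papers from two overlapping search methods.

    n_method_a: papers found by method A
    n_method_b: papers found by method B
    n_overlap:  papers found by both
    """
    if n_overlap == 0:
        raise ValueError("no overlap: the methods may not be comparable")
    return n_method_a * n_method_b / n_overlap

# Made-up numbers for illustration: alerts find 60 relevant papers one month,
# a manual scan finds 30, and 20 papers show up in both lists.
total = lincoln_petersen(60, 30, 20)
missed = total - (60 + 30 - 20)   # papers neither method found
print(total, missed)  # 90.0 relevant papers estimated, 20.0 missed by both
```

The independence assumption is the weak link: alerts and journal scans probably overlap more than chance, which would make this an underestimate of what is being missed.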

Penny Lynch PhD

Today, Penny Lynch became Dr Penny Lynch. This is a truly astounding achievement. She broke records for a member of my group. She was my oldest PhD student (by more than two decades). She was the last student I will have at a British university (I left 5.5 years ago). And she took 8 and a bit years to finish, the longest PhD I have supervised. But during that time, her productivity was up there with the best of my students. She published two papers as lead author (1, 2) and was a key player in one of my more important papers (3). Two more papers will come from her thesis, one of which could be a game-changer. She is a major collaborator on papers with post-doc Lauren on parasite manipulation of host behavior and with PhD student Katey on new work on the Dilution Solution. She has collaborations running with two groups elsewhere in the US, and I am hoping to add a third.

What is amazing is that while she was doing all that PhD work, she also did her day job (City of London financial modeling), raised two gorgeous daughters (one just left for university), maintained a huge, rambling, seriously ye olde house (16th century?) in one of the most expensive parts of Britain, did some stained glass window construction (she is seriously weird) and sustained a marriage with one of the most understanding, supportive husbands in the universe. I lived through my wife doing her PhD, when we had no kids, no house, no financial obligations, no day job. And it was tough. Tom: thanks.

Penny is, I think, my 22nd PhD student. I have enjoyed her completion more than any other. When she first came to see me, I guess about a decade ago, I never thought she’d make it. It really helps that she is one of the smartest people in the business. But more impressive, she is one of the most motivated, tenacious, and time-efficient. And: we had a lot of fun along the way.

Revisiting “Work” and the American Dream

I come from a long lineage of hard workers and self-made men. My paternal grandfather, half-Cherokee, half-white, spent his life struggling as a traveling salesman in Dust Bowl-ravaged Oklahoma and Texas. My maternal grandfather has owned and operated his own business as an auto mechanic for nearly fifty years, and still works to this day, despite being in his late eighties. My father, a USNA grad, poured his life into working his way up the corporate ladder to an executive management position in a defense contracting job, only to be let go 18 months ago. I am my father’s daughter, and I carry with me many of the traits that brought him to success, good and bad. And because I share such a close connection with him, the failure, impotence and worthlessness he felt upon losing his job hit me excruciatingly hard.

I’m sharing this with you because I am terrified of failure. More so, however, I am terrified of being seen as anything other than a hard worker. It was difficult for me, last week, to listen to Courtney advise me to “take a break” from experiments for a while, and take a few weeks to sit alone with my data and think. It was difficult, because I didn’t grow up in an environment where it was impressed upon me that being alone with your thoughts and taking time to mull over each passing idea was “work”. That isn’t to say that my aforementioned family members were thoughtless drones, and I do know in my rational brain that “thinking time”, whether it be an hour or a few weeks during a write-up, is indeed work. And at times it can be grueling, painstaking work. But, despite the fact that I’m a year and a half into my PhD (and even gearing up to write my first manuscript), I feel it will take some getting used to before I come to a full appreciation (and admiration) of my thinking time.

I’m not sure if this is an American affectation, where, despite the quickly changing work environment that is shifting daily from office space to working from home, and classroom space to online education, we still associate “work” with some physical, tangible presence and a laborious “doing”. Perhaps it is simply an affectation of growing up in a culture of blood-sweat-and-tears blue collar workers and ladder-climbing corporate execs. Perhaps it is because the culture of work that is ingrained in me does not mesh smoothly with the culture of work in academia and science. Please don’t misunderstand, I don’t resent my upbringing, and I deeply admire those who shaped it to be what it is. However, as one of many first-generation American scientists with very similar pasts, I find myself constantly having to divorce myself from what our society has cultivated as the (rather masochistic) American Dream, and appreciate the labor of thought, and revel in the fact that work doesn’t have to be painful.

Now, please excuse me while I read Miller’s “Death of a Salesman” for the fifth time.

As on the fields of Omdurman

Did you know that rats laugh?

I didn’t. Not until the other day anyway. The other day I was enumerating the oocysts in Courtney’s Wolbachia-laden (or not, as the case may be) mosquitoes while at the same time continuing Court’s much needed education into the available variety of BBC wireless programmes. Having exhausted the News Quiz’s back catalogue, we had moved on to “The Infinite Monkey Cage,” a fun and at times funny science programme. This particular episode was looking at ‘brain science’ and well into the banter one of the guests, a neuroscientist from UCL, said, almost as an aside, almost as if everyone should know this, that rats laugh. One of the presenters, the physicist Brian Cox, after a moment of incredulity, pulled on the reins and said, “Hold the phone, did you just say that rats laugh?” or something to that effect, just as I was thinking the same thing.

’Tis true. Rats are ticklish, it turns out, particularly around the nape of the neck, and when tickled they laugh. Naturally I can hear you saying, “Laugh, are you sure?” and maybe the description has the taint of anthropomorphism. What the rats actually do is make high pitched chirps in the 50 kilohertz ultrasonic range. But since these chirps are distinct from other vocalisations and are only produced in response to being tickled, or ‘heterospecific hand play’ (quiet at the back there) as the jargon goes, ‘laugh’ seems a rather good description. Nor, before you all get carried away, is anyone suggesting that rats have a sense of humour. Giggling in response to a quick scratch of the neck is not the same as laughing at Derek’s jokes. I mean, it is not as though one of the rats gets up after the lights go out in the Read-Group animal room, taps a microphone, “pof, pof, pof,” intones self-consciously into it, “one, two, one, two,” and then goes into its Saturday night routine…

Stand-up Rat: Good evening ladies and gents and welcome to the animal house, lovely to see so many of the same faces here again tonight. A human, a neanderthal and an australopithecine walk into a bar….

.. to the considerable mirth, or approbation, of those assembled in neighbouring cages. Much as it is an image to conjure with there is obviously no nascent Eddie Izzard in the rat diaspora. But the ability to laugh seems real and the pleasure apparent because tickled rats actively seek out the same human hand that made them laugh before, actually start laughing when someone who has tickled them previously enters the room and rat pups prefer to be with adult rats that still laugh. Mice? Don’t know, but laughter is a common mammalian behaviour they say so yes, probably. Ultrasonically. And mice sing anyway. Duet even.

It’s an interesting line of research and one that could easily drag one into a little reading off the set list. But the neuroscientist’s comment made me think in another direction. The vets routinely suggest that we should get in and among our rats. Pick them up as soon as they arrive as barely-bigger-than-mice rats, let them get used to us and to being handled. A bit of neck scratching and the inaudible laughter resulting eases nerves, makes the animals more amenable to handling in the future.

The trouble is that learning a snippet about the private lives of lab animals, particularly where they show a response similar to one of our most treasured and intimate behaviours, creates a certain empathy, an, albeit brief, familial feeling that places in stark contrast our actual aims and intentions for these animals. We can rationalise what we do of course, the lack of alternative, for the greater good etc. But, standing amid the carnage of another experiment I find that the pungent whiff of cordite is very apparent.

Danger Zone

How generalizable are biological theories? To what extent are they helpful to have?

How has virulence evolved? What explains plant biodiversity patterns in the tropics? How is climate change affecting ecosystem function?

Questions like these have spurred huge debates that continue today, and although for the most part a consensus has been reached that a single theory can rarely be all-encompassing, there remains a lingering desire among many scientists to produce theories that span a wide variety of taxa and systems.

I cannot help but get into internal debates about the importance of forming “laws” in biology. Preconceived notions of how biological systems evolve can lead to inadequate intervention strategies, whether they be public-health related or rooted in environmental conservation. For example, we have seen the failure of species introductions to control pests and the rise of multi-drug-resistant pathogens. So what about the successes? Some of them have been luck, but I would also like to think that they have been the result of efforts to recognize the intricacies of the particular system and the discovery of surprising components.

Making broad biological statements is not only an exercise in futility at times, but can also be potentially harmful. The complexity of specific systems must be picked through, especially if there are elements that run counter to the currently most popular biological theory. That will allow us to better, though never completely, anticipate the evolutionary outcomes of a particular intervention.

I think this needs to be a universal understanding among scientists, especially in the current era, when the dynamics of the biological world are changing and people are calling for a reversal of human-induced effects. If humans are intent on intervening, understanding the system-specific details must be a prerequisite. Broad generalizations will not be sufficient and could be potentially disastrous.

A slow-burning question…

I do love symmetry. It’s not so much that I really believe that all organisms should have nice equal ratios of A, C, T, and G in their genomes, but why parasite genomes are, almost as a rule, AT-rich has bothered me for a long time. As far as I know (and I might be wrong) there isn’t a good explanation. Some parasite genomes are ridiculously, even wildly, AT-rich. Malaria parasites are all AT-biased, but to give one example: the genome of Plasmodium falciparum comes in at a whopping 75% AT in coding regions and up to 100% in non-coding! AT nucleotide bias is found even in ectoparasites, like the human body louse (72% AT).

In fact, this AT bias is so common that specific methods are being developed to deal with the associated genome sequencing problems, such as no place for primers to grip (those lovely GC “clamps” necessary for good PCR primers to stick are few and far between in many parasite genomes) and stutter error as DNA polymerase struggles to read something like ATTATTTTTTTTATAT and loses its place. However, in this same paper the authors casually mention that the pathogen that causes tuberculosis is super GC-biased, and that’s what blew my mind and prompted this blog.
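For concreteness, the statistic behind all those percentages is just the fraction of bases that are A or T. A minimal sketch (the second example string is invented; the first is the stutter-prone run from the original text):

```python
# Minimal sketch of the statistic being discussed: the AT fraction of a
# DNA sequence. Example sequences are illustrative, not real genome data.

def at_fraction(seq):
    """Fraction of bases in seq that are A or T (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

print(at_fraction("ATTATTTTTTTTATAT"))  # 1.0 -- the kind of run a polymerase loses its place in
print(at_fraction("ATGCATGC"))          # 0.5 -- the 'symmetric' baseline
```

On a whole genome the same calculation gives the 75% (P. falciparum coding regions) and 72% (body louse) figures quoted above.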

I had rationalized AT bias in parasites, and had even thought up some hypotheses that might explain it. Maybe it’s driven by chemical structure; the A-T pair is weaker, held together by only two hydrogen bonds compared to the three between G and C. Could this mean that, if there were competition between strains of parasite, the one with the speedier replication might have a selective advantage? Related to cost and speed: maybe A’s and T’s are cheaper to manufacture (smaller molecules), so the parasites can get more progeny for the same amount of host resources. A “bang for your buck” hypothesis, if you will.

Perhaps the same “problems” that cause more replication errors during genome sequencing are somehow advantageous for parasites if rapid evolution (say, for drug resistance) is difficult to achieve? Or, an entirely different idea: maybe DNA degenerates predictably into A’s and T’s over time. Some authors seem to accept that this is just the way evolution works, stating, for example, that endosymbionts with long associations with their hosts tend to be AT-biased, and using AT bias as evidence of a long association. If this were true, shouldn’t all species that have existed for more evolutionary time have greater AT bias in their genomes? Is this actually true??

Then there’s Mycobacterium tuberculosis, which is most definitely a parasite that causes tuberculosis, yet it has a high GC content. Why?!

Writing a text book in a subject that you are not an expert in: a crazy exercise in blind stubbornness

All I can really offer of value at this stage is advice. To the current graduate students, do not make any rash decisions involving large commitments of time and effort directly following your Ph.D. defense. At this stage in your career you feel omnipotent, the world has a rosy tint, and all goals are easily attainable. It was in this state that I, a disease ecologist by training, decided to write a behavioral ecology text book.

On the surface, the project seemed like a great idea. I had been a graduate student instructor for four semesters of a behavioral ecology and conservation course at the University of Michigan, so I was somewhat familiar with the field of behavioral ecology. There was a true need for a new behavioral ecology text integrating the study of animal behavior into the framework of life history. Further, two senior behavioral ecologists had signed on as coauthors, my job was merely to populate the book with parasite and invertebrate examples, and we had secured a contract to write the book. So it all appeared easy peasy.

Four years later, we are trudging through writing the book, and I am neck-deep in my post-doctoral position. Instead of being a side author contributing minimally to each chapter with bits of knowledge from my own background, I have been tasked with writing four of the core chapters in the book, spanning Mating Effort, Mating Systems, Parental Effort, and Avoiding Enemies. While I love all of these topics, writing, and teaching, constructing chapters from scratch with no foundation in the literature has been a huge undertaking with many costs and not many rewards. Text books are rarely cited, and in my efforts not to take time away from my current research, my unpublished dissertation chapters have remained unpublished.

So, words of wisdom. 1) Nothing is as easy as it appears on the surface. 2) Assess the immediate costs and future rewards of any given project. 3) Know your limitations. 4) Develop an ability to say no, even to people that are close to you, or senior to you, if it is not in your best interest. 5) Write text books in the twilight of your career, not the dawn.

Mistakes, misconduct and modeling (three entirely different things)

As Nicole commented, increasing the odds of success goes hand in hand with increasing the failure rate, but I worry a great deal about making mistakes. I fear that despite taking care to check and recheck my work, I’ll overlook some fundamental flaw. Part of this fear is based on my assumption that most retractions are due to human error, with scientists making honest mistakes, but this PNAS article argues that most retractions are in fact due to misconduct. I expected that most of the misconduct would be due to instances of plagiarism or duplicate publishing that, while unethical, do not undermine the work of future scientists. I was wrong about that too: most of the misconduct was actually fraud. On the bright side, this article suggests that I can drastically reduce my odds of having to make a future retraction by doing what I’ve already been doing: being honest about my results and how I obtained them.

But I find this article especially disappointing because the scientists at fault came so close to making a contribution and then did the opposite. I am currently in the process of synthesizing noisy data from a model to refine an experimental design, and creating data has not turned out to be a trivial exercise. My motivating question is, assuming malaria parasites multiply in a petri dish just so, will our experimental design allow us to draw the correct conclusion?

Malaria in culture: what are the parasites doing in there and how will we know?

It requires clarifying my assumptions about the way the world works and about inherent observational error. Some of my favorite papers use models to test experimental methods rather than the other way around, like Fenton et al., who use a model to show very nicely that if there are interactions between coinfecting parasites, the signature won’t necessarily show up in prevalence data. The scientists who falsified data must have had a clear set of assumptions about the way the data ought to look (i.e., a model), and if those scientists had used the data they generated to inform their experimental protocols instead of reporting it as true observation, the resulting paper would have improved the quality of future science instead of diminishing it.
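A toy version of that kind of exercise might look like the following. Everything here is invented for illustration (the growth model, rates, sampling times, and noise level are not the actual model or design from the work described): simulate noisy observations from a process with a known growth rate, then ask whether a simple analysis of the simulated data recovers it.

```python
# Toy in-silico design check: generate noisy "observations" from a model
# where parasites multiply at a known rate, then test whether a simple
# analysis recovers that rate. All parameters are made up for illustration.
import math
import random

random.seed(42)

TRUE_GROWTH_RATE = 0.3              # per-hour exponential growth rate to recover
N0 = 1000                           # starting parasite density
SAMPLE_TIMES = [0, 12, 24, 36, 48]  # hours at which the design takes samples
NOISE_SD = 0.2                      # sd of multiplicative (log-scale) counting error

def simulate_counts():
    """One simulated experiment: true exponential growth plus counting noise."""
    return [N0 * math.exp(TRUE_GROWTH_RATE * t) * math.exp(random.gauss(0, NOISE_SD))
            for t in SAMPLE_TIMES]

def fit_growth_rate(times, counts):
    """Least-squares slope of log(count) vs time = estimated growth rate."""
    logs = [math.log(c) for c in counts]
    tbar = sum(times) / len(times)
    ybar = sum(logs) / len(logs)
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Repeat the in-silico experiment many times: does this sampling design
# give estimates centred on the truth?
estimates = [fit_growth_rate(SAMPLE_TIMES, simulate_counts()) for _ in range(500)]
mean_est = sum(estimates) / len(estimates)
print(round(mean_est, 3))  # should sit close to 0.3
```

The useful part is what happens when the check fails: if the design (fewer time points, more noise, a non-exponential process) produces biased or wildly variable estimates, that is exactly the flaw you want to find in simulation rather than in the lab.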