Investment strategies from George Costanza

With the stock market at historic highs, now seems like a good time to bring up a claim that has always bothered me.  Wall Street analysts will often say that the biggest mistake of individual investors is that they tend to buy high and sell low, because they make decisions based on emotion.  The idea is as follows:  People see others making gains in the stock market and become motivated to invest for fear of missing out on easy money.  They buy stocks, but because many others have beaten them to it, they end up paying high prices.  If and when the value of the stock drops, they panic and sell their holdings, but again others have beaten them to it, and they end up selling at low prices.  The end result: they buy high, sell low, and lose money.

But if this pattern were true more often than expected by chance, employing the George Costanza philosophy — if every instinct you have is wrong, then the opposite would have to be right — would allow smart investors to consistently outperform the market average.  Just do the opposite of what your emotions tell you.  Nevertheless, active traders consistently underperform passive investment strategies, and so the only two possible conclusions are 1) individual investors and sophisticated computer algorithms alike are too stupid to employ a “do the opposite” strategy, or 2) the “buy high and sell low” anecdote is a bogeyman story that people happen to believe.  I would guess the latter.

Disclaimer: The above blog post is in no way intended as investment advice.

Time

Dave and I are working on something together. Friday we decide the ball is in my court. He says he won’t be working this weekend. Wow. For once I have the edge. I do my bit and send it to him at 5:30 pm on Sunday, feeling so ahead of the game. Two hours later, he has done his next bit. This generates the following dialog:

AR: I thought you weren’t working on the weekend?
DK: I wasn’t, but I finished my weekend early.
AR: How can you finish a weekend early?
DK: I’m very efficient sometimes.

What do we do all day?

Earlier this summer the Bureau of Labor Statistics published their annual report, known as The American Time Use Survey. The document details what we Americans do with our 24 hours, from the average amount of time we spend sleeping to the time we spend working, lunch breaking, exercising, emailing, being American. So what do we do all day?

We sleep a lot. Sleeping and resting make up a plurality of our day at 8.74 hours, followed by leisure activities, which take up 5.26, and working, which we seem to do for only about three hours a day. My reaction to three hours a day: where are these people working?

There are other interesting findings when we explore time use and compare different age groups, occupations and genders. Women work fewer hours, spend less time eating and more time sleeping. Men spend more time watching television and more time on lawn care. We all supposedly spend more time watching TV than we do socializing and communicating (2.77 hours compared to 0.72).  Mathematicians take some of the longest and most predictable lunch breaks. Few people work outside the typical 9 a.m. to 5 p.m.; most of the exceptions are employed in the “protective services” sector, e.g. firefighters and police officers. Adults older than 75 spend almost double the time of 35-44 year-olds on leisure activities.

Scientists don’t have their own category in the report. Although I don’t need the government to give me a report to figure out what it is that we do all day, it would be entertaining to see. I was thinking I could start collecting data on my own time use, like Andrew mentioned in response to Eleanor’s post, but then I realized it would be terrifying to know the fraction of my life I’m going to spend counting mosquito eggs. I’d prefer to remain blissfully ignorant on that front.

 

iBeacon and the Internet of Things: how will they change public health?

iBeacon is the newest word in my limited technology vocab, but also one trending strongly in the realm of marketing and e-commerce.

Beacons, including Apple’s iBeacon, are wireless location transmitters built on Bluetooth low-energy (BLE) technology, soon to be used in smartphones and iOS devices. The premise of beacons is to collect and transfer information on relative location, rather than location data from lower-resolution sources like GPS. The technology also allows devices to communicate with other devices without user-mediated direction. Your iPhone goes with you to a bar, the cash register has a beacon that communicates with your phone’s beacon, which communicates with your stored bank account app, and three drinks later you leave the bar with the phone having taken care of paying and tipping. No need to call the bartender or take out your wallet. At Rathskeller this would be amazing.
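For a sense of how “relative location” falls out of the radio signal, here is a toy sketch (my own illustration, not anything from Apple’s API, and the numbers are placeholders) of the log-distance path-loss model commonly used to turn a beacon’s received signal strength into an approximate distance:

```python
def estimate_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Estimate distance (meters) from a beacon's received signal.

    tx_power is the expected RSSI (dBm) at a 1 m reference distance;
    path_loss_exponent is ~2 in free space, higher indoors.
    """
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

# At the calibrated 1 m reference signal strength, the estimate is 1 m.
print(estimate_distance(-59))   # 1.0
# A signal 20 dB weaker implies roughly 10x the distance in free space.
print(estimate_distance(-79))   # 10.0
```

In practice the 1 m reference power is calibrated per beacon, and indoor reflections make the exponent (and thus the estimate) quite noisy, which is why beacons are better at “near vs. far” than at exact meters.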

Heathrow Airport is already onto practical applications of beacons, currently testing their use in a partnership with Virgin Atlantic. Passengers walk into Heathrow, their phones alert the check-in counter of their arrival, and by the time they look down, the boarding pass is already pre-loaded on the phone. Will we look back on 2014 and say: why did we ever wait in lines?

Beacons make sense. They lay the foundation for the coming “Internet of Things” and should eliminate many current frustrations of life in 2014 society (e.g. waiting in lines, carrying credit cards, piles of receipt paper, losing devices, thefts of devices: the old-fashioned problems of, say, 2035).

There is a lot of current talk about how beacons will change the future. Some of it is negative, predicting intolerable amounts of targeted ads directed at consumers every time one walks by a beacon for a product he or she would otherwise never notice. I understand that complaint and would prefer not to have my phone alert me when, say, ketchup is $2.99, and then have it give me some digital coupon when I walk through the condiment aisle, especially if I have no interest in ketchup while beelining for dairy.

Some of the talk is positive, predicting increases in accessibility and streamlining of digital information, such as having additional text, product info or history for the painting you are staring at in the art museum or the food product you are about to digest.

Incredible to me is that of all the current talk surrounding iBeacon applications, I have yet to find any mention of applications in public health. Technology that allows devices to communicate and interact with the outside world and with other devices could be extremely valuable for epidemic containment or for targeting prophylaxis and treatment. Imagine that a symptomatic patient comes into a clinic and is either given a beacon to wear or uses the beacon in a phone or mobile device. When a diagnosis is made, the patient’s beacon registers that information and can then pass it to future beacons it interacts with. Airport beacons would be alerted to check the patient before allowing them on a flight; doctors would be alerted if new patients with beacons had interacted with previous patients whose beacons registered an infection; and behaviors that could indicate a person at risk for a certain infection, or that indicate sickness (e.g. buying flu medicine at a pharmacy or frequent trips to a public restroom), could be registered by beacons, which would then use the digital device to provide information on treatment and prevention.
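The contact-alert idea above reduces, at its core, to set intersection over interaction logs. A toy sketch (entirely hypothetical: the beacon IDs, log format and function name are all mine, not any real system):

```python
def contacts_to_alert(contact_logs, infected):
    """Toy beacon-based exposure alerting.

    contact_logs: {beacon_id: set of beacon_ids it has interacted with}
    infected: set of beacon_ids with a registered diagnosis
    Returns the beacon_ids that should receive an exposure alert
    (anyone not themselves flagged who has logged an infected contact).
    """
    return {bid for bid, seen in contact_logs.items()
            if bid not in infected and seen & infected}

logs = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}, "D": set()}
print(sorted(contacts_to_alert(logs, {"B"})))  # ['A', 'C']
```

A real deployment would of course need privacy protections, time-limited logs and a way to handle false positives, but the underlying bookkeeping is this simple.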

After a recent flight, I passed an LED board displaying health alerts at the U.S. Customs desk, asking passengers to voluntarily declare whether or not they thought they had measles, dengue or a few other diseases. I found it interesting that in 2014 we still rely on human judgment and decision making even in cases where it may be flawed. Can we always trust people to come forward if they are sick and allow themselves to be detained? Would it be safer if technology pre-identified and notified travelers with x symptoms or x diagnosis? I think an iBeacon could help.


Excelling at life

I recently decided that it was time for my biannual closet clean-out. Instead of going about it my usual way, i.e. going through each item in my closet and evaluating it based on a poorly remembered, sentimental opinion of how important that item is, I decided to go about it in the most efficient way I could think of. That is, I’ve created a spreadsheet, and I will record each item of clothing that I wear each day. After a month, I will have a comprehensive list of (warmer-weather) clothing that I wear regularly, and I will chuck/donate the rest.
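The end-of-month tally is simple enough to sketch (a toy version with made-up log entries; the real thing would read the spreadsheet):

```python
from collections import Counter

def wardrobe_keepers(wear_log, min_wears=2):
    """Given a log of (date, item) entries from the sampling month,
    return the items worn at least min_wears times; everything else
    is a candidate for the chuck/donate pile."""
    counts = Counter(item for _date, item in wear_log)
    return {item for item, n in counts.items() if n >= min_wears}

log = [("6/1", "gray t-shirt"), ("6/2", "jeans"),
       ("6/3", "gray t-shirt"), ("6/4", "sun dress")]
print(wardrobe_keepers(log))  # {'gray t-shirt'}
```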

As I was creating my spreadsheet for this cleaning project, it occurred to me that I could collect a lot of data about my life. But I haven’t quite decided what the most interesting things to track would be. There’s the normal stuff that people tend to keep track of in spreadsheets, like spending or diet. Or I could keep track of the things that I hyperbolically declare The Worst, as in “State College drivers are The Worst” (this would be a very large spreadsheet). I could also keep track of other people’s activities as I observe them. For example, the number of times that Matt comes into the postdoc office per day (but there are certainly some unobservable explanatory variables for this event) or the number of arguments that break out at CIDD seminars (this would probably require a relatively long sampling period).

When I ran this idea past my significant other, he replied “you could use a spreadsheet to track how much time you spend using spreadsheets to track your life.” The possibilities are endless, as are the data.

Deep thoughts with Dave Kennedy

-The person who invented the saying “When the #%@$ hits the fan” either had a vivid imagination or a very dirty living room.

-Six percent of people who have ever been born are still alive, meaning that only about 100 billion people have ever died.  After accounting for reincarnations, heathens, and murderers, there is no way that heaven is overcrowded.

-People who deal in absolutes are always wrong.

-The guy who copyrighted the Happy Birthday song should hurry up and patent the placebo effect before someone else does.

-Luck is just skill leaving the body.

-Last night I dreamt that my shoe lace broke.  This morning I was relieved that I can’t dream the future.

-What would a mirror see if it reflected inward?

Human uniqueness? (includes a brief spoiler for the movie “Lucy”)

I recently watched “Lucy”, which can only be labeled the most scientifically incorrect movie perhaps of all time. It takes as its premise the myth that humans use only 10% of their brain’s “capacity” and then proceeds to show us the absolutely mind-boggling progression of a woman’s abilities as she ultimately uses 100% of her brain to travel through time in an office chair and deliver the secrets of the universe to Morgan Freeman, all of which conveniently fit on a USB drive.

I left the theater with a fellow grad student, our brains throbbing from disbelief at how events had proceeded. It was an engrossing, utterly disturbing experience that left us shell-shocked. I stand behind my statement that “it wasn’t completely bad per se, but it was completely wrong.” Interestingly enough, however, it sparked a conversation based on the following question: how different are humans from other organisms on Earth?

As common and seemingly simple as this question is, I find myself visiting it frequently. I often think that as biologists we tend to have a perspective that is unique from that of traditional philosophers or social scientists. We publish accounts of animal behavior that resemble our own perceived human idiosyncrasies, in order to quash arguments that these traits are part of what makes us unique organisms. We seek alternative explanations for patterns in human conflict or the establishment of empires, paying attention to environmental factors that may have played a role, in addition to social explanations.

Rather than finding these examples damaging to the image of what makes us unique as human beings, I instead find them very comforting. What I find uncomfortable is that people seem to consider Homo sapiens as operating outside the realm of natural laws that dictate the existence of every other organism. I’m not naïve enough to overlook the fact that we certainly are highly divergent in many respects and have relieved ourselves of multiple selection pressures (at least for the time being), and of course there are examples of other animals that self-medicate, use tools and make war, though we operate at the extreme in all of these cases (to some degree). However, the tendency for people to be anthropocentric and forget the crucial fact that at some level we are still subject to the same natural laws as all of life might be one of the most depressing, annoying things about being human, ultimately contributing to the ambivalence with which we treat the planet and its inhabitants.

I think I only believe this 30% of the time, but soul-sucking Lucy brought out the thoughts.

 

So … what exactly does that mean?

 

I was recently told that to see stars that are farther away you need a longer telescope. I am taking this little tidbit slightly out of context, but it is a perfect example of a statement that serves to bamboozle you. It seems to me that I could construct a telescope a mile long and be just as blind to the wonders of the sky as before I built it. If I want to see objects that are farther away, some things that might be useful to do are* (i) collect more light (maybe by observing over a longer period of time) or (ii) magnify the image of interest. In a telescope that uses lenses for magnification, it seems probable that more magnification requires more lenses (perhaps separated by longer distances) and that this would result in a longer telescope. The statement “to see objects that are farther away you need a longer telescope” isn’t just opaque, it is dangerous. These types of statements are often said with great confidence, and young (as well as old!) inquiring minds are usually discouraged from questioning further. A more informative statement might go something like: “One strategy to see objects that are far away is to magnify them; this can be done by using a sequence of lenses separated by carefully calculated distances. This is why the telescope in front of you is so damn long.”

*Disclaimer: I haven’t actually checked if these hypotheses are correct, so use at your own peril.
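For what it’s worth, the textbook thin-lens picture of a simple two-lens (Keplerian) refractor does connect length to magnification: magnification is the ratio of objective to eyepiece focal lengths, and the tube must be roughly as long as the two focal lengths combined. A quick sketch (standard formulas, toy numbers):

```python
def keplerian_magnification(f_objective_mm, f_eyepiece_mm):
    """Angular magnification of a simple two-lens (Keplerian) refractor."""
    return f_objective_mm / f_eyepiece_mm

def tube_length_mm(f_objective_mm, f_eyepiece_mm):
    """Approximate tube length: the two lenses sit one focal length
    apart on either side of the shared focal plane."""
    return f_objective_mm + f_eyepiece_mm

# Doubling the objective focal length doubles the magnification,
# but also makes the tube nearly twice as long:
print(keplerian_magnification(1000, 10), tube_length_mm(1000, 10))  # 100.0 1010
print(keplerian_magnification(2000, 10), tube_length_mm(2000, 10))  # 200.0 2010
```

So the bamboozling statement has a kernel of truth for this one design; the trouble, as the post says, is that it hides the reasoning that makes it true.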

Translating ‘competitive release’ with help from a tree

I just arrived back east and decided to detour through the American Museum of Natural History on my way through New York. I’m chatting with my sister, the artsy, business-y member of my family who’s spending her summer living on the Upper West Side, taking courses at an art institute here. We speak different languages when it comes to our work and both have trouble understanding how the other one gets paid to do what she does. We are walking down the hall of North American Forests and staring at the tree rings of the Mark Twain tree, a 1,400-year-old giant sequoia whose cross section took four days to cut and which weighs something like several tow trucks.

Across from the Mark Twain tree, I get excited to see a smaller but still impressive tree core, which helps us in our struggle at crossing the science-art world boundaries. I point at the tree and look at my sister: “This is what I study!”

The tree rings of the loblolly pine show the effect of competition on the tree’s growth. When resources were limited and the loblolly was competing for water, sunlight and nutrients with its fellow plant compatriots, the tree’s growth was inhibited. Without competitors, the tree’s access to resources became abundant, releasing it from the suppressive pressure of competition. Wide, fat rings mark the moment of competitive release.

My sister looks at me with the expression of an “aha” moment: “So this is what you do.” Then she says, “That’s beautiful.” And coming from someone in the art world, I feel like I just received a very great compliment.

I’m amazed by the number of things that inspire me to think about work, even in places I wouldn’t expect. Reminders of broad-scale ecological patterns crop up everywhere I start looking, and it’s exciting to see reminders of our science’s remarkable applications when I have the time to cross its borders.

A (wo)Man of Science

When people ask what you do, how do you respond? Biologist? Researcher? Scientist?

I recently learned that if we’d been around in 19th-century Great Britain, a common response would be Man of Science. As it turns out, the title of “scientist” is actually a relatively recent word, coined as the counterpart to “artist”. It is also a title that was neither easily nor rapidly accepted in Great Britain. Americans, on the other hand, took up the term relatively rapidly and with much less debate, which served as further ammunition for the British anti-“scientist” crowd.

Read more about the history of “scientist” from science historian Dr. Melinda Baldwin.

Today, I’m not worrying about my gender.

No one can deny the facts – gender inequality is a problem in science. The average male scientist is paid more than his female colleagues, in academia and industry. Even when the underrepresentation of women in the highest paid positions of academia is controlled for, women’s wages are still not equal to men’s. Then there’s the ‘leaky pipeline’ whereby, despite taking PhDs in equal numbers to men, women leave academia more often at each of the subsequent stages, so the number of women drops the higher you look up the academic ladder. The ‘leakiness’ of the pipeline is due to many factors, including the dominance of men on selection committees and the fact that women are more likely than men to leave science after having children. Women are also notably underrepresented at conferences, where they give fewer invited talks than men, perhaps because they turn down invitations more often than their male counterparts. I could go on (and will direct you to Megan’s blog below for yet another startling fact) but I’ve made my point. Inequality between men and women in science exists, it has many drivers, and it is angering and alarming, particularly to a young female scientist like me.

Why then, when a female scientist comes to visit and the conversation turns to ‘being a woman in science’, do I feel frustrated and wish we’d just get on with talking about science? Why, when ‘women in science’ was posited as a subject for discussion at a meeting, did a fellow grad student send me an email containing one of my favorite sentences of all time – ‘I am so f***ing bored of hearing about how my vagina is going to stop me from being a scientist’? Is it because we don’t care? I totally reject that. Perhaps I am in denial? My awareness of the above shows that this isn’t the case. Rather, I think it is because the data do not mirror my experience. I live a charmed life as a female scientist, one in which my sex doesn’t matter.

On the surface, my situation looks pretty traditional. A young woman in a lab led by two men over forty (sorry guys), at a center led by a man, in an institute led by a man. Scratch the surface, though, and it starts to look different. During my time in the Read/Thomas lab, I have seen four graduate students defend their PhDs. All were women. Ten postdocs have been hired, of which just two have been men. Five babies have been born to four female postdocs. Most of these new mothers wished to return to science and have done so. They have gone on to publish papers, achieve illustrious fellowships and, in one case, bag a faculty job. Indeed, I do not see the pipe leaking (at least, yet). Of the three postdocs that have won faculty jobs, all have been women. Their jobs are at leading research universities. As it stands, the Read/Thomas lab has seventeen PhD students and postdocs, 76% of whom are women. One learns one’s values from one’s experiences. The Read/Thomas lab is a pretty feminist experience – gender doesn’t matter, only ‘papers, papers, papers’.

Some would say that we’ve gone too far in the other direction and that a lab dominated by women is not good for gender equality. I agree, though I believe we were all hired on our merits and would feel queasy about quotas. I do wonder whether we might benefit from more men and, as someone who has always enjoyed male company, would quite enjoy having more men around. An ex-PhD student of Andrew’s (who returned to science on a grant specifically for women, after a stint as editor of a leading journal) also got me thinking about whether women can sometimes be bad for each other. Women are more likely to suffer from ‘impostor syndrome’ and, at least according to stereotype, to speak to each other about it.  I do think that my fellow female scientists and I sometimes reinforce each other’s doubts about our abilities. One of us talks about not feeling great and we sympathize, because we feel the same. Over time, it becomes ‘normal’ not to feel confident, when in fact we should feel bloody good about ourselves! After all, we’re at a great place, working hard to do high-quality science. A preponderance of women might also be bad for diversity. It is easy to imagine a positive feedback loop whereby the sex bias deters men from joining us, so that the proportion of women only rises. In the midst of recruiting the next cohort of budding PhDs with the (all female) CGSA board, I’ve often contemplated this. Not seeing themselves represented, perhaps men don’t feel welcome?

Our lab’s skewed sex ratio might provoke an alternative reaction – causing some to snigger that Andrew and Matt ‘like women too much’ (wink wink, nudge nudge). If it weren’t for the fact that so many women experience unwanted sexual attention so often, I would find such a suggestion laughable. I can say in no uncertain terms that I have never felt that they intend to do anything other than turn me into a functioning scientist and I do not know anyone who has felt otherwise. Moreover, were I ever to feel threatened by someone at work, I know exactly who I would turn to and don’t doubt that I would be believed.

In life beyond the lab, things appear to be looking up for women too. I confess to being on Twitter and to liking it. I follow numerous scientists, many of whom happen to be women. They are a diverse group and lead successful scientific careers. Indeed, one of the women I most admire – and whose profile picture features herself and her daughter – recently got tenure, to the congratulations of The Twitterati. These women are at the forefront of both science and the campaign for gender equality in science, often raising awareness of the latter by tweeting and blogging. They keep me aware of gender issues, too often horrifying me in the process. The positive thing, the irony even, is that I’m not learning about these problems from whisperings in the corridor or meetings in darkened basements. I’m learning about gender inequality in a public forum from women who have achieved success and whom I follow primarily because I think their science is cool. Indeed, their example has given me the confidence to write about this without fear of losing a job over it.

The fight for gender equality is far from over. I can think of many things about the attitudes of both institutions and individuals that make me feel uneasy as a woman. I would hope (perhaps naively) that if and when I have a child, things will have changed so that I get a reasonable amount of time off. Having the freedom to choose how to split said time with my partner, with no one pre-prescribing my role by giving me more time than him, would also be a major step forward. I’m sure that if I were a member of the LGBT community, my life would be considerably harder. I’m sure there are some old fogeys out there who will scoff at these ideas.  But I just want to say – and I hope not to have to retract this in the future – at this moment in my life, in this lab, I don’t think the fact that I have a vagina is going to stop me from being a successful scientist.

Give the guy a break

For those of you who don’t know, LeBron James is a basketball player — the best basketball player in the world, and he has been for a while now.  He was drafted straight out of high school at the age of 18, and everyone knew he was going to be great.  But throughout his career he’s been criticized mercilessly for being overrated, for being spoiled, for being a sell out, for abandoning his team, for being afraid of competition, for being a sore loser, for being arrogant, for biting his nails, for flopping, etc.  In a world where, allegedly, football players stab people, torture dogs, and rape girls, baseball players kill people, beat their wives, and steal cars, and basketball players get in fights with fans during games, deal cocaine, and point guns at teammates, LeBron James is vehemently criticized for the way that he announced his plan to change teams.  It’s probably also worth mentioning that the way he made his announcement raised over $3 million, all of which was donated to charity.

Now don’t get me wrong, I’m not a huge fan of the guy — when his team lost to my Orlando Magic in the playoffs a few years ago, he didn’t congratulate them on the win — but let’s put things in perspective.  Here’s a guy who was told when he was 14 that he was going to be the best in the world at something, and maybe the best at it of all people ever.  He’s had people fawning over him his entire life.  He was a millionaire at 18.  He is a two-time champion and four-time MVP.  And how does he react?  He’s happily married to his high school girlfriend, with children; he’s close with his high school friends; and he appears in the news for charity and his basketball performance, never once for criminal behavior.  That’s a heck of a lot better than I probably would have done.  Just sayin’.

How is rust fungus like malaria?: My imaginary interview with Frank Dunn Kern in 1914

Frank Dunn Kern was an American mycologist and phytopathologist. He was born in 1883, died in 1973, and spent much of his life studying and teaching botany at Penn State. His 37 years at Penn State are honored by the name of our graduate school building, the Kern Graduate School. Kern studied pathogenic rust fungus, a topic he referenced frequently in his talk for the Botanical Society of America’s symposium on “The Genetic Relationship of Organisms,” held on December 30, 1914.  I recently found a transcript of his talk, later published in the American Journal of Botany, and couldn’t help but wish that Kern was still alive to give another lecture at Penn State.

His 1914 talk, titled “The Genetic Relationship of Parasites,” posed questions in regard to rust fungi, which 100 years later I find incredibly relevant when posed in the context of malaria. The CIDD seminar series has a tradition of allowing students to interview and meet lecturers post-seminar, but as Kern and I missed each other somewhere in the too-large generation gap, I wanted to have an imaginary interview, pretending to have just attended what must have been a fascinating seminar, and am basing his responses on the content of his lecture transcript. Kern’s talk, given before the era of Watson and Crick and the DNA-based field of genetics, covered broad topics ranging from the evolution of parasites, the genetic relationships between hosts in multihost systems, the taxonomic classifications of hosts and parasites, the evolution of sexuality, generalist vs. specialist trade-offs and virulence theory. An ambitious set of topics for one lecture but Kern covered all of them effectively.  It should be noted that Kern did not study malaria, though I think good things would have happened if he had, and I am thus taking much creative license in hypothesizing what he would have said about the malaria system.

A close-up photograph of rust fungus. Image courtesy of Wikimedia Commons.


Me: So Frank, I study malaria — how is rust fungus like malaria?

Frank Dunn Kern (FDK): Malaria and rust fungus are both heteroecious, meaning that they require more than one host species to complete their life cycles. Both separate the sexual and asexual life stages in different hosts, taking sexual forms in an intermediate host and mostly asexual forms in the primary host. There is a tendency in rust fungi to have the “gametophytic hosts higher in classification than the telial hosts […].” Might there be a similar trend in heteroecious protozoa, like malaria, in their asexual-phase hosts vs. sexual-phase hosts?

Me: We see that malaria vectors host the sexual phase of the parasites, while the vertebrate hosts harbor asexual phases. That is true for malaria across taxa and seems to be the case for trypanosomes and Leishmania major. I have no idea why this is the case. (If anyone knows I would love to have the theory explained to me).

FDK: In rust fungi, historical evidence indicates that when autoecious forms evolved into heteroecious ones, the original autoecious host became the gametophytic host and the novel host took on the role of the telial host. Understanding the evolution of malaria into its intermediate and primary hosts might help you see whether there is a parallel in these systems.

To continue answering your question, there are other similarities as well. Both malaria and rust fungi are obligate parasites. Both are pleomorphic, showing very different phenotypes in different host environments. And, at a basic level of similarity, both are eukaryotes.

Me: Your lecture mentioned the use of parasites to better understand genetic relationships between host species, and thus to better organize taxonomic groups. If in 40 years we discover the nature of the genetic material coding for differences between hosts, do you think host-host relationships will predict genetic similarity more strongly than other factors, such as morphological similarity, do? [Watson and Crick’s discovery of the double helix is not until 1953, 39 years after 1914]

FDK: When I started studying Gymnosporangium, I found that the fungus was not only found on hosts in the juniper and apple families, but also on a member of the rose family, and later on a member of the hydrangea family. Though these families were not always classified in the same order, further morphological studies reclassified them as such, and my argument in favor of using parasites to show relatedness is supported by that reclassification. In future research, when we can see the genetic material of these hosts and do a more precise comparison of similarity, I think we might find this trend in many types of hosts, where the ability to share parasites might indicate shared genetic factors.

Me: We can ask that question better with an understanding of the genetic code. Humans are hypothesized to have acquired malaria from other apes; the same is true for SIV/HIV and Ebola. Is this a trend where humans are more likely to experience spillover from more closely related species? Do we acquire novel pathogens from other apes more frequently than from raccoons or bats or cats? It seems possible that we would be more likely to share similar pathogens with hosts more similar to us. In the example of dogs acquiring parvovirus from cats with feline panleukopenia virus, could that also be explained by genetically similar hosts more easily experiencing spillover than ones that are genetically very different? Were dogs any more likely than humans to get a feline virus, or were they just victims of chance? If the genetic-similarity theory has credence, then dogs were more likely because of their genetic relatedness to cats. It seems to me that humans are more likely to be infected by diseases of other mammals than by diseases of reptiles, amphibians or plants.

FDK: That is something future research on genetics may be able to tell us. Hopefully we will get an answer.

[End of the interview].

Imaginary interviews are wonderful because I can end the conversation whenever I want, rather than waiting for the already-decided end time or for a socially appropriate point in conversation. If you have never read Kern’s work, I think it deserves reading by malariologists. Rust fungi and malaria have similar evolutionary histories, and I wonder how much of their evolution can be explained by the same story. Did malaria evolve from being autoecious in humans, similar to the rust fungus in juniper? When malaria spills over into novel hosts, can those host types be predicted by host genetics? Is there a pattern in which host environment asexual forms prefer and which host environment sexual forms prefer? And what explains this pattern?

What is an organism?

Last week, the life scientific took me to Guarda, Switzerland, where I attended a week-long writing course focused on topics in evolutionary biology. At night, students and faculty gathered for “armchair” lectures, casual discussions held in a living room where rotating faculty members delved into deeper discussions of a particular research interest. Sometimes the lectures touched on career advice and professional development; at other times stories of personal chutzpah and unconventional methods yielded memorable vignettes of the happy accidents and personalities lying behind scientific discoveries.

On Wednesday, it was Joan Strassmann’s turn to sit in the armchair, and with a room full of first-year students listening attentively, she asked us: well, fellow biologists, what is an organism?

At first, I sat in the camp of Potter Stewart, thinking “I’ll know it when I see it,” an answer that is both useless and vague for defining an important unit of evolutionary biology. As I began thinking of more useful possible definitions, it seemed that like many definitions in biology, the task would be nebulous. Not only are definitions tricky things to create, the things we choose to define or not define can be arbitrary. Scientists have hyper-enthusiasm for defining species and populations, but as Joan argued during her armchair lecture, a definition for an organism seems to have been neglected. So how should we define an organism?

Joan gave the definition that an organism is the smallest unit of adaptation. If this were mapped onto axes of conflict and cooperation, the organism would fall in the quadrant of highest cooperation and lowest conflict.

An interesting definition, but how does it fare with current presumptions we have about organisms? Would the human microbiome be a part of the same organism as human cells? How would we differentiate between parasites being a part of the human organism and mutualists being a part of the human organism? Measuring the distinction between cooperation and conflict for different associating microbes could be problematic, especially when we can’t always measure costs and benefits of the microbes inside us.

I’ve started thinking about other ways we might be able to define an organism.

(1) An organism could be a unit sharing a genetic identity. But this yields problematic questions: are twins the same organism? Are all asexual clones one organism?

(2) An organism could be defined spatially. Are all cells linked tightly in space a unit of adaptation with a collective identity?

(3) We could define an organism by the second part of Joan’s definition, considering conflict vs. cooperation. Are cancer cells that have high conflict with surrounding tissues a separate organism?

Are there other ways we could or should define an organism?

Post comments if you have ideas. In the meantime, my recent travels have taken me places where I photographed some organism-things, so to prompt organismal thinking, here are pictures:


A lichen: One organism or two?


Red-tail hawk ready to land


A real-life unicorn: a bighorn with a missing horn near the Grande Ronde river, Washington State


Pair of fawns in Hells Canyon, Washington State


Marmots spotted in the Swiss Alps


Flowers found along a hiking trail: one is a hermaphrodite, one is a female of the same species. This made for an interesting science lesson from Dieter Ebert along the trail. Guarda, Switzerland.


A pair of oryx walking in early morning desert sun, La Jolla, New Mexico (oryx are invasive in NM, imported from the Serengeti as game)


Peacock crossing, Washington State

Depressing widget, but read on for splinefun

I defended my thesis last week, which feels pretty much as Katey described it: a sudden, disorienting shift in momentum. During my time at Penn State, I learned that I really enjoy my work, and I plan to continue in academia as long as that’s true. I also learned that models are most fun when they tell me things I didn’t expect, and fortunately that happens a lot. In that sense, the model described in this editorial by Jim Austin is one of the least fun I’ve encountered. The statistical model has been developed into a Science widget that is supposed to estimate the likelihood that one will become a principal investigator. Unsurprisingly, Austin reports that the probability is always predicted to be higher for male scientists and that female scientists are predicted to need two extra first-author publications to make up for not being male. Identifying discrimination is useful, because it forces us to examine what we are doing to contribute to the problem. Case in point: this PNAS paper showing that faculty (male and female) consistently rank male students as more competent and hireable than female students with identical CVs. The model underlying the Science widget serves the same function: highlighting the disparities that currently exist so that they can be addressed. But I’m torn about whether the widget is useful, and in fact I’m worried that it could be worse than useless: do women need more reasons to give up on academia? Or is the widget just a more accessible way of emphasizing an unacceptable bias?

For myself, the widget is worse than useless, and I’d rather not waste time wondering whether I belong in science when there’s science to be had. And if women’s contributions tend to be undervalued, I’ll mention two women who have contributed enormously to research across fields: Ada Lovelace, who wrote the first computer program, paving the way for science as we know it, and Grace Wahba, whose work on splines is frankly inspiring. And as a modest contribution of my own: animated splines!


Best transmission investment strategy and corresponding payoff.
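For the curious, the interpolation behind an animated spline is pleasantly simple. R’s splinefun fits a cubic spline through a set of knots; here is a minimal natural cubic spline in pure Python. This is my own sketch of the standard algorithm (tridiagonal solve for the second derivatives, with natural boundary conditions), not the code behind the animation above:

```python
# Minimal natural cubic spline interpolation, in the spirit of R's splinefun.
def cubic_spline(xs, ys):
    """Return a function interpolating (xs, ys) with a natural cubic spline."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M[0..n]
    # (natural boundary conditions: M[0] = M[n] = 0).
    a = [0.0] * (n + 1)  # sub-diagonal
    b = [1.0] * (n + 1)  # diagonal
    c = [0.0] * (n + 1)  # super-diagonal
    d = [0.0] * (n + 1)  # right-hand side
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward sweep, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def s(x):
        # Find the interval containing x, then evaluate that cubic piece.
        i = next((j for j in range(n) if x <= xs[j + 1]), n - 1)
        t0, t1 = x - xs[i], xs[i + 1] - x
        return ((M[i] * t1**3 + M[i + 1] * t0**3) / (6 * h[i])
                + (ys[i] / h[i] - M[i] * h[i] / 6) * t1
                + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6) * t0)
    return s

f = cubic_spline([0, 1, 2, 3], [0, 1, 0, 1])
print(f(1.0))  # the spline passes through its knots, so this is ~1.0
```

Animating, as in the figure, just means re-fitting the spline as the knots move and re-drawing each frame.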

Coincidence

As someone who studies infectious diseases, one thing that continues to amaze me is how improbable their persistence can seem.

When I was in Tanzania, one of the technicians invited me to visit a local hospital with him. This particular hospital specializes in treating people with leprosy, which made me curious about how leprosy is transmitted. As it turns out, part of the answer seems to be “pretty rarely.” According to one paper, about 95% of people are resistant to infection by the mycobacterium that causes leprosy.*

A few days after visiting the leprosy hospital, I went to a meeting where a researcher mentioned in passing that the number of Anopheles mosquitoes, let alone malaria infected Anopheles mosquitoes, captured in Dar es Salaam seems extremely low and yet malaria persists at a relatively steady rate (something like 10%, if I remember correctly).

The other day, as I lay under a bednet in the insectary here at Penn State, it struck me as highly unlikely that such a frail little insect could find an unprotected human host even once in her lifespan, let alone twice. And not just any humans: she has to feed on a malaria-infected human and then survive long enough to transmit by feeding on another. And yet, the WHO estimates over 200 million malaria cases in 2012, so it’s not as unlikely as it seems to me, watching the mosquitoes bounce off my net.
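A crude back-of-the-envelope calculation makes the same point. Every number below is an assumption of mine chosen for illustration (only the ~10% prevalence echoes the Dar es Salaam figure mentioned above), not a measured value:

```python
# Back-of-envelope sketch of why rare per-mosquito events still add up.
# All parameters are illustrative assumptions, not measured values.
daily_survival = 0.9      # assumed probability a mosquito survives a day
eip_days = 10             # assumed extrinsic incubation period, in days
human_prevalence = 0.10   # fraction of humans infectious (the ~10% above)
p_feed = 0.3              # assumed probability of a human blood meal on a day

# One transmission chain: feed on an infected human, survive the whole EIP,
# then feed on (and potentially infect) a second human.
p_chain = (p_feed * human_prevalence) * daily_survival**eip_days * p_feed
print(f"per-mosquito transmission probability = {p_chain:.4f}")  # tiny

# But multiplied by an enormous vector population, it is no longer rare.
mosquitoes = 1e11  # hypothetical global vector population
print(f"expected transmission events = {mosquitoes * p_chain:.2e}")
```

The per-mosquito probability is a fraction of a percent, yet the expected number of transmission events comes out in the hundreds of millions: roughly the scale of the WHO case count, from the mosquito’s hopeless-looking odds.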


* As a side note, I once spent a month shadowing an infectious disease doctor at a hospital in the U.S. The doctor mostly treated people with bacterial infections secondary to other medical issues like diabetes or cancer. He also treated some HIV+ patients and occasionally, a patient with TB which, like leprosy, is caused by a mycobacterium. I realized I was probably better suited for research or public health when I was more interested in the when and why of the TB cases than the treatment plans the doctor devised for his patients.

Down the marmot hole

As infectious disease biologists, our ears perk up at the mention of pathogens in a system and we begin to wonder if and how they might be important in shaping or driving patterns.

I often find myself pausing after anecdotal stories in books, articles, etc., and wishing the authors would elaborate. Why is it, Mr. Theodore Roosevelt, that there are invisible lines that abruptly signal the presence or absence of ticks in the depths of the Amazon? Are you telling me, Tim Cope, young adventurer extraordinaire, that Mongolia is one of the few places where people can commonly get plague?

I’ll elaborate on the second thought here. First off, let it be known that little research has been done on plague in Mongolia, and the bulk of the literature is in Chinese or Russian (and thereby inaccessible to this writer). Instances of plague transmission to humans have been actively recorded since 1897. The hosts for plague include a variety of mammals, including species of marmots, pikas, susliks (read: ground squirrels) and voles, but the most important in transmitting to humans is the charismatic and delectable (according to Mongolians) marmot.

More interestingly, instances of plague have been recorded from the 1300s on the fringes of the Mongol empire. In a popularly cited account, the Italian Gabriele de’ Mussi describes the siege of Caffa (today, Feodosija in Crimea). This port city held a mix of Italian traders and their Mongol hosts, who often butted heads. An escalation of tensions in 1343 led to a siege by the Mongol leader Janibeg that lasted until 1347, when the Italians were allowed to remain in Caffa. Interestingly, this incident is posed as one of the very first instances of biological warfare. In 1346, the besieging Mongols were struck by a devastating disease that contributed to the decision to allow the Italians to remain at the port. Gabriele de’ Mussi describes the scene:

“The dying Tartars, stunned and stupefied by the immensity of the disaster brought about by the disease, and realizing that they had no hope of escape, lost interest in the siege. But they ordered corpses to be placed in catapults and lobbed into the city in the hope that the intolerable stench would kill everyone inside.”

In addition, it was thought for a time that infected Italians fleeing the scene contributed to the introduction of plague into Western Europe. New analysis suggests that this was probably not the case, however.

See where infectious disease rabbit holes can take you? From beautiful scenes on the steppe to rotten, diseased corpses and medieval catapults.

Where’s this headed?

Sex ratios are really interesting, especially when they’re really skewed. (This interests scientists, and also fans of science fiction: check out Y the Last Man, in which a virus kills every creature on earth with a Y chromosome.)

Well, here’s an example of a skew in the opposite direction. Imagine you introduce a genotype into a population whose carrier males are genetically engineered so that X-bearing sperm are shredded, meaning roughly 95% of their offspring are male. This is happening in mosquito populations and is being touted in the linked news article as a means for controlling these pests. It could be useful for controlling the spread of pathogens that only females transmit with blood meals.

What worries me is that the release of male mosquitoes that produce mainly male mosquitoes is going to be a rapid-fire evolutionary dead end. There’s a huge fitness disadvantage to a male-biased skew. Is this even worth doing? A waste of money? What are your thoughts or predictions about evolutionary outcomes?
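To see why I expect the skew to snowball, here is a toy model of my own construction (fecundity, release size, and population numbers are all hypothetical): drive-carrying males sire 95% sons, and every son inherits the drive, so the drive spreads through the male population even as females become scarce:

```python
# Toy discrete-generation model of an X-shredder release. All parameters
# are illustrative assumptions, not values from the linked article.
def step(females, wild_males, drive_males, fecundity=10.0):
    total_males = wild_males + drive_males
    if total_males == 0 or females == 0:
        return 0.0, 0.0, 0.0  # one sex (or the population) is gone
    p_drive = drive_males / total_males  # chance a female mates a drive male
    offspring = females * fecundity
    drive_sired = offspring * p_drive
    wild_sired = offspring * (1 - p_drive)
    # Drive fathers: 95% sons (all inherit the shredder), 5% daughters.
    new_females = 0.05 * drive_sired + 0.5 * wild_sired
    new_drive_males = 0.95 * drive_sired
    new_wild_males = 0.5 * wild_sired
    return new_females, new_wild_males, new_drive_males

f, wm, dm = 1000.0, 1000.0, 100.0  # release 100 drive males into the wild
for gen in range(10):
    f, wm, dm = step(f, wm, dm)
    male_frac = (wm + dm) / (f + wm + dm)
    print(f"gen {gen + 1}: {male_frac:.0%} male")
```

In this sketch the shredder takes over the male population within about ten generations and the sex ratio tilts heavily male, with female output per generation eventually falling below replacement. Whether real populations crash like this or evolve resistance first is exactly the open question.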

More on prions…

I spent last week at the Ecology and Evolution of Infectious Diseases conference at Colorado State, which had a slew of excellent talks including an entire morning devoted to the topic of how to present science as an interesting narrative rather than a dry series of findings. Amidst the fascinating science, I learned a couple of shocking things about prions from Edward Hoover‘s talk. While we have yet to see a zombie disease transmitted through human populations, Hoover aptly describes prions as zombie proteins, which turn normal proteins into aberrant forms that aggregate into plaques in neural tissue and cause extensive damage, leading ultimately–and inevitably–to death.

Hoover presented evidence that diluting prions can prevent infection, or at least slow its progression. I wanted to know whether that’s because the animal immune system can cope with small numbers of prions, or whether prions themselves are less efficient when there are few of them. I hold out hope for some immunity against prions, but what I found was this nice article by Silveira and colleagues showing that prions need to aggregate together to zombify normal proteins (terminology mine) and make animals sick. Interestingly, there seems to be a maximum size for effective zombification of normal proteins, beyond which prions are less efficient. One of the hallmarks of prion diseases is plaques in the brain tissue, and research has focused on preventing those plaques from forming, but Silveira et al. suggest that actually concentrating prions into large plaques may be a good thing, slowing the progression of the disease.

Hoover and others also passaged prions through different animals to get prions that were better at infecting other species and with shorter incubation periods. Thus prions appear to be evolving by natural selection, but without any kind of genetic change. So prions can evolve, and quickly, while normal proteins are constrained to change at the sluggish pace set by mammalian generation times. Now instead of wondering why mammals haven’t evolved better defenses against prions, I wonder why we’re not all up to our ears in prions. Thoughts?

Field Necropsy

Field work got messy last week (this is a warning for messy pictures below).

Most days of working in the field start out the same: Kezia and I wake up, drive through canyons with our radio receiver and telemetry equipment to listen for sheep and scan hillsides for visual cues of ewe groups. But on Sunday, we strayed from the normal routine of collecting observational data when our scopes hit something we’ve been both expecting and fearing: our first dead sheep.

Working amidst an outbreak, death is expected. Disease kills. We recognize this when we talk about disease imposing selection pressure, when we talk about host-pathogen evolution, or when we talk about virulence-transmission trade-offs. Death is different when it looks like a dead sheep on a hillside.

What do you do with a dead sheep? Kezia and I were told that we needed to recover the carcass and bring it to the vet lab where necropsy experts could determine the cause of death and microbiologists could test for the presumed causative agents of pneumonia. A carcass contains valuable data in an epidemic setting, especially in a system with a small population size, difficult terrain to navigate, and rarely observed fresh carcasses (in remote areas, carcasses sometimes go unfound for 5 to 10 days, if not months, postmortem, by which point little remains but clean bones). We were told to find the carcass, hike it back to our truck and then deliver the whole body to the vet school in Pullman.

What do you do with a dead sheep when you can’t carry it back to your field truck? By the time we found additional hands to help carry the sheep, and managed to hike our way to where the ewe died, she had been dead for 40+ hours. Her carcass, sitting on a cliff in the middle of a hot Hells Canyon, was colonized by maggots. The skin sloughed off when we lifted the body. One-hundred-fifty pounds of dead weight needed to come down an incline that took both hands to boulder up. We could not carry the whole body out of the canyon, at least not in one piece.



Maggots break down tissues quickly postmortem. Forensic entomologists can use images like this one to gauge how long a carcass has been in an area and estimate time-since-death.

Option B was to do a necropsy in the field and bring back the essential parts which were more feasible to carry: the head and the pluck. For pneumonia, the most important samples come from the lungs, trachea and respiratory tissues.

I had never done a field necropsy. I had never touched a sheep. I was supposed to bring back the pluck. What was “the pluck”*?

Lucky for Kezia and me, one of the field biologists who tracks the bighorn population across the river from where we work had done a necropsy before and was willing to show us how it is done.

This is how you do a field necropsy:

Step one is to remove the head and try to take the pluck out intact. You do this with tools included in a “necropsy kit” — the kit includes a large knife, a saw, pliers, sampling bags, gloves, and a scissor-like cutting tool that reminded me of pruning shears (you use those to crack ribs if the lungs don’t slide out).

Step two is to take out other organs which may not have come out with a first attempt to slide out the pluck. Bodies decay quickly in summer sun so we needed to remove additional parts of the lungs which were not removed with the trachea and upper respiratory tract (the lungs had nearly liquefied).

Step three is to take proxy measurements of the sheep’s health status. You do this by looking at the bottom of the hooves and checking the color of the bone marrow. With the hooves, you are checking to see if there are any abnormal growths, a common indicator of poor health in bighorns. The color of the bone marrow is used to gauge nutritional status. White bone marrow is healthy and rich in fats, indicating that the sheep had high enough body fat that it would not have died from starvation. Red marrow indicates that a sheep has used up all of its bodily fat reserves and has begun to use its last reserves — the fat reserves in the bone marrow. Our sheep had nice white marrow.


The inside of a femur bone gives us a glimpse of marrow, which here looks healthy as indicated by the white to pinkish color.

Step four is to bring back the samples to a lab where other scientists can test for pneumonia pathogens and give us a better idea of the cause of death. Even if the ewe did not die of pneumonia, testing the respiratory tissues for pneumonia agents will give us an idea of whether she had been infected in the past or was harboring low densities of pathogens that may not have made her sick but could have allowed her to spread the infection to sheep in her ewe group or to young lambs in the same group.

The data we obtain from carcasses will give us a better idea of the prevalence of infection in the population and the mortality rate associated with pneumonia. When we recover samples like we did on Sunday we hope that we are working toward a better understanding of how, when and why the outbreaks occur. The experience has left me so far with a deep appreciation of data from wildlife systems: often we look at a data set and see points on a graph. Out here, I am reminded of where mortality statistics come from and what disease as an evolutionary selection pressure looks like “on the ground.”


*The pluck is a term for the internal organs of the chest — trachea, lungs, and heart — removed together.