Living in the UK for 3+ years has allowed my spouse and me to hone our European travelling skills to a considerable extent. We’ve done our fair bit of travelling, trying to take advantage of our proximity to the continent whilst we’re here. Recently we’ve been amazed at how comfortably and easily we get about in countries where neither of us speaks the native language. I never thought I could feel so at home in Europe, and especially the UK. Just to give some context, this is where we’ve been in our time here: France (x5), Germany (x2), Italy (x2), Norway (x2), Spain, Scotland, England, Wales, and Finland.

As you can imagine, considering that we up and moved to the UK from the US, we don’t like to experience foreign countries as the typical tourist does. Our preference is to get an authentic taste for the tremendous variety of cultures that Europe has to offer, as that is truly its bounty. Whilst we realize we can never genuinely penetrate the nuances of a local culture, particularly given that neither of us is even a bad imitation of a polyglot, we’ve still learned how to see a few of the threads that weave a local community.

The key to this is quite simple really, and not entirely uncommon in Europe: self-catering accommodation. We weren’t familiar with this concept as a general way to travel before moving here, but essentially you stay at an apartment/home, often owned by an individual, instead of a hotel. This provides several advantages over staying at a hotel:

  • Typically the individuals who rent out places are natives of the area. They can give you much better recommendations of attractions and (especially!) local restaurants. This is particularly useful if you are like me, and prefer relatively small, cozy, friendly, authentic restaurants. The caveat here is that such restaurants may not have menus in English or English speaking staff, so be prepared for an adventure in gastronomy…but typically many pleasant surprises!
  • You can cook your own meals. This provides two advantages:
    • You can reduce costs, especially for meals like breakfast and lunch. Eating in for dinner a few nights helps too; that way you can splurge and really enjoy some of the great food and drink Europe has to offer.
    • You get to go shopping at local markets/supermarkets. I know this doesn’t sound quite like a typical tourist destination, but it’s great fun food shopping in Europe, especially in countries like France and Italy that are passionate about food (the variety of pasta available in Italy boggles the mind!). It’s worth setting aside a fair amount of time for this upon arrival to stock your shelves, as it can take a while to sort out where to find things and what you’re looking at (typically not a lot of English here!). The other great thing is that you can often get a taste of local food/produce at a much reduced cost compared with what you’d pay in a restaurant (e.g., last time we were in Germany, white asparagus was in season; dirt cheap to buy and cook up!). For example, in France, if you cannot find a local fromagerie (cheese shop) to explore (or are too intimidated to step in), a supermarket is a great place to find and buy a few chunks of cheese you’ve never heard of and store at your apartment. And let’s not forget access to inexpensive, but tasty, wine! (Can you tell I’ve fallen in love with France?!)
  • You can gain access to local neighborhoods and small towns where hotels may be difficult to come by. For example, at a recent stay in Paris we stayed in a small apartment in a neighborhood well outside the main city centre (but a few steps from the metro). It was fantastic; we were on a block that had two bakeries, a cheese shop, a pastry shop, two butchers, a chocolate shop, several cafes, and a fruit/veg shop. Granted, very few of these people spoke English, but they were also ridiculously friendly, and mostly happy to help as I stumbled through my marginally coherent French. It was far and away one of the highlights of Paris, although it was dangerous leaving the apartment in the morning, as I would always come back with some cheese, wine, bread, pastry or (what made the spouse quite happy) chocolate.
  • It’s typically considerably less expensive than an equivalently furnished hotel. For example, on that recent trip to Paris, we paid 60 EUR a night. Sweet deal.

If you’re up for an adventure and want to get a deeper experience of Europe than just seeing the major sites, I highly recommend going the self-catering route. However, doing self-catering is not without some annoyances:

  • You typically (but not always!) need to transfer a deposit to whomever you contact to book the apartment/home. This tends to be more common in cities than in smaller towns or rural areas, where more old-fashioned rules tend to apply (i.e., I trust your word that you’ll show up). To transfer a deposit to a European account, I have used an online money-transfer service. It’s worth going through the process and understanding how to send money before booking a place, but it’s quite straightforward.
  • Keep an eye out for fees for final cleanings and linens. Since these are self-catering places, they are often frequented by people from within the country who may drive there and bring their own towels and linens. You can typically request towels and linens (sometimes for a small fee). Also, since this is not a hotel, there won’t be any maid service to clean up after you, so it’s up to you to keep the place reasonably tidy. However, it’s expected that the apartment is relatively clean when you leave, so it’s worth asking if the final cleaning is included in the price. If not, it usually costs between 30 and 50 EUR, but it’s definitely worth paying so you don’t spend the last few hours of your holiday cleaning up.
  • Oftentimes appliances don’t work the same in Europe as they do in the US, especially dishwashers and washer/dryers. And sometimes you’ll find unexpected amenities, such as bidets (especially in Italy) or saunas (Finland). We are still continually surprised at what we find from our self-catering experiences, but a pinch of salt and a lot of laughter go a long way!


Finally, the language barrier can be intimidating and frustrating at times. However, we’ve found throughout Europe, speaking and interacting with kindness and respect is universally understood, even if the words are not.


A provocative piece was recently published in Science concerning the relationship between paylines (i.e., priority scores) for grant submissions to NIH for bread-and-butter R01s and the resulting impact of publications arising from those grants. Now, I’m no NIH peer review wonk*, but I found some of the results a bit surprising.

In the research discussed, grants funded at NHLBI (National Heart, Lung, and Blood Institute) between 2001 and 2008 were divided into three tiers: better than 10th percentile (best), 10th to 20th percentile, and 20th to 42nd percentile. And, well, I think the figure in the article says it all:

The data suggest that the percentile scores on a grant do not correlate at all with publications per grant or citations per million dollars spent. However, a closer look at some of the original research that this Science article is based on does suggest a weak relationship between grants in the top 10th percentile and citations in the first 2 years post-publication (oddly, this fact was left out of the Science article). Nonetheless, this appears to confirm much of what was discussed in the comments section of a recent post by FunkyDrugMonkey, where priority scores seemed to be all over the place for most investigators, with no obvious relationship between self-perceived best and worst ideas. Taken together, it looks like priority scores are an extremely poor predictor of “impact”, at least as judged by citation counts**.

So what does this all mean? Well, to me it suggests that either A) the peer review grant system at NIH is really rubbish or B) the system is wasting a lot of time and money in trying to discern the indiscernible. I suspect the answer is B (as pointed out in the comments section of the article by a Scott Nelson): these committees at NIH are getting a lot of really good quality proposals, and differentiating between them is an inherently impossible task, resulting in a lot of noise in the system. If all the proposals are roughly equal with respect to the quality of the proposed science, then what gets higher/lower scores is going to depend more on the idiosyncrasies of the reviewers/review panel than on anything particularly objective or obvious (or, to take a more cynical tone, on the name of the PI on the grant application).

If it is the case that NIH study sections are largely trying to discern the indiscernible, then there would be straightforward ways to streamline the entire process. Perhaps after proposals are deemed to meet some acceptable scientific threshold, a subset of them could be chosen randomly to be funded and another subset chosen by program officers or others at the NIH based on certain strategic priorities, something like that. It seems like that could be a fairer, less expensive, and less time-intensive system that would result in similar outcomes.
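The noise argument can be sketched with a toy simulation (entirely my own construction, not from the article or the original research): give every above-threshold proposal a “true merit” drawn from a narrow distribution, add larger reviewer noise to produce the panel score, and see how weakly score tracks merit.

```python
import random

random.seed(42)

# Toy model assumptions (mine, for illustration): among proposals that clear
# the quality bar, true merit varies only a little, while the panel's score
# adds comparatively large reviewer-to-reviewer noise.
N = 1000            # proposals above the scientific-quality threshold
MERIT_SPREAD = 1.0  # narrow spread in true merit
NOISE_SPREAD = 3.0  # larger spread from reviewer idiosyncrasies

merit = [random.gauss(0, MERIT_SPREAD) for _ in range(N)]
score = [m + random.gauss(0, NOISE_SPREAD) for m in merit]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(merit, score)
print(f"correlation between true merit and panel score: r = {r:.2f}")
```

When the noise is three times the merit spread, as here, the correlation comes out around 0.3, so ranking proposals by such scores is only modestly better than drawing lots; the lottery-above-a-threshold idea isn’t as outlandish as it first sounds.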

Even if my ideas here are a bit off, findings such as these at the very least suggest that we need to check our assumptions about the best and most efficient ways to assess grant applications. A priori, I certainly would have expected a reasonably strong positive correlation between priority scores and citations. I would love to hear how the findings from this work jibe with anecdotes from any readers out there.


*I’m very much a tyro when it comes to NIH grants, so feel free to take me to task in the comments section if I’m dead wrong on my understanding of the NIH grant peer review game.

**I’m not sold on the idea that citations are a good marker for impact. I think impact is a much more ephemeral concept than can be captured in any single or suite of metrics. Given the inherent unpredictability of science, true impact is not apparent when squinting into the bright light of the future, but when taking stock of the past.

I mostly like Google Scholar for altmetrics; it’s quick to check out any recent citations to my work, and it updates much more rapidly than Web of Science (WoS). The downside (or the upside…), of course, has always been that Google Scholar tends to give higher citation counts than WoS by including things like dissertations, non-English language publications, books, etc. (basically anything published under an institutional domain as far as I understand; which can be gamed quite easily apparently, as discussed here). So the absolute numbers for Google Scholar always have to be taken with a grain of salt, but a recent change in the citations to one of my papers suggests that the citation counts should be taken with more like a 1 kg bucket of NaCl.

I have a nice review article I published back in the heyday of my PhD, which has garnered a decent number of citations. However, a couple of weeks ago I noticed that the citation count quickly increased by about 30. I didn’t think the paper had all of a sudden become a blockbuster, so I did a little snooping. Turns out the boost in citations is from a book published back in 2010 that has several chapters (about 30), one or two of which pertain directly to the topic of my review and probably cited it. What is disturbing, however, is that it looks like every single chapter in this book is included as a citation to my review article. Are the algorithms for Google Scholar really this poorly designed? It seems a bit ludicrous and really shakes my faith in Google Scholar as a reasonable (if inflated) source of citation counts.

I’ve emailed Google Scholar to point out the issue, but I doubt it will even be read by a human being, let alone acted upon. I’ve emailed them in the past, when Google Scholar was in beta, to make suggestions and never heard/saw anything…just another small ripple in the salt-filled sea that is Google Scholar…

All scientists should read: “What’s so Special About Science (And How Much Should We Spend on It?)”

Near the end of last year the president of the AAAS (American Association for the Advancement of Science, publishers of Science magazine) wrote a great piece that every scientist should read. It concerns the role of science, and particularly basic science, in boosting GDP (gross domestic product) and providing the fodder for technological advance. A lot of great talking points for dealing with science skeptics here.

The growth in U.S. GDP per capita has been exponential since the late 19th century. Such growth is largely responsible for the high standard of living that those in the US, and other developed countries, have enjoyed over the past half century or so. And what is the primary driver of this growth? It turns out that at least 50%, and up to 85%, is due to technological progress buttressed by basic science, as opposed to natural resources, land, and labor. Furthermore, the return on basic science is impressive:

Many institutions, including our universities and retirement funds, accept 5% sustained ROI as a decent return. Yet investments in basic research are variously estimated as ultimately returning between 20% and 60% per year

Wowzers! If that isn’t a number to impress your friends and family, I don’t know what is. I always find it challenging to explain to others why it’s worth doing basic scientific research. I usually point to the fact that green fluorescent protein was discovered in jellyfish and Taq polymerase originally came from thermophilic bacteria. Both have revolutionized biological science, and both came out of very basic science. But it’s nice to have some sexy numbers to back this claim up!
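To get a feel for how large that gap is, here’s a quick compounding sketch (the 5% and 20% rates come from the quote above; the 20-year horizon and the $1 principal are my own choices for illustration):

```python
# Compound growth of $1 at a "decent" 5% annual return versus the low end
# of the estimated 20-60% annual return on basic research.
years = 20
decent = 1.05 ** years         # 5% compounded over 20 years
basic_science = 1.20 ** years  # 20% compounded over 20 years

print(f"5% for {years} years:  {decent:.1f}x the original investment")
print(f"20% for {years} years: {basic_science:.1f}x the original investment")
```

Even at the conservative end of the estimated range, two decades of compounding leave the “decent” return more than an order of magnitude behind.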

Another interesting idea discussed in the piece is appropriability. I had not really thought about this before, but it is essentially the ability of the creator or discoverer of a technology to appropriate it and turn it into something that can be sold in the market. As we move into a more global and digital world, the relationship between where a discovery is made and who benefits from it economically is increasingly blurred.

The piece then ends with a bit that almost brought a tear to my eye:

A 2009 Harris poll (17) asked the public to name the most prestigious occupations. The answers (in order) were firefighter, scientist, doctor, nurse, teacher, and military officer. What struck me immediately when I saw this result is that every one of these, except scientist, is an immediate helper occupation. These people save us from fires, prevent attacks, teach our children, and heal us. By contrast, the value of scientists and the benefit they produce can be very long term. Yet the public perceives scientists as belonging in the same basket of high-prestige helper occupations. This tells us something. Another poll, by Pew (18), finds that the majority of Americans think that government investments in basic scientific research (73%) and engineering and technology (74%) pay off in the long run, with only small differences between Democrats and Republicans. That also tells us something.

When I ask nonscientists, “what is science good for?”, only rarely do I hear the answer that it promotes economic growth. Instead, most answers are about science creating a better world in ways not easily monetizable: enabling longer, healthier lives; protecting the planet; feeding humanity; deterring (or winning) international conflicts; bringing us resiliency to engage future challenges of an uncertain world, with climate change as one example. People also talk about the basic human need for discovery and understanding of the natural world, about the almost-mysterious power of science at engaging the natural idealism of young people, and empowering them to participate in the world in ways that they otherwise would not.

Wow! The day-to-day life of a scientist is hard. Wrangling with data, papers, and grants. Fighting nerves to give a talk in front of peers, or fighting for authorship…it’s a battle, no doubt about it. But knowing that people the world over put us in the same boat as firefighters and nurses…well, that just warms the cockles of my funky heart.

CV Quandary: Include Acknowledgments?

Posted: February 6, 2014 in Academia

I’m always looking for ways to beef up my CV for eventual applications for faculty or other positions, and I’ve been thinking about acknowledgements. In my current lab here in the UK, I’ve brought a skill set that is not typically present given the sort of research that is done. However, it has proven to be quite useful, and I’m always happy to lend a hand to a fellow scientist. The work that I do is not typically enough to warrant authorship, but I have received several acknowledgements in various papers for helping out. So now that I’ve actually racked up a few of these, I wonder if it is worthwhile to include them in some way on a CV. A sort of “acknowledgements received” section, indicating my willingness to help my fellow pursuers of truth. Or does it look desperate? That I’m reaching too much for an edge? Hmmmm…any thoughts out there? What would you think if you saw such a section on a CV whilst on a faculty hiring committee?

Recent Nobel Laureate Randy Schekman wrote an op-ed piece in The Guardian back in December railing against the “glamour mag” journals Science, Nature, and Cell for damaging science (also discussed here and here). He rightly slams the culture of using journal impact factors as a judge of paper quality, which I agree with wholeheartedly. However, I disagree with what appears to be the crux of his argument: that these luxury journals are artificially limiting the number of papers they publish solely in pursuit of selling more subscriptions:

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called “impact factor” – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

This is where this young postdoc parts ways with the opinions of the decorated Nobel laureate, particularly with respect to Science magazine, which is produced by the non-profit American Association for the Advancement of Science (AAAS). I have been subscribing to the print version of Science since two years into my PhD, and I have continued to receive it here in the UK despite the $160 price tag of an overseas subscription. Why get a subscription to Science as opposed to, say, Nature or Cell? A few reasons:

  1. I get an additional line on my CV because I’m now an AAAS member. When I was first deciding between journals, this was the deal breaker for a young graduate student looking for ways to beef up what was, at the time, a very bare-bones CV.
  2. I am supporting the AAAS, an organization that I respect and that is an advocate for science in the world of US gov’t policy. A subscription to Nature or Cell just lines the coffers of Dutch, British, and German fat cats. Might as well support the boys (and girls) at home fightin’ the good fight!
  3. I supremely enjoy the content. Half of the print Science magazine is not scientific articles, but science journalism of various sorts. There are stories, policy debates, op-ed pieces, etc. I typically read this portion of the magazine end-to-end. It’s generally very high quality and an order of magnitude better than Scientific American or New Scientist, magazines I was getting prior to, and during the first years of, graduate school. However, I rarely read the scientific articles unless they are interesting enough and close enough to my field that I can understand them. On occasion my sub-field gets a little bit of love in Science, and I definitely read those.
  4. Call me old school, but I much prefer a print magazine to reading things on a device. I have commuted to work via public transit for over 8 years now, and I typically just grab the latest Science mag to read to/from work. I’m willing to pay a premium for something new to take with me every week of the year (despite protestations from my spouse as they tend to pile up around the flat…).
  5. Exposure to different fields and ideas. Whilst I have no desire to read the scientific literature on archaeology or snake venoms, I quite enjoy reading well-written scientific journalism on those topics. In addition, there are always interesting articles about publishing, open access, bibliometrics, etc.

Thus, there is considerable value in having a print version of Science magazine. This is not an “artificial” limitation on the number of articles that can be published in Science, as Schekman claims. It is legitimate. There is still a significant place for print subscriptions, even in today’s increasingly digital world.

Nonetheless, it does look as if Science is already working to address, in part, the point that Schekman makes with respect to the number of papers that are published. In the past year, Science has added a “Research Article Summary” alongside the “Research Articles” and “Reports” that they publish. These summaries are a single printed page, with what is basically an extended abstract and a single figure, plus a web address for accessing the entire article. Perhaps this is a format the editors are exploring so they can indeed publish more papers.

Finally, Schekman charges that Science (and Nature/Cell) tout their impact factors incessantly. I’ve never once seen a mention of Science’s impact factor in the magazine in the 7+ years I’ve been reading it. That’s not to say it is not touted in other forums, but it’s definitely not in your face as far as the magazine itself is concerned.

It seems to me that Schekman, editor of the newly created eLife online journal, has his guns trained on the wrong target (along with a clear conflict of interest). The real pushers of glamour mags are funding, tenure, and hiring committees. Whilst I wouldn’t shed a tear if Nature, Cell, and their ilk went the way of the dodo, they are just for-profit organizations trying to make a buck the best they can within the existing culture. They don’t strike me as drivers of the culture; that lies with administrators, scientists, and policy makers.

So keep your hands off my Science mag!


Disclaimer: I never have, and probably never will, publish in Science.

Many of us have seen it: the inclusion of impact factors on a CV. I’ve even had a senior PI suggest putting an impact factor on my CV in the event that I publish in a journal that some in my sub-field aren’t as familiar with. I cringed when I was given this suggestion, and I balk at CVs that contain impact factors. We all know how useful impact factors are (well, most of us anyway), and it has been discussed ad nauseam in many places (e.g., here). So then what’s a wee postdoc like myself to do to promote how totally and completely awesome my publications are (or not…)?

Well, what about adding citation statistics to a CV for each paper? It would require some constant updating, but not too much for a young chap like myself with under 20 pubs. Alternatively, a link to a Google Scholar profile would do the trick as well, I suppose. Such an idea is not without downsides, as it doesn’t say much about newly published work (though article views would, if they were available from all journals like they are from PLoS, as they accrue quite quickly and, if I had to guess, are correlated to a degree with later citations). This would certainly draw attention away from just where something is published and toward a more “objective” measure of article impact.

Of course, the other caveat here is that perhaps citations aren’t the best method of judging how good someone’s work is. I don’t disagree with this point, but it’s got to be better than just using the journal name/impact factor, right? At least a step in the right direction…hhhmmmmm…