A provocative piece was recently published in Science concerning the relationship between paylines (i.e., priority scores) for grant submissions to NIH for bread-and-butter R01s and the resulting impact of publications arising from those grants. Now, I’m no NIH peer review wonk*, but I found some of the results a bit surprising.

In the research discussed, grants funded at NHLBI (National Heart, Lung, and Blood Institute) between 2001 and 2008 were divided into three tiers: better than 10th percentile (best), 10th to 20th percentile, and 20th to 42nd percentile. And, well, I think the figure in the article says it all.

The data suggest that the percentile scores on a grant do not correlate at all with publications per grant or citations per million dollars spent. However, a closer look at some of the original research the Science article draws on does suggest a weak relationship between grants scored in the top 10th percentile and citations in the first 2 years post-publication (oddly, this fact was left out of the Science article). Nonetheless, this appears to confirm much of what was discussed in the comments section of a recent post by FunkyDrugMonkey, where priority scores seemed to be all over the place for most investigators, with no obvious relationship between self-perceived best and worst ideas. Taken together, it looks like priority scores are an extremely poor predictor of “impact”, at least as judged by citation counts**.

So what does this all mean? Well, to me it suggests that either A) the peer review grant system at NIH is really rubbish, or B) the system is wasting a lot of time and money trying to discern the indiscernible. I suspect the answer is B (as pointed out in the comments section of the article by a Scott Nelson): these committees at NIH are getting a lot of really good quality proposals, and differentiating between them is an inherently impossible task, resulting in a lot of noise in the system. If all the proposals are roughly equal with respect to the quality of the proposed science, then what gets higher or lower scores is going to depend more on the idiosyncrasies of the reviewers/review panel than on anything particularly objective or obvious (or, to take a more cynical tone, on the name of the PI on the grant application).

If it is the case that NIH study sections are largely trying to discern the indiscernible, then there should be straightforward ways to streamline the entire process. Perhaps, after proposals are deemed to meet some acceptable scientific threshold, a subset of them could be chosen randomly to be funded and another subset chosen by program officers or others at the NIH based on certain strategic priorities; something like the toy sketch below. It seems like that could be a fairer, less expensive, and less time-intensive system that would result in similar outcomes.
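
To make that concrete, here is a toy sketch (entirely my own invention; the quality threshold, strategic fraction, and scoring fields are made-up parameters, not anything NIH has proposed) of what such a two-stage screen-then-lottery process could look like:

    import random

    def fund_proposals(proposals, n_awards, quality_threshold=0.7,
                       strategic_fraction=0.25, seed=None):
        """Toy two-stage scheme: screen for minimum quality, let program
        officers pick a strategic subset, then fill the remaining awards
        by lottery. All parameters are invented for illustration."""
        # Stage 1: keep only proposals judged scientifically sound.
        eligible = [p for p in proposals if p["quality"] >= quality_threshold]

        # Stage 2a: program officers pick a subset for strategic priorities.
        n_strategic = int(n_awards * strategic_fraction)
        strategic = sorted(eligible, key=lambda p: p["strategic_fit"],
                           reverse=True)[:n_strategic]

        # Stage 2b: the remaining awards are drawn at random from the pool.
        pool = [p for p in eligible if p not in strategic]
        rng = random.Random(seed)
        lottery = rng.sample(pool, min(n_awards - n_strategic, len(pool)))
        return strategic + lottery

    # Example: 100 hypothetical proposals competing for 15 awards.
    gen = random.Random(0)
    proposals = [{"id": i, "quality": gen.random(), "strategic_fit": gen.random()}
                 for i in range(100)]
    funded = fund_proposals(proposals, n_awards=15, seed=42)
    print(sorted(p["id"] for p in funded))

The point of the sketch is simply that once the quality screen is passed, the expensive fine-grained ranking disappears; whether a 25% strategic allocation is the right split is exactly the sort of knob that could be tuned.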

Even if my ideas here are a bit off, findings such as these at the very least suggest that we need to check our assumptions about the best and most efficient ways to assess grant applications. A priori, I certainly would have expected a reasonably strong positive correlation between priority scores and citations. I would love to hear how the findings from this work jibe with the anecdotes of any readers out there.

——————————————————————————————

*I’m very much a tyro when it comes to NIH grants, so feel free to take me to task in the comments section if I’m dead wrong on my understanding of the NIH grant peer review game.

**I’m not sold on the idea that citations are a good marker for impact. I think impact is a much more nebulous concept than can be captured in any single metric, or suite of metrics. Given the inherent unpredictability of science, true impact is not apparent when squinting into the bright light of the future, but when taking stock of the past.

I mostly like Google Scholar for altmetrics; it’s quick for checking out any recent citations to my work, and it updates much more rapidly than Web of Science (WoS). The downside (or the upside…), of course, has always been that Google Scholar tends to give higher citation counts than WoS by including things like dissertations, non-English-language publications, books, etc. (basically anything published under an institutional domain, as far as I understand; which apparently can be gamed quite easily, as discussed here). So the absolute numbers from Google Scholar always have to be taken with a grain of salt, but a recent change in the citations to one of my papers suggests that the citation counts should be taken with more like a 1 kg bucket of NaCl.

I have a nice review article I published back in the heyday of my PhD, which has garnered a decent number of citations. However, a couple of weeks ago I noticed that the citation count quickly increased by about 30. I didn’t think the paper had suddenly become a blockbuster, so I did a little snooping. It turns out the boost in citations comes from a book published back in 2010 that has about 30 chapters, one or two of which pertain directly to the topic of my review and probably cite it. What is disturbing, however, is that it looks like every single chapter in this book is being counted as a separate citation to my review article. Are the algorithms for Google Scholar really this poorly designed? It seems a bit ludicrous and really shakes my faith in Google Scholar as a reasonable (if inflated) source of citation counts.
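
The fix seems conceptually simple, too. Here is a minimal sketch of the kind of deduplication I would expect an indexer to do, assuming it can tell which parent container (book, journal) each citing record belongs to; the records and field names here are entirely hypothetical:

    # Hypothetical citing records; "kind" and "container" are made-up fields.
    citing_records = [
        {"title": "Chapter 1", "container": "Some 2010 Book", "kind": "book-chapter"},
        {"title": "Chapter 2", "container": "Some 2010 Book", "kind": "book-chapter"},
        {"title": "Paper A", "container": "J. Something", "kind": "article"},
        {"title": "Paper B", "container": "J. Something", "kind": "article"},
    ]

    def deduplicated_count(records):
        """Count each journal article once, but collapse all chapters
        sharing a parent book into a single citation."""
        seen_books = set()
        count = 0
        for rec in records:
            if rec["kind"] == "book-chapter":
                if rec["container"] in seen_books:
                    continue  # this book has already been counted once
                seen_books.add(rec["container"])
            count += 1
        return count

    print(deduplicated_count(citing_records))  # 3 citations, not 4

Under this scheme my review would have gained one citation from that book, not ~30.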

I’ve emailed Google Scholar to point out the issue, but I doubt it will even be read by a human being, let alone acted upon. I emailed them in the past, when Google Scholar was in beta, to make suggestions and never heard/saw anything…just another small ripple in the salt-filled sea that is Google Scholar…

All scientists should read: “What’s so Special About Science (And How Much Should We Spend on It?)”

Near the end of last year, the president of the AAAS (American Association for the Advancement of Science, publisher of Science magazine) wrote a great piece that every scientist should read. It concerns the role of science, and particularly basic science, in boosting GDP (gross domestic product) and providing the fodder for technological advance. There are a lot of great talking points for dealing with science skeptics here.

The growth in U.S. GDP per capita has been exponential since the late 19th century. Such growth is largely responsible for the high standard of living that those in the US, and other developed countries, have enjoyed over the past half century or so. And what is the primary driver of this growth? It turns out that at least 50%, and up to 85%, is due to technological progress buttressed by basic science, as opposed to natural resources, land, and labor. What’s more, the return on basic science is impressive:

Many institutions, including our universities and retirement funds, accept 5% sustained ROI as a decent return. Yet investments in basic research are variously estimated as ultimately returning between 20% and 60% per year

Wowzers! If that isn’t a number to impress your friends and family, I don’t know what is. I always find it challenging to explain to others why it’s worth doing basic scientific research. I usually point to the fact that green fluorescent protein was discovered in jellyfish and Taq polymerase originally came from a thermophilic bacterium. Both have revolutionized biological science, and both came out of very basic science. But it’s nice to have some sexy numbers to back this claim up!
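
To put those percentages in perspective, here is some quick back-of-the-envelope compounding of my own (my arithmetic, not from the piece): a dollar invested at annual return $r$ grows to $(1+r)^{n}$ after $n$ years, so over 20 years

\[
(1.05)^{20} \approx 2.7, \qquad (1.20)^{20} \approx 38, \qquad (1.60)^{20} \approx 12{,}000.
\]

Even the low end of those basic-research estimates beats the “decent” 5% benchmark by more than an order of magnitude over a couple of decades.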

Another interesting aspect of the piece is the idea of appropriability. I had not really thought about this idea before, but it is essentially the ability of the creator or discoverer of a technology to appropriate it and turn it into something that can be sold in the market. As we move into a more global and digital world, the relationship between where a discovery is made and who benefits from it economically is increasingly blurred.

The piece then ends with a bit that almost brought a tear to my eye:

A 2009 Harris poll (17) asked the public to name the most prestigious occupations. The answers (in order) were firefighter, scientist, doctor, nurse, teacher, and military officer. What struck me immediately when I saw this result is that every one of these, except scientist, is an immediate helper occupation. These people save us from fires, prevent attacks, teach our children, and heal us. By contrast, the value of scientists and the benefit they produce can be very long term. Yet the public perceives scientists as belonging in the same basket of high-prestige helper occupations. This tells us something. Another poll, by Pew (18), finds that the majority of Americans think that government investments in basic scientific research (73%) and engineering and technology (74%) pay off in the long run, with only small differences between Democrats and Republicans. That also tells us something.

When I ask nonscientists, “what is science good for?”, only rarely do I hear the answer that it promotes economic growth. Instead, most answers are about science creating a better world in ways not easily monetizable: enabling longer, healthier lives; protecting the planet; feeding humanity; deterring (or winning) international conflicts; bringing us resiliency to engage future challenges of an uncertain world, with climate change as one example. People also talk about the basic human need for discovery and understanding of the natural world, about the almost-mysterious power of science at engaging the natural idealism of young people, and empowering them to participate in the world in ways that they otherwise would not.

Wow! The day-to-day life of a scientist is hard. Wrangling with data, papers, and grants. Fighting nerves to give a talk in front of peers, or fighting for authorship…it’s a battle, no doubt about it. But knowing that people the world over put us in the same boat as firefighters and nurses…well, that just warms the cockles of my funky heart.

CV Quandary: Include Acknowledgments?


I’m always looking for ways to beef up my CV for eventual applications for faculty or other positions, and lately I’ve been thinking about acknowledgements. In my current lab here in the UK, I’ve brought a skill set that is not typically present given the sort of research that is done. It has proven to be quite useful, though, and I’m always happy to lend a hand to a fellow scientist. The work I do for others is typically not enough to warrant authorship, but I have received several acknowledgements in various papers for helping out. So now that I’ve actually racked up a few of these, I wonder if it is worthwhile to include them in some way on a CV. A sort of “acknowledgements received” section, indicating my willingness to help my fellow pursuers of glory (er, truth). Or does it look desperate? Like I’m reaching too hard for an edge? Hmmmm…any thoughts out there? What would you think if you saw such a section on a CV whilst sitting on a faculty hiring committee?

Recent Nobel laureate Randy Schekman wrote an op-ed piece in The Guardian back in December railing against the “glamour mag” journals Science, Nature, and Cell for damaging science (also discussed here and here). He rightly slams the culture of using journal impact factors to judge paper quality, and I agree wholeheartedly. However, I disagree with what appears to be the crux of his argument: that these luxury journals are artificially limiting the number of papers they publish solely in pursuit of selling more subscriptions:

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called “impact factor” – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

This is where this young postdoc parts ways with the opinions of the decorated Nobel laureate, particularly with respect to Science magazine, which is produced by the non-profit American Association for the Advancement of Science (AAAS). I have been subscribing to the print version of Science since two years into my PhD, and I have continued to receive it here in the UK despite the $160 price tag of an overseas subscription. Why get a subscription to Science as opposed to, say, Nature or Cell? Several reasons:

  1. I get an additional line on my CV because I’m now an AAAS member. When I was first deciding between the two journals, this was the deciding factor for a young graduate student looking for ways to beef up what was, at the time, a very bare-bones CV.
  2. I am supporting the AAAS, an organization that I respect and that advocates for science in the world of US government policy. A subscription to Nature or Cell just lines the coffers of Dutch, British, and German fat cats. Might as well support the boys (and girls) at home fightin’ the good fight!
  3. I supremely enjoy the content. Half of the print Science magazine is not scientific articles but science journalism of various sorts. There are stories, policy debates, op-ed pieces, etc. I typically read this portion of the magazine end-to-end. It’s generally very high quality and an order of magnitude better than Scientific American or New Scientist, magazines I was getting prior to, and during the first years of, graduate school. However, I rarely read the scientific articles unless they are interesting enough, and close enough to my field, that I can understand them. On occasion my sub-field gets a little bit of love in Science, and I definitely read those papers.
  4. Call me old school, but I much prefer a print magazine to reading things on a device. I have commuted to work via public transit for over 8 years now, and I typically just grab the latest Science mag to read to and from work. I’m willing to pay a premium for something new to take with me every week of the year (despite protestations from my spouse, as the issues tend to pile up around the flat…).
  5. Exposure to different fields and ideas. Whilst I have no desire to read the scientific literature on archaeology or snake venoms, I quite enjoy reading well-written science journalism on such topics. In addition, there are always interesting articles about publishing, open access, bibliometrics, etc.

Thus, there is considerable value in the print version of Science magazine. This is not an “artificial” limitation on the number of articles that can be published in Science, as Schekman claims; it is a legitimate one. There is still a significant place for print subscriptions, even in today’s increasingly digital world.

Nonetheless, it does look as if Science is already working to address, in part, the point Schekman makes with respect to the number of papers published. In the past year, Science has added a “Research Article Summary” format alongside the “Research Articles” and “Reports” it publishes. These summaries are a single printed page, essentially an extended abstract with a single figure, plus a web address for accessing the full article. Perhaps this is a format the editors are exploring so that they can indeed publish more papers.

Finally, Schekman charges that Science (and Nature/Cell) tout their impact factors incessantly. I’ve never once seen a mention of Science’s impact factor in the magazine in the 7+ years I’ve been reading it. That’s not to say it isn’t touted in other forums, but it’s definitely not in your face as far as the magazine itself is concerned.

It seems to me that Schekman, editor of the newly created online journal eLife, has his guns trained on the wrong target (along with a clear conflict of interest). The real pushers of glamour mags are funding, tenure, and hiring committees. Whilst I wouldn’t shed a tear if Nature, Cell, and their ilk went the way of the dodo, they are just for-profit organizations trying to make a buck as best they can within the existing culture. They don’t strike me as drivers of the culture; that lies with administrators, scientists, and policy makers.

So keep your hands off my Science mag!

—————————————————-

Disclaimer: I have never published, and probably never will publish, in Science.

Many of us have seen it: the inclusion of impact factors on a CV. I’ve even had a senior PI suggest putting an impact factor on my CV in the event that I publish in a journal that some in my sub-field aren’t as familiar with. I cringed when I was given this suggestion, and I balk at CVs that contain impact factors. We all know how useful impact factors are (well, most of us anyway), and the issue has been discussed ad nauseam in many places (e.g., here). So then what’s a wee postdoc like myself to do to promote how totally and completely awesome my publications are (or not…)?

Well, what about adding citation statistics to a CV for each paper? It would require constant updating (though that could be scripted; see the sketch below), but not too much for a young chap like myself with under 20 pubs. Alternatively, a link to a Google Scholar profile would do the trick as well, I suppose. The idea is not without drawbacks, as citation counts don’t say much about newly published work (article views would, if they were available from all journals as they are from PLoS; views accrue quite quickly and, if I had to guess, are correlated to a degree with later citations). This would certainly draw attention away from where something is published and toward a more “objective” measure of article impact.
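
For the updating itself, here is a minimal sketch of how it could be scripted, assuming the third-party scholarly Python package; the profile ID below is a made-up placeholder, and the field names follow my understanding of that package, so treat this as an illustration rather than gospel:

    # Sketch: pull per-paper citation counts from a Google Scholar profile
    # and print a CV-style publication list. Uses the third-party
    # "scholarly" package; the author ID is a hypothetical placeholder.
    from scholarly import scholarly

    AUTHOR_ID = "AbCdEfGhIjK"  # hypothetical Google Scholar profile ID

    author = scholarly.search_author_id(AUTHOR_ID)
    author = scholarly.fill(author, sections=["publications"])

    for pub in author["publications"]:
        title = pub["bib"].get("title", "untitled")
        year = pub["bib"].get("pub_year", "n.d.")
        cites = pub.get("num_citations", 0)
        print(f"{title} ({year}), cited {cites} times")

Run once before each CV refresh and paste the output into the publications section; hardly an onerous upkeep burden.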

Of course, the other caveat here is that perhaps citations aren’t the best method of judging how good someone’s work is. I don’t disagree with this point, but it’s got to be better than just using the journal name/impact factor, right? At least a step in the right direction…hhhmmmmm…

Of the many European countries I have visited whilst living in the UK, Italy and France have by far had the best food (you can guess which has the worst…). Here are three of my favorite restaurants that I have had the pleasure of dining at when visiting Italy: one in Roma, one in Venice, and one in northern Italy near Aosta. They are all rustic and frequented by locals; great places to get a taste of authentic Italian cuisine away from all the damn American tourists (well, mostly)!

La Vrille (http://www.lavrille.it/) near Aosta, Italy, in the Italian Alps

This is a very welcoming, family-run farmhouse located in the Italian Alps. We recently had the most amazing Xmas dinner here: an 8-course, 3.5-hour meal accompanied by wine made from the local vineyard. The food was sophisticated yet comforting, showcasing a mix of Italian and French cooking styles (as would be expected so close to the French border!). Despite the 8 well-presented courses, the atmosphere was downright homey. To top things off, the chef (who didn’t speak a lick of English!) came out to visit each of the 5 tables at the end of the meal, which was served by her two children. Reservations are a must!

 

Alla Vedova (http://www.in-venice.com/restaurant/ca-doro-alla-vedova/) in Venice, Italy

I first visited this restaurant over 10 years ago on a 5-week backpacking trip through Europe following my college/university graduation. At the time it was not very well known to tourists, and I had my first, and only, artichoke lasagna, which was delicious. I’ve since been back, and it has clearly made it onto the radar of many American tourists. Nonetheless, my squid ink linguine was great, and the service was still friendly (if a bit rushed). Just don’t ask for Parmesan cheese on your seafood dishes!

 

Ristorante La Moretta (Via Monserrato, 158) in Roma, Italy

This is a nice, simple restaurant with decent food, and a welcome break from the craziness of the tourist-overrun areas of Rome. I had a wonderful spaghetti alle vongole (spaghetti with clams) here, but you can also get pizza and other dishes (although I cannot speak to how good they are). The prices aren’t exorbitant either, and the staff is exactly what you’d expect in a traditional Italian restaurant (i.e., relaxed about service…welcome to Europe!).

 

That’s it…great choices for simple yet delicious dining. If you have any other suggestions for great, unpretentious European dining, leave them below…I’m always on the hunt for simplicity elevated…