Archive for February, 2014

A provocative piece was recently published in Science concerning the relationship between paylines (i.e., priority scores) for grant submissions to the NIH for bread-and-butter R01s and the resulting impact of publications arising from those grants. Now, I’m no NIH peer review wonk*, but I found some of the results a bit surprising.

In the research discussed, grants funded at NHLBI (National Heart, Lung, and Blood Institute) between 2001 and 2008 were divided into three tiers: better than the 10th percentile (best), 10th to 20th percentile, and 20th to 42nd percentile. And, well, I think the figure in the article says it all.

The data suggest that a grant’s percentile score does not correlate at all with publications per grant or citations per million dollars spent. However, a closer look at some of the original research on which the Science article is based does suggest a weak relationship between grants scored in the top 10th percentile and citations in the first two years post-publication (oddly, this fact was left out of the Science article). Nonetheless, this appears to confirm much of what was discussed in the comments section of a recent post by FunkyDrugMonkey, where priority scores seemed to be all over the place for most investigators, with no obvious relationship between investigators’ self-perceived best and worst ideas. Taken together, it looks like priority scores are an extremely poor predictor of “impact”, at least as judged by citation counts**.
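For concreteness, here is a minimal sketch of the kind of check that underlies claims like these: a rank correlation between percentile scores and citation counts. The data below are entirely synthetic, drawn with no built-in relationship, so this illustrates only the method (and what a null result looks like), not the actual NHLBI analysis:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_grants = 500
# Percentile score for each funded grant (lower = better), 1st to 42nd.
percentile = rng.uniform(1, 42, n_grants)
# Citation counts drawn independently of score, i.e. no real relationship.
citations = rng.lognormal(mean=3.0, sigma=1.0, size=n_grants)

rho, p = spearmanr(percentile, citations)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")  # rho near 0, as expected
```

A rank-based statistic like Spearman’s rho is the natural choice here, since percentile scores are already ranks and citation counts are heavily skewed.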

So what does this all mean? Well, to me it suggests that either A) the peer review grant system at NIH is really rubbish or B) the system is wasting a lot of time and money trying to discern the indiscernible. I suspect the answer is B (as pointed out in the comments section of the article by a Scott Nelson): these committees at NIH are getting a lot of really good quality proposals, and differentiating between them is an inherently impossible task, resulting in a lot of noise in the system. If all the proposals are roughly equal with respect to the quality of the proposed science, then what gets higher or lower scores is going to depend more on the idiosyncrasies of the reviewers and review panel than on anything particularly objective or obvious (or, to take a more cynical tone, on the name of the PI on the grant application).

If it is the case that NIH study sections are largely trying to discern the indiscernible, then there would be straightforward ways to streamline the entire process. Perhaps, after proposals are deemed to meet some acceptable scientific threshold, a subset of them could be chosen randomly for funding and another subset chosen by program officers or others at the NIH based on certain strategic priorities, something like that. It seems like this could be a fairer, less expensive, and less time-intensive system that would produce similar outcomes. A rough sketch of what I mean follows.
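Every name and number in this sketch is hypothetical; it’s just one way a threshold-plus-lottery scheme could be wired up:

```python
import random

def fund_proposals(proposals, threshold, n_lottery, n_strategic, strategic_score):
    """proposals: list of (proposal_id, peer_review_score) tuples.
    strategic_score: a function supplied by program officers that ranks
    proposals by strategic priority rather than by peer review score."""
    # Step 1: keep only proposals that clear the scientific quality bar.
    eligible = [p for p in proposals if p[1] >= threshold]
    # Step 2: fund one subset of the eligible pool purely at random.
    lottery_winners = random.sample(eligible, k=min(n_lottery, len(eligible)))
    remaining = [p for p in eligible if p not in lottery_winners]
    # Step 3: program officers pick the rest by strategic priority.
    strategic_picks = sorted(remaining, key=strategic_score, reverse=True)[:n_strategic]
    return lottery_winners + strategic_picks
```

The point of the lottery step is that once proposals clear the bar, a random draw is no less informative than noisy percentile scores, and it is vastly cheaper to administer.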

Even if my ideas here are a bit off, findings such as these at the very least suggest that we need to check our assumptions about the best and most efficient ways to assess grant applications. A priori, I certainly would have expected a reasonably strong positive correlation between priority scores and citations. I would love to hear how the findings from this work jibe with anecdotes from any readers out there.

——————————————————————————————

*I’m very much a tyro when it comes to NIH grants, so feel free to take me to task in the comments section if I’m dead wrong on my understanding of the NIH grant peer review game.

**I’m not sold on the idea that citations are a good marker for impact. I think impact is a much more elusive concept than can be captured in any single metric or suite of metrics. Given the inherent unpredictability of science, true impact is not apparent when squinting into the bright light of the future, but when taking stock of the past.


I mostly like Google Scholar for altmetrics; it’s quick for checking out any recent citations to my work, and it updates much more rapidly than Web of Science (WoS). The downside (or the upside…), of course, has always been that Google Scholar tends to give higher citation counts than WoS by including things like dissertations, non-English-language publications, books, etc. (basically anything published under an institutional domain, as far as I understand, which apparently can be gamed quite easily, as discussed here). So the absolute numbers from Google Scholar always have to be taken with a grain of salt, but a recent change in the citations to one of my papers suggests that the citation counts should be taken with something more like a 1 kg bucket of NaCl.

I have a nice review article I published back in the heyday of my PhD, which has garnered a decent number of citations. However, a couple of weeks ago I noticed that the citation count quickly increased by about 30. I didn’t think the paper had all of a sudden become a blockbuster, so I did a little snooping. It turns out the boost in citations came from a book published back in 2010 that has several chapters (about 30), one or two of which pertain directly to the topic of my review and probably cited it. What is disturbing, however, is that it looks like every single chapter in this book is counted as a separate citation to my review article. Are the algorithms for Google Scholar really this poorly designed? It seems a bit ludicrous and really shakes my faith in Google Scholar as a reasonable (if inflated) source of citation counts.
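For what it’s worth, this kind of double counting doesn’t seem hard to collapse. Here is a hypothetical sketch; the field names and the “book-chapter” type are my inventions (Google Scholar’s internal schema isn’t public), but the idea is simply to count a book’s chapters once:

```python
def dedupe_citations(citing_items):
    """citing_items: list of dicts with (hypothetical) 'type',
    'container', and 'year' keys describing each citing work."""
    seen_books = set()
    unique = []
    for item in citing_items:
        if item.get("type") == "book-chapter":
            # Chapters of the same book share a container title and year;
            # keep the first chapter encountered and skip the rest.
            key = (item.get("container"), item.get("year"))
            if key in seen_books:
                continue
            seen_books.add(key)
        unique.append(item)
    return unique
```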

I’ve emailed Google Scholar to point out the issue, but I doubt it will even be read by a human being, let alone considered. I’ve emailed them in the past, back when Google Scholar was in beta, to make suggestions and never heard or saw anything…just another small ripple in the salt-filled sea that is Google Scholar…

All scientists should read: “What’s so Special About Science (And How Much Should We Spend on It?)”

Near the end of last year, the president of the AAAS (American Association for the Advancement of Science, publisher of Science magazine) wrote a great piece that every scientist should read. It concerns the role of science, and particularly basic science, in boosting GDP (gross domestic product) and providing the fodder for technological advance. There are a lot of great talking points here for dealing with science skeptics.

The growth in U.S. GDP per capita has been exponential since the late 19th century. Such growth is largely responsible for the high standard of living those in the US, and other developed countries, have enjoyed over the past half century or so. And what is the primary driver of this growth? It turns out that at least 50%, and up to 85%, is due to technological progress buttressed by basic science, as opposed to natural resources, land, and labor. What’s more, the return on basic science is impressive:

Many institutions, including our universities and retirement funds, accept 5% sustained ROI as a decent return. Yet investments in basic research are variously estimated as ultimately returning between 20% and 60% per year

Wowzers! If that isn’t a number to impress your friends and family, I don’t know what is. I know I always find it challenging to explain to others why it’s worth doing basic scientific research. I usually point them to the fact that green fluorescent protein was discovered in jellyfish and Taq polymerase originally came from a thermophilic bacterium. Both have revolutionized biological science, and both came out of very basic science. But it’s nice to have some sexy numbers to back this claim up!
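To make those percentages concrete, here’s some quick back-of-the-envelope compounding; the principal and time horizon are mine, purely for illustration:

```python
# Compound $1M for 30 years at a 5% endowment-style return versus the
# 20-60% range quoted above for basic research. Illustrative only.
principal = 1_000_000
years = 30
for rate in (0.05, 0.20, 0.60):
    value = principal * (1 + rate) ** years
    print(f"{rate:.0%} over {years} years: ${value:,.0f}")
# 5% -> ~$4.3M; 20% -> ~$237M; 60% -> ~$1.3 trillion
```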

Another interesting aspect of the piece is the idea of appropriability. I had not really thought about this idea before, but it is essentially the ability of the creator or discoverer of a technology to appropriate it and turn it into something that can be sold in the market. As we move into a more global and digital world, the relationship between where a discovery is made and who benefits from it economically is increasingly blurred.

The piece then ends with a bit that almost brought a tear to my eye:

A 2009 Harris poll (17) asked the public to name the most prestigious occupations. The answers (in order) were firefighter, scientist, doctor, nurse, teacher, and military officer. What struck me immediately when I saw this result is that every one of these, except scientist, is an immediate helper occupation. These people save us from fires, prevent attacks, teach our children, and heal us. By contrast, the value of scientists and the benefit they produce can be very long term. Yet the public perceives scientists as belonging in the same basket of high-prestige helper occupations. This tells us something. Another poll, by Pew (18), finds that the majority of Americans think that government investments in basic scientific research (73%) and engineering and technology (74%) pay off in the long run, with only small differences between Democrats and Republicans. That also tells us something.

When I ask nonscientists, “what is science good for?”, only rarely do I hear the answer that it promotes economic growth. Instead, most answers are about science creating a better world in ways not easily monetizable: enabling longer, healthier lives; protecting the planet; feeding humanity; deterring (or winning) international conflicts; bringing us resiliency to engage future challenges of an uncertain world, with climate change as one example. People also talk about the basic human need for discovery and understanding of the natural world, about the almost-mysterious power of science at engaging the natural idealism of young people, and empowering them to participate in the world in ways that they otherwise would not.

Wow! The day-to-day life of a scientist is hard. Wrangling with data, papers, and grants. Fighting nerves to give a talk in front of peers, or fighting for authorship…it’s a battle, no doubt about it. But knowing that people the world over put us in the same boat as firefighters and nurses…well, that just warms the cockles of my funky heart.

CV Quandary: Include Acknowledgments?

Posted: February 6, 2014 in Academia

I’m always looking for ways to beef up my CV for eventual applications for faculty or other positions, and I’ve been thinking about acknowledgements. To my current lab here in the UK, I’ve brought a skill set that is atypical for the sort of research done here. However, it has proven to be quite useful, and I’m always happy to lend a hand to a fellow scientist. The work I do for others is usually not enough to warrant authorship, but I have received several acknowledgements in various papers for helping out. So now that I’ve actually racked up a few of these, I wonder whether it is worthwhile to include them in some way on a CV. A sort of “acknowledgements received” section, indicating my willingness to help my fellow pursuers of glory, er, truth. Or does it look desperate? Like I’m reaching too hard for an edge? Hmmmm…any thoughts out there? What would you think if you saw such a section on a CV whilst on a faculty hiring committee?