I recently attended 1:AM, the first altmetrics conference, and I am still considering what I learnt from the various perspectives presented by publishers, funders, policy makers, librarians, and researchers. One strong impression I came away with is that the use of altmetrics as a proxy indicator of research impact is neither straightforward nor widely accepted, though that’s not to suggest anyone thought it would be. Euan Adie, founder of Altmetric.com, one of the main companies exploring altmetrics and how they relate to research impact, summed it up thus:

‘impact’ means different things to a publisher than to a funder, and the end goals for altmetrics in general vary from user to user

For me, the impact of research is as much about reach as it is about the influence or change it brings about. Traditionally, we researchers tended to think only of other researchers as the target of our reach, and of course the best way to measure that was citations. But as funders that rely primarily on taxpayers’ money increasingly ask for evidence of the “demonstrable contribution that excellent research makes to society and the economy” through their pathways to impact, reaching an academic audience alone is insufficient. This has been reinforced by the inclusion of impact case studies in the Research Excellence Framework 2014. This isn’t a bad thing, and as a taxpayer myself I’d quite like to know how my money is being spent. The challenge, of course, is how to measure that impact.

The altmetrics manifesto, written back in 2010, makes three bold assertions:

  • Peer review is unaccountable
  • Citation metrics are too narrow and ignore context
  • Journal impact factors can be easily gamed and incorrectly measure the impact of individual articles

To counter the slow, unaccountable, misleading, and, some might say, broken metrics surrounding research, new metrics are required. Altmetrics respond to the sharing of “raw science” like datasets, code, and experimental designs; “nanopublication”; self-publishing via blogging; microblogging; and comments or annotations on existing work. Altmetrics “expand our view of what impact looks like, but also of what’s making the impact.”

The response of the emerging altmetrics services to date has been to quantify some of these signals, and the now familiar altmetric donut gives us a reassuring score, where presumably the bigger the number, the better the impact. Or does it? A view put forward by many at the 1:AM conference is that, useful as some of these approaches may be, a crude number is little better than what’s on offer from conventional metrics. Surely it’s the context that matters. But how do you measure context with a number, and what do the numbers mean anyway? Is Twitter any less vulnerable to gaming than journal impact factors? We were repeatedly told at the conference that altmetrics are so much more than social media mentions, yet more often than not the discussion came down to mentions on Twitter. We still have a long way to go, I think, and the jury is still out on the evidence that altmetrics are useful. We shall probably have to wait until early 2015, when HEFCE publishes its independent review of the role of metrics in research assessment, for an official view.

So, in the meantime, what is the researcher to make of all this? Here is my own short and incomplete list of observations from attending the 1:AM conference:

  1. Research articles that are well cited often, but not always, have a positive altmetric number.
  2. Research articles that are media friendly (most trivially, those with quirky or scatological titles) have great altmetric scores, but not necessarily many academic citations.
  3. The points above only apply to research published in the last 3-4 years. Altmetric numbers don’t tend to be available for research published more than a few years ago.
  4. Currently, altmetric numbers don’t tell us much, if anything, about context.
  5. It is unclear whether actively engaging with social media will increase the impact of a given piece of research.
  6. Nobody yet knows what research impact, as measured by altmetrics, means.
  7. There’s probably something important about altmetrics, but it’s not yet clear what it is.

To address these open questions, I refer you, gentle reader, back to the altmetrics manifesto:

Researchers must ask if altmetrics really reflect impact, or just empty buzz. Work should correlate between altmetrics and existing measures, predict citations from altmetrics, and compare altmetrics with expert evaluation.

For now, though, it’s the word of caution offered by Jeremy Farrar, Director of the Wellcome Trust, who opened the 1:AM conference, that struck me most, and it will be the main message I take back to our research strategy group. While Farrar has a vision for the Wellcome Trust playing a role in the emerging altmetrics field, he warned the conference not to further burden an already overburdened research community with yet another approach to assessing impact, one that might destroy the very creativity and innovation it sets out to measure. I couldn’t agree more. Now, ‘like’ if you agree too.