The Times Higher Education Supplement (THES) last week published an article on the possibility of Facebook, Google and other tech companies offering degrees in the UK. This is not as outlandish as you might expect, because the Department for Business, Innovation & Skills white paper Success as a Knowledge Economy sets out plans to “make it quicker and easier for new high quality challenger institutions to enter the [higher education] market and award their own degrees”. THES cites media reports claiming that companies such as Facebook might like to enter this market, although Facebook declined to comment when asked.
Despite being the world’s biggest social media network, Facebook never really struck me as a platform for education. That doesn’t mean that the company’s founder Mark Zuckerberg hasn’t been thinking about it. I found an article he wrote last year introducing a Facebook side project to develop a Personalized Learning Platform in partnership with Summit Public Schools. Granted, K12 in the US is a world away from HE in the UK, but in terms of an underpinning technology platform? Maybe not so far away. At least one major provider of online courses, Futurelearn, sees that the future of online learning at scale is social.
In a timely related piece last week, the Economist reported on some of the concerns and challenges around regulating big international technology platforms including Google and Facebook. A platform provider typically connects consumers with services, and services with other services. The specific content and services on offer, however, are selected, and so reflect the value proposition of the platform provider. A couple of weeks ago there was a minor flap when it was alleged that Facebook censors its news feed to present a biased political view. It is odd that anyone should be surprised, as some believe that all media are biased, one way or another, even if they claim that they are not.
Could Facebook and others enter the UK HE market, and if they did would it be so bad, or any different to what’s currently on offer in HE? The precedent has already been set, as Pearson PLC, a FTSE 100 company, already offers degrees through Pearson College London. In the words of HM Government’s white paper, “Competition between providers in any market incentivises them to raise their game, offering consumers a greater choice of more innovative and better quality products and services at lower cost. Higher education is no exception.” I guess we’ll just have to wait and see.
I recently attended 1:AM, the first altmetrics conference, and I am still considering what I learnt from the various perspectives presented by publishers, funders, policy makers, librarians, and researchers. One strong impression I came away with is that the use of altmetrics as a proxy indicator of research impact is neither straightforward nor accepted, but that’s not to suggest that anyone thought it would be. Euan Adie, founder of Altmetric.com, one of the main companies exploring altmetrics and how they relate to research impact, summed it up thus:
‘impact’ means different things to a publisher than to a funder, and the end goals for altmetrics in general vary from user to user
For me, the impact of research is as much about reach as it is about the influence or change it brings about. Traditionally we researchers tended to think of other researchers as the only target of our reach, and of course the best way to measure that was citations. But as funders that rely primarily on taxpayers’ money increasingly ask for evidence of “demonstrable contribution that excellent research makes to society and the economy” through their pathways to impact, reaching only an academic audience is insufficient. This has been reinforced by the inclusion of impact case studies in the Research Excellence Framework 2014. This isn’t a bad thing, and as a taxpayer myself I’d quite like to know how my money is being spent. The challenge, of course, in terms of research impact is how to measure it.
Citation metrics are too narrow and ignore context
Journal impact factors can be easily gamed and incorrectly measure the impact of individual articles
In order to counter the slow, unaccountable, misleading, and some might say broken metrics surrounding research, new metrics are required. Altmetrics respond to the sharing of “raw science” like datasets, code, and experimental designs, “nanopublication,” self-publishing via blogging, microblogging, and comments or annotations on existing work. Altmetrics “expand our view of what impact looks like, but also of what’s making the impact.”
The response of the emerging altmetrics services to date has been to quantify some of these metrics, and the now familiar altmetric donut gives us a reassuring score, where presumably the bigger the number the better, and the better the impact. Or does it? A view put forward by many at the 1:AM conference is that, useful as some of these approaches may be, a crude number is little better than what’s on offer from conventional metrics. Surely, it’s the context that matters. But how do you measure context with a number, and what do the numbers mean anyway? Is Twitter any less vulnerable to gaming than journal impact factors? We were repeatedly told at the conference that altmetrics are so much more than social media mentions, yet more often than not the discussion came down to mentions on Twitter. We still have a long way to go, I think, and the jury is still out on the evidence that altmetrics are useful. We shall probably have to wait until early 2015, when HEFCE publishes its independent review of the role of metrics in research assessment, for an official view.
So in the meantime what is the researcher to make of all this? Here is my own short and incomplete list of observations I made attending the 1:AM conference:
Research articles that are well cited often, but not always, have a positive altmetric score.
Research articles that are media friendly, most trivially those with quirky or scatological titles, have great altmetric scores, but not necessarily many academic citations.
The points above only apply to research published in the last 3-4 years. Altmetric numbers don’t tend to be available for research published more than a few years ago.
Currently altmetric numbers don’t tell us much if anything about context.
It is unclear whether actively engaging with social media will increase the impact of some given research.
Nobody yet knows what research impact as measured by altmetrics means.
There’s probably something important about altmetrics, but it’s not yet clear what it is.
To address these open questions, I refer you, gentle reader, back to the altmetrics manifesto:
Researchers must ask if altmetrics really reflect impact, or just empty buzz. Work should correlate between altmetrics and existing measures, predict citations from altmetrics, and compare altmetrics with expert evaluation.
For now though, it’s the word of caution offered by Jeremy Farrar, Director of the Wellcome Trust, who opened the 1:AM conference that struck me most, and will be the main message I take back to our research strategy group. While Farrar has a vision for the Wellcome Trust playing a role in the emerging altmetrics field, he warned the conference not to further burden an already overburdened research community by yet another approach to assessing impact that might destroy the very creativity and innovation that it sets out to measure. I couldn’t agree more. Now, ‘like’ if you agree too.
As part of a current debate on the role of the LMS and the VLE in an agenda of openness, Amber suggests that VLEs can be many things but they are not fundamentally evil:
“VLEs can be used as a platform for fantastic blended and online learning, but even if they are not used to that extent, they are still important.”
The comment I left in response was based upon a consideration that while universities are in the business of education, where students pay a considerable fee to attend a course, there is inevitably going to be a differentiation between what they receive and what someone who doesn’t pay a fee receives. This is actively being played out in many institutions as part of an exploration of pedagogy and platforms for open courses, especially MOOCs, versus fee-based accredited courses. Usually these are different. For example, platforms tend to be more social to support large communities of dispersed learners in a MOOC, and pedagogies tend to favour tutor-based support for fee-based accredited courses compared with peer support in massive open courses.
In exchange for the fee that students pay to attend courses at university, currently £9,000 a year in England, they might reasonably expect a consistent standard of experience across modules in their course. I think institutional VLEs should play an important role in that by providing a minimum module standard of content, support, and activities that students can expect. For some teachers however, that in itself can be a challenge to their practice given competing priorities forced upon most academics. Furthermore, not every teacher is an innovator – should they be? – so it’s inevitable that different teachers are going to provide a different experience, some better than others. Nonetheless minimum standards should be a goal expected by the institution for and on behalf of students. The VLE can certainly help with consistency through templates. But minimum standard is just that, a minimum. The maximum need not be described or prescribed. I’ve yet to see a VLE that stops a teacher from being innovative should they wish to be.
I was at a conference recently that was actively promoting the use of social media including Twitter. Most conferences do these days, it seems. It was a good opportunity to share thoughts and experiences with other participants, and to engage with an audience not attending the conference itself, for example by tweeting using the conference hashtag. Indeed, there were folk back home who appeared to be tracking what they were missing by following conference session tweets, and in some cases there seemed to be meaningful interaction between conference participants and those listening in. This broadens what it means to be a conference participant: you no longer need to be present to join in with the delegates.
I have a confession, however. I felt totally overwhelmed by the volume of information that was flowing through my social media channels, Twitter in particular. It was partly my fault for keeping my devices, an iPad and iPhone as it happens, always open during sessions rather than just listening to what was being presented. But it was also because I totally failed at finding any kind of balance between what was going on at the podium and what was going on online. The volume of stuff being posted was impossible to keep up with, so in the end I didn’t even try. However, that created another problem for me: digital eavesdropping. By not being able to follow everything that was posted, I ended up feeling like an outsider at someone else’s party. I was that person on the periphery of a circle of friends clearly having a good time, but not actually contributing, apart from the occasional comment or interjection that invariably got ignored.
I enjoyed the conference but left feeling that I had actually missed a vital part of it, as others were saying how useful the online engagement was. How did they manage to participate in person and online in any meaningful way? Was I, am I, missing some important new skill for the new extended conference experience, and should I be worried? It’s the last question that’s been troubling me most, and it regresses me to my late teenage years, when I felt a mild form of social anxiety at potentially missing all the best parties. I’m sure that probably tells you more about me than it does about the use of social media at conferences. But I do wonder.
Anyway, I’m left wondering how people manage to integrate the tsunami of tweets (I actually referred to the tsunami of twits in my only conference Twitter contribution). What are your personal strategies for using social media at conferences?
Ten years ago today I wrote a short blog piece to note the landing of the NASA Opportunity rover on the surface of Mars. With the Spirit rover already on Mars, I wrote “this is going to be an exciting next few weeks”. The mission was planned to last around three months. Well, a decade later, Opportunity has outlived Spirit by around 30 months and is still working and generating useful data. During its time on Mars, Opportunity has driven 39 km and taken 187,000 images, including this selfie a few days ago. So sit back and watch some of the highlights of this incredible engineering and science project.