Research Analytics Webinar

Monday, 1 July 2019, 10:00-11:30
Free – booking required

About
The aim of this webinar is to highlight recent results and discuss the progress of our R&D work in the area of research analytics.

The webinar will consist of four short presentations including demonstrations of recent work and will outline future plans for a potential research analytics service. This will be followed by a Q&A session which will offer an opportunity for questions and feedback on the planned service.

Read an overview of Jisc’s work in the area of research analytics on the Jisc scholarly communications blog.

Who should attend?
Research administrators, managers and leaders
Researchers interested in the research process and management
Research librarians
Research funders

Agenda

10:00
Introduction, Chris Keene, Head of library and scholarly futures, Jisc
Overview of Jisc’s R&D work in the area of research analytics.

10:10
Reproducibility Analytics Labs, Adam Green, Senior data and visualisation officer, Jisc
Jisc Analytics Labs is an approach to the development of decision-making tools underpinned by data. This presentation will briefly outline this approach and then focus on the results of the reproducibility lab, which used data from articles on animal-based research to assess the degree to which factors affecting research reproducibility are reported.

10:25
Data availability study, Mike Thelwall, Professor of Data Science, University of Wolverhampton
Primary data collected during a research study are increasingly shared and may be reused for new research. The aim of this project was to assess the extent of sharing of summary statistics from primary human genome-wide association studies (GWAS), as an example of data sharing in favourable circumstances within a particular discipline, and to determine whether such checks can be automated. This presentation will summarise the findings of the project and demonstrate a tool to extract information from data availability statements.

10:40
Prediction market, Jackie Thompson, Research Associate, University of Bristol
The aim of this project was to develop and evaluate a prediction market tool that higher education institutions can use to rank outputs for potential REF submissions as part of their internal REF (Research Excellence Framework) planning. A prediction market is a bit like the stock market, except instead of investing in companies, participants invest in the outcomes of future events (in this case, ratings of research outputs). This presentation will give some background to the project and details of the prediction markets that have been tested with Units of Assessment at the University of Bristol. It will include a demo of the tool used and the lessons learned from the first round of markets.  

10:55
Research analytics service, Rob Johnson, Research Consulting
Jisc’s plans for a potential new research analytics service have started with a discovery phase to help define the problems around research analytics as the starting point for possible solutions. At the end of this phase there will be a brief defining the work required to produce a research analytics service. We have been working with a number of institutions and stakeholders to explore the problems faced by institutional leaders, managers, professionals and academic staff in planning, managing and evaluating research, and where better analytic insight could help address these problems. This presentation will highlight the progress made in defining these problems, what we have learnt, and plans for the next stage of the discovery process.

11:10
Q&A  

Open-Access Monographs and Metrics: More than counting beans

Guest Post by Martin Paul Eve

If one were to create a ranking of terms feared by those working in the humanities, “bibliometrics” would have to be up there. For differing, non-comprehensive citation cultures, accompanied by citation half-lives far longer than those usually seen in the natural sciences, mean that when bibliometrics are used for assessment purposes, they simply don’t work well in the humanities disciplines. When a book takes five years to write, for example, one won’t see a citation network that reflects the current state of a field within the types of timescale that are useful to research funders.

https://bit.ly/2VUoYWL *

Yet, if those in the humanities do not want bibliometrics to be used for assessment, we are all actually already used to using the citation graph in another type of utilitarian exercise: cross-referencing in order to gain an understanding of a field. For example, whenever I need to get my head around a new field of scholarship, I have a tried and tested method. I will usually go to the British Library and order ten or so books that seem to have pertinent titles. I will then begin to cross-reference the bibliographies of these books. In other words, I want to know: what do these titles cite in common? What, exactly, are the key secondary works that are cited by all of these books? It is my gamble that the most-cited items will be good pieces to read in order to rapidly understand a new disciplinary space.

This is a labour-intensive process. It involves my move to a physical space in the first place – our national research library – which on its own has implications for accessibility; as a disabled academic, I am not always in a brilliant state to make my way into a physical library space. This is then followed by a search of the catalogue, a wait for the delivery of the items, and then a laborious process of note-taking, observation and cross-referencing across hundreds of permutations of bibliographic entries.

What if, in the contemporary digital publishing landscape, there were a better way? For many years now, there has been a steady growth in the number of academic books that are published open access; that is, free of price and permission barriers. Free to read and free to re-use. Several thousand of these are listed in the Directory of Open Access Books (DOAB), providing an ever-expanding corpus of high-quality, peer-reviewed monographs that are openly and digitally accessible.

It is with great pleasure, then, that with funding from Jisc’s Open Metrics Lab, the Centre for Technology and Publishing at Birkbeck can today announce our experimental project to build a bibliographic intersect tool for open-access monographs. The project has three components that Jisc is planning to make available for anyone to re-use:

  1. A literature review of existing material on bibliometrics for open-access monographs and bibliographic intersection tools;
  2. A tool that will allow people to download a corpus from the DOAB;
  3. A tool that will parse references from open-access monographs and tell the user which items are cited in common among the selected titles.

As this is a tool that I have wanted for some time in my own capacity as a researcher, it is excellent to have support from Jisc in beginning the development work on this. That said, we have had to impose some limitations. While there are excellent tools like Anystyle.io and Crossref’s citation resolution service – which we intend to use – we are going to have to work with a small subset of citations to begin with. Guaranteeing the universal parsing of arbitrary free-text input from any publisher in any style is well beyond the scope of this experimental exercise.
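To make the intersection step concrete, here is a minimal sketch in Python of how the third component might work once references have been parsed; the function name, data layout and example titles are illustrative assumptions only, not the project’s actual implementation.

# A minimal sketch (illustrative names and data only) of the bibliographic
# intersection step: given the parsed bibliography of each open-access
# monograph, report which references the selected titles cite in common.
from collections import Counter
from typing import Dict, List, Tuple


def common_citations(bibliographies: Dict[str, List[str]], min_books: int = 2) -> List[Tuple[str, int]]:
    """Return (reference, number of books citing it) pairs, most shared first."""
    counts = Counter()
    for references in bibliographies.values():
        # Count each reference once per book, even if it is cited many times within it.
        counts.update(set(references))
    return [(ref, n) for ref, n in counts.most_common() if n >= min_books]


if __name__ == "__main__":
    corpus = {
        "Monograph A": ["Smith 2001, Title X", "Jones 2010, Title Y"],
        "Monograph B": ["Jones 2010, Title Y", "Lee 2015, Title Z"],
        "Monograph C": ["Jones 2010, Title Y", "Smith 2001, Title X"],
    }
    for reference, n_books in common_citations(corpus):
        print(f"Cited by {n_books} of the selected books: {reference}")

The difficult part, of course, is the step this sketch assumes away: normalising arbitrary free-text references so that the same work is recognised across different bibliographies, which is exactly where tools such as Anystyle.io and Crossref’s resolution service come in.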

However, this is an exciting start to show what a citation graph – essentially, metrics for monographs – might achieve within a positive research context for the humanities. Rather than counting beans in order to assess researchers, we are interested in using the quantitative and cumulative weight of citation evidence as a way to accelerate the research process, to help with disability access, and to think through the capabilities of open access for our understanding of new areas.

Martin Paul Eve

About the author: Martin Paul Eve is Professor of Literature, Technology and Publishing at Birkbeck, University of London. He is a founder of the Open Library of Humanities, a member of the UUK Open Access Monographs Working Group, and author of Open Access and the Humanities: Contexts, Controversies and the Future, published openly by Cambridge University Press.

*Featured image: “the future of books” by Johan Larsson, used under the terms of a Creative Commons Attribution license.

Data availability and feasibility of validation

Can we develop an automated way to assess the availability of research data for a collection of journal articles, and to measure the extent to which those data are being made available in a FAIR way?

https://bit.ly/2tiK0OJ *

Data sharing is important for academic research, both for validation of results and for re-use to address new research questions. A growing number of policies encourage data sharing to varying degrees but, in many cases, the implementation of data sharing may be less effective than it appears. Thus, new insights into the pain points faced by researchers in sharing data, and into the needs of readers, could serve as a basis for promoting good practice in data sharing. Can new ways of evaluating the effectiveness of data sharing help to improve practice?

To take an example, many publishers require the author to include a data availability statement in a publication explaining how the relevant data can be accessed. ‘Availability’, however, can be interpreted in different ways, leading to different results in terms of who can access the data and how. Ideally, the data underlying research should be findable, accessible, interoperable, and reusable (FAIR) so that other researchers can locate and reuse the data in a meaningful way.

To help answer this, we are working with researchers from the Universities of Wolverhampton and Bristol to carry out a study exploring how authors share the data associated with their research. We will examine the full text and data availability statements from a collection of articles to assess the availability of the underlying data, and then consider the extent to which the data meet certain quality criteria in terms of format, reusability and so on. The study will also explore the possibility of creating a method or indicator for evaluating research data sharing practice, to help understand what data sharing means in a particular discipline and to support the agenda around recognising data as a valuable output of the research process.

The study will include the following steps:

1. Identify and then assemble a corpus of research articles from a research discipline for which a specific type of research data should be available (in certain disciplines community standards require sharing of a particular data type and have a common standard for reporting data).
2. Assess whether data that were reported to be available (e.g. in a repository) can actually be found there.
3. Consider the means by which the data are shared. For example, are they adequate in terms of format and metadata provision for reuse?
4. Devise an approach for reporting on the above tests in a concise form (i.e. develop an indicator).
5. Investigate the feasibility of scaling up or building a generalizable pipeline for similar analysis in other disciplines.

The study would look to automate steps 2-4 for a given corpus of research articles (with full text available) within the selected research discipline.
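As an illustration, below is a minimal sketch of what automating part of this might look like for a single article; the regular expression, the notion of a link “resolving”, and the crude indicator are assumptions made for the example rather than the method the study will actually use, and assessing format and metadata quality (step 3) would need further, repository-specific checks not attempted here.

# A minimal sketch, under stated assumptions, of automating part of steps 2 and 4
# for one article: pull candidate links out of its data availability statement,
# check whether each one resolves, and summarise the result as a crude indicator.
import re
import requests

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)


def check_availability(statement: str, timeout: int = 10) -> dict:
    """Report how many links in a data availability statement actually resolve."""
    links = [link.rstrip(".,;)") for link in URL_PATTERN.findall(statement)]
    results = {}
    for url in links:
        try:
            # A HEAD request is enough to see whether the landing page responds.
            response = requests.head(url, allow_redirects=True, timeout=timeout)
            results[url] = response.status_code < 400
        except requests.RequestException:
            results[url] = False
    return {
        "links_found": len(links),
        "links_resolving": sum(results.values()),
        "details": results,
    }


if __name__ == "__main__":
    example = ("GWAS summary statistics are available from the GWAS Catalog at "
               "https://www.ebi.ac.uk/gwas/ and on request from the authors.")
    print(check_availability(example))

Scaling this kind of check over the full text of every article in a corpus is, in essence, what step 5 would then explore.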

We decided to focus on genome-wide association studies (GWAS) as the data type. A GWAS is a study of genetic variation across the entire human genome that is designed to identify genetic associations with observable traits (e.g. smoking behaviour) or the presence of a disease or condition. GWAS data are widely reused and there are strong community norms around sharing this type of data. There are also likely to be issues with ‘availability’ and with the format in which the data are shared. Research involving GWAS data is often undertaken by large consortia, which means the data already need to be shared within the research group, making it a smaller step to share them more widely.

Project Team
University of Wolverhampton
– Mike Thelwall, Professor of Data Science
– Kayvan Kousha, Postdoctoral researcher
– Amalia Mas Bleda, Postdoctoral researcher
– Emma Stuart, Postdoctoral researcher
– Meiko Makita, Postdoctoral researcher
– Nushrat Khan, PhD student

University of Bristol
– Marcus Munafò, Professor of Biological Psychology
– Katie Drax, PhD student
Marcus and Katie are also representing the UK Reproducibility Network (@ukrepro)

The project runs from January 2019 to July 2019 and we will share updates on this blog, along with other experiments, as part of the Open Metrics Lab.

*Featured image: “Share” by Carlos Maya, used under the terms of a Creative Commons Attribution license.

Leaving the gold standard

Guest Post by Cameron Neylon

See also a briefing paper written by Cameron Neylon for Jisc on the Complexities of Citation.

Citations, we are told, are the gold standard in assessing the outputs of research. When any new measure or proxy is proposed the first question asked (although it is rarely answered with any rigour) is how this new measure correlates with the “gold standard of citations”. This is actually quite peculiar, not just because it raises the question of why citations came to gain such prominence, but also because the term “gold standard” is not without its own ambiguities.

http://bit.ly/2zPATuI *

The original meaning of “gold standard” referred to economic systems where the value of currency was pegged to that of the metal; either directly through the circulation of gold coins, or indirectly where a government would guarantee notes could be converted to gold at a fixed rate. Such systems failed repeatedly during the late 19th and early 20th centuries. Because they coupled money supply – the total available amount of government credit – to a fixed quantity of bullion in a bank, they were incapable of dealing with large-scale and rapid changes. The Gold Standard was largely dropped in the wake of World War II and totally abandoned by the 1970s.

But in common parlance “gold standard” means something quite different from this fixed point of reference: it refers to the best available. In the medical sciences the term is used to refer to treatments or tests that are currently regarded as the best available. The term itself has been criticised over the years, but it is perhaps more ironic that this notion of “best available” is in direct contradiction to the intent of the currency gold standard – that value is fixed to a single reference point for all time.

So are citations the best available measure, or the one that we should use as the basis for all comparisons? Or neither? For some time they were the only available quantitative measure of the performance of research outputs, the only other quantitative research indicators being naive measures of output productivity. Although records have long been made of journal circulation in libraries – and one-time UK Science Minister David Willetts has often told the story of choosing to read the “most thumbed” issue of journals as a student – these forms of usage data were not collated and published in the same ways as the Science Citation Index. Other measures such as research income, reach, or even efforts to quantify influence or prestige in the community have only been available for analysis relatively recently.

If the primacy of citations is largely a question of history, is there nonetheless a case to be made that citations are in some sense the best basis for evaluation? Is there something special about them? The short answer is no. A large body of theoretical and empirical work has looked at how citation-based measures correlate with other, more subjective, measures of performance. In many cases, at the aggregate level, those correlations or associations are quite good. As a proxy at the level of populations, citation-based indicators can be useful. But while much effort has been expended on seeking theories that connect individual practice to citation-based metrics, there is no basis for the claim that citations are in any way better (or, to be fair, any worse) than a range of other measures we might choose.

Actually, there are good reasons for thinking that no such theory can exist. Paul Wouters, developing ideas also worked on by Henry Small and Blaise Cronin, has carefully investigated the meaning that gets transmitted as authors add references, publishers format them into bibliographies, and indexes collect them to make databases of citations. He makes two important points. First, we should separate the idea of the in-text reference and bibliographic list – the things that authors create – from the citation database entry – the line in a database created by an index provider. His second point is that, once we understand the distinction between these objects, we see clearly how the meaning behind the act of the authors is systematically – and necessarily – stripped out by the process. While theorists may argue about the extent to which authors are seeking to assign credit in the act of referencing, all of that meaning has to be stripped out if we want citation database entries to be objects that we can count. As an aside, the question of whether we should count them, let alone how, does not have an obvious answer.

It can seem like the research enterprise is changing at a bewildering rate. And the attraction of a gold standard, of whatever type, is stability. A constant point of reference, even one that may be a historical accident, has a definite appeal. But that stability is limited and it comes at a price. The Gold Standard helped keep economies stable when the world was a simple and predictable place. But such standards fail catastrophically in two specific cases.

The first failure is when the underlying basis of trade changes: when the places where work is done expand or shift, when new countries come into markets, or when the kinds of value being created change. Under these circumstances the basis of exchange changes and a gold standard can’t keep up. Much like the globalisation of markets and value chains, the global expansion of research and the changing nature of its application and outputs with the advent of the web put any fixed standard of value under pressure.

A second form of crisis is a gold rush. Under normal circumstances a gold standard is supposed to constrain inflation. But when new reserves are discovered and mined, hyperinflation can follow. The continued exponential expansion of scholarly publishing has led to year-on-year inflation of indicators derived from citation databases. Actual work and value become devalued if we continue to cling to the idea of a citation as a constant gold standard against which to compare ourselves.

The idea of a gold standard is ambiguous to start with. In practice, citation-based indicators are just one measure amongst many: neither the best available – whatever that might mean – nor an incontrovertible standard against which to compare every other possible measure. What emerges more than anything else from the work of the past few years on responsible metrics and indicators is the need to evaluate research work in its context.

There is not, and never has been, a “gold standard”. And even if there were, the economics suggests that it would be well past time to abandon it.

A briefing paper written for Jisc by Cameron Neylon – “The Complexities of Citation: How theory can support effective policy and implementation” – is available open access from the Jisc Repository.

Cameron Neylon

About the author: Cameron Neylon is an advocate for open access and Professor of Research Communications at the Centre for Culture and Technology at Curtin University. You can find out more about his work and get in touch with Cameron via his personal page Science in the Open.

*Featured image: “A real bag of gold” by cogdogblog@flickr, used under the terms of a Creative Commons Attribution license.

All citations are created equal

(Only some are more equal than others)

Today we introduce you to one of the Jisc-funded PhD students working at the Knowledge Media Institute (KMi), which is part of the Open University and is located in Milton Keynes. David Pride is one of the team working on the joint Jisc/OU CORE project (COnnecting REpositories), which offers open access to over eight million research papers.

David completed his MSc in Computer Science (with distinction) at the University of Hertfordshire in 2016 before starting his PhD at KMi in February of this year. David’s PhD supervisor is Dr Petr Knoth and his thesis looks at web-scale research analytics for identifying high performance and trends in academic research. In short, this involves using state-of-the-art text and data mining techniques to analyse datasets containing millions of academic papers to attempt to identify highly impactful and influential research.

http://bit.ly/2AilG21 *

At KMi, all PhD students must complete a pilot project study within their first year. For his, David chose to undertake a review of several previous studies that have attempted to automatically categorise citations according to type, sentiment and influence. Current bibliometric methods, from the renowned Journal Impact Factor (JIF) to the h-index for individual authors, treat all citations equally. There is much empirical evidence demonstrating that treating all citations equally in this manner means basic citation counts do not reflect the true picture of how a paper is being used. A piece of research may be highly cited because of its ground-breaking content or because it introduces a new methodology. However, it could also be highly cited because it is a survey paper that provides a rich background to a particular domain. Conversely, a paper may attract citations that refute or disagree with the original work. Whilst most citations are overtly neutral in sentiment, a certain percentage of citations are negative. Yet, currently, all these citations are treated equally.

David’s work is also focused on developing new metrics that can leverage the full content of an academic paper to evaluate its quality, rather than relying on citation counts alone. He therefore continued the work of previous studies in using machine learning and natural language processing tools to automatically classify citations according to type and ‘influence’. Influence itself is an interesting concept and, in this case, refers to how influential the cited paper was on the citing paper: was the citation central to understanding the new work, or was it perfunctory, mentioned only as part of the literature review, for example? If information about how a paper is being cited is available to academics, researchers and reviewers, it provides a much richer insight than basic citation counts currently offer.
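To give a flavour of what such classification involves, below is a minimal sketch of a supervised citation classifier; the hand-picked features, tiny toy dataset and choice of model are illustrative assumptions only, not a description of David’s actual system.

# A minimal sketch of supervised citation classification, assuming each in-text
# citation has already been reduced to a few simple features and given a human
# label. The features, toy data and model choice are illustrative only.
from sklearn.ensemble import RandomForestClassifier

# Each row: [times the work is cited within the paper,
#            cited outside the related-work section (0/1),
#            cited in the methods or results sections (0/1)]
X = [
    [1, 0, 0], [1, 0, 0], [2, 0, 0], [1, 1, 0],   # labelled incidental
    [5, 1, 1], [4, 1, 1], [6, 1, 1], [3, 1, 1],   # labelled influential
]
y = ["incidental"] * 4 + ["influential"] * 4

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# A work cited repeatedly and used in the methods section is predicted influential.
print(model.predict([[4, 1, 1]]))

In practice the features would be drawn from the citation context itself, but even this toy example shows how human-labelled examples are turned into an automatic classifier.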

Building on the work of Valenzuela et al. (2015) and Zhu et al. (2015), David developed a system to classify citations in a paper as either incidental or influential. Despite several difficult steps along the way, the results of the experiments were overall extremely positive, and the resulting short paper was presented at the TPDL (Theory and Practice of Digital Libraries) 2017 conference and published in Springer’s Lecture Notes in Computer Science. A full version of the paper was later accepted to ISSI (International Society for Scientometrics and Informetrics) 2017, where David presented his results to the conference in Wuhan, China.

Moving forward, David intends to address one of the major failings in this domain: the lack of a large-scale, human-annotated dataset of citations to use when training classifiers for this task. It is believed that the results obtained previously can be significantly improved with a larger initial training set. Citation data are unbalanced in nature; negative citations, for example, represent only about 4% of all citations. Training a classifier to accurately identify these citations requires a dataset of sufficient magnitude to contain enough examples of every class. A large-scale reference set containing citations annotated according to type, sentiment and influence would be an extremely valuable asset for researchers working in this domain.
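Two standard ways of compensating for such imbalance while a larger annotated dataset is assembled are sketched below; both are generic techniques offered purely as an illustration, with made-up toy data, and are not necessarily the approach David will take.

# A brief sketch of two common ways to handle class imbalance when training a
# citation classifier; the toy features and labels are invented for illustration.
from collections import Counter

from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

X = [[1, 0], [2, 0], [1, 0], [3, 0], [1, 0], [2, 0], [1, 1], [4, 1]]
y = ["neutral"] * 7 + ["negative"]          # negative citations are rare

# Option 1: weight each class inversely to its frequency during training.
weighted_model = LogisticRegression(class_weight="balanced").fit(X, y)

# Option 2: oversample the minority class before training.
minority = [(features, label) for features, label in zip(X, y) if label == "negative"]
extra = resample(minority, replace=True, n_samples=6, random_state=0)
X_balanced = X + [features for features, _ in extra]
y_balanced = y + [label for _, label in extra]
print(Counter(y_balanced))                  # the classes are now closer in size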

In the coming months, David will also be researching the peer review process and how well it correlates with current methodologies for tracking research excellence. He is currently looking at some interesting data and we’re looking forward to seeing what he produces in 2018!

*Featured image: “measurement” by flui., used under the terms of a Creative Commons Attribution license.