Christopher Douce

Bibliometrics, Altmetrics & DORA

On 2 October I attended another of the OU’s professional development events. This time, it was an event organised by the OU library. Facilitated by Chris Biggs, Research Support Librarian, the session aimed to outline three things: “common bibliometrics, and their use and misuse”, “what are Altmetrics and where they can be found”, and the “Declaration on Research Assessment (DORA)”, along with the “responsible use of metrics”.

I was particularly interested in this session since I’m a co-editor of an international journal called Open Learning. Bibliometrics are sometimes discussed during the annual editorial meetings between the editors, the members of the editorial board, and the publisher, which is Taylor and Francis. During these meetings, various numbers are shared and summarised.

When I saw the title of the event, my main thought was: “I should go along; I might pick up on a point or two”. What follows is a set of notes that I made during the session, along with some of the useful weblinks that were shared. One thing that I should add is that the structure of these notes comes from the facilitator, Chris, and his presentation. Towards the end of these notes, I share a set of reflections.

Citations

I missed the first couple of minutes, joining just at the point when Chris was talking about the ‘social dimensions’ of citations. Citations are all about giving credit. A useful link to look at is the page Citing Sources: What are citations and why should I use them?

One view is that the more citations an article has, the more popular it is. Subsequently, some might associate popularity with quality. An interesting paper that was referenced had the title Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories.

Returning to the notion of the social dimension of citations, one figure I noted down was that self-citations account for 12% of citations. A self-citation is where an author references their own earlier work. Whilst this can be used to guide readers to earlier research, it can also be used to increase the visibility of one’s own research.

A concept that I wasn’t familiar with but immediately understood was the notion of a citation circle or cartel. Simply put, this is a group of authors, typically working in a similar field, who regularly reference each other. This may have the effect of increasing the visibility of that group of authors. Chris shared a link to an interesting article about the notion: The Emergence of a Citation Cartel

A further notion that I hadn’t officially heard of, but was implicitly familiar with, was the honorary citation. This is where an author might cite the work of a journal editor to theoretically increase the chances of their paper being accepted. As an editor, I have seen that occasionally, but not very often. On a related point, the publisher, Taylor and Francis, has published some very clear ethical guidelines that editors are required to adhere to.

Something else that I hadn’t heard of is the Matthew effect. This means that if something is already widely cited, it will continue to attract further citations, perhaps to the detriment of other articles. Again, we were directed to an interesting article: The Matthew Effect in Science.

It was mentioned that there are interesting differences between academic disciplines. The pace and regularity of citations in the arts and humanities can be much lower than in, say, a busy area of scientific research. It was also mentioned that there are differences between types of articles. For example, reviews are cited more than original research articles, and methods papers are some of the most cited papers. (It was at this point that I wondered whether there were many articles that carried out reviews of methodologies.)

An interesting reflection is that articles that are considered to have societal benefit are not generally picked up by bibliometrics. This immediately reminded me of how funders require researchers to develop what is known as an impact plan. I then remembered that the STEM faculty has a couple of impact managers who are able to provide practical advice on how researchers can demonstrate the impact and the benefits of the research that they carry out.

All these points and suggestions lead to one compelling conclusion: the number of citations cannot be directly considered to be a measure of the quality of an article.

An important element in all this is, of course, the peer review process. Some important points were made: peer review can be slow, expensive, inconsistent, and prone to bias. As an editor, I recognise each of these points. One of the most frustrating elements of the peer review process is finding experienced and willing reviewers. During this session, I shared an important point: if an author holds views that are incompatible with, or different to, those of the reviewers, it is okay to have a discussion with an editor. Editors are people, and we’re often happy to chat.

Bibliometrics

There are a few different sources of bibliometrics: Scopus, Web of Science, Dimensions, CrossRef and Google Scholar. Scopus and Web of Science offer limited coverage for social sciences and humanities subjects. In contrast, Google Scholar picks up everything, including resources that may not really be academic articles. A link to the following blog post was shared: Google Scholar, Web of Science, and Scopus: Which is best for me?

There are, of course, different types of metrics. Following on from the earlier section where citations were mentioned, there are the notions of normalised citations, percentiles, and field citation ratios. Normalised citations (if I’ve understood this correctly) measure the extent to which an article is cited over a time period, relative to comparable articles. Percentiles relate to how popular, or widely cited, an article is. There is, of course, a very long tail of publications: publications that appear within, say, the top 1% or top 5% are highly popular. Finally, the field citation ratio relates to how well cited an article is relative to other articles within its particular field of research.
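
To make the idea of normalisation a little more concrete, here is a minimal sketch in Python. This is my own illustration rather than something from the session, and all of the figures in it are invented:

```python
# A field-normalised citation score: an article's citation count divided
# by the average count for comparable articles (same field, same year).
# A value of 1.0 means "cited about as often as the field average".

def normalised_citations(article_citations: int, field_average: float) -> float:
    """Return the article's citations relative to the field/year average."""
    if field_average <= 0:
        raise ValueError("field average must be positive")
    return article_citations / field_average

# An invented example: a 2018 article with 30 citations, in a field where
# comparable 2018 articles attract 12 citations on average.
print(normalised_citations(30, 12.0))  # 2.5, i.e. 2.5x the field average
```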

There is also something called the h-index, which relates both to the number of publications made by a researcher and to how often those publications are cited. During Chris’ presentation, I made the following notes: the h-index favours people who have a consistent publication record, such as established academics. For example, an h-index of 19 means that a researcher has 19 papers that have each been cited at least 19 times.
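
Since the rule behind the h-index is so simple, it can be expressed as a short calculation. The following sketch is my own illustration of the rule described above, using invented citation counts:

```python
# The h-index rule: the largest number h such that the researcher has
# h papers with at least h citations each.

def h_index(citations: list[int]) -> int:
    """Compute the h-index from a list of per-paper citation counts."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Invented citation counts for seven papers:
print(h_index([25, 19, 19, 12, 8, 4, 1]))  # 5: five papers cited at least 5 times
```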

Moving beyond metrics that relate to individual researchers, there is also something called the journal impact factor (JIF). Broadly speaking, the more popular or influential the journal, the higher its impact factor. This has the potential to influence researchers when making decisions about how and where to publish their research findings. It was mentioned that there are two versions of a JIF: a metric that includes self-citations, and another that doesn’t.
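
For what it’s worth, the classic two-year JIF is just a ratio: citations received in one year to items the journal published in the previous two years, divided by the number of citable items from those two years. A minimal sketch, using invented figures:

```python
# The classic two-year journal impact factor: citations received in year Y
# to items published in years Y-1 and Y-2, divided by the number of
# citable items published in those two years.

def impact_factor(citations_to_recent_items: int, citable_items: int) -> float:
    """Return citations per citable item over the two-year window."""
    return citations_to_recent_items / citable_items

# An invented example: a journal published 150 citable items across 2022
# and 2023, and those items were cited 450 times during 2024.
print(impact_factor(450, 150))  # 3.0
```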

Metrics that relate to official academic journals aren’t the whole story. Outside of ‘journal world’ (which you can access through your institutional library) there is an array of ever-changing social media platforms. Subsequently, there are a number of alternatives to citation-based bibliometrics. Altmetric, from Digital Science, creates something that is called an attention score, which consolidates different ‘mentions’ of research across different platforms. It can only do this if there is a reference to a persistent digital object identifier, a DOI.
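
As an aside, attention scores can be looked up programmatically by DOI. The sketch below is my own illustration; the endpoint (api.altmetric.com/v1/doi/...) and the “score” field are assumptions based on my reading of Altmetric’s public API, so check the current documentation before relying on it:

```python
# A sketch of looking up an Altmetric attention score by a DOI. The
# endpoint and the "score" field are assumptions, not from the session.

import json
import urllib.error
import urllib.request

def attention_score(doi: str):
    """Return the attention score for a DOI, or None if it isn't tracked."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
    except urllib.error.HTTPError:
        return None  # a 404 means no mentions have been recorded
    return data.get("score")

# A made-up DOI, purely for illustration:
print(attention_score("10.1234/example-doi"))
```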

Previews of Altmetric data can be seen through articles that are published on ORO, the university’s research repository. Although I’m risking an accusation of self-citation here, an interesting example is the following excellent paper: Mental health in distance learning: a taxonomy of barriers and enablers to student mental wellbeing. Scrolling to the bottom of the page reveals a summary of tweets and citations: metrics from both Altmetric and Dimensions.

There are a couple of other alternative metrics that were mentioned: PlumX, which is from Elsevier, and Overton, which relates to the extent to which research may be influencing policy.

Responsible metrics, the OU and DORA

Towards the end of the event, DORA, the Declaration on Research Assessment, was introduced; the OU is a signatory. One of the most salient points from the DORA website is this: “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”. There is also a case study that relates to the OU, which can be found on the DORA website.

This final and important part of the session, which had the title “The Idea of Responsible Metrics”, was covered quite briefly. The topic of DORA is something that I really do need to look at in a bit more detail.

Reflections

I learnt a lot from this session. One thing that really grabbed my attention was the h-index. As soon as the session had finished, I asked myself a question: what is my h-index? It didn’t take too long to find it out.

After finding my h-index figure, I made a mistake: I wondered what the h-indexes of some of my immediate colleagues were. I found this unnecessarily and seductively interesting. I looked up h-index scores for professors, some fellow senior lecturers, and some newly recruited members of staff. Through various links, I could see who had collaborated with whom. I could also see which of the professors were really high-ranking professors.

I then stopped my searching. I asked myself another question, which was: “do any of these numbers really matter?”

It takes different kinds of people to run an academic department and a university. Some colleagues are excellent at research. Some colleagues are excellent at teaching, and some colleagues are even excellent at administration. In my own role as a staff tutor, I do a lot of academic administration and quite a bit of work which could be viewed as teaching. This means that I don’t have a lot of time to do any research. Broadly speaking, central academic staff have a much higher h-index metric than staff tutors, simply because they have more time. What research I do carry out is often applied research. This research can sometimes be labelled as scholarship, which can be considered to be research about the practice of teaching and learning.

It was interesting that one of the important points I took away was that societal impact can’t be directly measured through bibliometrics. I also found it interesting that different types of articles attract a greater number of citations. One of my biggest academic hits (I don’t have very many of them) has been a review paper, where I studied different ways in which a computer could be used to assess the quality of computer programming assessments. The articles that I have published that relate to pure research have certainly attracted less attention.

All this comes back to a broader question: in academia, what is valued? I think the answer is: different types of work are valued. Pure research is valued alongside effective and engaging teaching. The bit that ties the two together is, of course, scholarship.

Bibliometrics are, in my eyes, a set of measurements that attempt to quantify academic debate. They only ever tell a part of a bigger and much more complicated story. Metrics are not, in my opinion, surrogates for quality. Also, academic fashions and trends come and go.

Acknowledgements

Many thanks are given to Chris Biggs for running such an engaging and interesting session. Many of the links in this article were shared during Chris' presentation.

Christopher Douce

Reviewing an academic paper for Open Learning

One of the tasks I have to do pretty regularly is to review academic papers for a journal called Open Learning, which I help to co-edit. 

This blog post is intended as a summary of my own thoughts about how I approach reviewing. This post may be useful for reviewers who are new to reviewing papers for journals not too dissimilar to Open Learning.

This post is split into three parts. The first part is about how I approach the reading (and interrogating) of a journal submission. In this first bit there are some shortcuts that I tend to apply to get a feel for a paper.

The second part is about how I approach the offering of feedback to authors. The overall aim is, of course, to try to help the author of a submission to write a better paper. For this part, I should acknowledge some of the ideas of Simon Bell, a former editor of Open Learning, who put in place a really nice framework.

The final part offers an ethical perspective. This is discussed in three different ways: the ethical responsibilities of the reviewer, the ethical responsibilities of an author, and the ethical perspective that must be presented through a paper.

The blog post concludes by sharing some additional resources and sharing some further reflections about the role of the reviewer.

Before beginning, an important question to ask is: why should I review? There are a few answers to this. One reason is that it gives you insight into the peer review process. It also enables you to catch sight of the kinds of papers and research that relate to a field or discipline. Also, in some respects, academics serve the discipline that they study and teach; reviewing for journals can be thought of as an extension of that service. Another reason is that the practice of reviewing and writing reviews develops your critical perspective. Finally, reviewing is a way to gain academic kudos and experience. If you review for a journal, this is something that you can add to your academic CV.

Reading an academic paper

One of the first things I try to do is to get a feel for the paper as a whole. 

Getting a feel for the paper

A key question to have in mind is: what kind of paper is this?

I begin with the title, then the abstract, then the introduction, and then I immediately go to the references section. My justification for this is: if I recognise some of the references, then I may be able to get a quick (and rough) understanding of the type of research that is being presented. If I don’t recognise any of the papers, then I’ll clearly have to work a lot harder than I would if some of the papers were familiar to me.

Looking at references

Whilst I’m in the references section, I look to see whether a paper has referenced any other articles from the journal that it has been submitted to. If it hasn’t referenced any papers from within the journal, this makes me ask myself the question: is this paper appropriate for the journal?

There are two reasons why references from within the journal are important. Firstly, referencing from within the journal shows that the research is placed amongst and next to existing research. This means that it is likely to be following, and connected to, existing debates and topics. Secondly, referencing popular papers from within the journal you are submitting to is a good strategy; it enables your work to be more easily discovered by researchers. The reason for this is that many journals allow researchers to follow links between different papers.

Gaining a critical perspective

The next thing I would do is have a quick look through all the different sections. There are always some key headings that I look for: a section that describes methods, a results section, a discussion section, and then a conclusion. If any of these are missing, I would certainly be giving the paper a closer look, and asking why the article wasn’t using these headings.

Checking out the detail

When looking through all the different sections, I would also keep an eye out for any figures or graphs. I would typically ask myself a couple of things. One question would be whether there were any figures or images that were presented in colour. The reason for this is simple: printed versions of the journal are still (currently) important. Although it is unlikely that a researcher might handle a physical copy of an issue in a university library, they may well download a PDF and print a copy out. Secondly, if there were graphs, I would check to see if the axes and titles made sense.

When I’m through with looking at these aspects, I might jump from the introduction to the conclusion. Is there a consistent message between the two sections? Doing this should (ideally) give me a good feel for what the paper is all about.  

The next bit is to find whether there is a clear description of the research questions, before heading on to the methods section. A key question to ask is: “does the approach make any sense?” Another question to ask is: “is there sufficient detail to enable me to form a view of the methods?”

I must confess to being more confident with assessing qualitative papers than I am with quantitative papers. If I feel that I’m not able to make sense of a paper, or feel that I don’t have the appropriate expertise to make a judgement or a proper academic assessment of a submission, I tell the editor to make them aware of this. This is something that I pick up on later in the ethics section.

Commenting on a paper

When my former colleague Simon Bell started as a co-editor of Open Learning, he requested that all reviewers should be sent some guidelines.

A version of his guidelines has also been published in the journal Systemic Practice and Action Research (Bell and Flood, 2011). Essentially, they are a set of constructive directives that are intended to create what we called “the spirit of reviewing”.

For the sake of brevity, I summarise (and paraphrase) the directives (or guidelines) as follows:

  • Always be honest but temper honesty with kindness. Ask the question: “How would I feel if I received this review?”
  • Be constructive. Articles have been developed over time and should be read with sympathy and honour. 
  • Be fair. Always comment on what I liked as well as what parts of a paper I might have had problems with.
  • Be humble and say when I do not understand something; do not present myself as a world authority on a subject. 
  • Consider myself as a co-worker who is trying to contribute to a wiser and more exciting script. 
  • To help both the editor and the author, indicate if I like the text, say whether I would publish it, and highlight what changes could be made to make the text more enjoyable and if I think the author needs to “adapt/change/re-assess the text in some more challenging manner”.

Even before I had been introduced to Simon’s guidelines, I had implicitly devised a way of providing my own feedback to authors. The approach that I take may be familiar to colleagues:

  • Highlight what you think is good about the paper and acknowledge the work that has gone into producing it.
  • Highlight areas that you might have concerns with. Explain what could be improved, giving a suggestion about how it might be improved, and describe why these improvements are important.
  • Ask whether other papers may be useful for the researcher; offer them help and pointers where you think it is appropriate.
  • Be practical; if you feel that there is a lot wrong with the paper, highlight only three points. A thought is to say something about the content, say something about the structure, and say something about how it fits in with the discipline or the journal; this will help the editor too.
  • End on a positive note. 

An ethical perspective

It is really important that reviewers carry out reviews in an ethical way. 

There are three different perspectives that need to be kept in mind: the ethical practice of the reviewer, the ethical practice of the author of an article, and the ethical practices that are followed within an article. Each of these perspectives is covered in turn.

The reviewer

Reviewers are in a position of power and privilege; their comments can influence whether an article is published. Reviewers must bear in mind the following perspectives:

Impartiality: Reviewers should be impartial. This means that they should be aware of potential biases they may have about any aspect of a submission. A reviewer should not have a personal connection to the work that is being reviewed.

Expertise: Reviewers should be confident in their assessment of a paper. If they lack sufficient knowledge or expertise to make a judgement, they should either state this in the review, or let an editor know that they do not have sufficient expertise to carry out a review.

Integrity: Through their articles, authors may share new ideas. These ideas are being shared, in confidence, with reviewers. Any interesting and novel research directions that are suggested through an article should remain entirely with the author of an article. Reviewers should not directly draw on or build upon the work of papers that they review.

The author

Authors of papers must not use text, data or images from unattributed sources. Quotations that are used within a paper must be correctly presented; the author of the source should be mentioned, along with accompanying page numbers. If sources are not referenced correctly and fully, authors run the risk of being accused of plagiarism, a term that can be applied not only to intentional copying, but also to inadvertent copying.

Authors should also be mindful of a concept known as self-plagiarism. This is where an author of an article might make use of their own words which might have been used (and published) in other articles. This can occur, for example, if an author writes a paper that describes their doctoral research. The words they use within a doctoral thesis must be substantially different to the words used within an academic article. The exception to this is when authors deliberately quote their earlier work and earlier publications.

One of the ethical responsibilities of a reviewer is to make a confident judgement that an author’s submission is their own, and to the best of their knowledge, isn’t using the words of other researchers. Reviewers should also let an editor know if they find that a very similar version of a submission has been published in another journal.

In many cases, the journal editor and publisher will also be able to make use of specialist tools to carry out further checks to ensure the originality of submissions.

The research

Since Open Learning publishes education research, many research articles make use of human participants. Whenever human participants are used, reviewers must ensure that there is sufficient evidence within a paper to suggest that the research was carried out in an ethical way.

Simply put, participants must be clearly told about the aims of the research they are a part of, and they should be free to leave, or to decline participation in a study at any point. This view should be clearly presented within a section that describes research methods or procedures. Reviewers should feel free to provide comments if they find that a researcher has not provided sufficient description to enable them to decide whether an ethical approach to research has been adopted.

Research ethics is a subject in its own right, and one that has a rich history. Whilst the journal is not expecting reviewers to be experts in all aspects of research ethics, there is an expectation that reviewers always ask the question: “has this research been carried out in an ethical way?” More information about research ethics can be found through the British Educational Research Association Ethical Guidelines for Educational Research (BERA website).

Additional resources

The publisher of Open Learning has provided a set of reviewer guidelines (Taylor and Francis website) which may also be useful.

Taylor and Francis have also published some information about their editorial policies and plagiarism (Taylor and Francis website) which may be helpful for both authors and reviewers.

Reflections

Reviewing can be an interesting and rewarding process. Although I spend most of my time reviewing papers for Open Learning, I have also reviewed papers for conferences, workshops and other journals. One of the benefits of reviewing is that it helps to maintain a connection with a discipline. It is rewarding to see how authors respond to comments, and how reviewers can directly (but implicitly) contribute to the continuing professional development of fellow academics and researchers. I would also emphasise that it isn’t necessarily easy; sometimes there are papers that are difficult to review, and the accompanying comments can be difficult to write.

To conclude, here is a concise summary of what I perceive to be the benefits of being a reviewer:

  • It helps to maintain a link with a discipline.
  • It provides a way to give academic service to a discipline, which can be highlighted on an academic CV or resume.
  • It helps to develop skills to critically assess academic writing.
  • The process of providing feedback helps to develop (and maintain) critical writing skills.
  • It helps to further understand the peer review process.
