Christopher Douce

Bibliometrics, Altmetrics & DORA

Visible to anyone in the world

On 2 October I attended another of the OU’s professional development events. This time, it was an event organised by the OU library. Facilitated by Chris Biggs, Research Support Librarian, the session aimed to outline three things: “common bibliometrics, and their use and misuse”, “what are Altmetrics and where they can be found” and DORA, which is an abbreviation for the “Declaration on Research Assessment (DORA)” along with the “responsible use of metrics”.

I was particularly interested in this session since I’m a co-editor of an international journal called Open Learning. Bibliometrics are sometimes discussed during the annual editorial meetings between the editors, the members of the editorial board, and the publisher, Taylor and Francis. During these meetings, various numbers are shared and summarised.

When I saw the title of the event, my main thought was: “I should go along; I might pick up on a point or two”. What follows is a set of notes that I made during the session, along with some of the useful weblinks that were shared. One thing that I should add is that the structure of these notes comes from the facilitator, Chris, and his presentation. Towards the end of these notes, I share a set of reflections.

Citations

I missed the first couple of minutes, joining just at the point when Chris was talking about the ‘social dimensions’ of citations. Citations are all about giving credit. A useful link to look at is the page Citing Sources: What are citations and why should I use them?

One view is that the more citations an article has, the more popular it is. Subsequently, some might associate popularity with quality. An interesting paper that was referenced had the title Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories

Returning to the notion of the social dimension of citations, one metric I noted down was that self-citations account for 12% of citations. A self-citation is where an author references their own earlier work. Whilst this can be used to guide readers to earlier research, it can also be used to increase the visibility of one’s own research.

A concept that I wasn’t familiar with but immediately understood was the notion of a citation circle or cartel. Simply put, this is a group of authors, typically working in a similar field, who regularly reference each other. This may have the effect of increasing the visibility of that group of authors. Chris shared a link to an interesting article about the notion: The Emergence of a Citation Cartel

A further notion that I hadn’t officially heard of, but was implicitly familiar with, was the notion of the honorary citation. This is where an author might cite the work of a journal editor to theoretically increase the chances of their paper being accepted. As an editor, I have seen that occasionally, but not very often. On a related point, the publisher, Taylor and Francis, has published some very clear ethical guidelines that editors are required to adhere to.

Something else that I hadn’t heard of is the Matthew effect. This means that articles that are already widely cited tend to attract yet more citations, perhaps to the detriment of other articles. Again, we were directed to an interesting article: The Matthew Effect in Science.

It was mentioned that there are interesting differences between academic disciplines. The pace and regularity of citations in the arts and humanities can be much less than, say, a busy area of scientific research. It was also mentioned that there are differences between types of articles. For example, reviews are cited more than original research articles, and methods papers are some of the most cited papers. (It was at this point that I wondered whether there were many articles that carried out reviews of methodologies.)

An interesting reflection is that articles that are considered to have societal benefit are not generally picked up by bibliometrics. This immediately reminded me about how funders require researchers to develop what is known as an impact plan. I then remembered that the STEM faculty has a couple of impact managers who are able to provide practical advice on how researchers can demonstrate the impact and the benefits of the research that they carry out.

All these points and suggestions lead to one compelling conclusion, which is that the number of citations cannot be directly considered to be a measure of the quality of an article.

An important element in all this is, of course, the peer review process. Some important points were made: that peer review can be slow, expensive, inconsistent, and prone to bias. As an editor, I recognise each of these points. One of the most frustrating elements of the peer review process is finding experienced and willing reviewers. During this session, I shared an important point: if an author holds views that are incompatible with, or different from, those of the reviewers, it is okay to have a discussion with an editor. Editors are people, and we’re often happy to chat.

Bibliometrics

There are a few different sources of bibliometrics. There is Scopus, Web of Science, Dimensions, CrossRef and Google Scholar. Scopus and Web of Science offer limited coverage for social sciences and humanities subjects. In contrast, Google Scholar picks up everything, including resources that may not really be academic articles. A link to the following blog, Google Scholar, Web of Science, and Scopus: Which is best for me? was shared.

There are, of course, different types of metrics. Following on from the earlier section where citations were mentioned, there is the notion of normalised citations, percentiles, and field citation ratios. Normalised citations (if I’ve understood this correctly) compare the number of citations an article receives with the average for comparable articles in the same field over the same time period. Percentiles relate to how popular, or widely cited, an article is relative to others. There is, of course, a very long tail of publications. Publications that appear within, say, the top 1% or top 5% are, of course, highly popular. Finally, the field citation ratio relates to how well cited an article is compared with other articles in its particular field of research.

There is also something called the h-index, which relates a researcher’s number of publications to how often they are cited. During Chris’ presentation, I made the following notes: the h-index favours people who have a consistent publication record, such as established academics. For example, an h-index of 19 means 19 papers that have each been cited at least 19 times.
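The definition above can be sketched in a few lines of Python. This is an illustration of the definition, not how Scopus or Google Scholar implement it:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher has
    h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # this paper still supports an h of 'rank'
            h = rank
        else:
            break
    return h

# Five papers with these citation counts: four of them have at least
# four citations, but not five papers with five, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The sketch also shows why the index favours consistency: one very highly cited paper only ever contributes 1 to the h-index, whereas a steady stream of moderately cited papers pushes it up.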

Moving beyond metrics that relate to individual researchers, there is also something called the journal impact factor (JIF). Broadly speaking, the more popular or influential the journal, the higher its impact factor. This has the potential to influence researchers when making decisions about how and where to publish their research findings. It was mentioned that there are two versions of a JIF: a metric that includes self-citations, and another that doesn’t.
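Broadly, the arithmetic behind a two-year JIF can be sketched as follows. This is a simplified illustration with made-up numbers; the real figures are compiled from indexed citation data by Clarivate:

```python
def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y: citations received during Y to items the journal
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose last two years of articles (150 items) attracted
# 450 citations this year would have a JIF of 3.0.
print(journal_impact_factor(450, 150))  # 3.0
```

The two versions mentioned above differ only in the numerator: one counts all citations, the other excludes the journal’s citations to itself.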

Metrics that relate to official academic journals aren’t the whole story. Outside of ‘journal world’ (which you can access through your institutional library) there is an array of ever-changing social media platforms. Subsequently, there are a number of alternatives to citation-based bibliometrics. Altmetric, from Digital Science, creates something that is called an attention score, which consolidates different ‘mentions’ of research across different platforms. It can only do this if there is a reference to a persistent digital object identifier, a DOI.
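The idea of consolidating mentions into a single score can be sketched as a weighted sum over mention counts per source. Note that the weights below are invented purely for this illustration; they are not Altmetric’s actual weighting scheme:

```python
# Illustrative weights only: heavier sources count for more than a
# single social media post. These values are made up for this sketch.
SOURCE_WEIGHTS = {"news": 8, "blog": 5, "policy": 3, "tweet": 1}

def attention_score(mentions):
    """Consolidate per-source mention counts (keyed by source type)
    into a single weighted attention score."""
    return sum(SOURCE_WEIGHTS.get(source, 1) * count
               for source, count in mentions.items())

# An article mentioned in 2 news stories, 1 blog post and 30 tweets
print(attention_score({"news": 2, "blog": 1, "tweet": 30}))  # 51
```

All the mentions have to be tied back to the same DOI; without a persistent identifier there is nothing to aggregate against.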

Previews of Altmetric data can be seen through articles that are published on ORO, the university’s research repository. Although I’m risking an accusation of self-citation here, an interesting example is the following excellent paper: Mental health in distance learning: a taxonomy of barriers and enablers to student mental wellbeing. Scrolling to the bottom of the page reveals a summary of tweets and citations; metrics from both Altmetric and Dimensions.

There are a couple of other alternative metrics that were mentioned: PlumX, which is from Elsevier, and Overton, which is about the extent to which research may be potentially influencing policy.

Responsible metrics, the OU and DORA

Towards the end of the event, DORA, the Declaration on Research Assessment, was introduced; the OU is a signatory. One of the most salient points from the DORA website is this: “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”. There is also a case study that relates to the OU, which can be found on the DORA website.

This final and important part of the session, which had the title ‘The Idea of Responsible Metrics’, was covered quite briefly. The topic of DORA is something that I really do need to look at in a bit more detail.

Reflections

I learnt a lot from this session. One thing that really grabbed my attention was the h-index. As soon as the session had finished, I asked myself a question: what is my h-index? It didn’t take too long to find it out.

After finding my h-index figure, I made a mistake; I wondered what the h-indices of some of my immediate colleagues were. I found this unnecessarily and seductively interesting. I looked up h-indices for professors, some fellow senior lecturers, and some newly recruited members of staff. Through various links, I could see who had collaborated with whom. I could also see which of the professors were really high-ranking professors.

I then stopped my searching. I asked myself another question, which was: “do any of these numbers really matter?”

It takes different kinds of people to run an academic department and a university. Some colleagues are excellent at research. Some colleagues are excellent at teaching, and some colleagues are even excellent at administration. In my own role as a staff tutor, I do a lot of academic administration and quite a bit of work which could be viewed as teaching. This means that I don’t have a lot of time to do any research. Broadly speaking, central academic staff have a much higher h-index metric than staff tutors, simply because they have more time. What research I do carry out is often applied research. This research can sometimes be labelled as scholarship, which can be considered to be research about the practice of teaching and learning.

It was interesting that one of the important points I took away was that societal impact can’t be directly measured through bibliometrics. I also found it interesting that different types of articles attract a greater number of citations. One of my biggest academic hits (I don’t have very many of them) has been a review paper, where I studied different ways in which a computer could be used to assess the quality of computer programming assessments. The articles that I have published that relate to pure research have certainly attracted less attention.

All this comes back to a broader question: in academia, what is valued? I think the answer is: different types of work are valued. Pure research is valued alongside effective and engaging teaching. The bit that ties the two together is, of course, scholarship.

Bibliometrics are, in my eyes, a set of measurements that attempt to quantify academic debate. They only ever tell a part of a bigger and much more complicated story. Metrics are not, in my opinion, surrogates for quality. Also, academic fashions and trends come and go.

Acknowledgements

Many thanks are given to Chris Biggs for running such an engaging and interesting session. Many of the links shared in this article were shared during Chris' presentation.

Christopher Douce

Planning and evaluating impact of a scholarship project


On 23 June, I attended an online seminar about impact and scholarship, which was facilitated by Shailey Minocha and Trevor Collins. Shailey is the School of Computing and Communications scholarship lead, and Trevor used to be a director of the university’s STEM scholarship centre, eSTEeM.

The event is summarised as follows: “we will take you through the toolkit for impact of SoTL and introduce you to various resources of the impact evaluation initiative. By the end of the event, we hope that you will feel prepared to use the resources/toolkit to plan, evaluate, and report the impact of your (past, present and future) SoTL projects and interventions.” Early on in the seminar, there was a reference to a page about impact, which can be found on the eSTEeM website.

Stories of impact

One of the most notable parts of this seminar was the number of articles and resources that were shared. One of the first articles mentioned was: Impact of Scholarship of Teaching and Learning: A compendium of case studies. In this publication, 16 Scholarship of Teaching and Learning (SoTL) projects were analysed using something called the Impact Evaluation Framework (IEF).

Two other articles were: 

Defining impact

The UK Research Excellence Framework (REF) defines impact as “an effect on, change or benefit to the economy or society”. There is a connection here with the school research fiesta which took place earlier this year: REF impact case studies are important. In terms of SoTL, impact implies demonstrable benefits to learning and teaching that are directly attributable to a specific project.

I noted a question: what has changed (as a result of a project)? What new insights have been gained (from the project)? Also, how can the institution put the outcomes into use? What are the current debates that this scholarship relates to?

Impact evaluation framework

The impact evaluation framework was mentioned, but what exactly is it? It is said to contain 12 facets (or aspects) of impact, which are spread over 4 categories. During the session, I attempted to briefly summarise what they are:

  • Learning and teaching: impact on the student experience; student retention; evidence of excellence.
  • Transfer to others: influence on discipline-based teaching, research, or practice; dissemination of outcomes; extent of adoption by others.
  • Stakeholder benefits: enhanced mutual understanding; facilitated personal or professional development; recognition of project team members and other stakeholders.
  • Cultural and economic benefits: fostering a scholarship culture; financial implications (saving of money); funding opportunities.

Relating to this framework, Shailey shared a link to her blog, Impact of scholarship of teaching and learning

This article provides links to related resources, such as an executive summary, case studies, a guide for educators, and two workbooks: one about impact evaluation, and another about planning for impact.

Six principles (or values) of SoTL

A particularly useful resource which relates to scholarship is a free badged open course from Open Learn: Scholarship of Teaching and Learning in STEM.

This short course has 6 sections, which emphasise what contributes to an effective study:

  1. Grounded in student learning and engagement
  2. Grounded in one or more contexts
  3. Rigorous and methodologically sound research design
  4. Conducted in partnership with students
  5. Appropriately public for evaluation and uptake by peers
  6. Reflection, critical reflection and reflexivity.

Strategies for planning and generating impact

This section of the seminar shared some useful practical tips for anyone who was considering setting up a scholarship project, or thinking about impact. These have been paraphrased as follows:

  • Align scholarship with the strategic priorities of the institution, school and discipline.
  • Use social media to create community and connection; make use of YouTube channels, and other social media platforms.
  • Make sure you keep a clear record of evidence of impact.

Another thought I did have was: consider developing a scholarship team which has complementary skills.

Impact resources

Building on the section which introduced the impact evaluation framework, this section aimed to highlight resources and ideas that could be useful. A key element of this was the Theory of Change (ToC) methodology. This was highlighted as a dominant methodology which is used by the Office for Students https://www.officeforstudents.org.uk/ . Apparently, the Theory of Change helps scholars plan a project for impact, helping them to consider pathways to impact from the start of a project.

Some resources that were highlighted included a ToC visual tool, a SoTL impact evaluation workbook, and the Planning for SoTL impact evaluation workbook. There was also a question driven template, which was considered to be a project management tool.

A key point highlighted in this section: know who your stakeholders are. Without stakeholders, and without influence across stakeholder communities, there is no impact.

Reflections

A question that I always return to is: what is the difference between scholarship and research?

In some respects, the answer to this question is directly linked to the notion of impact. The way that I understand it is that scholarship relates to impact on teaching practice and activities. In turn, scholarship can have a direct impact on the student experience. Research, on the other hand, has impact on an academic discipline, or field of study. There is, of course, crossover between scholarship and research, especially within the domain of education and education studies.

Another thought I always come back to is that both scholarship and research are important, and that academics should do both: research relates to what we teach, whereas scholarship relates to how we teach. I can’t get away from the perception that, due to the Research Excellence Framework (REF), research activity is valued more highly than scholarship activity. This said, there are other metrics and league tables that relate to the student experience: the student satisfaction survey, and the Teaching Excellence Framework (TEF).

This seminar was timely. I’ve just finished setting up what is called my annual Academic Workload Plan. In the forthcoming year, I’m hoping to set up a scholarship project (subject to approval, of course). An important point from this session was: build in dissemination and impact right from the start.

I thought that the tools shared during this session were potentially useful, especially the articles. The session clearly highlighted that there are challenges in planning for and generating impact: projects can often take longer than expected, and project members can become tired at the end of the project. An excellent point was made: sometimes impact can occur years after the completion of a project. This emphasises the importance of collating impact evidence after a scholarship project has officially finished.

I once heard it said that it is very difficult to change the world by writing an academic article. I understand impact as being all about what you do with either your practice or research findings. A lot of academic effort goes into finding things out and getting articles published in prestigious journals. Impact, in my eyes, is all about enabling findings to facilitate positive and constructive change.

Christopher Douce

Computing and Communications: 2023 Research Fiesta

Edited by Christopher Douce, Tuesday, 31 Jan 2023, 17:02

On 25 January 2023 I attended an event called the School of Computing and Communications Research Fiesta, which took place on the university campus. One of my reasons for attending the fiesta was to try to restart my research activities, having stepped away from research due to taking on a role called ‘lead staff tutor’ for the last three years.

The last time I attended a school research fiesta was on 10 January 2019 (OU blog) which took place at the nearby Kents Hill conference centre. Following this earlier event, I shared an accompanying post about research funding (OU blog).

This event was advertised as a “… time for us to reconvene and discuss everything research. This event is aimed to help us (re-)connect with one another and understand how we can help and benefit from each other’s research expertise and outputs” and was facilitated by David Bush from Ascolto.

What follows is a summary of the Research Fiesta, in terms of what happened during the meeting, and what I felt the biggest take away points were. This blog may be of interest to anyone who was at the event, anyone who couldn’t make it, or anyone broadly interested in the process of research (whether computing research, or research that takes place within other disciplines).

Preparation

Before the event, we were asked to prepare some cards which summarised our research interests. Although I didn’t write the card in advance, I did come to the event with some ideas in mind. Here’s what I wrote down on three cards:

  • Understanding and characterising green computing: what it is, what the boundaries and problem are, and how can we embed this theme into our teaching?
  • Storytelling, soft skills, and software engineering: what role does storytelling play or could play in software engineering practice, and how might storytelling be used to develop soft skills in the next generation of computing graduates?
  • Accessibility of web technologies: how accessible are the current generation of web-based applications, and to what extent are hybrid apps accessible with assistive technology? How useful is WAI-ARIA? Is it still useful? Does it have an impact?

Later during the session, I added two more cards:

  • Pedagogy of teaching programming at a distance: innovative tutorials; how to develop tutors, and how to help them to be creative, perhaps by embedding and using drama.
  • Development of writing skills across the computing curriculum. 

This final idea emerged from discussions with tutors, and might form the basis of a scholarship project. The university has prepared a lot of materials about writing; the question is whether the computing programme makes effective use of them, given the writing requirements of some courses.

Activity 1: Sharing research ideas

Our first activity could be loosely called “academic speed dating”. 

I’ve done this before (both the academic version, and the non-academic version). 

In this version, we were sent to various tables, where we met up with two other colleagues. Our task was to show our cards (our research ideas) and try to create a new card that combined aspects of all of our cards. When we had done this, we had to pin our cards onto the wall to share our ideas with everyone.

Activity 2: Forming research teams

After a short break, everyone was asked to form a line based on how much research experience they had. On one side were all the new PhD students, and on the other side were the professors and heads of existing research groups.

Approximately 6 PhD students and early career researchers were asked to review the cards that had been generated from the speed dating activity, and each had to choose the card they found most interesting. This card (represented by one of the researchers) would then form the basis of a new team of 3 or 4 researchers.

One at a time, the rest of the researchers were ushered over to speak with the new researchers. If you liked an idea, and the team didn’t already have 3 or 4 researchers, you could join it. The longer the game went on, the harder it became for the more experienced researchers. Instead, they would have to use all their powers of persuasion to try to join an existing team, or to persuade fellow researchers to create new teams.

After some discussion and reviewing cards, I joined two of my colleagues, Dhouha Kbaier and Yaw Buadu. Two project cards were combined together to create a new project. Paraphrasing our cards, our project intended to:

Develop digital technologies to enhance engagement and participation by integrating more physical computing into the computing curriculum. 

Accompanying research questions were: what are the challenges of using physical computing in a distance learning environment, and how might physical computing devices be connected to and integrated within the Open STEM labs?

This final question suggests the opportunity to explore costs and trade-offs of a physical computing approach where students use their own equipment, or share equipment with other students through a platform which is accessed remotely.

What might physical computing actually mean? One answer to this is: physical hardware used by students to learn about or to solve computing problems, as opposed to using software simulations. There is a precedent of using (and sharing) physical computing devices at the university. In earlier decades, there was the Hektor computer (computinghistory.org.uk), which was once sent out to computing students (and then later returned to the university).

A more modern and smaller (and much more sophisticated) version is the Raspberry Pi computer (Raspberry Pi website) which can be used with any number of interesting computing projects.

One other aspect that we discussed was the stakeholders, and who might need to be involved. We identified the following groups: students, tutors, module team members, and administrative university functions. (The members of a module team may include both tutors and curriculum managers, who act as a fundamental link between the academic team and the operations of the university bureaucracy.)

Impact: evaluation and presentation

The next bit of the fiesta was a presentation; a double act from two colleagues from the research school, Betul Khalil, an Impact Manager, and Gareth Davies, who is a Research Evidence Impact Manager. 

They began with a question: what is impact, and can we give an example? 

Impact isn’t the same as project outcomes. They are very different things. An outcome might be a report, or some software. An impact refers to a change that may have led to a positive long-term benefit to stakeholders. In terms of the UK Research Excellence Framework (REF Impact case studies), impact could mean a change to society, the economy, and the natural environment. Also, a measurable change might be on a local, regional or international scale.

The message to us was clear: when working on a project bid, researchers need to proactively consider impact from the outset and define impact objectives, since gathering effective evidence to show how those objectives may have been met takes time. In some respects, impact evidence gathering is a further part of the research process.  To do it well, researchers need an impact plan to accompany a research or project plan. 

We were all given a handout, from which I have noted down some useful questions that researchers need to bear in mind. These are: 

  • Who are the stakeholders, and who might be affected by the change your project may facilitate?
  • What do the stakeholders (or beneficiaries) gain from your research?
  • Why will they engage with your research?
  • How will you communicate with beneficiaries?
  • What activities might you need to run to effect change?
  • How might you evidence change? 
  • How will you connect change to your research?

Later, Gareth talked more about what it means to ‘evidence’ impact. An important note I made from Gareth’s presentation was that “upstream planning is important” and that the analysis of impact should be rigorous. Researchers also need to consider which methods they use to enable them to observe what is changing.

Apparently, one of the most common forms of evidence is a written testimonial (in the form of a testimonial letter). Within this assertion lies the reflection that researchers need to make sure they have the time and the means to gather evidence.

Activity 3: How will we do our project?

Our next activity was to sit around a table to figure out how we were going to answer our research questions.

We began by asking: what might the outputs from our project be? We came up with some rough answers, which were:

  • Guidelines about how physical computing could be embedded and used within module teams. If used within a module, tutors could then be offered some accompanying guidance.
  • Recommendations about physical kit that could be used (these kits might be bought, or borrowed, or used from a distance); recommendations about the use of software; recommendations about pedagogy and use (which is an idea that can relate to the idea of useful guidelines). 

To produce these, what needs to be done? Our team offered the following suggestions (but the exact order of carrying these out could be easily debated):

  1. Examine learning outcomes within various qualifications and accompanying modules.
  2. Explore the problem space by running focus groups with stakeholders to understand how the terms engagement and participation are understood.
  3. Use mixed methods: from the focus group results, carry out a survey to more thoroughly understand how a wider population understands engagement and participation.
  4. From these different information sources (and input from the learning outcomes) facilitate a number of curriculum design workshops to understand how physical computing can be brought into the curriculum.
  5. Carry out a detailed analysis of all the data that has been captured, writing up all the findings.
  6. Implement the findings.

A further reflection was that each of these activities needs to be considered in terms of SMART objectives: specific, measurable, achievable, realistic and timebound.

A new question that we were asked was: what impact will your project have? 

Given that students are key stakeholders, there might be broader impacts in terms of results in the National Student Survey. There might be further impacts both within the university, and to other organisations that provide distance learning. There might also be impacts that could be broadly described as the further development of computing pedagogy. This is all very well, but how might we go about measuring all this? It is this question which the facilitators from the research school may have wanted to encourage us to consider.

What happens next?

After presenting our plan to all the other groups, we were asked a couple of final questions, which were: how excited are you about the project? Also, how doable (or realistic) is the project?

Given that we all have our own main research interests (which are slightly different to the new project that we have defined), we all had different levels of enthusiasm about going ahead with this project idea. That said, the key concepts of physical computing (in its broadest sense) and student engagement are important topics which other researchers may well be interested in exploring. Even if this particular team may not be in a position to take these ideas forward, the ideas are still worth exploring and studying.

Reflections

I really liked the way that we were asked to focus on trying to get things done. 

When thinking about research (and research projects), impact has always been something at the back of my mind, but I’ve tended to consider it as something that is quite intangible and difficult to measure. The presenters from the research school made a really clear point. They emphasised that it is important to plan for impact before your project has started.

A personal reflection is that impact could be thought of as a way to reflect on the success of a project. In some respects, this should be something that researchers should be doing as a matter of course to further develop their professional skills. Of course, the extent and nature of this analysis will depend very much on the nature of the research that is carried out through a project. Given the collaborative nature of research, gathering of impact evidence is likely to be collaborative too. 

It is interesting to compare this Research Fiesta with the one that was held in 2019. One of the differences was that a lot fewer people attended this event. This might have been due to the timing (some new module presentations were just about to begin) or a hangover from the 2019-20 pandemic (when so many colleagues switched to home working).

An interesting difference related to the structure: this event was facilitated in a dynamic way, where the research themes emerged from the participants. The earlier event had more emphasis on sharing information about the research groups within the school, and more of the practicalities about how to gain funding for research. There is, of course, no right or wrong way to run a research fiesta. I appreciated the dynamic structure, but equally I’m always up for hearing about new concepts and ideas, and learning about what is happening within and across the school.

Acknowledgements

Many thanks to Amel Bennaceur for organising the event. One of its impacts has been the chance to catch up with colleagues, and to learn more about them! It was a pleasure working with my fellow group members, Dhouha Kbaier and Yaw Buadu, who kindly reviewed this blog article before it was published.


Advice to students, from students

Edited by Christopher Douce, Thursday, 22 Dec 2022, 11:12

An interesting question to ask a student is: “what advice would you give to a fellow student about working with TMA feedback?” TMA feedback is, of course, the feedback that a student receives after collecting their marked Tutor Marked Assessment.

This question is one of several that are explored through a scholarship project with the title Developing student use of feedback on their marked TMAs, which was led by staff tutors Carol Calvert and Colette Christiansen, with Clare Morris as the AL lead on the project.

A summary of some of their research findings was shared with colleagues through a discussion forum. It struck me that their findings (which take the form of practical advice) were so useful that it might be helpful to share them more widely.

Before looking at the specific points, it is worth emphasising that TMA feedback is typically given in two different ways: through a coversheet (an eTMA form, which is sometimes called a PT3 summary) that offers students an overview of how they have done (and is typically forward looking), and through comments provided directly on the submitted assignment (feedback that relates to the work that has been done).

Here are Carol, Colette and Clare's collection of useful points:

Collecting and using your TMA feedback – advice from your fellow students

Reading the feedback on your marked TMA can be a bit nerve-racking, but it can also be a really important part of your learning. A recent survey produced a great deal of valuable advice from students, which we’re sharing with you here. All the quotes in green are taken directly from comments by students – and they’re just representative examples of topics which were mentioned dozens of times. This is the voice of experience!

Some preliminary advice – doing and handing in the TMA

  • Complete the dummy TMA [i.e., TMA00]
  • Make sure to read all the guidance about submitting TMAs well in advance
  • Attempt the TMA as soon as possible, and work on it as you go along, straight after completing the relevant unit.
  • Make sure to keep in mind when your assessments are due in, you don't want to have to rush anything.
  • Plan your time and keep disciplined! Once you fall behind, it's tough to get back on track, so don't let it happen.
  • Make a plan and keep it realistic
  • Read the question, answer the question, then read the question again! – but don’t over-think it.
  • Know that the time you spend on learning will pay off and don’t give up.
  • Do what it says on the tin and you can't go far wrong

Picking up your marked TMA…

  • Download the pack [that is, your marked script plus the summary sheet] from the website to save alongside the submission.
  • Print them so easier to refer to. Use for revision.
  • keep all feedback downloaded to use later, keep it stored

… reading and making use of the feedback…

  • Read the feedback initially then go back the next day once emotions surrounding marks have subsided. Read, and review, then revisit the comments a few days later
  • Read through the comments thoroughly and talk to a friend or family member about any mistakes you have made (or things you are particularly proud of), and how you can improve. This helps to keep the feedback in your head, so you have it at hand when tackling the next TMA.
  • Look at the feedback as soon as possible so that you can keep on top of any errors/feedback for completing the next TMA and improving your marks.
  • even if you score highly there is value in reviewing the feedback as tutors will also comment on things such as the style and formatting of the document which can be useful when setting out future assignments.
  • Focus on applying the feedback given rather than focusing on your assessment score
  • Take any general advice on board. It can provide easy extra marks throughout the rest of your studies if you fix general issues on how you show your working or answer written questions.
  • Make use of it. You might be annoyed at first to have dropped marks, but turn it into a positive and learn from your mistakes
  • Take your time to consider the feedback - then redo that part using the feedback provided
  • Take notes of your feedback to refer back to
  • go back to it as many times as needed
  • read the feedback numerous times to take it in properly to be able to use it effective in future TMA's because it is a brilliant resource to support you to improve
  • have the feedback handy for the next attempt at an TMA.

…maybe feeling a bit upset by the comments…

  • Don't take it personally, use it as fuel for doing even better in your next assignment.
  • It's for your own good. If you don't know where you are going wrong, how do you expect to improve?
  • accept it constructively, it is really helpful
  • Don't get too hung up on it
  • Try not to get too upset if your mark isn't as high as you'd hope or wanted
  • [remember] that it is given to encourage and help them
  • Making a mistake and receiving feedback for the mistake is an efficient way for an improvement. So, appreciate it rather than being disappointed
  • Take your time to process the feedback, don't allow your emotions to cloud your judgement.

And if you don’t understand something your tutor has written…

  • don't be afraid to ask your tutor for clarification, especially if you think they're wrong! (you may need help realising you've gotten the wrong idea about something)
  • Don’t be shy to ask for help from your tutor
  • Make the most of having an assigned tutor
  • If you want really clear feedback, you should ask clear questions to your tutor yourself.

Finally…

re-read the feedback from previous TMAs before submitting the next to ensure that you have learned from past mistakes and the feedback was not given in vain

And above all, remember…

TMAs are about much more than marking!

Acknowledgements

Many thanks to Carol, Clare and Colette for giving me permission to share their summary. Their research was carried out within, and funded by, eSTEeM: the OU centre for STEM pedagogy.


7th eSTEeM Conference: 25 and 26 April 2018

Edited by Christopher Douce, Friday, 25 May 2018, 09:57

The Open University runs a centre called eSTEeM which funds research and scholarship to enhance and develop STEM education. For the last few years, the centre has run a conference that serves a number of purposes: to showcase research, to create a space to get people talking (and potentially collaborating) with each other, and to offer an opportunity for academic professional development.

What follows is my own personal summary of the two days of the conference. There were a number of parallel sessions to choose from. My approach to choosing them was very simple: I chose the sessions that packed a lot in. This meant that I chose the paper sessions rather than the various workshop sessions that were on offer. At the end of the blog I offer some very short reflections based on my experience of the event.

Opening keynote

The conference was opened by Diane Butler, who introduced the introductory keynote speaker, Tony Bates, who used to work at the OU and also the University of British Columbia. Tony has recently written an open textbook called Teaching in a Digital Age. I made a note that Tony opened with the observation that there is 'a lot of change', and that this has direct implications for teaching and learning at the university. One of the key forces of change is the need for skills, i.e. IT skills that are embedded within a subject area, and skills that are specific to a discipline. An accompanying question was: what are employers looking for? Certain skills are really important, such as active listening, speaking and critical thinking.

Learners need to practise and develop skills, and to do this they need regular feedback from experts. I made a note that technology isn't perhaps the most appropriate way to develop the soft skills that Tony mentioned earlier. An interesting question was posed: what does an advanced course design look like? There were some suggestions (if my notes serve me well): perhaps there might be student-generated multimedia content and assessment by e-portfolios.

Tony also spoke about trends: there are new models of delivery; there is face to face teaching on one side, and fully distance learning on the other (and everything in between). An interesting point was that every university in Canada had fully online courses, with 16% of all course enrolment being to online courses and programmes. Traditional universities are moving into the space where distance learning institutions used to dominate.

An interesting new trend was the notion of hybrid learning: looking at what is best done in the classroom and what is best done online. I made a note that Tony said there was ‘no theory about what is done face to face versus online’, which strikes me as surprising.

A significant trend is, of course, MOOCs, but it was reported that there was no MOOC mania in Canada. Other trends included open educational resources and open textbooks. The upshot is that we're now at a point where university professors offer learning support rather than content, and this has implications for teaching and learning.

Tony concluded by leaving some points for the university: that technology is continually changing, that there needs to be flexible accreditation for life-long learning, and perhaps there needs to be an agile approach to course (or module) development. Also, all universities are, or are going to become, digital (in some way or another).

Paper session: Supporting students

Lessons in retention success: using video media to influence students

Jessica Bartlett spoke about her experiences working on S282 Astronomy. There are some immediate and obvious challenges: student numbers are falling and the module contains a lot of maths. An interesting point is that 50% of the students studying this module were not from the STEM faculty (which is where all the maths is studied).

The aim of the project was to retain more students and to help more students to pass exams. The module uses formative tutor marked assessments (TMAs), which means that the module team can reuse questions but can't (of course) provide model answers to students. I noted an interesting comment: 'students don't often look at their mark, ignoring their feedback'. The module team made videos about how to deconstruct and approach the TMA questions. I made a note of something called 'reviewing your TMA' activities, which encourage students to look back at what they've done (which sounds like a great idea). There were also weekly videos, where the filming and editing was done by the module team.

Evidence that bootcamps can help student retention and progression

Tom Wilks also spoke about S282 Astronomy, but within the context of a 'bootcamp' that was designed to offer additional student support. Tom recorded short tutorial sessions that covered a range of topics: basic maths and physics, general OU study skills, how to use the VLE, and how to use the Adobe Connect conferencing tool. 28 Adobe Connect sessions were recorded, each lasting between 2 and 10 minutes. These sessions were advertised to all students, who could access a forum and an Adobe Connect room. Other resources included something called an 'are you ready for' quiz, which is also used in some computing modules. Tom commented that tutors could refer students to his recordings if they were struggling with certain concepts, and he also found that students did re-engage with the materials when they were approaching their TMA.

Flexible/early start M140

Carol Calvert gave a talk on her work on introducing a flexible or early start to M140 Introducing Statistics. I've heard Carol speak about this subject before, and she always delivers a great talk. Her research is based on an earlier study where she looked at students who succeed despite the odds that are stacked against them. One of the key findings of this research is that one thing can really make a difference, and that is: starting early.

Carol’s intention was to create an ‘early start’ experience that was close to a student’s experience when the module officially begins. This means they have access to materials, can access the VLE site, have access to tutors, and can access resources such as screencasts and software. 400 students were sent a message offering them an invite to start early, and 200 responded saying that they would. Tutors offered sessions on study skills and tutorials on content. Another advantage is that if students do start early, they will know sooner whether they are on the wrong module, which can be really useful, since there are significant fee implications if someone finds themselves on the wrong module. If you’re interested, more information about Carol’s scholarship is available on the eSTEeM website.

Improving retention amongst marginal students

Anactoria Clarke and Carlton Wood spoke about an access module: Y033 Science Technology and Maths. Access modules are important due to the university’s commitment to widening participation. Y033 is studied by 1,000 students per year, and students who have successfully studied this module (as far as I understand things) can apply for a fee waiver. 25% of students declare a disability, and access students are offered one-to-one telephone tutorials, since previous research has suggested that sympathetic and supportive tutoring is crucial to student success. The study that Anactoria and Carlton introduced used a mixed-methods approach. They looked at the completion of S111 Questions in Science and Y033. Students who have taken the access module are more likely to stay with the module; the point being that access-level study builds confidence (and emphasises the importance of access).

Paper session: online delivery, tuition and international curriculum

Synchronous online tuition: differences between student and teacher

Lynda Cook and a number of other colleagues asked a really important question: what are online tutorials really like? An accompanying question is: do we meet our students’ expectations? Students on second-level modules were surveyed, recorded tutorials were studied, and students and tutors were interviewed. Students reported that very few were using microphones (which isn’t a surprise to anyone who has attended an Adobe Connect session), and an analysis of recorded tutorials suggested that lots of features were not used, with the exception of the chat box.

The interviews with tutors revealed that when the recording button goes on, students are reluctant to talk. One conclusion is that students value tutorial recordings, but don’t like to interact. A personal note is that there is a conflict between interactive and recorded tutorials, and I think the university still has quite some way to go in uncovering the pedagogic opportunities afforded by online tools such as Adobe Connect (and, in some ways, this links back to some of the themes mentioned in Tony’s keynote).

Understanding tutorial observation practice

It was time for my session. I spoke about a short project that aimed to ask the question: ‘what is the best way to observe tutorials?’ I approached this question by doing three things: carrying out a literature review (with help from a brilliant tutor colleague), and conducting two sets of focus groups: one with tutors, and another with staff tutors (the members of the university who usually carry out tuition observations).

Some of the themes that emerged from the focus groups directly echoed some of the themes in the literature. An important issue is to understand what tuition observations are for: are they for development, or are they for management? (The answer is: they should be used, in the first instance, for development; the observers can learn a lot just by observing). An outcome from the project was to uncover a set of really useful tuition guidelines that have been used and developed by colleagues in Science. The next step in the project is to formally write everything up. 

An international comparative study of tuition models in open and distance learning universities

Ann Walshe, a colleague from the school of Computing and Communications, spoke about her visit to Shanghai Open University (SOU) where she was a part of a group of visiting scholars. Ann reported that SOU emphasises vocational and life-long learning. Whilst it does offer bachelor degrees, it doesn’t offer postgrad qualifications. It was interesting to hear that SOU ‘does its own thing’ and tries not to compete with other local and national universities. It has a particular emphasis on blended learning and face to face teaching, having 41 branch schools for both full time and part time students. Interestingly, students have to attend a mandatory F2F induction.

The visiting scholar group were from a range of different institutions, including Chongqing Radio and TV University, the University of South Africa, the National Open University of Nigeria, Cavendish University in Zambia, Jose Rizal University in the Philippines, and the Netaji Subhas Open University in India (which apparently has 120 study centres, with more opening). Ann’s talk emphasised the importance of distance learning and its global reach.

Unpacking the STEM students’ experiences and behaviours

Jenna Mittelmeier’s presentation was about the challenges of online intercultural group work. I enjoyed Jenna’s talk, since it was a very research-focussed talk that asked a very specific question: are students more motivated when they study materials related to their own cultural background? In other words, what are the benefits of matching content and activities to the membership of a multicultural group? Jenna described a randomised control trial in the context of a Dutch business school. In an activity, students were asked to look at something called the World Bank statistics dashboard, and it was found that students participated more when using content from their own background. A qualitative survey suggested that internationalisation (of a study context) did improve participation but did expose tensions. There was an important point, which is that content needs to be made relevant to students’ lives and experiences.

Paper session: supporting students - STEM practice and engagement

Using a dedicated website in the continuing evolution of a statistics community of learners

Rachel Hilliam and Gaynor Arrowsmith introduced us to something called the Maths and Stats Subject Site. Before the university restructured and closed regional centres, students could attend course choice events where they could look at module materials from the regional centre library, talk to academic support colleagues, and speak with other students. In an environment that is increasingly digital, an important question is: can we recreate that in an online environment? I made a note that it is (of course) important that students feel a part of a community. There is a Maths and Stats advice forum, a maths education forum, and information about professional and subject societies. There is also advice about preparing to study, ‘revise and refresh’ resources, ‘are you ready for’ quizzes, and early units from some modules.

Implementing additional maths support for Health Science students

Nicola McIntyre, Linda Thomson and Gerry Golding spoke about their experiences on SDK100 Science and health: an evidence based approach. An important aspect of the talk was that a maths tutorial was replaced with 18 short videos covering mathematical concepts, such as decimals, percentages, scientific notation and powers. There were also two workshops, which were advertised to students by email, with two tutors selected and briefed on the format of the workshops. I noted an important point: it’s not enough to only provide videos; the workshops are considered to be an essential component.

Two mathematicians and a ukulele

Hayley Ryder and Toby O’Neil are module team members for M208 Pure Mathematics. The module is run through a single ‘cluster’, which means that there is only one group of tutors who teach on the module, and it has 25 hours of tuition sessions. From what I remember, there was a view that students wanted more contact with the module team.

One way to address this is to record a series of informal online tutorial sessions where Hayley and Toby talk through different mathematical concepts and also discuss what is discussed on the module forum. The idea is to convey a sense of ‘what mathematicians do’ and to build ‘mathematical resilience’, a concept that has a number of aspects: (1) the fostering of a growth mindset, (2) that maths has personal value, (3) knowing how to learn maths, (4) knowing how to find appropriate support. The sessions focussed on the first three of these aspects. 

An important point was that the presenters can easily make mistakes when doing things ‘live’ and this shows that real mathematicians can get stuck, just like everyone else. As for the ukulele, this also connects to the concept of learning; this is an instrument that Toby is learning to play (and I understand that he plays it during sessions!)

A secondary analysis of SEAM responses for programming and non-programming modules by gender

Joseph Osunde from the school of Computing and Communications studies the issue of gender disparity in computing and IT. Joseph offered an important comment at the start of his talk: ‘reasons [for gender disparity] may include learning environment[s] that convey gender stereotypes on interests and anticipated success’. To learn more, Joseph has been looking at university Student Experience on a Module (SEaM) survey results.

As a staff tutor, I regularly get to see SEaM survey results and I have my own views about their usefulness as personal development tools and as sources of research data. This said, Joseph found that there were no significant differences in achievement between genders for modules that required students to learn about programming and those that didn’t. Joseph (with Anton Dil) looked at M250 Object-oriented Java Programming. It turned out that for modules that contained programming, like M250, men seemed to be more satisfied. When these were again compared with non-programming modules, the results were a bit more mixed.

Whilst this is an interesting finding, it does suggest that there is some more research to be done. A related question is: to what extent are different people motivated by modules that contain programming? Also, just as our colleague Gerry Golding has carried out research (which I mention later on) into ‘mathematics life histories’, I do feel that there might be an opportunity to study something that might be called ‘computing life histories’ to understand the qualitative reasons for differences in satisfaction.

Closing keynote for day 1

The closing keynote by Bart Rienties was entitled a ‘critical discussion of student evaluation scores and academic performance at the OU’. Bart began by telling us that he used to be an economics teacher whose teaching performance was regularly evaluated. Drawing on this experience, he asked a significant question: ‘did my increase in my [evaluation] score mean that I was a better teacher?’ He asked everyone who was attending a similar question: ‘are student evaluations a good proxy for teaching excellence?’ Bart directed us to an article entitled Student satisfaction ‘unrelated’ to academic performance – study, which was featured in the Times Higher.

We were given another reference to some published research that was carried out on behalf of the QAA. Digging into the QAA website later took me to two reports that are both connected to the themes of learning, student satisfaction and quality assurance. The first report, entitled Modelling and Managing Student Satisfaction: Use of Student Feedback to Enhance Learning Experience was by Rienties, Li and Marsh. The second report has the title: The Role of Student Satisfaction Data in Quality Assurance and Enhancement: How Providers Use Data to Improve the Student Experience was by Williams and Mindano.

Onto a personal reflection about this (keynote presentations are, of course, intended to get us thinking!). As mentioned earlier, I’m very aware of the OU SEaM surveys. In my experience as a tutor line manager, tutors only tend to receive a couple of responses for a group of twenty students, and the students who do respond often have a particular cause to do so. This observation connects back to Bart’s opening point, which is: what can these measures of performance (or satisfaction) tell us? The fact is that education can be difficult and frustrating, but it can also be transformative. Sometimes we can only truly judge an experience (or feel satisfied with it) when the effects of that experience have become clearer over an extended period of time.

Paper session: Supporting students and technologies for STEM learning 

Using student analytics with ALs to increase retention

Gerry Golding spoke about some of his own research into maths life histories, an idea that, as far as I remember from Gerry’s talk, originated from a researcher called Cobden. Gerry interviewed people to understand how adults coped when studying advanced maths topics, and touched on the importance of high school experiences and maths anxiety. Maths life histories can help students to understand the cause of their anxieties and help them to think about what affected them. In turn, these reflections can be used to build and develop self-efficacy to help them through the hard times and facilitate the development of a growth mindset. In terms of this bit of scholarship, initial contact with students is important. Also, the university virtual learning environment (VLE) is not a big deal, because students are studying using books. I have to confess that I didn’t pick up on the main outcomes of this bit of research, since I started to think about Gerry’s idea of ‘life histories’.

Analytics for tracking student engagement: TM355 Communications Technology

Allan Jones spoke about TM355 Communications Technology, an important module in the computing and IT undergraduate programme. The module has three 10 point blocks, printed books, 3 TMAs and a final exam. It is also a module that makes extensive use of the VLE. 

Students study what is meant by communication technologies and how they work, such as how you modulate waves and signals, encode data and correct errors. The module also makes use of 30 computer aided learning packages. Data analytics are used to track the use of the online parts and comparisons are made between two presentations and students are interviewed to understand their motivations. 

It’s a bit more complicated than that: STEM OU Analyse evaluation

Steve Walker asked a question that was implicitly linked to Allan’s presentation: can learning analytics help students to complete modules? The answer was: no… until something is done with the data. The reason for looking at this subject was both simple and important: retention matters, and there is a need to figure out what works, for whom, in what context, and why.

Steve introduced a term that I had never heard of: realist evaluation, and directed us towards a paper by Pawson and Tilley (PDF) which is (apparently) used in medical education. Points that I noted down that sounded important included: mechanisms, interventions, outcomes and context. Seven associate lecturers (ALs) were interviewed by members of a module team using something called ‘intervention interviews’. An observation is that the term ‘analytics’ is used in different ways. I also made a note of a simple model, which has the components: identify, diagnose and intervene.

Java specification checking

Anton Dil spoke about the evaluation of a prototype tool for M250 object-oriented programming tutors. M250 students need to write object-oriented software code. This includes creating something called ‘classes’. These classes contain a number of ‘fields’ (or data stores) and are designed to carry out certain actions, which are started (or invoked) using something called ‘methods’. Student code can be automatically evaluated in a couple of ways: you could write something called a ‘style checker’ to assess the code a student has written, or you could assess its functionality through something called unit testing. The module team have written a tool called checkM250 that can be used by tutors.
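To make these terms a bit more concrete, here is a minimal illustrative sketch in Java (my own hypothetical example, not taken from M250 or checkM250) of a class with a field and some methods of the kind a student might be asked to write:

```java
// A hypothetical example of a small Java class, of the sort discussed above.
// It has one private 'field' (a data store) and 'methods' that are invoked
// to act on that field.
public class Counter {
    private int count;          // a field: the object's data store

    public Counter() {          // a constructor: sets the initial state
        this.count = 0;
    }

    public void increment() {   // a method: invoked to carry out an action
        this.count = this.count + 1;
    }

    public int getCount() {     // a method: returns the field's current value
        return this.count;
    }
}
```

A unit test would then exercise this behaviour, for instance by creating a `Counter`, calling `increment()` twice and checking that `getCount()` returns 2; a style checker, by contrast, inspects the source text itself, for example checking that fields have been declared `private`.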

Eight tutors were surveyed and six were interviewed. Tutors didn’t use the tool because they didn’t know about it, didn’t have enough time, or didn’t think they needed it. If they did use it, they were likely to recommend it, but they were unsure whether it could highlight things that they had missed. I made a note of the quote: ‘if you asked me previously whether I missed things, I would say: of course not’. Tutors did report that it could be useful. My own take on tools for tutors is that any tool may be useful (and I write that in the context of being a tutor!)

Digital by Design: workshop

The conference workshop was, interestingly, run by a theatre group. The key concept behind the workshop was the observation that the ‘coffee break’ discussions at conferences can be just as useful as the formal presentations. Instead of having further talks, the idea was to create a long session that is, in fact, one long coffee break, where participants could move between different discussion groups.

Another important idea is that anyone can propose a topic for discussion. Whenever someone decides on a topic, they choose a post-it note that indicates when the topic is going to be discussed, and where in the large room that discussion will take place. Participants can see a summary of the topics that are being discussed at any one time and are, of course, encouraged to move between different groups, according to their own interests. It was a neat idea.

I proposed a topic: how do we develop and support our associate lecturers to do ‘digital’ in the best possible way? Examples of other topics included: how should we be using social media apps to communicate with students and each other, how do we become experts in advising students on how to study onscreen, and how do we decide when digital is appropriate and when it is not?

During our ‘coffee’ conversation, I was joined by two colleagues. I soon began to think about whether there might be something that could be very loosely called the ‘digital associate lecturer capability model’. I sketched out a model that had three levels: university systems and tools (such as Adobe Connect), module tools (such as module-specific computer-based learning products, like those in TM355), and common IT systems and products (such as Word, Excel and PowerPoint).

During these discussions, I was reminded of a JISC project called Building digital capability that the OU library was connected to and involved with. This also provided a useful framework that could be used to guide AL development. In later discussions, I discovered that colleagues from the OU library were already using this framework in AL development sessions.

Reflections

Everyone’s experience at a conference is, of course, likely to be different. I had a simple objective when attending this eSTEeM conference: to attend as many presentations as I could, to try to get a feel for the breadth of projects that were happening across the university. In some respects, one commonality jumped out at me, and that was the use of videos or personalised recorded tutorials that were customised to the needs of students. Underpinning this is, of course, the use of technology.

During the conference, I heard presentations from module teams and presentations from tutors. I also understand that some students were attending too, but I didn’t get to speak to or hear from any of them. This links to an important reflection: it is really important to hear the student voice; we need to hear stories about what has worked and what hasn’t. This said, eSTEeM scholars are always asking students questions through surveys, and module teams are always looking to figure out what works and what doesn’t.

A final thought is this: I’m still not sure what is meant by ‘digital by design’, but I don’t think that really matters. We access materials, write materials, and carry out our scholarship using digital tools. What I think is really important is how we use these digital tools in combination with each other. Digital technologies in their various forms might be new and seductive, but ‘digital’ tools cannot be transformative if you can’t see or understand how they might be used. There’s something else that is even more important: what really matters in education is people, not machines. It is people who can show us which digital tools can help our studies.


Getting published in Open Learning

Edited by Christopher Douce, Wednesday, 24 Jun 2020, 10:05

It’s been a few months since I took over as the lead editor of a journal called Open Learning (Taylor and Francis website). I’m not on my own, though: there are two fabulous co-editors and an editorial assistant to help me out (thankfully!) The aim of this short blog post is to share some thoughts that might be helpful to anyone who is considering making a submission to the journal. I hope this is useful!

Tip 1: Does your research fit?

The question ‘does my research fit with the aims and objectives of the journal?’ is, perhaps, one of the most important questions that needs to be asked. It applies to any kind of research that you want to share: a journal is more likely to publish your research if it is in keeping with its aims and objectives. Another question is: who is the audience of the journal likely to be? Stop for a moment and imagine who they might be. If you can’t imagine them, or picture what kind of research they might be working on, then you need to consider whether you are looking at the right journal.

Tip 2: Write a clear abstract

Put another way: clarity is important. Does your abstract clearly summarise the aims and objectives of the research? Also, does it present some clear research questions? I’ve seen papers that have been submitted without an abstract, or with an abstract that just isn’t clear. Although academic papers can sometimes be appropriately challenging to read, I’m a great believer in respecting the reader, and a simple way for an author to show this is to take the time to write a good abstract.

Tip 3: Consider what has gone before

A really important tip is to be aware of the literature and debates presented through the journal; reference earlier debates that have been published. This enables your article to be positioned amongst others. This is important since, as a researcher, as well as looking at the title and abstract, I regularly look at the references before I even start to read a paper to see how it fits into the work of others. If I see that a few of the references have been published in Open Learning before, I view this as a very good thing.

Tip 4: Not too long please!

Make sure that the size of your paper is appropriate for the journal. Open Learning has a limit of seven thousand words. In my short time as editor, I have seen papers that are longer than this. Length is very important, since the publishers (and the editors) are working to a fixed number of pages per issue.

Tip 5: Practice papers are very welcome

Open Learning welcomes papers that present case studies or summaries of professional practice. Although practice papers may not be very theoretical, descriptions of teaching practice and its accompanying challenges can inspire theoretical thinking and reflection amongst other researchers. As educational practitioners, always recognise that what you’re doing is important and consider writing about it; this is an important aspect of your own professional development and your contribution to a community.

Tip 6: Approach the editors

Don’t be afraid of the editors. They want to be helpful, so do ask them questions; they are approachable! If you are not sure whether a paper or some research is appropriate, feel free to ask. Also, if you’re interested in getting more involved in a journal (it doesn’t have to be Open Learning), don’t be afraid of being cheeky. Ask to become a reviewer; introduce yourself. Any journal contributes to an academic community, so don’t be afraid to ask to become more involved in that community.

Tip 7: Be patient and engage with the process

This is a very big tip and one that I’m sharing from my own experience. Peer review can sometimes feel like a brutal process. Treat peer review as an opportunity to engage and develop, and again, do correspond with the editor if you have concerns about how your submission has been understood or interpreted by reviewers; dialogue is important. If you ever receive what you think is a negative review, try not to take things personally; the reviewers are not criticising you; they are only commenting on what they have read. After reflecting on their comments, do engage and work with the reviewers and the editors. Very often, this can lead to a much better submission than you had ever imagined. Plus, the more you submit papers, the more experience you gain.


Scholarship, CALRG and SST


Going to three events in two days, between 15 and 16 June, was pretty intense but also pretty good fun.  The first event was all about the scholarship of teaching and learning.  A different way of understanding this is: how do we go about figuring out how best to do teaching and learning?  It's important to do research in this area not because 'learning' has changed, but because the ways in which we learn (and the technologies we use) continually change and evolve: we want to know that we're doing the right thing.

The second event had a similar theme.  It was the Computers And Learning Research Group (or CALRG, for short) conference.  CALRG is a long-running research group in the university's Institute of Educational Technology.

The final event was at the Birmingham regional office, where the MCT learner support staff are based.  During this event, I learnt about a range of different things - but more about this later.  Actually, there was a fourth event, an associate lecturer staff development conference, which was organised by the Oxford regional centre (but there isn't the time to write about this!)

Scholarship event

Linda Price opened the event by presenting a definition of scholarship: it is a term that describes research and research action.  Scholarship regarding teaching and learning is activity that uses information relevant to our learning and teaching to inform and enhance our practice.  She also went on to emphasise the importance of evidence.  Linda told us that scholarship is a strategic priority for the university.  She spoke of an internal university project called SHARE, which is intended to help regional staff with their research activities.  An interesting and thought-provoking line that Linda gave us was: 'doctors save lives, but we can change them'.

The next part of the event was a series of five minute 'lightning talks' about different research projects (I think there were around twelve!)  First up was my colleague Ann from Manchester, who talked about 'perceptions, expectations and experience of group tuition'.  Her project was to explore different perceptions and opinions about tutorials.  This research has the potential to inform a new Group Tuition Policy.

Next up was a talk that had the title 'What drives scholarship?' I seem to recall that the research was looking at the use of language in assignments, tutor guidance and feedback.  A really important subject is, of course, retention.  I also remember references to Y031 Arts and Languages, Y032 People Work and Society, and Y033 Science Technology and Maths access modules (which help to prepare students for undergraduate level study).

This was followed by: 'A levels based approach to referencing and information management'.  Apparently, some students may drop out, or become demotivated due to the challenge of appropriate referencing.  The national student survey apparently said that different students are given different advice.  The following talk was all about investigating engagement with on-line library services.

An interesting question, from a colleague in the business school was: 'why do we keep failing our Japanese students?'  One of the reasons could be attributed to differences between HE in the UK and Japan.  Understanding and being aware of cultural differences can allow us to gain an understanding of how to support different groups.

Rob Parsons, an associate lecturer colleague from the South East region spoke about peer assessment.  He argued that the student-tutor relationship can be improved.  Themes that Rob's short talk addressed included active learning, learning communities and engagement and retention.

The university is, of course, a big consumer of technology, and there is a perpetual need to figure out how to use new technologies and whether a technology is appropriate for students.  One talk that explicitly explored this issue had the title: Going Live with Google hangouts.

It was then my turn.  I talked about a project that was all about gathering tutor experiences of tutoring on a second level computing module, TT284 Web Technologies (I was horrified to discover that, in front of over sixty people, all of my PowerPoint animations were messed up!)

The talk after mine had the title 'how to get students to do your iCMA and why that is good' (an iCMA is an interactive computer marked assignment).  This was followed by 'investigating one to one tuition: initial findings'.  This project was from the Faculty of Health and Social Care.  The research involved interviewing students and carrying out focus groups with associate lecturers.

Sometimes low technology solutions and approaches can be really useful.  An interesting talk was 'an evaluation of the effectiveness of student buddies', which I think focussed on languages and business modules, specifically L185 English for academic purposes and LB160 Professional communication skills for business studies.  The presentation told us about student buddy training through OU Live and a shared 'student buddy forum' on L192 Bon départ: beginners' French.  This reminds me that some of the materials for these modules are available for free through the Open Learn website.

The final 'lightning' presentation of the morning had a slightly different flavour; it was entitled 'smarter than the average ebook'.  It was given by a colleague from Learning and Teaching Solutions, the part of the university that provides some of the technical infrastructure.  We were taken on a journey through different digital formats, ranging from PDFs, through epub (Wikipedia) and epub2 files, and then on to OU Anywhere.  Some experiments have been performed with ePub3.  These are becoming 'websites wrapped as an off-line experience'.

An interesting point was that 'students are frustrated about being sent away from the text'.  This is a comment that resonates strongly with me.  I found it difficult to study 'off line' even though I wanted to: I printed off lots of on-line materials, but I found myself being directed to various websites and on-line resources.  This is also a comment that arose during my own TT284 research.

One comment I noted was: 'ebook readers are tricky things'.  Different readers do different things.  To close this presentation, we were shown a demonstration entitled 'how to build a methane molecule'.

After lunch, it was time for the keynote speech.  The keynote was given by our PVC for learning and teaching, Belinda Tynan.  It began with a question: what does scholarship look like across the university?  There are, apparently, 15 groups that 'play in the space' that is called scholarship, and that's just the research groups that belong to faculties (not to mention the research that takes place in the library, student services and other departments).  An important question is: 'how is the scholarship impacting the university, and how is it being focussed and directed?'

Belinda moved on to talking about the different methods that can be used in education research.  A related question is: how do we learn from each other, how do we share with each other, and how do we cross boundaries and disciplines?  Another really important question, from an institutional perspective, is whether we are getting value for the effort that we institutionally put into scholarship.  It was a thought provoking keynote and it really set the scene for the afternoon session (but this was a session I had to duck out of because I had another commitment: to attend one of those fifteen scholarship groups that were mentioned earlier).

CALRG event

The CALRG conference is a three day conference, which means that I've missed loads of talks.  In case you're interested, the organisers have published a conference programme (CALRG website).

The first talk I attended was entitled: Building understanding of open education: an overview of OER on teaching and learning.  OER is an abbreviation for open educational resources.  The Institute of Educational Technology hosts a research group that is known as the OER Hub (website).

The second talk was all about mobility technology and had the title: Conducting a field trial in Milton Keynes: Lessons from the MApp.  I originally thought that MApp was some kind of mapping device, but it seemed to be something rather different: it seemed to be about helping people from different cultural backgrounds (or languages) to connect with each other.  I have to confess to being a bit lost at the start of the talk, but then I discovered that the research was using some really interesting methods to gather qualitative data.  (This reminded me about a 'diary study project' that I've been mulling over for quite some time, but I haven't managed to get around to doing anything about it yet!)

The third talk was a longer version of the 'lightning' talk that I gave earlier in the day.  Talking at CALRG allowed me to say a bit more about the methodology, and about some really tentative findings, since the analysis is still on-going!  (This, I think, is one of the challenges of qualitative research: when do you stop?  One answer is: when you see similar findings and themes emerging time and time again.)

The next talk had the title: Improving language learning and transition into second language learning, through the language learning support dimensions (LLSD).  This talk used an instrument (also known as a survey) to help learners understand more about how they carry out language learning.  Since this wasn't my subject, I struggled to connect with this research, but I appreciated the idea of using a self-directed tool to help learners to reflect on how they approach a problem.

The final talk of the CALRG session was 'diverse approaches to using online 'studio' based learning in Open University modules'.  In some respects, the 'studio' can be considered to be an in-house version of the photo sharing website Flickr.  I seem to have a memory that it was used with a digital photography module, to allow students to share examples of their work.  It was interesting to hear that this module was going to be re-launched as a non-credit bearing module (which will have the module code TG189).  Modules such as U101 use a version of this tool called Open Studio.  I learnt that it has now found its way into a total of thirteen different modules, and the Open Studio tool now goes by different names and has a range of different uses. There is also a blog about OpenStudio that is hosted by the university.

The talk led onto an interesting discussion about accessibility.  Whilst an on-line environment might itself be technically accessible, the materials that are transferred to an environment might be fundamentally inaccessible.  One thought about how to remedy this is to try to facilitate collaborations between students.

SST event

The final event I'll mention was held in Birmingham, which is the home of the Computing and IT student support team.  The SST comprises associate lecturers, learner support staff, advisors and academics.  The purpose of the meeting was to allow the learner support people to meet academic colleagues (and vice versa) and to learn about how we can work more closely with each other.

There were three rough parts of the day.  The first enabled staff tutors (the academic staff) to learn more about what was going on from the learner support perspective.  The second was a short talk about 'the day in the life' of a staff tutor.  The final section was an update from the faculty Media Fellow.

During this final session I learnt about a project called JIFL, also known as Journeys from Informal to Formal Learning (which remains a bit of a mystery).  There are also a number of FutureLearn MOOCs (Massively Open On-line Courses) that are either currently being delivered, or are in the process of being developed.  These have the titles 'Introduction to Cybersecurity' and 'Programming one line at a time', which uses the Python programming language (which is also used in M269 Algorithms, Data Structures and Computability).  Other MOOCs will include one about smart cities and another about renewable energy.

In other news, there is going to be an update to the OpenLearn site, where potential students can gain access to samples of OU materials.  There is also going to be a new programming module, which is intended to help students transition from the first level modules, such as TU100 My Digital Life and TM129 Technologies in practice, to the second level modules, such as M250 Object-oriented Java programming and M269.  This transition module will draw upon materials from an earlier module, M150 Data Computing and Information, and have a focus on problem solving.

In terms of forthcoming media productions, there was a lot of exciting news: there is going to be a programme about algorithms (and how they relate to our lives), a programme about the life of Ada Lovelace (which contains a bit about gambling), and a documentary called Game Changer, which is about the developers of the PlayStation game Grand Theft Auto.


Scholarship for Staff Tutors

Edited by Christopher Douce, Thursday, 6 Mar 2014, 16:37

I haven't really blogged about 'internal events' before.  I think this is my first one.  Although I've written this post mainly for an internal audience, it might be useful for a wider audience too, although I'm not yet sure whether I'll click on the 'make available to the world' box in our VLE blogging tool.

About a week or so ago I was lucky enough to attend what is called a Staff Tutor Staff Development event that was all about scholarship, and how we (as staff tutors) might fit scholarship into our day job.

The SD4ST scholarship event was hosted by the Open University office in Gateshead, a part of the country I had never explicitly visited before (other than passing through either in the car or on the train).  The Gateshead office is fabulous (as is the architecture in Newcastle).  The office presented us with a glorious view of the millennium bridge and the imposing Baltic contemporary art gallery.  I'm digressing before I've even begun, so, without further ado, on to describing the event.

Introducing scholarship

The first day kicked off (in the early afternoon) by asking the question: 'what exactly counts as scholarship?'  An underlying theme was how to contribute to research that might be used as part of the university's REF submission (which is, of course, used to assess how well universities compare to each other in terms of their research output).

A number of different types of scholarship were defined, drawing on a paper that had recently been circulated (or published) through senate.  The paper also included explicit examples, but I won't go through them here.  Here's my attempt at summarising the different types:

  • Institutional scholarship (about and for the institution)
  • Scholarship of teaching (the investigation of one's own, or teaching by others)
  • Scholarship for teaching, or outputs that contribute to teaching materials in its different forms
  • Research that relates to and can inform professional practice (in whatever form this might take)
  • Discipline based scholarship, investigative research which can be understood in terms of adding to the knowledge of a particular subject or area.

The output of scholarship may be presented within journal or conference papers, chapters in books, or reports.  Blogs can also be considered as scholarship, but this is a rather difficult category, since a blog has to be a 'rated blog'.  In essence, an output should be something that can be judged as excellent by a peer, is capable of use by others, and has impact.

I thought about which of these categories I could most readily contribute to.  I came up with a couple of answers.  The first was that I might be able to carry out some discipline based scholarship, perhaps building on some of the accessibility or computing research I have previously been involved with.  Another idea might be to do some research to inform the different course teams I'm involved with.  An example of this might have been an earlier blog post on mobile technologies that fed into course team discussions.  Also, given my current duties of supporting course presentations, I could also see how I might (potentially) contribute to the scholarship of teaching in a number of different ways.

How to find the time

Although I'm relatively new to the role of a staff tutor (or regional academic), I am beginning to feel that we have to be not only good at juggling different things, but also be able to put on a good balancing act too! 

The reason for this is that our time is split down the middle.  On one hand we have regional responsibilities (helping our tutors to do their job as effectively and as efficiently as possible, and doing a lot of other mysterious stuff, like marketing) which accounts for fifty percent of our time.  The other fifty percent of our time is spent on 'faculty' work.  This means that we are able to contribute to course teams, offering useful academic input and ensuring that our associate lecturers are fully taken into consideration during the course design phases.  We can also use this fifty percent slice to carry out scholarship in its different forms.

Given the different pulls from course teams and regional responsibility there is a significant question which needs to be asked, namely: 'how is it possible to do scholarship when we've got all this other stuff to do?'  The second section in the day aimed to answer this exact question through presentations by two staff tutors who seem to be successfully balancing all their different responsibilities.

The first presentation was by Dave McGarvie, Science Staff Tutor in Scotland.  Dave gave a cracking presentation about his research, which was all about volcanoes (I'm sure he will be able to provide you with a better description!)  What struck me about Dave's presentation was that he also came across as being a bit of a 'dab hand' at media stuff too, being called upon as an 'expert' to talk about Icelandic volcanic eruptions.  Dave talked about how he used his study leave (he uses all of it), and said that it is possible to ask for research leave too (which was something that I hadn't heard about).

The second presentation was by Gareth Williams, Maths and Stats Staff Tutor (or MCT), in Manchester.  Gareth told us about how he managed to carve out (and protect) a 'research day' which he used to speak (and work with) with other academics in his subject area.

I noted down a really important part of Gareth's presentation which summarised the reasons for doing research: that it is something that we're passionate about, that it's fun, it can help us to maintain knowledge, it can be exciting, it can help with networking (and recruitment of good ALs), and help to introduce and advertise the work of the university to a wider audience.

One fundamental point was echoed by both presenters, namely, that research can take a lot of time, and can (and probably will) eat into our personal time.  Gareth offered some practical advice, urging us to be realistic, develop multiple strategies (or routes to publication), prioritise workload carefully and, importantly, to have fun.

The final talk of the day was by Ian Cook, who spoke about the university's eSTEeM initiative, which replaces earlier funding mechanisms.  eSTEeM lets individuals or groups of researchers bid for funding for projects that may be able to benefit the university or help to further understand and promote teaching and learning.

Designing a scholarship project

The next part of the day (and a part of the following day) was spent 'in the deep end'.  We were put into groups and asked to work towards creating a cross-faculty scholarship project which could help us to develop our collective understanding of Open University teaching and learning (perhaps through the use of technology).  Following the group discussions, we then had to devise an eight minute presentation to everyone in the room to try to 'win' a pot of imaginary funding.  Here's a rough list of the titles of the various projects that were proposed:

  • Group 1: Can on-line forums enhance students' learning?
  • Group 2: What constitutes useful monitoring for associate lecturers?
  • Group 3: Investigate if text messaging can improve TMA (assignment) submission and retention
  • Group 4: Why do students attend (or not attend) synchronous on-line tuition?
  • Group 5: A system for the sharing of media resources between tutors

I have to confess I was involved in group five.

This activity reminded me that different people can understand research (and, subsequently, scholarship) in different ways.  In my 'home discipline' of computer science, research can be considered in terms of 'building stuff'.  This 'stuff' might be a new software system, tool or environment.  The 'stuff' might even be a demonstration of how different technologies may be connected together in new or novel ways.  I must also confess that my discipline background emerged through our brainstorming activities.

In the end, there were two winners, and it was interesting to learn that one of the winning project ideas (the use of text messaging) was the subject of an existing project.  It just goes to show the old adage that good ideas can emerge independently from different sources!

I enjoyed this activity.  I remember a lot of discussion about dissemination and how to evaluate whether a project had succeeded.  Referring back to the earlier notions of scholarship and Gareth's multiple routes to publication, dissemination can, of course, have a range of different forms, from internal presentations, workshops, focus groups, through to formal internal reports and REFable publications, such as conference and journal papers.

Final presentations

The event was rounded off by two presentations.  Celia Popovic gave a presentation about SEDA, an abbreviation for the 'Staff and Educational Development Association', a non-profit organisation which aims to facilitate networking and the sharing of resources.  Celia began by asking the question, 'what do you need [to enable you to do your scholarship and research stuff]?' and talked us through a set of different resources and the benefits of being a SEDA fellow.  The resources included books, a magazine, and a number of scholarly journals.

The final presentation, entitled 'Getting Started, Overcoming Obstacles', was by Karen Littleton.  Karen is currently the director of CREET, a cross-faculty research grouping which comprises Education and the Institute of Educational Technology (and some others too, I am sure!)

A couple of things jumped out at me, namely her advice to 'be pragmatic'.  I am personally guilty of 'thinking big' in terms of research ideas.  I once had the idea of performing some kind of comparison of different virtual learning environments, but it was something that I never managed to get around to doing, perhaps because my heart sinks when I see all the obstacles that lie ahead of me.

Karen advised us to consider working on a series of smaller projects which have the potential to contribute towards a main goal.  She also mentioned the important issue of time and the need to ring fence and guard it carefully, a point that was echoed throughout the two days of the event.

Summary

I'm only just starting to appreciate the different demands on my work time.  I have been wondering, for quite a while now, how to do 'research' within my role as a staff tutor.  What this event told me was that it is possible, but you need to do it with a high level of determination to succeed.

It strikes me that the best way to do research is to make sure that your research activities are aligned, as closely as possible, to some of the other duties of the role.  Of course, it might be possible to do other research, but if your 'job role dots' are not connected together, seeking permission and making cases to go ahead and do your own scholarship is likely to be so much harder.

A feeling that I have always had is that through research there are likely to be opportunities.  An example of this can be finding stuff out that can inform course production, or, connecting to Gareth's example, making contacts that may help with the recruitment of associate lecturers.  I've also come to the conclusion that networking is important too.  Networking might be in the form of internal contacts within the university, or external contacts within either other higher education institutions or industry.

A really important point that jumped out at me is that you really do need to be passionate about the stuff that you're finding out about.  The word 'fun' was mentioned a number of times too.

As a result of the event I've been thinking about my own scholarly aspirations.  Before changing roles I had some quite firm ideas about what I wanted to do, but this has changed.  As mentioned before, I think it's a good idea to try to align different pieces of my role together (to align the fifty percent of regional work with the fifty percent of 'other stuff').  I hope I'm making some progress in figuring out how to make the best contribution to both courses and research.  I hope to continue to blog about some of the stuff that I'm getting up to whilst on this journey.

I'm also hoping there is a follow up session next year which might ask the question of, 'how is your scholarship coming along, and what practical things could be done to help you do more of it?'

All in all, a really enjoyable event.  Many thanks to the organisers!  For those who can access internal OU sites (and might be staff tutors), some of the presentations have been uploaded to the VLE STLG workspace.


