Christopher Douce

Generative AI Professional Development

Edited by Christopher Douce, Wednesday, 18 June 2025, 10:48

On 11 June 2025 I attended a continuing professional development event concerning Generative AI that was intended for tutors. The ALSPD team has set up an online conference site to complement the session. What follows is a set of notes I made during the session, along with some accompanying thoughts and reflections.

Keynote Address: Reclaiming the Pace of Digital Productivity in the Age of AI

The opening keynote was by Maren Deepwell. The abstract of her talk emphasised the pace of digital content production, and mentioned the need for uncovering our own ‘critical cadence’. A key phrase I picked up on was, of course, digital literacies. This is now morphing into a related idea of AI literacies where we become more aware of the ethical, cultural and social implications of AI tool use.

I made a note of a couple of links from OpenED Culture projects.

Moving on from AI Literacy, a further theme that was highlighted was digital wellbeing; a term I’ve not heard before, but one that can mean different things to different people. It can relate to the use of technology for healthcare and physical activity, or it could be linked to the notion of a ‘digital detox’, which is a phrase that I struggle with. We were introduced to a resource called the Data Detox Kit which contains a number of guides.

Another curious term was ‘digital rewilding’, which I had never heard before. There was a reference to a site called HackThisCourse. I spent a few minutes looking at this page, and was puzzled; I couldn’t immediately grasp what it was all about – but it does look as if it relates back to the theme of digital pedagogy, and creative uses of digital technology. This point, of course, links to the emergence of GenAI.

An important role of a keynote is to challenge and to guide. An interesting question is: what is next? We were directed to Helen Beetham’s substack, which features articles and podcasts about AI.

A key question asked: what do digital capabilities for the AI age look like? During the presentation, I noted that GenAI can be useful if you’re in a position to critically evaluate what it produces.

Session 1 - Short Talks

Teaching and assessing students' use of GenAI

Jonquil Lowe, Senior Lecturer in Economics, from the Faculty of Arts and Social Sciences (FASS) asked the question: “what exactly should we be teaching and assessing about GenAI?” I noted three important suggestions which relate to teaching: include acceptable and ethical use, cover generic GenAI skills, and cover discipline-specific skills. I noted the suggestion that perhaps every module might need a GenAI aspect, which would change and develop as students moved to higher levels of study. I also noted an interesting point that GenAI may offer useful guidance for students for whom English is not their first language.

Within the chat, a link to an OpenLearn course was shared: Getting started with generative artificial intelligence (AI).

Online academic misconduct 2025: real location, face to face assessment the only effective response

Next up was David Pell, Associate Lecturer from FASS. David was refreshingly direct in his opinions. In comparison to collusion, plagiarism, and essay mills, GenAI is something different; it is ‘huge’. He described it as a useful tool for potential cheaters. David shared a direct opinion, and one that I agree with: only real-life, proctored assessment provides a guarantee against academic misconduct.

Research was also shared which made a very clear point: GenAI can produce text that is not detectable by educators.

Session 2: The AI Hydra in Assessment - a Nine Point Response to the Nine-headed Beast

The next session, by Mark Campbell, Associate Lecturer, STEM, presented a model of potential responses to GenAI. These are: progression, policing, procedures, pilots, principles and strategy, policies, plans, programmes and modules, and practices.

In addition to these useful responses, I also made a note of the following resources:

  1. OU Generative AI Staff Development
  2. JISC Innovation resources: Embrace artificial intelligence (AI) with confidence
  3. AI maturity toolkit for tertiary education
  4. OU being digital resources (library services)

Session 3 – Short Talks

Transformative Teaching with AI - re-purposing the meaning of student-centred learning

Denise Dear, Associate Lecturer, STEM, moved from the topic of assessment to the topic of teaching, asking “how lecturers can use AI to enhance student topic knowledge retention, increase student engagement, improve student confidence, reduce student stress levels and assist students as a revision tool”. I haven’t used any of these tools as a part of my teaching, so I was curious about how they might be applied.

GenAI was presented as a potential study buddy. It has the potential to provide summaries of passages of text, perhaps also to provide suggestions about essay structures (which, as an arts student, terrifies me), and to generate interactive quizzes. In other words, there is potential for it to provide support that is tailored to the needs of individual students.
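
Out of curiosity, here is a minimal sketch of what the ‘study buddy’ idea might look like in code: asking a chat model to turn a passage of text into a short revision quiz. This is my own illustration rather than anything demonstrated in the session; it assumes the OpenAI Python client with an API key available in the environment, and the model name is purely illustrative.

# A minimal 'study buddy' sketch: generate a revision quiz from a passage.
# Assumptions: pip install openai, OPENAI_API_KEY set in the environment,
# and an illustrative model name (swap in whatever model you have access to).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = (
    "The Gartner hype cycle describes how expectations of a new technology "
    "rise to a peak, collapse into a trough, and then settle onto a plateau."
)

prompt = (
    "You are a patient study buddy. From the passage below, write three "
    "multiple-choice revision questions, each with four options and the "
    "correct answer marked.\n\nPassage:\n" + passage
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

As the session went on to caution, anything generated this way still needs to be checked against the module materials before it is trusted.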

During the session tutors were asked: how do you use it? Answers included case scenarios, sample computer code, scenario simulations, and generating personas – but a word of caution was highlighted: it gets maths wrong. (LLMs also have no idea how to play chess; they know about language, but they cannot reason.)

The discussion of teaching led us back to assessment. How do we assess learning? In an activity, some answers were: viva voce assessments, feedback from small group work, and asking students to make an audio recording.

Generative AI and Good Academic English

The final session of the day was by Claire Denton, Associate Lecturer, FASS. Use of GenAI can suggest a lack of confidence with academic English. Telltale signs may be the use of a generic tone, no references to module materials, and no supporting evidence. Alternatively, students might begin in their own voice, which then switches as text from AI is woven into their answer. A question that tutors face is: how do tutors provide feedback to students when this happens?

Anyone from any faculty can access the various subject sites. There is something called the Arts and Humanities Writing Centre which contains some useful writing resources. The site also provides a link to good academic English, and a phrase bank. (There are, of course, other study skills resources available. I’ve shared a summary of some of them through this article Study Skills Resources: what is available?)

Claire shared some great tips that could be passed on to students, including: if you have spent some hours looking at a piece of writing, stop. Take your time to read it aloud. You will then pick up whether you need additional punctuation, have used too many of the same words, or have repeated a point you made earlier. The key point is, of course, that if tutors spot that GenAI might have been used, there may be opportunities to provide additional help and guidance.

Reflections

The tone of this event implies that GenAI is going to be profoundly transformative. With this in mind, I remember the tone (and the enthusiasm) that accompanied the development of MOOCs and Open Educational Resources, and the view that they both had the potential to fundamentally change the nature and character of education. I've not heard MOOCs mentioned for a while. A lot of froth has been created in the wake of the emergence of GenAI.

Some of the important themes I picked up on from this session were the importance of policies, the challenges to academic conduct and integrity, and the potential opportunities that targeted GenAI may present. It was interesting to hear GenAI being pitched as a ‘study buddy’. This is an interesting use case, but I’m also mildly worried by the fact that these free tools are remembering every single interaction we have ever had with them.

I’m a GenAI sceptic. In my time as a computer scientist, I have seen AI bubbles come and go. In the 1970s, researchers claimed they had solved all the key problems, and that it was just a matter of scaling up until we got our own personal robotic butler. When it comes to GenAI, I do feel that we’re approaching the Peak of Inflated Expectations and it won’t be long before we crash into the Trough of Disillusionment (see: Gartner hype cycle, Wikipedia). If we’re not at the GenAI hype peak already, we probably will be in 18 months’ time. (I'll be very happy to eat these words if proved wrong.)

I remember a computer book from the mid 1980s. It contained a program that you typed into your home computer so it would ‘talk back to you’. It was written in a programming language called BASIC and was only three or four pages long. It recycled your words; it was a simpler version of a 1967 computer program called ELIZA. I once described GenAI as ‘ELIZA on steroids’. GenAI is seductive, since LLMs uncannily reflect back to us echoes of our own humanity. Looking at the bigger picture, philosophers still can’t really define what intelligence is. I don’t know who said it, but I always return to the phrase that ‘AI is really clever people getting really dumb machines to do things that look clever’.
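
For anyone who never typed in one of those listings, here is a minimal sketch of the idea in Python rather than BASIC: a handful of invented pattern-and-response rules that simply recycle the user's own words. The real 1967 ELIZA used a far richer script of decomposition and reassembly rules.

# An ELIZA-style responder in miniature: it recycles the user's words.
# The three rules below are invented for illustration only.
import re

RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bbecause (.*)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no pattern matches

print(respond("I am worried about generative AI"))
# -> Why do you say you are worried about generative AI?

No statistics, no learning, no understanding: just pattern matching. That a few pages of BASIC could feel like a conversation says something about us, as much as about the program.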

All this takes me to a mild (and yet important) constructive criticism of the event: it was significantly light on ethics. We need to slow down and ask: who runs and owns these GenAI tools? What are the consequences for us, and for society, when these are used? To what extent does an LLM create a model of its users? How are these models shared? What terms and conditions have we signed up to? There are, of course, the slightly important issues of transparency and copyright. It is likely that these words are, in some way or other, hoovered up by an LLM engine.

Out of curiosity, I asked Copilot the following question: “Who is Chris Douce and what does he think about artificial intelligence?” Here’s the answer: “Chris Douce is a Senior Lecturer in the School of Computing and Communications at the Open University. He has a background in software engineering and educational technology, with a focus on distance learning and accessibility. Regarding his views on artificial intelligence (AI), Chris Douce has expressed interest in how AI can be integrated into educational technologies to enhance learning experiences. He has discussed the potential of AI to provide personalized learning and support for students, particularly in distance learning environments. However, he also emphasizes the importance of ethical considerations and the need to address challenges related to accessibility and inclusivity”. That’s a pretty good summary, but I think it is, perhaps, an understatement.

One of the useful aspects of this session was the sharing of links to many related resources and references. There is always homework to do. Now, that’s something that GenAI can’t directly help me with. It’s down to me to do the work. Anything else would be cheating.

Addendum

I shared this post with a colleague, who recommended two interesting resources, a book and a podcast:

Book: Bender, E. M. & Hanna, A. (2025) The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Bodley Head.

Podcast: Mystery AI Hype Theater 3000 by Bender and Hanna (Distributed AI Research Institute)

Acknowledgements

Many thanks to all the presenters, and to the team who facilitated the event. Thanks are also extended to David Hales.


Generative AI and assessment in Computing and Communications

Edited by Christopher Douce, Wednesday, 27 Nov 2024, 14:06

On 13 November 2024 I attended a workshop about Generative AI and assessment for OU computing modules, organised by colleagues Michel Wermelinger (academic co-lead GAI in LTA), Mark Slaymaker (C&C Director of Teaching) and Anton Dil (Assessment Lead of C&C Board of Studies). What follows is a set of notes and reflections relating to the event. I also share some useful links and articles that I need to find the time to follow up on. This summary is, of course, very incomplete; I only give a very broad sketch of some of the discussions that took place, since there is such a lot of figuring out to do. Also, for brevity, Generative AI is abbreviated to GenAI.

Introduction and some resources

The workshop opened with a useful introductory session, where we were directed to various resources which we could follow up after the event. I’ll pick out a few:

To get a handle on what public research has been published by colleagues within the OU, it is worth reviewing ORO papers about Generative AI.

A notable book was highlighted, along with a few papers about how GenAI could be used in the teaching of programming.

To conclude this very brief summary, there is also an AL Development Resource which has been produced by Kevin Waugh, available under a Creative Commons Licence.

Short presentations

There were ten short presentations which covered a range of issues, from how GenAI might be used within assessments to facilitate meaningful learning, through to the threats it may offer to academic integrity. What follows are some points that I found particularly interesting.

One of the school’s modules, TM352 Web, Mobile and Cloud Technologies, has a new block that is dedicated to tools that can be used with software development. Over the last couple of years, it has changed quite a bit in terms of the technologies it introduces to learners. Since software development practices are clearly evolving, I need to find the time to see what has found its way into that module. It will, of course, have an influence on what any rewrite of TM354 Software Engineering might contain.

I learnt about a new software project called Llama. From what I understand, Llama is an open source large language model (LLM) that can be run locally on your desktop computer, from where it can be fed your own documents and resources. The thing is: LLMs can make things up and get things wrong. Another challenge is that LLMs need a lot of computing resources to do anything useful. If students are ever required to play with their own LLMs as a part of a series of learning activities, this raises the subject of computing equity: some students will have access to powerful laptops, whereas others might not. Maybe there will come a point when the school may have to deploy AI tools within the OpenSTEM Labs?
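
To make this concrete, here is a minimal sketch of what querying a locally installed model can look like. It assumes a local runner such as Ollama serving its default HTTP API on port 11434, with a model already downloaded; the model name and endpoint are assumptions that would need adjusting for your own setup.

# Query a locally running LLM; no data leaves your own machine.
# Assumes a runner such as Ollama listening on localhost:11434 with a
# model called "llama3" already pulled (both are assumptions, not givens).
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["response"]

print(ask_local_model("Summarise the risks of relying on LLM output."))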

Whether you like them or not, a point was made that these tools may begin to find their way closer to the user. Tony Hirst made the point that when models start to operate on your own data (or device) this may open up the possibility of semantically searching sets of documents. Similarly, digital assistants may begin to offer more aggressive help and suggestions about how to write bits of documents. Will the new generation of AI tools and digital assistants be more annoying than the memorable Microsoft Clippy?
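
On the semantic search point, here is a minimal sketch of the usual embedding-based approach: each document is converted to a vector, and a query is matched by vector similarity rather than keyword overlap. It assumes the third-party sentence-transformers package; the model name is a commonly used small model (not something mentioned in the session), and the documents are invented.

# Semantic search over a handful of documents using embeddings.
# Assumes: pip install sentence-transformers (downloads a small model).
from sentence_transformers import SentenceTransformer, util

documents = [
    "Minutes of the module team meeting on assessment design.",
    "Notes on reasonable adjustments for oral assessments.",
    "A summary of the Gartner hype cycle applied to GenAI.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# The query shares few keywords with the best match; similarity is semantic.
query = model.encode("How do we adapt exams for students who cannot speak?",
                     convert_to_tensor=True)
scores = util.cos_sim(query, doc_embeddings)[0]

for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.2f}  {doc}")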

GenAI appears to be quite good at helping to solve well-defined and well-known programming problems. This leads us to consider a number of related issues and tensions. Knowing how to work with programming languages is currently an important graduate skill; programming also develops problem decomposition and algorithmic thinking skills. An interesting reflection is that GenAI may well help certain groups of students more than others.

Perhaps the nature of programming is changing as development environments draw upon the coding solutions of others. Put another way, in the same way that nobody (except for low-level software engineers) knows assembly language these days, perhaps the task of software development is moving to a higher level of abstraction. Perhaps developing software will mean less coding, and more knowing how to combine bits of code together. This said, it is still going to be important to understand what those raw materials look like.

An interesting research question was highlighted by Mike Richards: can tutors distinguish between what has been generated by AI and what has been written by students? For more information about this research, do refer to the article Bob or Bot: Exploring ChatGPT’s answers to University Computer Science Assessment (ORO). A follow-on question is, of course: what do we do about this?

One possible answer to this question may lie in a presentation which shared practice about conducting oral assessments, which is something that is already done on the postgraduate M812 digital forensics module. Whilst oral assessments can be a useful way to assess whether learning has taken place, it is important to consider the necessity of reasonable adjustments, to take account of students who may not be able to complete an oral assessment, whether due to communication difficulties or mental health difficulties.

The next presentation, given by Zoe Tompkins, helped us to consider another approach to assessment: asking students to create study logs (which evidence their engagement), accompanied by pieces of reflective writing. On this point, I’m reminded of my current experience as an A334 English literature student, where I’m required to make regular forum postings to demonstrate regular independent study (which I feel that I’m a bit behind with). In addition to this, I also have an A334 reflective blog. A further reflection is that undergraduate apprentices have to evidence a lot of their learning by uploading digital evidence into an ePortfolio tool, which is then confirmed by a practice tutor. Regular conversations strengthen academic integrity.

This leads onto an important question which relates to the next presentation: what can be done to ensure that written assessments are ‘GenAI proof’? Is this something that can be built in? A metaphor was shared: we’re trying to ‘beat the machine’, whilst at the same time teaching everyone about the machine. One way to try to beat the machine is to use processes, and to refer to contexts that ‘the machine’ doesn’t know about. The context of questions is important. 

The final presentation was by one of our school’s academic conduct officers. Two interesting numbers were mentioned. There are 6 points that students need to bear in mind when considering GenAI. If I’ve understood this correctly, there are 19 different points of guidance available for module teams to help them design effective assessments. There’s another point within all this, which is: tutors are likely to know whether a bit of text has been generated by an LLM.

Reflections

This event reminded me that I have quite an extensive TODO list: I need to familiarise myself with Visual Studio Code, have a good look at Copilot, get up to speed with GitHub for education, look at the TM352 materials (in addition to M813 and M814, which I have been meaning to do for quite a while), and review the new Software Engineering Body of Knowledge (SWEBOK 4.0) that has recently been released. This is in addition to learning more about the architecture of LLMs, upskilling myself when it comes to the ethics, and figuring out more about the different dimensions of cloud computing. Computing has moved on since I was last a developer and software engineer. With my TM470 tutor hat on, we need to understand how and where LLMs might be useful, and more about the risks they pose to academic integrity.

At the time of writing, there is such a lot of talk about GenAI (and AI in general). I do wonder where we are in the Gartner hype cycle (Wikipedia). As I might have mentioned in other blogs, I’ve been around in computing for long enough to know that AI hype has happened before. I suspect we’re currently climbing up the ‘peak of inflated expectations’. With each AI hype cycle, we always learn new things. I’m of the school of thought that the current developments represent yet another evolutionary change, rather than one that offers revolutionary change.

Whilst I was studying A334, my tutor talked a little about GenAI in an introductory tutorial. In doing so, he shared something about expectations, in terms of what was expected in a good assessment submission. If I remember rightly, he mentioned the importance of writing that answered the question (a context that was specific, not general), demonstrated familiarity with the module materials (by quoting relevant sections of course texts), and used clear and unambiguous referencing. Since the module is all about literature, there is scope to say what we personally think a text might be about. These are all the kinds of things that LLMs might be able to do at some level, but not to a degree that is yet thoroughly convincing. To get something convincing, students would need to spend time doing ‘prompt engineering’.

This leads us to a final reflection: do we spend a lot of time writing prompts and interrogating an LLM to try to get what we need, or would that time be spent more effectively writing what needed to be written in the first place? If the writing of assessments is all about learning, then does it matter how learning has taken place, as long as the learning has occurred? There is, of course, the important subject of good academic practice, which means becoming aware of what the cultural norms of academic debate and discourse are all about. To offer us a little more guidance, I understand there will be some resources about Generative AI available on OpenLearn in the coming months.

Acknowledgements

Many thanks to the organisers and facilitators. Thanks to all presenters; there were a lot of them!

Addendum

Edited on 27 November 2024, to attribute one of the sessions to Zoe Tompkins. During the event, a session was given by Professor Karen Kear, who demonstrated how Generative AI can struggle with very specific tasks: creating useful image descriptions. Generative AI is general; it doesn't understand the context in which problems are applied.


Generative AI – AL Professional Development

Edited by Christopher Douce, Wednesday, 29 May 2024, 12:36

On 23 May 2024 I attended an AL development event (in my capacity as an OU tutor) that was all about Generative AI (abbreviated here as GenAI). This post sits alongside a couple of other blogs that I shared last year that also relate to GenAI and what it means for education, distance learning, and education practice.

What follows are some notes that I made during a couple of the sessions I attended, along with the points and themes I took away from them. I also share some critical perspectives. Since GenAI is a fast-moving subject, not just in terms of the technology, but in terms of policy and institutional responses, what is presented here is also likely to age quickly.

Opening keynote

The event opened with a keynote by Mychelle Pride which had the subtitle: Generative AI in Learning, Teaching and Assessment. I won’t summarise it at length. Instead, I’ll share some key points that I noted down.

One important point was that AI isn’t anything new. A couple of useful resources were shared, one from the popular press, How AI chatbots like ChatGPT or Bard work – visual explainer (The Guardian) and another from industry: The rise of generative AI: A timeline of breakthrough innovations (Qualcomm).

An interesting use case was shared through a YouTube video: Be My Eyes Accessibility with GPT-4. Although clearly choreographed, and without any indication of whether any of this was ‘live’, one immediately wonders whether this technology is solving the right problems. Maybe this scenario implicitly suggests that visually impaired people should adapt to the sighted world, whereas perhaps a better solution might be for the world to adapt to people with visual impairments? I digress.

There are clear risks. One significant concern lies with the lack of transparency. Tools can be trained with data that contains biases; in computing there’s the notion of GIGO: garbage in, garbage out. There’s also the clear potential that GenAI tools may accept and then propagate misinformation. It is clear that “risks need to be considered, along with the potential opportunities”.

A point was shared from a colleague, Michel Wermelinger, who was quoted as saying “academic conduct is a symptom, not the problem”, which directly takes us to the university’s academic conduct policies about plagiarism.

In this session I learnt a new term: “green light culture”. The point here was that there are a variety of positions that relate to GenAI: in HE there are policy decisions that range from ‘forbid’ to ‘go ahead’.

I made a note of a range of searching questions. One of them was: how might students use Generative AI? It might become a study assistant, it might facilitate language learning, or it might support creative projects. Another question was: how could pedagogies be augmented by AI? Also, is there a risk of over-dependence in how we use these tools? Could it prevent us from developing skills? How can we assess in a generative AI world? Some answers to this question may be to have project-based assessment, collaborative assessment, to use complex case studies, and to consider the use of oral assessments.

A further point is that students will be using Generative AI in the future, which means that the university has a responsibility to educate students about it.

Towards the end of the keynote, there was some talk about all this being revolutionary (I’ll say more about this later). This led onto a closing provocative question: what differentiates you (the tutor) from Generative AI?

During the keynote, some interesting resources were shared.

Teaching and learning with AI across the curriculum

The aim of a session by Mirjam Hauck was to explore the connection between AI and pedagogy, and to also consider the issue of ethics.

Just like the previous presentation, some interesting resources were shared. One of them was a TED Talk: How AI could save (not destroy) education.

Another resource was a recent book, Practical Pedagogy: 40 New Ways to Teach and Learn by Mike Sharples which students and staff can access through the OU Library.

I had a quick look at it, just to see what these 40 new ways were. Taking a critical perspective, I realised that the vast majority of these approaches were already familiar to me, in one way or another. These are not necessarily ‘new’, but are instead presented in a new way, in a useful compendium. The text also shares a lot of informal web links, which immediately limits its longevity. It does highlight academic articles, but it doesn’t always cite them within a description of a pedagogy. My view is: do consider this text as something that shares a useful set of ideas, rather than something that is definitive.

During this session, there were some complementary reflections about how GenAI could be linked with pedagogy: it could be used to help with the generation of ideas (but be mindful that it might be regenerating ideas and bits of text that may be subject to copyright), play a role within a Socratic dialogue, or act as a digital assistant for learning (which was sometimes called an AIDA – an AI digital assistant).

Power was mentioned in this session, with respect to the power that is exerted by the corporations that develop, run, and deploy AI tools. The point I had in my mind during this part of the session was: ‘do be mindful about who is running these products, why, and what they hope to get from them’.

A brief aside…

Whilst I was prepping this blog, I was sent a related email from Hello World magazine, which is written for computing educators. In that email, there was a podcast which had the title: What is the role of AI in your classroom? 

There was an interesting discussion about assessment, which asked the questions ‘how can this help with pedagogy?’ and ‘how can we adapt our own practices?’ A further question is: ‘is there a risk that we dumb down creativity?’

A scholarship question?

A few times this year tutors have been in touch with me to ask the question: ‘I’ve seen a student’s answer in a script that makes me think they may well have used Generative AI. What do I do?’ Copying TMA questions, or any other elements of university materials, into a Generative AI tool represents a breach of university policy, and can potentially be viewed as an academic conduct issue. The question is: what do tutors do about this? At the moment, and without any significant evidence, tutors must mark what they have been given.

An important scholarship question to ask is: how many tutors think they are being presented with assessments that may have been produced by Generative AI tools?

Reflections

There was a lot to take on board during this session. I need to find the time to sit down and work through some of the various resources that were shared, which is (in part) the reason for this blog post.

When I was a computing undergraduate I went to a couple of short seminars about the development of the internet. When it came to the topic of the web browser, our lecturer said: “this is never going to catch on; who is going to spend time creating all these web pages and HTML tags?” Every day I make use of a web browser; it is, of course, an important bit of software that is embedded within my phone. This connects with an important point: it is notoriously difficult to predict the future, especially when it comes to how technologies are used. There are often unintended consequences, both good and bad.

Being a former student of AI (late in the last century) I’m aware that the fashions that surround AI are cyclical. With each cycle of hype, there are new technologies and tools. During an early (modern) cycle of AI, I remember a project called SHRDLU, which demonstrated an imaginary world where users could interact using natural language. This led to the claim that the key challenges had been solved, and that all that needed to be done was to scale everything up. Reality, of course, is a whole lot more complicated.

A really important point to bear in mind is that GenAI (in the general sense) cannot reason. You can’t play chess with it. There are, however, other tools within the AI toolset that can do reasoning. As a postgrad student, I had to write an expert system that helped to solve a problem: to figure out a path through a university campus.
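
To illustrate the contrast, here is a toy example of that older, symbolic style of AI: finding a route across a campus by searching an explicit map. The place names and layout are invented, and this is a simple breadth-first search rather than a reconstruction of my original expert system, but the point stands: the program reasons over knowledge it has been given, which is precisely what a language model does not do.

# Route finding over an explicit campus map using breadth-first search.
# The campus graph is invented for illustration.
from collections import deque

CAMPUS = {
    "library": ["quad", "computing block"],
    "quad": ["library", "refectory", "lecture theatre"],
    "refectory": ["quad"],
    "computing block": ["library", "lecture theatre"],
    "lecture theatre": ["quad", "computing block"],
}

def find_route(start, goal):
    """Return a shortest list of locations from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in CAMPUS[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(find_route("refectory", "computing block"))
# -> ['refectory', 'quad', 'library', 'computing block']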

I’ve also been around for long enough to see various cycles of hype regarding learning technologies: I joined when e-learning objects were the fashion of the day, then there was the virtual learning environment, and then there was a craze that related to learning analytics. In some respects, the current generation of AI feels like a new (temporary) craze.

Embedding AI into educational tools isn’t anything new. I remember first hearing about the integration of neural networks in the early 2000s.  In 2009 I was employed on a project that intended to provide customised digital resources for learners who have different requirements and needs.

The bigger the models get, the more data they hoover up, and the greater the potential of these tools generating nonsense. And here lies a paradox: to make effective use of GenAI within education, you need education.

Perhaps there is a difference between generally available generative AI and generative AI that is aligned to particular contexts. This takes me to an important reflection: no GenAI tool or engine can ever know what your own context is. You might ask it some questions and get a sensible-sounding response, but it will not know why you’re asking a question, and what purpose your intended answer may serve. This is why the results produced by a GenAI tool might look terrible, or suspicious, if submitted as a part of an assessment. Context is everything, and assessments relate to your personal understanding of a very particular learning context.

Although the notion of power and digital corporations was mentioned, there’s another type of power that wasn’t mentioned: electrical power. I don’t have figures to hand, but large language models require an inordinate amount of electrical energy to do what they do. Their use has real environmental consequences. It's easy to forget this.

Here is my view: it is important to be aware of what GenAI is all about, but it is also really important not to get carried away and caught up in what could be thought of as technological froth. It’s also important to always remember that technology can change faster than pedagogy. We need to apply effective pedagogy to teach about technology. 

In my eyes, GenAI, or AI in many of its other forms, isn’t a revolution that will change everything, nor an existential threat to humanity; it is an evolution of a set of existing technologies.

It’s important to keep everything in perspective.

Resources

A number of resources were highlighted in this session which are worth having a quick look at.

Acknowledgements

Many thanks to the presenters of this professional development event, and the team that put this event together. Lots to look at, and lots to think about.
