On 20 November 2024 I attended an OU Digital Humanities annual workshop, which was hosted by the OU Knowledge Media Institute (KMi) and facilitated by Francesca Benatti, Research Fellow in Digital Humanities.
What follows is a brief summary of my impressions of the event, and what I took away from it. I’ve also taken the liberty of mentioning some of the really interesting presentations that were made, and I ask forgiveness for the necessary brevity; I share only certain highlights.
Annotating object itineraries in Recogito
The first main presentation was by Sarah Middle, visiting fellow in Classical Studies, and Elton Barker, Professor of Greek Literature and Culture. Sarah introduced us to a tool called Recogito, a Pelagios project that is used for semantic annotation. To share a bit more detail, I understand Recogito to be a tool that can be used to store and record information about museum artefacts.
I confess to not knowing very much about how museum collections are curated and organised. In Sarah’s presentation we were introduced to the concept of object itineraries, which was a compelling idea. Artefacts are found at places (at particular times, by individuals or teams). These artefacts end up in museums which, of course, have locations. When artefacts are moved, the spatial relationships between artefacts can be lost. If multiple institutions provide location data, I can see how it is possible to look back in time to see what artefacts were found where, despite artefacts now being in different geographic locations. A particularly interesting example of artefacts which might have a lot of location history are navigation instruments. Instruments such as sextants and marine clocks have their own unique journeys.
From what I noted down, Recogito Studio permits different types of annotation, such as Geo-Tagger annotation (which doesn’t mean a lot to me). I also noted down a reference to something called CIDOC CRM, an ontology for modelling heritage data. An ontology, a way of organising knowledge about or describing a world, is a term that I’ve come across before. In thinking about all this, a question I asked myself was: can Recogito be thought of as a collaborative and structured NVivo (which can be used to hold any kind of data)?
Another project that was mentioned was Peripleo, described as ‘a browser-based map tool which enables the visualisation of information that has a geographical component’. The project used to be funded by Jisc and the Andrew W. Mellon Foundation, and aimed to help researchers collaborate with each other, to share ‘linked open data’, and to assist with tool development. The tools can be useful for history, classics and archaeology researchers, and for cultural heritage professionals.
A recognised difficulty lies with keeping a project and accompanying resources alive after it has formally finished. A recent contribution to Pelagios has been made through the OU’s Open Societal Challenges Project, Pelagios: linking online resources through the places they mention.
There’s clearly a whole lot more to these projects than can be summarised in a relatively short presentation. There are tools, methods, ontologies, collaborations, records, and a whole lot of research that has been taking place. I sense this presentation merely scratched the surface, but did surface some really interesting ideas and questions.
From DH OU to DH at Reading
The next presentation was by Dawn Kanter from the Digital Humanities Hub at the University of Reading, who summarised her art history research about portraiture. An obvious question is, of course: what do computers have to do with art history? When we visit the National Portrait Gallery, we only see the final painting. There are a lot of other interesting details relating to portraiture that I have never really thought about beyond the obvious question of who the painter was. Other relevant bits of information might be: when did a sitting take place? Where did it take place? How much did it cost? And what other portraits did a painter paint? (My notes begin to get a bit sketchy here, so I might well be making up questions.)
Recording sitter information in a portrait-sitting ontology (that word again) translates analogue data into a digital form that can then be studied, searched and scrutinised. In turn, this can help researchers to test hypotheses, such as when there may have been a movement from traditional portraiture to modern portraiture, which may reflect a move from formal to informal sittings (if I’ve noted all this down correctly).
The methodology that is applied in this project is interesting. It moves from an application of an inductive qualitative method, where data is described within a structured ontology, to a deductive method, where the data is then used to answer questions. It is a method that goes beyond the notion of mixed qualitative-quantitative data that my social science training has exposed me to. Both stages of the method rely on interpretation, insight and knowledge. The machine becomes a tool for reflection and discovery.
Digital Humanities and email corpora
Rachele De Felice shared some of her linguistics research on email corpora. In other words, she analyses large sets of email messages, looking at how language is used. Email is interesting, since there are some large data sets available, typically from various well-known scandals. The use of this kind of data poses some unique challenges. Sometimes email corpora can be in different formats which have to be worked with; in other situations, the databases contain emails that are incomplete, since sections may have been redacted.
Rachele’s presentation led me to a question: perhaps this blog is becoming a corpus, one which reflects an element of activity and work that is taking place within the OU. Accompanying questions are: how could it be analysed, and what might a linguistic analysis say?
People and music: exploring their encounters over the centuries
Alba Morales, a Research Associate at KMi, introduced a project called Meetups. The aim of the project was to provide a tool to enable ‘the exploration and visualisation of encounters between people in the musical world in Europe from c.1800 to c.1945, relying on information extracted from public domain books such as biographies, memoirs and travel writing, and open-access databases’. If it is possible to explore the connections between musicians, then maybe it is possible to understand more about a musician’s collaborations and influences.
The interactions are extracted from public access records. A tool chain was developed to extract records from Wikipedia biographies, and further tools were then used to explore the resulting database. The presentation led to a thought, which was expressed as a question: what other sources of data might be used?
For the Digital Humanities outsider looking in, this presentation also spoke to me about the importance of data structures and, of course, ontologies. I reflected on a similarity between this project and the project about portraits: both were using digital tools to study artistic outputs in new ways.
Discussion
Rather than summarise what was a wide-ranging discussion, I’m going to share a couple of themes or topics that caught my attention.
A discussion about reading reminded me of the UK Reading Experience Database, which I remember being mentioned at another event. On this point, there was a reference to theories of intertextuality; each reader will have read a whole host of other texts. Also, what about texts that have always been digital; texts that have never been printed, but are consumed through digital devices?
Another theme that was discussed was, of course, generative AI. On this point there was a reference to potentially using AI to capture (and perhaps present) summaries of political discourse; the topic of a presentation which will be made in a forthcoming KMi seminar.
Resources
During the session, I identified a number of different resources that might be useful:
- UK-Ireland Digital Humanities Association
- OpenARC, The Open University Arts & Humanities Research Centre
- OpenLearn - Digital humanities: humanities research in the digital age
This final resource, a short course, looks particularly helpful.
Reflections
I have been aware of Digital Humanities being a subject in its own right for well over a decade. After reading an article that was shared by our former head of school, I have started to adopt the view that computing should be thought of as a humanities subject. After all, computing is done by people, for people. People design, use and consume digital media; our humanity can, of course, be expressed through digital artefacts.
I remember someone suggesting that artificial intelligence is applied philosophy, since by trying to build ‘intelligence’ we try to understand what it is. In turn, computing can be thought of as a subject that mirrors, magnifies and reflects our humanity. After all, it gives us a way to create and engineer tools that help us to look at ourselves and our histories in creative and imaginative ways.
One of the big takeaways was a more thorough understanding of methods. I can see how important it is to examine and study a domain, to describe or formalise data, to correctly capture data, and to interrogate data to answer research questions. The examples that were shared were really helpful.
Acknowledgements
Many thanks to Andrew Murray, who mentioned this session to me, and to Francesca Benatti, who let me join the session. I hope I can attend similar events in the future.