- In the discussion of task A11 (pp. 279-81) the account of the students' utterances is plausible, but why is transcript data to be preferred to the video data for such a visual task?
It is much easier to analyse and report transcribed data. Video data is time-consuming, since it must be watched and re-watched to examine it in detail, and this is easier to do from a transcript. Ease of use means that more raw material can be examined.
Some forms of analysis can be done directly from the recording, when the researcher needs to attend to what is going on rather than to the fine detail of what people have said.
- A criticism sometimes made of quantitative research is that it uses preconceived categories rather than letting findings 'emerge' from the data. The 'Commentary' on task A11 (pp. 280-1) is qualitative rather than quantitative, but it could be argued that it also uses preconceived categories.
For example, Elaine's words before the intervention, 'No, because it will come along like that', and the fact that the next utterance is by John on the next question are interpreted as, 'She gives a reason to support her view and this is not challenged.'
Her words after the intervention, 'Now we're talking about this bit so it can't be number 2 it's that one. It's that one it's that one' are interpreted as, 'In proposing number 4 Elaine is building on these two earlier failed solutions' (p. 281).
Wegerif and Mercer have prior expectations about 'exploratory talk', defined as 'talk in which reasons are given for assertions and reasoned challenges made and accepted within a co-operative framework orientated towards agreement' (p. 277).
So notions such as 'reason', 'support', 'challenge' and 'failed solution' have specific, preconceived meanings. Do you think it would be possible to avoid the use of preconceived categories when analysing this data?
Glaser & Strauss (1967) published The Discovery of Grounded Theory, which suggests that researchers should set aside the literature and theory and work directly on the raw data to produce categories that are not contaminated by preconceptions. This idea originates with philosophers such as Bacon and Locke, but in that first book the authors do not account for the objection that 'there can be no sensations unimpregnated by expectations' (Lakatos, 1982, p. 15). In their later, separate writings they do discuss the prior knowledge and research that researchers have at their disposal before data collection and analysis.
Charles Sanders Peirce discusses the 'heuristic framework' of concepts that informs a researcher, and suggests that abductive inference combines new data with previous knowledge, so that preconceptions often have to be abandoned or modified.
On this view, preconceptions should be regarded as heuristic concepts: lenses through which the empirical world is viewed.
http://www.socresonline.org.uk/2/2/1.html#s4 [Accessed 13th Feb 2011]
I think it would be difficult to avoid preconceptions when conducting the research, but using initial (open) coding, without worrying at first about categorising the codes, would help. Once any relationships had been explored using diagrams, focused coding could then be used to reduce the number of codes and identify repeating ideas and themes.
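As a minimal sketch of this two-step process (the segments and code labels below are invented for illustration, not taken from the paper), focused coding can start as simply as counting which open codes recur across segments:

```python
from collections import Counter

# Hypothetical open codes attached to transcript segments during
# initial coding, before any attempt at categorisation.
initial_codes = [
    ["gives reason", "refers to diagram"],
    ["disagrees", "gives reason"],
    ["agrees"],
    ["gives reason", "builds on prior idea"],
    ["disagrees", "refers to diagram"],
]

# Focused coding: count the open codes to see which ideas repeat,
# then keep only those above a frequency threshold as candidate themes.
counts = Counter(code for segment in initial_codes for code in segment)
themes = [code for code, n in counts.most_common() if n >= 2]
print(themes)  # the recurring codes become candidate themes
```

The point of the sketch is only that the categories emerge from frequency in the data rather than being fixed in advance; in practice the threshold and the grouping of codes into themes are analytic judgements, not mechanical steps.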
inductivism - theories have to be based on empirical observations, which are generalised into statements that can be regarded as true or probably true (from Hume)
positivism - the only authentic knowledge is that based on sense experience and positive verification (from Comte)
- Again in relation to task A11, what evidence might support the following claim on p. 281?
'In the context of John's vocal objections to previous assertions made by his two partners his silence at this point implies a tacit agreement with their decision.'
Once again it is difficult to discuss this without recourse to the original paper, but it might be possible to support the claim with video evidence of John's facial expression and body language. For example, if he were still engaged with the group activity, this would support the assumption, whereas if he were sitting back or looking at posters on the wall, it would suggest he was bored with the conversation and the authors' assumption would probably be incorrect.
- On p. 281, the authors claim:
'It was generally found to be the case that the problems which had not been solved in the pre-intervention task and were then solved in the post-intervention task, leading to the marked increase in group scores, were solved as a result of group interaction strategies associated with exploratory talk and coached in the intervention programme.'
When you read this claim, did you ask yourself if the researchers had looked at whether this was also true of the control group? If time allows, feel free to look at the papers in which fuller accounts of the study appear.
I was concerned that the control group may have had a different, less comfortable relationship with the teacher/researcher and still have been inhibited in the discussion during the second test.
- In the post-intervention talk around problem A11, John says, 'No, it's out, that goes out look'.
This utterance doesn't use the words 'cos', 'because', 'if', 'so' or a question word, but it is plausible that John is giving a reason. How might one deal with such a problem?
One answer would be a set of coding rules that identified reasoning behaviour from the video (for example, pointing at the puzzle) as well as from the words themselves.
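Such a rule set could be sketched as follows. This is my own illustration with made-up rules, not Wegerif & Mercer's actual coding scheme; the `pointing` flag stands in for behaviour that would have to be coded from the video:

```python
import re

# Hypothetical rules, for illustration only.
# Key words that often (but not always) mark explicit reasoning.
REASON_WORDS = re.compile(r"\b(cos|because|if|so|why)\b", re.IGNORECASE)
# Deictic cues ('that', 'this', 'look') whose reasoning force is
# only visible on video, e.g. when the speaker points at the puzzle.
DEICTIC_CUES = re.compile(r"\b(that|this|look|there)\b", re.IGNORECASE)

def code_utterance(text, pointing=False):
    """Return a rough code for one utterance; `pointing` is a
    stand-in for gesture observed on the video recording."""
    if REASON_WORDS.search(text):
        return "explicit reason"
    if pointing and DEICTIC_CUES.search(text):
        return "gestural reason"  # reason carried by deixis plus gesture
    return "unclassified"

print(code_utterance("No, it's out, that goes out look", pointing=True))
# -> gestural reason: the key-word list alone would have missed it
print(code_utterance("It's number 2"))
# -> unclassified
```

This makes the methodological point concrete: John's utterance contains none of the key words, so a purely text-based concordance search misses it, but a rule that can consult the video evidence can still code it as reasoning.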
- Are you convinced that the study effectively demonstrates the authors' case that:
'the incorporation of computer-based methods into the study of talk offers a way of combining the strengths of quantitative and qualitative methods of discourse analysis while overcoming some of their main weaknesses'?
No; intuitively I agree with the authors, but I cannot confirm it from the evidence that they present.
- What does the computer add to the analysis?
The growing literature on computer assisted qualitative data analysis software (CAQDAS) expresses both hopes and fears. The hopes are that CAQDAS will: help automate and thus speed up and liven up the coding process; provide a more complex way of looking at the relationships in the data; provide a formal structure for writing and storing memos to develop the analysis; and, aid more conceptual and theoretical thinking about the data. In spite of these pros there are a good many criticisms and worries about the software in the literature: that it will distance people from their data; that it will lead to qualitative data being analysed quantitatively; that it will lead to increasing homogeneity in methods of data analysis; and that it might be a monster and hi-jack the analysis.
http://www.socresonline.org.uk/3/3/4.html [Accessed 13th Feb 2011]
- What is the status of computer-based text analysis 10 years on? Spend 20 minutes trying to answer this question by searching the web.
I had a look at ATLAS.ti, which offers a variety of tools for accomplishing all the tasks associated with a systematic approach to unstructured data, i.e. data that cannot be meaningfully analysed by formal, statistical approaches. It is a tightly integrated suite of tools that supports analysis of written texts, audio, video, and graphic data. ATLAS.ti brings to the job highly sophisticated tools to manage, extract, compare, explore, and reassemble meaningful segments of large amounts of data in flexible and creative, yet systematic ways.
Create quotations directly from audio and video files, and work with or without transcriptions. Link audio to text and videos to photos. Treat video clips as you would text files, and draw connections between any kind of data and content.
- create quotations in any type of audio / video / image file just like in textual documents
- annotate multimedia quotations
- hyperlink between multimedia quotations and text files (and vice versa, of course...)
- assign multimedia files like textual files as standalone documents
- How does this paper compare with Reading 1?
My current thinking is that there seem to be two types of qualitative research: one which draws on theoretical concepts to set up firm hypotheses that can be supported or refuted during the investigation; and one which uses theoretical concepts to produce loose conjectures about possible relationships and then examines these by investigating the raw material.
The Hiltz & Meinke paper poses two explicit research questions, each with a firm hypothesis. One is investigated with a null hypothesis and quantitative analysis; the null form suits the statistical tests used to analyse the results.
H1: There will be no significant differences in scores measuring mastery of material taught in the virtual and the traditional classrooms.
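Testing a null hypothesis like this one typically means comparing the two groups' score distributions. As a minimal sketch (the mastery scores below are invented, not Hiltz & Meinke's data, and the helper is my own), Welch's two-sample t statistic can be computed from the standard library:

```python
import math
from statistics import mean, variance

# Made-up mastery scores for illustration only.
vc_scores = [72, 85, 78, 90, 66, 81, 77, 88]  # virtual classroom
tc_scores = [70, 82, 75, 86, 69, 80, 74, 84]  # traditional classroom

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t_stat = welch_t(vc_scores, tc_scores)
# With |t| well below ~2, the null hypothesis of no difference
# cannot be rejected at the conventional 0.05 level for samples
# of this size.
print(f"t = {t_stat:.2f}")
```

In practice one would look the statistic up against the t distribution (or use a statistics package) to get a p-value; the point here is just that the null form gives the test something definite to reject or fail to reject.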
The other uses mainly one-tailed correlational hypotheses, in the form of predicting a significant positive correlation.
H2: VC students will perceive the VC to be superior to the TC on a number of dimensions:
2.1 Convenient access to educational experiences;
2.2 Improved access to their professor;
2.3 Increased participation in a course
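A one-tailed hypothesis like H2 predicts the direction of the relationship in advance, so only a positive correlation counts as support. As a sketch (invented ratings and my own helper, not the paper's analysis), Pearson's r for one such dimension:

```python
from statistics import mean

# Invented student ratings for illustration only: perceived
# convenience of access (H2.1) paired with overall preference
# for the virtual classroom.
convenience = [3, 5, 4, 2, 5, 4, 1, 3]
preference  = [2, 5, 4, 2, 4, 5, 2, 3]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(convenience, preference)
# The one-tailed prediction is supported only if r is positive
# (and large enough to be significant for the sample size).
print(f"r = {r:.2f}")
```

Because the direction is fixed before the data are seen, a strong negative correlation would simply fail the test rather than count as an interesting finding, which is part of why such hypotheses can shape what the researcher attends to.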
I feel very uncomfortable with correlational hypotheses, as it seems to me they are likely to influence the researcher when collecting and analysing data. I think this comes from my science background!
The Wegerif & Mercer paper reports research that seems to have used the second type of analysis, with loose conjectures that are then confirmed through conversation analysis. However, only a review of the research is presented in this paper, and it is hard to judge whether firm hypotheses were used. Unfortunately the OU library only holds the journal from 1997, and the full research was presented in an article from 1996! I tried other university libraries and could not obtain it at all.