
Activity 16.1: Investigating research proposals


A research proposal should be as specific and focused as possible. If the research is being driven by gaps in the existing literature, which of these gaps will you attempt to address? If your research is being driven by theoretical or policy debates, which specific points of these debates are you going to focus on?

Use a contents list and plan carefully so that each sentence follows logically from the one before and the reader knows what to expect. A well-written text is a "chain of ideas" following "verbal signposts" in the text.

Key components are:

Title

Concise and descriptive

Abstract

Approximately 300 words. Include the research question, the rationale for the study, the hypothesis (if any) and the method.

Introduction

• A description of the research problem

A clear and simple formulation of the research question, followed by where the idea came from, clarifying any concepts that need explaining.

• An argument as to why that problem is important

Does the research aim to resolve theoretical questions, develop better theoretical models, influence public policy, or change the way people do their jobs in a particular field?

Describe the context, showing the necessity and importance of the research.

Set the delimitations of the research and define key concepts.

Literature Review

This needs to provide an integrated overview of the field of study showing your awareness of the relevant theories, models, studies and methodologies. It provides a conceptual framework for the reader and demonstrates that the researcher is aware of the breadth and diversity of literature relating to the research question.

You need to demonstrate the manner in which your research questions emanate from gaps in the existing empirical literature or apply a theory to a specific context.

Convince your reader that your proposed research will make a significant and substantial contribution to the literature (i.e. resolving an important theoretical issue or filling a major gap in the literature).

A description of the proposed research methodology

Demonstrate your knowledge of alternative methods and make the case that your approach is the most appropriate and most valid way to address your research question.

Hypotheses

Research Design

Sampling techniques and how they represent the population (vital to validity).

When particular measurement instruments are used, it is important to explain how those instruments were developed, where they have previously been used, and to what effect.

Data collection procedures

Data analysis

A description of how the research findings will be used and/or disseminated

You need to communicate a sense of enthusiasm and confidence without exaggerating the merits of your proposal. That is why you also need to mention the limitations and weaknesses of the proposed research, which may be justified by time and financial constraints as well as by the early developmental stage of your research area.


References

Birmingham City University guidelines: http://www.ssdd.bcu.ac.uk/learner/writingguides/1.07.htm

University of Nottingham Business School: http://www.nottingham.ac.uk/business/phd/Proposal.html

Trinity Western University: http://www.meaning.ca/archives/archive/art_how_to_write_P_Wong.htm

 

 


H809: Activity 14.1: Reading 17: Lindroth & Bergquist 2010


Activity 14.1: Reading the paper (3 hours)

Read the paper in full and consider these questions:

  • What methodological advantages does the study claim over previous research?

Using an ethnographic approach to study the students' learning intentions allows consideration of the whole range of activities for which students use a laptop during lectures, and thus opens up an understanding of how student learning changes in the lecture context. This contrasts with isolated studies that compare laptop use with traditional note-taking behaviours.

  • What might be the disadvantages of this approach? What alternative methods of data collection or analysis could have been possible?

The students' learning intentions may not coincide with the lecturers' learning intentions, but this would be difficult to detect without triangulation. In this case a lecturer is also a researcher and may be presuming learning intentions.
Inability to generalise to other audiences.
Quantitative results from survey data would allow comparison with groups of students from other disciplines.
Screen capture, recording of key presses, and monitoring of internet usage could provide alternative sources of data.

  • How does the approach relate to the account of ethnography provided by Hammersley in Reading 15 last week? Can the research be considered an ethnography?

Fairly lengthy time period of 4 years is quoted, but it is actually 3 × 10-week courses
Development over time is not considered
Following students for the whole day
Participation - researchers only participate in the role of lecturer
Context is examined but not accounted for, i.e. an IT course with students who are used to using technologies and encouraged to use laptops. I cannot find any mention of the type of university.
Methods include semi-structured interviews and observation
Concentration on perceptions of participants
Concentration on the lecture

  • What do Goffman's account of "involvement" and the authors' use of the term "alignment" add to the research?

These terms allow examination of laptop use at various levels of involvement with the lecture. It removes the dichotomy of 'good' and 'bad' use of a technology and examines the perceptions and techniques of the students. I was really interested to see that laptop use appeared to be deepening the learning experiences of the students as they moved from passive consumption of a lecture to relating the information to their own requirements and also to critical analysis of the information being presented to them.

  • To what extent were the authors able to present evidence to support their major findings? Would other researchers with the same data reach similar conclusions?

The methodology is heavily theory-based and the conclusions drawn appear to come as much from the literature as they do from the actual research. The authors seem to have undertaken the research from a pre-determined viewpoint and many other interpretations may be possible from the reported conversations.

  • To what extent do the implications for practice follow from the findings?

Etiquette - this follows from interview data showing student confusion over what is permissible but more work is required as rules may restrict and confine use and reduce the usability of laptops.

More training in IT skills does not follow from the research findings but does seem intuitive, as students need to be competent in a range of techniques in order to take notes efficiently.

No evidence is shown for the laptop acting as 'glue' between different settings - again this is intuitive.

Evidence is produced indicating that students become more involved in lectures but these are IT students who are presumably comfortable with various technologies and able to use them without the use taking up a lot of their concentration. Many students from other disciplines may not be able to multi-task like this as the actions are not habitual.

Design for minimal disturbance also seems to be produced from literature review rather than from evidence derived from this particular study.

  • Do you think there are aspects of laptop use in education not considered by the paper?

The effect of the fact that this is an IT course
Whether all students have laptops they can use

Mention is made of students using unaligned subordinate activities but not of the perception of other students to these activities. I have personally experienced complaints from students who are distracted from lectures by highly coloured flashing games screens.

  • What could a section on ethics have considered?

'Covert analysis' -

The effect of the teacher carrying out the research. Power positions/control of marks in course

Ethics of asking 'who is on your contact list?' and how willing young people are to disclose this material.

  • Do the grammatical errors in the paper affect your assessment of the quality of the research?

OK - I have to admit here that I am a real pedant over grammar and spelling! Incorrect use of English in an academic setting really annoys me and I do find it difficult to look past this to see the true value of the research. I consider that a researcher who does not thoroughly check their presentation will also be the sort of person who is unlikely to thoroughly check their literature background; research their methodology; check their results; and thoroughly consider their conclusions. I would tend to avoid their work or assess it very carefully if I needed to consider its findings. I cannot understand why Computers & Education published this paper in its current form.

 


H809: Activities 13.4, 13.5: Reading 16: Gillen 2009


 

Activity 13.4: Reading 16 (2.5 hours)

Read Gillen's paper in full.

  • To what extent do you think the "new synthesis of methods" (p.72) is actually new?

Gillen uses the term 'virtual literacy ethnography' for her 'new synthesis of methods'. I am not sure that these really are new methods, or even a new collection of methods, as she says that "I aim to explore the range of literacy practices...." (p.57) but then goes on to say that there was only "a tiny amount of speech in-world" (p.59) and that conversation was mainly written, the forum was written, and all the material on the Wiki that she describes is also written. She does mention 'multi-modality' (p.69) in connection with the Wiki but this is not analysed or even discussed. Julie Coiro has published some interesting work on new literacy practices, including being one of the editors of The Handbook of Research on New Literacies, where literacy is discussed as a constantly evolving medium in which the author has to make choices about which forms and functions most suit their purpose. Gillen appears to have restricted the analysis to the traditional analysis of language used, although she also comments on collaboration.

The method does not seem to address the aim, as it only analyses the written word, with a frequency list of words drawn up and compared to newspaper articles, correspondence and everyday adult conversation - very different conversation forms from those used when encountering another young person's character in SL.

  • Would your methods have been any different?

I would have liked to explore the young people's choice of the literacies available to them. Lawrence (2005) discusses the use that young people make of slices of digital animation, video and audio in their literacies. In order to examine this complex field, the researcher needs a framework to give the structure and order required to examine literacy from diverse perspectives. I am currently uncertain how I would do this, but I am inclining towards Activity Theory as it looks at the temporal dimension of the activities as well as the tool, context etc.

Looking at a single issue, I think I would have liked to examine the support networks that the students set up between themselves in order to develop their literacy practices. For example, Gillen illustrates the literacy of the group by referring to the setting up of the dictionary. I would be interested in how this was done: how they chose and defined the terminology; how they negotiated the methods by which they wrote the definitions. This collaborative aspect of new technologies intrigues me as I work with many young people with Asperger Syndrome who find it very difficult 'letting go' of their work when it needs to go towards a group project.

  • How does Gillen's approach relate to the account of ethnography provided by Hammersley in Reading 15? Can the research be considered an ethnography?

I do not consider this work to be a true ethnography. Hine (2000) defines ethnography as 'a researcher spending an extended period of time immersed in a field setting, taking account of the relationships, activities and understandings of those in the setting and participating in those processes' (see activity 13.1 for ref.). Gillen has spent time in-world but does not seem to have taken full part in forums and the Wiki and although she joins in, she is a staff member and researcher and the chat log extract seems to show that the main members of the group tolerate and carefully instruct her in simple tasks, much as an adult is allowed to join in children's games. The analysis concentrates on only one aspect of the activities - the language used, where true ethnography would examine the relationships, context and perceptions of those using the environment.

  • What do you think of the researcher's avatar having the message "logging chat" above her head?

It marks her out as different and allows others to avoid her if they do not want their chat recorded. This is great from an ethical point of view but not so beneficial to research. On a simplistic view, ethics generally considers online information to be public if it can be accessed openly by anyone with an internet connection or if the participant understands that it is public. The 'logging chat' logo is intended to inform participants that their conversation is public.

  • Given the limitations of space, the discussion of ethics in the paper is brief, but you will have come across other papers about researching Second Life in earlier weeks. Try to list the key ethical issues for such research.

Ethical decision-making and Internet research - Recommendations from the Association of Internet Researchers (AoIR) ethics working committee (2002)

Informing the participants that the information they are sharing is public

Allowing participants more information about the research e.g. notecard (Rosenburg, 2010 p.26)

Quoting using a person's avatar name should be checked with the person before publication

Simple observation cannot happen; researchers must participate, so representation is important

I liked the diagram shown in McKee & Porter (2009) which allows for the researcher to use discretion to determine when informed consent is necessary:

[Block diagram from McKee & Porter (2009) showing areas where informed consent is necessary]

 

 

 


H809: Activities 13.1, 13.2, 13.3: Reading 15: Hammersley 2006


 

Activity 13.1: Reading 15: Ethnographic understandings of context (2 hours)

Read pp. 3-8 of Hammersley's paper, up to the heading 'Context as virtual'. Identify the ways in which Hammersley talks about context and, in particular, what he identifies as ethnographic understandings of context.

In Week 8 context was identified as an issue in research methods generally. How do you think Hammersley addresses the issues concerning context raised in Week 8?

Notes

Ethnography from an anthropologist's point of view:

  • Living in a community continuously for a long period
  • Participating
  • Interviewing
  • Mapping
  • Studying genealogy
  • Collecting artefacts

Ethnography in other social sciences:

  • Use a particular context
  • Months rather than years (technology means that a large volume of data can be collected)
  • Some attempt to use a macro view e.g. use of frameworks to assist in studying social situations and temporal cycles
  • A snapshot - there can be a risk that it is treated as the normal situation

Micro-ethnography vs. holistic ethnography where research is located in context of wider society

Participants contextualise their activities themselves

Removal of participants and placing them in different context can be considered a violation

Do we always articulate the context?

Holistic - what does 'wider context' mean?

If wider on global scale, how do we obtain the information?

If we use other sources, how does this constitute grounded theory?

Context is arbitrary as it is how it is perceived by one person at any one time.

My Ideas

I am new to this area as I have not looked at research using ethnography before. I have found it very interesting and can see the use of long term anthropological studies but I am still struggling with the idea of using micro-ethnography as it seems a contradiction in terms. I am also still unsure on how applicable the term can be to studies of virtual environments. In order to try to understand this I got some books out of Keele University library:

Miller, D. & Slater, D. (2000) The Internet: an Ethnographic Approach. Oxford, BERG.

Miller and Slater define ethnography as 'a long term involvement amongst people, through a variety of methods, such that any one aspect of their lives can be properly contextualised in others' (p.21).

  • Although the study described in this book only spent 5 weeks in Trinidad, it relies on 11 years of previous work on diverse topics such as business, kinship and identity - pp. 21-22
  • Internet data, interviews, surveys, email, chat records - p.22
  • Several sites - p.22

They mention how they extend their ethnographic studies across various sites (London, New York and Port of Spain) and how that is criticised in Marcus (1995) which is available from the OU library:

Marcus, G.E., 1995. Ethnography in/of the World System: The Emergence of Multi-Sited Ethnography. Annual Review of Anthropology, 24(1), pp.95-117.

Hine, C. (2000) Virtual Ethnography. London, Sage.

Hine defines ethnography as 'a researcher spending an extended period of time immersed in a field setting, taking account of the relationships, activities and understandings of those in the setting and participating in those processes'. She further suggests that the ethnographic perspective can be adapted to look at how the status of the internet is negotiated in a local context by examining it as a culture in its own right and as a cultural artefact. Too much to explain here but there is some very interesting information in Chap. 2. Hine goes on to look at how ethnographic studies concerning the internet vary from normal ethnographic studies but still retain the same ethos.

 

The Tolmie (2001) paper we looked at in week 8 was behaviourist in approach with much of the language focused on the effect of the various parts of the 'context' on the learner.  Hammersley (2006) discusses context in a much more constructivist manner as something that is arbitrary as it is not only varied both temporally and physically but also differs in the way it is perceived by one person at a given instant in time.

Activity 13.2: Reading 15: Virtual context (2 hours)

Read pp. 8-10, up to the heading 'Ethnography as political'.

  • What points does Hammersley make about the 'virtual' as opposed to being physically present?

Does the researcher need to be physically present? (p.8)

What does 'physically present' mean in an online setting? (p.8)

Do we need to know the participants' social background? (p.8)

Observational data can only come from natural situations and ignores perceptions - sometimes it ignores perspectives (the traditional approach) or it may infer them from the observation (p.10)

Discursive strategies - makes the assumption that people will use the same techniques in other contexts (p.10)

'Experimental' ethnography does not pretend realism, as it considers accounts to be similar to modern literature. Hammersley suggests that moving it to imaginative literature abandons critical inquiry, but I am not sure this is true, as literary scholars have always examined literature critically, for example examining the motivations behind the way Shakespeare described the Jews in The Merchant of Venice.

  • In what ways could ordinary face-to-face settings be thought of as virtual?

My thoughts on this are that people always use an identity that suits the situation and that they will only disclose their background if they feel comfortable in doing so, or if it is obvious. The situation is the same online, except that 'the obvious' may not be so obvious! Hammersley suggests that we must always treat what people say critically and not at face value. (p.9)

When interviewing a participant, either virtually or face-to-face, we cannot immerse ourselves in their world. The participant uses socio-discursively constructed comments (p.9) i.e. they are dependent on context which involves their perception of the interviewer. I was Skype interviewed for one piece of research when the interviewer was someone who had been my tutor. I trusted him to be secure and so I was honest but I also felt I needed to present in a self-critical and academic manner.

 

 


Activity 13.3: Discussing the paper (1 hour)

Once you have made your own postings read the other contributions and focus on the section, 'On the uses and limitations of interviews'.

  • What do you think are the limits of interview data?
  • What role is there for observation?

My thoughts so far:

Do we need to specify the interview context to some extent? In my Skype interview, the interviewer discussed the best time to contact me. He suggested a time when I could concentrate, undisturbed for an hour. He also checked that the time was still OK when he contacted me.

We must emphasise that the interview is a 'snapshot in time' and specific to that context. The context includes the interviewer, and the same participant may not give the same response to a student-interviewer as they would to a lecturer-interviewer. The participant may be tired, stressed or ill and thus influenced to answer harshly or to give short answers in order to curtail the interview. It may be possible to pick up some clues as to these contextual factors and thus address them by judicious questioning if a webcam is also used, so that observational data is included as a form of triangulation. However, participants may object to the use of a webcam and this should be taken into account.

The influence of the researcher. It is argued that the presence of the researcher is restricted in online situations but power positions are still present. Social position can also be communicated by the use of Standard English in typing and speech and also by accent in Skype or by telephone.

The act of questioning. I believe that the act of asking people to take part in an interview is likely to make them consider their opinions more deeply; discuss the situation with friends who could influence them; or become concerned about the motives of the researchers. For example, I am researching dyslexia support sessions at university for another course and some students have asked me whether the research will lead to the sessions being discontinued.

Reflexivity. Researchers need to acknowledge and declare their stance with respect to their political and value position and the way that position may affect the design of their research, its execution and interpretation i.e. develop a reflexive attitude to their research.

 


H809: Activity 12.8 & 12.9 Reading 14


H809: Activity 12.8/9: Reading Paper 14 (2 hours)

Read the paper by Cox (2007). When making notes, think about the following questions.

  1. Clearly, as Cox says, 'It is relatively easy to collect voluminous amounts of process data', but what else has been added here?

Technology-enhanced research (TER) allows the modelling of the process to determine the best points and forms of intervention. A framework devised by Chi (1997) has been suggested in order to determine the granularity of the units being analysed and then to develop a coding scheme based on this.

  2. What do you think he means by 'triangulated' and 'multiple data source approaches'?

Triangulation is a method used by qualitative researchers to check and establish validity in their studies.

Data triangulation: A key strategy is to categorise each stakeholder in your study and include a comparable number of people from each group.

Investigator triangulation: several investigators are used, each applying the same method; findings are compared and, if they are similar, validity is established.

Theory triangulation: Multiple perspectives i.e. from different disciplines or different theoretical positions within disciplines.

Methodological triangulation: multiple qualitative and/or quantitative methods.

Environmental triangulation: different locations, settings and other key factors.

[From: http://www.rayman-bacchus.net/uploads/documents/Triangulation.pdf Accessed 2nd May 2011]

I thought Cox (2007) had a slightly different interpretation of triangulation from the one I had come across before. He describes his approach as data triangulation, as the data is retrieved from a variety of sources, but I would have considered this methodological triangulation, as multiple methodologies are required to collect the data. I suppose that the terminology does not really matter, as the whole point of triangulation is to validate the data by approaching it from various angles and comparing the results to see if they are consistent.

 

 

I had a few problems reading this paper. I kept getting confused between granularity as used in this context; granularity as referred to in the IEEE/LOM; and the granularity used in data management for databases. All very similar but with enough overlap to confuse me!

My thoughts so far are as follows:

Learner performance is a very high level of granularity.
Learner performance on one exercise is a lower level of granularity.
Learner performance on one aspect of an exercise is an even lower level of granularity.

I think that Cox was saying that a researcher needs to determine the lowest level of granularity required to answer the research question and then use this as a basic unit in order to model the process using data gleaned from various methodologies.

Is this what others understood from the paper?

 

 

 


H809: Activity 12.7: Investigating objectivity


H809: Activity 12.7: Investigating objectivity (2 hours)

Objectivity is one of the most cherished ideals of the educational research community. In fact it is so important that if our work is accused of being subjective, its status as a source of knowledge sinks slowly into the horizon like a setting sun. Yet, though we use the term objective with ease in our conversations and in our literature, its meaning is not particularly clear, nor ... are the consequences of the tacit, almost unexamined assumptions upon which it rests.

(Eisner, 1992, p. 9)

It turns out, then, that what is crucial for the objectivity of any inquiry - whether it is qualitative or quantitative - is the critical spirit in which it has been carried out. And, of course, this suggests that there can be degrees; for the pursuit of criticism and refutation obviously can be carried out more or less seriously. 'Objectivity' is the label - the 'stamp of approval' - that is used for inquiries that are at one end of the continuum.

(Phillips, 1989, p. 36)

To pursue objectivity-or truth-to-nature or trained judgment-is simultaneously to cultivate a distinctive scientific self wherein knowing and knower converge. Moreover, the very point at which they visibly converge is in the very act of seeing not as a separate individual but as a member of a particular scientific community.

Daston, L. & Galison, P. (2007) Objectivity. Cambridge, Massachusetts, Zone Books.

Acknowledging the subjectivity of statistical analysis would be healthy for science as a whole for at least two reasons. The first is that the straightforward methods of subjective analysis, called Bayesian analysis, yield answers which are much easier to understand than standard statistical answers, and hence much less likely to be misinterpreted. This will be dramatically illustrated in our first example.

The second reason is that even standard statistical methods turn out to be based on subjective input - input of a type that science should seek to avoid. In particular, standard methods depend on the intentions of the investigator, including intentions about data that might have been obtained but were not. This kind of subjectivity is doubly dangerous. First, it is hidden; few researchers realize how subjective standard methods really are. Second, the subjective input arises from the producer rather than the consumer of the data - from the investigator rather than the individual scientist who reads or is told the results of the experiment.

Berger, J.O. & Berry, D.A. (1988) 'Statistical Analysis and the Illusion of Objectivity'. American Scientist, vol. 76, issue 2, pp. 159-165.

 

In order to defend the validity or objectivity of interpretation against the 'natural attitude' of the researcher, Husserl believed that any preconceptions or beliefs held by the researcher should be examined, acknowledged and then put to one side or 'bracketed'; a process also known as 'reduction' in phenomenology.

Researchers subscribing to Heideggerian philosophy acknowledge that they can only interpret something according to their own beliefs, experiences and preconceptions, which are a legitimate part of the research process and should not be left out.

A defining 'quality indicator' in Heideggerian research is a detailed explication of the interviewer's preconceptions and reference to these throughout the research process. In contrast, a 'quality indicator' in Husserlian phenomenology is an account of how the interviewer's preconceptions have been treated so as not to influence the research in any way.

Lowes, L. & Prowse, M.A. (2001) 'Standing outside the interview process? The illusion of objectivity in phenomenological data generation'. International Journal of Nursing Studies, vol. 38, pp. 471-480.

 

 


H809: Activity 12.4 & 12.5: Second Life research in education


I looked at a few journal papers and also looked at some more general articles on accessibility in Second Life as this is an area that impacts on my work and in which I am especially interested. I have included two journal articles in my blog and also two magazine reports.

The research that I examined all seemed to involve practitioner-researchers using SL as a tool within their educational settings, and the analysis was based on the assessments that the learners completed, together with questionnaires, interviews and practitioner perceptions. Little account was taken of the fact that the learners were being assessed, that the researchers were closely involved with the learners, or of the effects that these factors could have on the results. All the research I looked at was qualitative and studied constructivist, problem-based learning.

 

Good, J., Howland, K. and Thackray, L. (2008) 'Problem-based learning spanning real and virtual words: a case study in Second Life', Research in Learning Technology, 16: 3, 163 - 172.

Project teams were paired with clients from Sussex Learning Network partner institutions to design learning experiences that corresponded to real curriculum needs within the vocational learning arena. The clients were asked to identify issues which were difficult, dangerous or impossible to teach in real life, to hold an initial meeting with the student team to outline their problem area, without offering a solution, and to provide additional input if required during the project.

Assessment by portfolio:
Production of a machinima (a short film shot within SL) which showed the highlights of their learning experience
A group document describing the project overall
An individual document grounding the learning experience in the relevant literature, reflecting on the overall experience, providing a critique of their learning experience and engaging in a broader discussion of the value of IVWs for learning.

The project was supported by eight sessions of two hours each, where students were introduced to a range of learning theories, initial orientation and building classes in SL and mentoring by a staff team experienced in interactive technologies, learning theory and SL.

Schiller, S.Z. (2009) 'Practicing Learner-Centered Teaching: Pedagogical Design and Assessment of a Second Life Project', Journal of Information Systems Education, 20, 3, pp. 369-381

The Second Life project was implemented in an MBA-IS course in which thirty-two students were randomly assigned to eight teams. Each team managed an avatar and completed a series of business-related activities.

Teacher facilitates with guided activities
Snapshots
Chat transcripts
Reflection essays
Group presentation in class
Post-activity survey

As this was part of an official course, the assessments were valuable to the students but also gave feedback to practitioner-researchers. Results may be biased due to students perceiving requirements for positive feedback in order to pass the course.

 

Springer, R. (2009) 'Speech in a Virtual World, Part II', Speech Technology Magazine, 14, 7, p. 42

Programs have been designed specifically to integrate assistive technologies with SL so disabled users can participate. Two of these are TextSL and Max, the Virtual Guide Dog.

TextSL, a free download, harnesses the JAWS engine from Freedom Scientific to enable visually impaired users to access SL using the screen reader. TextSL supports commands for moving one's avatar, interacting with other avatars, and getting information about one's environment, such as the objects or avatars that are in the vicinity. It will also read the text in the chat window. The program, which was created by Eelke Folmer, an assistant professor of computer science and engineering at the University of Nevada-Reno, is compatible with the JAWS screen reader and runs on Windows, Mac OS, and Linux.

Max, the Virtual Guide Dog, was created as a proof-of-concept to show that SL could be made accessible to people with all types of disabilities. Max attaches to one's avatar, and its radar moves the user and interacts with objects. Max can tell a user what she can reach out and touch, printing the information into the chat window. Max can also help a user find a person or place and transport the user to a desired location. If a device or object has a .WAV file associated with it, then Max can play the audio file as well.

 

Springer, R. (2009) 'Speech in a Virtual World', Speech Technology Magazine, July/August.

http://www.speechtechmag.com/Articles/Column/Voice-Value/Speech-in-a-Virtual-World-55189.aspx

An estimated 50 million to nearly 200 million people use virtual worlds like Second Life (the range can be attributed to an overlap of users among sites and a differentiation between registered and active users). We don't have hard numbers regarding how many users have disabilities, but statistics on video gaming offer insight. As many as 20 percent of the estimated 300 million to 400 million video gamers are disabled, a 2008 survey by PopCap revealed. Considering that roughly 15 percent of the U.S. population is disabled, people with disabilities are overrepresented in the gaming market. Those surveyed reported more significant benefits from playing video games than their nondisabled counterparts.

 


H809: Activities 12.1, 12.3: Davies and Graff


Activity 12.1: Read and critique the paper (3 hours)

Imagine you have been asked to peer review Davies and Graff's paper as an academic referee. The request has come from an international journal of elearning and the editor wants your opinion of the paper.

a) Introduction

Harvard convention is cf not e.g.
Four authors should be cited as et al., not listed in full
English is very informal and sounds contrived in places e.g. 'there is therefore' and 'what needs to be investigated' p.658
'One reason for the importance....is...' p.658 Is this really this definite?
Research question - why does this need to be investigated?

b) Research design

Is this university/college/distance learning?
All on the same degree, which is consistent, but were they studying the same six modules?
Blackboard is a trade name and should be acknowledged, referred to with a capital letter and referenced.
Blackboard statistics - what are these? It is not clear what is recorded: the number of times logged on or the time spent online. Students could log on and straight off again, or leave the program running in the background. Later on (p.659) the authors report that it is the number of times logged on that is recorded.
Is this a blended course? Could these students be meeting f2f as well? Is this accounted for? Some students may be arranging their own support sessions in the library or coffee shop - is this checked or accounted for?

c) Analysis

Does access to group or communication area = participation? Were they just reading the boards?

Kruskal-Wallis requires the following data characteristics:

  • Data points independent from each other - yes
  • Distributions do not have to be normal and variances do not have to be equal - yes
  • Ideally more than five data points per sample - just 4
  • Individuals selected at random from the population - whole population used
  • All individuals must have equal chance of being selected - whole population
  • Sample sizes should be as equal as possible - 10 to 31 so large differences

Not sure about this. Would it have been better to analyse without grouping? This would have given 98, 101, 85, 99, 70 and 80 scores per sample, which is certainly greater than five data points and more equal than the group sizes.

The correct format is to report the direction of the data, e.g. the right-tail probability (0.0052) is lower than 0.05, so H0 must be rejected.
The null hypothesis is not stated - a positive hypothesis is proposed, stating that more time spent in communication achieves better grades (one-way analysis), yet the authors report that it is the number of times logged on that is recorded. I assume the null hypothesis to be: there is no relationship between time spent in online communication and grades.
In all cases (for the total analysis) the null hypothesis should be rejected, indicating that there may be a relationship between more time spent in online communication and better grades.

The authors suggest (p.659) that greater activity as measured by Blackboard usage is likely to lead to better performance in terms of module grade. This cannot be presumed from the initial data analysis without further analysis with a non-parametric multiple comparison test.
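
To see what such a follow-up might look like, here is a rough sketch in Python with SciPy. The grade lists are made up for illustration, and pairwise Mann-Whitney tests with a Bonferroni correction stand in for a dedicated non-parametric multiple comparison procedure such as Dunn's test:

from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical module grades grouped by level of Blackboard activity (illustrative only).
groups = {
    "low_activity": [48, 52, 55, 50, 47, 58, 61, 49],
    "medium_activity": [55, 60, 62, 58, 57, 64, 59, 63],
    "high_activity": [62, 66, 70, 65, 68, 72, 64, 69],
}

# Omnibus Kruskal-Wallis test: are the group distributions detectably different?
h_stat, p_value = kruskal(*groups.values())
print("Kruskal-Wallis H = %.2f, p = %.4f" % (h_stat, p_value))

# If the omnibus test is significant, follow up with pairwise comparisons,
# adjusting the significance level for the number of comparisons (Bonferroni).
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    u_stat, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    verdict = "significant" if p < alpha else "not significant"
    print("%s vs %s: U = %.1f, p = %.4f (%s at corrected alpha)" % (a, b, u_stat, p, verdict))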

P.660 - Using the proportion of time spent in interaction compared to task areas as a percentage of total usage. Time was not actually used - it was 'hits'. However, this figure is probably not reliable, as greater weight will be given to lower total participation rates. For example, if a poor student only logged on three times and twice accessed the interactive areas, then they would record 67%; another student accessing the site 100 times and equally accessing the task and interaction sites would only record 50%. Hence a reader could not rely on the statistics generated from these figures.
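
To make the distortion concrete, a tiny Python illustration with made-up hit counts (not data from the paper):

# Percentage of total Blackboard hits that fall in the interaction areas.
def interaction_share(interaction_hits, total_hits):
    return 100.0 * interaction_hits / total_hits

# A student with very low overall activity scores higher than a far more active one.
print(interaction_share(2, 3))     # about 67% for a student who logged on only three times
print(interaction_share(50, 100))  # 50% for a student with a hundred hits, split equally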

d) Discussion

I am not convinced that 'the students achieving high or medium passing grades engaged more actively with the course' (p.661) as this is not confirmed by the data analysed which measures the number of times logged onto Blackboard.

The 'proportion' analysis is flawed and cannot be used to support conclusions and this 'observed difference' was not statistically significant anyway.

Page 662: Connolly et al. is missing the full stop after the Latin phrase on both occasions.

e) Conclusion

No evidence is presented for the second paragraph.

Activity 12.3: Further research (20 minutes)

The authors talk about testing whether 'the frequency of interactions are more important in providing support, whereas the quality and dynamics are the more important factors in learning and performance'. Note down the kinds of problem you think such testing might involve.

  • Monitoring frequency of interactions - especially difficult on a blended course but also online as I personally interact with other course members by Skype, email, Twitter and Facebook as well as the forums and blogs.
  • What constitutes 'support'? How is it defined? Academic or emotional? The distinction between the two can be unclear.
  • Performance can be affected by many factors other than interaction. It could be difficult to measure the contribution of interaction amongst other factors such as IQ, past experiences, emotional and physical difficulties, current lifestyle stresses etc.

 

 

 


H809: Activity 11.9: Validity


Reading 11: Bos et al. (2002)

Effects of four computer-mediated communications channels on trust development

To what extent does the study demonstrate that its findings generalise to other participants, places or times?

The paper states that social dilemma tasks elicit exploitative and self-protective behaviours, so the study relates to interpersonal trust in these types of risk. The population is students and others attached to a university, with the associated assumptions about technological literacy and intelligence levels.

To what extent are causal relationships, rather than just correlations, demonstrated?

Just correlations - there are many factors which may affect trust development and it is uncertain how many of these were kept constant. It is reported that the participants did not know each other before the study but they may have had friends in common which would affect trust. Personal disclosures were discouraged but it does not report whether conversations were monitored. I believe that it has previously been found (can't find ref.) that personal disclosures in online situations can encourage group formation.

Are the instruments used in the study actually measuring what the researchers claim they measure?

I do not think that there is necessarily a link between the group pay-off and the degree of cooperation. For example, there must be an aspect of intelligence required to understand the necessity for cooperation. In this case the participants were all students or others associated with the university, so a level of intelligence can be assumed, but this may not correspond to an understanding of how the game works.

How strong is the evidence for the claims?

A major limitation of the results from a one-way ANOVA is that it does not say how the means differ, just that they are not all equal to each other. To solve this, post-hoc tests can be used - tests conducted after you already know that there is a difference among the means. Given a set of three means, the Tukey procedure will test all possible two-way comparisons: 1&2, 1&3, and 2&3, and it is designed to reduce the likelihood of producing false positives when doing pairwise comparisons.

This showed a significant difference between text and the other three conditions, but not between those three. The test was conducted at the end of the study.
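
For my own notes, a rough sketch of a one-way ANOVA followed by Tukey's HSD in Python, using SciPy and statsmodels; the cooperation scores here are invented and are not the paper's data:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up cooperation scores for the four communication conditions (illustrative only).
scores = {
    "face_to_face": [0.82, 0.78, 0.85, 0.80, 0.79, 0.84],
    "video": [0.80, 0.76, 0.81, 0.77, 0.83, 0.78],
    "audio": [0.79, 0.75, 0.80, 0.78, 0.82, 0.77],
    "text": [0.60, 0.58, 0.65, 0.55, 0.62, 0.59],
}

# Omnibus one-way ANOVA: are the condition means all equal?
f_stat, p_value = f_oneway(*scores.values())
print("ANOVA F = %.2f, p = %.4f" % (f_stat, p_value))

# Tukey's HSD then tests every pairwise comparison while controlling
# the family-wise error rate.
values = np.concatenate([np.array(v) for v in scores.values()])
labels = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))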

Are alternative explanations possible?

High-quality video compared with conference telephones for the audio condition, and a very simple text system. Chatspace is a very simple, poor-quality site with spelling mistakes and poor text/background contrast on the home page.

There may have been people who did not work well in groups and created problems in group trust formation. This may have been detected by the post-study questionnaire but it does not say whether these were excluded or included in the results. I presume they were included but how were they distributed throughout the groups? Were there more in the text group?

How could claims be tested more strongly?

I would like to see good quality technology used for each condition and recording of the sessions to give a richer understanding of how the relationships formed. In some cases were there individuals who obstructed the ability to form trust? This could be detected from recordings of text messages, chat and video.

Reading 12: Ardalan et al. (2007)

A comparison of student feedback obtained through paper-based and web-based surveys of faculty teaching

To what extent does the study demonstrate that its findings generalise to other participants, places or times?

A student population was used, so it may be possible to generalise to other student populations, which was the aim of the article, but not to other populations. The web-based survey is claimed to be equally available to all students, which may be true for a university population but not for the general population.

To what extent are causal relationships, rather than just correlations, demonstrated?

Just correlations.

Are the instruments used in the study actually measuring what the researchers claim they measure?

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. The chi-square tests the null hypothesis, which states that there is no significant difference between the expected and observed result.

The t-test assesses whether the means of two groups are statistically different from each other.
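
A small worked example of both tests in Python with SciPy, using invented numbers rather than the study's data:

from scipy.stats import chisquare, ttest_ind

# Chi-square: observed respondent counts compared with the counts expected
# under the null hypothesis of equal response rates in the two semesters.
observed = [180, 140]
expected = [160, 160]
chi2, p_chi = chisquare(observed, f_exp=expected)
print("chi-square = %.2f, p = %.4f" % (chi2, p_chi))

# Independent-samples t-test: do mean ratings differ between the
# paper-based and web-based survey groups?
paper_ratings = [4.2, 3.8, 4.5, 4.0, 3.9, 4.1]
web_ratings = [3.9, 3.6, 4.1, 3.8, 3.7, 4.0]
t_stat, p_t = ttest_ind(paper_ratings, web_ratings)
print("t = %.2f, p = %.4f" % (t_stat, p_t))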

How strong is the evidence for the claims?

Statistics seem to be good but there are many variables which are unaccounted for: the same courses were assessed but there is no mention of whether the module format was changed or whether teaching staff were changed.

Are alternative explanations possible?

Testing so many hypotheses at one time leads to confusing results. There is no mention of how the paper-based questionnaires were presented to the students and who presented them, although there is comprehensive discussion of how the web-based survey was presented.

The change in response rate is put down to the change in format from paper-based to web-based, but no account is taken of how positions had changed over the 12-month period. The students were used to the paper-based survey and not familiar with the web version, and it seems that the paper-based survey was to some extent enforced, as it appears to have been handed out in class.

How could claims be tested more strongly?

Which ones!

I would like to see a dual-methodology approach to find out why students chose to participate in the survey and to give some depth to their answers. Were the answers on the enforced paper-based survey less true to students' real feelings?

 


H809: Activity 11.8: Validity


I found it rather interesting reading about the Hawthorne Effect:

'A term referring to the tendency of some people to work harder and perform better when they are participants in an experiment. Individuals may change their behavior due to the attention they are receiving from researchers rather than because of any manipulation of independent variables'

Accessed from: http://psychology.about.com/od/hindex/g/def_hawthorn.htm

...and even more interesting to find out that when the original data were re-examined, it was found that the lighting conditions at the factory were always changed on a Sunday and so productivity on a Saturday was compared to the new conditions on a Monday. Further investigation showed that whether lighting was changed or not, productivity always went up on a Monday as it was the start of the week.

Accessed from: http://www.economist.com/node/13788427?story_id=13788427


H809: Activity 11.6: Reading papers 11 & 12


Activity 11.6: Reading the papers (5 hours)

Effects of Four Computer-Mediated Communications Channels on Trust Development. Bos et al. (2002)

Questions: What research questions are being addressed?

Is trust development inhibited in video and audio when compared to face-to-face settings?

Setting: What is the sector and setting? (e.g. school, higher education, training, informal learning)

  • University students - did not know each other outside games
  • F2F, audio, video, text chat
  • Single and Mixed gender groups
  • Social dilemma game (Daytrader)

Concepts: What theories, concepts and key terms are being used?

  • Trust - 'a willingness to be vulnerable based on positive expectations about the actions of others'
  • Delayed trust
  • Fragile trust
  • Bordia (1997) - text-based interaction less effective for tasks that have a high social-emotive content.
  • Audio-conferencing - encourages domination by high-status group members (France et al., 2001)
  • Text-based - fosters equal participation (Kiesler et al., 1984)

Methods: What methods of data collection and analysis are used? (e.g. the number of participants; the type of technologies; the use of interviews, surveys, observation, etc.)

  • 66 3-person groups - 105 male, 93 female - average age 23
  • 9 all-male, 7 all-female, 36 mixed gender groups
  • Distributed more or less equally
  • Pre-questionnaire on general trustingness
  • Post-questionnaire on game behaviours and attitudes to other players
  • Co-operation measured by total payoff

Findings: What did this research find out?

  • Communication condition had significant effect on investment
  • Greatest distance between chat and the other three
  • No significant difference between other three at end of test
  • All three mediated conditions delayed trust
  • Partial agreement used in mediated conditions (not signif)
  • Mediated conditions are more vulnerable to defections and more fragile
  • Audio and video almost identical

Limitations: What are the limitations of the methods used?

  • Not allowed to exchange social information
  • Assumption: trust determines cooperation
  • Value judgements - 'the other players could be trusted' used in second assessment when students knew each other
  • Uncertainty as to which trust situations the results apply to
  • All participants of same age
  • All computer literate and used to technologies

Ethics: Are there any ethical issues associated with the research?

  • Paid to participate, according to how well they played

Implications: What are the implications (if any) for practice, policy or further research?

Some indications that managers can make decisions about which equipment to use depending on the degree of trust required in a particular situation.

Classify the studies using Tables 11.1 and 11.2.

The study is semi-interventionist, A. Asking questions, as it uses pre- and post-questionnaires. There is also an aspect of non-interventionist, C., as records of the amount of money won in each round are used to determine co-operation.

It is difficult to classify the study according to Table 11.2 as it does not mention how the questionnaires were distributed. It includes both old (f2f groups) and new (audio/video/text) learning technologies, and we have no information as to whether it was assessed by either old (pen/paper) or new (emailed/web) research methods. I would suspect old research methods, so this would give sections 1 and 2.

We also want you to note any difficulties you have with this task:

  • Are there words or concepts you don't understand?

I have not done any research on trust, so I have no knowledge of previous or other relevant work.

  • Are there statistical terms or methods that are new to you?

I had to look up Tukey's test again - it is similar to a t-test but more suitable for multiple comparisons.

Finally, how convinced were you by the research?

Initially I thought that I was quite well convinced but I am wondering if this is because it conforms to my expectations. Some research on social dilemma games and how trust relates to cooperation revealed that I may have been correct to be sceptical:

'Some people are, by nature, more likely to trust others. In order to solve both the first-order dilemma (how to agree to organize collective action) and the second order dilemma (who's going to police the agreement), you need both kinds of people: the more trusting people are necessary in order to make an agreement, and the less trusting people are necessary in order to police the agreement'.

http://www.cooperationcommons.com/node/390

This suggests that the balance in the group may be as, or more, important to the outcome than the fact that all people are trusting each other enough to cooperate.


A comparison of student feedback obtained through paper-based and web-based surveys of faculty teaching. Ardalan (2007)

Questions: What research questions are being addressed?

Is there a statistically significant difference between student feedback obtained by web-based and paper-based surveys?

Sub questions: 1. The equality of the percentage of the total number of respondents in each semester out of the total enrolments was considered.

2. The equality of student ratings of faculty teaching for the eight quantitative questions.

3. Whether there was a statistically significant change between the two methods in student ratings for faculty who were rated above the college average in the paper-based method.

4. Whether there was a statistically significant change between the two methods in student ratings for faculty who were rated below the college average when the paper-based method was used.

5. The ratio of the number of students who provided a qualitative response to the number of students who completed the quantitative portion of the survey between the two methods.

6. The number of students who gave a positive, mixed or negative response between the two methods.

7. The length of comments made by students between the two methods.

8. The ratio of the number of students who provided a constructive response to the number of students who provided a response between the two methods.

9. The number of constructive comments in each student response between the two methods.

10. The ratios of the number of constructive comments and the number of constructive comments that were qualified by students between the two methods.

Setting: What is the sector and setting? (e.g. school, higher education, training, informal learning)

  • Higher education

Concepts: What theories, concepts and key terms are being used?

  • Qualitative
  • Quantitative
  • Various theories on differences between pen and paper and web surveys

Methods: What methods of data collection and analysis are used? (e.g. the number of participants; the type of technologies; the use of interviews, surveys, observation, etc.)

  • No incentives
  • Email and class information
  • Same survey for online and pen/paper
  • Anonymity ensured
  • 8 quantitative (Likert scale)
  • Same semester; 2 consecutive years; paired as to course
  • 46 pairs; undergraduate and graduate level
  • No significant demographic changes

Findings: What did this research find out?

  • A significantly larger number of students provided feedback when the paper-based method was used.
  • No significant difference in ratings for faculty teaching
  • Decrease in overall effectiveness rating for faculties with high ratings when using web survey
  • Increase in overall effectiveness rating for faculties with low ratings when using web survey
  • Students who participate in the survey of faculty teaching on the web are likely to provide the same amount of feedback as those who participate in the paper-based method.
  • This result suggests no differences between the feedback students provide on their level of satisfaction with faculty teaching in the two methods.
  • There was a statistically significant increase in the feedback length for the web-based method.
  • Although students may provide longer responses, the responses are not necessarily more constructive.
  • The results of Hypotheses 8 and 9 contradict the expectation stated in the literature that web-based feedback will be more thoughtful
  • Students provided more qualified constructive comments in the web-based method than in the paper-based method.

Limitations: What are the limitations of the methods used?

Ethics: Are there any ethical issues associated with the research?

  • Equal access? Some problems with accessibility for visually impaired students, but this can also be a problem with pen/paper
  • Permission obtained from Dean and VP academic affairs in addition to normal procedure
  • Data removed that would identify department/course/professor

 

Implications: What are the implications (if any) for practice, policy or further research?

  • An environment without time pressure, in which the student rates a number of sections at the same time, may be less conducive to the use of extremes in the rating
  • A decrease in respondents may cause problems with small sample sizes and statistical analysis; incentives are suggested (a rough power-analysis sketch follows below)
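The sketch below is my own illustration of why shrinking response rates matter (assuming statsmodels is installed; nothing like this appears in the paper): it estimates how many paired observations would be needed to detect effects of various sizes with 80% power.

```python
# How many pairs are needed to detect small/medium/large effects (Cohen's d)
# at alpha = 0.05 with 80% power in a paired t-test?
from statsmodels.stats.power import TTestPower

power_analysis = TTestPower()
for d in (0.2, 0.5, 0.8):
    n = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size d = {d}: roughly {n:.0f} pairs needed")
```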

Classify the studies using Tables 11.1 and 11.2.

Semi-interventionist - A. Asking questions. Survey.

An old learning technology (F2F course) with a mixture of old and new research methods. Sections 1,3.

 

We also want you to note any difficulties you have with this task:

  • Are there words or concepts you don't understand? No
  • Are there statistical terms or methods that are new to you? No

Finally, how convinced were you by the research?

Interesting results. I was concerned about the definitions for constructive responses and qualified constructive responses and so I was not convinced by the conclusions that came from those hypotheses.

 


H809: Activity 11.5: Categorising new research methods


It was interesting reading and considering what we mean by new methods. I am coming round to the idea that what characterises new methods is that they draw from an interdisciplinary field. Is this a result of improved communication and research tools? I find it so much easier to research now than it was in the 1980s, when I spent ages trawling through recent papers in just a few journals. It is now much easier to draw from other fields. For example, writing about Communities of Practice for the H810 EMA, I was interested in the boundaries of communities and how they can be bridged. I vaguely remembered something I had read about change agents in connection with Everett Rogers (2003), so I did some research online. This did not give enough detail, so I searched all three of the local university libraries and then picked up the book. The information I retrieved combined the fields of business and education and threw some light on how educational technology can be brokered into Communities of Practice. Information technology is allowing researchers to break out of their disciplines and use applicable methods from other fields.

Another aspect of new methods is the use of technology to store, compare, analyse and share data. There are many examples of research which are just large-scale literature searches; reliable data is easily reached through organisations such as the UNSD - United Nations Statistical Databases; modelling allows researchers to play with relationships and manipulate large quantities of data.

I am wondering whether the ability to store and manipulate all this data means that researchers are more willing to consider all the complexities of a system rather than trying to rule out the context; i.e. whether the move towards social constructivism has occurred because we are beginning to develop techniques that can handle these complexities.

Looking at the supplied table:

 

 

Research methods (rows) against learning technologies (columns):

  • Old research methods / old learning technologies: 1. A mail survey of students on a print-based course, asking for satisfaction ratings.

  • Old research methods / new learning technologies: 2. Face-to-face interviews with people about their blogging behaviour.

  • New research methods / old learning technologies: 3. An electronic survey of students on a print-based course, asking for satisfaction ratings.

  • New research methods / new learning technologies: 4. Email interviews with people about their use of wikis.

My ideas do not fit well into this table, as the development of the original research questions can come from a wide range of disciplines, which is facilitated by new technologies. For example, the 'old-old' example concerning student satisfaction on a print-based course may in part be answering the research question 'How do students on the print-based course develop their own support networks?', which may draw on modern socio-constructivist methods and employ a post-positivist approach of mixed methodologies.

 

 


H809: Activity 11.2 and 11.3


Activity 11.2: Categorising the studies met so far (1.5 hours)

Use Table 11.1 to classify each of the empirical research studies met so far in the H809 readings. Note down your reasoning and conclusions.

Categories from Table 11.1: Non-interventionalist - A. Observation, B. Participant observation, C. Documents, artefacts and records; Semi-interventionalist - A. Asking questions, B. Other people's data; Interventionalist - A. Experimental, B. Quasi-experimental.

  • Reading 1: Hiltz & Meinke (1989) - Non-interventionalist C (documents, artefacts and records); Semi-interventionalist A (asking questions)

  • Reading 2: Wegerif & Mercer (1997) - Non-interventionalist B (participant observation) and C (documents, artefacts and records); Semi-interventionalist A (asking questions); Interventionalist B (quasi-experimental)

  • Reading 3: Laurillard (1994) - Non-interventionalist C (documents, artefacts and records)

  • Reading 4: Oliver et al. (1997) - Non-interventionalist C (documents, artefacts and records)

  • Reading 5: Roschelle (1992) - Non-interventionalist A (observation); Interventionalist B (quasi-experimental)

  • Reading 6: Conole et al. (2004) - Non-interventionalist A (observation); Interventionalist A (experimental)

  • Reading 7: Jones & Preece (2006) - Non-interventionalist A (observation) and C (documents, artefacts and records); Semi-interventionalist A (asking questions)

  • Reading 8: Tolmie (2001) - Non-interventionalist C (documents, artefacts and records); Semi-interventionalist A (asking questions); Interventionalist A (experimental)

  • Reading 9: Greenhow & Belbas (2007) - Non-interventionalist A (observation) and C (documents, artefacts and records); Semi-interventionalist A (asking questions)

I found this quite difficult to do in many cases, as the categories seemed to have a very positivist slant and some of the studies we looked at were anti-positivist. I think my confusion arose because 'asking questions' covers both interviews and questionnaires and is considered to be semi-interventionalist. In my opinion many questionnaires produce quantitative data and can be categorised as semi-interventionalist, but interviews produce mainly qualitative data and are interventionalist because the role of the interviewer must be accounted for. Looking back at the date of initial production of these categories (Madge, 1953), I suppose it is natural that there was a strong positivist approach at that time.

 

So, what are 'new' research methods and how do they relate to the framework in Table 11.1?

Let's consider three variants of 'asking questions' using interviews:

Interview 1: A woman with a clipboard stops you in the street and asks you a lot of questions about soap powder.

Interview 2: Somebody phones you up to ask you how you are planning to vote.

Interview 3: You are asked to take part in an e-interview, using email, about your Open University studies. For most of us this would feel like a new research method.

They are all 'interviews' - the researcher asks questions and you, the respondent, answer them. Spend a few minutes reflecting on what exactly is 'new' about Interview 3.

 

I believe that 'new' reflects the method of communication; the material that can be included (screen shots, audio files, video); and the researcher's acknowledgement of their part in the process.

 

Activity 11.3: Effect of interviewing style (15 minutes)

We want you to do a 'mind experiment'. Imagine you are being interviewed about your views on government spending on education.

  • Do you think your answers would be different, depending upon which of the three interview methods mentioned above was actually used?
  • How would they be different?
  • What if the topic was whether you had ever broken professional rules?

Once you have reflected and made notes, discuss your thoughts in the Modulewide Forum.

Interview 1: I am likely to refuse, as I do not want to be interviewed about soap powder just for the benefit of commercial companies, and I am generally busy and do not want to be interrupted in the middle of a task.

Interview 2: I would definitely refuse - I hate phone interviews, my voting choice is private information and I hate being interrupted in the middle of things.

Interview 3: Course information is useful research for planning and so I would be inclined to take part. The fact that it is by email is also a positive as I can take part when it is convenient for me and I am not interrupted.

 

I would have to consider the purpose of the interview to be serious and important work if I were ever to consider answering the question concerning whether I had ever broken professional rules. I would also have to be certain of anonymity.

My answers were interesting. They resonated with the comments made by Adam Joinson that nowadays the more sophisticated audience is aware that online is confidential rather than anonymous. I also agree with Adam that I am encouraged to participate if the research is transparent i.e. where I can see a useful outcome for the data.

 

 


H809: Surveys Podcast


Activity 11.1: Podcast 1 (30 minutes)

Survey One: http://learn.open.ac.uk/mod/resourcepage/view.php?id=409227

Listen to this discussion between Alan Woodley and Adam Joinson on the use of electronic surveys and their pros and cons.

In his book, The Salmon of Doubt, Douglas Adams (2003) wrote '... everybody lies to people with clipboards' (p. 93) and, when asked about the benefits of speaking to his fans via email, said 'It's quicker, easier and involves less licking' (p. 101).

Do you think he would be in favour of electronic surveys?

Notes

Alan Woodley interviews Adam Joinson (psychologist, Bath)

Benefits - speed

E-surveys producing different answers from mail surveys:

  • Is sample equivalent to mail survey?
  • Does it match sample you intended?
  • Can people access it through firewalls?
  • Media effect - images, videos, response formats
  • Context - where are they? Cyber cafe? Mobile device? Is it one person?

Lower on social desirability issues on web - more willing to admit socially undesirable issues

Psychometric measures - higher in anxiety online

Is it a sample issue, media issue, or interaction between two?

Which answers are correct?

e.g. are people more honest on line?

Sometimes it does not matter; all variables are inflated in various ways

e.g. self-esteem vs academic performance. A consistently high figure for self-esteem does not matter as it is standardised

Problem comes from comparing online and offline sample - norms are different.

Pencil and paper - rarely true volunteers - students in classrooms where lecturer stands there and waits for paper back

Online surveys are truer volunteers

Classroom surveys - chat with peers as they fill it in.

Differences in nature of responses - are people honest or not

Ten years ago people were more candid in online surveys; the audience is more sophisticated now, but there still seems to be some difference. Potentially it may shift the other way. Online is confidential rather than anonymous - an employer may track internet use, etc. A paper questionnaire is more likely to be anonymous if posted back, less so in a class situation.

User-focused idea of research rather than questionnaire-focused - questions can be difficult to answer when you do not match the sample answers.

Shift into public sphere - raw data linked from publication

Response rates are dropping - privacy concerns, people claim to have less time; in the past taking part was considered to be contributing to the public good

Can increase transparency to help this perception

Sociology research - participatory research from the 60s and 70s

Online surveys - complete with audience in mind, talk to the researcher in open-ended questions; people are used to this in the online process as it is a communication tool.

Humanise the process to increase participation rates, but not too much or you reintroduce interviewer bias, e.g. a realistic avatar speaking the question leads to less self-disclosure.

Comments

A couple of comments on the podcast first:

  • I love the fact that the OU put pictures of the people talking on the download page for the podcast - it makes it much easier for me to relate to the speakers; however, it would really help if they were labelled so I knew who was who. I was actually sad enough to right-click each picture and check for associated text to find out!
  • I missed a couple of points as I was listening to the podcast and quickly noted the time so I could refer to the transcript. I was disappointed to find that I could not reference these time points on the transcript.

I found this really interesting as it raised some things that had not occurred to me before. Full notes are on my blog for 17th April but my main thoughts are here.

Naively I had not realised that there was a difference between online and pen/paper survey results and so it was a shock to me that results could not be compared. I could understand that people were more candid in online surveys but was also interested in the fact that self-disclosure has changed over the last ten years so that would make comparisons between online surveys from ten years ago and current ones also suspect.

I have always preferred online surveys as, working with students with disabilities, I have been asked to fill in the forms that are given out in class. Often these are filled in by comparing answers with friends or the student would say 'I think this course is crap but I can't say that as the lecturer knows my handwriting'. I first began to get some doubts concerning online surveys when I found that universities were putting pressure on students to complete the National Students Survey positively by telling them that if they wanted their degree to be worth anything then they should say that the university was really good as it would raise its profile.

I think that I would have to be a lot more critical about using online surveys now that I have listened to this podcast. Thinking about true sampling and the context in which the respondent is filling in their answers is making me consider adding some extra questions on respondent demographics, context and the device they are using, so that these can be accounted for. Of course, the drawback to this is that the respondent may then feel that the survey is less anonymous.

As for the question about Douglas Adams (one of my favourite authors!) - I think that he would have liked e-surveys in the way that they are heading with more transparency as to why they are being conducted; user-centred, open-ended questions where there is a conversation between the researcher and the respondent; and the shift into the public sphere where the researcher links to the raw data when publishing so others can confirm and re-interpret the data. I know that humanising the process will draw accusations of interviewer-bias but do we want an abstract, quantitative survey which, despite claims, is influenced by context but where this is not openly acknowledged? Or do we want in depth, qualitative answers that can supply detailed information and, by reflexivity, we can work hard to account for interviewer influence? The first appears much more scientific and generalisable; and is certainly easier to carry out but I would consider it much less reliable.

 


H809: Activity 9.3 and 9.4: Comparisons


 

Activity 9.3: Comparing Reading 10 with Readings 8 and 9 (2 hours)

Factors compared across Tolmie (2001), Crook & Dymott (2005) and Greenhow & Belbas (2007):

Key features of the theory
  Tolmie (2001):
    1. Outcomes are affected by the interplay between technology and context, so this affects research
    2. Difficult to manage context effects through design; need to focus on the whole implementation event
    3. Context-sensitive approach to evaluation required p.236
    4. Socio-cognitive conflict is part of context p.236
    5. Context effects will be the norm p.237
    6. Data should be collected from real contexts p.237
    7. Context primes the learner to notice certain aspects p.240
  Crook & Dymott (2005):
    Learning is mediated, situated and distributed
    Use of the 5 types of writing as a cognitive framework to allow analysis of the practices involved
  Greenhow & Belbas (2007):
    Fundamental unit of study is the activity
    All activities are guided by a motive which is held by the subject - human consciousness
    Activities can be complex, with differing actions towards goals whose outcomes are not identical to the object
    Centrality of tools or artefacts in learning: how the tools shape the user as they are appropriated by it

Defining context
  Tolmie (2001): "conditions under which given resources are used" p.237; gender norms; past history p.238; pre-existing activity p.239
  Crook & Dymott (2005): the social environment of a group of tutors, students etc. who can be connected f2f or via technology
  Greenhow & Belbas (2007): learning that occurs through the use of information and communications technology (Khan 2005)

Aspects of learning foregrounded
  Tolmie (2001): cooperative learning; collaborative learning
  Crook & Dymott (2005): how technology has changed the work environment, as students can study in their rooms but stay in contact with their peers
  Greenhow & Belbas (2007): community and the ways in which social groupings can be designed to advance individual and collective understanding

Pros
  Tolmie (2001): attempts to situate technology in context for analysis
  Crook & Dymott (2005): framework assists more focused analysis of the writing process in different contexts
  Greenhow & Belbas (2007): making it accessible to a wider variety of researchers, newcomers as well as more experienced researchers; closer analysis of the system facilitated by accurate tracking; formalising the process and producing consistency; clarity and focus

Cons
  Tolmie (2001): subtle cues overlooked by researchers p.240; range of possible contexts surrounding any resource may be impossible to manage within any single software design p.240; unable to generalise as the context is always different; not convinced gender differences are that simple (not all men are the same!)
  Crook & Dymott (2005): analysis of writing using ICT seems outdated in that students tend to use widescreens divided for easy viewing and/or multiple screens or devices; does not analyse students' reasons for working the way that they do - just looks at the surface process
  Greenhow & Belbas (2007): the overall network is based on constructivism and social interaction, and this may lead to the research design excluding behaviourist and cognitivist viewpoints; coarse-grained analysis

Application and uses
  Tolmie (2001): studying group interactions with technology and the effect on learning
  Crook & Dymott (2005): studying individual interactions with different tools
  Greenhow & Belbas (2007): studying group interactions at macro level and identifying contradictions

Appropriate data collection methods to use
  Tolmie (2001): observation / video recording; log of contact and activity in group work p.238
  Crook & Dymott (2005): logging; diaries; analysis of interactions
  Greenhow & Belbas (2007): depends on the results required

Appropriate data analyses to use
  Tolmie (2001): qualitative - conversation analysis of different types; quantitative - time spent on activities
  Crook & Dymott (2005): qualitative - conversation analysis of different types
  Greenhow & Belbas (2007): qualitative generally

 

In your tutor group forum, discuss how you see the differences between these readings:

I believe that Reading 8 (Tolmie) illustrated a piece of research that concentrated on one aspect of collaborative learning: the effect of context. Examining this as a single concern allowed its analysis in depth, although the researchers raise concerns about missing subtle cues. The other two readings used frameworks to examine a broader field, and both sets of authors raise concerns about the coarseness of the analysis. Reading 9 (Crook & Dymott) uses a cognitive framework with its emphasis on individual interactions with different tools, and Reading 10 (Greenhow & Belbas) uses a constructivist framework with its emphasis on cultural-historical context and how human consciousness affects behaviour. This framework is used to study group interactions.


Activity 9.4: Re-examining your taxonomy of theories (2 hours)

Look back to Activity 7.5, in which you attempted to develop a taxonomy for the theories in the wiki. How do Readings 8, 9 and 10 fit into this taxonomy? Share your reflections in the tutor group forums.

 

Meta-learns

Receives

Imitates

Practices

Creates

Explores

Experiments

Debates

Behavioural

*

*

*

*

Cognitive

*

*

Constructivist

*

*

*

*

Activity-based

*

*

*

*

Socially situated

*

*

*

*

*

*

*

Experiential

*

*

*

*

*

Reading 8

*

*

*

*

*

Reading 9

*

*

*

Reading 10

*

*

*

*

*

*

*

 

I looked at the reports of the collaborative activities used by the three papers and fitted them into the Hybrid Learning Model taxonomy that I developed for Activity 7.5. It surprised me a little to see that it actually worked! Reading 9 came into line with the cognitive approach and the other two were more closely matched with the constructivist and socially situated approaches. This could be claimed to be a good result but I am not happy with it. I have been doing some research this week on different learning theories and I have come up with the following:

Main Paradigm

Theories

Analysis Networks

Behaviourism

Classical Conditioning (Pavlov)

 

Operant Conditioning (Skinner)

Social Learning theory (Bandura, moving towards Cognitivism)

Cognitivism

Cognitive load theory (Sweller)

Distributed Cognition

Crook & Dymott framework for writing analysis

Cognitive theory of multimedia learning (Mayer)

Attribution theory (Weiner)

Elaboration theory (Reigeluth)

Constructivism

Social development theory (Vygotsky)

Actor-network theory

Activity theory

Communities of Practice (Lave & Wenger)

Discovery Learning (Bruner)

Situated learning (Lave)

Humanism

Experiential learning (Kolb)

 

I am still feeling my way with all this and not sure I am correct. I would really welcome some input from other people on this before I approach the TMA. My current thoughts are that socially-situated learning and experiential learning are sub-categories of either cognitivism or constructivism, depending on how they are approached.

If I am correct, then the taxonomy I devised distinguished well between the main paradigms but is not good for differentiating between the theories.

 

Permalink 2 comments (latest comment by Lynn Hunt, Saturday, 2 Apr 2011, 20:48)

H809: Activities 9.1 & 9.2: Reading 10: Greenhow & Belbas 2007


 

Activity 9.1: Reading and reflecting on the first part of Reading 10: Greenhow  & Belbas 2007 (2 hours)

  1. What functions do these "theoretical perspectives" appear to be serving here?

Activity theory is being used as a framework in order to examine the relationships between aspects of an activity

  2. Do you think Activity Theory is a "theory"?

I have seen activity theory described both as a 'meta-theory' and as a framework. Initially I liked the concept of a meta-theory best, as I used it to increase my understanding of constructivism and social learning by dividing them into areas that I could understand before fitting them all back together again. Working on H800, I used activity theory to analyse changes in a university department, and I started viewing it more as a conceptual framework which I could apply and manipulate in order to work with more tangible projects. However it is classified, I do find it really helpful for understanding what is going on in complex learning situations: splitting them apart for analysis while always keeping in mind the effect of the whole system, and finally bringing it all back together. I think it is especially useful in its third generation, with its application to how activity systems work together.

  3. What do you understand to be the gap in Activity Theory that AODM is filling?

Activity-Oriented Design Methods (AODM) aim to provide a scheme for analysis of the essential elements of activities and their contradictions. This is suggested to be filling the gap caused by a lack of universally accepted methods for applying Activity Theory.

  4. Try to summarise the authors' view of "collaborative knowledge building"

Knowledge is possessed by groups, and expertise is developed by the continuous process of critiquing practices with people working in the same field but coming from different perspectives.

Activity 9.2: Reading and reflecting on the remainder of Reading 10 (3 hours)

  1. What benefits did Transana provide?

Transana is qualitative research software used for filing and analysis.
The authors suggest that this allowed easy retrieval and preserved accuracy and context.

  2. What do you understand to be the gap in AODM that discourse analysis was chosen to fill?

Once down to analysis at level 5, a researcher will use the most suitable or preferred method for the analysis of the particular aspect that they are studying. In this case, discourse analysis was used to examine the process by which collaborators work through intellectual disagreements (p.380). Introduction of other frameworks can prove useful to conduct a detailed investigation of a particular sub-activity (p.383).

  3. According to the authors, what advantages did Activity Theory and AODM bring?

Making it accessible to a wider variety of researchers: newcomers as well as more experienced researchers.

Closer analysis of system facilitated by accurate tracking.

Formalising the process and producing consistency

Clarity and focus

  4. Do you think these advantages could have been obtained another way?

I believe that any framework paired with a detailed methodology on how to use it would have resulted in the four advantages listed above. The framework chosen would have to reflect the analysis needed.

  5. In what ways is the research design influenced/constrained by the use of AT as a theoretical basis?

p.378: "Only an overview of peer collaboration without describing it in detail or critically examining how it was enacted" - too coarse-grained.

p.383: "A follow up interview with the students regarding this particular finding would be helpful for producing more substantive insights."

Activity theory concentrates on the centrality of tools and artefacts in learning and using this theory in the analysis of design and implementation of learning assignments may result in a techno-centric approach rather than a learner-centred approach with an emphasis on the tools and how they are used.

Activity theory has been criticised as being difficult to test empirically (based on observation and experiment) due to the complexity of a social system. I am not sure I agree with this as observations of the system can be taken and changes in the system can be made. However, the overall network is based on constructivism and social interaction and this may lead to the research design excluding behaviourist and cognitivist viewpoints. For example, in this study the researchers conducted screening interviews to identify instructors who espoused peer collaboration and social constructivist approaches to teaching (p.371)

 

 


H809: Activities 8.6 & 8.7: Comparing Perspectives and Methodologies


 

Tolmie, A. (2001) 'Examining learning in relation to the contexts of use of ICT', Journal of Computer Assisted Learning, vol. 17, no. 3, pp. 235-41.

Crook, C. and Dymott, R. (2005) 'ICT and the literacy practices of student writing' in Monteith, M. (ed.) Teaching Secondary School Literacies with ICT, Maidenhead, Open University Press.

Table 8.1: Comparing perspectives and methodologies

Factors compared for Tolmie and Crook and Dymott:

Key features of the theory
  Tolmie:
    1. Outcomes are affected by the interplay between technology and context, so this affects research
    2. Difficult to manage context effects through design; need to focus on the whole implementation event
    3. Context-sensitive approach to evaluation required p.236
    4. Socio-cognitive conflict is part of context p.236
    5. Context effects will be the norm p.237
    6. Data should be collected from real contexts p.237
    7. Context primes the learner to notice certain aspects p.240
  Crook and Dymott:
    Learning is mediated, situated and distributed
    Use of the 5 types of writing as a cognitive framework to allow analysis of the practices involved

Defining context
  Tolmie: "conditions under which given resources are used" p.237; gender norms; past history p.238; pre-existing activity p.239
  Crook and Dymott: the social environment of a group of tutors, students etc. who can be connected f2f or via technology

Aspects of learning foregrounded
  Tolmie: cooperative learning; collaborative learning
  Crook and Dymott: how technology has changed the work environment, as students can study in their rooms but stay in contact with their peers

Pros
  Tolmie: attempts to situate technology in context for analysis
  Crook and Dymott: framework assists more focused analysis of the writing process in different contexts

Cons
  Tolmie: subtle cues overlooked by researchers p.240; range of possible contexts surrounding any resource may be impossible to manage within any single software design p.240; unable to generalise as the context is always different; not convinced gender differences are that simple (not all men are the same!)
  Crook and Dymott: analysis of writing using ICT seems outdated in that students tend to use widescreens divided for easy viewing and/or multiple screens or devices; does not analyse students' reasons for working the way that they do - just looks at the surface process

Application and uses
  Tolmie: studying group interactions with technology and the effect on learning
  Crook and Dymott: studying individual interactions with different tools

Appropriate data collection methods to use
  Tolmie: observation / video recording; log of contact and activity in group work p.238
  Crook and Dymott: logging; diaries; analysis of interactions

Appropriate data analyses to use
  Tolmie: qualitative - conversation analysis of different types; quantitative - time spent on activities
  Crook and Dymott: qualitative - conversation analysis of different types

 

  1. Does learning happen within the head of an individual, or is it mediated, situated and distributed?

I believe that learning is mediated by tools and that, by using these tools, we alter our learning experiences. To that extent I concur with distributed learning but I also believe that learning is situated in a historical and social context and affected by our past experiences with similar activities and tools.

  2. What does a test or exam measure?

The ability to memorise facts and apply them under time pressure.

 


H809: Activity 8.3 and 8.5: Reading 9: Crook and Dymott (2005)


 


Crook, C. and Dymott, R. (2005) 'ICT and the literacy practices of student writing' in Monteith, M. (ed.) Teaching Secondary School Literacies with ICT, Maidenhead, Open University Press.

Notes from course website

Mediated learning: Vygotsky proposed that everything we do is 'done through', or mediated by, cultural artefacts:

  • We count with numbers that are culturally developed
  • We cut down trees with axes that are culturally developed
  • We talk to each other and think using language that is culturally developed
  • We remember by making lists or by grouping things into categories that may be different from those used in other cultures
  • Our institutions, such as schools, are developments of our culture.

Distributed learning: Vygotsky (1981) wrote that 'by being included in the process of behavior, the tool [artefact] alters the entire flow and structure of mental functions' (p. 137). This conceptualisation of cognition argues that artefacts are not a feature of the backdrop (separate context) against which mental functions take place, nor are they just something that facilitates mental life; they are a fundamental part of the mental function itself. In this way, mind becomes not something bound by the confines of the skull but instead extends 'beyond the skin' (Wertsch, 1991, p. 27). Cognition is 'spread over' (Lave, 1988) the artefacts present. An example of distributed cognition is Cognition in the Wild (Hutchins, 1995), which explores a US navy navigational team and its artefacts as a system of distributed cognition.

Situated learning: A seminal text in situated cognition was Brown, Collins and Duguid's article in Educational Researcher (1989). The authors described an apprenticeship model of learning closely related to the work of Lave (1988) and the idea of 'legitimate peripheral participation'. Socio-cultural theory does not focus on measuring learning with ICT in terms of what is in an individual's head or in terms of learning outcomes. Instead the focus is on individuals-using-technology-in-settings (Crook, 1994). Learning is conceptualised as a social endeavour and qualitative research methodologies have been adopted to investigate this area because the focus of inquiry is on the processes of learning and on meaning making in social settings.

 

 

 

 

  1. What part do the five aspects of writing (text on the screen; text on the network; text as electronic traffic; text and the website; and the dialogue around text) play in describing the activity of writing? Do they 'effect' writing or 'constitute' it? How?

I believe that the five aspects constitute writing in that they describe different facets of the writing process that may be used individually or together. People may have individual preferences due to context such as past experience, current purpose etc.

  2. Do you think that the learning involved in writing the assignments, or carrying out the other tasks described, is located in the head of the students? Or do you think it is distributed and situated?

I believe that learning is mediated by tools and that, by using these tools, we alter our learning experiences. To that extent I concur with distributed learning, but I also believe that learning is situated in a historical and social context and affected by our past experiences with similar activities and tools.

  3. Crook and Dymott discuss the fact that there were substantial differences in the ways in which individual students used resources in one of the tasks (p. 103). What does this tell us about the mediated, situated and distributed nature of the activity?

The mediating tools, whether they are pencil and paper or computer screen, can be used in a multitude of ways. Learners will use the tools in the way that suits their current purpose and context as well as according to their past history with the tools and past experiences with similar assignments i.e. the learning is situated, and it can be distributed over several technologies in that notes can be taken from screen to paper or one window to another.

  4. If you were given the opportunity to assess some of the students' assignments that are described in this chapter, where would you focus your attention: on the end product or on the process of writing, and why?

It would depend on the research question that was being addressed but I would be interested in the process of writing as part of my job is to work with students to assist them with academic literacies. I would also be concerned that there were too many influences on the end product of the writing to be able to form any conclusions as to how that end product was achieved.

  5. Which methodologies would you use to carry out your assessment of the students' assignments, over and above those described in the chapter, and why?

It would be interesting to look at the process of academic writing by a mixture of diary and video format which recorded the process of writing, followed by semi-structured interview where the perceptions of the students could be elucidated.

 


H809: Activity 8.1 and 8.2: Reading 8: Tolmie (2001)


Tolmie, A. (2001) 'Examining learning in relation to the contexts of use of ICT', Journal of Computer Assisted Learning, vol. 17, no. 3, pp. 235-41.

 

The individual as the focus of research into learning (Shweder, 1990). As the focus is on the individual and what is happening in their head, this perspective views the individual's context as surrounding them. The individual is surrounded by, for example, books, computers and a classroom that exist separately from themselves; therefore, the context is theorised to affect things that individuals do. A computer affects the way a pupil learns, the classroom environment affects the behaviour of children within it, etc. This cause-effect model is an aspect of a positivist approach to learning.

  1. Take particular notice of how 'context' is conceptualised in Tolmie's paper and make notes of, or highlight, the words and sentences that do this.
  2. In the study about gender effects, discussed in the section headed 'The effects of context', note the links between the theoretical perspective, the way in which gender is conceptualised as context, and the way in which learning is assessed quantitatively.

Following well-established patterns of interactional behaviour with respect to conflict management p.237

  • Where is the learning located? 12-15yrs, school
  • Where is the focus of enquiry? Physics- trajectory of falling objects
  • How is learning assessed? Pre-test and post-test performance

Also note how gender is conceptualised. Is it something that surrounds a person? Is it something that a person 'has'? Or is it a set of cultural expectations enacted by that person?

Hard to tell from this but it seems to be suggesting that the pupils are performing to expected cultural norms.

  1. The section headed 'Context effects and unintended consequences for learning' discusses the presence or absence of software prompts. In this study:
  • Where is the learning located? 9-14yrs, school
  • Where is the focus of enquiry? Fair testing with five factors
  • How is learning assessed? Pre-test and post-test performance
  • How are the software prompts discussed in terms of context?

The computer was positioned alongside the apparatus, suggesting that it is included as part of the equipment rather than an addition. In the test condition, warning prompts were issued, but in the control condition there were none, although data were still input to the computer, reinforcing the fact that the technology is part of the context.

 

 

 


H809: Activity 7.5: Learning Theory Comparison using Hybrid Learning Model


Theory / Characteristics / HLM learning event / Notes / Literature

Behaviourism
  Stimulus-response pairs: Transmission / Reception
  Trial and error learning: Explores; Meta-learns
  Learning through association and reinforcement: Imitates
  Notes: pedagogical focus is on control and adaptive
  Literature: Skinner; Tennant

Cognitive
  Transformation of internal cognitive structures: Meta-learns
  Processing & transmission of information through communication, explanation, contrast, inference and problem solving: Practices; Experiments; Debates; Meta-learns
  Notes: useful for sequences of material building on existing information
  Literature: Anderson; Wenger; Hutchins; Piaget

Constructivist
  Process by which learners build mental structures: Meta-learns
  Task oriented: Experiments
  Hands-on, self-directed: Creates; Explores
  Notes: structured learning environment
  Literature: Papert; Duffy & Jonassen

Activity-based
  Framework of activity with mediating artefacts within a socio-cultural and historically constructed context: depends on context
  Zone of proximal development: Explores; Imitates; Practices
  Notes: importance of environment & tools
  Literature: Vygotsky 34; Wertsch 85; Engestrom 87

Socially-situated learning
  Learning as social participation: Debates; Explores
  Personal relationships: Imitates
  Language as communicative tool: Debates
  Language as psychological tool: Meta-learns
  Tutor-student interactions: depends on context
  Active engagement: Explores; Practices; Creates; Experiments
  Notes: vicarious learning is an important process
  Literature: Mercer; Vygotsky

Experiential
  Experience is the foundation of learning: Explores; Practices; Creates; Experiments
  Transform experience into knowledge, skill, attitudes and values by reflection: Meta-learns
  Problem-based learning: Explores; Experiments
  Notes: asynchronous communication allows greater reflection
  Literature: Dewey; Kolb; Jarvis

Systems theory
  Organisational learning: Explores; Practices; Creates; Experiments
  Development of learners in response to feedback: Meta-learns
  Notes: shared knowledge banks
  Literature: Senge; Laurillard

 

 

 

I am not sure whether I have actually done what was required here. I have taken a list of the theories from the Conole (2004) paper and I have looked at the characteristics of each one. I really hate the 3D diagrams we looked at in the last exercise so I decided to examine the relationship with the Hybrid Learning Model which we studied in H800 and I really liked. This looks at learning activities and divides them into 8 categories depending on the students' responses: receives, explores, experiments, practices, creates, debates, imitates and meta-learns. It is really easy for a practitioner to use and it centres the activities on the learner. The full table used for analysing a specific activity also includes context and tools but I just used the HLM events in order to detect any linking characteristics.

 

Meta-learns

Receives

Imitates

Practices

Creates

Explores

Experiments

Debates

Behavioural

*

*

*

*

Cognitive

*

*

Constructivist

*

*

*

*

Activity-based

*

*

*

*

Socially situated

*

*

*

*

*

*

*

Experiential

*

*

*

*

*

 

I am really not sure if this has worked out to be useful or not.

I used the characteristics from the Conole et al. (2004) paper to decide the main HLM events involved and I am not sure I agree with all of the characteristics they list.

I think that the good point of using this method is that it concentrates on the learners' activities and so the comparison indicates similarities and differences between learner experiences.

It will be great to see what other people have devised!

 

Hybrid Learning Model (2007) 'Hybrid Learning Model' CETL(NI): Institutional E-Learning Services. University of Ulster. Available from: http://cetl.ulster.ac.uk/elearning/hlm.php [Accessed 23 March 2011]

 

Permalink 1 comment (latest comment by Sylvia Moessinger, Wednesday, 30 Mar 2011, 11:56)

H809: Activity 7.3 & Activity 7.4 Conole et al. (2004)


 

Activity 7.3: Reading the first part of the Conole et al. (2004) paper (2 hours)

'Introduction' (pp. 17-21).

1. Who do the authors see as the main audience for this paper?

Practitioners: 'Our assertion is that a better articulation and mapping of different pedagogical processes, tools and techniques will provide a pedagogic approach that is more reflexive and consistent with practitioners' theoretical perspective on learning and teaching' p.17

  2. What is the main aim of the paper?

To support e-learning practitioners' engagement with pedagogies and approaches to learning: 'as academics outside the field of education, they find the diverse array of theoretical perspectives alien and overwhelming (McNaught, 2003)' p.18

  3. Try to fit the readings met so far in the module into Table 1

Reading 1: Hiltz & Meinke (1989)

"Learning is the structuring of a situation in ways that help students change, through learning, in intentional (and sometimes unintentional) ways" (Johnson & Johnson, quoted on p. 432)
Suggests a behaviourist view of learning in that the students' behaviours are modified in the desired direction. (Operant conditioning, Skinner)

The VC still employs a transmissive format with e-lectures although some self-determination in order of activities is indicated, some active dialogue is employed and presentations are assessed.
Other parts of the paper suggest a more social and participatory form of education following Lave & Wenger's theories.

Reading 2: Wegerif & Mercer (1997)

Mainly experiential - the children are using problem-based learning but it is also socially situated. It could also be suggested to be systems theory as the researchers are looking at the effects of additional coaching on the problem-solving.

Reading 3: Laurillard (1994)

Laurillard discusses context, culture and social aspects as well as interactions between tools and students. I would think this fits best with activity theory as it is 'action through mediating artefacts.....within a social setting' Conole et al. (2004). It would be just as easy to argue for experiential and socially situated learning.

Reading 4: Oliver et al. (2007)

Various perspectives are discussed but the authors come down on the side of socially situated learning with the comment that 'In a complex, contested area such as this, clarity about the researcher's position is a necessary condition for establishing the credibility of research findings.' p.37

Reading 5: Roschelle (1992)

Activity Theory - interactions between the two students and between the students and the computer.

Activity 7.4: Read the rest of the paper (2 hours)

Ok. Confession here - I absolutely hate this model. It clarifies nothing to me and I find the diagrams totally confusing. I had to use them in H800 and eventually gave up and discussed the concepts with my daughter who told me where to put the dots on the lines. She likes 3D diagrams and used them all the time in geology.

I did some research and found this journal article and the tabular format which I find a little easier to understand. My reservations about this model is that I think that many practitioners would also find it really difficult to understand and, far from simplifying things for practitioners, it will make them more complicated.

I have left out some of the points as no info. in the papers

Activity      Indiv.-Social      Non-Refl.-Refl.           Expr.-Info.

Reading 1   --X------------      ------X---------             ---------X-----

Reading 2   -------------X-      -------------X--             ---X-----------

Reading 3   ------------X--      -------------X-

Reading 4   ------------X-

Reading 5   --------------X-     --------------X-             -X-------------

 

Individual - Where the individual is the focus of learning

Social - learning is explained through interaction with others through discourse and collaboration and the wider social context within which the learning takes place.

Reflection - Where conscious reflection on experience is the basis by which experience is transformed into learning.

Non-reflection - Where learning is explained with reference to processes such as conditioning, preconscious learning, skills learning and memorisation

Information - Where an external body of information such as text, artefacts and bodies of knowledge form the basis of experience and the raw material for learning.

Experience - Where learning arises through direct experience, activity and practical application.
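Purely as an illustration of how this kind of mapping could be made explicit, the sketch below encodes my dashed positions above as hypothetical numbers between 0 (individual / non-reflective / experience pole) and 1 (social / reflective / information pole) and prints how far apart the readings' profiles sit. The numeric values are my own rough guesses read off the dashes, not anything given by Conole et al.

```python
# Hypothetical positions of each reading on Conole et al.'s three dimensions,
# scaled 0-1; None marks a dimension I did not rate.
from itertools import combinations
import math

positions = {
    "Reading 1": (0.15, 0.45, 0.65),
    "Reading 2": (0.90, 0.90, 0.25),
    "Reading 3": (0.85, 0.90, None),
    "Reading 4": (0.85, None, None),
    "Reading 5": (0.95, 0.95, 0.10),
}

def distance(a, b):
    """Euclidean distance over the dimensions that both readings were rated on."""
    shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return math.sqrt(sum((x - y) ** 2 for x, y in shared))

for first, second in combinations(positions, 2):
    d = distance(positions[first], positions[second])
    print(f"{first} vs {second}: distance = {d:.2f}")
```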

 


H809: Activity 7.2: Key Concepts


Learning styles: These are approaches to learning concerning the way in which individuals prefer to take in and process information. This simple concept seems intuitive and gained great popularity in the 1970s, when proponents suggested that teachers should assess the learning style of their students and teach appropriately. The field became financially very profitable and methods proliferated with very little independent research.

In 2004 Coffield et al. published a report which reviewed 71 learning style inventories and classified them into 5 groups:

  • constitutionally-based learning styles and preferences
  • cognitive structure
  • stable personality type
  • 'flexibly stable' learning preferences
  • learning approaches and strategies

They concluded that there was no decisive evidence that teaching a student in their preferred learning style led to improved performance and that further research and regulation was required in this field.

My concern has always been that it is really necessary to be able to access information in a variety of styles, and the further up the academic levels you progress, the more important it is that you can use a variety of styles.

Another big concern of mine is the categorisation of all learners with dyslexia as visual learners. Dyslexia tuition at university level seems to consist of teaching students to take material from lectures, seminars etc. and re-present it in a visual format. I believe that using a single style runs the risk of students losing motivation when material is presented in an alternative format and leads them to believe themselves unable to access material in complex written formats such as journal articles.

The link to educational technology is that the majority of these tests, both reliable and unreliable, are marketed and taken online, so that several hundred students can take them as part of their courses. I believe this can be detrimental if students are labelled and their learning restricted.

References

 

Kolb, D. (1984) Experiential Learning: Experience as the Source of Learning and Development, Englewood Cliffs, NJ, Prentice-Hall. (Learning styles model based on experiential learning theory.)

Honey, P. and Mumford, A. (1982) The Manual of Learning Styles, Maidenhead, Peter Honey Publications.

Coffield, F., Moseley, D., Hall, E. and Ecclestone, K. (2004) Learning Styles and Pedagogy in Post-16 Learning: A Systematic and Critical Review, London, Learning and Skills Research Centre.

 

 


H809: Activity 7.1 Timelines, theory and technologies


 

 

Time / The study of learning / British social and political events / Impact on e-learning practice

1950s and 1960s
  The study of learning: Behaviourism - all behaviour caused by external stimuli (operant conditioning); all behaviour can be explained without the need to consider internal mental states or consciousness. (Skinner, Edward Thorndike, Tolman, Guthrie, Hull)
  British social/political events: 1953 Queen Elizabeth II crowned; 1960 birth control pill on sale; 1960s decolonisation
  Impact on e-learning practice: Phase one, 1965-1979: mainframe systems. Predominant pedagogical emphasis is instructional, behaviourist. Research is concerned with navigational issues.

1970s and 1980s
  The study of learning: Cognitivism - new cognitive frameworks of learning emerging in the 1970s, 1980s and 1990s. Cognitive theories look beyond behaviour to explain brain-based learning. Cognitivists consider how human memory works to promote learning.
  British social/political events: 1973 Britain joins EEC; 1975 N. Sea oil; 1978 first in vitro fertilisation; 1980s concern over ozone layer; 1981 Thatcher gov. starts privatisation; 1982 Falklands war
  Impact on e-learning practice: Phase two, 1980-1989: stand-alone systems. Increased activity in terms of multimedia functionality but still content driven and focused on the interactive tutorial paradigm.

1990s
  The study of learning: Constructivism - knowledge is constructed based on personal experiences and hypotheses of the environment; the learner is not a blank slate (tabula rasa) but brings past experiences and cultural factors to a situation. (Vygotsky, Piaget, Dewey, Vico, Rorty, Bruner.) From 1995, social constructivism - suggests that knowledge is first constructed in a social context and is then appropriated by individuals. (Vygotsky; Bruning et al., 1999; M. Cole, 1991; Eggan & Kauchak, 2004; Brown et al., 1989; Ackerman, 1996; Kukla, 2000)
  British social/political events: 1991 liberation of Kuwait; 1993 peace proposal for N. Ireland; 1994 Chunnel built; 1997 referendum for more autonomy for Wales/Scotland
  Impact on e-learning practice: Phase three, 1990-2000: networking technologies. Beginning to see more emphasis on the wider contextual issues (skills, strategy, importance of embedding and integration); also a shift away from the emphasis on the individual to the concept of situated learning. A move to more holistic and joined-up thinking; evidence of more linking of development to strategy and policy.

2000-2010
  British social/political events: 2003 Iraq war; 2004 only 18% of economy is manufacturing; 2004 low unemployment at 4.7%; 2008 financial crisis
  Impact on e-learning practice: Phase four, 2000-present: politicisation and systematisation. Pedagogy shifted away from the individual learner to collaboration, communication and the notion of communities of practice.

After: Conole, G., Smith, J. and White, S. (2007) 'A critique of the impact of policy and funding' in Conole, G. and Oliver, M. (eds) Contemporary Perspectives on E-learning Research, London, RoutledgeFalmer.

Added facts from:

http://www.learning-theories.com/cognitivism.html

Wikipedia

http://www.scaruffi.com/politics/british.html

http://news.bbc.co.uk/1/hi/world/europe/country_profiles/1038820.stm

 

I designed a table comparing the social and political history of the UK with learning theories and e-learning practice (based on the Conole et al. (2007) timeline).

Looking at this table I can see a growing awareness of the importance of self-determination from 1950-1990 with a lessening of control as de-colonisation is mirrored by a move from behaviourism to cognitive theories where learning is located in the individual rather than imposed by the teacher. At this point e-learning practice is lagging behind with its concentration on behaviourist principles although functionality is improving.

Between 1990 and 1995 this concentration on individuals continues but is based more in context, as Kuwait is liberated and the peace process in N. Ireland is started. Co-operation is beginning to show its effects as Britain becomes more involved in the European Union and the Chunnel is constructed. In educational research there is a rise of constructivism, with its awareness of cultural history, and this is also becoming a part of educational practice.

Moving towards the present, we see a concentration on autonomy with the referenda for Wales and Scotland and the stated aims of the Iraq war being to encourage self-determination and democracy. The current situation in Libya is another example of minimal interference to protect people in context and allow them to determine their own future. This social context is again mirrored by the research concepts of social constructivism and the educational practice of communities of practice.

 


H809: Activity 6.4, 6.5, 6.7 Ethics Policy


H809: Activity 6.4 Research Ethics Policy Documents

Ethics principles for research involving human participants

Principle 1: Compliance with protocol

Principle 2: Informed consent

Principle 3: Openness and integrity

Principle 4: Protection from harm

Principle 5: Confidentiality

Principle 6: Professional codes of practice and ethics

 

H809: Activity 6.5 Exploring Ethical Issues Associated with Online Research

Diversity of views is mentioned. I am not sure I agree that the diversity of views is due to the fact that it is a new field. I have worked in sports coaching in the past and I currently work in student support, so I have attended many training sessions in ethics. I have always been surprised at the diversity of views on how to approach situations. Most training sessions are carried out along the lines of discussions on how to approach various situations, and people's views depend on many factors including culture and heritage.

Lurking. My daughter was involved with a very responsible website which supported young people involved in self-harm. I remember vividly how upset and betrayed she felt when a researcher, who had joined the site and been welcomed by the members, finally disclosed that she was involved in research. The harm caused was enormous as the members were no longer sure who they could trust. This was a well respected site, unlike some of the others, and it provided a great deal of support to young people and their parents but it closed a few months after this incident.

According to Eysenbach and Till (2001), the following issues should be discussed before studying an internet community:

  1. Intrusiveness. Discuss the extent to which the research is intrusive (will it involve passive analysis of internet postings or more active involvement in the community by participating?);
  2. Perceived privacy. Discuss (preferably in consultation with members of the community) the level of perceived privacy of the community (Is it a closed group requiring registration? What is its membership size? What are the group norms?);
  3. Vulnerability. Discuss how vulnerable the community is (for example, a mailing list for victims of sexual abuse or HIV/AIDS may be a vulnerable community);
  4. Potential harm. As a result of the above, discuss whether the intrusion of the researcher or publication of the results has potential to harm individuals or the community as a whole;
  5. Informed consent. Discuss whether informed consent is required and how it will be obtained;
  6. Confidentiality. How can the anonymity of participants be protected?;
  7. Intellectual property rights. In some cases participants may not seek anonymity, but publicity, so the use of postings without attribution may not be appropriate.

BERA link not working but found it here:

http://www.bera.ac.uk/ethics-and-educational-research-2/

Activity 6.7: Contribute to a set of research ethics case studies (2 hours)

Using the wiki on the module website, contribute a research ethics case study. This means providing a synopsis of the information that relevant ethics committees or gatekeepers might need for a project addressing the empirical research question you outlined in TMA01. Obviously a committee or gatekeeper might need full details, but for the purposes of this exercise, keep it as brief as possible while including the salient information.

Central Research Question: Do first year undergraduate Earth Science students at Keele University exhibit a discrepancy between their actual and perceived core technological literacy skills?

 

Sub-questions:

SQ1:   What are the students' perceptions of their levels of expertise in word processing, spreadsheets and presentation packages?

o   Likert-type survey

SQ2:    How do students perform in an assessment of their skills in word processing, spreadsheets and presentation packages?

o   An assessment in the computer lab

SQ3:    Is there a discrepancy between perceived and actual core technological literacy skills?

o   Analysis by researcher (a minimal sketch of one possible approach is given after this list)

SQ4:    What are students' opinions on their use of word processing, spreadsheets and presentation packages?

o   Open-ended questions as part of the survey
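
For SQ3, a minimal sketch in Python of one way the perceived/actual comparison could be run is given below. This is my own illustration rather than anything specified in the proposal: the column names, the 1-to-5 Likert coding and the choice of rank-based statistics are all assumptions made for the example.

# Sketch only: assumes a CSV keyed by research number, with columns such as
# perceived_word_processing (Likert 1-5) and assessed_word_processing (% score).
import pandas as pd
from scipy.stats import spearmanr

SKILLS = ["word_processing", "spreadsheets", "presentations"]

def discrepancy_report(df):
    """Compare perceived and assessed skill levels within the sample."""
    rows = []
    for skill in SKILLS:
        perceived = df[f"perceived_{skill}"]   # Likert self-rating, 1-5
        actual = df[f"assessed_{skill}"]       # computer-lab assessment score
        # Put both measures on a comparable scale (within-sample percentile
        # ranks), then look at each student's perceived-minus-actual gap.
        gap = perceived.rank(pct=True) - actual.rank(pct=True)
        rho, p = spearmanr(perceived, actual)  # do perceptions track performance?
        rows.append({
            "skill": skill,
            "mean_abs_rank_gap": gap.abs().mean(),
            "share_overestimating": (gap > 0.25).mean(),
            "spearman_rho": rho,
            "p_value": p,
        })
    return pd.DataFrame(rows)

# Example usage:
# df = pd.read_csv("responses_by_research_number.csv")
# print(discrepancy_report(df))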

 

Participants:

First year undergraduate students in the Department of Earth Sciences at Keele University

 

Principle 1: Compliance with protocol

A protocol will be completed specifying the procedures for recruiting participants and for gathering and managing data. It will be signed by everyone involved in the research.

Principle 2: Informed consent

The aims of the research and the protocol will be explained as part of the introductory lecture in freshers' week. Students will be asked whether they are willing to participate, and it will be explained that they will be identified by a research number that does not correlate with their student number. They will be informed that no lecturers will be involved in the research, that they will not be identifiable, that they can withdraw from the study at any time and that they can choose not to answer any question if they prefer.

No deception will be necessary.

Principle 3: Openness and integrity

Students will be informed immediately if any problem occurs that may be to their detriment, for example if a lecturer discovers the identity of students participating in the research.

No incentives will be offered to participants.

Participants will be offered a copy of the final report.

Principle 4: Protection from harm

Participants may experience distress or discomfort if they have problems completing the assessment. The researcher will be prepared to recognise this and to offer reassurance that the assessment results are for research purposes only, will not be shown to lecturers and will be anonymous. The researcher will also remind the participant that they may withdraw from the assessment at any time.

Assessments will be held in freshers' week, when they will have minimal impact on the normal workload of the participants.

Principle 5: Confidentiality

Participants will be identified by a research number that does not correlate with their student number. They will be reminded that no lecturers will be involved in the research and that they will not be identifiable in any reporting of the results.
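
One practical way of meeting this commitment, sketched in Python below, is to assign randomly generated research numbers and keep the student-number linkage in a single, access-restricted file. This is my own illustration rather than part of the protocol; the file name and the code format are assumptions.

# Sketch only: issues research numbers that cannot be derived from student
# numbers and keeps the linking table in one restricted file.
import csv
import secrets

def assign_research_numbers(student_ids, link_file="research_links.csv"):
    """Map each student ID to a random research number and save the link table."""
    mapping = {}
    used = set()
    for sid in student_ids:
        code = f"R{secrets.randbelow(10**6):06d}"   # random six-digit code
        while code in used:                          # avoid rare collisions
            code = f"R{secrets.randbelow(10**6):06d}"
        used.add(code)
        mapping[sid] = code
    with open(link_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["student_id", "research_number"])
        writer.writerows(mapping.items())
    return mapping

# The link file is the only place the two identifiers appear together, so it
# would be stored securely and destroyed once it is no longer needed.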

Any information concerning illegal or harmful behaviour that comes to light in the course of the research will be passed to the relevant authorities. Where possible, the participant will be informed before this disclosure is made.

Principle 6: Professional codes of practice and ethics

Personal data stored in order to contact participants will be held in accordance with the Data Protection Act 1998 and the Freedom of Information Act 2000. Participants will be informed that their data is stored for this purpose.

Any students who are entitled to special circumstances in examinations and tests will be allowed the same percentage of extra time in the assessment if required.

Screen reading software will be available for any student who requires it.

 

Permalink 1 comment (latest comment by Jonathan Vernon, Thursday, 17 Mar 2011, 10:35)
Share post

H809: Activity 6.3: Ethics in earlier research

Visible to anyone in the world

Activity 6.3: Ethics in earlier research (1 hour)

Look back at the paper you read in Week 1. Can you see any ethical weaknesses or practices that might come under the scrutiny of an ethics committee today?

I highlighted two potential ethical problems when I first read this paper. The first was that the material on Virtual Classroom was mainly in a written format, which may cause problems for students with specific learning difficulties (SpLD) or other print-disabled students.

The second was that the use of pen names could encourage disclosures or comments that were unwise in a class situation and that, as the course progressed, identities might be inadvertently revealed.

On re-reading I would also highlight the problems with the degree of author involvement in the interviewing. The author was also a lecturer on the course, and this may have affected the answers that the students were willing to offer in both written and face-to-face interviews.

I am also concerned that Virtual Classroom was a development project of the New Jersey Institute of Technology, with extensive funding from the Annenberg Foundation, which supports not-for-profit organisations. There are two ethical questions here: the technology was being developed for lease to other institutions, so the authors had a vested interest in a positive result; and money intended for not-for-profit research funded a technology project that was intended for lease.

 

Permalink Add your comment
Share post
