H809: Activity 11.9: Validity


Reading 11: Bos et al. (2002)

Effects of four computer-mediated communications channels on trust development

To what extent does the study demonstrate that its findings generalise to other participants, places or times?

The paper states that social dilemma tasks elicit exploitative and self-protective behaviours, so the study relates to interpersonal trust under these types of risk. The population was students and others attached to a university, with the associated assumptions about technological literacy and intelligence, which limits how far the findings generalise to other populations.

To what extent are causal relationships, rather than just correlations, demonstrated?

Just correlations - there are many factors that may affect trust development, and it is uncertain how many of these were kept constant. It is reported that the participants did not know each other before the study, but they may have had friends in common, which would affect trust. Personal disclosures were discouraged, but the paper does not report whether conversations were monitored. I believe it has previously been found (can't find ref.) that personal disclosures in online situations can encourage group formation.

Are the instruments used in the study actually measuring what the researchers claim they measure?

I do not think that there is necessarily a link between the group pay-off and the degree of cooperation. For example, some level of intelligence must be required to understand the necessity for cooperation. In this case the participants were all students or others associated with the university, so a level of intelligence can be assumed, but this may not correspond to an understanding of how the game works.

How strong is the evidence for the claims?

A major limitation of a one-way ANOVA is that it does not say how the means differ, only that they are not all equal. To resolve this, post-hoc tests can be used - tests conducted after you already know that there is a difference among the means. Given a set of three means, the Tukey procedure tests all possible pairwise comparisons (1&2, 1&3 and 2&3) and is designed to reduce the likelihood of false positives when making multiple comparisons.

This showed a significant difference between text and each of the other three channels, but no significant differences among those three. The test was conducted at the end of the study.
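To make the procedure concrete, here is a minimal sketch in Python of a one-way ANOVA followed by a Tukey HSD post-hoc test. The four groups, sample sizes and scores are invented purely for illustration; they are not the data from Bos et al.

```python
# Minimal sketch: one-way ANOVA followed by a Tukey HSD post-hoc test.
# All numbers below are invented for illustration, not data from Bos et al.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
face  = rng.normal(180, 15, 22)  # hypothetical group pay-offs per channel
video = rng.normal(176, 15, 22)
audio = rng.normal(173, 15, 22)
text  = rng.normal(150, 15, 22)

# The ANOVA only tells us that at least one mean differs from the others.
f_stat, p_val = f_oneway(face, video, audio, text)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey HSD then tests every pairwise comparison while controlling the
# family-wise error rate, showing *which* channels actually differ.
scores = np.concatenate([face, video, audio, text])
labels = ["face"] * 22 + ["video"] * 22 + ["audio"] * 22 + ["text"] * 22
print(pairwise_tukeyhsd(scores, labels))
```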

Are alternative explanations possible?

The conditions were of uneven quality: high-quality video, compared with conference telephones for the audio condition and a very simple text system. Chatspace is a very simple, poor-quality site, with spelling mistakes and poor text/background contrast on the home page.

There may have been people who did not work well in groups and so hindered the formation of group trust. This might have been detected by the post-study questionnaire, but the paper does not say whether such participants were excluded from or included in the results. I presume they were included, but how were they distributed across the groups? Were there more in the text group?

How could claims be tested more strongly?

I would like to see good-quality technology used for each condition, and recording of the sessions to give a richer understanding of how the relationships formed. Were there individuals in some groups who obstructed the formation of trust? This could be detected from recordings of the text messages, chat and video.

Reading 12: Ardalan et al. (2007)

A comparison of student feedback obtained through paper-based and web-based surveys of faculty teaching

To what extent does the study demonstrate that its findings generalise to other participants, places or times?

A student population was used, so the findings may generalise to other student populations (which was the aim of the article) but not to other populations. The web-based survey is claimed to be equally available to all students, which may be true for a university population but not for the general population.

To what extent are causal relationships, rather than just correlations, demonstrated?

Just correlations.

Are the instruments used in the study actually measuring what the researchers claim they measure?

Chi-square is a statistical test commonly used to compare observed data with the data we would expect under a specific hypothesis. It tests the null hypothesis that there is no significant difference between the expected and observed results.

The t-test assesses whether the means of two groups are statistically different from each other.
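As a rough illustration of these two tests, here is a minimal Python sketch using scipy. The response counts and ratings are invented for illustration and are not the figures reported by Ardalan et al.

```python
# Minimal sketch of the two tests described above, on invented survey
# data (not the figures from Ardalan et al.).
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Chi-square: compare observed response counts against those expected
# under the null hypothesis of no association between format and response.
# Rows: paper, web; columns: responded, did not respond.
observed = np.array([[420,  80],
                     [310, 190]])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Independent-samples t-test: are the mean ratings of the two groups
# statistically different from each other?
paper_ratings = np.array([4.1, 3.8, 4.3, 3.9, 4.0, 4.2])
web_ratings   = np.array([3.7, 3.9, 3.6, 4.0, 3.8, 3.5])
t_stat, p_val = ttest_ind(paper_ratings, web_ratings)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```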

How strong is the evidence for the claims?

The statistics seem sound, but many variables are unaccounted for: the same courses were assessed, but there is no mention of whether the module format or the teaching staff changed between the two administrations.

Are alternative explanations possible?

Testing so many hypotheses at once leads to confusing results. There is no mention of how the paper-based questionnaires were presented to the students, or by whom, although there is comprehensive discussion of how the web-based survey was presented.

The change in response rate is put down to the change in format from paper-based to web-based, but no account is taken of how circumstances changed over the 12-month period. The students were used to the paper-based survey and unfamiliar with the web version, and the paper-based survey appears to have been enforced to some extent, as it seems to have been handed out in class.

How could claims be tested more strongly?

Which ones!

I would like to see a mixed-methods approach to find out why students chose to participate in the survey and to give some depth to their answers. Were the answers on the enforced paper-based survey less accurate reflections of students' true feelings?

 


H809: Activity 11.8: Validity


I found it rather interesting reading about the Hawthorne Effect:

'A term referring to the tendency of some people to work harder and perform better when they are participants in an experiment. Individuals may change their behavior due to the attention they are receiving from researchers rather than because of any manipulation of independent variables'

Accessed from: http://psychology.about.com/od/hindex/g/def_hawthorn.htm

...and even more interesting to find out that when the original data were re-examined, it was found that the lighting conditions at the factory were always changed on a Sunday, so productivity on the Saturday under the old conditions was compared with productivity on the Monday under the new ones. Further investigation showed that, whether the lighting was changed or not, productivity always went up on a Monday, as it was the start of the week.

Accessed from: http://www.economist.com/node/13788427?story_id=13788427

