The next EUSSET Colloquium is approaching! It will be held on the 9th of March 2022, from 16:30 to 18:00 CET. In this colloquium we will focus first on gender and then on research quality.
In the first part of the colloquium, Alice Ashcroft, School of Computing & Communications, Lancaster University, will lead a discussion on how gender can affect group design decisions. The second part of the colloquium will feature a discussion on quality in practice-centred computing research, led by Fabiano Pinatti, Institute of Information Systems and New Media/Chair of Computer-Supported Cooperative Work and Social Media, University of Siegen. Below you will find further information on what to expect from each discussion.
Make sure to register by the 7th of March 2022 to be able to participate. You just need to send an e-mail to communitybuilding[at]eusset.eu expressing your interest!
Looking forward to seeing many of you there!
The EUSSET Colloquium is a forum where community members can engage in deep intellectual exchanges.
The Effect of Gender on Group Design Decisions through Language in CS
Alice Ashcroft, 16:30 – 17:15 CET
As discussed in a previous colloquium by Ina Wagner, the gendered nature of work in interaction can often affect CSCW research. Feminist Conversation Analysis, from the field of linguistics, has shown how traits of conversation can be affected by gender. The CSCW community accepts that a diverse makeup of designers and developers is better for all, but an examination of the effect gendered language can have on design decisions is still missing. Given gendered differences in turn-taking, leadership, and hedging (short for ‘hedging your bets’), how these affect design decisions needs to be carefully thought through. There seems to be very little point in making sure everyone is in the room if people then aren’t heard; this will be the topic of discussion, which will hopefully include potential solutions.
Suggested readings:
Alice Ashcroft. 2020. Gender Differences in Innovation Design: A Thematic Conversation Analysis. In 32nd Australian Conference on Human-Computer Interaction (OzCHI ’20). Association for Computing Machinery, New York, NY, USA, 270–280. DOI:https://doi.org/10.1145/3441000.3441021
Stokoe, E.H. and Weatherall, A., 2002. Gender, language, conversation analysis and feminism. Discourse & Society, 13(6), pp.707-713.
Quality in Practice-centred Computing Research
Fabiano Pinatti, 17:15 – 18:00 CET
Scientific rigour is arguably one of the major criteria for assessing the quality of a research contribution. In quantitative research, rigour has traditionally been associated with the concepts of validity, reliability, objectivity, and generalisability. In qualitative research, on the other hand, concepts such as trustworthiness – commonly associated with the notions of credibility, transferability, dependability, and confirmability – and authenticity have been proposed as more suitable ways to establish and assess the quality of research. Depth is also often considered in the assessment of qualitative research, in terms of the analysis of the data and the insights produced from it. While the quality aspects of quantitative research can be gauged fairly easily through p-values and the like, the quality of qualitative research can be considerably more difficult to appraise. After all, there is no magic number for judging whether a piece of qualitative research can be deemed trustworthy and authentic. There is also no clear scale for depth, or even a general consensus about what it means in practical terms. Considering that practice-centred computing draws heavily on qualitative research methods for developing understandings of users’ contexts and practices, it is extremely important to discuss and (try to) agree on such criteria.

In this part of the colloquium, we will hold a discussion on what is acceptable (and expected) in terms of quality in practice-centred computing research. How long should a study last for depth to be achieved? How many participants should be involved? How many methods should be involved in triangulation processes? How can one say that saturation has been achieved in the analysis? Can we really think of norms and guidelines? These are not easy questions to answer, and it is not the intention of the colloquium to answer them once and for all. Rather, this is an opportunity for the members of our community to share their practices concerning these issues and their experiences with the assessment of their own work, so that we can learn from each other and foster quality in practice-centred computing research.
Suggested readings:
McDonald, N., Schoenebeck, S., & Forte, A. (2019). Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW). doi:10.1145/3359174
Guba, E. G. (1981). Criteria for Assessing the Trustworthiness of Naturalistic Inquiries. ECTJ, 29(2), 75–91.