Description:
Language and multimodality are intrinsically connected, and non-verbal elements such as gestures, stance, or facial expression have increasingly been studied as a key means of meaning-making (McNeill 2000, Streeck 2010). Multimodality is indispensable in spoken discourse in order to frame and enhance the verbal level. In online contexts, the visual channel allows the integration of showings, imitation, or forced perspective, while additional multimodal resources such as emoticons have been designed to enhance written online environments. Indeed, it has become clear that a focus only on spoken or written modes of communication can provide only a partial interpretation of discourse, as language is "inevitably constructed across multiple modes" (Scollon & LeVine 2004), and multimodal elements are interconnected with spoken or written interaction in a dynamic process of creating meaning (Goodwin 2000 & 2007, Kendon 2004, Mondada 2014 & 2016).
Using a multimodal discourse analysis (MDA) approach, this seminar focuses on the analysis of multimodal meaning-making in rich data environments, combining the study of language with that of non-verbal resources (Brunner et al. 2017, Kress 2011, O'Halloran 2011). Seminar participants will work with existing and new data in order to explore how non-verbal elements contribute to the dynamic meaning-making processes involved in interaction. We will also discuss issues of dataset compilation, transcription, and analysis. Course requirements are detailed in the respective module descriptions.
Lecturer contact: s.diemer@mx.uni-saarland.de