Adventures in Multimodality (AIM): Narrating and arguing by images, words & sounds

  • Charles Forceville (ACLC), senior researcher, coordinator
  • Yue Guan, lecturer at College English Teaching Center, Nanfang College of Sun Yat-Sen University, Guangzhou, China (Oct 2018-August 2019), visiting scholar.
  • Zahra Kashanizadeh, PhD student University of Tehran, Faculty of Management, Iran (Feb 2018-Dec. 2019), visiting scholar.


The role of static and moving images in presenting information and arguments has become more prominent in recent decades. This tendency straddles print and digital media, and challenges the primacy of the verbally conveyed message in public space. Increasingly, the sonic modality also helps frame, or spin, information, particularly through music. Assuming that all discourses are goal-driven, the group seeks to chart and analyze how multimodal discourses are structured and how they achieve their rhetorical and/or aesthetic goals. Its members draw insights from cognitive and pragmatic approaches to metaphor theory, argumentation and visual communication, and narratology, and share a commitment to developing hypotheses, testing these systematically where possible, and thus uncovering pertinent patterns. More information can be found via the members' Google Scholar profiles.

Current research projects

Relevance Theory as a model for analysing visual and multimodal mass-communication

Sperber and Wilson’s Relevance Theory (Sperber and Wilson 1995, Wilson and Sperber 2012, Carston 2002, Clark 2013), which developed from the Gricean framework, holds that all communication comes with the presumption of optimal relevance to its addressee. It claims to account for all communication, irrespective of medium or modality. However, it has hitherto focused almost exclusively on the modality of spoken language, more specifically on face-to-face communication between two individuals. To make good on its promises, RT requires adaptation and refinement; thus adapted, it can become a better model for visual and multimodal communication than those currently available. Having written a number of papers and chapters on dimensions of such adaptations (Forceville 1996, 2005, 2009, 2014, Forceville and Clark 2014), Forceville is now working on a monograph on this topic. For more information, contact Charles Forceville.

Multimodality and metaphor

Lakoff and Johnson (1980) claim that we think metaphorically. Their cognitivist paradigm, however, has paid little attention to non-verbal manifestations of metaphor. In this ongoing project (e.g., Forceville 2005, 2006, 2011, 2014, 2016, Forceville and Urios-Aparisi 2009, Bounegru and Forceville 2011, Abbott and Forceville 2011, Forceville and Renckens 2013, Kromhout and Forceville 2013, Koetsier and Forceville 2014, Cornevin and Forceville 2017, Forceville and Paling 2018, Forceville and Van de Laar 2019), Forceville publishes on dimensions of visual and multimodal varieties of creative and conceptual metaphor, as well as other tropes. For more information, contact Charles Forceville.

Multimodality in comics, cartoons & animations, films, and advertising

These media are almost completely “man-made,” and in many cases do not, or only minimally, draw on language. They are therefore ideally suited for studying how coded non-mimetic signs, such as emotion lines and text balloons in comics and manner of movement in animation film, convey significant information. This is an ongoing project (Forceville 2005, 2011, 2013, 2016, 2017, Forceville, Veale and Feyaerts 2010, Forceville and Jeulink 2011, Forceville, El Refaie, and Meesters 2014, Stamenković, Tasić, and Forceville 2018, Tseronis and Forceville 2017a, 2017b). For more information, contact Charles Forceville.