Project overview
WSI Pilot Project
People engage in arguments on a daily basis and, with the rise of globalization and web-mediated connectedness, we are bombarded with huge amounts of argumentative data in the form of social media content, online interviews, and televised political debates. Argumentation mining (AM) is a field of research in computer science and computational linguistics that aims at disentangling this unstructured information to create AI systems that can process and understand argumentative dialogues (Lytos et al., 2019). To tackle this complex task, different argumentation frameworks have been proposed, mainly relying on identifying claims and premises within a text, or the relationships between segments, namely attack or support (Lippi et al., 2016). Despite notable advances in automating AM with machine learning techniques, the field is still in its infancy, and novel approaches are needed to improve the accuracy of prediction algorithms and to find applications that analyse relevant data. Inspired by multimodal discourse analysis, an emergent paradigm in discourse theory in which speech is studied in combination with its immediate context, such as audio, gestures, or other symbolic systems (O’Halloran, 2011), we propose to investigate multimodal audio-textual information in AM, that is, to combine acoustic features extracted from speakers’ voices with natural language processing (NLP) techniques to improve the accuracy of state-of-the-art algorithms and to explore their direct application to the analysis of political debates. Moreover, we plan to build the first crowd-sourced audio-textual argumentation database, which we will make publicly available for use by future researchers in this field.
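To make the proposed audio-textual fusion concrete, the sketch below shows one possible way such a pipeline could look in Python: simple prosodic/spectral statistics from each speech segment are concatenated with TF-IDF text features and fed to a linear classifier for claim detection. This is a minimal illustration under our own assumptions, not the project's actual system; the file names, labels, and feature choices are hypothetical, and it assumes librosa and scikit-learn are available.

```python
# Minimal sketch of audio-textual (early) fusion for claim detection.
# Hypothetical data and feature set; not the project's actual pipeline.
import numpy as np
import librosa
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def acoustic_features(wav_path):
    """Summarise a speech segment with simple spectral and prosodic statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral envelope
    rms = librosa.feature.rms(y=y)                        # loudness / energy
    f0 = librosa.yin(y, fmin=60, fmax=300, sr=sr)         # pitch contour
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [rms.mean(), rms.std(), np.nanmean(f0), np.nanstd(f0)],
    ])

# Hypothetical training data: transcribed segments, their audio clips,
# and binary labels (1 = claim, 0 = non-claim).
texts  = ["We must cut taxes now", "Thank you for the question"]
clips  = ["seg_001.wav", "seg_002.wav"]
labels = [1, 0]

vectorizer = TfidfVectorizer()
X_text  = vectorizer.fit_transform(texts).toarray()            # textual modality
X_audio = np.vstack([acoustic_features(c) for c in clips])     # acoustic modality
X = np.hstack([X_text, X_audio])                               # feature-level fusion

clf = LogisticRegression(max_iter=1000).fit(X, labels)
```

In practice, richer models (e.g. neural encoders for each modality) could replace the hand-crafted features and linear classifier, but the same feature-level fusion idea applies.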
Staff
Lead researchers
Other researchers