The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing
November 20-23, 2022
Online only
All times are Taipei Time.

Sunday 20 November, 9:00-12:00
Efficient and Robust Knowledge Graph Construction
Organisers: Ningyu Zhang, Tao Gui, Guoshun Nan (contact person: Ningyu Zhang)

Sunday 20 November, 9:00-12:00
When Cantonese NLP Meets Pre-training: Progress and Challenges
Organisers: Rong Xiang, Hanzhuo Tan, Jing Li, Mingyu Wan, Kam-Fai Wong (contact person: Rong Xiang)

Sunday 20 November, 17:00-20:00
Recent Advances in Pre-trained Language Models: Why Do They Work and How Do They Work
Organisers: Cheng-Han Chiang, Yung-Sung Chuang, Hung-yi Lee (contact person: Cheng-Han Chiang)

Sunday 20 November, 17:00-20:00
A Tour of Explicit Multilingual Semantics: Word Sense Disambiguation, Semantic Role Labeling and Semantic Parsing
Organisers: Roberto Navigli, Edoardo Barba, Simone Conia, Rexhina Blloshmi (contact person: Roberto Navigli)

Monday 21 November, 0:00-3:00
Grounding Meaning Representation for Situated Reasoning
Organisers: Nikhil Krishnaswamy, James Pustejovsky (contact person: Nikhil Krishnaswamy)

Monday 21 November, 0:00-3:00
The Battlefront of Combating Misinformation and Coping with Media Bias
Organisers: Yi R. Fung, Kung-Hsiang Huang, Preslav Nakov, Heng Ji (contact person: Yi R. Fung)
Tutorial details
Efficient and Robust Knowledge Graph Construction
Instructors: Ningyu Zhang, Tao Gui, Guoshun Nan
Knowledge graph construction, which aims to extract knowledge from text corpora, has attracted the attention of NLP researchers. The past decades have witnessed remarkable progress in knowledge graph construction based on neural models; however, these models often require massive computation or labeled data and suffer from unstable inference when faced with biased or adversarial samples. Recently, numerous approaches have been explored to mitigate these efficiency and robustness issues, such as prompt learning and adversarial training. In this tutorial, we aim to bring interested NLP researchers up to speed on recent and ongoing techniques for efficient and robust knowledge graph construction. Additionally, our goal is to provide a systematic and up-to-date overview of these methods and to reveal new research opportunities to the audience.
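To make the prompt-learning idea mentioned above concrete, below is a minimal sketch (not material from the tutorial) that frames relation extraction, a core step in knowledge graph construction, as a cloze task for a masked language model with the Hugging Face transformers library; the template, label words, and model choice are illustrative assumptions only.

```python
# Sketch of prompt-based relation extraction with a masked LM.
# Template, label words, and model are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

sentence = "Steve Jobs co-founded Apple in 1976."
# Cloze-style prompt: the model fills the relation slot between the entities.
template = f"{sentence} Steve Jobs is the [MASK] of Apple."
label_words = {"founder": "founder_of", "employee": "employee_of"}

inputs = tokenizer(template, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each candidate relation by the logit of its label word at the mask.
scores = {rel: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for word, rel in label_words.items()}
print(max(scores, key=scores.get))  # prints the best-scoring relation
```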
Grounding Meaning Representation for Situated Reasoning
Instructors: Nikhil Krishnaswamy, James Pustejovsky
As natural language technology becomes ever-present in everyday life, people will expect artificial agents to understand language use as humans do. Nevertheless, most advanced neural AI systems fail at some types of interactions that are trivial for humans (e.g., ask a smart system "What am I pointing at?"). Certain problems in human-to-human communication cannot be solved without situated reasoning, meaning they cannot be adequately addressed with ungrounded meaning representation or cross-modal linking of instances alone. Examples include grounding an object and then reasoning with it ("Pick up this box. Put it there."), referring to a previously-established concept or instance that was never explicitly introduced into the dialogue, underspecification of deixis, and, in general, dynamic updating of context through perception, language, action, or self-announcement. Without both a representation framework and a mechanism for grounding references and inferences to the environment, such problems may well remain out of reach for NLP. An appropriate representation should accommodate both the structure and content of different modalities, as well as facilitate alignment and binding across them. However, it must also distinguish between alignment across channels in a multimodal dialogue (language, gesture, gaze) and the situated grounding of an expression to the local environment, be it objects in a situated context, an image, or a formal registration in a database. It should also be sensitive to discourse artifacts introduced through communication (an utterance as a question, a gesture as acknowledgement, etc.). Therefore, such a meaning representation should also have the basic facility for situated grounding, i.e., explicit mention of object and situational state in context.
In this tutorial, we bring to the NLP/CL community a synthesis of multimodal grounding and meaning representation techniques with formal and computational models of embodied reasoning. We will discuss existing approaches to multimodal language grounding and meaning representations, examine the kind of information each method captures and its relative suitability to situated reasoning tasks, and demonstrate how to construct agents that conduct situated reasoning by being embodied in a simulated environment. In doing so, these agents also represent their human interlocutor(s) within the simulation, and are represented through their virtual embodiment in the real world, enabling true bidirectional communication with a computer using multiple modalities. This tutorial will cover the most pressing problems in situated reasoning: namely, those requiring both multimodal grounding of expressions and contextual reasoning over this information.
Recent Advances in Pre-trained Language Models: Why Do They Work and How Do They Work
Instructors: Cheng-Han Chiang, Yung-Sung Chuang, Hung-yi Lee
Pre-trained language models (PLMs) are language models that are pre-trained on large-scale corpora in a self-supervised fashion. Traditional self-supervised pre-training tasks mostly involve recovering a corrupted input sentence or auto-regressive language modeling. After these PLMs are pre-trained, they can be fine-tuned on downstream tasks.
Conventionally, the fine-tuning protocol either adds a linear layer on top of the PLM and trains the whole model on the downstream task, or formulates the downstream task as a sentence-completion problem and fine-tunes the model in a seq2seq manner. Fine-tuning PLMs on downstream tasks often yields exceptional performance gains, which is why PLMs have become so popular. This tutorial is divided into two parts: why PLMs work and how to make them work (better).
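As an illustration of the first fine-tuning protocol described above, here is a minimal sketch (under our own assumptions, not code from the tutorial) that adds a linear classification layer on top of a PLM encoder and trains both together on a downstream task, using the Hugging Face transformers library with PyTorch; the model name, label count, and learning rate are placeholders.

```python
# Sketch: fine-tuning a PLM by adding a linear layer on top of it.
# Model name, label count, and hyperparameters are placeholder assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # e.g., 2 classes

optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(classifier.parameters()), lr=2e-5)

def training_step(texts, labels):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    cls_repr = encoder(**batch).last_hidden_state[:, 0]   # [CLS] representation
    loss = torch.nn.functional.cross_entropy(
        classifier(cls_repr), torch.tensor(labels))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Example: one step on a toy sentiment batch.
print(training_step(["great movie", "terrible plot"], [1, 0]))
```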
In the first part of the tutorial (estimated 40 mins), we will summarize findings that partially explain why PLMs lead to exceptional downstream performance. Some of these results have helped researchers design better pre-training and fine-tuning methods. In the second part (estimated 2 hrs 20 mins), we will introduce recent progress in how to pre-train and fine-tune PLMs. We will first discuss new pre-training techniques that yield language models better suited to many downstream uses; among these, we will specifically discuss the application of contrastive learning to pre-training a language model. The rest of the tutorial will introduce emerging fine-tuning methods that have been shown to bring significant efficiency gains in terms of hardware resources, training data, and model parameters, while achieving strong performance.
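As a concrete illustration of the contrastive-learning idea mentioned above (a sketch of a SimCSE-style objective under our own assumptions, not the tutorial's code), the function below computes an in-batch contrastive loss in which two encodings of the same sentence form a positive pair and the other sentences in the batch act as negatives; the temperature value is a placeholder.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(z1, z2, temperature=0.05):
    """z1, z2: (batch, dim) embeddings of the same sentences from two
    encoder passes (e.g., different dropout masks); off-diagonal pairs
    in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / temperature            # pairwise cosine similarities
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(sim, targets)

# Example with random embeddings standing in for two encoder passes.
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```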
Attendees of different backgrounds should find this tutorial informative and useful. For those not yet familiar with the topic, it serves as a good starting point for understanding why and how PLMs work; researchers already working on PLMs will still benefit from its comprehensive and broad coverage.
The Battlefront of Combating Misinformation and Coping with Media Bias
Instructors: Yi R. Fung, Kung-Hsiang Huang, Heng Ji, Preslav Nakov
The growth of online platforms has greatly facilitated the way people communicate with each other and stay informed about trending events. However, it has also spawned unprecedented levels of inaccurate or misleading information, as traditional journalism gate-keeping fails to keep up with the pace of media dissemination. These undesirable phenomena have torn societies apart over irrational beliefs, caused money to be lost through impulsive stock-market moves, and contributed to deaths during the COVID-19 pandemic that could have been avoided were it not for the accompanying infodemic. Even people who do not believe the misinformation may still be plagued by the pollution of unhealthy content surrounding them, an unpleasant situation known as information disorder. Thus, it is of pressing interest for our community to better understand, and to develop effective mechanisms for remedying, misinformation and biased reporting.
When Cantonese NLP Meets Pre-training: Progress and Challenges
Instructors: Kam-Fai Wong, Mingyu Wan, Hanzhuo Tan, Rong Xiang, Jing Li
Cantonese is an influential Chinese variant with a large population of speakers worldwide. However, it is under-resourced in terms of data scale and diversity, which has excluded Cantonese Natural Language Processing (NLP) from the state-of-the-art (SOTA) "pre-training and fine-tuning" paradigm. This tutorial will start with a substantial review of the linguistics and NLP progress that shapes the language's specificity, resources, and methodologies. It will be followed by an introduction to popular transformer-based pre-training methods, which have largely advanced the SOTA performance of a wide range of downstream NLP tasks in numerous majority languages (e.g., English and Chinese). Building on the above, we will present the main challenges for Cantonese NLP arising from the language's idiosyncrasies of colloquialism and multilingualism, followed by future directions for bringing NLP for Cantonese and other low-resource languages in line with cutting-edge pre-training practice.
A Tour of Explicit Multilingual Semantics: Word Sense Disambiguation, Semantic Role Labeling and Semantic Parsing
Instructors: Roberto Navigli, Rexhina Blloshmi, Edoardo Barba, Simone Conia
The recent advent of modern pretrained language models has sparked a revolution in Natural Language Processing (NLP), especially in multilingual and cross-lingual applications.
Today, such language models have become the de facto standard for providing rich input representations to neural systems, achieving unprecedented results in an increasing range of benchmarks. However, one question that often arises is whether current language models are indeed able to capture explicit, symbolic meaning and, if so, to what extent; perhaps more importantly, are current approaches capable of scaling across languages? In this cutting-edge tutorial, we will review recent efforts that have aimed at shedding light on meaning in NLP, with a focus on three key open problems in lexical and sentence-level semantics: Word Sense Disambiguation, Semantic Role Labeling and Semantic Parsing. After a brief introduction, we will spotlight how state-of-the-art models tackle these tasks in multiple languages, showing where they excel and fail. We hope that this tutorial will broaden the audience interested in multilingual semantics and inspire researchers to further advance the field.