The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing

November 20-23, 2022

Online only

Keynote Speakers

The following speakers have graciously agreed to give keynotes at AACL 2022. More details will be announced.


Chris Callison-Burch

Reasoning about Goals and Making Plans with Large Language Models

Eduard Hovy

On the complementarity of neural and symbolic approaches, and on how to transfer between them

Juanzi Li (李涓子)

Knowledge-oriented Explainable Programming for Complex Question Answering

Prem Natarajan

Frontiers of Fair and Accessible AI

Chris Callison-Burch

Chris Callison-Burch is an associate professor of Computer and Information Science at the University of Pennsylvania. His course on Artificial Intelligence has one of the highest enrollments at the university, with 500 students per semester. Chris has a long history of service to the ACL community. He has served as a general chair, program co-chair, chair of the NAACL executive board, and Secretary-Treasurer for SIGDAT, and he has served on the editorial boards of TACL and Computational Linguistics.

He is best known for his research into statistical machine translation, paraphrasing and crowdsourcing. His current research is focused on applications of large language models to long-standing challenge problems in artificial intelligence. His PhD students joke that now whenever they ask him anything his first response is "Have you tried GPT-3 for that?"

Prof. Callison-Burch has more than 100 publications, which have been cited over 20,000 times. He is a Sloan Research Fellow, and he has received faculty research awards from Google, Microsoft, Amazon, Facebook, and Roblox, in addition to funding from DARPA, IARPA, and the NSF.

Title: Reasoning about Goals and Making Plans with Large Language Models

Abstract

I will present research by my lab at the University of Pennsylvania that has focused on a language-centric approach to classic goals in Artificial Intelligence. The types of research questions that we are addressing are the following: Can we design language models that are better able to reason about other agents' goals by observing steps? How can AI systems learn commonsense knowledge about procedures from instructions found on the web? Should we as AI practitioners design structured schematic representations akin to Schank and Abelson's scripts, or is it more effective to use free-form natural language representations? Has GPT-3 solved all of our problems? If so, what's the best drink recipe for a Mai Tai?

Eduard Hovy

Eduard Hovy is the Executive Director of Melbourne Connect (a research and tech transfer centre at the University of Melbourne), a professor at the University of Melbourne’s School of Computing and Information Systems, and a research professor at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University.

In 2020–21 he served as Program Manager in DARPA’s Information Innovation Office (I2O), where he managed programs in Natural Language Technology and Data Analytics. Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987 and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).

Dr. Hovy’s research focuses on computational semantics of language and addresses various areas in Natural Language Processing and Data Analytics, including in-depth machine reading of text, information extraction, automated text summarization, question answering, the semi-automated construction of large lexicons and ontologies, and machine translation. In 2022 his Google h-index was 97, with about 50,000 citations. Dr. Hovy is the author or co-editor of eight books and around 400 technical articles and is a popular invited speaker. He has regularly co-taught Ph.D.-level courses and has served on Advisory and Review Boards for both research institutes and funding organizations in Germany, Italy, the Netherlands, Ireland, Singapore, and the USA.

From 2003 to 2015 he was co-Director of Research for the Department of Homeland Security’s Center of Excellence for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. In 2001 Dr. Hovy served as President of the Association for Computational Linguistics (ACL), in 2001–03 as President of the International Association for Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society (DGS).

Title: On the complementarity of neural and symbolic approaches, and on how to transfer between them

Abstract

Today’s neural NLP can do amazing things, leading some people to expect human-level performance soon. But it also fails spectacularly, in ways we find hard to predict and explain. Is perfection just a matter of doing additional neural architecture engineering and more-advanced training to overcome these problems, or are there deeper reasons for the failures? I argue that trying to understand the nature and reason for failures by couching the necessary operations in terms of symbolic reasoning is a good way to discover what neural networks will remain unable to do despite additional architecture engineering and training.

Juanzi Li (李涓子)

Juanzi Li received her Ph.D. degree in computer science from Tsinghua University. She is now a professor in the Department of Computer Science and Technology and the director of the Knowledge Intelligence Research Center at the Institute for Artificial Intelligence, Tsinghua University. She also serves as the director of the Language and Knowledge Computing Committee of the Chinese Information Processing Society of China. Her research interests include knowledge graphs and semantic computing, as well as news and social network mining. She has published over 100 papers in top-tier international conferences and academic journals and two books, “Mining User Generated Content” and “Semantic Mining in Social Networks”, which have attracted over 10,000 citations. She won the second prize of the State Science and Technology Progress Award in 2020, the first prize of the Beijing Science and Technology Award in 2017, and the first prize of the China Association of Artificial Intelligence Science and Technology Award in 2013.

Title: Knowledge-oriented Explainable Programming for Complex Question Answering

Abstract

Question answering (QA) is one of the most important natural language processing tasks. At present, machines have surpassed human-level performance on some simple QA datasets, but they still fall short in answering complex questions that involve multiple reasoning abilities, such as multi-hop reasoning and logical operations. In this talk, I will present our explainable reasoning framework for Complex QA, in which the question-answering system gives a step-by-step reasoning process to improve interpretability and reliability. Specifically, I will introduce our attempts and experiences in knowledge-oriented programming, automatic program induction, and visualization.

Prem Natarajan

Dr. Prem Natarajan is vice president for Amazon Alexa, leading a multi-disciplinary team to make Alexa a trusted AI assistant, advisor, and companion for everyone, everywhere.

His team’s product, engineering, and scientific advances have improved how customers experience Alexa through advances in speech recognition and speech synthesis, natural language understanding, computer vision, entity linking and resolution, and related machine learning technologies. Dr. Natarajan and his team are now focused on advancing generalizable AI, combining the best of human-like intelligence with machine learning to accelerate the future of ambient intelligence – where the underlying AI seamlessly blends into your environment, connects heterogeneous services and devices, and adapts on your behalf to provide greater utility.

Prior to Amazon, he was senior vice dean of engineering at the University of Southern California, where he led nationally influential DARPA- and IARPA-sponsored research efforts in biometrics, face recognition, OCR, natural language processing, media forensics, and forecasting. Prior to that, he served as executive vice president and principal scientist for speech, language, and multimedia at Raytheon BBN Technologies.

Dr. Natarajan is a well-recognized thought leader in AI and a keynote speaker who is frequently interviewed by publications such as CNN, MIT Technology Review, The Economist, and Fast Company. He is the author or co-author of more than 200 published papers in research areas such as speech recognition, machine translation, and computer vision. He also holds several patents in the field of conversational AI and machine learning. He received his Ph.D. and master’s degree in Electrical Engineering from Tufts University, and a bachelor’s degree in Engineering from Savitribai Phule Pune University in India.

Title: Frontiers of Fair and Accessible AI

Abstract

In this talk, I will share my perspective on making conversational AI more inclusive and accessible through advances in natural language understanding, speech recognition, and by incorporating interaction affordances that make conversational AI more useful for everyone. I will start with a brief overview of accessibility considerations and some Alexa-powered user experiences that address those considerations. That overview will be followed by a description of some of our recent and ongoing AI/ML work in natural language understanding (Alexa Teacher Model) and in ASR with a focus on achieving fairness at scale. I will conclude with a brief mention of our engagements with academia, including through the Alexa Prize and through the release of publicly available datasets and associated research tasks.