Campus Event Calendar

Event Entry

What and Who

Controllable and Creative Natural Language Generation

Nanyun Peng
University of California, Los Angeles
INF Distinguished Lecture Series

Nanyun (Violet) Peng is an Associate Professor of Computer Science at the University of California, Los Angeles. She received her Ph.D. in Computer Science from Johns Hopkins University, Center for Language and Speech Processing. Her research focuses on robust and generalizable NLP techniques, with applications to creative language generation, multi-modal cross-lingual understanding, and low-resource information extraction. Dr. Peng is a recipient of the NSF CAREER Award, an NIH R01 grant, a Google Research Scholar Award, an Okawa Foundation Research Grant, and various federal and industrial grants. Her work has won the Outstanding Paper Award at NAACL 2022 and Best Paper Awards at the AAAI 2022 DLG workshop and the EMNLP 2023 PAN-DL workshop, and was featured in the IJCAI 2022 Early Career Spotlight.
AG 1, INET, AG 5, RG1, SWS, AG 2, AG 4, D6, AG 3  
Public Audience
English

Date, Time and Location

Friday, 19 July 2024
10:00
60 Minutes
E 1.4
024
Saarbrücken

Abstract

Recent advances in large language models (LLMs) have demonstrated strong results in natural language processing (NLP) applications such as dialogue systems, text classification, machine translation, and document summarization. With the improving capabilities of LLMs, there is a growing need for controllable generation to produce reliable and tailored outputs, especially in applications requiring adherence to specific guidelines or creativity within defined boundaries. However, the prevalent auto-regressive paradigm, which trains models to predict the next word given the left-hand-side context, makes it challenging to impose structural or content constraints on the model.
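
For context, the auto-regressive paradigm referred to above is the standard left-to-right factorization of sequence probability (a generic formulation, not one specific to the models discussed in the talk):

    P(x_1, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \dots, x_{t-1})

Because each token is conditioned only on the tokens to its left, constraints that concern the sequence as a whole (for example rhyme, formality, or required keywords) cannot be enforced directly by this training objective.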

In this talk, I will present our recent work on controllable natural language generation (NLG) that transcends the conventional auto-regressive formulation, aiming to improve both the reliability and creativity of generative models. We introduce controllable decoding-time algorithms that steer auto-regressive models to better conform to specified constraints. We also introduce a novel insertion-based generation paradigm that goes beyond auto-regressive models. Our approach enables more reliable and creative outputs, with applications to creative generation, formality-controlled machine translation, and commonsense-compliant generation.
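
As a rough illustration of what decoding-time control can mean in general (a minimal sketch of generic logit masking and boosting, not the specific algorithms presented in the talk; the function names, toy vocabulary, and stand-in scorer below are hypothetical):

    import math
    import random

    # Toy vocabulary; a real system would use a language model's tokenizer.
    VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "<eos>"]

    def toy_logits(context):
        # Stand-in for a real LM: one pseudo-score per vocabulary item,
        # seeded deterministically on the context so the example is reproducible.
        seed = sum(ord(c) for c in " ".join(context)) + len(context)
        rng = random.Random(seed)
        return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

    def constrained_greedy_decode(banned=frozenset(), boosted=frozenset(),
                                  boost=2.0, max_len=8):
        # Greedy decoding with lexical constraints applied at every step:
        # banned tokens get -inf scores (hard constraint, never emitted),
        # boosted tokens get a bonus (soft constraint, preferred when plausible).
        context = []
        for _ in range(max_len):
            scores = toy_logits(context)
            for i, tok in enumerate(VOCAB):
                if tok in banned:
                    scores[i] = float("-inf")
                elif tok in boosted:
                    scores[i] += boost
            next_tok = VOCAB[max(range(len(VOCAB)), key=lambda i: scores[i])]
            if next_tok == "<eos>":
                break
            context.append(next_tok)
        return " ".join(context)

    print(constrained_greedy_decode(banned={"dog"}, boosted={"cat"}))

Approaches in this family typically replace the toy scorer with an actual language model and the hand-written mask with learned discriminators or constraint checkers, but the interception point, the per-step token scores, is the same.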

Contact

Connie Balzert
+49 681 9325 2000
--email hidden

Virtual Meeting Details

Zoom
649 2184 1396
Passcode visible to logged-in users only

Connie Balzert, 07/15/2024 10:14 -- Created document.