Welcome


Event Livestream: Here, starting at 9:00am PST

The goal of the Southern California Natural Language Processing Symposium is to gather researchers from the Southern California region with broad expertise in natural language processing. The symposium will provide a participatory environment where attendees from a variety of fields can share and discuss their latest findings.

This year, SoCal NLP will be hosted at the Corwin Pavilion on the University of California, Santa Barbara campus. It will include invited talks from academia and industry, contributed work, poster presentations, and open discussion. We welcome all students, postdocs, and faculty members from universities in the region to join us this November.

Important Dates

Submission deadline: Oct. 5, 2022 (Wednesday), 11:59 PM Anywhere On Earth [NLP submission Portal]
Submission format: approximately 2-page submissions in the ACL format, though longer submissions are also acceptable. Reviews will be single-blind (non-anonymized).
SoCal NLP Symposium: Nov. 18, 2022 (Friday)

Symposium Details

Registration Link: Here (free)
Thanks to our sponsors, registration and food are free for all attendees.
For those presenting a poster at the symposium, we will provide poster boards and clips to hang your poster. The poster boards are 6 ft × 4 ft, so your poster should be at most 6 ft wide and 4 ft tall. For those presenting a contributed talk at the symposium, you will have several minutes for the talk plus questions.

COVID-19 information

To reduce COVID-19 transmission, visitors should pay attention to the following requirements and recommendations:

  • Visitors may be asked to complete the On Demand screening survey or the COVID-19 Screening for Minor Children.
  • Attendees age 2 and older must provide proof of vaccination or a recent negative COVID-19 test, taken within the last 1 day for a rapid antigen test or within the last 2 days for a PCR test.
More information can be found on the UCSB website: Campus Requirements and Guidelines for Campus Gatherings.


Venue


Date: Nov. 18, 2022

Location: Corwin Pavilion at UCSB

Live Stream is available!



Parking information: Visitor parking is available in Parking Structure 22; a full day of parking costs about $12.00. Detailed information can be found here.


Schedule


8:30am - 9:00am  Breakfast & Registration
9:00am - 9:15am  Opening Remarks
9:15am - 10:00am  Keynote by Yejin Choi (University of Washington):
David V.S. Goliath: the Art of Leaderboarding in the Era of Extreme-Scale Neural Models
10:00am - 10:30am  Invited Talk by Alexander Rush (Cornell University):
Model Criticism for Long-Form Text Generation
10:30am - 11:00am  Morning Break
11:00am - 11:30am  Invited Talk by Yanning Shen (University of California, Irvine):
Fairness-aware Graph Neural Networks: Explainable Normalization and Attention Design
11:30am - 12:00pm  Invited Talk by Swabha Swayamdipta (University of Southern California):
Generating Datasets for Robust Generalization
12:00pm - 1:00pm  Lunch
1:00pm - 2:30pm  Poster Session 1 (Papers 1-44)
2:30pm - 4:00pm  Poster Session 2 (Papers 45+)
4:00pm - 4:30pm  Invited Talk by Bolei Zhou (University of California, Los Angeles):
Human-AI Interaction via Emergent Learning
4:30pm - 5:15pm  Keynote by William W. Cohen (Google AI):
Memory is the Second Most Important Thing: Language Models that Retrieve
5:15pm - 5:30pm  Closing Remarks & Awards

Invited Speakers


Alexander Rush

Associate Professor, Researcher*

Computer Science & Cornell Tech

Cornell University, HuggingFace*

William W. Cohen

Principal Scientist

Google AI

Yejin Choi

Brett Helsel Professor

Paul G. Allen School of Computer Science & Engineering

University of Washington

Swabha Swayamdipta

Gabilan Assistant Professor

Viterbi School of Engineering

University of Southern California

Yanning Shen

Assistant Professor

EECS & CS Department

University of California, Irvine

Bolei Zhou

Assistant Professor

Computer Science Department

University of California, Los Angeles



Alexander Rush
Title: Model Criticism for Long-Form Text Generation

Abstract: Language models have demonstrated the ability to generate highly fluent text; however, they still require additional scaffolding to maintain coherent high-level structure (e.g., story progression). Here, we propose to apply a statistical tool, model criticism in latent space, to evaluate the high-level structure of the generated text. Model criticism compares the distributions between real and generated data in a latent space obtained according to an assumptive generative process. Different generative processes identify specific failure modes of the underlying model. We perform experiments on three representative aspects of high-level discourse---coherence, coreference, and topicality---and find that transformer-based language models are able to capture topical structures but have a harder time maintaining structural coherence or modeling coreference structures.
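
As a rough illustration of the "compare distributions in a latent space" idea above, and not the speaker's actual procedure, the following minimal Python sketch scores the discrepancy between latent embeddings of real and generated documents with a kernel two-sample statistic (MMD); how the embeddings are produced is left as a hypothetical placeholder.

import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(real_latents, gen_latents, gamma=1.0):
    # Biased estimate of squared maximum mean discrepancy between the two samples;
    # a larger value suggests the generated text's latent structure drifts from the real data's.
    kxx = rbf_kernel(real_latents, real_latents, gamma).mean()
    kyy = rbf_kernel(gen_latents, gen_latents, gamma).mean()
    kxy = rbf_kernel(real_latents, gen_latents, gamma).mean()
    return kxx + kyy - 2 * kxy

# Usage (hypothetical embed() maps documents to latent vectors):
# score = mmd2(embed(real_docs), embed(model_samples))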

Bio: Alexander "Sasha" Rush is a Professor at Cornell Tech and researcher at Hugging Face. His work is at the intersection of natural language processing and probabilistic deep learning with applications in text generation and efficient inference. He has written several popular open-source software projects supporting NLP research and data science, as well as pedagogical implementations of popular libraries. He is the secretary of ICLR and developed the MiniConf software used to run ML/NLP virtual conferences during COVID. His work has received paper and demo awards at major NLP, visualization, and hardware conferences, an NSF Career Award, and a Sloan Fellowship.


William W. Cohen
Title: Memory is the Second Most Important Thing: Language Models that Retrieve

Abstract: Since its inception, one of the goals of AI has been intelligent systems that understand language and reason with information. In the early days of AI, information was represented in symbolic forms that had certain properties: in particular, knowledge was stored in discrete modular components that could be independently assessed and corrected, and combined in many ways to answer complex questions. Current AI systems built on large language models have excellent performance on many tasks, but store knowledge opaquely in their parameters. I will survey approaches over the last few years to bring large language models closer to symbolic AI systems by adding the ability to explicitly store and retrieve information from external sources.
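
As a generic sketch of the "retrieve, then read" pattern mentioned above, and not a description of any specific system from the talk, the code below fetches the passages most similar to a question using TF-IDF similarity and then conditions a language model on them; answer_with_lm is a hypothetical stand-in for whatever generative model is used.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(question, corpus, k=3):
    # Rank corpus passages by TF-IDF cosine similarity to the question.
    vec = TfidfVectorizer().fit(corpus + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(corpus))[0]
    return [corpus[i] for i in sims.argsort()[::-1][:k]]

def answer(question, corpus, answer_with_lm):
    # Prepend the retrieved passages to the prompt so knowledge lives in
    # external, inspectable text rather than only in model parameters.
    passages = retrieve(question, corpus, k=3)
    prompt = "\n".join(passages) + f"\nQuestion: {question}\nAnswer:"
    return answer_with_lm(prompt)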

Bio: William Cohen is a Principal Scientist at Google, based in Google's Pittsburgh office. He received his bachelor's degree in Computer Science from Duke University in 1984 and a PhD in Computer Science from Rutgers University in 1990. From 1990 to 2000 Dr. Cohen worked at AT&T Bell Labs and later AT&T Labs-Research, and from April 2000 to May 2002 he worked at Whizbang Labs, a company specializing in extracting information from the web. From 2002 to 2018, Dr. Cohen worked at Carnegie Mellon University in the Machine Learning Department, with a joint appointment in the Language Technology Institute, as an Associate Research Professor, a Research Professor, and a Professor. He also served as Director of the Undergraduate Minor in Machine Learning at CMU and co-Director of the Master of Science in ML Program. Dr. Cohen is a past president of the International Machine Learning Society. He has also served as an action editor for the AI and Machine Learning series of books published by Morgan Claypool, for the journal Machine Learning, the journal Artificial Intelligence, the Journal of Machine Learning Research, and the Journal of Artificial Intelligence Research. He was General Chair of the 2008 International Machine Learning Conference, held July 6-9 at the University of Helsinki, Finland; Program Co-Chair of the 2006 International Machine Learning Conference; and Co-Chair of the 1994 International Machine Learning Conference. Dr. Cohen was also co-Chair of the 3rd International AAAI Conference on Weblogs and Social Media, held May 17-20, 2009 in San Jose, and co-Program Chair of the 4th International AAAI Conference on Weblogs and Social Media. He is an AAAI Fellow, a winner of the 2008 SIGMOD "Test of Time" Award for the most influential SIGMOD paper of 1998, and a winner of the 2014 SIGIR "Test of Time" Award for the most influential SIGIR paper of 2002-2004. Dr. Cohen's research interests include question answering, machine learning for NLP tasks, and neuro-symbolic reasoning. He has a long-standing interest in statistical relational learning and in learning models, or learning from data, that display non-trivial structure. He holds seven patents related to learning, discovery, information retrieval, and data integration, and is the author of more than 200 publications. Dr. Cohen is also a Consulting Professor in the School of Computer Science at Carnegie Mellon University.


Yejin Choi
Title: David V.S. Goliath: the Art of Leaderboarding in the Era of Extreme-Scale Neural Models

Abstract: Scale appears to be the winning recipe in today's leaderboards. And yet, extreme-scale neural models are still brittle, making errors that are often nonsensical and even counterintuitive. In this talk, I will argue for the importance of knowledge as well as inference-time reasoning algorithms, and demonstrate how smaller models developed in academia can still have an edge over larger industry-scale models, if powered with knowledge and/or reasoning algorithms. I will first introduce "symbolic knowledge distillation", a new framework to distill larger neural language models into smaller (commonsense) models, which leads to a machine-authored commonsense KB that wins, for the first time, over a human-authored KB in all criteria: scale, accuracy, and diversity. Moreover, I will demonstrate new, nonobvious applications of symbolic knowledge distillation --- (1) abstractive summarization and (2) generic knowledge induction, where the recurring theme is smaller models winning over models that are orders of magnitude larger. Next, I will highlight how we can make better lemonade out of neural language models by shifting our focus to unsupervised, inference-time reasoning algorithms. I will demonstrate how unsupervised models powered with algorithms can match or even outperform supervised approaches on hard reasoning tasks such as nonmonotonic reasoning (e.g., counterfactual and abductive reasoning), or complex language generation tasks that require logical constraints.
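
The loop below is a schematic sketch of the symbolic knowledge distillation recipe described above, not the actual pipeline: a large teacher model writes candidate commonsense statements, a critic filters them, and a smaller student model is trained on what survives. teacher_generate, critic_score, and train_student are hypothetical placeholders.

def distill_symbolic_knowledge(prompts, teacher_generate, critic_score, train_student,
                               threshold=0.8):
    # Machine-author a commonsense knowledge base, then distill it into a smaller model.
    knowledge_base = []
    for prompt in prompts:
        for statement in teacher_generate(prompt):    # candidates from the large teacher LM
            if critic_score(statement) >= threshold:  # keep only statements the critic accepts
                knowledge_base.append(statement)
    student = train_student(knowledge_base)           # smaller (commonsense) model
    return knowledge_base, student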

Bio: Yejin Choi is the Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2 overseeing the project Mosaic. Her research investigates a wide variety of problems across NLP and AI, including commonsense knowledge and reasoning, neural language (de-)generation, language grounding with vision and experience, and AI for social good. She is a MacArthur Fellow and a co-recipient of the NAACL Best Paper Award in 2022, the ICML Outstanding Paper Award in 2022, the ACL Test of Time Award in 2021, the CVPR Longuet-Higgins Prize (test of time award) in 2021, the NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize (best paper award) in 2013. She received her Ph.D. in Computer Science at Cornell University and her BS in Computer Science and Engineering at Seoul National University in Korea.


Swabha Swayamdipta
Title: Generating Datasets for Robust Generalization

Abstract: Datasets are the cornerstone of AI. Yet building new datasets for modern AI systems is both expensive and fraught with issues due to annotation biases, resulting in models that struggle with robustness and generalization. In this talk, I will argue that the generative capability of language models offers a promising path forward. I will present two algorithms for dataset generation. The first approach involves overgenerating ambiguous instances for the natural language inference task via GPT-3 prompting, followed by efficient filtering and, finally, a cheaper human labeling process. The resulting dataset, WaNLI, greatly improves out-of-distribution generalization across multiple benchmarks, while avoiding known biases. Next, I will describe how we can generate counterfactuals for existing instances, using a smaller language model (GPT-2) and controlled generation. Augmenting the training data with such “NeuroCounterfactuals” results in improved out-of-domain generalization. Overall, both algorithms highlight that involving generative models in the data creation process can promote faster AI progress.
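
The sketch below illustrates, in broad strokes, the overgenerate-then-filter recipe described above; the actual WaNLI pipeline differs in its prompting, filtering, and labeling details. lm_generate and ambiguity are hypothetical placeholders for the generator and the filtering score.

def build_dataset(seed_examples, lm_generate, ambiguity, n_candidates=10000, keep=1000):
    # Overgenerate candidate examples with a large LM, keep the most ambiguous
    # ones, and hand the survivors to a (cheaper) human labeling step.
    candidates = [lm_generate(seed_examples) for _ in range(n_candidates)]
    candidates.sort(key=ambiguity, reverse=True)
    return candidates[:keep]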

Bio: Swabha Swayamdipta is an Assistant Professor of Computer Science and a Gabilan Assistant Professor at the University of Southern California. Her research interests are in natural language processing and machine learning, with a primary focus on the estimation of dataset quality, the semi-automatic collection of impactful data, and evaluating how human biases affect dataset construction and model decisions. At USC, Swabha leads the Data, Interpretability, Language and Learning (DILL) Lab. She received her PhD from Carnegie Mellon University, followed by a postdoc at the Allen Institute for AI. Her work has received outstanding paper awards at ICML 2022 and NeurIPS 2021, and an honorable mention for the best overall paper at ACL 2020.


Yanning Shen
Title: Fairness-aware Graph Neural Networks: Explainable Normalization and Attention Design

Abstract: We live in an era of big data and "small world", where a large amount of data resides on highly connected networks representing a wide range of physical, biological, and social interdependencies, e.g., social networks and smart grids. Learning from graph/network data is hence expected to bring significant science and engineering advances along with consequent improvements in quality of life. Node representation learning has demonstrated its effectiveness for various applications on graphs. Particularly, recent developments in graph neural networks and contrastive learning have led to promising results in node representation learning for a number of tasks, such as node classification and link prediction, as well as natural language processing. Despite the success of graph neural networks, fairness is largely under-explored in the field, which may lead to biased results toward underrepresented groups in the networks. To this end, this talk will first introduce novel fairness-enhancement techniques for graph neural networks. Furthermore, theoretical analysis is provided to prove that the schemes can reduce intrinsic bias. Experimental results on real networks are presented to demonstrate that the proposed framework can enhance fairness while providing comparable accuracy to state-of-the-art alternative approaches for node classification and link prediction tasks.
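
As a generic illustration of the kind of group-level bias that fairness-aware graph learning tries to shrink, and not the speaker's specific normalization or attention design, the snippet below computes the statistical parity difference between predicted positive rates for two groups of nodes.

import numpy as np

def statistical_parity_difference(predictions, group):
    # predictions: 0/1 predicted labels per node; group: 0/1 group membership per node.
    predictions, group = np.asarray(predictions), np.asarray(group)
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Example: statistical_parity_difference([1, 0, 1, 1], [0, 0, 1, 1]) -> 0.5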

Bio: Yanning Shen is an assistant professor with the EECS department at the University of California, Irvine. Her research interests span the areas of machine learning, network science, data science, and signal processing. She received her Ph.D. degree from the University of Minnesota (UMN) in 2019. She was a finalist for the Best Student Paper Award at the 2017 CAMSAP conference and the 2017 Asilomar Conference. She was selected as a Rising Star in EECS by Stanford University in 2017. She received the UMN Doctoral Dissertation Fellowship in 2018, the Microsoft Academic Grant Award for AI Research in 2021, the Google Research Scholar Award in 2022, and the Hellman Fellowship in 2022. She is also an honoree of the MIT Technology Review Innovators under 35 Asia Pacific in 2022.


Bolei Zhou
Title: Human-AI Interaction via Emergent Learning

Abstract: AI has been deployed in many real-world applications, ranging from image recognition and content creation to autonomous driving. While AI models achieve a high level of autonomy, it is difficult to establish a trustworthy relationship without meaningful interactions between humans and AI. I will talk about utilizing the interpretable concepts that emerge in the learned neural representations to facilitate human-AI interaction, with applications to interactive image generation, ML interpretability, and human-AI shared control.

Bio: Bolei Zhou is an Assistant Professor in the Computer Science Department at the University of California, Los Angeles (UCLA). He earned his Ph.D. from MIT in 2018. Before joining UCLA, he was a faculty member at The Chinese University of Hong Kong (CUHK) for three years. His research interest lies at the intersection of computer vision and machine autonomy, focusing on enabling interpretable and trustworthy human-AI interaction. He has developed many widely used interpretation methods such as CAM and Network Dissection, as well as the computer vision benchmarks Places and ADE20K.



Awards


  • Amazon Best Paper Award Winner: Contrastive Novelty Learning: Anticipating Outliers with Large Language Models
    Albert Xu, Xiang Ren, and Robin Jia (USC).
  • AppFolio Best Paper Award Winner: CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
    Abhilasha Ravichander, Matt Gardner, and Ana Marasović (AllenAI/CMU).
  • Megagon Labs Best Paper Award Winner: Empowering Language Models with Knowledge Graph Reasoning for Open-Domain Question Answering
    Ziniu Hu, Yichong Xu, Wenhao Yu, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Kai-Wei Chang, and Yizhou Sun (UCLA).

Accepted Work


    Poster Session #1

    1. ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation
      Fan Yin, Yao Li, Cho-Jui Hsieh and Kai-Wei Chang
    2. Textual Backdoor Attacks with Iterative Trigger Injection
      Jun Yan, Vansh Gupta and Xiang Ren
    3. Improving BERT's Resistance to Adversarial Attacks via Loss-Restricted Fine-Tuning
      Qin Liu
    4. Distillation-Resistant Watermarking for Model Protection in NLP & Provably Confidential Language Modelling
      Xuandong Zhao, Lei Li and Yu-Xiang Wang
    5. GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models
      Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li and Kai-Wei Chang
    6. Distant Supervision for Preconditioned Commonsense Inference
      Ehsan Qasemi, Piyush Khanna, Qiang Ning and Muhao Chen
    7. JARVIS: A Neuro-Symbolic Commonsense Reasoning Framework for Conversational Embodied Agents
      Jing Gu, Kaizhi Zheng, Yue Fan, Kaiwen Zhou, Jialu Wang, Zonglin Di, Xuehai He and Xin Wang
    8. Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning
      Letian Peng, Zuchao Li, Ndapa Nakashole and Hai Zhao
    9. Commonsense Knowledge Base Population
      Tianqing Fang and Muhao Chen
    10. Mitigating Covertly Unsafe Text for AI Safety
      Alex Mei, Anisha Kabir, Sharon Levy, Melanie Subbiah, Emily Allaway, John Judge, Desmond Patton, Bruce Bimber, Kathleen McKeown and William Wang
    11. ER-Test: Evaluating Explanation Regularization Methods for Language Models
      Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz and Xiang Ren
    12. How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions?
      Hritik Bansal, Da Yin, Masoud Monajatipoor and Kai-Wei Chang
    13. Bridging Nations: Quantifying the Role of Multilinguals in Communication on Social Media
      Julia Mendelsohn, Sayan Ghosh, David Jurgens and Ceren Budak
    14. Quantifying Social Biases Using Templates is Unreliable
      Preethi Seshadri, Pouya Pezeshkpour and Sameer Singh
    15. SafeText: A Benchmark for Exploring Physical Safety in Language Models
      Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown and William Yang Wang
    16. POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection
      Yujian Liu, Xinliang Frederick Zhang, David Wegsman, Nick Beauchamp and Lu Wang
    17. On Faithfulness and Coherence of Language Explanations for Recommendation Systems
      Zhouhang Xie, Bodhisattwa Prasad Majumder and Julian McAuley
    18. TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
      Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju and Sameer Singh
    19. Continual Learning for Dialogue State Tracking by Maximizing Task Consistency
      Hyundong Cho, Andrea Madotto, Zhaojiang Lin, Khyathi Raghavi Chandu, Satwik Kottur, Jing Xu, Jonathan May and Chinnadhurai Sankar
    20. Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality
      Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin, Jay Pujara and Xiang Ren
    21. ChatMatch: Evaluating Chatbots by Autonomous Chat Tournaments
      Ruolan Yang
    22. ACCENT: An Automatic Event Commonsense Evaluation Metric for Open-Domain Dialogue Systems
      Sarik Ghazarian, Yijia Shao, Rujun Han, Aram Galstyan and Nanyun Peng
    23. Aerial Vision-and-Dialog Navigation
      Yue Fan, Winson Chen, Tongzhou Jiang, Chun Zhou, Yi Zhang and Xin Wang
    24. NewsEdits: A News Article Revision Dataset and a Document-Level Reasoning Challenge
      Alexander Spangher, Xiang Ren, Jonathan May and Nanyun Peng
    25. Examining Single Sentence Label Leakage in Natural Language Inference Datasets
      Michael Saxon, Xinyi Wang, Wenda Xu and William Yang Wang
    26. Structurally Diverse Sampling for Sample-Efficient Training and Comprehensive Evaluation
      Shivanshu Gupta, Sameer Singh and Matt Gardner
    27. RobustLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners
      Soumya Sanyal, Zeyi Liao and Xiang Ren
    28. Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning
      Fei Wang, Zhewei Xu, Pedro Szekely and Muhao Chen
    29. DEGREE: A Data-Efficient Generation-Based Event Extraction Model
      I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Premkumar Natarajan, Kai-Wei Chang and Nanyun Peng
    30. Controllable Text Generation with Neurally-Decomposed Oracle
      Tao Meng, Sidi Lu, Nanyun Peng and Kai-Wei Chang
    31. switch-GLAT: Multilingual Parallel Machine Translation via Code-Switch Decoder
      Zhenqiao Song and Lei Li
    32. In silico diversification of DNAJB6 for directed evolution seeds
      Alexander Kratz
    33. Generative Entity Retrieval on Vaccine Adverse Effects Reports
      Bosung Kim and Ndapa Nakashole
    34. Accelerating Antimicrobial Peptide Discovery with Latent Sequence-Structure Model
      Danqing Wang, Zeyu Wen, Lei Li and Hao Zhou
    35. DICE: Data-Efficient Clinical Event Extraction with Generative Models
      Mingyu Derek Ma, Alex Taylor, Wei Wang and Nanyun Peng
    36. Causal Narrative Networks of Official COVID-19 Communications
      Sabrina Mai, Scott Leo Renshaw, Jeannette Sutton and Carter Butts
    37. ClimaBench: A Benchmark Dataset For Climate Change Text Understanding in English
      Tanmay Laud, Daniel Spokoyny, Tom Corringham and Taylor Berg-Kirkpatrick
    38. Neural Methods for Logical Reasoning over Knowledge Graphs
      Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao and Ce Zhang
    39. RLogic: Recursive Logical Rule Learning from Knowledge Graphs
      Kewei Cheng, Jiahao Liu, Wei Wang and Yizhou Sun
    40. Benchmarking Long-tail Generalization with Likelihood Splits
      Ameya Godbole and Robin Jia
    41. Estimating Numbers without Regression
      Avijit Thawani, Jay Pujara and Ashwin Kalyan
    42. LOPS: Learning Order Inspired Pseudo-Label Selection for Weakly Supervised Text Classification
      Dheeraj Mekala, Chengyu Dong and Jingbo Shang
    43. FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue
      Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang
    44. Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis
      Wenda Xu, Yilin Tuan, Yujie Lu, Michael Saxon, Lei Li, William Yang Wang

    Poster Session #2

    1. On the Paradox of Learning to Reason from Data
      Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang and Guy Van den Broeck
    2. CPΔ: Multilingual Fine-Tuning with Contextual Parameter Deltas
      Joe O'Connor, Bairu Hou, Shiyu Chang, Yang Zhang and Jacob Andreas
    3. SIMPLE: A Gradient Estimator for k-Subset Sampling
      Kareem Ahmed, Zhe Zeng, Mathias Niepert and Guy Van den Broeck
    4. Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning
      Mozhdeh Gheini, Xuezhe Ma and Jonathan May
    5. Eliciting Cross-task Skills in Multi-task Learning with Task-level Mixture-of-Experts
      Qinyuan Ye, Juan Zha and Xiang Ren
    6. A Vector Is Not Enough: Taxonomy Expansion via Box Embeddings
      Song Jiang, Qiyue Yao, Qifan Wang and Yizhou Sun
    7. GENEVA: Pushing the Limit of Generalizability for Event Argument Extraction with 100+ Event Types
      Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang and Nanyun Peng
    8. Sharpness-Aware Minimization with Dynamic Reweighting
      Wenxuan Zhou, Fangyu Liu, Huan Zhang and Muhao Chen
    9. Fine-grained Contrastive Learning for Relation Extraction
      William Hogan, Jiacheng Li and Jingbo Shang
    10. Impact of Pretraining Term Frequencies on Few-Shot Reasoning: Analysis and Online Interface tool
      Yasaman Razeghi, Raja Sekhar Reddy Mekala, Robert Logan, Matt Gardner and Sameer Singh
    11. Formulating Few-shot Fine-tuning Towards Language Model Pre-training: A Pilot Study on Named Entity Recognition
      Zihan Wang, Kewen Zhao, Zilong Wang and Jingbo Shang
    12. A Study of Syntactic Multi-Modality in Non-Autoregressive Machine Translation
      Kexun Zhang
    13. Fine-Grained Alignment for Few-Shot Speech Translation
      Siqi Ouyang, Rong Ye and Lei Li
    14. Numerical Correlation in Text
      Daniel Spokoyny, Chien-Sheng Wu and Caiming Xiong
    15. Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning
      Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark and Ashwin Kalyan
    16. PromptBoosting: Black-Box Text Classification with Ten Forward Passes
      Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang
    17. RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning
      Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing and Zhiting Hu
    18. CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
      Abhilasha Ravichander, Matt Gardner and Ana Marasović
    19. Successive Prompting for Decomposing Complex Questions
      Dheeru Dua, Shivanshu Gupta, Sameer Singh and Matt Gardner
    20. Bridging the Training-Inference Gap for Dense Phrase Retrieval
      Gyuwan Kim, Jinhyuk Lee, Barlas Oğuz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad and William Wang
    21. Summarization as Indirect Supervision for Relation Extraction
      Keming Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Ma and Muhao Chen
    22. Learning to Query Internet Text for Informing Reinforcement Learning Agents
      Kolby Nottingham, Alekhya Pyla, Roy Fox and Sameer Singh
    23. ProceduralQA
      Tenghao Huang, Yixiang Yao, Xuezhe Ma and Muhao Chen
    24. Empowering Language Models with Knowledge Graph Reasoning for Open-Domain Question Answering
      Ziniu Hu, Kai-Wei Chang and Yizhou Sun
    25. Contrastive Novelty Learning: Anticipating Outliers with Large Language Models
      Albert Xu, Xiang Ren and Robin Jia
    26. Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference
      Bangzheng Li, Wenpeng Yin and Muhao Chen
    27. Representation Learning for Resource-Constrained Keyphrase Generation
      Di Wu, Wasi Ahmad, Sunipa Dev and Kai-Wei Chang
    28. Unified Semantic Typing with Meaningful Label Inference
      James Y. Huang, Bangzheng Li, Jiashu Xu and Muhao Chen
    29. Compact Token Representations with Contextual Quantization for Efficient Document Re-ranking
      Yingrui Yang, Yifan Qiao and Tao Yang
    30. Structured Data Representation Methods: A Comparative Study
      Yutong Shao and Ndapandula Nakashole
    31. Zoom Audio Transcription Accuracy for African American Vernacular English
      Christina Chance and Dorian Arnold
    32. More Data, Better Accuracy? An Empirical Study on Few-shot Speech-based Dementia Detection
      Youxiang Zhu and Xiaohui Liang
    33. Does Your Model Classify Entities Reasonably? Diagnosing and Mitigating Spurious Correlations in Entity Typing
      Nan Xu, Fei Wang, Bangzheng Li, Mingtao Dong and Muhao Chen
    34. On Grounded Planning for Embodied Tasks with Language Models
      Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer and Xiang Ren
    35. Paraphrasing Is All You Need for Novel Object Captioning
      Cheng-Fu Yang, Yao-Hung Tsai, Wan-Cyuan Fan, Ruslan Salakhutdinov, Louis-Philippe Morency and Yu-Chiang Frank Wang
    36. Grounding Words in Visual Perceptions: Experiments in Spoken Language Acquisition
      Fabio De Ponte
    37. Variation of Gender Biases in Visual Recognition Models Before and After Finetuning
      Jaspreet Ranjit, Tianlu Wang, Vicente Ordóñez and Baishakhi Ray
    38. FedVLN: Privacy-preserving Federated Vision-and-Language Navigation
      Kaiwen Zhou and Xin Eric Wang
    39. VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation
      Kaizhi Zheng, Xiaotong Chen, Odest Chadwicke Jenkins and Xin Wang
    40. Curriculum Learning for Data-Efficient Vision-Language Alignment
      Tejas Srinivasan, Xiang Ren and Jesse Thomason
    41. Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems
      Wang Zhu, Jesse Thomason and Robin Jia
    42. Counterfactual Prompt Learning for Vision and Language Models
      Xuehai He and Xin Wang
    43. Towards Underspecified Vision-and-Language Navigation
      Weixi Feng, Tsu-jui Fu, Yujie Lu, William Yang Wang
    44. Visualize Before You Write: Imagination-Guided Open-Ended Text Generation
      Wanrong Zhu, An Yan, Yujie Lu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

    Organizers


    General Chairs

    William Wang

    Associate Professor

    Computer Science

    UCSB

    Lei Li

    Assistant Professor

    Computer Science

    UCSB

    Shiyu Chang

    Assistant Professor

    Computer Science

    UCSB


    PC Chairs

    Michael Saxon

    Ph.D. Student

    Computer Science

    UCSB

    Yujie Lu

    Ph.D. Student

    Computer Science

    UCSB


    Local Organization Chairs

    Laura Langan

    CRML Administrative Coordinator

    Computer Science

    UCSB

    Zhenqiao Song

    Ph.D. Student

    Computer Science

    UCSB

    Qiucheng Wu

    Ph.D. Student

    Computer Science

    UCSB

    Danqing Wang

    Ph.D. Student

    Computer Science

    UCSB

    Kexun Zhang

    Ph.D. Student

    Computer Science

    UCSB



    Past Symposiums



    Contact


    If you need accommodations or have questions or comments, please email us at saxon@ucsb.edu or yujielu@ucsb.edu.