Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand?
Language models trained on billions of tokens have recently led to unprecedented results on many NLP tasks. This success raises the question of whether, in principle, a system can ever “understand”…
Infusing Finetuning with Semantic Dependencies
For natural language processing systems, two kinds of evidence support the use of text representations from neural language models “pretrained” on large unannotated corpora: performance on…
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths through Question Decomposition
Recent efforts to create challenge benchmarks that test the abilities of natural language understanding models have largely depended on human annotations. In this work, we introduce the “Break,…
Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes
We explore few-shot learning (FSL) for relation classification (RC). Focusing on the realistic scenario of FSL, in which a test instance might not belong to any of the target categories…
MultiCite: Modeling realistic citations requires moving beyond the single-sentence single-label setting
Citation context analysis (CCA) is an important task in natural language processing that studies how and why scholars discuss each other’s work. Despite being studied for decades, traditional…
“How’s Shelby the Turtle today?” Strengths and Weaknesses of Interactive Animal-Tracking Maps for Environmental Communication
Interactive wildlife-tracking maps on public-facing websites and apps have become a popular way to share scientific data with the public as more conservationists and wildlife researchers deploy…
Critical Thinking for Language Models
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic text corpus of deductively valid arguments, and use this…
Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral
The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance. Divergence frontiers have recently been proposed as an evaluation framework…
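As an illustration of the idea behind divergence frontiers, the sketch below traces, for two discrete distributions, the curve of KL divergences to their mixtures. This is a simplified reading of the framework under the assumption that the frontier is swept by mixture distributions r = λp + (1−λ)q; the function names and parameters are illustrative, not the paper's API.

```python
import numpy as np

def kl(p, q):
    # KL divergence between discrete distributions; 0 * log(0/q) is treated as 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_frontier(p, q, num_points=50):
    """Sketch: for mixture weights lambda in (0, 1), return the points
    (KL(p || r), KL(q || r)) with r = lambda * p + (1 - lambda) * q.
    An illustrative simplification of the evaluation framework the paper studies,
    not the paper's implementation."""
    pts = []
    for lam in np.linspace(0.01, 0.99, num_points):
        r = lam * p + (1 - lam) * q  # mixture is positive wherever p or q is
        pts.append((kl(p, r), kl(q, r)))
    return pts
```

When p and q coincide, every frontier point collapses to the origin; as the two distributions diverge, the curve bows away from it, which is what makes the frontier usable as a two-sided quality measure.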
Memory-efficient Transformers via Top-k Attention
Following the success of dot-product attention in Transformers, numerous approximations have recently been proposed to address its quadratic complexity with respect to the input length. While these…
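The core idea of top-k attention can be sketched in a few lines: for each query, keep only the k largest dot-product scores before the softmax and mask out the rest. This is an illustrative NumPy sketch of that masking step only; the paper's actual memory savings also rely on query chunking, which is omitted here.

```python
import numpy as np

def topk_attention(Q, K, V, k):
    """Sketch of top-k attention: per query row, retain the k largest
    key scores and softmax over just those; masked entries get zero weight.
    Illustrative only -- not the authors' implementation."""
    scores = Q @ K.T                                   # (n_q, n_k) dot-product scores
    topk_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]  # indices of k best keys
    masked = np.full_like(scores, -np.inf)             # -inf => zero softmax weight
    np.put_along_axis(masked, topk_idx,
                      np.take_along_axis(scores, topk_idx, axis=-1), axis=-1)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over surviving scores
    return weights @ V
```

With k equal to the number of keys, this reduces exactly to standard softmax attention, which makes the approximation easy to sanity-check.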
Overview and Insights from the SciVer Shared Task on Scientific Claim Verification
We present an overview of the SCIVER shared task, hosted at the 2nd Scholarly Document Processing (SDP) workshop at NAACL 2021. In this shared task, systems were provided a scientific claim and a…