Luke Zettlemoyer
Zettlemoyer, Luke S., 1978-
VIAF ID: 157146936838313782967 (Personal)
Permalink: http://viaf.org/viaf/157146936838313782967
Preferred Forms
- 100 0 _ ‡a Luke Zettlemoyer
- 100 1 _ ‡a Zettlemoyer, Luke S ‡d 1978-
- 100 1 _ ‡a Zettlemoyer, Luke S., ‡d 1978-
4xx's: Alternate Name Forms (4)
Works
- 3D Wikipedia
- AllenNLP: A Deep Semantic Natural Language Processing Platform
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
- BERT for Coreference Resolution: Baselines and Analysis
- Cloze-driven Pretraining of Self-attention Networks
- CM3: A Causal Masked Multimodal Model of the Internet
- Deep contextualized word representations
- Efficient Large Scale Language Modeling with Mixtures of Experts
- Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
- Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering
- Learning a Neural Semantic Parser from User Feedback
- Learning to Parse Natural Language Commands to a Robot Control System
- Multi-Agent Filtering with Infinitely Nested Beliefs
- Multi-hop Reading Comprehension through Question Decomposition and Rescoring
- OPT: Open Pre-trained Transformer Language Models
- Pre-training via Paraphrasing
- Recognizing and Imitating Programmer Style: Adversaries in Program Authorship Attribution
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Situated understanding and learning of natural language, 2015, via WWW, June 20, 2016
- Situation Recognition: Visual Semantic Role Labeling for Image Understanding
- TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
- Unsupervised Cross-lingual Representation Learning at Scale