SpanBERT


Title

SpanBERT: Improving Pre-training by Representing and Predicting Spans

Abstract

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution.

Compared to BERT, SpanBERT can better represent and predict spans of text.

Our method builds on BERT by masking contiguous random spans rather than individual random tokens, and by training the span boundary representations to predict the entire content of the masked span, rather than relying on the representations of the individual tokens inside it.
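To make the two ideas concrete, here is a minimal PyTorch sketch of span masking and the span boundary objective (SBO). The hyperparameters follow the paper (span lengths drawn from Geo(p = 0.2), clipped at 10, masking about 15% of tokens; a two-layer feed-forward head with GeLU and layer normalization), but the function and class names and the exact layer ordering are my own illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn


def sample_masked_spans(seq_len, mask_budget=0.15, p=0.2, max_span=10):
    """Sample contiguous spans to mask, roughly as in the paper:
    span lengths ~ Geometric(p) clipped to [1, max_span], drawn until
    about 15% of tokens are covered. Returns (start, end) pairs,
    end-exclusive."""
    budget = int(seq_len * mask_budget)
    spans, covered = [], set()
    while len(covered) < budget:
        # PyTorch's Geometric counts failures before a success, so +1
        # turns it into a span length of at least 1.
        length = min(int(torch.distributions.Geometric(p).sample()) + 1, max_span)
        start = torch.randint(0, seq_len - length + 1, (1,)).item()
        idx = set(range(start, start + length))
        if idx & covered:
            continue  # keep sampled spans disjoint
        covered |= idx
        spans.append((start, start + length))
    return spans


class SpanBoundaryObjective(nn.Module):
    """Predict each masked token from the two boundary-token
    representations plus a relative-position embedding, via a
    two-layer feed-forward net with GeLU and layer norm."""

    def __init__(self, hidden, vocab_size, max_span=10):
        super().__init__()
        self.pos_emb = nn.Embedding(max_span, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.GELU(),
            nn.LayerNorm(hidden),
            nn.Linear(hidden, hidden),
            nn.GELU(),
            nn.LayerNorm(hidden),
        )
        self.decoder = nn.Linear(hidden, vocab_size)

    def forward(self, enc, start, end):
        """enc: (seq_len, hidden) encoder output; (start, end) is one
        masked span, end-exclusive. Sequences begin with [CLS], so
        start >= 1 and the left boundary index is always valid."""
        left, right = enc[start - 1], enc[end]   # boundary tokens outside the span
        positions = torch.arange(end - start)    # relative position of each masked token
        pos = self.pos_emb(positions)
        pair = torch.cat([left, right]).expand(len(positions), -1)
        h = self.mlp(torch.cat([pair, pos], dim=-1))
        return self.decoder(h)                   # (span_len, vocab_size) logits
```

In pre-training, the SBO loss for each masked token is summed with the usual MLM loss, so every token inside a span is predicted twice: once from its own masked position and once from the span boundaries.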

Introduction

Reference

https://arxiv.org/pdf/1907.10529.pdf

