CCG-Supertagging-RNN


Title

CCG Supertagging with a Recurrent Neural Network

Abstract

Recent work on supertagging using a feed-forward neural network achieved significant improvements for CCG supertagging and parsing (Lewis and Steedman, 2014). However, their architecture is limited to considering local contexts and does not naturally model sequences of arbitrary length.

Introduction

And third, in order to reduce computational requirements and feature sparsity, each tagging decision is made without considering any potentially useful contextual information beyond a local context window.
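To make this window limitation concrete, here is a hypothetical sketch (the function name and padding token are illustrative, not from the paper) of the fixed-size context windows a feed-forward tagger sees, one per tagging decision:

```python
# Hypothetical sketch: a feed-forward supertagger sees only a fixed
# window of +/-k words around each position, never the whole sentence.
def context_windows(tokens, k=2, pad="<PAD>"):
    padded = [pad] * k + list(tokens) + [pad] * k
    # One window of 2k+1 tokens per tagging decision.
    return [padded[i:i + 2 * k + 1] for i in range(len(tokens))]

windows = context_windows(["Ms.", "Haag", "plays", "Elianti"], k=2)
# windows[0] is ["<PAD>", "<PAD>", "Ms.", "Haag", "plays"]
```

Anything outside the 2k+1 window, however informative, is invisible to the tagging decision.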

Q:

Above all, these sentences state the problem: the architecture only considers a local context and does not naturally model sequences of arbitrary length.

[^relies too much on POS tagging; there are three drawbacks]: POS tagging assigns a word-class label to each word. The NLTK POS tagger, for example, first splits the sentence into words, then tags each word with its part of speech, producing a word–tag sequence.

1. relying on the supertagger lowers accuracy on out-of-domain data

2. performance degrades on rare or unseen words (the features are mainly based on raw words and POS tags)

3. the architecture cannot consider context beyond a local window

S:

an RNN, distributed word representations, and the RNN model plugged into the C&C parser as its supertagger
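The key point of the solution can be sketched as follows. This is a minimal, illustrative Elman-style RNN with random toy weights and sizes (not the paper's trained model): each word's hidden state is computed from its embedding *and* the previous hidden state, so the tag scores at a position can depend on the entire left context, not just a fixed window.

```python
import numpy as np

# Minimal sketch of an Elman-style RNN supertagger.
# Sizes and weights are illustrative; the real model is trained.
rng = np.random.default_rng(0)
V, D, H, T = 10, 4, 5, 3          # vocab, embedding, hidden, tagset sizes

E  = rng.normal(size=(V, D))      # word embedding table
Wx = rng.normal(size=(H, D))      # input-to-hidden weights
Wh = rng.normal(size=(H, H))      # hidden-to-hidden (recurrent) weights
Wy = rng.normal(size=(T, H))      # hidden-to-supertag weights

def supertag_scores(word_ids):
    """One hidden state per word, carrying the whole left context."""
    h = np.zeros(H)
    scores = []
    for w in word_ids:
        h = np.tanh(Wx @ E[w] + Wh @ h)   # recurrence: unbounded context
        scores.append(Wy @ h)             # one score per supertag
    return np.stack(scores)

S = supertag_scores([1, 4, 7])    # scores for a 3-word sentence
pred = S.argmax(axis=1)           # highest-scoring supertag per word
```

Because `h` is threaded through the whole loop, the sequence length is arbitrary and no context window has to be fixed in advance.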

[^Word2vec: a distributed word representation, i.e., a low-dimensional representation of words. Compared with one-hot vectors it is faster and carries semantic information: the vectors of two semantically similar words are close to each other]:

C:

We obtain substantial accuracy improvements, outperforming the feed-forward setup on both supertagging and parsing.



Author: Weiruohe
Reprint policy: All articles in this blog are licensed under CC BY 4.0 unless otherwise stated. If reproduced, please indicate the source: Weiruohe!