2008 A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning


Subject Headings: Natural Language Processing Task, Morpheme Embedding System, Collobert-Weston Embedding.

Notes

Cited By

Quotes

Abstract

We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art performance.

1. Introduction

The field of Natural Language Processing (NLP) aims to convert human language into a formal representation that is easy for computers to manipulate. Current end applications include information extraction, machine translation, summarization, search and human-computer interfaces.

While complete semantic understanding is still a far-distant goal, researchers have taken a divide and conquer approach and identified several sub-tasks useful for application development and analysis. These range from the syntactic, such as part-of-speech tagging, chunking and parsing, to the semantic, such as word-sense disambiguation, semantic-role labeling, named entity extraction and anaphora resolution.

Currently, most research analyzes those tasks separately. Many systems possess few characteristics that would help develop a unified architecture which would presumably be necessary for deeper semantic tasks. In particular, many systems possess three failings in this regard: (i) they are shallow in the sense that the classifier is often linear, (ii) for good performance with a linear classifier they must incorporate many hand-engineered features specific for the task; and (iii) they cascade features learnt separately from other tasks, thus propagating errors.

In this work we attempt to define a unified architecture for Natural Language Processing that learns features that are relevant to the tasks at hand given very limited prior knowledge. This is achieved by training a deep neural network, building upon work by (Bengio & Ducharme, 2001) and (Collobert & Weston, 2007). We define a rather general convolutional network architecture and describe its application to many well known NLP tasks including part-of-speech tagging, chunking, named-entity recognition, learning a language model and the task of semantic role-labeling.

All of these tasks are integrated into a single system which is trained jointly. All the tasks except the language model are supervised tasks with labeled training data. The language model is trained in an unsupervised fashion on the entire Wikipedia website. Training this task jointly with the other tasks comprises a novel form of semi-supervised learning.
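The joint training described above can be sketched as a loop that, at each step, samples a task, samples one example from that task, and updates the shared parameters. The toy tasks, the single shared weight, and the perceptron-style update below are all illustrative assumptions, not the paper's actual model:

```python
import random

def sgd_multitask(tasks, shared_w, lr=0.1, steps=200, seed=0):
    """Joint multitask training sketch with weight sharing.

    tasks: dict name -> list of (x, y) with x a float, y in {0, 1}.
    Each step samples a task, then one example, and updates the
    *shared* parameter, so every task shapes the same weights.
    """
    rng = random.Random(seed)
    names = sorted(tasks)
    for _ in range(steps):
        name = rng.choice(names)          # sample a task at random
        x, y = rng.choice(tasks[name])    # sample one example from it
        pred = 1.0 if shared_w * x > 0 else 0.0
        shared_w += lr * (y - pred) * x   # perceptron-style update
    return shared_w

# Two tiny made-up tasks that agree on the sign of the shared weight.
tasks = {
    "pos": [(1.0, 1), (-1.0, 0)],
    "chunk": [(2.0, 1), (-2.0, 0)],
}
w = sgd_multitask(tasks, shared_w=-0.5)
print(w > 0)  # True: both tasks push the shared weight positive
```

Because both tasks prefer the same sign, each sampled update moves the shared weight in a direction that helps both, which is the intuition behind sharing weights across tasks.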

We focus on, in our opinion, the most difficult of these tasks: the semantic role-labeling problem. We show that both (i) multitask learning and (ii) semisupervised learning significantly improve performance on this task in the absence of hand-engineered features. We also show how the combined tasks, and in particular the unsupervised task, learn powerful features with clear semantic information given no human supervision other than the (labeled) data from the tasks (see Table 1).

The article is structured as follows. In Section 2 we describe each of the NLP tasks we consider, and in Section 3 we define the general architecture that we use to solve all the tasks. Section 4 describes how this architecture is employed for multitask learning on all the labeled tasks we consider, and Section 5 describes the unlabeled task of building a language model in some detail. Section 6 gives experimental results of our system, and Section 7 concludes with a discussion of our results and possible directions for future research.

2. NLP Tasks

We consider six standard NLP tasks in this paper.

Part-Of-Speech Tagging (POS) aims at labeling each word with a unique tag that indicates its syntactic role, e.g. plural noun, adverb, etc.

Chunking, also called shallow parsing, aims at labeling segments of a sentence with syntactic constituents such as noun or verb phrase (NP or VP). Each word is assigned only one unique tag, often encoded as a begin-chunk (e.g. B-NP) or inside-chunk tag (e.g. I-NP).
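The begin-chunk/inside-chunk encoding can be decoded back into labeled spans with a small helper. The function below is an illustrative sketch (not from the paper); it assumes an "O" tag marks words outside any chunk, as in the common BIO convention:

```python
def decode_chunks(tags):
    """Decode per-word B-/I- chunk tags into (start, end, label) spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # a new chunk closes the previous one
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue                       # extend the current chunk
        else:                              # "O" or an inconsistent I- tag
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:                  # close a chunk ending at the sentence end
        spans.append((start, len(tags), label))
    return spans

tags = ["B-NP", "I-NP", "O", "B-VP", "B-NP"]
print(decode_chunks(tags))  # [(0, 2, 'NP'), (3, 4, 'VP'), (4, 5, 'NP')]
```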

Named Entity Recognition (NER) labels atomic elements in the sentence into categories such as “PERSON”, “COMPANY”, or “LOCATION”.

Semantic Role Labeling (SRL) aims at giving a semantic role to a syntactic constituent of a sentence. In the PropBank (Palmer et al., 2005) formalism one assigns roles ARG0-5 to words that are arguments of a predicate in the sentence, e.g. the following sentence might be tagged “[John]ARG0 [ate]REL [the apple]ARG1”, where “ate” is the predicate. The precise arguments depend on a verb’s frame and if there are multiple verbs in a sentence some words might have multiple tags. In addition to the ARG0-5 tags, there are 13 modifier tags such as ARGM-LOC (locational) and ARGM-TMP (temporal) that operate in a similar way for all verbs.

Language Models A language model traditionally estimates the probability of the next word being w in a sequence. We consider a different setting: predict whether the given sequence exists in nature, or not, following the methodology of (Okanohara & Tsujii, 2007). This is achieved by labeling real texts as positive examples, and generating “fake” negative text.
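One simple way to generate the “fake” negative text mentioned above is to take a real window of words as a positive example and corrupt it, e.g. by replacing its middle word with a random dictionary word. This is a sketch of one common corruption strategy; the dictionary and sentence here are made up, and the paper's exact recipe may differ:

```python
import random

def corrupt_middle(window, dictionary, rng):
    """Return a copy of `window` whose middle word is replaced by a
    different word drawn at random from `dictionary`."""
    fake = list(window)
    mid = len(fake) // 2
    choices = [w for w in dictionary if w != fake[mid]]
    fake[mid] = rng.choice(choices)
    return fake

rng = random.Random(0)
dictionary = ["the", "cat", "sat", "on", "mat", "dog"]
real = ["the", "cat", "sat", "on", "mat"]     # labeled positive
fake = corrupt_middle(real, dictionary, rng)  # labeled negative
print(fake[2] != "sat")  # True: the middle word was replaced
```

A binary classifier trained to separate such real and fake windows never needs labeled data, which is what makes this task usable as the unsupervised component of the joint system.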

Semantically Related Words (“Synonyms”) This is the task of predicting whether two words are semantically related (synonyms, holonyms, hypernyms...) which is measured using the WordNet database (http://wordnet.princeton.edu) as ground truth. Our main interest is SRL, as it is, in our opinion, the most complex of these tasks. We use all these tasks to: (i) show the generality of our proposed architecture; and (ii) improve SRL through multitask learning.

3. General Deep Architecture for NLP

All the NLP tasks above can be seen as tasks assigning labels to words. The traditional NLP approach is: extract from the sentence a rich set of hand-designed features which are then fed to a classical shallow classification algorithm, e.g. a Support Vector Machine (SVM), often with a linear kernel.

The choice of features is a completely empirical process, mainly based on trial and error, and the feature selection is task dependent, implying additional research for each new NLP task. Complex tasks like SRL then require a large number of possibly complex features (e.g., extracted from a parse tree) which makes such systems slow and intractable for large-scale applications. Instead we advocate a deep neural network (NN) architecture, trained in an end-to-end fashion. The input sentence is processed by several layers of feature extraction. The features in deep layers of the network are automatically trained by backpropagation to be relevant to the task. We describe in this section a general deep architecture suitable for all our NLP tasks, and easily generalizable to other NLP tasks. Our architecture is summarized in Figure 1. The first layer extracts features for each word. The second layer extracts features from the sentence treating it as a sequence with local and global structure (i.e., it is not treated like a bag of words). The following layers are classical NN layers.
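The layered pipeline described above can be sketched end to end: a lookup table turns word indices into vectors, a sliding window gives each word local context, and a classical linear-plus-tanh layer produces per-word features. All sizes and weight values below are made up for illustration:

```python
import math

d, win, hid = 2, 3, 2   # embedding dim, context window, hidden units (illustrative)
# d x |D| lookup table over a toy 5-word dictionary.
W = [[0.1 * (row + col) for col in range(5)] for row in range(d)]

def lookup(idx):
    """First layer: word index -> its column of W (a d-dim vector)."""
    return [W[row][idx] for row in range(d)]

def forward(sentence):
    """Second layer onward: windowed features, then a tanh layer."""
    vecs = [lookup(i) for i in sentence]
    pad = [0.0] * d                      # pad sentence edges for the window
    padded = [pad] + vecs + [pad]
    out = []
    for t in range(len(vecs)):
        # concatenate the window of `win` word vectors around word t
        x = sum((padded[t + k] for k in range(win)), [])
        # classical NN layer with made-up tied weights of 0.05
        h = [math.tanh(sum(0.05 * v for v in x)) for _ in range(hid)]
        out.append(h)
    return out

feats = forward([0, 3, 1])
print(len(feats), len(feats[0]))  # 3 2  (one hidden vector per word)
```

The point of the sketch is the shape of the computation, not the numbers: every stage is differentiable, so backpropagation can train the lookup table and the upper layers jointly.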

3.1 Transforming Indices into Vectors

As our architecture deals with raw words and not engineered features, the first layer has to map words into real-valued vectors for processing by subsequent layers of the NN. For simplicity (and efficiency) we consider words as indices in a finite dictionary of words D ⊂ ℕ.

Lookup-Table Layer. Each word i ∈ D is embedded into a d-dimensional space using a lookup table LT_W(·):

LT_W(i) = W_i,

where W ∈ ℝ^(d×|D|) is a matrix of parameters to be learnt, W_i ∈ ℝ^d is the i-th column of W and d is the word vector size (wsz) to be chosen by the user. In the first layer of our architecture an input sentence {s_1, s_2, ..., s_n} of n words in D is thus transformed into a series of vectors {W_{s_1}, W_{s_2}, ..., W_{s_n}} by applying the lookup table to each of its words.

It is important to note that the parameters W of the layer are automatically trained during the learning process using backpropagation.
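Because LT_W(i) simply selects column i of W, the gradient flowing into the lookup table is sparse: only the columns of words that appear in the input sentence receive updates. The sizes and gradient values below are illustrative:

```python
d, vocab = 3, 6
W = [[0.0] * vocab for _ in range(d)]        # d x |D| embedding matrix

def lt(i):
    """LT_W(i) = W_i: select column i of W."""
    return [W[row][i] for row in range(d)]

def backprop_lookup(sentence, grads, lr=0.1):
    """grads[t] is dLoss/d(LT_W(sentence[t])), a length-d vector.
    Only the columns indexed by the sentence are touched."""
    for i, g in zip(sentence, grads):
        for row in range(d):
            W[row][i] -= lr * g[row]         # update column i only

sentence = [2, 4]
backprop_lookup(sentence, [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
print(W[0])  # only columns 2 and 4 changed
```

This sparsity is why training word embeddings by backpropagation stays cheap even with a large dictionary: each example touches only a handful of columns.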

References

BibTeX

@inproceedings{2008_AUnifiedArchitectureforNaturalL,
  author    = {Ronan Collobert and
               Jason Weston},
  editor    = {William W. Cohen and
               Andrew McCallum and
               Sam T. Roweis},
  title     = {A unified architecture for natural language processing: deep neural
               networks with multitask learning},
  booktitle = {Proceedings of the Twenty-Fifth International Conference on Machine Learning
               (ICML 2008)},
  series    = {ACM International Conference Proceeding Series},
  volume    = {307},
  pages     = {160--167},
  publisher = {{ACM}},
  year      = {2008},
  url       = {https://doi.org/10.1145/1390156.1390177},
  doi       = {10.1145/1390156.1390177},
}

