| id (string, 179 classes) | question (string, 8.75k-85.9k chars) | answer (dict) |
|---|---|---|
1911.12569
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
<<<Abstract>>>
In this paper, we propose a two-layered multi-task attention based neural network that performs sentiment analysis through emotion analysis. The proposed approach is based on Bidirectional Long Short-Term Memory and uses Distributional Thesaurus as a source of external knowledge to improve the sentiment and emotion prediction. The proposed system has two levels of attention to hierarchically build a meaningful representation. We evaluate our system on the benchmark dataset of SemEval 2016 Task 6 and also compare it with the state-of-the-art systems on Stance Sentiment Emotion Corpus. Experimental results show that the proposed system improves the performance of sentiment analysis by 3.2 F-score points on SemEval 2016 Task 6 dataset. Our network also boosts the performance of emotion analysis by 5 F-score points on Stance Sentiment Emotion Corpus.
<<</Abstract>>>
<<<Introduction>>>
The emergence of social media sites with limited character constraints has ushered in a new style of communication. Within the 280-character limit per tweet, Twitter users share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thoughts and opinions. This makes them an interesting and important area of study. Tweets are important not only for individuals but also for companies, political parties and other organizations. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. Public opinion is particularly interesting for political parties as it gives them an idea of voters' inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1.
Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment.
Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis.
In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7.
The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which build a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single-task system of sentiment analysis.
<<</Introduction>>>
<<<Related Work>>>
A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon-based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three-phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time.
Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that the performance of sentiment analysis is boosted significantly when it is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provide useful evidence for sentiment analysis.
<<</Related Work>>>
<<<Proposed Methodology>>>
We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections.
<<<Two-Layered Multi-Task Attention Model>>>
<<<BiLSTM based word encoder>>>
Recurrent Neural Networks (RNN) are a class of networks which take sequential input and compute a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value, which hinders the learning process. Long Short-Term Memory (LSTM) BIBREF11, a variant of RNN, solves this problem through its gating mechanisms. The input, forget and output gates control the information flow.
BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of the hidden state vectors $\overrightarrow{h_t}$ of the forward LSTM and $\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\overrightarrow{h_t}$, $\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis).
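A minimal sketch of this shared encoder, written here in PyTorch for brevity (the paper's own implementation uses TensorFlow); the 300-dimensional embedding and hidden sizes are taken from the implementation details reported later.

```python
import torch
import torch.nn as nn

# Shared BiLSTM word encoder: the forward and backward hidden states at each time
# step are concatenated into h_t = [h_t_forward; h_t_backward] and shared by the
# sentiment and emotion subsystems.
embedding_dim, hidden_dim = 300, 300   # sizes reported in the implementation details
bilstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim,
                 batch_first=True, bidirectional=True)

word_embeddings = torch.randn(1, 20, embedding_dim)   # (batch, sentence length, embedding size)
shared_states, _ = bilstm(word_embeddings)            # shape: (1, 20, 2 * hidden_dim)
# shared_states[:, t, :] is h_t, fed to the primary attention of both tasks.
```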
<<</BiLSTM based word encoder>>>
<<<Word Attention>>>
The word level attention (primary attention) mechanism gives the model the flexibility to represent each word differently for each task. This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for each word based on contextual similarity. We use the top-4 words of each word as its candidate terms; we restrict the list to four because, with more words, the expansion list started to contain antonyms of the current word, which empirically reduced the system performance. Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$), the set of candidate terms for the input $x_t$, with $v_i$ being the embedding of each term (the Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\in $ V($x_t$):
where $W_w$ and $b_{w}$ are jointly learned parameters.
Each embedding of the candidate term is weighted with the attention score $\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms.
Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\widehat{h_{t}}$, the final output of the primary attention mechanism.
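The exact scoring function for $\alpha_{ti}$ is not reproduced in this excerpt, so the NumPy sketch below assumes a common additive form, $w^{\top}\tanh (W_w[v_i; h_t] + b_w)$, with $W_w$ and $b_{w}$ as named above and $w$ an additional assumed scoring vector; the weighted sum $m_t$ and the concatenation $\widehat{h_t} = [m_t; h_t]$ follow the description directly.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def primary_attention(h_t, candidate_embs, W_w, b_w, w):
    """Weight the DT candidate-term embeddings for one input word x_t.

    h_t            : (2*hidden,)  BiLSTM state of the input word
    candidate_embs : (k, emb)     embeddings v_i of the top-k DT candidate terms
    W_w, b_w, w    : assumed additive-attention parameters (exact form not given in the text)
    """
    scores = np.array([w @ np.tanh(W_w @ np.concatenate([v_i, h_t]) + b_w)
                       for v_i in candidate_embs])
    alpha = softmax(scores)            # attention coefficient per candidate term
    m_t = alpha @ candidate_embs       # weighted sum of candidate embeddings
    return np.concatenate([m_t, h_t])  # \widehat{h_t}: final output of the primary attention
```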
<<</Word Attention>>>
<<<Sentence Attention>>>
The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\alpha _t$ for each word vector representation and the sentence representation $\widehat{H}$ are calculated as:
where $W_s$ and $b_{s}$ are parameters to be learned.
$\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\bar{H}$ which represents the sentence for emotion classification. The system thus has the flexibility to compute different representations for both sentiment and emotion analysis.
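A matching sketch of the secondary attention, under the assumption that each word representation is projected with $W_s$ and $b_{s}$ and scored against a task-specific context vector (the implementation details mention a 150-dimensional context vector); the exact scoring form is not given in this excerpt.

```python
import numpy as np

def sentence_attention(word_reprs, W_s, b_s, context):
    """Build a task-specific sentence vector from the primary-attention word outputs.

    word_reprs : (T, d)   \widehat{h_t} vectors for the T words of the sentence
    W_s, b_s   : parameters named in the text; the scoring form below is an assumption
    context    : (d_c,)   task-specific context vector (150-dimensional in the paper's setup)
    """
    u = np.tanh(word_reprs @ W_s.T + b_s)   # (T, d_c) hidden projection of each word
    scores = u @ context                    # relevance of each word to the task
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                    # attention coefficient per word
    return alpha @ word_reprs               # \widehat{H} (or \bar{H} for the emotion task)
```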
<<</Sentence Attention>>>
<<<Final Output>>>
The final outputs for both sentiment and emotion analysis are computed by feeding $\widehat{H}$ and $\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification.
<<</Final Output>>>
<<</Two-Layered Multi-Task Attention Model>>>
<<<Distributional Thesaurus>>>
Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice, awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. It assists the system to perform better when presented with unseen words during testing, as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. This fact is established by our evaluation results, as the model performs better when the DT expansion and primary attention are a part of the final multi-task system.
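A toy illustration of the expansion step, with `dt_expansion` as a hypothetical stand-in for the actual Distributional Thesaurus resource; the entry shown is the example from the text.

```python
# Each word is mapped to its top-4 most similar words, which become the candidate
# terms passed to the primary attention mechanism.
dt_expansion = {
    "good": ["great", "nice", "awesome", "superb"],   # example given in the text
}

def candidate_terms(word, k=4):
    """Return up to k DT candidate terms for a word (empty if the word is not in the DT)."""
    return dt_expansion.get(word, [])[:k]

print(candidate_terms("good"))   # ['great', 'nice', 'awesome', 'superb']
```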
<<</Distributional Thesaurus>>>
<<<Word Embeddings>>>
Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300 dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word.
<<</Word Embeddings>>>
<<</Proposed Methodology>>>
<<<Datasets, Experiments and Analysis>>>
In this section we present the details of the datasets used for the experiments, results that we obtain and the necessary analysis.
<<<Datasets>>>
We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and the Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. This re-annotation of the SemEval 2016 Task 6 corpus addresses the lack of a corpus with both sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet can belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 Task 6 and SSEC, which are used for sentiment and emotion analysis, respectively.
<<</Datasets>>>
<<<Preprocessing>>>
The SemEval 2016 Task 6 corpus contains tweets from Twitter. Since the tweets come from an environment with a constraint on the number of characters, they exhibit an inherent problem of word concatenation, contractions and hashtags, e.g. #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment or emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 to handle these situations. Ekphrasis helps to split concatenated words into individual words and to expand contractions, for example #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with $<$user$>$, numbers with $<$number$>$ and URLs with $<$url$>$ tokens.
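A rough, regex-only approximation of this normalization (the paper itself relies on the ekphrasis package; contraction expansion is omitted in this sketch):

```python
import re

def normalize_tweet(text):
    """Approximate the described normalization; contraction expansion is not handled here."""
    text = re.sub(r"@\w+", "<user>", text)                 # usernames carry no sentiment/emotion
    text = re.sub(r"https?://\S+", "<url>", text)          # URLs likewise
    text = re.sub(r"#(\w+)",                               # naive hashtag splitting on capitals,
                  lambda m: "# " + re.sub(r"(?<!^)(?=[A-Z])", " ", m.group(1)),
                  text)                                     # e.g. #BeautifulDay -> # Beautiful Day
    text = re.sub(r"\b\d+(?:\.\d+)?\b", "<number>", text)  # numbers
    return text

print(normalize_tweet("@John what a #BeautifulDay, 280 characters https://t.co/x"))
# -> "<user> what a # Beautiful Day, <number> characters <url>"
```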
<<</Preprocessing>>>
<<<Implementation Details>>>
We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The first three architectures correspond to BiLSTM based systems without primary attention, i.e. only with secondary attention, for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and the multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers from a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. ReLU BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. Following BIBREF7 and BIBREF15, we report the F1-score for sentiment analysis and precision, recall and F1-score for emotion analysis.
<<</Implementation Details>>>
<<<Results and Analysis>>>
We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.
The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both the single-task and the multi-task systems. The use of primary attention improves the performance of the single-task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively. Similarly, when sentiment and emotion analysis are jointly performed, the primary attention mechanism improves the score by 0.93 and 2.42 points for the sentiment and emotion tasks, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove them from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, with the removal of the primary attention mechanism, the performance drops. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform a t-test BIBREF40 for computing the statistical significance of the obtained results from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values, and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance.
Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval, improving it by 3.2 F-score points for sentiment analysis.
We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise.
Experimental results indicate that the multi-task system which uses fine-grained information of emotion analysis helps to boost the performance of sentiment analysis. The system M1 comprises the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2, where S2 performs the main task (sentiment analysis) and E2 performs the auxiliary task (emotion analysis). We observe that in both situations the auxiliary task, i.e. emotional information, increases the performance of the main task, i.e. sentiment analysis, when the two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotions such as joy and trust are inherently associated with a positive sentiment, whereas anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of various models for sentiment analysis.
A concrete example which justifies the utility of emotion analysis in sentiment analysis is shown below.
@realMessi he is a real sportsman and deserves to be the skipper.
The gold labels for the example are anticipation, joy and trust emotion with a positive sentiment. Our system S2 (single task system for sentiment analysis with primary and secondary attention) had incorrectly labeled this example with a negative sentiment and the E2 system (single task system with both primary and secondary attention for emotion analysis) had tagged it with anticipation and joy only. However, M2 i.e. the multi-task system for joint sentiment and emotion analysis had correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag, in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had better understanding of the text.
A sentiment does not always directly invoke a particular emotion, and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are mostly associated with positive sentiment, whereas anger, disgust and sadness are particularly associated with negative sentiment. This might be the reason why the extra sentiment information does not help the multi-task system for emotion analysis, and hence the decreased performance for emotion analysis in the multi-task setting.
<<</Results and Analysis>>>
<<<Error Analysis>>>
We perform quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis, and the corresponding tables present the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe from Table TABREF23 that the system fails to label many instances with the emotion surprise. This may be because this particular class is the most underrepresented in the training set. A similar trend can also be observed for the emotions fear and trust in Table TABREF23 and Table TABREF23, respectively. These three emotions have the least share of training instances, making the system less confident towards them.
Moreover, we closely analyze the outputs to understand the kind of errors that our proposed model faces. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios:
$\bullet $ Often real-world phrases/sentences contain emotions of a conflicting nature. This conflicting nature of emotions is not directly evident from the surface form and is left unsaid, as it is implicitly understood by humans. The system gets confused when presented with such instances.
Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is!
Actual Sentiment: positive
Actual Emotion: anticipation, joy, surprise, trust
Predicted Sentiment: negative
Predicted Emotion: anger, anticipation, sadness
The realization of not being the most important person in a room invokes anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life.
$\bullet $ Occasionally, the system focuses on the less significant parts of a sentence. Due to this, the system might miss crucial information which can influence and even change the final sentiment or emotion. This sometimes leads to the incorrect prediction of the overall sentiment and emotion.
Text: I've been called many things, quitter is not one of them...
Actual Sentiment: positive
Actual Emotion: anticipation, joy, trust
Predicted Sentiment: negative
Predicted Emotion: anticipation, sadness
Here, the system focuses on the first part of the sentence, where the speaker was called many things, which denotes a negative sentiment. Hence, the system predicts a negative sentiment together with the anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by clarifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and alters the overall emotion.
<<</Error Analysis>>>
<<</Datasets, Experiments and Analysis>>>
<<<Conclusion>>>
In this paper, we have presented a novel two-layered multi-task attention based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on Distributional Thesaurus, which acts as a source of external knowledge. The system hierarchically builds the final representation from the word level to the sentence level. This provides insight into the system's working and its ability to handle unseen words. Evaluation on the benchmark dataset suggests an improvement of 3.2 F-score points for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes the fact that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language-dependent features or lexicons, which makes it extensible to other languages as well. In future, we would like to extend the two-layered multi-task attention based neural network to other languages.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nProposed Methodology\nTwo-Layered Multi-Task Attention Model\nBiLSTM based word encoder\nWord Attention\nSentence Attention\nFinal Output\nDistributional Thesaurus\nWord Embeddings\nDatasets, Experiments and Analysis\nDatasets\nPreprocessing\nImplementation Details\nResults and Analysis\nError Analysis\nConclusion"
],
"type": "outline"
}
|
1911.03243
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Crowdsourcing a High-Quality Gold Standard for QA-SRL
<<<Abstract>>>
Question-answer driven Semantic Role Labeling (QA-SRL) has been proposed as an attractive open and natural form of SRL, easily crowdsourceable for new corpora. Recently, a large-scale QA-SRL corpus and a trained parser were released, accompanied by a densely annotated dataset for evaluation. Trying to replicate the QA-SRL annotation and evaluation scheme for new texts, we observed that the resulting annotations were lacking in quality and coverage, particularly insufficient for creating gold standards for evaluation. In this paper, we present an improved QA-SRL annotation protocol, involving crowd-worker selection and training, followed by data consolidation. Applying this process, we release a new gold evaluation dataset for QA-SRL, yielding more consistent annotations and greater coverage. We believe that our new annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations.
<<</Abstract>>>
<<<Introduction>>>
Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role.
Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s.
In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. Beyond yielding 25% more roles, the coverage gain is demonstrated by evaluation against expertly annotated data and by comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline.
<<</Introduction>>>
<<<Background — QA-SRL>>>
<<<Specifications>>>
In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4).
<<</Specifications>>>
<<<Corpora>>>
The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others.
In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference.
We found that these characteristics of the dataset impede its utility for future development of parsers.
<<</Corpora>>>
<<</Background — QA-SRL>>>
<<<Annotation and Evaluation Methods>>>
<<<Crowdsourcing Methodology>>>
<<<Screening and Training>>>
Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. Roughly 1 out of 3 participants was selected after exhibiting good performance, tested against expert annotations.
<<</Screening and Training>>>
<<<Annotation>>>
We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix.
<<</Annotation>>>
<<<Guidelines Refinements>>>
We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4).
<<</Guidelines Refinements>>>
<<<Data & Cost>>>
We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense.
<<</Data & Cost>>>
<<</Crowdsourcing Methodology>>>
<<<Evaluation Metrics>>>
Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics.
Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\ge 0.5$. To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above-mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives.
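As an illustration, here is a small sketch of the UA alignment on token spans given as (start, end) index pairs; the greedy pairing below is a simple stand-in for the maximal bipartite matching used in the paper, not the exact evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two token spans given as (start, end), end exclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def unlabeled_argument_detection(predicted, gold, threshold=0.5):
    """Greedy stand-in for the maximal bipartite matching described above.

    Returns (true_positives, false_positives, false_negatives) over argument spans.
    """
    pairs = sorted(((iou(p, g), i, j) for i, p in enumerate(predicted)
                    for j, g in enumerate(gold) if iou(p, g) >= threshold),
                   reverse=True)
    used_p, used_g = set(), set()
    for _, i, j in pairs:
        if i not in used_p and j not in used_g:
            used_p.add(i)
            used_g.add(j)
    tp = len(used_p)
    return tp, len(predicted) - tp, len(gold) - tp

print(unlabeled_argument_detection([(0, 2), (5, 9)], [(0, 3), (10, 12)]))  # (1, 1, 1)
```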
Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method.
As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research.
<<<Evaluating Redundant Annotations>>>
We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant.
<<</Evaluating Redundant Annotations>>>
<<</Evaluation Metrics>>>
<<</Annotation and Evaluation Methods>>>
<<<Dataset Quality Analysis>>>
<<<Inter-Annotator Agreement (IAA)>>>
To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency.
<<</Inter-Annotator Agreement (IAA)>>>
<<<Dataset Assessment and Comparison>>>
We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield.
Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines.
<<</Dataset Assessment and Comparison>>>
<<<Agreement with PropBank Data>>>
It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol.
The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset.
<<</Agreement with PropBank Data>>>
<<</Dataset Quality Analysis>>>
<<<Baseline Parser Evaluation>>>
To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set.
<<<Error Analysis>>>
We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones.
<<</Error Analysis>>>
<<</Baseline Parser Evaluation>>>
<<<Conclusion>>>
We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground — QA-SRL\nSpecifications\nCorpora\nAnnotation and Evaluation Methods\nCrowdsourcing Methodology\nScreening and Training\nAnnotation\nGuidelines Refinements\nData & Cost\nEvaluation Metrics\nEvaluating Redundant Annotations\nDataset Quality Analysis\nInter-Annotator Agreement (IAA)\nDataset Assessment and Comparison\nAgreement with PropBank Data\nBaseline Parser Evaluation\nError Analysis\nConclusion"
],
"type": "outline"
}
|
1910.03467
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Overcoming the Rare Word Problem for Low-Resource Language Pairs in Neural Machine Translation
<<<Abstract>>>
Among the six challenges of neural machine translation (NMT) coined by (Koehn and Knowles, 2017), the rare-word problem is considered the most severe one, especially in translation of low-resource languages. In this paper, we propose three solutions to address rare words in neural machine translation systems. First, we enhance the source context to predict the target words by connecting the source embeddings directly to the output of the attention component in NMT. Second, we propose an algorithm to learn the morphology of unknown words in English in a supervised way in order to minimize the adverse effect of the rare-word problem. Finally, we exploit the synonymous relation from WordNet to overcome the out-of-vocabulary (OOV) problem of NMT. We evaluate our approaches on two low-resource language pairs: English-Vietnamese and Japanese-Vietnamese. In our experiments, we have achieved significant improvements of up to roughly +1.0 BLEU points in both language pairs.
<<</Abstract>>>
<<<Introduction>>>
NMT systems have achieved better performance compared to statistical machine translation (SMT) systems in recent years, not only on language pairs with abundant data BIBREF1, BIBREF2, but also on low-resource language pairs BIBREF3, BIBREF4. Nevertheless, NMT still faces many challenges which have adverse effects on its effectiveness BIBREF0. One of these challenges is that NMT is biased towards translating high-frequency words; thus, words which have lower frequencies are often translated incorrectly. This challenge has been confirmed again in BIBREF3, and they have proposed two strategies to tackle this problem with modifications on the model's output distribution: one normalizes some matrices by fixing them to constants after several training epochs, and another adds a direct connection from the source embeddings through a simple feed-forward neural network (FFNN). These approaches increase the size and the training time of their NMT systems. In this work, we follow their second approach but simplify the computations by replacing the FFNN with two single operations.
Although the above approaches can improve the prediction of rare words, NMT systems often use vocabularies of limited size, from 30K to 80K most frequent words of the training data, in order to reduce computational complexity and model size BIBREF5, BIBREF6, so rare-word translation remains problematic in NMT. Even when a larger vocabulary is used, this situation still exists BIBREF7. A word of the input text that has not been seen in the vocabulary (called an unknown word) is represented by the $unk$ symbol in NMT systems. Inspired by alignments and phrase tables in phrase-based machine translation (SMT), as suggested by BIBREF8, BIBREF6 proposed to address OOV words using an annotated training corpus. They then used a dictionary generated from an alignment model, or mappings between source and target words, to determine the translations of $unk$s if translations are not found. BIBREF9 proposed to reduce unknown words using Gage's Byte Pair Encoding (BPE) algorithm BIBREF10, but NMT systems are less effective for low-resource language pairs due to the lack of data, and also for languages for which sub-words are not the optimal translation unit. In this paper, we employ several techniques inspired by the works from NMT and traditional SMT mentioned above. Instead of a loosely unsupervised approach, we suggest a supervised approach to solve this problem using the synonymous relation of word pairs from WordNet in Japanese$\rightarrow $Vietnamese and English$\rightarrow $Vietnamese systems. To leverage the effectiveness of this relation in English, we transform variants of words in the source texts into their original forms by separating affixes collected by hand.
Our contributions in this work are:
We release state-of-the-art Japanese-Vietnamese NMT systems.
We propose an approach to deal with rare-word translation by integrating source embeddings into the attention component of NMT.
We present a supervised algorithm to reduce the number of unknown words for the English$\rightarrow $Vietnamese translation system.
We demonstrate the effectiveness of leveraging linguistic information from WordNet to alleviate the rare-word problem in NMT.
<<</Introduction>>>
<<<Neural Machine Translation>>>
Our NMT system uses a bidirectional recurrent neural network (biRNN) as the encoder and a single-directional RNN as the decoder with the input feeding of BIBREF11 and the attention mechanism of BIBREF5. The encoder's biRNN is constructed from two RNNs with LSTM hidden units, one for the forward and the other for the backward direction of the source sentence $\mathbf {x}=(x_1, ...,x_n)$. Every word $x_i$ in the sentence is first encoded into a continuous representation $E_s(x_i)$, called the source embedding. Then $\mathbf {x}$ is transformed into a fixed-length hidden vector $\mathbf {h}_i$ representing the sentence at time step $i$, which is called the annotation vector and is formed by combining the states of the forward $\overrightarrow{\mathbf {h}}_i$ and backward $\overleftarrow{\mathbf {h}}_i$:
$\overrightarrow{\mathbf {h}}_i=f(E_s(x_i),\overrightarrow{\mathbf {h}}_{i-1})$
$\overleftarrow{\mathbf {h}}_i=f(E_s(x_i),\overleftarrow{\mathbf {h}}_{i+1})$
The decoder generates the target sentence $\mathbf {y}={(y_1, ..., y_m)}$, and at the time step $j$, the predicted probability of the target word $y_j$ is estimated as follows:
where $\mathbf {z}_j$ is the output hidden state of the attention mechanism, computed from the previous output hidden state $\mathbf {z}_{j-1}$, the embedding of the previous target word $E_t(y_{j-1})$ and the context $\mathbf {c}_j$:
$\mathbf {z}_j=g(E_t(y_{j-1}), \mathbf {z}_{j-1}, \mathbf {c}_j)$
The source context $\mathbf {c}_j$ is the weighted sum of the encoder's annotation vectors $\mathbf {h}_i$:
$\mathbf {c}_j=\sum ^n_{i=1}\alpha _{ij}\mathbf {h}_i$
where $\alpha _{ij}$ are the alignment weights, denoting the relevance between the current target word $y_j$ and all source annotation vectors $\mathbf {h}_i$.
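A NumPy sketch of this attention step; the scoring function producing $\alpha _{ij}$ is assumed here to be a Luong-style "general" score, since the excerpt does not spell out the exact variant.

```python
import numpy as np

def source_context(z_prev, annotations, W_a):
    """Compute the source context c_j from the encoder annotation vectors h_i.

    z_prev      : (d,)    previous attentional/decoder state
    annotations : (n, d)  encoder annotation vectors h_1..h_n
    W_a         : (d, d)  scoring matrix; the 'general' Luong score is an assumption here
    """
    scores = annotations @ (W_a @ z_prev)   # relevance of each h_i to the current target step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                    # alignment weights alpha_ij
    return alpha @ annotations              # c_j = sum_i alpha_ij * h_i
```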
<<</Neural Machine Translation>>>
<<<Rare Word translation>>>
In this section, we present the details about our approaches to overcome the rare word situation. While the first strategy augments the source context to translate low-frequency words, the remaining strategies reduce the number of OOV words in the vocabulary.
<<<Low-frequency Word Translation>>>
The attention mechanism in RNN-based NMT maps the target word to the corresponding source context through the annotation vectors $\mathbf {h}_i$. In the recurrent hidden unit, $\mathbf {h}_i$ is computed from the previous state $\mathbf {h}_{t-1}$. Therefore, the information flow of the words in the source sentence may be diminished over time. This leads to reduced accuracy when translating low-frequency words, since there is no direct connection between the target word and the source word. To alleviate the adverse impact of this problem, BIBREF3 combined the source embeddings with the predictive distribution over the output target word in the following steps:
Firstly, the weighted average vector of the source embeddings is computed as follows:
where $\alpha _j(e)$ are the alignment weights in the attention component and $f_e = E_s(x)$ are the embeddings of the source words.
Then $l_j$ is transformed through one-hidden-layer FFNN with residual connection proposed by BIBREF12:
Finally, the output distribution over the target word is calculated by:
The matrices $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$ are trained together with other parameters of the NMT model.
This approach improves the performance of the NMT systems but introduces more computations, and the model size increases due to the additional parameters $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$. We simplify this method by using the weighted average of source embeddings directly in the softmax output layer:
Our method does not learn any additional parameters. Instead, it requires the source embedding size to be compatible with the decoder's hidden states. With the additional information provided from the source embeddings, we achieve similar improvements compared to the more expensive method described in BIBREF3.
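The simplified formula itself is not included in this excerpt; one plausible reading of the "two single operations", consistent with the requirement that the source embedding size match the decoder hidden size, is to add the weighted average of the source embeddings to the attentional state before the usual output projection. The sketch below follows that assumed reading.

```python
import numpy as np

def output_distribution(z_j, alpha_j, source_embs, W_o, b_o):
    """Hedged sketch of the simplified low-frequency-word method.

    z_j         : (d,)    attentional hidden state at target step j
    alpha_j     : (n,)    attention weights over the n source words
    source_embs : (n, d)  source word embeddings f_e (size must match the decoder state)
    W_o, b_o    : output projection; the additive combination below is an assumption
    """
    l_j = alpha_j @ source_embs        # weighted average of the source embeddings
    logits = W_o @ (z_j + l_j) + b_o   # no parameters beyond the usual output projection
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()         # p(y_j | y_<j, x)
```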
<<</Low-frequency Word Translation>>>
<<<Reducing Unknown Words>>>
In our previous experiments for English$\rightarrow $Vietnamese, the BPE algorithm BIBREF9 applied to the source side does not significantly improve the systems even though it is able to reduce the number of unknown English words. We speculate that this might be due to the morphological differences between the source and the target languages (English and Vietnamese in this case). The unsupervised way in which BPE learns sub-words in English thus might not be explicit enough to provide morphological information to the Vietnamese side. In this work, we attempt a more explicit, supervised way. We collect 52 popular affixes (prefixes and suffixes) in English and then apply the separating-affixes algorithm (called SAA) to reduce the number of unknown words as well as to force our NMT systems to learn better morphological mappings between the two languages.
The main idea of our SAA is to separate the affixes of unknown words while ensuring that the remainder still exists in the vocabulary. Let $V$ be the vocabulary containing the $K$ most frequent words from the training set $T1$, $P$ a set of prefixes and $S$ a set of suffixes; we call $w^{\prime }$ the remainder of an unknown or rare word $w$ after stripping its affixes. We iteratively pick a word $w$ from the $N$ words (including unknown and rare words) of the source text $T2$ and check whether $w$ starts with a prefix $p$ in $P$ or ends with a suffix $s$ in $S$; we then split off its affixes if $w^{\prime }$ is in $V$. A rare word in $V$ can also have its affixes separated if its frequency is less than a given threshold. We set this threshold to 2 in our experiments. Similarly to the BPE approach, we employ a pair of special $@$ symbols to mark affixes separated from the word. Listing SECREF6 shows our SAA algorithm.
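Listing SECREF6 is not part of this excerpt, so the following is a sketch reconstructed from the prose; the BPE-style `@@` marker and the prefix-before-suffix splitting order are assumptions.

```python
def separate_affixes(tokens, vocab, freq, prefixes, suffixes, threshold=2, marker="@@"):
    """Sketch of the SAA idea: an unknown or rare word has a prefix/suffix split off,
    but only if the remaining stem w' is itself in the vocabulary."""
    out = []
    for w in tokens:
        if w in vocab and freq.get(w, 0) >= threshold:
            out.append(w)                         # frequent in-vocabulary word: keep as-is
            continue
        for p in prefixes:
            if w.startswith(p) and w[len(p):] in vocab:
                out.extend([p + marker, w[len(p):]])
                break
        else:
            for s in suffixes:
                if w.endswith(s) and w[:-len(s)] in vocab:
                    out.extend([w[:-len(s)] + marker, s])
                    break
            else:
                out.append(w)                     # no valid split: word stays (may become unk)
    return out

# e.g. with 'kindness' in the vocabulary but 'unkindness' unseen:
# separate_affixes(['unkindness'], {'kind', 'kindness'}, {}, ['un'], ['ness'])
#   -> ['un@@', 'kindness']
```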
<<</Reducing Unknown Words>>>
<<<Dealing with OOV using WordNet>>>
WordNet is a lexical database grouping words into sets which share some semantic relations. Its English version was first proposed by BIBREF13. It has become a useful resource for many natural language processing tasks BIBREF14, BIBREF15, BIBREF16. WordNet is available mainly for English and German; versions for other languages are being developed, including some Asian languages such as Japanese, Chinese, Indonesian and Vietnamese. Several works have employed WordNet in SMT systems BIBREF17, BIBREF18, but to our knowledge, none of them exploits the benefits of WordNet in order to ease the rare-word problem in NMT. In this work, we propose the learning-synonyms algorithm (called LSW), based on the English and Japanese WordNets, to handle unknown words in our NMT systems.
In WordNet, synonymous words are organized into groups called synsets. Our aim is to replace an OOV word by a synonym that appears in the vocabulary of the translation system. From the training set of the source language $T1$, we extract the vocabulary $V$ of the $K$ most frequent words. For each OOV word in $T1$, we collect from the WordNet $W$ its synonyms that exist in $V$. The synonyms are then sorted in descending order of frequency, so that the $n$ best (most frequent) candidates can be selected. The output file $C$ of the algorithm contains the OOV words and their corresponding synonyms, and is then applied to the input text $T2$. We also use a frequency threshold for rare words, in the same way as in the SAA algorithm; in practice we set this threshold to 0, meaning that no word in $V$ is replaced by a synonym. If a source sentence has $m$ unknown words and each of them has $n$ best synonyms, this generates $n^m$ candidate sentences, and the translation process allows us to select the best hypothesis based on the model scores. Because a word in WordNet can belong to many synsets with different meanings, an inappropriate synonym may be placed in the source context; we leave this issue to future work. Our systems use only the 1-best synonym for each OOV word. Listing SECREF7 presents the LSW algorithm.
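Since Listing SECREF7 is not reproduced here, the following is a minimal Python sketch of LSW as described above; the data structures and the 1-best selection are illustrative assumptions.

def build_lsw_map(oov_words, vocab, freqs, wordnet_synonyms, n_best=1):
    # For each OOV word, keep its most frequent in-vocabulary synonyms.
    mapping = {}
    for w in oov_words:
        candidates = [s for s in wordnet_synonyms.get(w, []) if s in vocab]
        candidates.sort(key=lambda s: freqs.get(s, 0), reverse=True)
        if candidates:
            mapping[w] = candidates[:n_best]
    return mapping

def apply_lsw(tokens, vocab, mapping):
    # Replace each OOV token by its 1-best synonym when one was found.
    return [mapping[w][0] if (w not in vocab and w in mapping) else w
            for w in tokens]

# e.g. apply_lsw(["a", "tryout"], vocab={"a", "test"}, mapping={"tryout": ["test"]})
# returns ["a", "test"]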
<<</Dealing with OOV using WordNet>>>
<<</Rare Word translation>>>
<<<Experiments>>>
We evaluate our approaches on the English-Vietnamese and the Japanese-Vietnamese translation systems. Translation performance is measured in BLEU BIBREF19 by the multi-BLEU scripts from Moses.
<<<Datasets>>>
We consider two low-resource language pairs: Japanese-Vietnamese and English-Vietnamese. For Japanese-Vietnamese, we use the TED data provided by WIT3 BIBREF20 and compiled by BIBREF21. The training set includes 106758 sentence pairs, the validation and test sets are dev2010 (568 pairs) and tst2010 (1220 pairs). For English$\rightarrow $Vietnamese, we use the dataset from IWSLT 2015 BIBREF22 with around 133K sentence pairs for the training set, 1553 pairs in tst2012 as the validation and 1268 pairs in tst2013 as the test sets.
For the LSW algorithm, we crawled pairs of synonymous words from the Japanese-English WordNet and obtained 315850 pairs for English and 1419948 pairs for Japanese.
<<</Datasets>>>
<<<Preprocessing>>>
For English and Vietnamese, we tokenized the texts and then true-cased them using the Moses scripts. We do not use any word segmentation tool for Vietnamese. For comparison purposes, Sennrich's BPE algorithm is applied to the English texts. Following the same preprocessing steps for Japanese (JPBPE) as in BIBREF21, we use KyTea BIBREF23 to tokenize the texts and then apply BPE on them. The number of BPE merge operations is 50k for both Japanese and English.
<<</Preprocessing>>>
<<<Systems and Training>>>
We implement our NMT systems using the OpenNMT-py framework BIBREF24 with the same settings as in BIBREF21 for our baseline systems. Our systems are built with two hidden layers in both the encoder and the decoder, each layer having 512 hidden units. In the encoder, a BiLSTM architecture is used for each layer; in the decoder, each layer is an LSTM layer. The size of the embedding layers on both the source and target sides is also 512. The Adam optimizer is used with an initial learning rate of $0.001$, and we then apply learning rate annealing. We train our systems for 16 epochs with a batch size of 32. Other parameters are the same as the default settings of OpenNMT-py.
We then modify the baseline architecture with the alternative proposed in Section SECREF5 and compare it to our baseline systems. All other settings are the same as in the baseline systems.
<<</Systems and Training>>>
<<<Results>>>
In this section, we show the effectiveness of our methods on two low-resource language pairs and compare them to other works. The empirical results are shown in Table TABREF15 for Japanese-Vietnamese and in Table TABREF20 for English-Vietnamese. Note that multi-BLEU is only measured in the Japanese$\rightarrow $Vietnamese direction; standard BLEU points are given in brackets.
<<<Japanese-Vietnamese Translation>>>
We apply two of the three proposed approaches to the Japanese-Vietnamese translation systems; the results are given in Table TABREF15.
Baseline Systems. We find that our translation systems, which use Sennrich's BPE method for the Japanese texts and no word segmentation for the Vietnamese texts, are either no better than or not significantly different from the systems using word segmentation in BIBREF21. Specifically, we obtained +0.38 BLEU points between (1) and (4) in Japanese$\rightarrow $Vietnamese and -0.18 BLEU points between (1) and (3) in Vietnamese$\rightarrow $Japanese.
Our Approaches. With the modified architecture described in Section SECREF5, we obtained improvements of +0.54 BLEU points in Japanese$\rightarrow $Vietnamese and +0.42 BLEU points in Vietnamese$\rightarrow $Japanese over the baseline systems.
Since a Vietnamese WordNet is not available, we exploit WordNet only to tackle unknown words in the Japanese texts of our Japanese$\rightarrow $Vietnamese translation system. After tokenization with KyTea, the LSW algorithm is applied to the Japanese texts to replace OOV words with their synonyms, choosing the 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 between (1) and (3). We also observed an improvement of +0.7 BLEU points between (3) and (5) without using the data augmentation described in BIBREF21.
<<</Japanese-Vietnamese Translation>>>
<<<English-Vietnamese Translation>>>
We examine the effect of all approaches presented in Section SECREF3 for our English-Vietnamese translation systems. Table TABREF20 summarizes those results and the scores from other systems BIBREF3, BIBREF25.
Baseline systems. After preprocessing the data using the Moses scripts, we train the English$\leftrightarrow $Vietnamese systems on our baseline architecture. Our translation system obtains +0.82 BLEU points compared to BIBREF3 in English$\rightarrow $Vietnamese, but is still below the system of BIBREF25, which uses a neural phrase-based translation architecture.
Our approaches. The datasets from the baseline systems are trained on our modified NMT architecture. The improvements are +0.55 BLEU points between (1) and (2) in English$\rightarrow $Vietnamese and +0.45 BLEU points (on tst2012) between (1) and (2) in Vietnamese$\rightarrow $English.
For comparison purposes, the English texts are split into sub-words using Sennrich's BPE method. We observe that the resulting BLEU scores are lower. Therefore, we apply the SAA algorithm to the English texts from (2) in the English$\rightarrow $Vietnamese direction. The number of words to which SAA is applied is listed in Table TABREF21. The improvement in BLEU is +0.74 between (4) and (1).
Similarly to the Japanese$\rightarrow $Vietnamese system, we apply the LSW algorithm to the English texts from (4), selecting the 1-best synonym for each OOV word. The number of replaced words in the English texts is indicated in Table TABREF22. Again, we obtain a larger gain of +0.99 (+1.02) BLEU points in the English$\rightarrow $Vietnamese direction. Compared to the most recent work BIBREF25, our system achieves an improvement of +0.47 standard BLEU points on the same dataset.
Table TABREF23 presents some example translations generated by the English$\rightarrow $Vietnamese systems with our proposed methods. Bold text in red indicates correct or approximate translations, while italic text in gray denotes incorrect translations. In the first example, we consider two words: presentation and the unknown word applauded. The word presentation is predicted correctly as the Vietnamese "bài thuyết trình" in most cases when the source context is combined through the embeddings. The unknown word applauded, which is not in the vocabulary, is ignored in the first two cases (baseline and source embedding), but it is roughly translated as the Vietnamese "hoan nghênh" in the SAA case because it is separated into applaud and ed. In the second example, we look at the translations of the unknown word tryout: they are wrong in the first three cases, but in the LSW case it is translated with a closer meaning, as the Vietnamese "bài kiểm tra", thanks to its replacement by the synonymous word test.
<<</English-Vietnamese Translation>>>
<<</Results>>>
<<</Experiments>>>
<<<Related Works>>>
Addressing unknown words was studied early in Statistical Machine Translation (SMT) systems. Among typical studies, BIBREF26 proposed four techniques to overcome this problem, by extending the morphology and spelling of words, using a bilingual dictionary, or transliterating names; these approaches are difficult to adapt to different domains. BIBREF27 trained word embedding models to learn word similarity from monolingual data, and an unknown word is then replaced by a similar word. BIBREF28 used a linear model, based on a small initial bilingual dictionary, to learn mappings between the source and target spaces and find the translations of source words. In NMT, however, fewer works have tackled this problem. BIBREF7 use a very large vocabulary to handle unknown words. BIBREF6 generate a dictionary from alignment data based on an annotated corpus to decide the hypotheses for unknown words. BIBREF3 introduced solutions for dealing with the rare word problem, but their models require more parameters, which decreases the overall efficiency.
In another direction, BIBREF9 exploited the BPE algorithm to reduce the number of unknown words in NMT and obtained significant improvements on many language pairs. The second approach presented in this work follows this direction, but instead of using an unsupervised method to split rare and unknown words into translatable sub-words, we use a supervised method. Our third approach, based on WordNet, can be seen as a form of smoothing, in which the translations of synonymous words approximate the translation of an OOV word. Another work in this direction worth mentioning is BIBREF29, which uses morphological and semantic information as word factors to help translate rare words.
<<</Related Works>>>
<<<Conclusion>>>
In this study, we have proposed three different strategies to handle rare words in NMT, and their combination brings significant improvements to the NMT systems on two low-resource language pairs. In future work, we will consider selecting appropriate synonyms for the source sentence from the n-best synonyms to further improve the performance of the NMT systems, and will leverage more unsupervised methods based on monolingual data to address the rare word problem.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nNeural Machine Translation\nRare Word translation\nLow-frequency Word Translation\nReducing Unknown Words\nDealing with OOV using WordNet\nExperiments\nDatasets\nPreprocessing\nSystems and Training\nResults\nJapanese-Vietnamese Translation\nEnglish-Vietnamese Translation\nRelated Works\nConclusion"
],
"type": "outline"
}
|
2003.04748
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
On the coexistence of competing languages
<<<Abstract>>>
We investigate the evolution of competing languages, a subject where much previous literature suggests that the outcome is always the domination of one language over all the others. Since coexistence of languages is observed in reality, we here revisit the question of language competition, with an emphasis on uncovering the ways in which coexistence might emerge. We find that this emergence is related to symmetry breaking, and explore two particular scenarios -- the first relating to an imbalance in the population dynamics of language speakers in a single geographical area, and the second to do with spatial heterogeneity, where language preferences are specific to different geographical regions. For each of these, the investigation of paradigmatic situations leads us to a quantitative understanding of the conditions leading to language coexistence. We also obtain predictions of the number of surviving languages as a function of various model parameters.
<<</Abstract>>>
<<<Introduction>>>
The dynamics of language evolution is one of many interdisciplinary fields to which methods and insights from statistical physics have been successfully applied (see BIBREF0 for an overview, and BIBREF1 for a specific comprehensive review).
In this work we revisit the question of language coexistence. It is known that a sizeable fraction of the more than 6000 languages that are currently spoken, is in danger of becoming extinct BIBREF2, BIBREF3, BIBREF4. In pioneering work by Abrams and Strogatz BIBREF5, theoretical predictions were made to the effect that less attractive or otherwise unfavoured languages are generally doomed to extinction, when contacts between speakers of different languages become sufficiently frequent. Various subsequent investigations have corroborated this finding, emphasising that the simultaneous coexistence of competing languages is only possible in specific circumstances BIBREF6, BIBREF7, all of which share the common feature that they involve some symmetry breaking mechanism BIBREF1. A first scenario can be referred to as spatial symmetry breaking. Different competing languages may coexist in different geographical areas, because they are more or less favoured locally, despite the homogenising effects of migration and language shift BIBREF8, BIBREF9, BIBREF10. A second scenario corresponds to a more abstract internal symmetry breaking. Two or more competing languages may coexist at a given place if the populations of speakers of these languages have imbalanced dynamics BIBREF11, BIBREF12, BIBREF13. Moreover, it has been shown that a stable population of bilinguals or multilinguals also favours the coexistence of several languages BIBREF14, BIBREF15, BIBREF16.
The aim of the present study is to provide a quantitative understanding of the conditions which ensure the coexistence of two or more competing languages within each of the symmetry breaking scenarios outlined above. Throughout this paper, in line with many earlier studies on the dynamics of languages BIBREF5, BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, and with an investigation of grammar acquisition BIBREF17, we describe the dynamics of the numbers of speakers of various languages by means of coupled rate equations. This approach is sometimes referred to as ecological modelling, because of its similarity with models used in theoretical ecology (see e.g. BIBREF18). From a broader perspective, systems of coupled differential equations, and especially Lotka-Volterra equations and replicator equations, are ubiquitous in game theory and in a broad range of areas in mathematical biology (see e.g. BIBREF19, BIBREF20, BIBREF21).
The plan of this paper is as follows. For greater clarity, we first consider in Section SECREF2 the situation of several competing languages in a single geographic area where the population is well mixed. We address the situation where internal symmetry is broken by imbalanced population dynamics. The relevant concepts are reviewed in detail in the case of two competing languages in Section SECREF1, and the full phase diagram of the model is derived. The case of an arbitrary number $N$ of competing languages is then considered in Section SECREF11 in full generality. The special situation where the attractivenesses of the languages are equally spaced is studied in Section SECREF22, whereas Section SECREF34 is devoted to the case where attractivenesses are modelled as random variables. Section SECREF3 is devoted to the situation where coexistence is due to spatial symmetry breaking. We focus our attention onto the simple case of two languages in competition on a linear array of $M$ distinct geographic areas. Language attractivenesses vary arbitrarily along the array, whereas migrations take place only between neighbouring areas at a uniform rate $\gamma $. A uniform consensus is reached at high migration rate, where the same language survives everywhere. This general result is demonstrated in detail for two geographic areas (Section SECREF57), and generalised to an arbitrary number $M$ of areas (Section SECREF67). The cases of ordered and random attractiveness profiles are investigated in Sections SECREF71 and SECREF84. In Section SECREF4 we present a non-technical discussion of our findings and their implications. Two appendices contain technical details about the regime of a large number of competing languages in a single geographic area (Appendix SECREF5) and about stability matrices and their spectra (Appendix SECREF6).
<<</Introduction>>>
<<<Breaking internal symmetry: language coexistence by imbalanced population dynamics>>>
This section is devoted to the dynamics of languages in a single geographic area. As mentioned above, it has been shown that two or more competing languages may coexist only if the populations of speakers of these languages have imbalanced dynamics BIBREF11, BIBREF12, BIBREF13. Our goal is to make these conditions more explicit and to provide a quantitative understanding of them.
<<<Two competing languages>>>
We begin with the case of two competing languages. We assume that language 1 is more favoured than language 2. Throughout this work we neglect the effect of bilingualism, so that at any given time $t$ each individual speaks a single well-defined language. Let $X_1(t)$ and $X_2(t)$ denote the numbers of speakers of each language at time $t$, so that $X(t)=X_1(t)+X_2(t)$ is the total population of the area under consideration.
The dynamics of the model is defined by the coupled rate equations
The above equations are an example of Lotka-Volterra equations (see e.g. BIBREF18, BIBREF19). The terms underlined by braces describe the intrinsic dynamics of the numbers of speakers of each language. For the sake of simplicity we have chosen the well-known linear-minus-bilinear or `logistic' form which dates back to Lotka BIBREF22 and is still commonly used in population dynamics. The linear term describes population growth, whereas the quadratic terms represent a saturation mechanism.
The main novelty of our approach is the introduction of the parameter $q$ in the saturation terms. This imbalance parameter is responsible for the internal symmetry breaking leading to language coexistence. It allows for the interpolation between two situations: when the saturation mechanism only involves the total population, i.e., $q=1$, and when the saturation mechanism acts separately on the populations of speakers of each language, $q=0$, which is the situation considered by Pinasco and Romanelli BIBREF11. Generic values of $q$ correspond to tunably imbalanced dynamics.
The last term in each of equations (DISPLAY_FORM2), () describes the language shift consisting of the conversions of single individuals from the less favoured language 2 to the more favoured language 1. In line with earlier studies BIBREF7, BIBREF11, BIBREF12, BIBREF13, conversions are triggered by binary interactions between individuals, so that the frequency of conversions is proportional to the product $X_1(t)X_2(t)$. The reduced conversion rate $C$ measures the difference of attractivenesses between the two languages.
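The displayed forms of (DISPLAY_FORM2), () are omitted above; a reconstruction consistent with the description — logistic growth, the imbalance parameter $q$ appearing only in the saturation terms, and a bilinear conversion term — and with the fixed points and stability conditions quoted below would read, in reduced units,
$$\frac{{\rm d}X_1}{{\rm d}t}=X_1\,(1-X_1-q\,X_2)+C\,X_1X_2,\qquad \frac{{\rm d}X_2}{{\rm d}t}=X_2\,(1-X_2-q\,X_1)-C\,X_1X_2.$$
For $q=1$ the saturation involves only the total population $X_1+X_2$, whereas for $q=0$ each population saturates independently. One can check that in this reconstruction the consensus state $(X_1,X_2)=(1,0)$ is linearly stable for $q+C>1$, and that the coexistence fixed point has $X_2$ proportional to $1-q-C$, in agreement with the results quoted below.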
For generic values of the parameters $q$ and $C$, the rate equations (DISPLAY_FORM2), () admit a unique stable fixed point. The dynamics converges exponentially fast to the corresponding stationary state, irrespective of initial conditions. There are two possible kinds of stationary states:
I. Consensus.
The solution
describes a consensus state where the unfavoured language 2 is extinct. The inverse relaxation times describing convergence toward the latter state are the opposites of the eigenvalues of the stability matrix associated with equations (DISPLAY_FORM2), (). The reader is referred to Appendix SECREF131 for details. These inverse relaxation times read
The above stationary solution is thus stable whenever $q+C>1$.
II. Coexistence.
The solution
describes a coexistence state where both languages survive forever. This stationary solution exists whenever $q+C<1$. It is always stable, as the inverse relaxation times read
Figure FIGREF9 shows the phase diagram of the model in the $q$–$C$ plane. There is a possibility of language coexistence only for $q<1$. The vertical axis ($q=0$) corresponds to the model considered by Pinasco and Romanelli BIBREF11, where the coexistence phase is maximal and extends up to $C=1$. As the parameter $q$ is increased, the coexistence phase shrinks until it disappears at the point $q=1$, corresponding to the balanced dynamics where the saturation mechanism involves the total population.
The model exhibits a continuous transition along the phase boundary between both phases ($q+C=1$). The number $X_2$ of speakers of the unfavoured language vanishes linearly as the phase boundary is approached from the coexistence phase (see (DISPLAY_FORM7)), whereas the relaxation time $1/\omega _2$ diverges linearly as the phase boundary is approached from both sides (see (DISPLAY_FORM5) and (DISPLAY_FORM8)).
For parameters along the phase boundary ($q+C=1$), the less attractive language still becomes extinct, albeit very slowly. Equations (DISPLAY_FORM2), () here yield the power-law relaxation laws
irrespective of initial conditions.
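A short numerical check of this phase diagram can be made by integrating the reconstructed equations given above (so the overall normalisation is an assumption of the reconstruction, not a statement of the paper):

import numpy as np

def integrate_two_languages(q, C, x0=(0.5, 0.5), dt=0.01, t_max=200.0):
    # Forward-Euler integration of the reconstructed two-language rate equations.
    x1, x2 = x0
    for _ in range(int(t_max / dt)):
        dx1 = x1 * (1 - x1 - q * x2) + C * x1 * x2
        dx2 = x2 * (1 - x2 - q * x1) - C * x1 * x2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

# q + C < 1: coexistence, X_2 -> (1 - q - C) / (1 - q**2 + C**2)
print(integrate_two_languages(q=0.4, C=0.3))
# q + C > 1: consensus, X_2 -> 0
print(integrate_two_languages(q=0.8, C=0.5))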
<<</Two competing languages>>>
<<<@!START@$N$@!END@ competing languages>>>
The above setting can be extended to the case of an arbitrary number $N$ of competing languages in a given area. Languages, numbered $i=1,\dots ,N$, are more or less favoured, depending on their attractivenesses $A_i$. The latter quantities are assumed to be quenched, i.e., fixed once for all. This non-trivial static profile of attractivenesses is responsible for conversions of single individuals from less attractive to more attractive languages.
Let $X(t)$ be the total population of the area under consideration at time $t$, and $X_i(t)$ be the number of speakers of language number $i=1,\dots ,N$. The dynamics of the model are defined by the rate equations
The terms underlined by braces describe the intrinsic dynamics of the numbers of speakers of each language. The novel feature here is again the presence of the parameter $q$, which is responsible for imbalanced dynamics, allowing thus the possibility of language coexistence. The last term in (DISPLAY_FORM12) describes the conversions of single individuals. If language $i$ is more attractive than language $j$, there is a net positive conversion rate $C_{ji}=-C_{ij}$ from language $j$ to language $i$. For the sake of simplicity, we assume that these conversion rates depend linearly on the differences of attractivenesses between departure and target languages, i.e.,
in some consistent units.
Throughout this work we shall not pay any attention to the evolution of the whole population $X(t)$. We therefore reformulate the model in terms of the fractions
of speakers of the various languages, which sum up to unity:
The reduction to be derived below is quite natural in the present setting. It provides an example of the reduction of Lotka-Volterra equations to replicator equations, proposed in BIBREF23 (see also BIBREF19, BIBREF20, BIBREF21). In the present situation, for $q<1$, which is precisely the range of $q$ where there is a possibility of language coexistence, the dynamics of the fractions $x_i(t)$ obeys the following reduced rate equations, which can be derived from (DISPLAY_FORM12):
with
and where attractivenesses and conversion rates have been rescaled according to
In the following, we focus our attention onto the stationary states of the model, rather than on its dynamics. It is therefore legitimate to redefine time according to
so that equations (DISPLAY_FORM16) simplify to
The rate equations (DISPLAY_FORM20) for the fractions of speakers of the $N$ competing languages will be the starting point of further developments. The quantity $Z(t)$ can be alternatively viewed as a dynamical Lagrange multiplier ensuring that the dynamics conserves the sum rule (DISPLAY_FORM15). The above equations belong to the class of replicator equations (see e.g. BIBREF19, BIBREF20, BIBREF21). Extensive studies of the dynamics of this class of equations have been made in mathematical biology, where the main focus has been on systematic classifications of fixed points and bifurcations in low-dimensional cases BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28.
From now on, we focus on the stationary state of the model for arbitrarily high values of the number $N$ of competing languages. The analysis of this goes as follows. The stationary values $x_i$ of the fractions of speakers are such that the right-hand sides of (DISPLAY_FORM20) vanish. For each language number $i$, there are two possibilities: either $x_i=0$, i.e., language $i$ gets extinct, or $x_i>0$, i.e., language $i$ survives forever. The non-zero fractions $x_i$ of speakers of surviving languages obey the coupled linear equations
where the parameter $Z$ is determined by expressing that the sum rule (DISPLAY_FORM15) holds in the stationary state. For generic values of model parameters, there is a unique stationary state, and the system relaxes exponentially fast to the latter, irrespective of its initial conditions. The uniqueness of the attractor is characteristic of the specific form of the rate equations (DISPLAY_FORM20), (DISPLAY_FORM21), with skew-symmetric conversion rates $c_{ij}$ (see (DISPLAY_FORM18)). This has been demonstrated explicitly in the case of two competing languages, studied in detail in Section SECREF1. The problem is however more subtle than it seems at first sight, as the number $K$ of surviving languages depends on model parameters in a non-trivial way.
<<</@!START@$N$@!END@ competing languages>>>
<<<The case of equally spaced attractivenesses>>>
It is useful to consider first the simple case where the (reduced) attractivenesses $a_i$ of the $N$ competing languages are equally spaced between 0 and some maximal value that we denote by $2g$. Numbering languages in order of decreasing attractivenesses, so that language 1 is the most attractive and language $N$ the least attractive, this reads
We have
The parameter $g$ is therefore the mean attractiveness.
The (reduced) conversion rates read
so that the fixed-point equations (DISPLAY_FORM21) take the form
Already in this simple situation the number $K$ of surviving languages depends on the mean attractiveness $g$ in a non-trivial way.
Consider first the situation where all languages survive ($K=N$). This is certainly true for $g=0$, where there are no conversions, so that the solution is simply $x_i=1/N$. There, all languages are indeed equally popular, as nothing distinguishes them. More generally, as long as all languages survive, the stationary solution obeying (DISPLAY_FORM26) reads
for $i=1,\dots ,N$. The above solution ceases to hold when the fraction of speakers of the least attractive language vanishes, i.e., $x_N=0$. This first extinction takes place for the threshold value
of the mean attractiveness $g$.
Consider now the general case where only $K$ among the $N$ languages survive. These are necessarily the $K$ most attractive ones, shown as red symbols in Figure FIGREF29.
In this situation, (DISPLAY_FORM26) yields
for $i=1,\dots ,K$. The linear relationship between the attractiveness $a_i$ of language $i$ and the stationary fraction $x_i$ of speakers of that language, observed in (DISPLAY_FORM27) and (DISPLAY_FORM30), is a general feature of the model (see Section SECREF34). The fraction $x_K$ of speakers of the least attractive of the surviving languages vanishes at the following threshold mean attractiveness:
for $K=2,\dots ,N$.
The following picture therefore emerges for the stationary state of $N$ competing languages with equally spaced attractivenesses. The number $K$ of surviving languages decreases as a function of the mean attractiveness $g$, from $K=N$ (all languages survive) near $g=0$ to $K=1$ (consensus) at very large $g$. Less attractive languages become extinct one by one as every single one of the thresholds (DISPLAY_FORM31) is traversed, so that
Figure FIGREF33 illustrates this picture for 5 competing languages. In each of the sectors defined in (DISPLAY_FORM32), the stationary fractions $x_i$ of speakers of the surviving languages are given by (DISPLAY_FORM30). They depend continuously on the mean attractiveness $g$, even though they are given by different expressions in different sectors. In particular, $x_i$ is flat, i.e., independent of $g$, in the sector where $K=2i-1$. The fraction $x_1$ of speakers of the most attractive language grows monotonically as a function of $g$, whereas all the other fractions of speakers eventually go to zero.
When the number of languages $N$ is large, the range of values of $g$ where the successive transitions take place is very broad. The threshold at which a consensus is reached, $g_{N,2}=N/2$, is indeed much larger than the threshold at which the least attractive language disappears, $g_{N,N}=1/(N-1)$. The ratio between these two extreme thresholds reads $N(N-1)/2$.
<<</The case of equally spaced attractivenesses>>>
<<<The general case>>>
We now turn to the general case of $N$ competing languages with arbitrary reduced attractivenesses $a_i$. Throughout the following, languages are numbered in order of decreasing attractivenesses, i.e.,
We shall be interested mostly in the stationary state of the model. As already mentioned above, the number $K$ of surviving languages depends on model parameters in a non-trivial way. The $K$ surviving languages are always the most attractive ones (see Figure FIGREF29). The fractions $x_i$ of speakers of those languages, obeying the fixed-point equations (DISPLAY_FORM21), can be written in full generality as
for $i=1,\dots ,K$, with
The existence of an explicit expression (DISPLAY_FORM36) for the solution of the fixed-point equations (DISPLAY_FORM21) in full generality is a consequence of their simple linear-minus-bilinear form, which also ensures the uniqueness of the attractor.
The number $K$ of surviving languages is the largest such that the solution (DISPLAY_FORM36) obeys $x_i>0$ for $i=1,\dots ,K$. Equivalently, $K$ is the largest integer in $1,\dots ,N$ such that
Every single one of the differences involved in the sum is positive, so that:
From now on, we model attractivenesses as independent random variables. More precisely, we set
where $w$ is the mean attractiveness, and the rescaled attractivenesses $\xi _i$ are positive random variables drawn from some continuous distribution $f(\xi )$ such that $\left\langle \xi \right\rangle =1$. For any given instance of the model, i.e., any draw of the $N$ random variables $\lbrace \xi _i\rbrace $, languages are renumbered in order of decreasing attractivenesses (see (DISPLAY_FORM35)).
For concreteness we assume that $f(0)$ is non-vanishing and that $f(\xi )$ falls off more rapidly than $1/\xi ^3$ at large $\xi $. These hypotheses respectively imply that small values of $\xi $ are allowed with non-negligible probability and ensure the convergence of the second moment $\left\langle \xi ^2\right\rangle =1+\sigma ^2$, where $\sigma ^2$ is the variance of $\xi $.
Some quantities of interest can be expressed in closed form for all language numbers $N$. One example is the consensus probability ${\cal P}$, defined as the probability of reaching consensus, i.e., of having $K=1$ (see (DISPLAY_FORM39)). This reads
We have
for all $N\ge 2$, where
is the cumulative distribution of $\xi $.
In forthcoming numerical and analytical investigations we use the following distributions:
We begin our exploration of the model by looking at the dynamics of a typical instance of the model with $N=10$ languages and a uniform distribution of attractivenesses with $w=0.3$. Figure FIGREF45 shows the time-dependent fractions of speakers of all languages, obtained by solving the rate equations (DISPLAY_FORM20) numerically, with the uniform initial condition $x_i(0)=1/10$ for all $i$. In this example there are $K=6$ surviving languages. The plotted quantities are observed to converge to their stationary values given by (DISPLAY_FORM36) for $i=1,\dots ,6$, and to zero for $i=7,\dots ,10$. They are ordered as the corresponding attractivenesses at all positive times, i.e., $x_1(t)>x_2(t)>\dots >x_N(t)$. Some of the fractions however exhibit a non-monotonic evolution. This is the case for $i=5$ in the present example.
Figure FIGREF48 shows the distribution $p_K$ of the number $K$ of surviving languages, for $N=10$ (top) and $N=40$ (bottom), and a uniform distribution of attractivenesses for four values of the product
This choice is motivated by the analysis of Appendix SECREF5. Each dataset is the outcome of $10^7$ draws of the attractiveness profile. The widths of the distributions $p_K$ are observed to shrink as $N$ is increased, in agreement with the expected $1/\sqrt{N}$ behavior stemming from the law of large numbers. The corresponding mean fractions $\left\langle K\right\rangle /N$ of surviving languages are shown in Table TABREF49 to converge smoothly to the asymptotic prediction (DISPLAY_FORM126), i.e.,
with $1/N$ corrections.
An overall picture of the dependence of the statistics of surviving languages on the mean attractiveness $w$ is provided by Figure FIGREF50, showing the mean number $\left\langle K\right\rangle $ of surviving languages against $w$, for $N=10$ and uniform and exponential attractiveness distributions. The plotted quantity decreases monotonically, starting from the value $\left\langle K\right\rangle =N$ in the absence of conversions ($w=0$), and converging to its asymptotic value $\left\langle K\right\rangle =1$ in the $w\rightarrow \infty $ limit, where consensus is reached with certainty. Its dependence on $w$ is observed to be steeper for the exponential distribution. These observations are corroborated by the asymptotic analysis of Appendix SECREF5. For the uniform distribution, (DISPLAY_FORM126) yields the scaling law $\left\langle K\right\rangle \approx (N/w)^{1/2}$. Concomitantly, the consensus probability becomes sizeable for $w\sim N$ (see (DISPLAY_FORM124)). For the exponential distribution, (DISPLAY_FORM130) yields the decay law $\left\langle K\right\rangle \approx 1/w$, irrespective of $N$, and the consensus probability is strictly independent of $N$ (see (DISPLAY_FORM128)).
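The following sketch illustrates this mechanism numerically. It integrates one plausible Lotka-Volterra form, chosen to be consistent with the two-language equations reconstructed above (the precise normalisation of attractivenesses and conversion rates used in the paper may differ), draws random attractivenesses and counts the surviving languages:

import numpy as np

def surviving_languages(N=10, w=0.3, q=0.0, seed=None,
                        dt=0.01, t_max=500.0, eps=1e-4):
    # Count survivors for one draw of uniformly distributed attractivenesses (mean w).
    rng = np.random.default_rng(seed)
    A = w * rng.uniform(0.0, 2.0, size=N)
    X = np.full(N, 1.0 / N)
    for _ in range(int(t_max / dt)):
        total = X.sum()
        conversion = X * (A * total - (A * X).sum())   # X_i * sum_j (A_i - A_j) X_j
        dX = X * (1 - q * total - (1 - q) * X) + conversion
        X = np.clip(X + dt * dX, 0.0, None)
    x = X / X.sum()
    return int((x > eps).sum())

print([surviving_languages(seed=k) for k in range(5)])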
<<</The general case>>>
<<</Breaking internal symmetry: language coexistence by imbalanced population dynamics>>>
<<<Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses>>>
As mentioned in the Introduction, different competing languages may coexist in distinct geographical areas, because they are more or less favoured locally, despite the homogenising effects of migration and language shift BIBREF8, BIBREF9, BIBREF10. The aim of this section is to provide a quantitative understanding of this scenario. We continue to use the approach and the formalism of Section SECREF2. We however take the liberty of adopting slightly different notations, as both sections are entirely independent.
We consider the dynamics of two competing languages in a structured territory comprising several distinct geographic areas. For definiteness, we assume that the population of each area is homogeneous. We restrict ourselves to the geometry of an array of $M$ areas, where individuals can only migrate along the links joining neighbouring areas, as shown in Figure FIGREF51. We assume for simplicity that the migration rates $\gamma $ between neighbouring areas are uniform, so that in the very long run single individuals eventually perform random walks across the territory. The relative attractivenesses of both competing languages are distributed inhomogeneously among the various areas, so that the net conversion rate $C_m$ from language 2 to language 1 depends on the area number $m$. Finally, in order to emphasise the effects of spatial inhomogeneity on their own, we simplify the model by neglecting imbalance and thus set $q=1$.
Let $X_m(t)$ and $Y_m(t)$ denote the respective numbers of speakers of language 1 and of language 2 in area number $m=1,\dots ,M$ at time $t$. The dynamics of the model is defined by the coupled rate equations
The extremal sites $m=1$ and $m=M$ have only one neighbour. The corresponding equations have to be modified accordingly. The resulting boundary conditions can be advantageously recast as
and similarly for other quantities. These are known as Neumann boundary conditions.
The total populations $P_m(t)=X_m(t)+Y_m(t)$ of the various areas obey
irrespective of the conversion rates $C_m$. As a consequence, in the stationary state all areas have the same population, which reads $P_m=1$ in our reduced units. The corresponding stability matrix is given in (DISPLAY_FORM137). The population profile $P_m(t)$ therefore converges exponentially fast to its uniform stationary value, with unit relaxation time ($\omega =1$).
From now on we assume, for simplicity, that the total population of each area is unity in the initial state. This property is preserved by the dynamics, i.e., we have $P_m(t)=1$ for all $m$ and $t$, so that the rate equations (DISPLAY_FORM52) simplify to
The rate equations (DISPLAY_FORM55) for the fractions $X_m(t)$ of speakers of language 1 in the various areas provide another example of the broad class of replicator equations (see e.g. BIBREF19, BIBREF20, BIBREF21). The above equations are the starting point of the subsequent analysis. In the situation where language 1 is uniformly favoured or disfavoured, so that the conversion rates are constant ($C_m=C$), the above rate equations boil down to the discrete Fisher-Kolmogorov-Petrovsky-Piscounov (FKPP) equation BIBREF29, BIBREF30, which is known to exhibit traveling fronts, just as the well-known FKPP equation in the continuum BIBREF31, BIBREF32. In the present context, the focus will however be on stationary solutions on finite arrays, obeying the stationary version of these rate equations.
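The displayed form of (DISPLAY_FORM55) is omitted above; a reconstruction consistent with the discrete FKPP limit just mentioned and with the stability matrices quoted below is
$$\frac{{\rm d}X_m}{{\rm d}t}=C_m\,X_m(1-X_m)+\gamma \,(X_{m+1}+X_{m-1}-2X_m),\qquad m=1,\dots ,M,$$
together with Neumann boundary conditions (taken here as $X_0=X_1$ and $X_{M+1}=X_M$, an assumption consistent with the boundary conditions described above). The stationary solutions then obey $C_m X_m(1-X_m)+\gamma (X_{m+1}+X_{m-1}-2X_m)=0$.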
<<<Two geographic areas>>>
We begin with the case of two geographic areas connected by a single link. The problem is simple enough to allow for an explicit exposition of its full solution. The rate equations (DISPLAY_FORM55) become
Because of the migration fluxes, for any non-zero $\gamma $ it is impossible for any of the languages to become extinct in one area and survive in the other one. The only possibility is that of a uniform consensus, where one and the same language survives in all areas. The consensus state where language 1 survives is described by the stationary solution $X_1=X_2=1$. The corresponding stability matrix is
where $\mathop {{\rm diag}}(\dots )$ denotes a diagonal matrix (whose entries are listed), whereas ${\Delta }_2$ is defined in (DISPLAY_FORM135). The stability condition amounts to
Similarly, the consensus state where language 2 survives is described by the stationary solution $X_1=X_2=0$. The corresponding stability matrix is
The conditions for the latter to be stable read
Figure FIGREF66 shows the phase diagram of the model in the $C_1$–$C_2$ plane for $\gamma =1$. Region I1 is the consensus phase where language 1 survives. It is larger than the quadrant where this language is everywhere favoured (i.e., $C_1$ and $C_2$ are positive), as its boundary (red curve) reads $C_1C_2+\gamma (C_1+C_2)=0$. Similarly, region I2 is the consensus phase where language 2 survives. It is larger than the quadrant where this language is everywhere favoured (i.e., $C_1$ and $C_2$ are negative), as its boundary (blue curve) reads $C_1C_2-\gamma (C_1+C_2)=0$. The regions marked IIA and IIB are coexistence phases. These phases are located symmetrically around the line $C_1+C_2=0$ (black dashed line) where none of the languages is globally favoured. There, the fractions $X_1$ and $X_2$ of speakers of language 1 in both areas vary continuously between zero on the blue curve and unity on the red one, according to
with
We have therefore
all over the coexistence phases IIA and IIB. The right-hand-side equals 0 on the blue curve, 1 on the black dashed line, and 2 on the red curve.
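These stability boundaries can be checked numerically from the consensus stability matrices described above; the following sketch uses the reconstructed sign conventions (consensus 1 corresponds to $X_1=X_2=1$, consensus 2 to $X_1=X_2=0$):

import numpy as np

def consensus_growth_rates(C1, C2, gamma):
    # Largest eigenvalues of the stability matrices of the two consensus states.
    Delta2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
    S1 = np.diag([-C1, -C2]) - gamma * Delta2   # consensus where language 1 survives
    S0 = np.diag([+C1, +C2]) - gamma * Delta2   # consensus where language 2 survives
    return np.linalg.eigvalsh(S1).max(), np.linalg.eigvalsh(S0).max()

# gamma = 1: consensus 1 is stable when C1*C2 + gamma*(C1 + C2) > 0.
print(consensus_growth_rates(C1=2.0, C2=-0.5, gamma=1.0))   # first negative, second positive: region I1
print(consensus_growth_rates(C1=2.0, C2=-0.8, gamma=1.0))   # both positive: coexistence region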
<<</Two geographic areas>>>
<<<@!START@$M$@!END@ geographical areas>>>
From now on we consider the general situation of $M$ geographic areas, as shown in Figure FIGREF51. The basic properties of the model can be inferred from the case of two areas, studied in section SECREF57. In full generality, because of migration fluxes, it is impossible for any of the languages to become extinct in some areas and survive in some other ones. The only possibility is that of a uniform consensus, where one and the same language survives in all areas.
The consensus state where language 1 survives is described by the uniform stationary solution where $X_m=1$ for all $m=1,\dots ,M$. The corresponding stability matrix is
Similarly, the consensus state where language 2 survives corresponds to the stationary solution where $X_m=0$ for all $m=1,\dots ,M$. The corresponding stability matrix is
These expressions respectively generalise (DISPLAY_FORM59) and (DISPLAY_FORM61).
If all the conversion rates $C_m$ vanish, both the above matrices read $-\gamma {\Delta }_M$, whose spectrum comprises one vanishing eigenvalue (see (DISPLAY_FORM136)). In the regime where all the conversion rates $C_m$ are small with respect to $\gamma $, perturbation theory tells us that the largest eigenvalues of ${S}_M^{(0)}$ and ${S}_M^{(1)}$ respectively read $\overline{C}$ and $-\overline{C}$, to leading order, where
We therefore predict that the average conversion rate $\overline{C}$ determines the fate of the system in the regime where conversion rates are small with respect to $\gamma $. If language 1 is globally favoured, i.e., $\overline{C}>0$, the system reaches the consensus where language 1 survives, and vice versa.
In the generic situation where the conversion rates $C_m$ are comparable to $\gamma $, their dispersion around their spatial average $\overline{C}$ broadens the spectra of the matrices ${S}_M^{(1)}$ and ${S}_M^{(0)}$. As a consequence, the condition $\overline{C}>0$ (resp. $\overline{C}<0$) is necessary, albeit not sufficient, for the consensus where language 1 (resp. language 2) survives to be stable.
In the following we shall successively consider ordered attractiveness profiles in Section SECREF71 and random ones in Section SECREF84.
<<</@!START@$M$@!END@ geographical areas>>>
<<<Ordered attractiveness profiles>>>
This section is devoted to a simple situation where the attractiveness profiles of both languages are ordered spatially. More specifically, we consider the case where language 1 is favoured in the $K$ first (i.e., leftmost) areas, whereas language 2 is favoured in the $L$ last (i.e., rightmost) areas, with $K\ge L$ and $K+L=M$. For the sake of simplicity, we choose to describe this situation by conversion rates that have unit magnitude, as shown in Figure FIGREF73:
The symmetric situation where $M$ is even and $K=L=M/2$, so that $\overline{C}=0$, can be viewed as a generalisation of the case of two geographic areas, studied in Section SECREF57, for $C_1+C_2=0$, i.e., along the black dashed line of Figure FIGREF66. Both languages play symmetric roles, so that no language is globally preferred, and no consensus can be reached. As a consequence, both languages survive everywhere, albeit with non-trivial spatial profiles, which can be thought of as avatars of the FKPP traveling fronts mentioned above, rendered stationary by being pinned by boundary conditions. The upper panel of Figure FIGREF76 shows the stationary fraction $X_m$ of speakers of language 1 against area number, for $M=20$ (i.e., $K=L=10$) and several $\gamma $. The abscissa $m-1/2$ is chosen in order to have a symmetric plot. As one might expect, each language is preferred in the areas where it is favoured, i.e., we have $X_m>1/2$ for $m=1,\dots ,K$, whereas $X_m<1/2$ for $m=K+1,\dots ,M$. Profiles get smoother as the migration rate $\gamma $ is increased. The width $\xi $ of the transition region is indeed expected to grow as
This scaling law is nothing but the large $\gamma $ behaviour of the exact dispersion relation
(see (DISPLAY_FORM148)) between $\gamma $ and the decay rate $\mu $ such that either $X_m$ or $1-X_m$ falls off as ${\rm e}^{\pm m\mu }$, with the natural identification $\xi =1/\mu $.
The asymmetric situation where $K>L$, so that $\overline{C}=(K-L)/M>0$, implying that language 1 is globally favoured, is entirely different. The system indeed reaches a consensus state where the favoured language survives, whenever the migration rate $\gamma $ exceeds some threshold $\gamma _c$. This threshold, corresponding to the consensus state becoming marginally stable, only depends on the integers $K$ and $L$. It is derived in Appendix SECREF6 and given by the largest solution of (DISPLAY_FORM153).
This is illustrated in the lower panel of Figure FIGREF76, showing $X_m$ against $m-1/2$ for $K=12$ and $L=8$, and the same values of $\gamma $ as on the upper panel. The corresponding threshold reads $\gamma _c=157.265$. The whole profile shifts upwards while it broadens as $\gamma $ is increased. It tends uniformly to unity as $\gamma $ tends to $\gamma _c$, demonstrating the continuous nature of the transition where consensus is formed.
The threshold migration rate $\gamma _c$ assumes a scaling form in the regime where $K$ and $L$ are large and comparable. Setting
so that the excess fraction $f$ identifies with the average conversion rate $\overline{C}$, the threshold rate $\gamma _c$ grows quadratically with the system size $M$, according to
where $g(f)$ is the smallest positive solution of the implicit equation
which is a rescaled form of (DISPLAY_FORM153).
The quadratic growth law (DISPLAY_FORM78) is a consequence of the diffusive nature of migrations. The following limiting cases deserve special mention.
For $f\rightarrow 0$, i.e., $K$ and $L$ relatively close to each other ($K-L\ll M$), we have
yielding to leading order
For $f\rightarrow 1$, i.e., $L\ll K$, we have $g(f)\approx \pi /(4(1-f))$, up to exponentially small corrections, so that
The situation considered in the lower panel of Figure FIGREF76, i.e., $M=20$, $K=12$ and $L=8$, corresponds to $f=1/5$, hence $g=0.799622814\dots $, so that
This scaling result predicts $\gamma _c\approx 156.397$ for $M=20$, a good approximation to the exact value $\gamma _c=157.265$.
<<</Ordered attractiveness profiles>>>
<<<Random attractiveness profiles>>>
We now consider the situation of randomly disordered attractiveness profiles. The conversion rates $C_m$ are modelled as independent random variables drawn from some symmetric distribution $f(C)$, such that $\left\langle C_m\right\rangle =0$ and $\left\langle C_m^2\right\rangle =w^2$.
The first quantity we will focus on is the consensus probability ${\cal P}$. It is clear from a dimensional analysis of the rate equations (DISPLAY_FORM56) that ${\cal P}$ depends on the ratio $\gamma /w$, the system size $M$, and the distribution $f(C)$. Furthermore, ${\cal P}$ is expected to increase with $\gamma /w$. It can be estimated as follows in the limiting situations where $\gamma /w$ is either very small or very large.
In the regime where $\gamma \ll w$ (e.g. far from the center in Figure FIGREF66), conversion effects dominate migration effects. There, a consensus where language 1 (resp. language 2) survives can only be reached if all conversion rates $C_m$ are positive (resp. negative). The total consensus probability thus scales as
Consensus is therefore highly improbable in this regime. In other words, coexistence of both languages is overwhelmingly the rule.
In the opposite regime where $\gamma \gg w$ (e.g. in the vicinity of the center in Figure FIGREF66), migration effects dominate conversion effects. There, we have seen in Section SECREF67 that the average conversion rate defined in (DISPLAY_FORM70) essentially determines the fate of the system. If language 1 is globally favoured, i.e., $\overline{C}>0$, then the system reaches the uniform consensus where language 1 survives, and vice versa. Coexistence is therefore rare in this regime, as it requires $\overline{C}$ to be atypically small. The probability ${\cal Q}$ for this to occur, to be identified with $1-{\cal P}$, has been given a precise definition in Appendix SECREF6 by means of the expansion (DISPLAY_FORM143) of $D_M=\det {S}_M^{(1)}$ as a power series in the $C_m$, and estimated within a simplified Gaussian setting. In spite of the heuristic character of its derivation, the resulting estimate (DISPLAY_FORM147) demonstrates that the consensus probability scales as
all over the regime where the ratio $\gamma /w$ and the system size $M$ are both large. Furthermore, taking (DISPLAY_FORM147) literally, we obtain the following heuristic prediction for the finite-size scaling function:
The scaling result (DISPLAY_FORM86) shows that the scale of the migration rate $\gamma $ which is relevant to describe the consensus probability for a typical disordered profile of attractivenesses reads
This estimate grows less rapidly with $M$ than the corresponding threshold for ordered profiles, which obeys a quadratic growth law (see (DISPLAY_FORM78)). The exponent $3/2$ of the scaling law (DISPLAY_FORM88) can be put in perspective with the anomalous scaling of the localisation length in one-dimensional Anderson localisation near band edges. There is indeed a formal analogy between the stability matrices of the present problem and the Hamiltonian of a tight-binding electron in a disordered potential, with the random conversion rates $C_m$ replacing the disordered on-site energies. For the tight-binding problem, the localisation length is known to diverge as $\xi \sim 1/w^2$ in the bulk of the spectrum, albeit only as $\xi \sim 1/w^{2/3}$ in the vicinity of band edges BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37. Replacing $\xi $ by the system size $M$ and remembering that $w$ stands for $w/\gamma $, we recover (DISPLAY_FORM88). The exponent $3/2$ is therefore nothing but the inverse of the exponent $2/3$ of anomalous band-edge localisation.
Figure FIGREF89 shows a finite-size scaling plot of the consensus probability ${\cal P}$ against $x=\gamma /M^{3/2}$. Data correspond to arrays of length $M=20$ with uniform and Gaussian distributions of conversion rates with $w=1$. Each data point is the outcome of $10^6$ independent realisations. The thin black curve is a guide to the eye, suggesting that the finite-size scaling function $\Phi $ is universal, i.e., independent of details of the conversion rate distribution. It has indeed been checked that the weak residual dependence of data points on the latter distribution becomes even smaller as $M$ is further increased. The full green curve shows the heuristic prediction (DISPLAY_FORM87), providing a semi-quantitative picture of the finite-size scaling function. For instance, consensus is reached with probability ${\cal P}=1/2$ and ${\cal P}=2/3$ respectively for $x\approx 0.18$ and $x\approx 0.33$, according to actual data, whereas (DISPLAY_FORM87) respectively predicts $x=1/\sqrt{12}=0.288675\dots $ and $x=1/2$.
Besides the value of the consensus probability ${\cal P}$, the next question is what determines whether or not the system reaches consensus. In Section SECREF67 it has been demonstrated that the average conversion rate $\overline{C}$ defined in (DISPLAY_FORM70) essentially determines the fate of the system in the regime where migration effects dominate conversion effects. It has also been shown that the consensus denoted by I1, where language 1 survives, can only be stable for $\overline{C}>0$, whereas the consensus denoted by I2, where language 2 survives, can only be stable for $\overline{C}<0$. The above statements are made quantitative in Figure FIGREF90, showing the probability distribution of the average conversion rate $\overline{C}$, for a Gaussian distribution of conversion rates with $w=1$. The total (i.e., unconditioned) distribution (black curves) is Gaussian. Red and blue curves show the distributions conditioned on consensus. They are indeed observed to live entirely on $\overline{C}>0$ for I1 and on $\overline{C}<0$ for I2. Finally, the distributions conditioned on coexistence (green curves, denoted by II) exhibit narrow symmetric shapes around the origin. Values of the migration rate $\gamma $ are chosen so as to have three partial histograms with equal weights, i.e., a consensus probability ${\cal P}=2/3$. This fixes $\gamma \approx 0.351$ for $M=2$ (top) and $\gamma \approx 10.22$ for $M=10$ (bottom).
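A compact simulation of this consensus-probability measurement, based on the reduced rate equation reconstructed earlier in this section (integration parameters and the classification tolerance are illustrative choices), could look as follows:

import numpy as np

def consensus_probability(M=20, gamma=10.0, w=1.0, n_samples=100,
                          dt=0.01, t_max=200.0, tol=1e-3, seed=0):
    # Estimate P(consensus) for Gaussian conversion-rate profiles of width w.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        C = rng.normal(0.0, w, size=M)
        X = np.full(M, 0.5)
        for _ in range(int(t_max / dt)):
            lap = np.empty(M)
            lap[1:-1] = X[2:] + X[:-2] - 2.0 * X[1:-1]
            lap[0], lap[-1] = X[1] - X[0], X[-2] - X[-1]   # Neumann boundaries
            X = np.clip(X + dt * (C * X * (1.0 - X) + gamma * lap), 0.0, 1.0)
        if X.min() > 1.0 - tol or X.max() < tol:           # uniform consensus reached
            hits += 1
    return hits / n_samples

# The estimate can then be plotted against the scaling variable x = gamma / (M**1.5 * w).
# print(consensus_probability())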
<<</Random attractiveness profiles>>>
<<</Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses>>>
<<<Discussion>>>
An area of interest that is common to both physicists and linguists concerns the evolution of competing languages. It was long assumed that such competition would result in the dominance of one language above all its competitors, until some recent work hinted that coexistence might be possible under specific circumstances. We argue here that coexistence of two or more competing languages can result from two symmetry-breaking mechanisms – due respectively to imbalanced internal dynamics and spatial heterogeneity – and engage in a quantitative exploration of the circumstances which lead to this coexistence. In this work, both symmetry-breaking scenarios are dealt with on an equal footing.
In the first case of competing languages in a single geographical area, our introduction of an interpolation parameter $q$, which measures the amount of imbalance in the internal dynamics, turns out to be crucial for the investigation of language coexistence. It is conceptually somewhat subtle, since it appears only in the saturation terms in the coupled logistic equations used here to describe language competition; in contrast to the conversion terms (describing language shift from a less to a more favoured language), its appearance is symmetric with respect to both languages. For many competing languages, the ensuing rate equations for the fractions of speakers are seen to bear a strong resemblance to a broad range of models used in theoretical ecology, including Lotka-Volterra or predator-prey systems.
We first consider the case where the $N$ languages in competition in a single area have equally spaced attractivenesses. This simple situation allows for an exact characterisation of the stationary state. The range of attractivenesses is measured by the mean attractiveness $g$. As this parameter is increased, the number $K$ of surviving languages decreases progressively, as the least favoured languages successively become extinct at threshold values of $g$. Importantly, the range of values of $g$ between the start of the disappearances and the appearance of consensus grows proportionally to $N^2$. There is therefore a substantial amount of coexistence between languages that are significantly attractive.
In the general situation, where the attractivenesses of the competing languages are modelled as random variables with an arbitrary distribution, the outcomes of numerical studies at finite $N$ are corroborated by a detailed asymptotic analysis in the regime of large $N$. One of the key results is that the quantity $W=Nw$ (the product of the number of languages $N$ with the mean attractiveness $w$) determines many quantities of interest, including the mean fraction $R=\left\langle K\right\rangle /N$ of surviving languages. The relation between $W$ and $R$ is however non-universal, as it depends on the full attractiveness distribution. This non-universality is most prominent in the regime where the mean attractiveness is large, so that only the few most favoured languages survive in the stationary state. The number of such survivors is found to obey a scaling law, whose non-universal critical exponent is dictated by the specific form of the attractiveness distribution near its upper edge.
As far as symmetry breaking via spatial heterogeneity is concerned, we consider the paradigmatic case of two competing languages in a linear array of $M$ geographic areas, whose neighbours are linked via a uniform migration rate $\gamma $. In the simplest situation of two areas, we determine the full phase diagram of the model as a function of $\gamma $ as well as the conversion rates ruling language shift in each area. This allows us to associate different regions of phase space with either consensus or coexistence. Our analysis is then generalised to longer arrays of $M$ linked geographical regions. We first consider ordered attractiveness profiles, where language 1 is favoured in the $K$ leftmost areas, while language 2 is favoured in the $L$ rightmost ones. If the two blocks are of equal size so that no language is globally preferred, coexistence always results; however, the spatial profiles of the language speakers themselves are rather non-trivial. For blocks of unequal size, there is a transition from a situation of coexistence at low migration rates to a situation of uniform consensus at high migration rates, where the language favoured in the larger block is the only survivor in all areas. The critical migration rate at this transition grows as $M^2$. We next investigate disordered attractiveness profiles, where conversion rates are modelled as random variables. There, the probability of observing a uniform consensus is given by a universal scaling function of $x=\gamma /(M^{3/2}w)$, where $w$ is the width of the symmetric distribution of conversion rates.
The ratio between migration and conversion rates beyond which there is consensus – either with certainty or with a sizeable probability – grows with the number of geographic areas as $M^2$ for ordered profiles of attractivenesses, and as $M^{3/2}$ for disordered ones. The first exponent is a consequence of the diffusive nature of migrations, whereas the second one has been derived in Appendix SECREF134 and related to anomalous band-edge scaling in one-dimensional Anderson localisation. If geographical areas were arranged according to a more complex geometric structure, these exponents would respectively read $2d/d_s$ and $(4-d_s)/(2d_s)$, with $d$ and $d_s$ being the fractal and spectral dimensions of the underlying structure (see BIBREF38, BIBREF39, and BIBREF40, BIBREF41 for reviews).
Finally, we remark on another striking formal analogy – that between the rate equations (DISPLAY_FORM20) presented here, and those of a spatially extended model of competitive dynamics BIBREF42, itself inspired by a model of interacting black holes BIBREF43. In the latter, the non-trivial patterns of survivors on various networks and other geometrical structures were a particular focus of investigation, and led to the unearthing of universal behaviour. We believe that a network model of competing languages which combines both the symmetry-breaking scenarios discussed in this paper, so that every node corresponds to a geographical area with its own imbalanced internal dynamics, might lead to the discovery of similar universalities.
AM warmly thanks the Leverhulme Trust for the Visiting Professorship that funded this research, as well as the Faculty of Linguistics, Philology and Phonetics at the University of Oxford, for their hospitality.
Both authors contributed equally to the present work, were equally involved in the preparation of the manuscript, and have read and approved the final manuscript.
<<</Discussion>>>
<<<Asymptotic analysis for a large number of competing languages in a single area>>>
This Appendix is devoted to an analytical investigation of the statistics of surviving languages in a single geographic area, in the regime where the numbers $N$ of competing languages is large.
The properties of the attractiveness distribution of the languages are key to determining whether coexistence or consensus will prevail. In particular the transition to consensus depends critically, and non-universally, on the way in which the attractiveness distribution decays, as will be shown below.
Statistical fluctuations between various instances of the model become negligible for large $N$, so that sharp (i.e., self-averaging) expressions can be obtained for many quantities of interest.
Let us begin with the simplest situation where all languages survive. When the number $N$ of competing languages is large, the condition for this to occur assumes a simple form. Consider the expression (DISPLAY_FORM36) for $x_N$. The law of large numbers ensures that the sum $S$ converges to
whereas $a_N$ is relatively negligible. The condition that all the $N$ competing languages survive therefore takes the form of a sharp inequality at large $N$, i.e.,
All over this regime, the expression for $x_N$ simplifies to
The above analysis can be extended to the general situation where the numbers $N$ of competing languages and $K$ of surviving ones are large and comparable, with the fraction of surviving languages,
taking any value in the range $0<R<1$.
The rescaled attractiveness of the least favoured surviving language, namely
turns out to play a key role in the subsequent analysis. Let us introduce for further reference the truncated moments ($k=0,1,2$)
First of all, the relationship between $R$ and $\eta $ becomes sharp in the large-$N$ regime. We have indeed
The limits of all quantities of interest can be similarly expressed in terms of $\eta $. We have for instance
for the sum introduced in (DISPLAY_FORM37). The marginal stability condition, namely that language number $K$ is on the verge of becoming extinct, translates to
The asymptotic dependence of the fraction $R$ of surviving languages on the rescaled mean attractiveness $W$ is therefore given in parametric form by (DISPLAY_FORM97) and (DISPLAY_FORM99). The identity
demonstrates that $R$ is a decreasing function of $W$, as it should be.
When the parameter $W$ reaches unity from above, the model exhibits a continuous transition from the situation where all languages survive. The parameter $\eta $ vanishes linearly as
with unit prefactor, irrespective of the attractiveness distribution. The fraction of surviving languages departs linearly from unity, according to
In the regime where $W\gg 1$, the fraction $R$ of surviving languages is expected to fall off to zero. As a consequence of (DISPLAY_FORM97), $R\ll 1$ corresponds to the parameter $\eta $ being close to the upper edge of the attractiveness distribution $f(\xi )$. This is to be expected, as the last surviving languages are the most attractive ones. As a consequence, the form of the relationship between $W$ and $R$ for $W\gg 1$ is highly non-universal, as it depends on the behavior of the distribution $f(\xi )$ near its upper edge. It turns out that the following two main classes of attractiveness distributions have to be considered.
Class 1: Power law at finite distance.
Consider the situation where the distribution $f(\xi )$ has a finite upper edge $\xi _0$, and either vanishes or diverges as a power law near this edge, i.e.,
The exponent $\alpha $ is positive. The density $f(\xi )$ diverges near its upper edge $\xi _0$ for $0<\alpha <1$, whereas it vanishes near $\xi _0$ for $\alpha >1$, and takes a constant value $f(\xi _0)=A$ for $\alpha =1$.
In the relevant regime where $\eta $ is close to $\xi _0$, the expressions (DISPLAY_FORM97) and (DISPLAY_FORM99) simplify to
Eliminating $\eta $ between both above estimates, we obtain the following power-law relationship between $W$ and $R$:
In terms of the original quantities $K$ and $w$, the above result reads
Setting $K=1$ in this estimate, we predict that the consensus probability ${\cal P}$ becomes appreciable when
Class 2: Power law at infinity.
Consider now the situation where the distribution extends up to infinity, and falls off as a power law, i.e.,
The exponent $\beta $ is larger than 2, in order for the first two moments of $\xi $ to be convergent.
In the relevant regime where $\eta $ is large, the expressions (DISPLAY_FORM97) and (DISPLAY_FORM99) simplify to
Eliminating $\eta $ between both above estimates, we obtain the following power-law relationship between $W$ and $R$:
In terms of the original quantities $K$ and $w$, the above result reads
Setting $K=1$ in this estimate, we predict that the consensus probability ${\cal P}$ becomes appreciable when
We now summarise the above discussion. In the regime where $W\gg 1$, the fraction $R$ of surviving languages falls off as a power law of the form
where the positive exponent $\lambda $ varies continuously, according to whether the distribution of attractivenesses extends up to a finite distance or infinity (see (DISPLAY_FORM106), (DISPLAY_FORM112)):
In the marginal situation between both classes mentioned above, comprising e.g. the exponential distribution, the decay exponent sticks to its borderline value
The decay law $R\sim 1/W$ might however be affected by logarithmic corrections.
Another view of the above scaling laws goes as follows. When the number of languages $N$ is large, the number of surviving languages decreases from $K=N$ to $K=1$ over a very broad range of mean attractivenesses. The condition for all languages to survive (see (DISPLAY_FORM92)) sets the beginning of this range as
The occurrence of a sizeable consensus probability ${\cal P}$ sets the end of this range as
where the exponent $\mu >-1/2$ varies continuously, according to (see (DISPLAY_FORM108), (DISPLAY_FORM114)):
In the marginal situation between both classes, the above exponent sticks to its borderline value
The extension of the dynamical range, defined as the ratio between the two scales introduced above, diverges as
We predict in particular a linear divergence for the exponential distribution ($\mu =0$) and a quadratic divergence for the uniform distribution ($\mu =1$). This explains the qualitative difference observed in Figure FIGREF50. The slowest growth of the dynamical range is the square-root law observed for distributions falling off as a power-law with $\beta \rightarrow 2$, so that $\mu =-1/2$.
To close, let us underline that most of the quantities met above assume simple forms for the uniform and exponential distributions (see (DISPLAY_FORM44)).
Uniform distribution.
The consensus probability (see (DISPLAY_FORM42)) reads
For large $N$, this becomes ${\cal P}\approx \exp (-N/(2w))$, namely a function of the ratio $w/N$, in agreement with (DISPLAY_FORM119) and (DISPLAY_FORM120), with exponent $\mu =1$, since $\alpha =1$.
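As a quick numerical illustration of this large-$N$ estimate (a minimal sketch that only evaluates the asymptotic formula quoted above, not the exact finite-$N$ expression), one can tabulate ${\cal P}\approx \exp (-N/(2w))$ for a few values of $N$ and $w$:

```python
import math

# Large-N estimate of the consensus probability for the uniform distribution:
# P ~ exp(-N / (2 w)); a sizeable P requires w of order N, consistent with mu = 1.
for N in (10, 50, 100):
    for w in (5, 25, 50):
        print(f"N={N:4d}  w={w:3d}  P ~ {math.exp(-N / (2 * w)):.3f}")
```

The output makes the scaling explicit: for fixed $N$, the consensus probability only becomes appreciable once $w$ grows to be of order $N$.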
The truncated moments read
We thus obtain
with exponent $\lambda =1/2$, in agreement with (DISPLAY_FORM106) and (DISPLAY_FORM116) for $\alpha =1$.
Exponential distribution.
The consensus probability reads
irrespective of $N$, in agreement with (DISPLAY_FORM119), with exponent $\mu =0$ (see (DISPLAY_FORM121)).
The truncated moments read
We thus obtain
with exponent $\lambda =1$, in agreement with (DISPLAY_FORM117).
<<</Asymptotic analysis for a large number of competing languages in a single area>>>
<<<Stability matrices and their spectra>>>
<<<Generalities>>>
This Appendix is devoted to stability matrices and their spectra. Let us begin by reviewing some general background (see e.g. BIBREF44 for a comprehensive overview). Consider an autonomous dynamical system defined by a vector field ${E}({x})$ in $N$ dimensions, i.e., by $N$ coupled first-order equations of the form
with $m,n=1,\dots ,N$, where the right-hand sides depend on the dynamical variables $\lbrace x_n(t)\rbrace $ themselves, but not explicitly on time.
Assume the above dynamical system has a fixed point $\lbrace x_m\rbrace $, such that $E_m\lbrace x_n\rbrace =0$ for all $m$. Small deviations $\lbrace \delta x_m(t)\rbrace $ around the fixed point $\lbrace x_m\rbrace $ obey the linearised dynamics given by the stability matrix ${S}$, i.e., the $N\times N$ matrix defined by
where the right-hand sides are evaluated at the fixed point. The fixed point is stable, in the strong sense that small deviations fall off exponentially fast to zero, if all eigenvalues $\lambda _a$ of ${S}$ have negative real parts. In this case, if all the $\lambda _a$ are real, their opposites $\omega _a=-\lambda _a>0$ are the inverse relaxation times of the linearised dynamics. In particular, the opposite of the largest eigenvalue (the one closest to zero), simply denoted by $\omega $, characterises exponential convergence to the fixed point for a generic initial state. If some of the $\lambda _a$ have non-zero imaginary parts, convergence is oscillatory.
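As a concrete illustration of this general recipe (and of it only; the two-dimensional vector field below is an arbitrary toy example, not one of the rate equations studied in the main text), the stability matrix of a fixed point can be estimated by finite differences and its eigenvalues inspected:

```python
import numpy as np

def stability_matrix(E, x_star, eps=1e-6):
    """Finite-difference estimate of S_{mn} = dE_m/dx_n at the fixed point x_star."""
    n = len(x_star)
    S = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        S[:, j] = (E(x_star + dx) - E(x_star - dx)) / (2 * eps)
    return S

# Toy two-species competitive system (illustrative only):
#   dx1/dt = x1 (1 - x1 - 0.5 x2),  dx2/dt = x2 (1 - x2 - 0.5 x1)
def E(x):
    x1, x2 = x
    return np.array([x1 * (1 - x1 - 0.5 * x2),
                     x2 * (1 - x2 - 0.5 * x1)])

x_star = np.array([2 / 3, 2 / 3])        # coexistence fixed point of the toy system
S = stability_matrix(E, x_star)
lam = np.linalg.eigvals(S)
print("eigenvalues:", lam)               # both real parts negative -> stable
print("slowest decay rate:", -max(lam.real))
```

Here both eigenvalues are real and negative ($-1$ and $-1/3$), so the coexistence fixed point of the toy system is stable, with slowest decay rate $1/3$.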
The analysis of fixed points and bifurcations in low-dimensional Lotka-Volterra and replicator equations has been the subject of extensive investigations BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28 (see also BIBREF19, BIBREF20, BIBREF21).
<<</Generalities>>>
<<<Array models>>>
The remainder of this Appendix is devoted to the stability matrices involved in the array models considered in Section SECREF3, for an arbitrarily large number $M$ of geographical areas. All those stability matrices are related to the symmetric $M\times M$ matrix
representing (minus) the Laplacian operator on a linear array of $M$ sites, with Neumann boundary conditions. References BIBREF45, BIBREF46 provide reviews on the Laplacian and related operators on graphs.
The eigenvalues $\lambda _a$ of ${\Delta }_M$ and the corresponding normalised eigenvectors ${\phi }_a$, such that ${\Delta }_M{\phi }_a=\lambda _a{\phi }_a$ and ${\phi }_a\cdot {\phi }_b=\delta _{ab}$, read
($a=0,\dots ,M-1$). The vanishing eigenvalue $\lambda _0=0$ corresponds to the uniform eigenvector $\phi _{0,m}=1/\sqrt{M}$.
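A minimal numerical check of this spectrum is sketched below. Since the displayed equations are not reproduced here, the explicit matrix and the closed form of its eigenvalues (diagonal entries $1,2,\dots ,2,1$, off-diagonal entries $-1$, and $\lambda _a=4\sin ^2(a\pi /2M)$) are quoted as the standard textbook convention rather than taken from the text:

```python
import numpy as np

M = 6
# (Minus) Laplacian on a linear array of M sites with Neumann boundary conditions,
# assuming the usual convention: diagonal (1, 2, ..., 2, 1), off-diagonal -1.
Delta = 2 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
Delta[0, 0] = Delta[-1, -1] = 1

a = np.arange(M)
lam_exact = 4 * np.sin(a * np.pi / (2 * M)) ** 2   # assumed closed form, a = 0, ..., M-1
lam_num = np.linalg.eigvalsh(Delta)                # ascending order

print(np.allclose(np.sort(lam_exact), lam_num))    # True
print(lam_num[0])                                  # 0, with uniform eigenvector 1/sqrt(M)
```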
Let us begin by briefly considering the simple example of the stability matrix
of the rate equations (DISPLAY_FORM54) for the total populations $P_m(t)$. Its eigenvalues are $-1-\gamma \lambda _a$. The largest of them (i.e., the one closest to zero) is $-1$, so that the inverse relaxation time is given by $\omega =1$, as announced below (DISPLAY_FORM54).
Let us now consider the stability matrices
respectively defined in (DISPLAY_FORM68) and (DISPLAY_FORM69), and corresponding to both uniform consensus states for an arbitrary profile of conversion rates $C_m$. The ensuing stability conditions have been written down explicitly in (DISPLAY_FORM60) and (DISPLAY_FORM62) for $M=2$. It will soon become clear that it is virtually impossible to write them down for an arbitrary size $M$. Some information can however be gained from the calculation of the determinants of the above matrices. They only differ by a global sign change of all the conversion rates $C_m$, so that it is sufficient to consider ${S}_M^{(1)}$. It is a simple matter to realise that its determinant reads
where $u_m$ is a generalised eigenvector solving the following Cauchy problem:
with initial conditions $u_0=u_1=1$. We thus obtain recursively
and so on. The expression (DISPLAY_FORM141) for $D_2$ agrees with the second of the conditions (DISPLAY_FORM60) and with the equation of the red curve in Figure FIGREF66, as it should. The expression for $D_3$ demonstrates that the complexity of the stability conditions grows rapidly with the system size $M$.
<<<Random arrays>>>
In the case of random arrays, considered in Section SECREF84, the conversion rates $C_m$ are independent random variables such that $\left\langle C_m\right\rangle =0$ and $\left\langle C_m^2\right\rangle =w^2$.
The regime of most interest is where the conversion rates $C_n$ are small with respect to $\gamma $. In this regime, the determinant $D_M$ can be expanded as a power series in the conversion rates. The $u_m$ solving the Cauchy problem (DISPLAY_FORM140) are close to unity. Setting
where the $u_m^{(1)}$ are linear and the $u_m^{(2)}$ quadratic in the $C_n$, we obtain after some algebra
where
are respectively linear and quadratic in the $C_n$. We have
In Section SECREF84 we need an estimate of the probability ${\cal Q}$ that $\overline{C}=X/M$ is atypically small. Within the present setting, it is natural to define the latter event as $\left|X\right|<\left|Y\right|$. The corresponding probability can be worked out provided we make the ad hoc simplifying assumptions – which certainly do not hold in the real world – that $X$ and $Y$ are Gaussian and independent. Within this framework, the complex Gaussian random variable
has an isotropic density in the complex plane. We thus obtain
<<</Random arrays>>>
<<<Ordered arrays>>>
The aim of this last section is to investigate the spectrum of the stability matrix ${S}_M^{(1)}$ associated with the ordered profile of conversion rates given by (DISPLAY_FORM72).
In this case, the generalised eigenvector $u_m$ solving the Cauchy problem (DISPLAY_FORM140) can be worked out explicitly. We have $C_m=1$ for $m=1,\dots ,K$, and therefore $u_m=a{\rm e}^{m\mu }+b{\rm e}^{-m\mu }$, where $\mu >0$ obeys the dispersion relation
The initial conditions $u_0=u_1=1$ fix $a$ and $b$, and so
Similarly, we have $C_m=-1$ for $m=K+\ell $, with $\ell =1,\dots ,L$, and therefore $u_m=\alpha {\rm e}^{{\rm i}\ell q}+\beta {\rm e}^{-{\rm i}\ell q}$, where $0<q<\pi $ obeys the dispersion relation
Matching both solutions for $m=K$ and $K+1$ fixes $\alpha $ and $\beta $, and so
Inserting the latter result into (DISPLAY_FORM139), we obtain the following expression for the determinant of ${S}_M^{(1)}$, with $M=K+L$:
The vanishing of the above expression, i.e.,
signals that one eigenvalue of the stability matrix ${S}^{(1)}$ vanishes. In particular, the consensus state where language 1 survives becomes marginally stable at the threshold migration rate $\gamma _c$, where the largest eigenvalue of ${S}^{(1)}$ vanishes. Equation (DISPLAY_FORM153) amounts to a polynomial equation of the form $P_{K,L}(\gamma )=0$, where the polynomial $P_{K,L}$ has degree $K+L-1=M-1$. All its zeros are real, and $\gamma _c$ is the largest of them. The first few of these polynomials read
<<</Ordered arrays>>>
<<</Array models>>>
<<</Stability matrices and their spectra>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBreaking internal symmetry: language coexistence by imbalanced population dynamics\nTwo competing languages\n@!START@$N$@!END@ competing languages\nThe case of equally spaced attractivenesses\nThe general case\nBreaking spatial symmetry: language coexistence by inhomogeneous attractivenesses\nTwo geographic areas\n@!START@$M$@!END@ geographical areas\nOrdered attractiveness profiles\nRandom attractiveness profiles\nDiscussion\nAsymptotic analysis for a large number of competing languages in a single area\nStability matrices and their spectra\nGeneralities\nArray models\nRandom arrays\nOrdered arrays"
],
"type": "outline"
}
|
1908.07816
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Multi-Turn Emotionally Engaging Dialog Model
<<<Abstract>>>
Open-domain dialog systems (also known as chatbots) have increasingly drawn attention in natural language processing. Some of the recent work aims at incorporating affect information into sequence-to-sequence neural dialog modeling, making the response emotionally richer, while others use hand-crafted rules to determine the desired emotion response. However, they do not explicitly learn the subtle emotional interactions captured in human dialogs. In this paper, we propose a multi-turn dialog system aimed at learning and generating emotional responses that so far only humans know how to do. Compared with two baseline models, offline experiments show that our method performs the best in perplexity scores. Further human evaluations confirm that our chatbot can keep track of the conversation context and generate emotionally more appropriate responses while performing equally well on grammar.
<<</Abstract>>>
<<<Introduction>>>
Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.
Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.
In this paper, we propose an end-to-end data driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because only in such cases is the emotion appropriateness most necessary. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.
Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence, is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate. It is the first time such an approach is designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.
The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do.
<<</Introduction>>>
<<<Related Work>>>
Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging on machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various improving work on the quality of the responses, especially the emotional aspects of the conversations.
The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.
Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work. For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans.
<<</Related Work>>>
<<<Model>>>
In this paper, we consider the problem of generating response $\mathbf {y}$ given a context $\mathbf {X}$ consisting of multiple previous utterances by estimating the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ from a data set $\mathcal {D}=\lbrace (\mathbf {X}^{(i)},\mathbf {y}^{(i)})\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here
is a sequence of $m_i$ utterances, and
is a sequence of $n_{ij}$ words. Similarly,
is the response with $T_i$ words.
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
<<<Hierarchical Attention>>>
The hierarchical attention structure involves two encoders to produce the dialog context vector $\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\mathbf {x}_j$ in $\mathbf {X}$ ($j=1,2,\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\mathbf {h}^\mathrm {f}_{jk}$ and the backward hidden state $\mathbf {h}^\mathrm {b}_{jk}$. The final hidden state $\mathbf {h}_{jk}$ is then obtained by concatenating the two,
The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\mathbf {x}_j$ is a linear combination of $\mathbf {h}_{jk}$, for $k=1,2,\dots ,n_j$,
Here $\alpha _{jk}^t$ is the word-level attention score placed on $\mathbf {h}_{jk}$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\mathbf {\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\mathbf {v}_a$, $\mathbf {U}_a$, $\mathbf {V}_a$ and $\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\mathbf {\ell }_{j}^t$, for $j=1,2,\dots ,m$,
Here $\beta _{j}^t$ is the utterance-level attention score placed on $\mathbf {\ell }_{j}^t$, and can be calculated as
where $\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$ are utterance-level attention parameters.
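A compact sketch of the word-level attention step is given below. The displayed equations are not reproduced above, so the usual additive (Bahdanau-style) score $\mathbf {v}_a^{\top }\tanh (\mathbf {U}_a\mathbf {s}_{t-1}+\mathbf {V}_a\mathbf {\ell }_{j+1}^t+\mathbf {W}_a\mathbf {h}_{jk})$ is assumed, and all dimensions are illustrative:

```python
import torch

def word_level_attention(H_j, s_prev, l_next, Ua, Va, Wa, va):
    """Utterance summary via additive attention over word-level states.

    H_j:    (n_j, d) word-level encoder states of utterance x_j
    s_prev: (d,)     previous decoder hidden state s_{t-1}
    l_next: (d,)     previous utterance-level encoder hidden state l_{j+1}^t
    """
    scores = torch.tanh(H_j @ Wa.T + s_prev @ Ua.T + l_next @ Va.T) @ va  # (n_j,)
    alpha = torch.softmax(scores, dim=0)                                   # word-level weights
    return alpha @ H_j                                                     # summary of x_j, shape (d,)

d, n_j = 8, 5
H_j, s_prev, l_next = torch.randn(n_j, d), torch.randn(d), torch.randn(d)
Ua, Va, Wa, va = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d), torch.randn(d)
print(word_level_attention(H_j, s_prev, l_next, Ua, Va, Wa, va).shape)  # torch.Size([8])
```

The utterance-level attention that produces $\mathbf {c}_t$ follows an analogous additive form with its own parameters $\mathbf {v}_b$, $\mathbf {U}_b$ and $\mathbf {W}_b$.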
<<</Hierarchical Attention>>>
<<<Emotion Encoder>>>
In order to capture the emotion information carried in the context $\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\mathbf {x}_j)$ is set to 1; otherwise, $\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\mathbf {x}_j)$ set to 1. For example, assuming $\mathbf {x}_j=$ “he is worried about me”, then
since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with sigmoid activation function on top of ${1}(\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space,
where $\mathbf {W}_e$ and $\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\mathbf {X}$ is then modeled by a unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\mathbf {a}_j$ at each step. The final emotion context vector $\mathbf {e}$ is obtained as the last hidden state of this emotion encoding RNN.
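The emotion encoder can be sketched as follows. LIWC itself is a proprietary lexicon, so a toy keyword lookup stands in for it here; the category word lists and the hidden size are illustrative assumptions, not the authors' settings:

```python
import torch
import torch.nn as nn

# Toy stand-in for the five LIWC emotion categories (LIWC itself is proprietary).
CATEGORIES = ["posemo", "negemo", "anx", "anger", "sad"]
LEXICON = {"worried": {"negemo", "anx"}, "happy": {"posemo"},
           "furious": {"negemo", "anger"}, "sad": {"negemo", "sad"}}

def indicator(utterance):
    """Six-dimensional indicator vector: five emotion categories plus 'neutral'."""
    v = torch.zeros(6)
    for word in utterance.lower().split():
        for cat in LEXICON.get(word, ()):
            v[CATEGORIES.index(cat)] = 1.0
    if v.sum() == 0:                       # no emotional word found -> neutral
        v[5] = 1.0
    return v

class EmotionEncoder(nn.Module):
    def __init__(self, emb_size=256):
        super().__init__()
        self.embed = nn.Linear(6, emb_size)                 # dense layer, sigmoid applied below
        self.rnn = nn.GRU(emb_size, emb_size, batch_first=True)

    def forward(self, utterances):                          # list of strings, first to last
        a = torch.sigmoid(self.embed(torch.stack([indicator(u) for u in utterances])))
        _, h_last = self.rnn(a.unsqueeze(0))                # GRU over the emotion flow
        return h_last.squeeze(0).squeeze(0)                 # emotion context vector e

e = EmotionEncoder()(["he is worried about me", "do not be sad"])
print(e.shape)  # torch.Size([256])
```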
<<</Emotion Encoder>>>
<<<Decoding>>>
The probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be written as
We model the probability distribution using an RNN language model along with the emotion context vector $\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\mathbf {s}_t$ is obtained by applying the GRU function,
where $\mathbf {w}_{y_{t-1}}$ is the word embedding of $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\mathbf {o}_t$ by concatenating $\mathbf {s}_t$ with the emotion context vector $\mathbf {e}$,
on which we apply a softmax layer to obtain a probability distribution over the vocabulary,
Each term in Equation (DISPLAY_FORM16) is then given by
We use the cross-entropy loss as our objective function
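A single decoding step can be sketched as follows. The displayed equations are not reproduced above, so the decoder input is assumed to be the previous word embedding concatenated with the dialog context vector $\mathbf {c}_t$, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One decoding step: GRU update, then a softmax over the vocabulary on [s_t ; e]."""
    def __init__(self, vocab_size, emb_size=256, hidden=256, emo_size=256):
        super().__init__()
        # Assumption: the GRU consumes the previous word embedding concatenated
        # with the dialog context vector c_t (the exact input is not shown above).
        self.cell = nn.GRUCell(emb_size + hidden, hidden)
        self.out = nn.Linear(hidden + emo_size, vocab_size)

    def forward(self, w_prev, c_t, s_prev, e):
        s_t = self.cell(torch.cat([w_prev, c_t], dim=-1), s_prev)
        o_t = torch.cat([s_t, e], dim=-1)           # concatenate with the emotion context vector
        return s_t, torch.log_softmax(self.out(o_t), dim=-1)

step = DecoderStep(vocab_size=20000)
w_prev, c_t, s_prev, e = (torch.randn(1, 256) for _ in range(4))
s_t, log_probs = step(w_prev, c_t, s_prev, e)
loss = nn.NLLLoss()(log_probs, torch.tensor([42]))  # cross-entropy on the target token
print(s_t.shape, log_probs.shape, float(loss))
```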
<<</Decoding>>>
<<</Model>>>
<<<Evaluation>>>
We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testing.
<<<Datasets>>>
We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.
Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.
DailyDialog. The dataset is developed by crawling raw data from websites used for language learners to learn English dialogs in daily life. It contains 13,118 dialogs in total.
We summarize some of the basic information regarding the two datasets in Table TABREF25.
In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but more daily-based. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with number of turns no more than six, to serve as the training/validation examples. Specifically, for each dialog $\mathbf {D}=(\mathbf {x}_1,\mathbf {x}_2,\dots ,\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\mathbf {U}_i=(\mathbf {x}_{s_i},\dots ,\mathbf {x}_i)$ and $\mathbf {y}_i=\mathbf {x}_{i+1}$, for $i=1,2,\dots ,M-1$, where $s_i=\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give more detailed description of how we create the test set in Section SECREF31.
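The construction of context-response pairs described above can be written down compactly; the sketch below follows the description directly (a window of at most five history utterances, $s_i=\max (1,i-4)$, with the response $\mathbf {y}_i=\mathbf {x}_{i+1}$), while the frequency capping is only indicated by a comment:

```python
def make_pairs(dialog, max_len=30):
    """dialog: list of utterances (each a list of tokens), in order x_1 ... x_M."""
    pairs = []
    for i in range(1, len(dialog)):          # i = 1 .. M-1 (1-based index of the last context turn)
        s_i = max(1, i - 4)                  # at most five history utterances
        context = dialog[s_i - 1:i]          # (x_{s_i}, ..., x_i)
        response = dialog[i]                 # y_i = x_{i+1}
        if all(len(u) <= max_len for u in context + [response]):
            pairs.append((context, response))
    # Frequency capping of over-represented responses (threshold 10 for Cornell,
    # 5 for DailyDialog) would be applied on top of this and is omitted here.
    return pairs

dialog = [f"utterance {k}".split() for k in range(1, 8)]   # toy 7-turn dialog
print(len(make_pairs(dialog)))   # 6 context-response pairs
```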
<<</Datasets>>>
<<<Baselines and Implementation>>>
We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.
For all the models, the vocabulary consists of 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the begin of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:
We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.
We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and utterance-level 128. The output size of the emotion embedding layer is 256.
We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.
For prediction, we used beam search BIBREF24 with a beam width of 256.
<<</Baselines and Implementation>>>
<<<Evaluation Metrics>>>
The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work.
<<<Human evaluation setup>>>
To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we filtered out only those dialogs where more than a half of utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five negative dialogs with four turns, as if they were interacting with another human, according to each of the following topics: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).
For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral.
<<</Human evaluation setup>>>
<<</Evaluation Metrics>>>
<<<Results>>>
Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted a t-test on the perplexity scores obtained, and the results show significant improvements (with $p$-value $<0.05$).
Tables TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\kappa $ score for grammatical correctness because, when agreement is extremely high, Fleiss' $\kappa $ becomes very sensitive to prevalence BIBREF29. Conversely, we did not use Finn's $r$ score for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted a Friedman test BIBREF31 on the human evaluation results, showing that the improvements of MEED are significant (with $p$-value $<0.01$).
<<<Case Study>>>
We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history.
<<</Case Study>>>
<<</Results>>>
<<</Evaluation>>>
<<<Conclusion and Future Work>>>
According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect talking to computers as they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.
As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nModel\nHierarchical Attention\nEmotion Encoder\nDecoding\nEvaluation\nDatasets\nBaselines and Implementation\nEvaluation Metrics\nHuman evaluation setup\nResults\nCase Study\nConclusion and Future Work"
],
"type": "outline"
}
|
1911.09483
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning
<<<Abstract>>>
In sequence to sequence learning, the self-attention mechanism proves to be highly effective, and achieves significant improvements in many tasks. However, the self-attention mechanism is not without its own flaws. Although self-attention can model extremely long dependencies, the attention in deep layers tends to overconcentrate on a single token, leading to insufficient use of local information and difficulty in representing long sequences. In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures. To this end, we propose the Parallel MUlti-Scale attEntion (MUSE) and MUSE-simple. MUSE-simple contains the basic idea of parallel multi-scale sequence representation learning, and it encodes the sequence in parallel, in terms of different scales, with the help of self-attention and pointwise transformation. MUSE builds on MUSE-simple and explores combining convolution and self-attention for learning sequence representations from more scales. We focus on machine translation, and the proposed approach achieves substantial performance improvements over Transformer, especially on long sequences. More importantly, we find that although conceptually simple, its success in practice requires intricate considerations, and the multi-scale attention must build on a unified semantic space. Under the common setting, the proposed model achieves substantial gains and outperforms all previous models on three main machine translation tasks. In addition, MUSE has potential for accelerating inference due to its parallelism. Code will be available at this https URL
<<</Abstract>>>
<<<Introduction>>>
In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.
However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of Transformer drops as the source sentence length increases, especially for long sequences. The reason is that the attention can be over-concentrated and dispersed, as shown in Figure 1 (b), so that only a small number of tokens are well represented by attention. It may work fine for shorter sequences, but for longer sequences it causes insufficient representation of information and makes it difficult for the model to comprehend the source information in its entirety. In recent work, local attention that constrains the attention to focus on only part of the sequences BIBREF7, BIBREF8 is used to address this problem. However, it deprives self-attention of the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.
To build a module with the inductive biases of both local and global context modelling in sequence to sequence learning, we hybridise self-attention with convolution and present the parallel multi-scale attention mechanism called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-wise separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information, while the self-attention focuses on capturing global dependencies. Moreover, this parallel structure is highly extensible: new transformations can be easily introduced as new parallel branches, and it is also favourable to parallel computation.
The main contributions are summarized as follows:
We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. The proposed method tries to address this problem and achieves much better performance on generating long sequences.
We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.
MUSE outperforms all previous models trained with the same data and a comparable model size, achieving state-of-the-art BLEU scores on three main machine translation tasks.
MUSE-simple introduces parallel multi-scale representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs.
<<</Introduction>>>
<<<MUSE: Parallel Multi-Scale Attention>>>
Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \cdots , x_n)$ as input where $n$ is the length of input. It transfers word embeddings to a sequence of hidden representation ${z} = (z_1, \cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \cdots , y_m)$ token by token.
The encoder is a stack of $N$ MUSE modules. Residual mechanism and layer normalization are used to connect two adjacent layers. The decoder is similar to encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. Residual mechanism and layer normalization are also used to connect two modules and two adjacent layers.
The key part in the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of $(i-1)$ layer as input and generates the output representation in a fusion way:
where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following describes each part in detail. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation in the same way as the MUSE model except that it does not include the convolution operation:
<<<Attention Mechanism for Global Context Representation>>>
Self-attention is responsible for learning representations of global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:
where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\sigma $ is the dot-product attention over key, query, and value:
Note that we conduct a projecting operation over the value in our self-attention mechanism $V_1=VW^V$ here.
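A bare-bones, single-head version of this attention is sketched below; the scaling by $\sqrt{d_k}$ and the single-head simplification follow the standard Transformer recipe and are assumptions, while the explicit value projection $V_1=VW^V$ mirrors the remark above:

```python
import torch

def single_head_attention(X, WQ, WK, WV, WO):
    """Dot-product self-attention with an explicit value projection V1 = V W^V."""
    Q, K, V1 = X @ WQ, X @ WK, X @ WV        # (n, d_k) each
    scores = Q @ K.transpose(-2, -1) / K.shape[-1] ** 0.5
    A = torch.softmax(scores, dim=-1)        # (n, n) attention weights
    return (A @ V1) @ WO                     # back to the model dimension

n, d, dk = 6, 16, 8
X = torch.randn(n, d)
WQ, WK, WV = torch.randn(d, dk), torch.randn(d, dk), torch.randn(d, dk)
WO = torch.randn(dk, d)
print(single_head_attention(X, WQ, WK, WV, WO).shape)   # torch.Size([6, 16])
```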
<<</Attention Mechanism for Global Context Representation>>>
<<<Convolution for Local Context Modeling>>>
We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation, because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. This matters because the original convolution operator is not separable, whereas DepthConv can share the same point-wise projecting transformation with the self-attention mechanism. We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.
Each convolution sub-module contains multiple cells with different kernel sizes. They are used for capturing different-range features. The output of the convolution cell with kernel size $k$ is:
where $W^{V}$ and $W^{out}$ are parameters, $W^{V}$ is a point-wise projecting transformation matrix. The $Depth\_conv$ refers to depth convolution in the work of BIBREF10. For an input sequence $X$, the output $O$ is computed as:
where $d$ is the hidden size. Note that we conduct the same projecting operation over the input in our convolution mechanism $V_2=XW^V$ here with that in self-attention mechanism.
Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism, $V_1=VW^V$, and that in the convolution mechanism, $V_2=XW^V$, are shared, because the shared projection maps the input features into the same hidden space. If we instead conduct two independent projections here, $V_1=VW_1^V$ and $V_2=XW^V_2$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call this separate projection. We will analyze the necessity of applying the shared projection here instead of the separate projection.
Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weight of different convolution cells.
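A simplified sketch of this convolution branch is given below. The shared point-wise projection $V=XW^V$ is fed to both branches, and a learned soft gate mixes two depth-wise convolution cells with different kernel sizes; plain depth-wise convolutions stand in for dynamic convolution (which uses position-dependent, softmax-normalised kernels), so the details are assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Local-context branch sharing the value projection V = X W^V with self-attention.

    Dynamic convolution is replaced here by plain depth-wise convolutions with two
    kernel sizes and a learned gate; this is a simplified sketch, not the exact model.
    """
    def __init__(self, d, kernels=(3, 15)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(d, d, k, padding=k // 2, groups=d) for k in kernels)  # depth-wise cells
        self.gate = nn.Linear(d, len(kernels))
        self.out = nn.Linear(d, d)                                          # W^out

    def forward(self, V):                    # V = X W^V, shape (batch, n, d), shared with attention
        g = torch.softmax(self.gate(V), dim=-1)                 # (batch, n, n_kernels)
        Vt = V.transpose(1, 2)                                   # Conv1d expects (batch, d, n)
        mixed = sum(g[..., i:i + 1] * c(Vt).transpose(1, 2)      # gate-weighted kernel outputs
                    for i, c in enumerate(self.convs))
        return self.out(mixed)

d = 32
X = torch.randn(2, 10, d)
W_V = nn.Linear(d, d, bias=False)      # the shared point-wise projection
V = W_V(X)                             # fed to both the attention and the convolution branch
print(ConvBranch(d)(V).shape)          # torch.Size([2, 10, 32])
```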
<<</Convolution for Local Context Modeling>>>
<<<Point-wise Feed-forward Network for Capturing Token Representations>>>
To learn token-level representations, MUSE concatenates a self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor.
where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters.
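For completeness, the position-wise network can be sketched as the usual two-layer transformation; the ReLU activation and the dimensions (those of MUSE-base) are assumed, since the displayed equation is not reproduced above:

```python
import torch.nn as nn

def pointwise_ffn(d_model=384, d_hidden=768):
    # Applied identically at every position: a token-level feature extractor.
    return nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
```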
<<</Point-wise Feed-forward Network for Capturing Token Representations>>>
<<</MUSE: Parallel Multi-Scale Attention>>>
<<<Experiment>>>
We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis.
<<<Datasets>>>
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.
IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$.
<<</Datasets>>>
<<<Experimental Settings>>>
<<<Model>>>
For fair comparisons, we only compare models reported with a comparable model size and the same training data. We do not compare with BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with parameter sizes comparable to Transformer-base and Transformer-large. We adopt multi-head attention BIBREF0 as the implementation of self-attention in the MUSE module. The number of attention heads is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built with MUSE-simple in the same way to the comparison.
MUSE consists of 12 residual blocks for encoder and 12 residual blocks for decoder, the dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of non linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large.
The MUSE-large is trained on 4 Titan RTX GPUs while the MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use a kernel size of 7 for all layers. In the case of dynamically selected kernels, the kernel size is 3 for small kernels and 15 for large kernels for all layers.
<<</Model>>>
<<<Training>>>
The training hyper-parameters are tuned on the validation set.
MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and train the model for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the optimizer setup from BIBREF10 and use the cosine learning rate schedule with 10000 warmup steps. The max learning rate is set to $0.001$ on En-De translation and ${0.0007}$ on En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the number of averaged checkpoints for the En-De translation task. For En-Fr translation, we do not average checkpoints but use the final single checkpoint.
MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT 2015 En-Vi translation. Following BIBREF0, we use the Adam optimizer with a learning rate of $0.001$. We use a warmup mechanism with inverse learning rate decay and $4K$ warmup updates. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$. The parameters are updated every 4 steps. The dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$. The parameters are also updated every 4 steps. The dropout rate is set to $0.3$. We save a checkpoint every epoch and average the last 10 checkpoints for inference.
<<</Training>>>
<<<Evaluation>>>
During inference, we adopt beam search with a beam size of 5 for the De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, and to 1 for the two small datasets following the default setting of BIBREF14. Otherwise, we do not tune the beam width and length penalty but use the settings reported in BIBREF0. The BLEU metric is adopted to evaluate model performance.
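For illustration, a common way to apply such a length penalty during beam search is to divide the accumulated log-probability by the hypothesis length raised to the penalty. The sketch below uses that formulation as an assumption; it is not necessarily the exact scoring rule of the toolkit used in the paper.

```python
def beam_score(sum_log_prob, length, length_penalty):
    """Length-normalized beam score: sum of token log-probs divided by length**penalty.
    This normalization scheme is an assumption for illustration."""
    return sum_log_prob / (length ** length_penalty)

# With penalty 0.8 (En-Fr setting) long hypotheses are penalized less than with penalty 1.0.
print(beam_score(-12.0, length=20, length_penalty=0.8),
      beam_score(-12.0, length=20, length_penalty=1.0))
```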
<<</Evaluation>>>
<<</Experimental Settings>>>
<<<Results>>>
As shown in Table TABREF24, MUSE outperforms all previous models on En-De and En-Fr translation, including the state-of-the-art models based on stand-alone self-attention BIBREF0, BIBREF13 and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that neither self-attention nor convolution alone is enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over both on En-De and En-Fr.
Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation.
Relative position or local attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them.
MUSE also scales to small models and small datasets: as depicted in Table TABREF25, MUSE-base pushes the state of the art from 35.7 to 36.3 on the IWSLT De-En translation dataset.
Table TABREF24 and Table TABREF25 show that MUSE-simple, which contains the basic idea of parallel multi-scale attention, achieves state-of-the-art performance on three major machine translation datasets.
<<</Results>>>
<<<How do we propose effective parallel multi-scale attention?>>>
In this subsection we compare MUSE and its variants on IWSLT 2014 De-En translation to answer this question.
Does concatenating self-attention with convolution necessarily improve the model? To bridge the gap between the point-wise transformation, which learns token-level representations, and self-attention, which learns representations of the global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations for sequence to sequence tasks. As shown in the first line of both the second and third groups of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant, dynamic convolution, as the implementation) is even worse than combining standard convolution.
Why do we choose DepthConv, and how important is sharing the projection between DepthConv and self-attention? We conjecture that convolution and self-attention both learn contextual sequence representations, so they should share the point-wise transformation and perform their contextual transformations in the same hidden space. We first project the input to a hidden representation and then perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: the shared projection gains 1.4 BLEU points over separate projections, and brings an improvement of 0.5 BLEU points over MUSE-simple (without DepthConv).
How large should the kernel be? Comparative experiments show that too large a kernel harms performance for both DepthConv and convolution. Since there are also self-attention and point-wise transformations, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 does not work. Thus, we propose to use dynamically selected kernel sizes and let the learned network decide the kernel size for each layer.
<<</How do we propose effective parallel multi-scale attention?>>>
<<<Further Analysis>>>
<<<Parallel multi-scale attention brings time efficiency on GPUs>>>
The underlying parallel structure (compared to the sequential structure in each block of Transformer) allows MUSE to be efficiently computed on GPUs. For example, we can combine small matrices into large matrices; while this does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation. Concretely, for each MUSE module, we first concatenate $W^Q,W^K,W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform the transformations, i.e. self-attention, depth-wise separable convolution, and the non-linear transformation, in parallel to learn multi-scale representations in the hidden layer. $W^O,W_2,W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence to sequence architecture can be implemented similarly.
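The sketch below illustrates the fused-projection trick with NumPy. The dimensions are arbitrary and the code is our own toy check, not the authors' implementation: it verifies that concatenating the projection matrices and performing one large matrix multiplication reproduces the separate projections.

```python
import numpy as np

d_model, d_ff, seq_len = 384, 768, 10
rng = np.random.default_rng(0)

x = rng.standard_normal((seq_len, d_model))
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
W_1 = rng.standard_normal((d_model, d_ff))  # point-wise feed-forward input projection

# Separate projections: four small matrix multiplications.
q, k, v, h = x @ W_q, x @ W_k, x @ W_v, x @ W_1

# Fused projection: one large multiplication with W_enc = [W_q | W_k | W_v | W_1].
W_enc = np.concatenate([W_q, W_k, W_v, W_1], axis=1)
q2, k2, v2, h2 = np.split(x @ W_enc, [d_model, 2 * d_model, 3 * d_model], axis=1)

assert all(np.allclose(a, b) for a, b in [(q, q2), (k, k2), (v, v2), (h, h2)])
```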
In Table TABREF31, we conduct comparisons to show the speed gains with the aforementioned implementation, and the batch size is set to one sample per batch to simulate online inference environment. Under the settings, where the numbers of parameters are similar for MUSE and Transformer, about 31% increase in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and Transformer with 6 base blocks. The hidden size is set to 512.
Parallel multi-scale attention generates much better long sequences As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and is especially adept at generating long sequences: for sequences longer than 100 tokens, MUSE is two times better.
Lower layers prefer local context and higher layers prefer more contextual representations MUSE contains multiple dynamic convolution cells whose streams are fused by a gating mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weights of the dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. This demonstrates that lower layers prefer local features while higher layers prefer global features, which corresponds to the finding in BIBREF26.
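As a simplified illustration of such gating, the sketch below fuses the outputs of two convolution cells with softmax-normalized scalar gates. It is a hypothetical toy version; the exact gating function and parameterization in MUSE are not reproduced here.

```python
import numpy as np

def gated_fusion(cell_outputs, gate_logits):
    """Fuse per-kernel convolution cell outputs with learned scalar gates.

    cell_outputs: list of (seq_len, d_model) arrays, one per kernel size.
    gate_logits:  one learned scalar per cell; softmax turns them into weights.
    """
    weights = np.exp(gate_logits - gate_logits.max())
    weights = weights / weights.sum()
    fused = sum(w * out for w, out in zip(weights, cell_outputs))
    return fused, weights

small_kernel_out = np.ones((5, 8))       # stand-in for a kernel-size-3 cell
large_kernel_out = 2 * np.ones((5, 8))   # stand-in for a kernel-size-15 cell
_, weights = gated_fusion([small_kernel_out, large_kernel_out],
                          gate_logits=np.array([0.2, 1.0]))
print(weights)  # inspecting these per layer yields the analysis in Figure FIGREF32
```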
MUSE not only gains BLEU scores, but also generates more reasonable sentences and increases translation quality. We conduct a case study on the De-En dataset; the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many words correctly according to the source sentence, the translated sentence is not fluent at all. This indicates that the Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE while the Transformer misses the word “why” and fails to translate it.
<<</Parallel multi-scale attention brings time efficiency on GPUs>>>
<<</Further Analysis>>>
<<</Experiment>>>
<<<Related Work>>>
Sequence to sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence to sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models either are built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNN) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling without requiring an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies, respectively.
There are several studies on combining self-attention and convolution. However, they do not surpass both convolutional and self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them in computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for sequence to sequence learning tasks. Moreover, state-of-the-art models on question answering tasks still rely on self-attention and do not adopt the ideas in QANet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU points on En-Fr translation. It seems that learning global and local context by stacking self-attention and convolution layers does not beat either self-attention or convolution models alone. In contrast, the proposed parallel multi-scale attention outperforms previous convolution- or self-attention-based models on the main translation tasks, showing its effectiveness for sequence to sequence learning.
<<</Related Work>>>
<<<Conclusion and Future work>>>
Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights, especially for long sequences, resulting from insufficient local information.
To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning, and MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local and token-level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success and is essential for our multi-scale learning.
Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.
For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of the shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks, including image and speech.
<<</Conclusion and Future work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMUSE: Parallel Multi-Scale Attention\nAttention Mechanism for Global Context Representation\nConvolution for Local Context Modeling\nPoint-wise Feed-forward Network for Capturing Token Representations\nExperiment\nDatasets\nExperimental Settings\nModel\nTraining\nEvaluation\nResults\nHow do we propose effective parallel multi-scale attention?\nFurther Analysis\nParallel multi-scale attention brings time efficiency on GPUs\nRelated Work\nConclusion and Future work"
],
"type": "outline"
}
|
1909.05358
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset
<<<Abstract>>>
A significant barrier to progress in data-driven approaches to building dialog systems is the lack of high quality, goal-oriented conversational data. To help satisfy this elementary requirement, we introduce the initial release of the Taskmaster-1 dataset which includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task while the second is "self-dialog" in which crowdsourced workers write the entire dialog themselves. We do not restrict the workers to detailed scripts or to a small knowledge base and hence we observe that our dataset contains more realistic and diverse conversations in comparison to existing datasets. We offer several baseline models including state of the art neural seq2seq architectures with benchmark performance as well as qualitative human evaluations. Dialogs are labeled with API calls and arguments, a simple and cost effective approach which avoids the requirement of complex annotation schema. The layer of abstraction between the dialog model and the service provider API allows for a given model to interact with multiple services that provide similar functionally. Finally, the dataset will evoke interest in written vs. spoken language, discourse patterns, error handling and other linguistic phenomena related to dialog system research, development and design.
<<</Abstract>>>
<<<Introduction>>>
Voice-based “personal assistants" such as Apple's SIRI, Microsoft's Cortana, Amazon Alexa, and the Google Assistant have finally entered the mainstream. This development is generally attributed to major breakthroughs in speech recognition and text-to-speech (TTS) technologies aided by recent progress in deep learning BIBREF0, exponential gains in compute power BIBREF1, BIBREF2, and the ubiquity of powerful mobile devices. The accuracy of machine learned speech recognizers BIBREF3 and speech synthesizers BIBREF4 are good enough to be deployed in real-world products and this progress has been driven by publicly available labeled datasets. However, conspicuously absent from this list is equal progress in machine learned conversational natural language understanding (NLU) and generation (NLG). The NLU and NLG components of dialog systems starting from the early research work BIBREF5 to the present commercially available personal assistants largely rely on rule-based systems. The NLU and NLG systems are often carefully programmed for very narrow and specific cases BIBREF6, BIBREF7. General understanding of natural spoken behaviors across multiple dialog turns, even in single task-oriented situations, is by most accounts still a long way off. In this way, most of these products are very much hand crafted, with inherent constraints on what users can say, how the system responds and the order in which the various subtasks can be completed. They are high precision but relatively low coverage. Not only are such systems unscalable, but they lack the flexibility to engage in truly natural conversation.
Yet none of this is surprising. Natural language is heavily context dependent and often ambiguous, especially in multi-turn conversations across multiple topics. It is full of subtle discourse cues and pragmatic signals whose patterns have yet to be thoroughly understood. Enabling an automated system to hold a coherent task-based conversation with a human remains one of computer science's most complex and intriguing unsolved problems BIBREF5. In contrast to more traditional NLP efforts, interest in statistical approaches to dialog understanding and generation aided by machine learning has grown considerably in the last couple of years BIBREF8, BIBREF9, BIBREF10. However, the dearth of high quality, goal-oriented dialog data is considered a major hindrance to more significant progress in this area BIBREF9, BIBREF11.
To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing roles of both the user and assistant. We refer to this written dialog type as “self-dialogs". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost effective and, in addition, the same model can be used for multiple service providers.
Taskmaster-1 has richer and more diverse language than the current popular benchmark in task-oriented dialog, MultiWOZ BIBREF13. Table TABREF2 shows that Taskmaster-1 has more unique words and is more difficult for language models to fit. We also find that Taskmaster-1 is more realistic than MultiWOZ. Specifically, the two-person dialogs in Taskmaster-1 involve more real-world entities than seen in MultiWOZ since we do not restrict conversations to a small knowledge base. Beyond the corpus and the methodologies used to create it, we present several baseline models, including state-of-the-art neural seq2seq architectures, together with perplexity and BLEU scores. We also provide qualitative human performance evaluations for these models and find that automatic evaluation metrics correlate well with human judgments. We will publicly release our corpus containing conversations, API call and argument annotations, and also the human judgments.
<<</Introduction>>>
<<<Related work>>>
<<<Human-machine vs. human-human dialog>>>
BIBREF14 discuss the major features and differences among the existing offerings in an exhaustive and detailed survey of available corpora for data driven learning of dialog systems. One important distinction covered is that of human-human vs. human-machine dialog data, each having its advantages and disadvantages. Many of the existing task-based datasets have been generated from deployed dialog systems such as the Let’s Go Bus Information System BIBREF15 and the various Dialog State Tracking Challenges (DSTCs) BIBREF16. However, it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations BIBREF17. Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development BIBREF13. However, learning from purely human-human based corpora presents challenges of its own. In particular, human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system BIBREF17, BIBREF14.
<<</Human-machine vs. human-human dialog>>>
<<<The Wizard of Oz (WOz) Approach and MultiWOZ>>>
The WOz framework, first introduced by BIBREF12 as a methodology for iterative design of natural language interfaces, presents a more effective approach to human-human dialog collection. In this setup, users are led to believe they are interacting with an automated assistant but in fact it is a human behind the scenes that controls the system responses. Given the human-level natural language understanding, users quickly realize they can comfortably and naturally express their intent rather than having to modify behaviors as is normally the case with a fully automated assistant. At the same time, the machine-oriented context of the interaction, i.e. the use of TTS and slower turn taking cadence, prevents the conversation from becoming fully fledged, overly complex human discourse. This creates an idealized spoken environment, revealing how users would openly and candidly express themselves with an automated assistant that provided superior natural language understanding.
Perhaps the most relevant work to consider here is the recently released MultiWOZ dataset BIBREF13, since it is similar in size, content and collection methodologies. MultiWOZ has roughly 10,000 dialogs which feature several domains and topics. The dialogs are annotated with both dialog states and dialog acts. MultiWOZ is an entirely written corpus and uses crowdsourced workers for both assistant and user roles. In contrast, Taskmaster-1 has roughly 13,000 dialogs spanning six domains and annotated with API arguments. The two-person spoken dialogs in Taskmaster-1 use crowdsourcing for the user role but trained agents for the assistant role. The assistant's speech is played to the user via TTS. The remaining 7,708 conversations in Taskmaster-1 are self-dialogs, in which crowdsourced workers write the entire conversation themselves. As BIBREF18, BIBREF19 show, self dialogs are surprisingly rich in content.
<<</The Wizard of Oz (WOz) Approach and MultiWOZ>>>
<<</Related work>>>
<<<The Taskmaster Corpus>>>
<<<Overview>>>
There are several key attributes that make Taskmaster-1 both unique and effective for data-driven approaches to building dialog systems and for other research.
Spoken and written dialogs: While the spoken sources more closely reflect conversational language BIBREF20, written dialogs are significantly cheaper and easier to gather. This allows for a significant increase in the size of the corpus and in speaker diversity.
Goal-oriented dialogs: All dialogs are based on one of six tasks: ordering pizza, creating auto repair appointments, setting up rides for hire, ordering movie tickets, ordering coffee drinks and making restaurant reservations.
Two collection methods: The two-person dialogs and self-dialogs each have pros and cons, revealing interesting contrasts.
Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors.
API-based annotation: The dataset uses a simple annotation schema providing sufficient grounding for the data while making it easy for workers to apply labels consistently.
Size: The total of 13,215 dialogs in this corpus is on par with similar, recently released datasets such as MultiWOZ BIBREF13.
<<</Overview>>>
<<<Two-person, spoken dataset>>>
In order to replicate a two-participant, automated digital assistant experience, we built a WOz platform that pairs agents playing the digital assistant with crowdsourced workers playing the user in task-based conversational scenarios. An example dialog from this dataset is given in Figure FIGREF5.
<<<WOz platform and data pipeline>>>
While it is beyond the scope of this work to describe the entire system in detail, there are several platform features that help illustrate how the process works.
Modality: The agents playing the assistant type their input which is in turn played to the user via text-to-speech (TTS) while the crowdsourced workers playing the user speak aloud to the assistant using their laptop and microphone. We use WebRTC to establish the audio channel. This setup creates a digital assistant-like communication style.
Conversation and user quality control: Once the task is completed, the agents tag each conversation as either successful or problematic depending on whether the session had technical glitches or user behavioral issues. We are also then able to root out problematic users based on this logging.
Agent quality control: Agents are required to login to the system which allows us to monitor performance including the number and length of each session as well as their averages.
User queuing: When there are more users trying to connect to the system than available agents, a queuing mechanism indicates their place in line and connects them automatically once they move to the front of the queue.
Transcription: Once complete, the user's audio-only portion of the dialog is transcribed by a second set of workers and then merged with the assistant's typed input to create a full text version of the dialog. Finally, these conversations are checked for transcription errors and typos and then annotated, as described in Section SECREF48.
<<</WOz platform and data pipeline>>>
<<<Agents, workers and training>>>
Both agents and crowdsourced workers are given written instructions prior to the session. Examples of each are given in Figure FIGREF6 and Figure FIGREF23. The instructions continue to be displayed on screen to the crowdsourced workers while they interact with the assistant. Instructions are modified at times (for either participant or both) to ensure broader coverage of dialog scenarios that are likely to occur in actual user-assistant interactions. For example, in one case users were asked to change their mind after ordering their first item and in another agents were instructed to tell users that a given item was not available. Finally, in their instructions, crowdsourced workers playing the user are told they will be engaging in conversation with “a digital assistant”. However, it is plausible that some suspect human intervention due to the advanced level of natural language understanding from the assistant side.
Agents playing the assistant role were hired from a pool of dialog analysts and given two hours of training on the system interface as well as on how to handle specific scenarios such as uncooperative users and technical glitches. Uncooperative users typically involve those who either ignored agent input or who rushed through the conversation with short phrases. Technical issues involved dropped sessions (e.g. WebRTC connections failed) or cases in which the user could not hear the agent or vice-versa. In addition, weekly meetings were held with the agents to answer questions and gather feedback on their experiences. Agents typically work four hours per day with dialog types changing every hour. Crowdsourced workers playing the user are accessed using Amazon Mechanical Turk. Payment for a completed dialog session lasting roughly five to seven minutes was typically in the range of $\$1.00$ to $\$1.30$. Problematic users are detected either by the agent involved in the specific dialog or by post-session assessment and removed from future requests.
<<</Agents, workers and training>>>
<<</Two-person, spoken dataset>>>
<<<Self-dialogs (one-person written dataset)>>>
While the two-person approach to data collection creates a realistic scenario for robust, spoken dialog data collection, this technique is time consuming, complex and expensive, requiring considerable technical implementation as well as administrative procedures to train and manage agents and crowdsourced workers. In order to extend the Taskmaster dataset at minimal cost, we use an alternative self-dialog approach in which crowdsourced workers write the full dialogs themselves (i.e. interpreting the roles of both user and assistant).
<<<Task scenarios and instructions>>>
Targeting the same six tasks used for the two-person dialogs, we again engaged the Amazon Mechanical Turk worker pool to create self-dialogs, this time as a written exercise. In this case, users are asked to pretend they have a personal assistant who can help them take care of various tasks in real time. They are told to imagine a scenario in which they are speaking to their assistant on the phone while the assistant accesses the services for one of the given tasks. They then write down the entire conversation. Figure FIGREF34 shows a sample set of instructions.
<<</Task scenarios and instructions>>>
<<<Pros and cons of self-dialogs>>>
The self-dialog technique renders quality data and avoids some of the challenges seen with the two-person approach. To begin, since the same person is writing both sides of the conversation, we never see misunderstandings that lead to frustration as is sometimes experienced between interlocutors in the two-person approach. In addition, all the self-dialogs follow a reasonable path even when the user is constructing conversations that include understanding errors or other types of dialog glitches such as when a particular choice is not available. As it turns out, crowdsourced workers are quite effective at recreating various types of interactions, both error-free and those containing various forms of linguistic repair. The sample dialog in Figure FIGREF44 shows the result of a self-dialog exercise in which workers were told to write a conversation with various ticket availability issues that is ultimately unsuccessful.
Two more benefits of the self-dialog approach are its efficiency and cost effectiveness. We were able to gather thousands of dialogs in just days without transcription or trained agents, and spent roughly six times less per dialog. Despite these advantages, the self-dialog written technique cannot recreate the disfluencies and other more complex error patterns that occur in the two-person spoken dialogs which are important for model accuracy and coverage.
<<</Pros and cons of self-dialogs>>>
<<</Self-dialogs (one-person written dataset)>>>
<<<Annotation>>>
We chose a highly simplified annotation approach for Taskmaster-1 as compared to traditional, detailed strategies which require robust agreement among workers and usually include dialog state and slot information, among other possible labels. Instead we focus solely on API arguments for each type of conversation, meaning just the variables required to execute the transaction. For example, in dialogs about setting up UBER rides, we label the “to" and “from" locations along with the car type (UberX, XL, Pool, etc). For movie tickets, we label the movie name, theater, time, number of tickets, and sometimes screening type (e.g. 3D vs. standard). A complete list of labels is included with the corpus release.
As discussed in Section SECREF33, to encourage diversity, at times we explicitly ask users to change their mind in the middle of the conversation, and the agents to tell the user that the requested item is not available. This results in conversations having multiple instances of the same argument type. To handle this ambiguity, in addition to the labels mentioned above, the convention of either “accept” or “reject" was added to all labels used to execute the transaction, depending on whether or not that transaction was successful.
In Figure FIGREF49, both the number of people and the time variables in the assistant utterance would have the “.accept" label indicating the transaction was completed successfully. If the utterance describing a transaction does not include the variables by name, the whole sentence is marked with the dialog type. For example, a statement such as The table has been booked for you would be labeled as reservation.accept.
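For illustration, an annotation under this scheme could be represented roughly as below. The utterance, argument names and spans are invented examples meant only to show the accept/reject convention, not actual entries from the corpus.

```python
# Hypothetical annotation for a restaurant-reservation turn; names and values are
# invented, and the ".accept" suffix marks that the transaction succeeded.
annotation = {
    "utterance": "Great, your table for 4 at 7pm has been booked.",
    "labels": [
        {"argument": "num_people.accept", "span": "4"},
        {"argument": "time.accept", "span": "7pm"},
    ],
}
print(annotation["labels"][0]["argument"])
```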
<<</Annotation>>>
<<</The Taskmaster Corpus>>>
<<<Dataset Analysis>>>
<<<Self-dialogs vs MultiWOZ>>>
We quantitatively compare our self-dialogs (Section SECREF45) with the MultiWOZ dataset in Table TABREF2. Compared to MultiWOZ, we do not ask the users and assistants to stick to detailed scripts and do not restrict them to have conversations surrounding a small knowledge base. Table TABREF2 shows that our dataset has more unique words and almost twice as many utterances per dialog as the MultiWOZ corpus. Moreover, when trained with the Transformer BIBREF21 model, we observe significantly higher perplexities and lower BLEU scores for our dataset compared to MultiWOZ, suggesting that our conversations are more difficult to model. Finally, Table TABREF2 also shows that our dataset contains close to 10 times more real-world named entities than MultiWOZ and thus could potentially serve as a realistic baseline when designing goal-oriented dialog systems. MultiWOZ has only 1338 unique named entities and only 4510 unique values (including date, time etc.) in its dataset.
<<</Self-dialogs vs MultiWOZ>>>
<<<Self-dialogs vs Two-person>>>
In this section, we quantitatively compare 5k conversations each of self-dialogs (Section SECREF45) and two-person dialogs (Section SECREF31). From Table TABREF50, we find that self-dialogs exhibit almost three times higher perplexity compared to the two-person conversations, suggesting that self-dialogs are more diverse and contain more non-conventional conversational flows, which is in line with the observations in Section SECREF47. While the number of unique words is higher in the case of self-dialogs, conversations are longer in the two-person setting. We also report metrics from training a single model on both datasets together.
<<</Self-dialogs vs Two-person>>>
<<<Baseline Experiments: Response Generation>>>
We evaluate various seq2seq architectures BIBREF22 on our self-dialog corpus using both automatic evaluation metrics and human judgments. Following the recent line of work on generative dialog systems BIBREF23, we treat the problem of response generation given the dialog history as a conditional language modeling problem. Specifically, we want to learn a conditional probability distribution $P_{\theta }(U_{t}|U_{1:t-1})$ where $U_{t}$ is the next response given dialog history $U_{1:t-1}$. Each utterance $U_i$ itself is comprised of a sequence of words $w_{i_1}, w_{i_2} \ldots w_{i_k}$. The overall conditional probability is factorized autoregressively as $P_{\theta }(U_{t}|U_{1:t-1}) = \prod _{j=1}^{k} P_{\theta }(w_{t_j} | w_{t_1}, \ldots , w_{t_{j-1}}, U_{1:t-1})$.
$P_{\theta }$, in this work, is parameterized by a recurrent, convolution or Transformer-based seq2seq model.
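A minimal sketch of this objective is shown below. It is our own illustration rather than the paper's implementation: `model.next_token_probs` is a hypothetical interface returning a token-to-probability mapping, and tokenization is assumed to be done elsewhere.

```python
import math

def response_log_prob(model, history_tokens, response_tokens):
    """Sum of log P(w_j | dialog history, w_1..w_{j-1}) over the response tokens."""
    context = list(history_tokens)
    log_prob = 0.0
    for token in response_tokens:
        probs = model.next_token_probs(context)        # hypothetical model interface
        log_prob += math.log(probs.get(token, 1e-12))  # floor avoids log(0)
        context.append(token)
    return log_prob  # training minimizes the negative of this, averaged over tokens
```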
n-gram: We consider 3-gram and 4-gram conditional language model baselines with interpolation. We use random grid search to find the best coefficients for the interpolated model.
Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.
LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.
Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with ADAM optimizer ($\beta _{1} = 0.85$, $\beta _{2} = 0.997$) and dropout probability set to 0.2.
GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters.
We evaluate all the models with perplexity and BLEU scores (Table TABREF55). Additionally, we perform two kinds of human evaluation, ranking and rating (Likert scale), for the top-3 performing models: Convolution, LSTM-attention and Transformer. For the ranking task, we randomly show 500 partial dialogs and the generated responses of the top-3 models from the test set to three different crowdsourced workers and ask them to rank the responses based on their relevance to the dialog history. For the rating task, we show the model responses individually to three different crowdsourced workers and ask them to rate the responses on a 1-5 Likert scale based on their appropriateness to the dialog history. From Table TABREF56, we see that inter-annotator reliability scores (Krippendorff's alpha) are higher for the ranking task compared to the rating task. From Table TABREF55, we see that the Transformer is the best performing model on automatic evaluation metrics. It is interesting to note that there is a strong correlation between BLEU score and human ranking judgments.
<<</Baseline Experiments: Response Generation>>>
<<<Baseline Experiments: Argument Prediction>>>
Next, we discuss a set of baseline experiments for the task of argument prediction. API arguments are annotated as spans in the dialog (Section SECREF48). We formulate this problem as mapping text conversation to a sequence of output arguments. Apart from the seq2seq Transformer baseline, we consider an additional model - an enhanced Transformer seq2seq model where the decoder can choose to copy from the input or generate from the vocabulary BIBREF31, BIBREF32. Since all the API arguments are input spans, the copy model having the correct inductive bias achieves the best performance.
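A generic sketch of such a copy mechanism is given below. It mixes the decoder's vocabulary distribution with attention over the source tokens; the mixing weight `p_gen` and the interface are assumptions for illustration and do not reproduce the exact model cited above.

```python
import numpy as np

def copy_generate_distribution(p_gen, vocab_probs, attention, src_token_ids):
    """Mix generation and copying: with probability p_gen generate from the
    vocabulary distribution, otherwise copy a source token weighted by its
    attention score. A generic pointer-style sketch, not the paper's exact model."""
    out = p_gen * np.asarray(vocab_probs, dtype=float).copy()
    for att, tok in zip(attention, src_token_ids):
        out[tok] += (1.0 - p_gen) * att
    return out

dist = copy_generate_distribution(0.3, vocab_probs=np.full(10, 0.1),
                                  attention=[0.7, 0.3], src_token_ids=[2, 5])
print(dist.sum())  # still a valid probability distribution (sums to 1)
```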
<<</Baseline Experiments: Argument Prediction>>>
<<</Dataset Analysis>>>
<<<Conclusion>>>
To address the lack of quality corpora for data-driven dialog system research and development, this paper introduces Taskmaster-1, a dataset that provides richer and more diverse language as compared to current benchmarks since it is based on unrestricted, task-oriented conversations involving more real-word entities. In addition, we present two data collection methodologies, both spoken and written, that ensure both speaker diversity and conversational accuracy. Our straightforward, API-oriented annotation technique is much easier for annotators to learn and simpler to apply. We give several baseline models including state-of-the-art neural seq2seq architectures, provide qualitative human performance evaluations for these models, and find that automatic evaluation metrics correlate well with human judgments.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nHuman-machine vs. human-human dialog\nThe Wizard of Oz (WOz) Approach and MultiWOZ\nThe Taskmaster Corpus\nOverview\nTwo-person, spoken dataset\nWOz platform and data pipeline\nAgents, workers and training\nSelf-dialogs (one-person written dataset)\nTask scenarios and instructions\nPros and cons of self-dialogs\nAnnotation\nDataset Analysis\nSelf-dialogs vs MultiWOZ\nSelf-dialogs vs Two-person\nBaseline Experiments: Response Generation\nBaseline Experiments: Argument Prediction\nConclusion"
],
"type": "outline"
}
|
2004.03744
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations
<<<Abstract>>>
The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning. However, the automatic way in which SNLI-VE has been assembled (via combining parts of two related datasets) gives rise to a large number of errors in the labels of this corpus. In this paper, we first present a data collection effort to correct the class with the highest error rate in SNLI-VE. Secondly, we re-evaluate an existing model on the corrected corpus, which we call SNLI-VE-2.0, and provide a quantitative comparison with its performance on the non-corrected corpus. Thirdly, we introduce e-SNLI-VE-2.0, which appends human-written natural language explanations to SNLI-VE-2.0. Finally, we train models that learn from these explanations at training time, and output such explanations at testing time.
<<</Abstract>>>
<<<Introduction>>>
Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.
Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes.
Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.
In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.
Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time.
<<</Introduction>>>
<<<SNLI-VE-2.0>>>
The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:
Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.
Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.
Neutral: if neither of the earlier two are true.
The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).
However, in practice, a sensible proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\sim }31\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.
Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv
<<<Re-annotation details>>>
In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).
The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify the chosen label in writing may make workers pay closer attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least a 90% previous approval rate.
First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).
personal taste, e.g., “the sign is ugly”.
lack of consensus on terms such as “many people” or “crowded”.
To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.
To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.
After collecting new labels for the neutral instances in the validation and testing sets, we randomly select and annotate 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.
Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class.
<<</Re-annotation details>>>
<<<Re-evaluation of Visual-Textual Entailment>>>
Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets.
<<<Model.>>>
To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment”), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation is not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.
BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by Faster R-CNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512 GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.
Using the implementation from https://github.com/claudiogreco/coling18-gte.
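The sketch below illustrates the top-down attention and fusion step in NumPy. It is a simplified toy version with made-up dimensions and a bilinear scoring function; concatenation stands in for the unspecified fusion operation, so it should not be read as the released implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def butd_fusion(region_feats, hyp_vec, W_att):
    """Score each image region against the hypothesis, attend, then fuse.

    region_feats: (num_regions, d_img) bottom-up region features
    hyp_vec:      (d_txt,) recurrent encoding of the hypothesis
    W_att:        (d_img, d_txt) bilinear attention matrix (a placeholder choice)
    """
    scores = region_feats @ W_att @ hyp_vec         # (num_regions,)
    alpha = softmax(scores)                         # top-down attention weights
    attended_img = alpha @ region_feats             # weighted sum of region vectors
    return np.concatenate([attended_img, hyp_vec])  # fed to the MLP classifier

rng = np.random.default_rng(0)
fused = butd_fusion(rng.standard_normal((36, 2048)),
                    rng.standard_normal(512),
                    rng.standard_normal((2048, 512)))
print(fused.shape)  # (2560,)
```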
We use the original training set from SNLI-VE. To see the impact of correcting the validation and test sets, we do the following three experiments:
model selection as well as testing are done on the original uncorrected SNLI-VE.
model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.
model selection as well as testing are done on the corrected SNLI-VE-2.0.
Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy.
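The model selection rule described here is a standard early-stopping loop, sketched below. The `train_one_epoch`, `validate` and checkpoint hooks are hypothetical placeholders, not the actual training code.

```python
def train_with_early_stopping(train_one_epoch, validate, max_epochs=100, patience=3):
    """Stop when validation accuracy has not improved for `patience` epochs and
    keep track of the epoch with the highest validation accuracy."""
    best_acc, best_epoch, epochs_without_improvement = -1.0, -1, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        acc = validate()
        if acc > best_acc:
            best_acc, best_epoch, epochs_without_improvement = acc, epoch, 0
            # save_checkpoint(epoch)  # hypothetical persistence hook
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    return best_epoch, best_acc
```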
<<</Model.>>>
<<<Results.>>>
The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system, which was meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.
The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.
Finally, we recall that the training set has not been re-annotated, and hence approximately 31% image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model.
<<</Results.>>>
<<</Re-evaluation of Visual-Textual Entailment>>>
<<</SNLI-VE-2.0>>>
<<<Visual-Textual Entailment with Natural Language Explanations>>>
In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time.
<<<e-SNLI-VE-2.0>>>
e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.
We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.
To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40.
<<</e-SNLI-VE-2.0>>>
<<<Collecting Explanations>>>
As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.
To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0.
<<</Collecting Explanations>>>
<<<VTE Models with Natural Language Explanations>>>
This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation.
<<<Predict and Explain>>>
PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24.
<<<Loss.>>>
The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\mathcal {L} = \alpha \mathcal {L}_{label} + (1-\alpha ) \mathcal {L}_{explanation} \; \textrm {;} \; \alpha \in [0,1]$.
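As an illustration, the following is a minimal sketch of how such a weighted objective might be computed, assuming a PyTorch setup; the tensor names, shapes, and the default of alpha=0.4 (the best value reported below) are illustrative assumptions, not the authors' implementation.

import torch.nn.functional as F

def combined_loss(label_logits, label_targets, expl_logits, expl_targets, alpha=0.4):
    # Classification loss over the three VTE labels (entailment, neutral, contradiction).
    loss_label = F.cross_entropy(label_logits, label_targets)
    # Explanation loss: token-level cross entropy over the decoder vocabulary,
    # flattening the (batch, time) dimensions.
    loss_expl = F.cross_entropy(expl_logits.view(-1, expl_logits.size(-1)),
                                expl_targets.view(-1))
    # Weighted combination of the two losses, with alpha in [0, 1].
    return alpha * loss_label + (1 - alpha) * loss_expl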
<<</Loss.>>>
<<<Model selection.>>>
In this experiment, we are first interested in examining whether a neural network can generate explanations at no cost in label accuracy. Therefore, only the balanced label accuracy is used as the model selection criterion. However, future work can investigate other selection criteria involving a combination of label and explanation performance. We performed a hyperparameter search on $\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy.
<<</Model selection.>>>
<<</Predict and Explain>>>
<<<Explain Then Predict>>>
When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32).
<<</Explain Then Predict>>>
<<<Qualitative Analysis of Generated Explanations>>>
We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.
Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.
Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification.
<<</Qualitative Analysis of Generated Explanations>>>
<<</VTE Models with Natural Language Explanations>>>
<<</Visual-Textual Entailment with Natural Language Explanations>>>
<<<Conclusion>>>
In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point both for the identification and correction of errors in SNLI-VE and for the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nSNLI-VE-2.0\nRe-annotation details\nRe-evaluation of Visual-Textual Entailment\nModel.\nResults.\nVisual-Textual Entailment with Natural Language Explanations\ne-SNLI-VE-2.0\nCollecting Explanations\nVTE Models with Natural Language Explanations\nPredict and Explain\nLoss.\nModel selection.\nExplain Then Predict\nQualitative Analysis of Generated Explanations\nConclusion"
],
"type": "outline"
}
|
1911.12579
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A New Corpus for Low-Resourced Sindhi Language with Word Embeddings
<<<Abstract>>>
Representing words and phrases as dense vectors of real numbers that encode semantic and syntactic properties is a vital constituent of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned on large unlabeled corpora. Sindhi is a morphologically rich language, spoken by a large population in Pakistan and India, that lacks corpora which could serve as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using web-scrappy. Due to the unavailability of open-source preprocessing tools for Sindhi, the preprocessing of such a large corpus becomes a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of the cosine similarity matrix and WordSim-353 are employed for the evaluation of the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe compared with the SdfastText word representations.
<<</Abstract>>>
<<<Introduction>>>
Sindhi is a morphologically rich, multi-script, and multi-dialectal language. It belongs to the Indo-Aryan language family BIBREF0, with a significant cultural and historical background. Presently, it is recognized as an official language BIBREF1 in the Sindh province of Pakistan and is taught as a compulsory subject in schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of native Sindhi speakers. It is also spoken in countries other than Pakistan and India to which native Sindhi speakers have migrated, such as America, Canada, Hong Kong, Britain, Singapore, Tanzania, the Philippines, Kenya, Uganda, and South and East Africa. Sindhi has a rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, Sindhi-Devanagari is also a popular writing system in India, written in left-to-right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though Sindhi has a great historical and literary background and is presently spoken by nearly 75 million people BIBREF1, research on SNLP only began in 2002 and gained research attention after the development of its Unicode system BIBREF3. Sindhi still stands among the low-resourced languages due to the scarcity of core language processing resources such as raw and annotated corpora, which can be utilized for training robust word embeddings or for the use of machine learning algorithms, since the development of annotated datasets requires time and human resources.
Language Resources (LRs) are fundamental elements for the development of high-quality NLP systems based on automatic or NN-based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources, integrated into their software tools, including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resource BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language-independent NLP applications including semantic analysis, sentiment analysis, parts-of-speech tagging, named entity recognition, machine translation BIBREF11, and multitasking BIBREF12, BIBREF13. Presently, Sindhi Persian-Arabic is frequently used for online communication, newspapers, and public institutions in Pakistan and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpora BIBREF14, BIBREF15 and annotated corpora BIBREF16, BIBREF17, BIBREF1, BIBREF18. To the best of our knowledge, Sindhi lacks a large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP).
One way to break out of this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. Word embedding is a newer term for the semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for mapping words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationships with neighboring words in a geometric way BIBREF22 BIBREF23. For example, “Einstein” and “Scientist” would have greater similarity than “Einstein” and “doctor.” In this way, word embeddings capture the important linguistic notion that “a word is characterized by the company it keeps". More recently, NN-based models have yielded state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with word embeddings. One of the advantages of such techniques is that they use unsupervised approaches for learning representations and do not require an annotated corpus, which is rare for the low-resourced Sindhi language. Such representations can be trained on large unannotated corpora, and the resulting representations can then be used in NLP tasks that rely on a small amount of labelled data.
In this paper, we address the problem of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed it for the filtration of noisy text, e.g., HTML tags and vocabulary of the English language. A statistical analysis is also presented for letter and word frequencies and the identification of stop words. Finally, the corpus is utilized to generate Sindhi word embeddings using the state-of-the-art GloVe BIBREF26, SG, and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. The popular intrinsic evaluation methods BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated the English WordSim353 word pairs into Sindhi using a bilingual English-to-Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of a large corpus and the generation of word embeddings along with a systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows:
We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words.
We develop a text cleaning pipeline for the preprocessing of the raw corpus.
We generate word embeddings using the GloVe, CBoW, and SG (word2vec) algorithms, and evaluate and compare them using the intrinsic evaluation approaches of the cosine similarity matrix and WordSim353.
We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings.
The remaining sections of the paper are organized as follows: Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. Afterwards, Section SECREF3 presents the employed methodology, and Section SECREF4 consists of the statistical analysis of the developed corpus. Section SECREF5 presents the experimental setup. The intrinsic evaluation results along with the comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion.
<<</Introduction>>>
<<<Related work>>>
Natural language resources refer to a set of language data and descriptions BIBREF31 in machine-readable form, used for building, improving, and evaluating NLP algorithms or software. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources, integrated into software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, and Arabic BIBREF8, and a multilingual toolkit BIBREF9. But the Sindhi language is at an early stage in the development of such resources and software tools.
Corpus construction for NLP mainly involves the important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with corpus development, along with orthographical and morphological features of the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts-of-speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and a machine translation system. But that corpus is acquired only from Wikipedia dumps. A survey-based study BIBREF4 covers all the progress made in Sindhi Natural Language Processing (SNLP), with a complete gist of adopted techniques, developed tools, and available resources, which shows that work on resource development for Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources was taken BIBREF16 by open-sourcing an annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work on corpus development, word segmentation, and word embeddings is presented in Table TABREF9.
The power of word embeddings in NLP was empirically demonstrated by the proposal of a neural language model BIBREF21 and multitask learning BIBREF12, but recently the usage of word embeddings in deep neural algorithms has become an integral element BIBREF33 for performance acceleration in deep NLP applications. The popular CBoW and SG BIBREF27 BIBREF20 word2vec neural architectures yield high-quality vector representations, in terms of semantic and syntactic word similarity, at a lower computational cost, and were later extended with character-level learning on large corpora BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words, and efficient representation of phrases as well. BIBREF34 proposed an NN-based approach for generating morpheme-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. The count-based GloVe model BIBREF26 also yields state-of-the-art results in intrinsic evaluation and downstream NLP tasks.
The performance of word embeddings can be measured with intrinsic BIBREF23 BIBREF29 and extrinsic BIBREF28 evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings, such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight to find data-driven relevance judgments. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks the annotated corpora required for such an evaluation. Moreover, extrinsic evaluation is time-consuming and difficult to interpret. Therefore, we opt for the intrinsic evaluation method BIBREF28 to get a quick insight into the quality of the proposed Sindhi word embeddings by measuring the cosine distance between similar words and using the WordSim353 dataset. A study reveals that the choice of optimized hyperparameters BIBREF35 has a greater impact on the quality of pretrained word embeddings than designing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using the CBoW, SG and GloVe models. Embedding visualization is also useful to visualize the similarity of word clusters. Therefore, we use the t-SNE BIBREF36 dimensionality reduction algorithm for compressing high-dimensional embeddings into 2-dimensional $x$,$y$ coordinate pairs with PCA BIBREF37. PCA is useful to combine input features by dropping the least important features while retaining the most valuable ones.
<<</Related work>>>
<<<Methodology>>>
This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings.
<<<Task description>>>
We initiate this work from scratch by collecting large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization.
<<</Task description>>>
<<<Corpus acquisition>>>
A corpus is a collection of human language text BIBREF31 built with a specific purpose. The statistical analysis of a corpus provides quantitative, reusable data and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language. Realizing the necessity of a large text corpus for Sindhi, we started this research by collecting a raw corpus from multiple web resources using the web-scrappy framework: news columns of the daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from the Wichaar social blog, news from the Focus Word press blog, historical writings, novels, stories and books from the Sindh Salamat literary website, novels, history and religious books from the Sindhi Adabi Board, and tweets regarding news and sports collected from Twitter.
<<</Corpus acquisition>>>
<<<Preprocessing>>>
The preprocessing of a text corpus obtained from multiple web resources is a challenging task, and it becomes even more complicated when working on a low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline, depicted in Figure FIGREF22, for the filtration of unwanted data and vocabulary of other languages such as English, to prepare the input for word embeddings; the individual preprocessing steps are described in detail below, and a minimal code sketch of the pipeline is given after the list. Moreover, we reveal a list of Sindhi stop words BIBREF38, which is labor-intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. The partial list of Sindhi stop words is given in TABREF61. We use the Python programming language for designing the preprocessing pipeline using regex and string functions.
Input: The collected text documents were concatenated for the input in UTF-8 format.
Replacement symbols: The punctuation marks of full stop, hyphen, apostrophe, comma, quotation, and exclamation are replaced with white space for authentic tokenization, because without replacing these symbols with white space the words were found joined with their next or previous corresponding words.
Filtration of noisy data: The text acquired from web resources contains a huge amount of noisy data. Therefore, we filtered out unimportant data such as the remaining punctuation marks, special characters, HTML tags, all types of numeric entities, email addresses, and web addresses.
Normalization: In this step, we tokenize the corpus and then normalize it to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were only filtered out when preparing the input for GloVe; the sub-sampling approach in CBoW and SG can discard the most frequent or stop words automatically.
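The following is a minimal sketch of such a regex-based cleaning pipeline in Python; the exact patterns, the file handling, and the Arabic-script Unicode range used for Sindhi are assumptions made for illustration, since the paper does not publish its code.

import re

def clean_line(line):
    # Drop HTML tags, e-mail and web addresses, and numeric entities first.
    line = re.sub(r"<[^>]+>|\S+@\S+|https?://\S+|www\.\S+|\d+", " ", line)
    # Replace sentence-delimiting punctuation with white space for authentic tokenization.
    line = re.sub(r"[.\-!,'\"؟۔،]", " ", line)
    # Keep only the Arabic-script block used by Sindhi (an assumed Unicode range)
    # plus white space; this also filters out English vocabulary and special characters.
    line = re.sub(r"[^\u0600-\u06FF\s]", " ", line)
    # Collapse multiple white spaces.
    return re.sub(r"\s+", " ", line).strip()

def preprocess(paths):
    # Concatenate and clean all collected UTF-8 text documents, returning word tokens.
    tokens = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for raw in f:
                tokens.extend(clean_line(raw).split())
    return tokens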
<<</Preprocessing>>>
<<<Word embedding models>>>
NN-based approaches have produced state-of-the-art performance in NLP with the usage of robust word embeddings generated from large unlabelled corpora. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not limited to boosting statistical NLP applications; they can also be used to develop language resources, such as the automatic construction of a WordNet BIBREF39 using an unsupervised approach.
Word embedding can be precisely defined as the encoding of a vocabulary $V$ into an $N$-dimensional embedding space, mapping each word $w$ in $V$ to a vector $\overrightarrow{w}$ in that space. Word embedding models can be broadly categorized into predictive and count-based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector for each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24 and well known as word2vec, rely on a simple two-layered NN architecture which uses a linear activation function in the hidden layer and softmax in the output layer. The extended model BIBREF24 treats each word as a bag of character n-grams.
<<</Word embedding models>>>
<<<GloVe>>>
GloVe is a log-bilinear regression model BIBREF26 which combines two methods, local context windows and global matrix factorization, for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The GloVe implementation represents each word $w \in V_{w}$ and context $c \in V_{c}$ as $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ in the following way,
where $b^{\overrightarrow{w}}$ is a row vector of size $\left|V_{w}\right|$ and $b^{\overrightarrow{c}}$ is a column vector of size $\left|V_{c}\right|$.
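The display equation is not shown in this copy of the text; a plausible reconstruction, consistent with the bias terms defined above and with the standard view of GloVe as a factorization of the log co-occurrence matrix, is:

\[
\overrightarrow{w} \cdot \overrightarrow{c} + b^{\overrightarrow{w}} + b^{\overrightarrow{c}} = \log \#(w,c) \quad \forall \, (w,c) \in D
\]

where $\#(w,c)$ denotes the co-occurrence count of $w$ and $c$ in the corpus and $D$ is the set of observed word-context pairs.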
<<</GloVe>>>
<<<Continuous bag-of-words>>>
The standard CBoW is the inverse of the SG model BIBREF27: it predicts the input word from its context. The length of the input in the CBoW model depends on the setting of the context window size, which determines the distance to the left and right of the target word. Hence the context is a window that contains neighboring words: given a sequence of words $w=\left\lbrace w_{1}, w_{2}, \dots \dots w_{t}\right\rbrace $ of length $T$, the objective of CBoW is to maximize the probability of each word given its neighboring words, as follows,
where $c_{t}$ is the context of the $t^{\text{th}}$ word, for example the window $w_{t-c}, \ldots w_{t-1}, w_{t+1}, \ldots w_{t+c}$ of size $2 c$.
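The display equation itself is missing here; the standard CBoW training objective that the description points to is, as an assumed reconstruction,

\[
\frac{1}{T} \sum_{t=1}^{T} \log p\left(w_{t} \mid c_{t}\right)
\]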
<<</Continuous bag-of-words>>>
<<<Skip gram>>>
The SG model predicts the surrounding words given an input word BIBREF20, with the training objective of learning word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize the average log-probability of the words $w=\left\lbrace w_{1}, w_{2}, \dots \dots w_{t}\right\rbrace $ across the entire training corpus,
where $c_{t}$ denotes the set of indices of the context words surrounding $w_{t}$ in the training corpus.
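Again, the display equation is absent from this copy; a standard reconstruction of the skip-gram objective consistent with the notation above is:

\[
\frac{1}{T} \sum_{t=1}^{T} \sum_{c \in c_{t}} \log p\left(w_{c} \mid w_{t}\right)
\]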
<<</Skip gram>>>
<<<Hyperparameters>>>
<<<Sub-sampling>>>
The sub-sampling BIBREF20 approach is useful to dilute the most frequent or stop words; it also speeds up learning and increases the accuracy of the learned vectors of rare words. Numerous words in English, e.g., ‘the’, ‘you’, ‘that’, do not carry much meaning, but these words appear very frequently in the text. However, considering all the words equally would lead to over-fitting of the model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to counter the imbalance between rare and repeated words. The sub-sampling technique randomly removes the most frequent words given a threshold $t$, using a discard probability $p$ computed from the frequency $f$ of each word in the corpus.
where each word $w_{i}$ is discarded with the computed probability during the training phase, $f(w_i )$ is the frequency of word $w_{i}$, and $t>0$ is a threshold parameter.
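The corresponding display equation is not present in this copy; the discard probability used in word2vec, which matches the description above, is (as a reconstruction):

\[
P\left(w_{i}\right) = 1 - \sqrt{\frac{t}{f\left(w_{i}\right)}}
\]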
<<</Sub-sampling>>>
<<<Dynamic context window>>>
Traditional word embedding models usually use a fixed-size context window. For instance, if the window size ws=6, then a target word six tokens away would be treated similarly to the adjacent word. A weighting scheme is therefore used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ this weighting scheme. The GloVe model weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The CBoW and SG implementations weight the contexts by the distance from the target word divided by the ws; e.g., ws=6 will weigh its context positions by $\frac{6}{6} \frac{5}{6} \frac{4}{6} \frac{3}{6} \frac{2}{6} \frac{1}{6}$.
<<</Dynamic context window>>>
<<<Sub-word model>>>
The sub-word model BIBREF24 can learn the internal structure of words by sharing character representations across words. In that way, the vector for each word is made up of the sum of its character $n$-gram vectors. For example, the vector of the word “table” is the sum of its $n$-gram vectors; by setting the letter $n$-gram size from $min=3$ to $max=6$ as $<ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>$, we obtain all sub-words of "table" with a minimum length of $minn=3$ and a maximum length of $maxn=6$. The $<$ and $>$ symbols are used to separate prefix and suffix characters from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. In addition to the character $n$-grams, the input word $w$ itself is also included in the set of character $n$-grams, to learn the representation of each word. We obtain the scoring function using an input dictionary of $n$-grams of size $K$ for a given word $w$, where $K_{w} \subset \lbrace 1, \ldots , K\rbrace $. A vector representation $Z_{k}$ is associated with each $n$-gram. Hence, each word is represented by the sum of the representations of its character $n$-grams, where $s$ is the scoring function in the following equation,
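The equation is absent from this copy; the fastText scoring function that the notation above describes is, as a hedged reconstruction,

\[
s(w, c) = \sum_{k \in K_{w}} Z_{k}^{\top} \, \overrightarrow{c}
\]

where $\overrightarrow{c}$ is the vector of the context word.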
<<</Sub-word model>>>
<<<Position-dependent weights>>>
The position-dependent weighting approach BIBREF40 is used to avoid directly encoding representations for words and their positions, which can lead to an over-fitting problem. The approach learns positional representations together with contextual word representations and uses them to reweight the word embeddings. Thus, it captures good contextual representations at a lower computational cost,
where $p$ is an individual position in the context window, associated with a vector $d_{p}$, $P$ is the set of relative positions in the context window, and $v_{C}$ is the context vector of $w_{t}$, obtained as the average of the context-word vectors reweighted by their positional vectors.
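The display equation is missing in this copy; a plausible form, consistent with the description above (element-wise reweighting of each context word by its position vector, followed by averaging), would be:

\[
v_{C} = \frac{1}{|P|} \sum_{p \in P} d_{p} \odot \overrightarrow{w}_{t+p}
\]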
<<</Position-dependent weights>>>
<<<Shifted point-wise mutual information>>>
The use of a sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word-similarity tasks. The CBoW and SG models have a hyperparameter $k$ (the number of negatives) BIBREF27 BIBREF20, which affects the value that both models try to optimize for each $(w, c)$: $P M I(w, c)-\log k$. The parameter $k$ has two functions: it gives a better estimation of negative examples, and it acts as a prior on the probability of observing a positive example (an actual occurrence of $(w,c)$) in the corpus.
<<</Shifted point-wise mutual information>>>
<<<Deleting rare words>>>
Before creating the context window, the automatic deletion of rare words also leads to a performance gain in the CBoW, SG and GloVe models, as it further increases the effective size of the context windows.
<<</Deleting rare words>>>
<<</Hyperparameters>>>
<<<Evaluation methods>>>
The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word-similarity measure approach states BIBREF35 that words are similar if they appear in a similar context. We measure the word similarity of the proposed Sindhi word embeddings using the dot-product method and WordSim353.
<<<Cosine similarity>>>
The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them, which can be derived using the Euclidean dot product. The dot product is the sum of the products of the corresponding components of both vectors. The result of a dot product between two vectors is not another vector but a single value, a scalar. The dot product for two vectors $\overrightarrow{a}=\left(a_{1}, a_{2}, a_{3}, \dots , a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1}, {b}_{2}, {b}_{3}, \ldots , {b}_{n}\right)$, where $a_{n}$ and $b_{n}$ are the components of the vectors and $n$ is their dimension, can be defined as,
The cosine of two non-zero vectors can then be derived by using the Euclidean dot product formula,
Given two vectors of attributes, $a$ and $b$, the cosine similarity $\cos ({\theta })$ is represented using a dot product and magnitudes as,
where $a_{i}$ and $b_{i}$ are the components of vectors $\overrightarrow{a}$ and $\overrightarrow{b}$, respectively.
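The three display equations referenced above are not present in this copy of the text; the standard forms they describe are, as a reconstruction:

\[
\overrightarrow{a} \cdot \overrightarrow{b} = \sum_{i=1}^{n} a_{i} b_{i},
\qquad
\overrightarrow{a} \cdot \overrightarrow{b} = \Vert \overrightarrow{a} \Vert \, \Vert \overrightarrow{b} \Vert \cos (\theta ),
\qquad
\cos (\theta ) = \frac{\sum_{i=1}^{n} a_{i} b_{i}}{\sqrt{\sum_{i=1}^{n} a_{i}^{2}} \; \sqrt{\sum_{i=1}^{n} b_{i}^{2}}}
\]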
<<</Cosine similarity>>>
<<<WordSim353>>>
The WordSim353 dataset BIBREF42 is popular for the evaluation of lexical similarity and relatedness. The similarity score was assigned by 13 to 16 human subjects with semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using an English-to-Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison, which is used to discover the strength of linear or nonlinear relationships when there are no repeated data values. A perfect Spearman correlation of $+1$ or $-1$ indicates a link between two sets of data (word pairs) in which the observations are monotonically increasing or decreasing functions of each other; it is computed in the following way,
where $r_s$ is the rank correlation coefficient, $n$ denotes the number of observations, and $d_i$ is the rank difference of the $i^{th}$ observation.
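The formula itself is missing from this copy; the standard Spearman rank correlation matching the definitions above is, as a reconstruction:

\[
r_{s} = 1 - \frac{6 \sum_{i=1}^{n} d_{i}^{2}}{n\left(n^{2}-1\right)}
\]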
<<</WordSim353>>>
<<</Evaluation methods>>>
<<</Methodology>>>
<<<Statistical analysis of corpus>>>
The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table TABREF52) with the number of sentences, words, and unique tokens.
<<<Letter occurrences>>>
The frequency of letter occurrences in human language is not arbitrarily organized but follows specific rules which enable us to describe some linguistic regularities. Zipf’s law BIBREF43 suggests that if the frequency of letter or word occurrences is ranked in descending order, it follows a regular relationship of the form,
where $F_{r}$ is the letter frequency of rank $r$, and $a$ and $b$ are parameters of the input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; the corpus contains 187,620,276 characters in total. The Sindhi Persian-Arabic alphabet consists of 52 letters, but 59 letters are detected in the vocabulary; the additional seven letters are modified uni-grams and standalone honorific symbols.
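The display equation for this relationship is absent here; a common parametric form consistent with the two parameters named above would be:

\[
F_{r} = \frac{a}{r^{\,b}}
\]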
<<</Letter occurrences>>>
<<<Letter n-grams frequency>>>
We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram of the word. The letter n-gram frequency is carefully analyzed in order to find the length of words, which is essential for developing NLP systems, including the learning of word embeddings, e.g., choosing the minimum or maximum length of sub-words for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are the most frequent, mostly consisting of stop words, and 4-gram words have the second highest frequency.
<<</Letter n-grams frequency>>>
<<<Word Frequencies>>>
The word frequency count is an observation of word occurrences in the text. Commonly used words are considered to have a higher frequency, such as the word “the" in English, while the frequency of rarely used words is lower. Such frequencies can be calculated at the character or word level. We calculate word frequencies by counting the occurrences of a word $w$ in the corpus $c$, as follows,
where the frequency of $w$ is the sum over every occurrence $k$ of $w$ in $c$.
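In practice, such counts can be obtained with a hash-map-based counter; a minimal Python sketch (the function and variable names are illustrative, not the authors' code) is:

from collections import Counter

def word_frequencies(tokens):
    # tokens: the preprocessed corpus as a flat list of word tokens.
    return Counter(tokens)

# Toy usage; in the paper the input would be the full cleaned corpus.
freq = word_frequencies(["a", "b", "a", "c", "a", "b"])
print(freq.most_common(2))  # [('a', 3), ('b', 2)]
# The most frequent words are candidates for the stop-word list, which the
# authors then filter with the judgment of a Sindhi linguistic expert.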
<<</Word Frequencies>>>
<<<Stop words>>>
The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of an NLP model BIBREF38, for instance in sentiment analysis and text classification. But the construction of such a word list is time-consuming and requires human decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of a Sindhi linguistic expert, because not all frequent words are stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words is 340 in our developed corpus. A partial list of the most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequency. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words when preparing the input for the GloVe model. However, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words in the CBoW and SG models.
<<</Stop words>>>
<<</Statistical analysis of corpus>>>
<<<Experiments and results>>>
Hyperparameter optimization BIBREF23 is more important than designing a novel algorithm. We carefully chose to optimize the dictionary- and algorithm-based parameters of the CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until we found the most suitable hyperparameters, depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on the high cosine similarity score in retrieving nearest neighboring words, the semantic and syntactic similarity between word pairs, WordSim353, and the visualization of the distance between the twenty nearest neighbours using t-SNE, respectively. All the experiments were conducted on a GTX 1080-TITAN GPU.
<<<Hyperparameter optimization>>>
The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and GloVe BIBREF26 word embedding algorithms are evaluated by parameter tuning for the development of Sindhi word embeddings. These parameters can be categorized into dictionary-based and algorithm-based parameters, respectively. The integration of character n-grams in learning word representations is an ideal method, especially for morphologically rich languages, because this approach has the ability to compute representations for rare and misspelled words. Sindhi is also a morphologically rich language; therefore, more robust embeddings became possible to train with the hyperparameter optimization of the SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of the three algorithms individually; they are discussed as follows, and a training sketch that uses the selected settings is given at the end of this subsection:
Number of Epochs: Generally, more epochs on the corpus often produce better results, but more epochs take a longer training time. Therefore, we evaluated 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs consistently produced good results.
Learning rate (lr): We tried an lr of $0.05$, $0.1$, and $0.25$; the optimal lr $(0.25)$ gives the best results for training all the embedding models.
Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ embeddings using WordSim353 with different $ws$, and the optimal $300-D$ embeddings are evaluated with the cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensions have little effect on the quality of the intrinsic evaluation process. However, the selection of embedding dimensions might have more impact on the accuracy in certain downstream NLP applications. Lower embedding dimensions are faster to train and evaluate.
Character n-grams: The selection of the minimum (minn) and maximum (maxn) length of character $n$-grams is an important parameter for learning character-level representations of words in the CBoW and SG models. Therefore, n-grams from $3-9$ were tested to analyse the impact on the accuracy of the embeddings. We optimized the length of character n-grams to $minn=2$ and $maxn=7$, keeping in view the word frequencies depicted in Table TABREF57.
Window size (ws): A large ws means considering more context words, and similarly a small ws limits the number of context words. By changing the size of the dynamic context window, we tried ws values of 3, 5, and 7; the optimal ws=7 yields consistently better performance.
Negative Sampling (NS): More negative examples yield better results, but more negatives take a longer training time. We tried 10, 20, and 30 negative examples for CBoW and SG. The best setting of 20 negative examples for CBoW and SG significantly yields better performance at an average training time.
Minimum word count (minw): We evaluated the range of minimum word counts from 1 to 8 and observed that the size of the input vocabulary decreases at a large scale when ignoring more words, and similarly the vocabulary size increases when considering rare words. Therefore, ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results with a vocabulary of 200,000 words.
Loss function (ls): We use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG, and the default loss function for GloVe BIBREF26.
The recommended verbosity level, number of buckets, sampling threshold, and number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26.
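For illustration, the following is a minimal sketch of how the CBoW and SG models with the settings above could be trained using gensim's FastText implementation (which supports the character n-gram range); gensim is an assumption made for the sketch, since the paper does not state which toolkit was used, and GloVe would require its own toolkit.

from gensim.models import FastText

def train_embeddings(sentences, skipgram=True):
    # `sentences` is assumed to be the preprocessed corpus as a list of token lists.
    # Note: the paper reports hierarchical softmax for CBoW; to mirror that exactly,
    # set hs=1 and negative=0 when skipgram is False.
    return FastText(
        sentences=sentences,
        sg=1 if skipgram else 0,   # 1 = Skip-Gram, 0 = CBoW
        vector_size=300,           # D = 300
        window=7,                  # ws = 7
        negative=20,               # 20 negative samples
        min_count=4,               # ignore words with frequency < 4
        min_n=2, max_n=7,          # character n-gram range minn=2, maxn=7
        alpha=0.25,                # initial learning rate, as reported in the paper
        epochs=40,                 # 40 training epochs
    )

# Toy usage (in the paper, `sentences` would be the full cleaned Sindhi corpus).
toy_sentences = [["example", "sentence", "tokens"]] * 100
sg_model = train_embeddings(toy_sentences, skipgram=True)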
<<</Hyperparameter optimization>>>
<<</Experiments and results>>>
<<<Word similarity comparison of Word Embeddings>>>
<<<Nearest neighboring words>>>
The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between all embedding dimensions and their distinct relevance to a query word. Words with a similar context get a high cosine similarity and geometric relatedness in Euclidean distance, which is a common and primary method to measure the distance between a set of words and their nearest neighbors. For each query word we retrieve the most similar top eight nearest neighboring words, determined by the highest cosine similarity score using Eq. DISPLAY_FORM48. We present the English translation of both the query and the retrieved words and also discuss their English meaning for ease of relevance judgment between the query and retrieved words. To take a closer look at the semantic and syntactic relationships captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words, Friday, Spring, Cricket, Red, and Scientist, taken from the vocabulary. The first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, and Thursday in an unordered sequence. SdfastText returns five names of days: Sunday, Thursday, Monday, Tuesday and Wednesday, respectively. The GloVe model also returns five names of days. However, CBoW and SG give six names of days, all except Wednesday, along with different written forms of the query word Friday in the Sindhi language, which shows that CBoW and SG return more relevant words than SdfastText and GloVe. CBoW returned Add and GloVe returned Honorary, words which are only slightly related to the query word, but SdfastText returned two irrelevant words: Kameeso (N), which is a name (N) of a person in Sindhi, and Phrase, a combination of three Sindhi words that were not tokenized properly. Similarly, the nearest neighbors of the second query word Spring are retrieved accurately as names and seasons, semantically related to the query word Spring, by CBoW, SG and GloVe, but SdfastText returned four irrelevant words, Dilbahar (N), Phrase, Ashbahar (N) and Farzana (N), out of eight. The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N), which is a popular national game in Pakistan. Including Kabadi (N), all the words returned by CBoW, SG and GloVe are related to the game of Cricket or are names of other games. But the first word in SdfastText contains a punctuation mark in the retrieved word Gone.Cricket, two words joined with a punctuation mark (.), which shows a tokenization error in the preprocessing step; the sixth retrieved word, Misspelled, is a combination of three words not related to the query word, and Played and Being played are also irrelevant and stop words. Moreover, the fourth query word Red gave results that contain names closely related to the query word and different forms of the query word written in the Sindhi language. The last word returned by SdfastText, Unknown, is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also yields semantically related words from CBoW, SG, and GloVe, but the first word given by SdfastText is an Urdu word, which suggests that its vocabulary may also contain words of other languages; another Unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. Further interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and the authentic tokenization achieved by the preprocessing step presented in Figure FIGREF22.
However, SdfastText returned the tri-gram Phrase words for the query words Friday and Spring, and a Misspelled word for the Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe embeddings demonstrates high semantic relatedness in retrieving the top eight nearest neighbor words.
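With embeddings trained as in the earlier sketch, such nearest-neighbor queries can be reproduced with a single gensim call; the query strings below are arbitrary placeholders, since the actual Sindhi query words are not reproduced here, and sg_model refers to the model object from the training sketch above.

# Top eight nearest neighbors of a query word, ranked by cosine similarity.
neighbors = sg_model.wv.most_similar("QUERY_WORD", topn=8)
for word, score in neighbors:
    print(word, round(score, 3))

# Cosine similarity between a single word pair.
print(sg_model.wv.similarity("WORD_1", "WORD_2"))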
<<</Nearest neighboring words>>>
<<<Word pair relationship>>>
Generally, closer words are considered more important to a word’s meaning. Word embedding models have the ability to capture lexical relations between words, and identifying such relationships is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. A high cosine similarity score denotes closer words in the embedding space, while a lower cosine similarity score means a greater distance between word pairs. We present the cosine similarity scores of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77, along with English translations, which shows average similarities of 0.632, 0.650, and 0.591 yielded by CBoW, SG and GloVe, respectively. The SG model achieved the highest average similarity score of 0.650, followed by CBoW with a 0.632 average similarity score. GloVe also achieved a considerable average score of 0.591. However, the average similarity score of SdfastText is 0.388, and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that, along with performance, the vocabulary of SdfastText is also limited as compared to our proposed word embeddings.
Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translations, where SG again yields the best average score of 0.663, followed by CBoW with a 0.611 similarity score. GloVe also yields a good semantic relatedness score of 0.576, and SdfastText yields an average score of 0.391. The first query pair China-Beijing is not available in the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG and GloVe models because the word Kabul is the name of the capital of Afghanistan but also frequently appears as an adjective in Sindhi text, meaning able.
<<</Word pair relationship>>>
<<<Comparison with WordSim353>>>
We evaluate the performance of our proposed word embeddings on the WordSim353 dataset by translating the English word pairs into Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meaning of six terms, so we left these terms untranslated; our final Sindhi WordSim353 thus consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results, computed using Eq. DISPLAY_FORM51, for embeddings of different dimensionality on the translated WordSim353. The table presents complete results with different ws for CBoW, SG and GloVe, in which ws=7 consistently yields better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving a performance of 0.629 with ws=7. In comparison, for English, BIBREF27 achieved an average semantic and syntactic similarity of 0.637 and 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationships.
<<</Comparison with WordSim353>>>
<<<Visualization>>>
We use the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF36 dimensionality reduction algorithm with PCA BIBREF37 for exploratory analysis of the embeddings on a 2-dimensional map. The t-SNE is a non-linear dimensionality reduction algorithm for the visualization of high-dimensional datasets. It starts by computing the probability of similar word clusters in the high-dimensional space and then computes the probability of similar points in the corresponding low-dimensional space. The purpose of t-SNE for visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. t-SNE has a tunable perplexity (PPL) parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 over 5000 iterations of the 300-D models. We use the same query words (see Table TABREF74), retrieving the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for a clear visualization of its group of similar words. Closer word clusters show a high similarity between the query and the retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closer to their groups of semantically related words. The CBoW model depicted in Fig. FIGREF82 and GloVe in Fig. FIGREF84 also show better cluster formation of words than SdfastText in Fig. FIGREF85, respectively.
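A minimal sketch of such a visualization with scikit-learn and matplotlib is shown below; the perplexity matches the value reported above, while the PCA pre-reduction size and the plotting details are illustrative assumptions rather than the authors' exact setup.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings(words, vectors, perplexity=20):
    # vectors: (n_words, 300) array of embedding vectors for the selected words.
    reduced = PCA(n_components=50).fit_transform(np.asarray(vectors))  # PCA pre-reduction
    coords = TSNE(n_components=2, perplexity=perplexity,
                  init="pca", random_state=0).fit_transform(reduced)
    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), w in zip(coords, words):
        plt.annotate(w, (x, y))  # label each point with its word
    plt.show()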
<<</Visualization>>>
<<</Word similarity comparison of Word Embeddings>>>
<<<Discussion and future work>>>
In this information age, the existence of LRs plays a vital role in the digital survival of natural languages, because NLP tools are used to process a flow of unstructured data from disparate sources. It is imperative to mention that Sindhi Persian-Arabic is presently used frequently in online communication, newspapers, and public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out on the development of such resources, which is not sufficient for designing language-independent tools or machine learning algorithms. The present work is the first comprehensive initiative on resource development, along with its evaluation, for statistical Sindhi language processing. More recently, NN-based approaches have produced state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from large unlabelled corpora. Such word embeddings have also motivated work on low-resourced languages. Our work mainly consists of the novel contributions of resource development along with a comprehensive evaluation for the utilization of NN-based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using the SG, CBoW and GloVe models. The intrinsic evaluation, along with the comparative results, demonstrates that the proposed Sindhi word embeddings have captured the semantic information more accurately than the recently released SdfastText word vectors. The SG model yields the best results for nearest neighbors, word pair relationships and semantic similarity. The performance of CBoW is also close to SG in all the evaluation metrics. The GloVe model also yields good word representations; however, SG and CBoW surpass the GloVe model in all evaluation metrics. Hyperparameter optimization is as important as designing a new algorithm: the choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, we observed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. From an algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall the window size, learning rate, and number of epochs are the core parameters that largely influence the performance of word embedding models. Ultimately, the new corpus of the low-resourced Sindhi language, the list of stop words, and the pretrained word embeddings, along with the empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging and named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks, and an extrinsic evaluation approach will be employed for the performance analysis of the proposed word embeddings. Moreover, we will also utilize the corpus with the Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of a Sindhi WordNet.
<<</Discussion and future work>>>
<<<Conclusion>>>
In this paper, we present three main novel contributions. First, we develop a large corpus containing more than 61 million tokens with a vocabulary of 908,456 unique words. Secondly, a list of Sindhi stop words is constructed by identifying their high frequency and low importance with the help of a Sindhi linguistic expert. Thirdly, unsupervised Sindhi word embeddings are generated using the state-of-the-art CBoW, SG and GloVe algorithms and evaluated using the popular intrinsic evaluation approaches of the cosine similarity matrix and WordSim353 for the first time in Sindhi language processing. We translate the English WordSim353 using an English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are also compared with the recently released SdfastText word representations.
Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationships, country-and-capital pairs, and WordSim353. The SG model yields the best performance, followed by CBoW and GloVe. However, the performance of GloVe is lower on the same vocabulary because it lacks the character-level learning of word representations and the sub-sampling approaches used in SG and CBoW. Our proposed Sindhi word embeddings have surpassed SdfastText in the intrinsic evaluation metrics. Also, the vocabulary of SdfastText is limited because it was trained on a small Sindhi Persian-Arabic Wikipedia corpus. We will further investigate the extrinsic performance of the proposed word embeddings on a Sindhi text classification task in the future. The proposed resources, along with the systematic evaluation, will be a sophisticated addition to the computational resources for statistical Sindhi language processing.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nMethodology\nTask description\nCorpus acquisition\nPreprocessing\nWord embedding models\nGloVe\nContinuous bag-of-words\nSkip gram\nHyperparameters\nSub-sampling\nDynamic context window\nSub-word model\nPosition-dependent weights\nShifted point-wise mutual information\nDeleting rare words\nEvaluation methods\nCosine similarity\nWordSim353\nStatistical analysis of corpus\nLetter occurrences\nLetter n-grams frequency\nWord Frequencies\nStop words\nExperiments and results\nHyperparameter optimization\nWord similarity comparison of Word Embeddings\nNearest neighboring words\nWord pair relationship\nComparison with WordSim353\nVisualization\nDiscussion and future work\nConclusion"
],
"type": "outline"
}
|
2004.02929
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines
<<<Abstract>>>
The extraction of anglicisms (lexical borrowings from English) is relevant both for lexicographic purposes and for NLP downstream tasks. We introduce a corpus of European Spanish newspaper headlines annotated with anglicisms and a baseline model for anglicism extraction. In this paper we present: (1) a corpus of 21,570 newspaper headlines written in European Spanish annotated with emergent anglicisms and (2) a conditional random field baseline model with handcrafted features for anglicism extraction. We present the newspaper headlines corpus, describe the annotation tagset and guidelines and introduce a CRF model that can serve as baseline for the task of detecting anglicisms. The presented work is a first step towards the creation of an anglicism extractor for Spanish newswire.
<<</Abstract>>>
<<<Introduction>>>
The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6.
Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7.
Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora.
In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for anglicism automatic extraction in Spanish newswire.
<<</Introduction>>>
<<<Related Work>>>
Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, or new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20.
In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish.
Work within the code-switching community has also dealt with language identification on multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral corpora and social media datasets BIBREF22, BIBREF23, BIBREF24. In the last shared task of language identification in code-switched data BIBREF23, approaches to English-Spanish included CRF models BIBREF25, BIBREF26, BIBREF27, BIBREF28, logistic regression BIBREF29 and LSTM models BIBREF30, BIBREF31.
The scope and nature of lexical borrowing is, however, somewhat different to that of code-switching. In fact, applying code-switching models to lexical borrowing detection has previously proved to be unsuccessful, as they tend to overestimate the number of anglicisms BIBREF32. In the next section we address the differences between both phenomena and set the scope of this project.
<<</Related Work>>>
<<<Anglicism: Scope of the Phenomenon>>>
Linguistic borrowing can be defined as the transference of linguistic elements between two languages. Borrowing and code-switching have frequently been described as a continuum BIBREF33, with a fuzzy frontier between the two. As a result, a precise definition of what borrowing is remains elusive BIBREF34 and some authors prefer to talk about code-mixing in general BIBREF35 or “lone other-language incorporations" BIBREF36.
Lexical borrowing in particular involves the incorporation of single lexical units from one language into another language and is usually accompanied by morphological and phonological modification to conform with the patterns of the recipient language BIBREF37, BIBREF38. By definition, code-switches are not integrated into a recipient language, unlike established loanwords BIBREF39. While code-switches are usually fluent multiword interferences that normally comply with grammatical restrictions in both languages and that are produced by bilingual speakers in bilingual discourses, lexical borrowings are words used by monolingual individuals that eventually become lexicalized and assimilated as part of the recipient language lexicon until the knowledge of “foreign" origin disappears BIBREF40.
In terms of approaching the problem, automatic code-switching identification has been framed as a sequence modeling problem where every token receives a language ID label (as in a POS-tagging task). Borrowing detection, on the other hand, while it can also be transformed into a sequence labeling problem, is an extraction task, where only certain spans of texts will be labeled (in the fashion of a NER task).
Various typologies have been proposed that aim to classify borrowings according to different criteria, both with a cross-linguistic perspective and also specifically aimed to characterize English inclusions in Spanish BIBREF34, BIBREF41, BIBREF42, BIBREF5. In this work, we will be focusing on unassimilated lexical borrowings (sometimes called foreignisms), i.e. words from English origin that are introduced into Spanish without any morphological or orthographic adaptation.
<<</Anglicism: Scope of the Phenomenon>>>
<<<Corpus description and annotation>>>
<<<Corpus description>>>
In this subsection we describe the characteristics of the corpus. We first introduce the main corpus, with the usual train/development/test split that was used to train, tune and evaluate the model. We then present an additional test set that was designed to assess the performance of the model on more naturalistic data.
<<<Main Corpus>>>
The main corpus consists of a collection of monolingual newspaper headlines written in European Spanish. The corpus contains 16,553 headlines, which amounts to 244,114 tokens. Out of those 16,553 headlines, 1,109 contain at least one anglicism. The total number of anglicisms is 1,176 (most of them are a single word, although some of them were multiword expressions). The corpus was divided into training, development and test set. The proportions of headlines, tokens and anglicisms in each corpus split can be found in Table TABREF6.
The headlines in this corpus come from the Spanish newspaper eldiario.es, a progressive online newspaper based in Spain. eldiario.es is one of the main national newspapers from Spain and, to the best of our knowledge, the only one that publishes its content under a Creative Commons license, which made it ideal for making the corpus publicly available.
The headlines were extracted from the newspaper website through web scraping and range from September 2012 to January 2020. Only the following sections were included: economy, technology, lifestyle, music, TV and opinion. These sections were chosen as they were the most likely to contain anglicisms. The proportion of headlines with anglicisms per section can be found in Table TABREF7.
Using headlines (instead of full articles) was beneficial for several reasons. First of all, annotating a headline is faster and easier than annotating a full article; this helps ensure that a wider variety of topics will be covered in the corpus. Secondly, anglicisms are abundant in headlines, because they are frequently used as a way of calling the attention of the reader BIBREF43. Finally, borrowings that make it to the headline are likely to be particularly salient or relevant, and therefore are good candidates for being extracted and tracked.
<<</Main Corpus>>>
<<<Supplemental Test Set>>>
In addition to the usual train/development/test split we have just presented, a supplemental test set of 5,017 headlines was collected. The headlines included in this additional test set also belong to eldiario.es. These headlines were retrieved daily through RSS during February 2020 and included all sections from the newspaper. The headlines in the supplemental corpus therefore do not overlap in time with the main corpus and include more sections. The number of headlines, tokens and anglicisms in the supplemental test set can be found in Table TABREF6.
The motivation behind this supplemental test set is to assess the model performance on more naturalistic data, as the headlines in the supplemental corpus (1) belong to the future of the main corpus and (2) come from a less borrowing-dense sample. This supplemental test set better mimics the real scenario that an actual anglicism extractor would face and can be used to assess how well the model generalizes to detect anglicisms in any section of the daily news, which is ultimately the aim of this project.
<<</Supplemental Test Set>>>
<<</Corpus description>>>
<<<Annotation guidelines>>>
The term anglicism covers a wide range of linguistic phenomena. Following the typology proposed by gomez1997towards, we focused on direct, unadapted, emerging Anglicisms, i.e. lexical borrowings from the English language into Spanish that have recently been imported and that have still not been assimilated into Spanish. Other phenomena such as semantic calques, syntactic anglicisms, acronyms and proper names were considered beyond the scope of this annotation project.
Lexical borrowings can be adapted (the spelling of the word is modified to comply with the phonological and orthographic patterns of the recipient language) or unadapted (the word preserves its original spelling). For this annotation task, adapted borrowings were ignored and only unadapted borrowings were annotated. Therefore, Spanish adaptations of anglicisms like fútbol (from football), mitin (from meeting) and such were not annotated as borrowings. Similarly, words derived from foreign lexemes that do not comply with Spanish orthotactics but that have been morphologically derived following the Spanish paradigm (hacktivista, hackear, shakespeariano) were not annotated either. However, pseudo-anglicisms (words that are formed as if they were English, but do not exist in English, such as footing or balconing) were annotated.
Words that were not adapted but whose original spelling complies with graphophonological rules of Spanish (and are therefore unlikely to be ever adapted, such as web, internet, fan, club, videoclip) were annotated or not depending on how recent or emergent they were. After all, a word like club, that has been around in Spanish language for centuries, cannot be considered emergent anymore and, for this project, would not be as interesting to retrieve as real emerging anglicisms. The notion of emergent is, however, time-dependent and quite subjective: in order to determine which unadapted, graphophonologically acceptable borrowings were to be annotated, the online version of the Diccionario de la lengua española dle was consulted. This dictionary is compiled by the Royal Spanish Academy, a prescriptive institution on Spanish language. This decision was motivated by the fact that, if a borrowing was already registered by this dictionary (that has conservative approach to language change) and is considered assimilated (that is, the institution recommended no italics or quotation marks to write that word) then it could be inferred that the word was not emergent anymore.
Although the previous guidelines covered most cases, they proved insufficient. Some anglicisms were unadapted (they preserved their original spelling), unacceptable according to the Spanish graphophonological rules, and yet did not satisfy the condition of being emergent. That was the case of words like jazz or whisky, words that do not comply with Spanish graphophonological rules but that were imported decades ago, cannot be considered emergent anymore and are unlikely to ever be adapted into the Spanish spelling system. To adjudicate on those cases, the criterion of pragmatic markedness proposed by winter2012proposing (that distinguishes between catachrestic and non-catachrestic borrowing) was applied: if a borrowing was not adapted (i.e. its form remained exactly as it came from English) but referred to a particular invention or innovation that came via the English language, that was not perceived as new anymore and that had never competed with a Spanish equivalent, then it was ignored. This criterion proved to be extremely useful to deal with old unadapted anglicisms in the fields of music and food. Figure 1 summarizes the decision steps followed during the annotation process.
The corpus was annotated by a native speaker of Spanish using Doccano doccano. The annotation tagset includes two labels: ENG, to annotate the English borrowings just described, and OTHER. This OTHER tag was used to tag lexical borrowings from languages other than English. After all, although English is today by far the most prevalent donor of borrowings, there are other languages that also provide new borrowings to Spanish. Furthermore, the tag OTHER allows to annotate borrowings such as première or tempeh, borrowings that etymologically do not come from English but that have entered the Spanish language via English influence, even when their spelling is very different to English borrowings. In general, we considered that having such a tag could also help assess how successful a classifier is detecting foreign borrowings in general in Spanish newswire (without having to create a label for every possible donor language, as the number of examples would be too sparse). In total, the training set contained 40 entities labeled as OTHER, the development set contained 14 and the test set contained 13. The supplemental test set contained 35 OTHER entities.
<<</Annotation guidelines>>>
<<</Corpus description and annotation>>>
<<<Baseline Model>>>
A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicisms (in a similar way to an NER task). The chosen model was a conditional random field (CRF) model, which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24.
The model was built using pycrfsuite korobov2014python, the Python wrapper for crfsuite CRFsuite that implements CRF for labeling sequential data. It also used the Token and Span utilities from spaCy library honnibal2017spacy.
The following handcrafted features were used for the model:
Bias feature
Token feature
Uppercase feature (y/n)
Titlecase feature (y/n)
Character trigram feature
Quotation feature (y/n)
Word suffix feature (last three characters)
POS tag (provided by spaCy utilities)
Word shape (provided by spaCy utilities)
Word embedding (see Table TABREF26)
Given that anglicisms can be multiword expressions (such as best seller, big data) and that those units should be treated as one borrowing and not as two independent borrowings, we used multi-token BIO encoding to denote the boundaries of each span BIBREF44. A window of two tokens in each direction was set for the feature extractor. The algorithm used was gradient descent with the L-BFGS method.
The model was tuned on the development set doing grid search; the hyperparameters considered were c1 (L1 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), c2 (L2 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), embedding scaling ($0.5$, $1.0$, $2.0$, $4.0$), and embedding type bojanowski2017enriching,josecanete20193255001,cardellinoSBWCE,grave2018learning,honnibal2017spacy,perezfasttext,perezglove (see Table TABREF26). The best results were obtained with c1 = $0.05$, c2 = $0.01$, scaling = $0.5$ and word2vec Spanish embeddings by cardellinoSBWCE. The threshold for the stopping criterion delta was selected through observing the loss during preliminary experiments (delta = $1\mathrm {e}-3$).
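For illustration, the sketch below assembles a reduced version of this pipeline with pycrfsuite: a handful of the handcrafted features listed above, a ±1-token context window instead of the ±2 window used in the paper, and the reported regularization values. It is a hedged reconstruction with hypothetical helper names and toy training data, not the authors' code.

```python
import pycrfsuite

def token2features(sent, i):
    """A reduced subset of the handcrafted features listed above (illustrative only)."""
    word = sent[i]
    feats = [
        "bias",
        "token=" + word.lower(),
        "is_upper=%s" % word.isupper(),
        "is_title=%s" % word.istitle(),
        "suffix3=" + word[-3:].lower(),
    ]
    feats += ["trigram=" + word[j:j + 3].lower() for j in range(len(word) - 2)]
    feats.append("prev=" + sent[i - 1].lower() if i > 0 else "BOS")
    feats.append("next=" + sent[i + 1].lower() if i < len(sent) - 1 else "EOS")
    return feats

def sent2features(sent):
    return [token2features(sent, i) for i in range(len(sent))]

# Toy training data in multi-token BIO encoding (hypothetical example).
train_sents = [["Cinco", "ideas", "para", "montar", "una", "startup", "de", "éxito"]]
train_labels = [["O", "O", "O", "O", "O", "B-ENG", "O", "O"]]

trainer = pycrfsuite.Trainer(verbose=False)   # L-BFGS is the default algorithm
for sent, labels in zip(train_sents, train_labels):
    trainer.append(sent2features(sent), labels)

trainer.set_params({"c1": 0.05, "c2": 0.01, "max_iterations": 200})
trainer.train("anglicism_crf.model")

tagger = pycrfsuite.Tagger()
tagger.open("anglicism_crf.model")
print(tagger.tag(sent2features(["El", "nuevo", "spin-off", "de", "la", "serie"])))
```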
In order to assess the significance of the handcrafted features, a feature ablation study was done on the tuned model, ablating one feature at a time and testing on the development set. Due to the scarcity of spans labeled with the OTHER tag on the development set (only 14) and given that the main purpose of the model is to detect anglicisms, the baseline model was run ignoring the OTHER tag both during tuning and the feature ablation experiments. Table TABREF27 displays the results on the development set with all features and for the different feature ablation runs. The results show that all features proposed for the baseline model contribute to the results, with the character trigram feature being the one that has the biggest impact on the feature ablation study.
<<</Baseline Model>>>
<<<Results>>>
The baseline model was then run on the test set and the supplemental test set with the set of features and hyperparameters mentioned in Section SECREF5. Table TABREF28 displays the results obtained. The model was run both with and without the OTHER tag. The metrics for ENG display the results obtained only for the spans labeled as anglicisms; the metrics for OTHER display the results obtained for any borrowing other than anglicisms. The metrics for BORROWING discard the type of label and consider correct any labeled span that has correct boundaries, regardless of the label type (so any type of borrowing, regardless of whether it is ENG or OTHER). In all cases, only full matches were considered correct and no credit was given to partial matching, i.e. if only fake in fake news was retrieved, it was considered wrong and no partial score was given.
Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on the development and test sets (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test set (the headlines from the supplemental test set being from a different time period to the training set) can probably explain these differences.
Comparing the results with and without the OTHER tag, it seems that including it on the development and test set produces worse results (or they remain roughly the same, at best). However, the best precision result on the supplemental test was obtained when including the OTHER tag and considering both ENG and OTHER spans as BORROWING (precision = 87.62). This is caused by the fact that, while the development and test set were compiled from anglicism-rich newspaper sections (similar to the training set), the supplemental test set contained headlines from all the sections in the newspaper, and therefore included borrowings from other languages such as Catalan, Basque or French. When running the model without the OTHER tag on the supplemental test set, these non-English borrowings were labeled as anglicisms by the model (after all, their spelling does not resemble Spanish spelling), damaging the precision score. When the OTHER tag was included, these non-English borrowings got correctly labeled as OTHER, improving the precision score. This proves that, although the OTHER tag might be irrelevant or even damaging when testing on the development or test set, it can be useful when testing on more naturalistic data, such as the one in the supplemental test set.
Concerning errors, two types of errors were recurrent among all sets: long titles of songs, films or series written in English were a source of false positives, as the model tended to mistake some of the uncapitalized words in the title for anglicisms (for example, it darker in “`You want it darker', la oscura y brillante despedida de Leonard Cohen"). On the other hand, anglicisms that appear on the first position of the sentence (and were, therefore, capitalized) were consistently ignored (as the model probably assumed they were named entities) and produced a high number of false negatives (for example, vamping in “Vamping: la recurrente leyenda urbana de la luz azul `asesina'").
The results on Table TABREF28 cannot, however, be compared to the ones reported by previous work: the metric that we report is span F-measure, as the evaluation was done on span level (instead of token level) and credit was only given to full matches. Secondly, there was no Spanish tag assigned to non-borrowings, that means that no credit was given if a Spanish token was identified as such.
<<</Results>>>
<<<Future Work>>>
This is an on-going project. The corpus we have just presented is a first step towards the development of an extractor of emerging anglicisms in the Spanish press. Future work includes: assessing whether to keep the OTHER tag, improving the baseline model (particularly to improve recall), assessing the suitability and contribution of different sets of features and exploring different models. In terms of the corpus development, the training set is now closed and stable, but the test set could potentially be increased in order to have more and more diverse anglicisms.
<<</Future Work>>>
<<<Conclusions>>>
In this paper we have presented a new corpus of 21,570 newspaper headlines written in European Spanish. The corpus is annotated with emergent anglicisms and, to the best of our knowledge, is the first corpus of this type to be released publicly. We have presented the annotation scope, tagset and guidelines, and we have introduced a CRF baseline model for anglicism extraction trained with the described corpus. The results obtained show that the corpus and baseline model are appropriate for automatic anglicism extraction.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nAnglicism: Scope of the Phenomenon\nCorpus description and annotation\nCorpus description\nMain Corpus\nSupplemental Test Set\nAnnotation guidelines\nBaseline Model\nResults\nFuture Work\nConclusions"
],
"type": "outline"
}
|
1910.00825
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Abstractive Dialog Summarization with Semantic Scaffolds
<<<Abstract>>>
The demand for abstractive dialog summary is growing in real-world applications. For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet)to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.
<<</Abstract>>>
<<<Introduction>>>
Summarization aims to condense a piece of text to a shorter version, retaining the critical information. On dialogs, summarization has various promising applications in the real world. For instance, an automatic doctor-patient interaction summary can save doctors a massive amount of time spent filling in medical records. There is also a general demand for summarizing meetings in order to track project progress in the industry. Generally, multi-party conversations with interactive communication are more difficult to summarize than single-speaker documents. Hence, dialog summarization is a promising direction in the summarization track.
There are two types of summarization: extractive and abstractive. Extractive summarization selects sentences or phrases directly from the source text and merges them into a summary, while abstractive summarization attempts to generate novel expressions to condense information. Previous dialog summarization research mostly studies extractive summarization BIBREF1, BIBREF2. Extractive methods merge selected important utterances from a dialog to form a summary. Because dialogs are highly dependent on their histories, it is difficult to produce coherent discourses with a set of non-consecutive conversation turns. Therefore, extractive summarization is not the best approach to summarize dialogs. However, most modern abstractive methods focus on single-speaker documents rather than dialogs due to the lack of dialog summarization corpora. Popular abstractive summarization datasets like CNN/Daily Mail BIBREF3 are built on news documents. The AMI meeting corpus BIBREF4 is the common benchmark, but it only has extractive summaries.
In this work, we introduce a dataset for abstractive dialog summarization based on MultiWOZ BIBREF0. Seq2Seq models such as Pointer-Generator BIBREF5 have achieved high-quality summaries of news documents. However, directly applying a news summarizer to dialog results in two drawbacks: informative entities such as place names are difficult to capture precisely, and contents in different domains are summarized unequally. To address these problems, we propose Scaffold Pointer Network (SPNet). SPNet incorporates three types of semantic scaffolds in dialog: speaker role, semantic slot, and dialog domain. Firstly, SPNet adapts separate encoders to the attentional Seq2Seq framework, producing distinct semantic representations for different speaker roles. Then, our method inputs delexicalized utterances to produce a delexicalized summary, and fills in slot values to generate the complete summary. Finally, we incorporate the dialog domain scaffold by jointly optimizing the dialog domain classification task along with the summarization task. We evaluate SPNet with both automatic and human evaluation metrics on MultiWOZ. SPNet outperforms Pointer-Generator BIBREF5 and Transformer BIBREF6 on all the metrics.
<<</Introduction>>>
<<<Related Work>>>
BIBREF7 first applied modern neural models to abstractive summarization. Their approach is based on Seq2Seq framework BIBREF8 and attention mechanism BIBREF9, achieving state-of-the-art results on Gigaword and DUC-2004 dataset. BIBREF10 proposed copy mechanism in summarization, demonstrating its effectiveness by combining the advantages of extractive and abstractive approach. BIBREF5 applied pointing BIBREF11 as copy mechanism and use coverage mechanism BIBREF12 to discourage repetition. Most recently, reinforcement learning (RL) has been employed in abstractive summarization. RL-based approaches directly optimize the objectives of summarization BIBREF13, BIBREF14. However, deep reinforcement learning approaches are difficult to train and more prone to exposure bias BIBREF15.
Recently, pre-training methods are popular in NLP applications. BERT BIBREF16 and GPT BIBREF17 have achieved state-of-the-art performance in many tasks, including summarization. For instance, BIBREF18 proposed a method to pre-train hierarchical document encoder for extractive summarization. BIBREF19 proposed two strategies to incorporate a pre-trained model (GPT) to perform the abstractive summarizer and achieved a better performance. However, there has not been much research on adapting pre-trained models to dialog summarization.
Dialog summarization, specifically meeting summarization, has been studied extensively. Previous work generally focused on statistical machine learning methods in extractive dialog summarization: BIBREF20 used skip-chain conditional random fields (CRFs) BIBREF21 as a ranking method in extractive meeting summarization. BIBREF22 compared support vector machines (SVMs) BIBREF23 with LDA-based topic models BIBREF24 for producing decision summaries. However, abstractive dialog summarization was less explored due to the lack of a suitable benchmark. Recent work BIBREF25, BIBREF26, BIBREF27 created abstractive dialog summary benchmarks with existing dialog corpus. BIBREF26 annotated topic descriptions in AMI meeting corpus as the summary. However, topics they defined are coarse, such as “industrial designer presentation". They also proposed a model with a sentence-gated mechanism incorporating dialog acts to perform abstractive summarization. Moreover, BIBREF28 first built a model to summarize audio-visual meeting data with an abstractive method. However, previous work has not investigated the utilization of semantic patterns in dialog, so we explore it in-depth in our work.
<<</Related Work>>>
<<<Proposed Method>>>
As discussed above, state-of-the-art document summarizers are not applicable in conversation settings. We propose Scaffold Pointer Network (SPNet) based on Pointer-Generator BIBREF5. SPNet incorporates three types of semantic scaffolds to improve abstractive dialog summarization: speaker role, semantic slot and dialog domain.
<<<Background>>>
We first introduce Pointer-Generator BIBREF5. It is a hybrid model of the typical Seq2Seq attention model BIBREF29 and pointer network BIBREF11. The Seq2Seq framework encodes the source sequence and generates the target sequence with the decoder. The input sequence is fed into the encoder token by token, producing the encoder hidden states $h_i$ in each encoding step. The decoder receives the word embedding of the previous word and generates a distribution to decide the target element in this step, retaining decoder hidden states $s_t$. In Pointer-Generator, the attention distribution $a^t$ is computed as in BIBREF9: $e_i^t = v^T \tanh (W_h h_i + W_s s_t + b_{attn})$ and $a^t = \mathrm{softmax}(e^t)$,
where $W_h$, $W_s$, $v$ and $b_{attn}$ are all learnable parameters.
With the attention distribution $a^t$, the context vector $h_t^*$ is computed as the weighted sum of the encoder's hidden states, $h_t^* = \sum _i a_i^t h_i$. The context vector is regarded as the attentional information in the source text.
Pointer-Generator differs from the typical Seq2Seq attention model in the generation process. The pointing mechanism combines copying words directly from the source text with generating words from a fixed vocabulary. The generation probability $p_{gen}$ is calculated as a “soft switch” to choose between copying and generation: $p_{gen} = \sigma (w_{h^*}^T h_t^* + w_s^T s_t + w_x^T x_t + b_{ptr})$,
where $x_t$ is the decoder input, $w_{h^*}$, $w_s$, $w_x$ and $b_{ptr}$ are all learnable parameters. $\sigma $ is sigmoid function, so the generation probability $p_{gen}$ has a range of $[0, 1]$.
The ability to select between copying and generation corresponds to a dynamic vocabulary. The pointer network forms an extended vocabulary for the copied tokens, including all the out-of-vocabulary (OOV) words that appear in the source text. The final probability distribution $P(w)$ over the extended vocabulary is computed as follows: $P(w) = p_{gen} P_{vocab}(w) + (1 - p_{gen}) \sum _{i: w_i = w} a_i^t$ with $P_{vocab} = \mathrm{softmax}(V^{\prime } (V [s_t; h_t^*] + b) + b^{\prime })$,
where $P_{vocab}$ is the distribution on the original vocabulary, $V^{\prime }$, $V$, $b$ and $b^{\prime }$ are learnable parameters used to calculate such distribution.
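As a concrete illustration of the copy mechanism just described, the sketch below shows one way to mix $P_{vocab}$ and the attention distribution into the final distribution over the extended vocabulary in PyTorch; the tensor names and shapes are assumptions for illustration, not the authors' implementation.

```python
import torch

def final_distribution(p_vocab, attn, p_gen, src_ext_ids, extended_vocab_size):
    """Mix generation and copy probabilities into P(w) over the extended vocabulary.

    p_vocab:     (batch, vocab_size)  softmax over the fixed vocabulary
    attn:        (batch, src_len)     attention distribution a^t
    p_gen:       (batch, 1)           soft switch between generating and copying
    src_ext_ids: (batch, src_len)     source token ids in the extended vocabulary
    """
    batch, vocab_size = p_vocab.size()
    dist = torch.zeros(batch, extended_vocab_size, device=p_vocab.device)
    dist[:, :vocab_size] = p_gen * p_vocab
    # Add (1 - p_gen) * a_i^t to the slot of every source token w_i (the copy term).
    dist.scatter_add_(1, src_ext_ids, (1.0 - p_gen) * attn)
    return dist

# Tiny shape check: each row of the mixed distribution sums to 1.
p_vocab = torch.softmax(torch.randn(2, 6), dim=-1)
attn = torch.softmax(torch.randn(2, 4), dim=-1)
p_gen = torch.sigmoid(torch.randn(2, 1))
src = torch.randint(0, 8, (2, 4))
print(final_distribution(p_vocab, attn, p_gen, src, 8).sum(dim=-1))
```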
<<</Background>>>
<<<Scaffold Pointer Network (SPNet)>>>
Our Scaffold Pointer Network (depicted in Figure FIGREF7) is based on Pointer-Generator BIBREF5. The contribution of SPNet is three-fold: separate encoding for different roles, incorporating semantic slot scaffold and dialog domain scaffold.
<<<Speaker Role Scaffold>>>
Our encoder-decoder framework employs separate encoding for different speakers in the dialog. User utterances $x_t^{usr}$ and system utterances $x_t^{sys}$ are fed into a user encoder and a system encoder separately to obtain encoder hidden states $h_{i}^{usr}$ and $h_{i}^{sys}$ . The attention distributions and context vectors are calculated as described in section SECREF1. In order to merge these two encoders in our framework, the decoder's hidden state $s_0$ is initialized as:
The pointing mechanism in our model follows the Equation DISPLAY_FORM4, and we obtain the context vector $h_t^{*}$:
<<</Speaker Role Scaffold>>>
<<<Semantic Slot Scaffold>>>
We integrate the semantic slot scaffold by performing delexicalization on the original dialogs. Delexicalization is a common pre-processing step in dialog modeling. Specifically, delexicalization replaces slot values with their semantic slot names (e.g. replacing 18:00 with [time]). It is easier for language modeling to process delexicalized texts, as they have a reduced vocabulary size. However, the generated sentences lack semantic information due to the delexicalization. Some previous dialog system research ignored this issue BIBREF30 or completed a single delexicalized utterance BIBREF31 as the generated response. We propose to perform delexicalization in dialog summarization, since delexicalized utterances can simplify dialog modeling. We then fill the slots in the generated templates with the copy and pointing mechanism.
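A minimal sketch of this pre-processing step is shown below, assuming the slot annotations are available as a mapping from slot names to surface values (as in the MultiWOZ belief spans); the helper names are hypothetical, and in the actual model the slots of the generated template are filled by the pointing mechanism rather than by naive string replacement.

```python
def delexicalize(utterance, slot_values):
    """Replace surface slot values with slot-name placeholders, e.g. 18:00 -> [time]."""
    for slot, value in slot_values.items():
        utterance = utterance.replace(value, f"[{slot}]")
    return utterance

def relexicalize(template, slot_values):
    """Naive illustration of filling a delexicalized template back in with values."""
    for slot, value in slot_values.items():
        template = template.replace(f"[{slot}]", value)
    return template

slots = {"time": "18:00", "restaurant_name": "Curry Garden"}
print(delexicalize("I booked Curry Garden at 18:00.", slots))
# -> "I booked [restaurant_name] at [time]."
```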
We first train the model with the delexicalized utterance. Attention distribution $a^t$ over the source tokens instructs the decoder to fill up the slots with lexicalized values:
Note that $w_{slot}$ specifies the tokens that represent the slot names (e.g. [hotel_place], [time]). The decoder directly copies the lexicalized value $value(w_i)$ conditioned on the attention distribution $a_i^t$. If $w$ is not a slot token, then the probability $P(w)$ is calculated as in Equation DISPLAY_FORM5.
<<</Semantic Slot Scaffold>>>
<<<Dialog Domain Scaffold>>>
We integrate dialog domain scaffold through a multi-task framework. Dialog domain indicates different conversation task content, for example, booking hotel, restaurant and taxi in MultiWOZ dataset. Generally, the content in different domains varies so multi-domain task summarization is more difficult than single-domain. We include domain classification as the auxiliary task to incorporate the prior that different domains have different content. Feedback from the domain classification task provides domain specific information for the encoder to learn better representations. For domain classification, we feed the concatenated encoder hidden state through a binary classifier with two linear layers, producing domain probability $d$. The $i^{th}$ element $d_i$ in $d$ represents the probability of the $i^{th}$ domain:
where $U$, $U^{\prime }$, $b_{d}$ and $b_{d}^{\prime }$ are all trainable parameters in the classifier. We denote the loss function of summarization as $loss_1$ and domain classification as $loss_2$. Assume target word at timestep $t$ is $w_t^{*}$, $loss_1$ is the arithmetic mean of the negative log likelihood of $w_t^{*}$ over the generated sequence:
The domain classification task is a multi-label binary classification problem. We use binary cross entropy loss between the $i^{th}$ domain label $\hat{d_i}$ and the predicted probability $d_i$ for this task:
where $|D|$ is the number of domains. Finally, we reweight the classification loss with the hyperparameter $\lambda $, and the objective function is $loss = loss_1 + \lambda \cdot loss_2$.
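The sketch below shows one way to combine the two losses in PyTorch, assuming the summarization loss is the mean token-level negative log likelihood and the domain loss is a multi-label binary cross entropy computed from pre-sigmoid classifier outputs; names and shapes are illustrative, and $\lambda = 0.5$ follows the value reported later in the paper.

```python
import torch
import torch.nn.functional as F

def joint_loss(token_logits, target_ids, domain_logits, domain_labels, lam=0.5, pad_id=0):
    """loss = loss_1 + lambda * loss_2 (an illustrative sketch, not the authors' code).

    token_logits:  (batch, tgt_len, vocab)  decoder outputs for the summary
    target_ids:    (batch, tgt_len)         gold summary token ids
    domain_logits: (batch, num_domains)     classifier outputs before the sigmoid
    domain_labels: (batch, num_domains)     multi-hot gold domain labels
    """
    # loss_1: mean negative log likelihood of the gold summary tokens.
    loss1 = F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        target_ids.reshape(-1),
        ignore_index=pad_id,
    )
    # loss_2: binary cross entropy over the |D| domain labels.
    loss2 = F.binary_cross_entropy_with_logits(domain_logits, domain_labels.float())
    return loss1 + lam * loss2
```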
<<</Dialog Domain Scaffold>>>
<<</Scaffold Pointer Network (SPNet)>>>
<<</Proposed Method>>>
<<<Experimental Settings>>>
<<<Dataset>>>
We validate SPNet on the MultiWOZ-2.0 dataset BIBREF0. MultiWOZ consists of multi-domain conversations between a tourist and an information center clerk on various booking tasks or domains, such as booking restaurants, hotels, taxis, etc. There are 10,438 dialogs, spanning seven domains. 3,406 of them are single-domain (8.93 turns on average) and 7,302 are multi-domain (15.39 turns on average). During MultiWOZ data collection, an instruction is provided for crowd workers to perform the task. We use the instructions as the dialog summaries, and an example is shown in Table TABREF25. The dialog domain labels are extracted from the existing MultiWOZ annotation. In the experiment, we split the dataset into 8,438 training, 1,000 validation, and 1,000 testing dialogs.
<<</Dataset>>>
<<<Evaluation Metrics>>>
ROUGE BIBREF32 is a standard metric for summarization, designed to measure the surface word alignment between a generated summary and a human written summary. We evaluate our model with ROUGE-1, ROUGE-2 and ROUGE-L. They measure the word-overlap, bigram-overlap, and longest common sequence between the reference summary and the generated summary respectively. We obtain ROUGE scores using the files2rouge package. However, ROUGE is insufficient to measure summarization performance. The following example shows its limitations:
Reference: You are going to [restaurant_name] at [time].
Summary: You are going to [restaurant_name] at.
In this case, the summary has a high ROUGE score, as it has a considerable proportion of word overlap with the reference summary. However, it still has poor relevance and readability, as it leaves out one of the most critical pieces of information: [time]. ROUGE treats each word equally in computing n-gram overlap while the informativeness actually varies: common words or phrases (e.g. “You are going to”) significantly contribute to the ROUGE score and readability, but they are almost irrelevant to essential contents. The semantic slot values (e.g. [restaurant_name], [time]) are more essential compared to other words in the summary. However, ROUGE does not take this into consideration. To address this drawback in ROUGE, we propose a new evaluation metric: Critical Information Completeness (CIC). Formally, CIC is a recall of semantic slot information between a candidate summary and a reference summary. CIC is defined as follows: $CIC = \frac{\sum _{v \in V} Count_{match}(v)}{m}$,
where $V$ stands for a set of delexicalized values in the reference summary, $Count_{match}(v)$ is the number of values co-occurring in the candidate summary and reference summary, and $m$ is the number of values in set $V$. In our experiments, CIC is computed as the arithmetic mean over all the dialog domains to retain the overall performance.
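The definition above amounts to a simple recall computation; the following sketch implements CIC for a single domain under the assumption that the reference slot values are available as a list, with the overall score taken as the arithmetic mean over domains as stated. The exact counting details of the released evaluation code may differ.

```python
def cic(candidate_summary, reference_values):
    """Recall of critical slot values: matched values over the number of reference values.

    reference_values: delexicalized values in the reference summary,
                      e.g. ["Curry Garden", "18:45", "Sunday"] (hypothetical example).
    """
    if not reference_values:
        return 0.0
    matched = sum(1 for v in reference_values if v in candidate_summary)
    return matched / len(reference_values)

def cic_overall(per_domain_scores):
    """Arithmetic mean of the per-domain CIC scores."""
    return sum(per_domain_scores) / len(per_domain_scores)

print(cic("You are going to Curry Garden at 18:45.", ["Curry Garden", "18:45", "Sunday"]))
# -> 0.666...
```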
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
<<</Evaluation Metrics>>>
<<<Implementation Details>>>
We implemented our baselines with the OpenNMT framework BIBREF33. We delexicalize utterances according to the belief span annotation. To maintain the generalizability of SPNet, we combine the slots that refer to the same information from different dialog domains into one slot (e.g. time). Instead of using pre-trained word embeddings like GloVe BIBREF34, we train word embeddings from scratch with a 128-dimension embedding layer. We set the hidden states of the bidirectional LSTM encoders to 256 dimensions, and the unidirectional LSTM decoder to 512 dimensions. Our model is optimized using Adam BIBREF35 with a learning rate of 0.001, $\beta _1=0.9$, $\beta _2=0.999$. We reduce the learning rate by half to avoid overfitting when the validation loss increases. We set the hyperparameter $\lambda $ to 0.5 in the objective function and the batch size to eight. We use beam search with a beam size of three during decoding. We use the validation set to select the model parameters. Our model with and without multi-task learning takes about 15 epochs and seven epochs to converge, respectively.
<<</Implementation Details>>>
<<</Experimental Settings>>>
<<<Results and Discussions>>>
<<<Automatic Evaluation Results>>>
To demonstrate SPNet's effectiveness, we compare it with two state-of-the-art methods, Pointer-Generator BIBREF5 and Transformer BIBREF6. Pointer-Generator is the state-of-the-art method in abstractive document summarization. In inference, we use length penalty and coverage penalty mentioned in BIBREF36. The hyperparameters in the original implementation BIBREF5 were used. Transformer uses attention mechanisms to replace recurrence for sequence transduction. Transformer generalizes well to many sequence-to-sequence problems, so we adapt it to our task, following the implementation in the official OpenNMT-py documentation.
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores, but relatively low CIC scores. This suggests that the baselines have more room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as our base model in SPNet because we observe that Transformer only has a small improvement over Pointer-Generator but incurs a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods on all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.
<<</Automatic Evaluation Results>>>
<<<Human Evaluation Results>>>
We also perform human evaluation to verify if our method's increased performance on automatic evaluation metrics entails better human perceived quality. We randomly select 100 test samples from MultiWOZ test set for evaluation. We recruit 150 crowd workers from Amazon Mechanical Turk. For each sample, we show the conversation, reference summary, as well as summaries generated by Pointer-Generator and SPNet to three different participants. The participants are asked to score each summary on three indicators: relevance, conciseness and readability on a 1 to 5 scale, and rank the summary pair (tie allowed).
We present the human evaluation results in Table TABREF27. In the scoring part, our model outperforms Pointer-Generator on all three evaluation metrics, scoring notably better on relevance and readability. All generated summaries are relatively concise; therefore, they score very similarly in conciseness. The ground truth is still perceived as more relevant and readable than the SPNet results. However, the ground truth does not get a high absolute score. From the evaluators' feedback, we found that they felt the ground truth did not cover all the necessary information in the conversation and that its description was not very natural. This motivates us to collect a dialog summarization dataset with high-quality human-written summaries in the future. Results of the ranking evaluation show larger differences between the summaries. SPNet outperforms Pointer-Generator by a large margin, and its performance is relatively close to the ground truth summary.
<<</Human Evaluation Results>>>
<<<Case study>>>
Table TABREF25 shows an example summary from all models along with the ground truth summary. We observe that Pointer-Generator ignores some essential fragments, such as the restaurant booking information (6 people, Sunday, 18:45). Missing information always belongs to the last several domains (restaurant in this case) in a multi-domain dialog. We also observe that separately encoding the two speakers reduces repetition and inconsistency. For instance, Pointer-Generator's summary mentions “free wifi” several times and has conflicting requirements on wifi. This is because dialogs have information redundancy, but a single-speaker model ignores this dialog property.
Our method has limitations. In the example shown in Table TABREF25, our summary does not mention the hotel name (Alexander Bed and Breakfast) and its address (517a Coldham Lane) referred to in the source. This occurs because the ground truth summary does not cover them in the training data. As a supervised method, it is hard for SPNet to generate a summary containing additional information beyond the ground truth. However, in some cases, SPNet can also correctly summarize content not covered in the reference summary (see Table TABREF31 in the Appendix).
Furthermore, although our SPNet achieves a much-improved performance, its application still needs extra annotations for the semantic scaffolds. For a dialog dataset, the speaker role scaffold is a natural pattern for modeling. Most multi-domain dialog corpora have domain annotations. For other texts, for example news, topic categories such as sports or entertainment can be used as the domain annotation. We find that the semantic slot scaffold brings the most significant improvement, but it is seldom explicitly annotated. However, the semantic slot scaffold can be relaxed to any critical entities in the corpus, such as team names in sports news or professional terminology in a technical meeting.
<<</Case study>>>
<<</Results and Discussions>>>
<<<Conclusion and Future Work>>>
We adapt a dialog generation dataset, MultiWOZ, to an abstractive dialog summarization dataset. We propose SPNet, an end-to-end model that incorporates the speaker role, semantic slot and dialog domain as semantic scaffolds to improve abstractive summary quality. We also propose an automatic evaluation metric, CIC, that considers semantic slot relevance to serve as a complementary metric to ROUGE. SPNet outperforms baseline methods on both automatic and human evaluation metrics. This suggests that involving semantic scaffolds efficiently improves abstractive summarization quality in the dialog setting.
Moreover, we can easily extend SPNet to other summarization tasks. We plan to apply semantic slot scaffold to news summarization. Specifically, we can annotate the critical entities such as person names or location names to ensure that they are captured correctly in the generated summary. We also plan to collect a human-human dialog dataset with more diverse human-written summaries.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nProposed Method\nBackground\nScaffold Pointer Network (SPNet)\nSpeaker Role Scaffold\nSemantic Slot Scaffold\nDialog Domain Scaffold\nExperimental Settings\nDataset\nEvaluation Metrics\nImplementation Details\nResults and Discussions\nAutomatic Evaluation Results\nHuman Evaluation Results\nCase study\nConclusion and Future Work"
],
"type": "outline"
}
|
1910.00458
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension
<<<Abstract>>>
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the learning task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets.
<<</Abstract>>>
<<<Introduction>>>
Building a system that comprehends text and answers questions is challenging but fascinating, which can be used to test the machine's ability to understand human language BIBREF0, BIBREF1. Many machine reading comprehension (MRC) based question answering (QA) scenarios and datasets have been introduced over the past few years, which differ from each other in various ways, including the source and format of the context documents, whether external knowledge is needed, the format of the answer, to name a few. We can divide these QA tasks into two categories: 1) extractive/abstractive QA such as SQuAD BIBREF2, and HotPotQA BIBREF3. 2) multiple-choice QA (MCQA) tasks such as MultiRC BIBREF4, and MCTest BIBREF5.
In comparison to extractive/abstractive QA tasks, the answers of the MCQA datasets are in the form of open, natural language sentences and not restricted to spans in text. Various question types exist such as arithmetic, summarization, common sense, logical reasoning, language inference, and sentiment analysis. Therefore it requires more advanced reading skills for the machine to perform well on this task. Table TABREF1 shows one example from one of MCQA datasets, DREAM BIBREF6. To answer the first question in Table TABREF1, the system needs to comprehend the whole dialogue and use some common sense knowledge to infer that such a conversation can only happen between classmates rather than brother and sister. For the second question, the implicit inference relationship between the utterance “You'll forget your head if you're not careful.” in the passage and the answer option “He is too careless.” must be figured out by the model to obtain the correct answer. Many MCQA datasets were collected from language or science exams, which were purposely designed by educational experts and consequently require non-trivial reasoning techniques BIBREF7. As a result, the performance of machine readers on these tasks can more accurately gauge comprehension ability of a model.
Recently large and powerful pre-trained language models such as BERT BIBREF8 have been achieving the state-of-the-art (SOTA) results on various tasks, however, its potency on MCQA datasets has been severely limited by the data insufficiency. For example, the MCTest dataset has two variants: MC160 and MC500, which are curated in a similar way, and MC160 is considered easier than MC500 BIBREF9. However, BERT-based models perform much worse on MC160 compared with MC500 (8–10% gap) since the data size of the former is about three times smaller. To tackle this issue, we investigate how to improve the generalization of BERT-based MCQA models with the constraint of limited training data using four representative MCQA datasets: DREAM, MCTest, TOEFL, and SemEval-2018 Task 11.
We proposed MMM, a Multi-stage Multi-task learning framework for Multi-choice question answering. Our framework involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset. For the first stage, we coarse-tuned our model with natural language inference (NLI) tasks. For the second multi-task fine-tuning stage, we leveraged the current largest MCQA dataset, RACE, as the in-domain source dataset and simultaneously fine-tuned the model on both source and target datasets via multi-task learning. Through extensive experiments, we demonstrate that the two-stage sequential fine-tuning strategy is the optimal choice for BERT-based model on MCQA datasets. Moreover, we also proposed a Multi-step Attention Network (MAN) as the top-level classifier instead of the typical fully-connected neural network for this task and obtained better performance. Our proposed method improves BERT-based baseline models by at least 7% in absolute accuracy for all the MCQA datasets (except the SemEval dataset that already achieves 88.1% for the baseline). As a result, by leveraging BERT and its variant, RoBERTa BIBREF10, our approach advanced the SOTA results for all the MCQA datasets, surpassing the previous SOTA by at least 16% in absolute accuracy (except the SemEval dataset).
<<</Introduction>>>
<<<Methods>>>
In MCQA, the inputs to the model are a passage, a question, and answer options. The passage, denoted as $P$, consists of a list of sentences. The question and each of the answer options, denoted by $Q$ and $O$, are both single sentences. A MCQA model aims to choose one correct answer from answer options based on $P$ and $Q$.
<<<Model Architecture>>>
Figure FIGREF3 illustrates the model architecture. Specifically, we concatenate the passage, question and one of the answer options into a long sequence. For a question with $n$ answer options, we obtain $n$ token sequences of length $l$. Afterwards, each sequence will be encoded by a sentence encoder to get the representation vector $H \in \mathbb {R}^{d\times l}$, which is then projected into a single value $p=C(H)$ ($p\in \mathbb {R}^{1}$) via a top-level classifier $C$. In this way, we obtain the logit vector $\mathbf {p}=[p_1,p_2,...,p_n]$ for all options of a question, which is then transformed into the probability vector through a softmax layer. We choose the option with highest logit value $p$ as the answer. Cross entropy loss is used as the loss function. We used the pre-trained bidirectional transformer encoder, i.e., BERT and RoBERTa as the sentence encoder. The top-level classifier will be detailed in the next subsection.
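The sketch below illustrates this scoring scheme in PyTorch, assuming a Hugging Face-style pre-trained encoder whose first output is the sequence of hidden states and a simple FCNN classifier over the leading token's representation (the MAN classifier described next operates on the full token sequence instead); names and shapes are illustrative.

```python
import torch
import torch.nn as nn

def score_options(encoder, classifier, input_ids, attention_mask, labels=None):
    """Score n answer options per question and train with cross entropy.

    input_ids, attention_mask: (batch, n_options, seq_len), one concatenated
    passage + question + option token sequence per answer option.
    classifier: e.g. nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, 1)).
    """
    b, n, l = input_ids.size()
    hidden = encoder(input_ids.view(b * n, l),
                     attention_mask=attention_mask.view(b * n, l))[0]   # (b*n, l, d)
    pooled = hidden[:, 0, :]                       # representation of the leading token
    logits = classifier(pooled).view(b, n)         # one logit p per answer option
    probs = torch.softmax(logits, dim=-1)          # distribution over the n options
    loss = nn.CrossEntropyLoss()(logits, labels) if labels is not None else None
    return probs, loss
```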
<<</Model Architecture>>>
<<<Multi-step Attention Network>>>
For the top-level classifier upon the sentence encoder, the simplest choice is a two-layer fully-connected neural network (FCNN), which consists of one hidden layer with $tanh$ activation and one output layer without activation. This has been widely adopted when BERT is fine-tuned for downstream classification tasks and performs very well BIBREF8. Inspired by the success of the attention network widely used in span-based QA tasks BIBREF11, we propose the multi-step attention network (MAN) as our top-level classifier. Similar to the dynamic or multi-hop memory network BIBREF12, BIBREF13, MAN maintains a state and iteratively refines its prediction via multi-step reasoning.
The MAN classifier works as follows. A pair of question and answer option together is considered as a whole segment, denoted as $QO$. Suppose the sequence length of the passage is $p$ and that of the question and option pair is $q$. We first construct the working memory of the passage $H^P\in \mathbb {R}^{d\times p}$ by extracting the hidden state vectors of the tokens that belong to $P$ from $H$ and concatenating them together in the original sequence order. Similarly, we obtain the working memory of the (question, option) pair, denoted as $H^{QO}\in \mathbb {R}^{d\times q}$. Alternatively, we can also encode the passage and (question, option) pair individually to get their representation vectors $H^P$ and $H^{QO}$, but we found that processing them in a pair performs better.
We then perform $K$-step reasoning over the memory to output the final prediction. The initial state $\mathbf {s}^0$ at step 0 is the summary of $H^P$ via self-attention: $\mathbf {s}^0=\sum _i \alpha _i H_i^P$, where $\alpha _i=\frac{exp(w_1^TH_i^P)}{\sum _j exp(w_1^TH_j^P)}$. In the following steps $k \in {1,2,...,K-1}$, the state is calculated by:
where $\mathbf {x}^k=\sum _i\beta _iH_i^{QO}$ and $\beta _i=\frac{exp(w_2^T[\mathbf {s}^{k-1};H_i^{QO}])}{\sum _j exp(w_2^T[\mathbf {s}^{k-1};H_j^{QO}])}$. Here $[x;y]$ is the concatenation of the vectors $x$ and $y$. The final logit value is determined using the last step state:
Basically, the MAN classifier calculates the attention scores between the passage and (question, option) pair step by step dynamically such that the attention can refine itself through several steps of deliberation. The attention mechanism can help filter out irrelevant information in the passage against (question, option) pair.
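A minimal sketch of this classifier in PyTorch is given below; the GRU-cell state update, the single-vector output projection, and the tensor shapes are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MANClassifier(nn.Module):
    """Multi-step attention over passage memory H^P and (question, option) memory H^QO."""
    def __init__(self, d, k_steps=5):
        super().__init__()
        self.w1 = nn.Linear(d, 1, bias=False)       # self-attention over H^P for the initial state
        self.w2 = nn.Linear(2 * d, 1, bias=False)   # attention over H^QO conditioned on the state
        self.cell = nn.GRUCell(d, d)                # assumed state-update function
        self.out = nn.Linear(d, 1)                  # assumed projection from the final state to a logit
        self.k_steps = k_steps

    def forward(self, h_p, h_qo):
        # h_p: (batch, p, d) passage memory; h_qo: (batch, q, d) question+option memory
        alpha = F.softmax(self.w1(h_p).squeeze(-1), dim=-1)          # (batch, p)
        s = torch.bmm(alpha.unsqueeze(1), h_p).squeeze(1)            # initial state s^0: (batch, d)
        for _ in range(self.k_steps):
            s_exp = s.unsqueeze(1).expand(-1, h_qo.size(1), -1)      # broadcast the state over q positions
            beta = F.softmax(self.w2(torch.cat([s_exp, h_qo], -1)).squeeze(-1), dim=-1)
            x = torch.bmm(beta.unsqueeze(1), h_qo).squeeze(1)        # attended (question, option) summary x^k
            s = self.cell(x, s)                                      # refine the state
        return self.out(s)                                           # logit p for this option
```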
<<</Multi-step Attention Network>>>
<<<Two Stage Training>>>
We adopt a two-stage procedure to train our model with both in-domain and out-of-domain datasets as shown in Figure FIGREF10.
<<<Coarse-tuning Stage>>>
We first fine-tune the sentence encoder of our model on natural language inference (NLI) tasks. For exploration, we also tried fine-tuning the sentence encoder on other types of tasks such as sentiment analysis, paraphrasing, and span-based question answering at this stage. However, we found that only the NLI task shows robust and significant improvements for our target multi-choice task. See Section SECREF5 for details.
<<</Coarse-tuning Stage>>>
<<<Multi-task Learning Stage>>>
After the coarse-tuning stage, we simultaneously fine-tune our model on a large in-domain source dataset and the target dataset together via multi-task learning. We share all model parameters, including the sentence encoder as well as the top-level classifier, between these two datasets.
<<</Multi-task Learning Stage>>>
<<</Two Stage Training>>>
<<</Methods>>>
<<<Experimental Setup>>>
<<<Datasets>>>
We use four MCQA datasets as the target datasets: DREAM BIBREF6, MCTest BIBREF9, TOEFL BIBREF5, and SemEval-2018 Task 11 BIBREF14, which are summarized in Table TABREF11. For the first coarse-tuning stage with NLI tasks, we use MultiNLI BIBREF15 and SNLI BIBREF16 as the out-of-domain source datasets. For the second stage, we use the current largest MCQA dataset, i.e., RACE BIBREF7 as in-domain source dataset. For all datasets, we use the official train/dev/test splits.
<<</Datasets>>>
<<<Speaker Normalization>>>
Passages in the DREAM dataset are dialogues between two or more persons. Every utterance in a dialogue starts with the speaker name. For example, in the utterance “m: How would he know?”, “m” is the abbreviation of “man”, indicating that this utterance is from a man. More than 90% of utterances have the speaker names “w,” “f,” and “m,” which are all abbreviations. However, the speaker names mentioned in the questions are full names such as “woman” and “man.” In order to make it clear for the model which speaker the question is asking about, we used a speaker normalization strategy, replacing “w” or “f” with “woman” and “m” with “man” for the speaker names in the utterances. We found this simple strategy to be quite effective, providing a 1% improvement. We always use this strategy for the DREAM dataset for our method unless explicitly stated otherwise.
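As an illustration (the exact abbreviation mapping and utterance format below are assumptions based on the example above, not the authors' code), the normalization can be a simple per-utterance substitution:

```python
import re

# Assumed abbreviation-to-full-name mapping for DREAM speaker tags.
SPEAKER_MAP = {"m": "man", "w": "woman", "f": "woman"}

def normalize_speakers(utterance: str) -> str:
    """Replace an abbreviated speaker name at the start of an utterance, e.g. 'm: ...' -> 'man: ...'."""
    match = re.match(r"^\s*([a-zA-Z])\s*:", utterance)
    if match and match.group(1).lower() in SPEAKER_MAP:
        full = SPEAKER_MAP[match.group(1).lower()]
        return re.sub(r"^\s*[a-zA-Z]\s*:", full + ":", utterance, count=1)
    return utterance

print(normalize_speakers("m: How would he know?"))  # -> "man: How would he know?"
```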
<<</Speaker Normalization>>>
<<<Multi-task Learning>>>
For the multi-task learning stage, at each training step, we randomly selected a dataset from the two datasets (RACE and the target dataset) and then randomly fetched a batch of data from that dataset to train the model. This process was repeated until the predefined maximum number of steps was reached or the early stopping criterion was met. We adopted the proportional sampling strategy, where the probability of sampling a task is proportional to the relative size of the corresponding dataset compared to the cumulative size of all datasets BIBREF17.
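A minimal sketch of this sampling scheme is shown below; the dataset handles and the training step are placeholders, not the authors' code.

```python
import random

def multitask_train(datasets, train_step, max_steps):
    """datasets: dict name -> list of batches; tasks are sampled proportionally to dataset size."""
    names = list(datasets)
    sizes = [len(datasets[n]) for n in names]
    total = sum(sizes)
    weights = [s / total for s in sizes]              # proportional sampling probabilities
    for step in range(max_steps):
        task = random.choices(names, weights=weights, k=1)[0]
        batch = random.choice(datasets[task])         # randomly fetch a batch from the chosen dataset
        train_step(batch)                             # shared model parameters are updated on this batch
```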
<<</Multi-task Learning>>>
<<<Training Details>>>
We used a linear learning rate decay schedule with a warm-up proportion of $0.1$. We set the dropout rate to $0.1$. The maximum sequence length is set to 512. We clipped the gradient norm to 5 for the DREAM dataset and 0 for the other datasets. The learning rate and number of training epochs vary for different datasets and encoder types, and are summarized in Section 1 of the Supplementary Material.
In the TOEFL dataset, more than 90% of passages have more than 512 words, which exceeds the maximum sequence length that BERT supports, so we cannot process the whole passage within one forward pass. To solve this issue, we propose a sliding window strategy, in which we split the long passage into several snippets of length 512 with overlaps between subsequent snippets; each snippet from the same passage is assigned the same label. In the training phase, all snippets are used for training, and in the inference phase, we aggregate the logit vectors of all snippets from the same passage and pick the option with the highest logit value as the prediction. In experiments, we found an overlap of 256 words to be optimal, which improves the BERT-Base model from an accuracy of 50.0% to 53.2%. We adopted this sliding window strategy only for the TOEFL dataset.
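A sketch of the splitting and aggregation steps is given below; summation over snippet logits is an assumption, since the text only states that the logit vectors are aggregated.

```python
import numpy as np

def sliding_windows(tokens, window=512, overlap=256):
    """Split a long token list into overlapping snippets; each snippet keeps the passage's label."""
    step = window - overlap
    return [tokens[i:i + window] for i in range(0, max(len(tokens) - overlap, 1), step)]

def aggregate_option_logits(snippet_logits):
    """snippet_logits: (num_snippets, num_options) per-snippet logits for one passage.
    Aggregation by summation is an assumption made for illustration."""
    combined = np.asarray(snippet_logits).sum(axis=0)
    return int(np.argmax(combined))   # index of the predicted option
```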
<<</Training Details>>>
<<</Experimental Setup>>>
<<<Results>>>
We first evaluate our method on the DREAM dataset. The results are summarized in Table TABREF16. In the table, we first report the accuracy of the SOTA models in the leaderboard. We then report the performance of our re-implementation of fine-tuned models as another set of strong baselines, among which the RoBERTa-Large model has already surpassed the previous SOTA. For these baselines, the top-level classifier is a two-layer FCNN for BERT-based models and a one-layer FCNN for the RoBERTa-Large model. Lastly, we report the performance of models that use all of our proposed methods, MMM (MAN classifier + speaker normalization + two-stage learning strategy). As direct comparisons, we also list the accuracy increment between MMM and the baseline with the same sentence encoder, marked in parentheses, from which we can see that the performance gain is over 9% for BERT-Base and BERT-Large. Although the RoBERTa-Large baseline has already outperformed the BERT-Large baseline by around 18%, MMM gives us another $\sim $4% improvement, pushing the accuracy closer to human performance. Overall, MMM has achieved a new SOTA, i.e., a test accuracy of 88.9%, which exceeds the previous best by 16.9%.
We also test our method on three other MCQA datasets: MCTest including MC160 and MC500, TOEFL, and SemEval-2018 Task 11. The results are summarized in Table TABREF17. Similarly, we list the previous SOTA models with their scores for comparison. We compared our method with the baselines that use the same sentence encoder. Except for the SemEval dataset, our method can improve the BERT-Large model by at least 10%. For both the MCTest and SemEval datasets, our best scores are very close to the reported human performance. The MC160 and MC500 datasets were curated in almost the same way BIBREF9, with the only difference being that MC160 is around three times smaller than MC500. We can see from Table TABREF17 that both the BERT and RoBERTa baselines perform much worse on MC160 than on MC500. We think the reason is that the data size of MC160 is not sufficient to properly fine-tune the large models with their huge number of trainable parameters. However, by leveraging the transfer learning techniques we proposed, we can significantly improve the generalization capability of the BERT and RoBERTa models on the small datasets, so that the best performance on MC160 can even surpass that on MC500. This demonstrates the effectiveness of our method.
To better understand why MMM is successful, we conducted an ablation study by removing one feature at a time on the BERT-Base model. The results are shown in Table TABREF18. We see that the removal of the second-stage multi-task learning part hurts our method most significantly, indicating that the majority of the improvement comes from the knowledge transferred from the in-domain dataset. The first stage of coarse-tuning using NLI datasets is also very important, providing the model with enhanced language inference ability. As for the top-level classifier, i.e., the MAN module, if we replace it with a typical two-layer FCNN as in BIBREF8, we have a 1–2% performance drop. Lastly, for the DREAM dataset, the speaker normalization strategy gives us another $\sim $1% improvement.
<<</Results>>>
<<<Discussion>>>
<<<Why does natural language inference help?>>>
As shown in Table TABREF18, coarse-tuning on NLI tasks can help improve the performance of MCQA. We conjecture one of the reasons is that, in order to pick the correct answer, we need to rely on the language inference capability in many cases. As an example in Table TABREF1, the utterance highlighted in the bold and italic font in the dialogue is the evidence sentence from which we can obtain the correct answer to Question 2. There is no token overlap between the evidence sentence and the correct answer, indicating that the model cannot solve this question by surface matching. Nevertheless, the correct answer is an entailment to the evidence sentence while the wrong answers are not. Therefore, the capability of language inference enables the model to correctly predict the answer. On the other hand, we can deem the passage and the pair of (question, answer) as a pair of premise and hypothesis. Then the process of choosing the right answer to a certain question is similar to the process of choosing the hypothesis that can best entail the premise. In this sense, the part of MCQA task can be deemed as a NLI task. This also agrees with the argument that NLI is a fundamental ability of a natural language processing model and it can help support other tasks that require higher level of language processing abilities BIBREF21. We provided several more examples that require language inference reading skills in the Section 2 of the Supplementary Material; they are wrongly predicted by the BERT-Base baseline model but can be correctly solved by exposing the model to NLI data with the coarse-tuning stage.
<<</Why does natural language inference help?>>>
<<<Can other tasks help with MCQA?>>>
By analyzing the MCQA datasets, we found that some questions ask about the attitude of one person towards something, and in some cases, the correct answer is simply a paraphrase of the evidence sentence in the passage. This finding naturally leads to the question: could other kinds of tasks, such as sentiment classification and paraphrasing, also help with MCQA problems?
To answer this question, we select several representative datasets for five categories as the upstream tasks: sentiment analysis, paraphrase, span-based QA, NLI, and MCQA. We conduct experiments where we first train the BERT-Base models on each of the five categories and then further fine-tune our models on the target dataset: DREAM and MC500 (MCTest-MC500). For the sentiment analysis category, we used the Stanford Sentiment Treebank (SST-2) dataset from the GLUE benchmark BIBREF22 (around 60k train examples) and the Yelp dataset (around 430k train examples). For the paraphrase category, three paraphrasing datasets are used from the GLUE benchmark: Microsoft Research Paraphrase Corpus (MRPC), Semantic Textual Similarity Benchmark (STS-B), and Quora Question Pairs (QQP), which are denoted as “GLUE-Para.”. For span-based QA, we use SQuAD 1.1, SQuAD 2.0, and MRQA, which is a joint dataset including six popular span-based QA datasets. Table TABREF23 summarizes the results. We see that the sentiment analysis datasets do not help much with our target MCQA datasets. But the paraphrase datasets do bring some improvements for MCQA. For span-based QA, only SQuAD 2.0 helps to improve the performance on the target dataset. Interestingly, although MRQA is much larger than the other QA datasets (at least six times larger), it makes the performance worst. This suggests that span-based QA might not be an appropriate source task for transfer learning for MCQA. We hypothesize this could be due to the fact that most of the questions are non-extractive (e.g., 84% of questions in DREAM are non-extractive) while all answers are extractive in the span-based QA datasets.
For the completeness of our experiments, we also used various NLI datasets: MultiNLI, SNLI, Question NLI (QNLI), Recognizing Textual Entailment (RTE), and Winograd NLI (WNLI) from the GLUE benchmark. We used them in three kinds of combinations: MultiNLI alone, MultiNLI plus SNLI, denoted as “NLI”, and all five datasets combined, denoted as “GLUE-NLI”. As the results in Table TABREF23 show, NLI and GLUE-NLI are comparable and both can improve the target dataset by a large margin.
Lastly, among all these tasks, using the MCQA task itself, i.e., pretraining on the RACE dataset, boosts the performance the most. This result agrees with the intuition that an in-domain dataset can be the most ideal data for transfer learning.
In conclusion, we find that for out-of-domain datasets, the NLI datasets can be most helpful to the MCQA task, indicating that the natural language inference capability should be an important foundation of the MCQA systems. Besides, a larger in-domain dataset, i.e. another MCQA dataset, can also be very useful.
<<</Can other tasks help with MCQA?>>>
<<<NLI dataset helps with convergence>>>
The first stage of coarse-tuning with NLI data can not only improve the accuracy but also help the model converge faster and better. Especially for the BERT-Large and RoBERTa-Large models that have a much larger number of trainable parameters, convergence is very sensitive to the optimization settings. However, with the help of NLI datasets, convergence for large models is no longer an issue, as shown in Figure FIGREF25. Under the same optimization hyper-parameters, compared with the baseline, coarse-tuning makes the training loss of the BERT-Base model decrease much faster. More importantly, for the BERT-Large model, without coarse-tuning, the model does not converge at all during the first several epochs, which can be completely resolved with the help of NLI data.
<<</NLI dataset helps with convergence>>>
<<<Multi-stage or Multi-task>>>
In a typical scenario where we have one source and one target dataset, we naturally have a question about whether we should simultaneously train a model on them via multi-task learning or first train on the source dataset then on the target sequentially. Many previous works adopted the latter way BIBREF19, BIBREF20, BIBREF23 and BIBREF20 demonstrated that the sequential fine-tuning approach outperforms the multi-task learning setting in their experiments. However, we had contradictory observations in our experiments. Specifically, we conducted a pair of control experiments: one is that we first fine-tune the BERT-Base model on the source dataset RACE and then further fine-tune on the target dataset, and the other is that we simultaneously train the model on RACE and the target dataset via multi-task learning. The comparison results are shown in Table TABREF27. We see that compared with sequential fine-tuning, the multi-task learning achieved better performance. We conjecture that in the sequential fine-tuning setting, while the model is being fine-tuned on the target dataset, some information or knowledge learned from the source dataset may be lost since the model is no longer exposed to the source dataset in this stage. In comparison, this information can be kept in the multi-task learning setting and thus can better help improve the target dataset.
Now that the multi-task learning approach outperforms the sequential fine-tuning setting, we naturally arrive at another question: what if we merged the coarse-tuning and multi-task learning stages together? That is, what if we simultaneously trained on the NLI, source, and target datasets altogether under the multi-task learning framework? We also conducted a pair of control experiments for investigation. The results in Table TABREF27 show that casting the fine-tuning process on three datasets into separate stages performs better, indicating that multi-stage training is also necessary. This verifies our MMM framework with coarse-tuning on out-of-domain datasets and fine-tuning on in-domain datasets.
<<</Multi-stage or Multi-task>>>
<<<Multi-steps reasoning is important>>>
Previous results show that the MAN classifier provides an improvement over the FCNN classifier, but we are also interested in how the performance changes when varying the number of reasoning steps $K$, as shown in Figure FIGREF29. $K=0$ means that we do not use MAN but FCNN as the classifier. We observe a gradual improvement as we increase $K$ from 1 to 5, but after 5 steps the improvement saturates. This verifies that an appropriate number of reasoning steps is important for the memory network to show its benefits.
<<</Multi-steps reasoning is important>>>
<<<Could the source dataset be benefited?>>>
So far we have been discussing the case where we do multi-task learning with the source dataset RACE and various much smaller target datasets to help improve the targets. We also want to see whether our proposed techniques can benefit the source dataset itself. Table TABREF31 summarizes the results of the BERT-Base model on the RACE dataset obtained by adding the coarse-tuning stage, adding the multi-task training together with DREAM, and adding the MAN module. From this table, we see that all three techniques bring improvements over the baseline model for the source dataset RACE, among which the NLI coarse-tuning stage helps elevate the scores the most.
Since we found all parts of MMM can work well for the source dataset, we tried to use them to improve the accuracy on RACE. The results are shown in Table TABREF32. We used four kinds of pre-trained sentence encoders: BERT-Base, BERT-Large, XLNet-Large, and RoBERTa-Large. For each encoder, we listed the official report of scores from the leaderboard. Compared with the baselines, MMM leads to improvements ranging from 0.5% to 3.0% in accuracy. Our best result is obtained by the RoBERTa-Large encoder.
<<</Could the source dataset be benefited?>>>
<<<Error Analysis>>>
In order to investigate how well our model performs for different types of questions, we did an error analysis by first randomly selecting 150 samples that had wrong predictions by the BERT-Base baseline model from the development set of DREAM dataset. We then manually classified them into several question types, as shown in Table TABREF34. The annotation criterion is described in the Section 3 of the Supplementary Material. We see that the BERT-Base baseline model still does not do well on matching problems. We then evaluate our best model on these samples and report the accuracy of each question type in the last column of Table TABREF34. We find that our best model can improve upon every question type significantly especially for the matching problems, and most surprisingly, our best model can even greatly improve its ability on solving the arithmetic problems, achieving the accuracy of 73.7%.
However, could our model really do math? To investigate this question, we sampled some arithmetic questions that are correctly predicted by our model, made small alterations to the passage or question, and then checked whether our model could still make correct choices. We found our model is very fragile to these minor alterations, indicating that the model is actually not that good at arithmetic problems. We provide one interesting example in Section 3 of the Supplementary Material.
<<</Error Analysis>>>
<<</Discussion>>>
<<<Related Work>>>
There is increasing interest in machine reading comprehension (MRC) for question answering (QA). The extractive QA tasks primarily focus on locating text spans from the given document/corpus to answer questions BIBREF2. Answers in abstractive datasets such as MS MARCO BIBREF24, SearchQA BIBREF25, and NarrativeQA BIBREF26 are human-generated and based on source documents or summaries in free-text format. However, since annotators tend to copy spans as answers BIBREF27, the majority of answers are still extractive in these datasets. The multi-choice QA datasets are collected either via crowd-sourcing or from examinations designed by educational experts BIBREF7. In this type of QA dataset, besides token matching, a significant portion of questions require multi-sentence reasoning and external knowledge BIBREF5.
Progress of research for MRC first relies on the breakthrough of the sentence encoder, from the basic LSTM to the pre-trained transformer based model BIBREF8, which has elevated the performance of all MRC models by a large margin. Besides, the attention mechanisms between the context and the query can empower the neural models with higher performance BIBREF11. In addition, some techniques such as answer verification BIBREF28, multi-hop reasoning BIBREF29, and synthetic data augmentation can be also helpful.
Transfer learning has been widely proved to be effective across many domains in NLP. In the QA domain, the most well-known example of transfer learning is fine-tuning a pre-trained language model such as BERT on downstream QA datasets such as SQuAD BIBREF8. Besides, multi-task learning can also be deemed a type of transfer learning, since during the training of multiple datasets from different domains for different tasks, knowledge is shared and transferred between tasks, which has been used to build a generalized QA model BIBREF30. However, no previous work has investigated whether knowledge from NLI datasets can also be transferred to improve the MCQA task.
<<</Related Work>>>
<<<Conclusions>>>
We propose MMM, a multi-stage multi-task transfer learning method on the multiple-choice question answering tasks. Our two-stage training strategy and the multi-step attention network achieved significant improvements for MCQA. We also did detailed analysis to explore the importance of both our training strategies as well as different kinds of in-domain and out-of-domain datasets. We hope our work here can also shed light on new directions for other NLP domains.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMethods\nModel Architecture\nMulti-step Attention Network\nTwo Stage Training\nCoarse-tuning Stage\nMulti-task Learning Stage\nExperimental Setup\nDatasets\nSpeaker Normalization\nMulti-task Learning\nTraining Details\nResults\nDiscussion\nWhy does natural language inference help?\nCan other tasks help with MCQA?\nNLI dataset helps with convergence\nMulti-stage or Multi-task\nMulti-steps reasoning is important\nCould the source dataset be benefited?\nError Analysis\nRelated Work\nConclusions"
],
"type": "outline"
}
|
2001.11268
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks
<<<Abstract>>>
This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent neural network architectures based on transformers show capacities for transfer learning and increased performance on downstream natural language processing tasks such as universal reading comprehension, brought forward by this architecture's use of contextualized word embeddings and self-attention mechanisms. This paper contributes to solving problems related to ambiguity in PICO sentence prediction tasks, as well as highlighting how annotations for training named entity recognition systems are used to train a high-performing, but nevertheless flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient amounts of training annotations for PICO entity extraction is tackled by augmentation. All models in this paper were created with the aim to support systematic review (semi)automation. They achieve high F1 scores, and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
<<</Abstract>>>
<<<INTRODUCTION>>>
Systematic reviews (SR) of randomized controlled trials (RCTs) are regarded as the gold standard for providing information about the effects of interventions to healthcare practitioners, policy makers and members of the public. The quality of these reviews is ensured through a strict methodology that seeks to include all relevant information on the review topic BIBREF0.
A SR, as produced by the quality standards of Cochrane, is conducted to appraise and synthesize all research for a specific research question, therefore providing access to the best available medical evidence where needed BIBREF1. The research question is specified using the PICO (population; intervention; comparator; outcomes) framework. The researchers conduct very broad literature searches in order to retrieve every piece of clinical evidence that meets their review's inclusion criteria, commonly all RCTs of a particular healthcare intervention in a specific population. In a search, no piece of relevant information should be missed. In other words, the aim is to achieve a recall score of one. This implies that the searches are broad BIBREF2, and authors are often left to screen a large number of abstracts manually in order to identify a small fraction of relevant publications for inclusion in the SR BIBREF3.
The number of RCTs is increasing, and with it increases the potential number of reviews and the amount of workload that is implied for each. Research on the basis of PubMed entries shows that both the number of publications and the number of SRs increased rapidly in the last ten years BIBREF4, which is why acceleration of the systematic reviewing process is of interest in order to decrease working hours of highly trained researchers and to make the process more efficient.
In this work, we focus on the detection and annotation of information about the PICO elements of RCTs described in English PubMed abstracts. In practice, the comparators involved in the C of PICO are just additional interventions, so we often refer to PIO (populations; interventions; outcomes) rather than PICO. Focus points for the investigation are the problems of ambiguity in labelled PIO data, integration of training data from different tasks and sources and assessing our model's capacity for transfer learning and domain adaptation.
Recent advances in natural language processing (NLP) offer the potential to be able to automate or semi-automate the process of identifying information to be included in a SR. For example, an automated system might attempt to PICO-annotate large corpora of abstracts, such as RCTs indexed on PubMed, or assess the results retrieved in a literature search and predict which abstract or full text article fits the inclusion criteria of a review. Such systems need to be able to classify and extract data of interest. We show that transformer models perform well on complex data-extraction tasks. Language models are moving away from the semantic, but static representation of words as in Word2Vec BIBREF5, hence providing a richer and more flexible contextualized representation of input features within sentences or long sequences of text.
The rest of this paper is organized as follows. The remainder of this section introduces related work and the contributions of our work. Section 2 describes the process of preparing training data, and introduces approaches to fine-tuning for sentence classification and question answering tasks. Results are presented in section 3, and section 4 includes a critical evaluation and implications for practice.
<<<Tools for SR automation and PICO classification>>>
The website systematicreviewtools.com BIBREF6 lists 36 software tools for study selection to date. Some tools are intended for organisational purposes and do not employ PICO classification, such as Covidence BIBREF7. The tool Rayyan uses support vector machines BIBREF8. RobotReviewer uses neural networks, word embeddings and recently also a transformer for named entity recognition (NER) BIBREF9. Question answering systems for PICO data extraction exist based on matching words from knowledge bases, hand-crafted rules and naïve Bayes classification, both on entity and sentence level BIBREF10, BIBREF11, but commonly focus on providing information to practicing clinicians rather than systematic reviewers BIBREF12.
In the following we introduce models related to our sentence and entity classification tasks and the data on which our experiments are based. We made use of previously published training and testing data in order to ensure comparability between models.
<<</Tools for SR automation and PICO classification>>>
<<<Sentence classification data>>>
In the context of systematic review (semi)automation, sentence classification can be used in the screening process by highlighting relevant pieces of text. A long short-term memory (LSTM) neural network trained with sentences of structured abstracts from PubMed was published in 2018 BIBREF13. It uses a pre-trained Word2Vec embedding in order to represent each input word as a fixed vector. Due to the costs associated with labelling, its authors acquired sentence labels via automated annotation. Seven classes were assigned on the basis of structured headings within the text of each abstract. Table TABREF4 provides an overview of class abbreviations and their meaning. In the following we refer to this corpus as the PubMed data.
The LSTM itself yields impressive results, with F1 scores for annotation of up to 0.85 for PIO elements; it generalizes across domains and assigns one label per sentence. We were able to confirm these scores by replicating a local version of this model.
<<</Sentence classification data>>>
<<<Question answering data>>>
<<<SQuAD>>>
The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset for machine learning tasks. It contains question contexts, questions and answers and is available in two versions. The older version contains only questions that can be answered based on the given context. In its newer version, the dataset also contains questions which can not be answered on the basis of the given context. The SQuAD creators provide an evaluation script, as well as a public leader board to compare model performances BIBREF14.
<<</SQuAD>>>
<<<Ebm-nlp>>>
In the PICO domain, the potential of NER was shown by Nye and colleagues using transformers, as well as LSTMs and conditional random fields BIBREF15. In the following, we refer to these data as the ebm-nlp corpus. The ebm-nlp corpus provided us with 5000 tokenized and annotated RCT abstracts for training, and 190 expert-annotated abstracts for testing. Annotations in this corpus include PIO classes, as well as more detailed information such as age, gender or medical condition. We adapted the human-annotated ebm-nlp corpus of abstracts for training our QA-BERT question answering system.
<<</Ebm-nlp>>>
<<</Question answering data>>>
<<<Introduction to transformers>>>
In the following, the bidirectional encoder representations from transformers (BERT) architecture is introduced BIBREF16. This architecture's key strengths are rooted in both feature representation and training. A good feature representation is essential to ensure any model's performance, but often data sparsity in the unsupervised training of embedding mechanisms leads to losses in overall performance. By employing a word piece vocabulary, BERT eliminated the problem of previously unseen words. Any word that is not present in the initial vocabulary is split into a sub-word vocabulary. Especially in the biomedical domain this enables richer semantic representations of words describing rare chemical compounds or conditions. A relevant example is the phrase ’two drops of ketorolac tromethamine’, where the initial three words stay intact, while the last words are tokenized to ’ket’, ’#oro’, ’#lac’, ’tro’, ’#meth’, ’#amine’, hence enabling the following model to focus on relevant parts of the input sequence, such as syllables that indicate chemical compounds. When obtaining a numerical representation for its inputs, transformers apply a ’self-attention’ mechanism, which leads to a contextualized representation of each word with respect to its surrounding words.
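For readers who want to inspect this behaviour, a snippet along the following lines shows the word-piece splitting (the checkpoint name is an example, and the exact sub-word boundaries depend on the vocabulary shipped with the chosen model):

```python
# Requires the `transformers` package.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
pieces = tokenizer.tokenize("two drops of ketorolac tromethamine")
print(pieces)  # common words stay whole; rare terms are split into '##'-prefixed sub-word pieces
```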
BERT's weights are pre-trained in an unsupervised manner, based on large corpora of unlabelled text and two pre-training objectives. To achieve bidirectionality, its first pre-training objective includes prediction of randomly masked words. Secondly, a next-sentence prediction task trains the model to capture long-term dependencies. Pre-training is computationally expensive but needs to be carried out only once before sharing the weights together with the vocabulary. Fine-tuning to various downstream tasks can be carried out on the basis of comparably small amounts of labelled data, by changing the upper layers of the neural network to classification layers for different tasks.
SCIBERT is a model based on the BERT-base architecture, with further pre-trained weights based on texts from the Semantic Scholar search engine BIBREF17. We used these weights as one of our three starting points for fine-tuning a sentence classification architecture BIBREF18. Furthermore, BERT-base (uncased) and Bert multilingual (cased, base architecture) were included in the comparison BIBREF16.
<<</Introduction to transformers>>>
<<<Weaknesses in the previous sentence classification approach>>>
In the following, we discuss weaknesses in the PubMed data, and LSTM models trained on this type of labelled data. LSTM architectures commonly employ a trimmed version of Word2Vec embeddings as embedding layer. In our case, this leads to 20% of the input data being represented by generic `Unknown' tokens. These words are missing because they occur so rarely that no embedding vector was trained for them. Trimming means that the available embedding vocabulary is then further reduced to the known words of the training, development and testing data, in order to save memory and increase speed. The percentage of unknown tokens is likely to increase when predicting on previously unseen and unlabelled data. We tested our locally trained LSTM on 5000 abstracts from a study-based register BIBREF19 and found that 36% of all unique input features did not have a known representation.
In the case of the labelled training and testing data itself, automatic annotation carries the risk of producing wrongly labelled data. But it also enables the training of neural networks in the first place, because manual gold standard annotations on the scale needed to train an LSTM are expensive and time-consuming to produce. As we show later, the automated annotation technique causes noise in the evaluation because, as the network learns, it can assign correct tags to wrongly labelled data. We also show that sentence labels are often ambiguous, and that the assignment of a single label limits the quality of the predictions for their use in real-world reviewing tasks.
We acknowledge that the assignment of classes such as `Results' or `Conclusions' to sentences is potentially valuable for many use-cases. However, those sentences can contain additional information related to the PICO classes of interest. In the original LSTM-based model the A, M, R, and C data classes in Table TABREF4 are utilized for sequence optimization, which leads to increased classification scores. Their potential PICO content is neglected, although it represents crucial information in real-world reviewing tasks.
A general weakness of predicting labels for whole sentences is the practical usability of the predictions. We will show sentence highlighting as a potential use-case for focusing readers' attention on passages of interest. However, the data obtained through this method are not fine-grained enough for use in data extraction, or in pipelines for automated evidence synthesis. Therefore, we expand our experiments to include QA-BERT, a question-answering model that predicts the locations of PICO entities within sentences.
<<</Weaknesses in the previous sentence classification approach>>>
<<<Contributions of this research>>>
In this work we investigate state-of-the-art methods for language modelling and sentence classification. Our contributions are centred around developing transformer-based fine-tuning approaches tailored to SR tasks. We compare our sentence classification with the LSTM baseline and evaluate the biggest set of PICO sentence data available at this point BIBREF13. We demonstrate that models based on the BERT architecture solve problems related to ambiguous sentence labels by learning to predict multiple labels reliably. Further, we show that the improved feature representation and contextualization of embeddings lead to improved performance in biomedical data extraction tasks. These fine-tuned models show promising results while providing a level of flexibility to suit reviewing tasks, such as the screening of studies for inclusion in reviews. By predicting on multilingual and full text contexts we showed that the model's capabilities for transfer learning can be useful when dealing with diverse, real-world data.
In the second fine-tuning approach, we apply a question answering architecture to the task of data extraction. Previous models for PICO question answering relied on vast knowledge bases and hand-crafted rules. Our fine-tuning approach shows that an abstract as context, together with a combination of annotated PICO entities and SQuAD data can result in a system that outperforms contemporary entity recognition systems, while retaining general reading comprehension capabilities.
<<</Contributions of this research>>>
<<</INTRODUCTION>>>
<<<METHODOLOGY>>>
<<<Feature representation and advantages of contextualization>>>
A language processing model's performance is limited by its capability of representing linguistic concepts numerically. In this preliminary experiment, we used the PubMed corpus for sentence classification to show the quality of PICO sentence embeddings retrieved from BERT. We mapped a random selection of 3000 population, intervention, and outcome sentences from the PubMed corpus to BERT-base uncased and SCIBERT. This resulted in each sentence being represented by a fixed length vector of 768 dimensions in each layer respectively, as defined by the model architecture's hidden size. These vectors can be obtained for each of the network's layers, and multiple layers can be represented together by concatenation and pooling. We used the t-distributed Stochastic Neighbour Embedding (t-SNE) algorithm to reduce each layer-embedding into two-dimensional space, and plotted the resulting values. Additionally, we computed adjusted rand scores in order to evaluate how well each layer (or concatenation thereof, always using reduce_mean pooling) represents our input sequence. The rand scores quantify the extent to which a naïve K-means (N=3) clustering algorithm in different layers alone led to correct grouping of the input sentences.
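A sketch of this layer-wise evaluation with scikit-learn is shown below; the random arrays stand in for the actual sentence embeddings and gold P/I/O labels.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Placeholders: embeddings would be the 768-dimensional layer outputs described above,
# labels the integer-coded P/I/O gold classes.
embeddings = np.random.rand(3000, 768)
labels = np.random.randint(0, 3, size=3000)

coords_2d = TSNE(n_components=2).fit_transform(embeddings)     # 2-D projection for plotting
cluster_ids = KMeans(n_clusters=3).fit_predict(embeddings)     # naive K-means, N=3
print(adjusted_rand_score(labels, cluster_ids))                # how well this layer separates P/I/O
```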
<<</Feature representation and advantages of contextualization>>>
<<<Sentence classification>>>
<<<Preparation of the data>>>
We used the PubMed corpus to fine-tune a sentence classification architecture. Class names and abbreviations are displayed in Table TABREF4. The corpus was supplied in pre-processed form, comprising 24,668 abstracts. For more information about the original dataset we refer to its original publication BIBREF13. Because of the PICO framework, methods for systematic review (semi)automation commonly focus on P, I, and O detection. The A, M, R, and C classes are an additional feature of this corpus. They were included in the following experiment because they represent important information in abstracts and occur in the vast majority of published trial texts. Their exclusion can lead to false classification of sentences in full abstracts. In a preliminary experiment we summarized A, M, R, and C sentences as a generic class named ’Other’ in order to shift the model's focus to PIO classes. This resulted in high class imbalance, inferior classification scores and a loss of the ability to predict these classes when supporting systematic reviewers during the screening process.
In the following, abstracts that did not include a P, I, and O label were excluded. This left a total of 129,095 sentences for training, and 14,344 for testing (90:10 split).
<<</Preparation of the data>>>
<<<Fine-tuning>>>
We carried out fine-tuning for sentence classification based on BERT-base (uncased), multilingual BERT (cased), and SCIBERT. We changed the classification layer on top of the original BERT model. It remains a linear, fully connected layer, but now employs the sigmoid cross-entropy with logits loss function for optimization. During training, this layer is optimised for predicting probabilities over all seven possible sentence labels. Therefore, this architecture enables multi-class, multi-label predictions. In comparison, the original BERT fine-tuning approach for sentence classification employed a softmax layer in order to obtain multi-class, single-label predictions of the most probable class only. During the training process the model then predicts class labels from Table TABREF4 for each sentence. After each training step, backpropagation adjusts the model's internal weights. To save GPU resources, a maximal sequence length of 64, batch size of 32, learning rate of $2\times 10^{-5}$, warm-up proportion of 0.1 and two epochs of training were used.
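A minimal sketch of such a multi-label classification head is shown below; the hidden size and class count follow the description above, while everything else is illustrative rather than the exact implementation.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 7  # P, I, O, A, M, R, C

class MultiLabelHead(nn.Module):
    """Linear classification layer over the encoder's pooled output, trained with
    sigmoid cross-entropy so that several of the seven labels can be active per sentence."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, NUM_CLASSES)
        self.loss_fn = nn.BCEWithLogitsLoss()   # sigmoid cross-entropy with logits

    def forward(self, pooled_output, labels=None):
        logits = self.classifier(pooled_output)          # (batch, 7)
        probs = torch.sigmoid(logits)                    # independent per-class probabilities
        loss = self.loss_fn(logits, labels.float()) if labels is not None else None
        return probs, loss
```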
<<</Fine-tuning>>>
<<<Post-training assignment of classes>>>
In the scope of the experiments for this paper, the model returns probabilities for the assignment of each class for every sentence. These probabilities were used to show effects of different probability thresholds (or simply assignment to the most probable class) on recall, precision and F1 scores. The number of classes was set to 7, thereby making use of the full PubMed dataset.
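The post-training assignment can be sketched as follows; the threshold value itself is tuned empirically and is not fixed here.

```python
import numpy as np

def assign_labels(probs, threshold=None):
    """probs: (n_sentences, 7) per-class probabilities from the fine-tuned model.
    With a threshold, every class above it is assigned (multi-label); without one,
    only the most probable class per sentence is kept (single-label)."""
    probs = np.asarray(probs)
    if threshold is None:
        return (probs == probs.max(axis=1, keepdims=True)).astype(int)
    return (probs >= threshold).astype(int)

# Lowering the threshold trades precision for recall, which matters for screening tasks.
```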
<<</Post-training assignment of classes>>>
<<</Sentence classification>>>
<<<Question answering>>>
<<</Question answering>>>
<<</METHODOLOGY>>>
<<<RESULTS>>>
<<<Feature representation and contextualization>>>
Figure FIGREF23 shows the dimensionality-reduced vectors for 3000 sentences in BERT-base, along with the positions of three exemplary sentences. All three examples were labelled as 'P' in the gold standard. This visualization highlights overlaps between the sentence data and ambiguity or noise in the labels.
Sentences 1 and 2 are labelled incorrectly, and clearly appear far away from the population class centroid. Sentence 3 is an example of an ambiguous case. It appears very close to the population centroid, but neither its label nor its position reflect the intervention content. This supports a need for multiple tags per sentence, and the fine-tuning of weights within the network.
Figure FIGREF23 shows the same set of sentences, represented by concatenations of SCIBERT outputs. SCIBERT was chosen as an additional baseline model for fine-tuning because it provided the best representation of embedded PICO sentences. When clustered, its embeddings yielded an adjusted rand score of 0.57 for a concatenation of the two layers, compared with 0.25 for BERT-base.
<<</Feature representation and contextualization>>>
<<</RESULTS>>>
<<<DISCUSSION>>>
In this work, we have shown possibilities for sentence classification and data extraction of PICO characteristics from abstracts of RCTs.
For sentence classification, models based on transformers can predict multiple labels per sentence, even if trained on a corpus that assigns a single label only. Additionally, these architectures show a great level of flexibility with respect to adjusting precision and recall scores. Recall is an important metric in SR tasks and the architectures proposed in this paper enable a post-classification trade-off setting that can be adjusted in the process of supporting reviewers in real-world reviewing tasks.
However, tagging whole sentences with respect to populations, interventions and outcomes might not be an ideal method to advance systematic review automation. Identifying a sentence's tag could be helpful for highlighting abstracts from literature searches. This focuses the reader's attention on sentences, but is less helpful for automatically determining whether a specific entity (e.g. the drug aspirin) is mentioned.
Our implementation of the question answering task has shown that a substantial number of PICO entities can be identified in abstracts at the token level. This is an important step towards reliable systematic review automation. With our provided code and data, the QA-BERT model can be swapped with more advanced transformer architectures, including XLM, XLNet, DistilBERT and ALBERT pre-trained models. More detailed investigations into multilingual predictions BIBREF26, pre-processing, and predicting more than one PICO per sentence are reserved for future work.
<<<Limitations>>>
Limitations in the automatically annotated PubMed training data mostly consist of incomplete detection of, or noise in, P, I, and O entities due to the single labelling. We did not have access to multilingual annotated PICO corpora for testing, and therefore tested the model on German abstracts found on PubMed, as well as Chinese data provided by the Cochrane Schizophrenia Group.
For the question answering, we limited the use of original SQuAD domains to enrich our data. This was done in order to save computing resources, as an addition of 100 SQuAD domains resulted in training time increases of two hours, depending on various other parameter settings. Adjusted parameters include increased batch size, and decreased maximal context length in order to reduce training time.
<<</Limitations>>>
<<</DISCUSSION>>>
<<<CONCLUSION>>>
With this paper we aimed to explore state-of-the-art NLP methods to advance systematic review (semi)automation. Both of the presented fine-tuning approaches for transformers demonstrated flexibility and high performance. We contributed an approach to deal with ambiguity in whole-sentence predictions, and proposed the usage of a completely different approach to entity recognition in settings where training data are sparse.
In conclusion we wish to emphasize our argument that for future applications, interoperability is important. Instead of developing yet another stand-alone organizational interface with a machine learning classifier that works on limited data only, the focus should be to develop and train cross-domain and neural models that can be integrated into the backend of existing platforms. The performance of these models should be comparable on standardized datasets, evaluation scripts and leader boards.
The logical next step, which remains less explored in the current literature because of its complexity, is the task of predicting an RCT's included or excluded status on the basis of PICOs identified in its text. For this task, more complex architectures that include drug or intervention ontologies could be integrated. Additionally, information from already completed reviews could be re-used as training data.
<<</CONCLUSION>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nINTRODUCTION\nTools for SR automation and PICO classification\nSentence classification data\nQuestion answering data\nSQuAD\nEbm-nlp\nIntroduction to transformers\nWeaknesses in the previous sentence classification approach\nContributions of this research\nMETHODOLOGY\nFeature representation and advantages of contextualization\nSentence classification\nPreparation of the data\nFine-tuning\nPost-training assignment of classes\nQuestion answering\nRESULTS\nFeature representation and contextualization\nDISCUSSION\nLimitations\nCONCLUSION"
],
"type": "outline"
}
|
1909.08824
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Modeling Event Background for If-Then Commonsense Reasoning Using Context-aware Variational Autoencoder
<<<Abstract>>>
Understanding events and event-centered commonsense reasoning are crucial for natural language processing (NLP). Given an observed event, it is trivial for humans to infer its intents and effects, while this type of If-Then reasoning still remains challenging for NLP systems. To facilitate this, an If-Then commonsense reasoning dataset, Atomic, has been proposed, together with an RNN-based Seq2Seq model to conduct such reasoning. However, two fundamental problems still need to be addressed: first, the intents of an event may be multiple, while the generations of RNN-based Seq2Seq models are always semantically close; second, external knowledge of the event background may be necessary for understanding events and conducting the If-Then reasoning. To address these issues, we propose a novel context-aware variational autoencoder that effectively learns event background information to guide the If-Then reasoning. Experimental results show that our approach improves the accuracy and diversity of inferences compared with state-of-the-art baseline methods.
<<</Abstract>>>
<<<Introduction>>>
Recently, event-centered commonsense knowledge has attracted much attention BIBREF0, BIBREF1, BIBREF2, BIBREF3, because understanding events is an important component of NLP. Given a daily-life event, humans can easily understand it and reason about its causes, effects, and so on. However, this still remains a challenging task for NLP systems. This is partly because most of them are trained on task-specific datasets or objectives, which results in models that are adept at finding task-specific underlying correlation patterns but have limited capability in simple and explainable commonsense reasoning BIBREF4.
To facilitate this, BIBREF5 (BIBREF5) build the Event2Mind dataset and BIBREF4 (BIBREF4) present the Atomic dataset, which mainly focus on nine If-Then reasoning types to describe causes, effects, intents and participant characteristics of events. Together with these datasets, a simple RNN-based encoder-decoder framework is proposed to conduct the If-Then reasoning.
However, there still remains two challenging problems. First, as illustrated in Figure FIGREF1, given an event “PersonX finds a new job”, the plausible feeling of PersonX about that event could be multiple (such as “needy/stressed out” and “relieved/joyful”). Previous work showed that for the one-to-many problem, conventional RNN-based encoder-decoder models tend to generate generic responses, rather than meaningful and specific answers BIBREF6, BIBREF7.
Second, as a commonsense reasoning problem, rich background knowledge is necessary for generating reasonable inferences. For example, as shown in Figure FIGREF1, the feeling of PersonX upon the event “PersonX finds a new job” could be multiple. However, after given a context “PersonX was fired”, the plausible inferences would be narrowed down to “needy” or “stressed out”.
To better solve these problems, we propose a context-aware variational autoencoder (CWVAE) together with a two-stage training procedure. Variational Autoencoder (VAE) based models have shown great potential in modeling the one-to-many problem and generating diversified inferences BIBREF8, BIBREF9.
In addition to the traditional VAE structure, we introduce an extra context-aware latent variable in CWVAE to learn the event background knowledge. In the pretrain stage, CWVAE is trained on an auxiliary dataset (consisting of three narrative story corpora containing rich event background knowledge) to learn the event background information using the context-aware latent variable. Subsequently, in the finetune stage, CWVAE is trained on the task-specific dataset to adapt the event background information to each specific aspect of the If-Then inferential target (e.g., intents, reactions, etc.).
Experiments on the Event2Mind and Atomic dataset show that our proposed approach outperforms baseline methods in both the accuracy and diversity of inferences. The code is released at https://github.com/sjcfr/CWVAE.
<<</Introduction>>>
<<<Background>>>
Before specifically describing the two datasets used in this paper, Event2Mind and Atomic, as well as the If-Then reasoning task, we define the following terminology for clarity:
Base event: the prerequisite event in If-Then reasoning, organized as a verb phrase with a predicate and its arguments, such as the event “PersonX finds a new job” shown in Figure FIGREF1.
Inference dimension: a particular If-Then reasoning type, e.g., intents, effects of the base event. Details are shown in Table TABREF2 and Table TABREF3.
Target: the inferential results. For example, as shown in Figure FIGREF1, given a base event “PersonX finds a new job” and one inference dimension “xReact”, the targets could be “relieved” or “needy”. Notice that each inference dimension can have multiple targets.
Event2Mind Dataset contains 25K base events and 300K targets, annotated through crowdsourcing. Event2Mind is organized in a hierarchical form: each base event has three types of inference dimensions, and, given a base event, several targets may simultaneously exist under one of the inference dimensions. Table TABREF2 shows the (base event-inference dimension-target) hierarchical structure through an example from Event2Mind.
Atomic Dataset Inspired by Event2Mind, the Atomic dataset shares the same hierarchical structure as Event2Mind, while scales up the size of dataset and expands the scope to nine types of inference dimensions. Table TABREF3 shows the (base event-inference dimension-target) hierarchical structure through an example from Atomic. Though Atomic covers the inference dimensions of Event2Mind, the base event collection of Event2Mind is nonidentical to that of Atomic.
Problem Definition The If-Then reasoning task could be formally defined as a conditional one-to-many generation problem: given a base event $x$ and one inference dimension $d$, the model is required to generate targets $y=f(x, d)$ as close to the ground truths as possible. Both $x$ and $y$ consist of sequence of words: $x=\lbrace x_1,\dots , x_{m}\rbrace $, and $y=\lbrace y_1,\dots , y_{n}\rbrace $, where $m$ and $n$ denotes the length of $x$ and $y$, respectively.
Conditional Variational Autoencoder The variational autoencoder (VAE) defines a generative framework suited for one-to-many generation problem BIBREF10. While conditional variational autoencoder (CVAE) BIBREF11 is an extension of VAE on the conditional generation problem. As shown in Figure FIGREF5 (a), CVAE characterizes the conditional one-to-many generation problem using three random variables: event $x$, target $y$ and a latent variable $z$, which is used for modeling the latent distribution of semantic over targets given an event. Hence, under a certain inference dimension, with regard to the latent semantic variable $z$, the conditional generation problem could be expressed as $p(y|x)=\int p(y|x,z)p(z|x)dz$. CVAE models $p(y|x,z)$ and $p(z|x)$ using deep neural networks (parameterized by $\theta $) $p_{\theta }(y|x,z)$ and $p_{\theta }(z|x)$. Then as illustrated in Figure FIGREF5 (b), $y$ could be generated from $x$ and $z$.
CVAE is trained to maximize the conditional likelihood $p(y|x)$, which involves an intractable marginalization over the latent variable $z$. Instead, following BIBREF10 (BIBREF10), a practical way is to introduce another deep network (parameterized by $\phi $) $q_{\phi }(z|x,y)$ to approximate the true posterior distribution $p(z|x,y)$ and maximize the evidence lower bound (ELBO) of the log-likelihood function:
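Assuming the usual CVAE formulation, this bound can be written in the present notation as

$\log p(y|x) \ge \mathbb {E}_{q_{\phi }(z|x,y)}\big [\log p_{\theta }(y|x,z)\big ] - \mathrm {KL}\big (q_{\phi }(z|x,y)\,\Vert \,p_{\theta }(z|x)\big ).$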
Therefore, CVAE is composed of three neural networks in general. We refer to $p_{\theta }(z|x)$ as a prior network, $q_{\phi }(z|x,y)$ as a recognition network, and $p_{\theta }(y|x,z)$ as a neural decoder.
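As a reference for the three networks just listed, the following is a minimal CVAE sketch in PyTorch; the module sizes, the mean-squared reconstruction term standing in for $-\log p_{\theta }(y|x,z)$, and the pooled vector inputs are our own illustrative assumptions rather than the paper's implementation.

```python
# Minimal CVAE sketch (PyTorch); all names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ToyCVAE(nn.Module):
    def __init__(self, d_x=300, d_y=300, d_z=40):
        super().__init__()
        self.prior = nn.Linear(d_x, 2 * d_z)               # p_theta(z|x)
        self.recognition = nn.Linear(d_x + d_y, 2 * d_z)   # q_phi(z|x,y)
        self.decoder = nn.Linear(d_x + d_z, d_y)           # p_theta(y|x,z)

    def forward(self, x, y):                               # pooled event / target vectors
        mu_q, logvar_q = self.recognition(torch.cat([x, y], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(x).chunk(2, -1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()   # reparameterization
        recon = self.decoder(torch.cat([x, z], -1))
        # KL( q_phi(z|x,y) || p_theta(z|x) ) for diagonal Gaussians
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).sum(-1)
        recon_loss = ((recon - y) ** 2).sum(-1)            # stand-in for -log p(y|x,z)
        return (recon_loss + kl).mean()                    # negative ELBO

loss = ToyCVAE()(torch.randn(16, 300), torch.randn(16, 300))
```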
<<</Background>>>
<<<Context-aware Variational Autoencoder>>>
Traditional CVAE can model the event-target relation. In other words, given an observed event, CVAE can generate its corresponding targets. In this paper, however, we model If-Then reasoning as a [(background), event]-target process. It means that in addition to the observed event, we also want to involve the event background knowledge (which can be learned from event contexts) to generate the reasonable targets.
To this end, we propose a context-aware variational autoencoder (CWVAE), with two additional latent variables: a context-acquiring latent variable $z_c$ to directly acquire context information, and a context-aware latent variable $z_{c^{\prime }}$ to learn background knowledge from $z_c$, as shown in Figure FIGREF6 (a). However, the event context information is absent in the Event2Mind and Atomic dataset. To learn from the external event context information, we design the following two-stage training procedure for CWVAE.
Pretrain: Learning Event Background Knowledge from Auxiliary Dataset In the pretrain stage, CWVAE is trained on three narrative story corpora with rich event context information. As shown in Figure FIGREF6 (a), context-acquiring latent variable $z_c$ is directly conditioned on the context $c$. Hence, $z_c$ could be employed for acquiring background knowledge from event contexts. Then, we minimize the distance between $z_c$ and the context-aware latent variable $z_{c^{\prime }}$, by which the event background knowledge is transferred from $z_c$ to $z_{c^{\prime }}$.
Finetune: Adapt Event Background Knowledge to Each Inference Dimension In the finetune stage, as shown in Figure FIGREF6 (b), CWVAE is trained on the Event2Mind and Atomic dataset without the event context information. Pretrained CWVAE is finetuned to learn the specific inferential knowledge of each inference dimension. After the training procedure, as shown in Figure FIGREF6 (c), samples of $z$ are generated based on $x$ and samples of $z_{c^{\prime }}$, where $z_{c^{\prime }}$ contains rich event background knowledge helpful for If-Then reasoning.
<<<Architecture of CWVAE>>>
As shown in Figure FIGREF8, CWVAE is mainly composed of four parts: a neural encoder that provides distributed representations of base events/targets, a recognition network for inferring $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$ and $q_{\phi }(z|z_{c^{\prime }}, x)$, a prior network for modeling $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$, and a neural decoder that integrates the information from $z$ and $z_{c^{\prime }}$ to generate targets.
Neural Encoder We employ a bidirectional GRU as neural encoder, which encodes context $c$, event $x$ and target $y$ into distributed representations $h^c=\lbrace h_1^c,\dots ,h_{l_c}^c\rbrace $, $h^x=\lbrace h_1^x,\dots ,h_{l_x}^x\rbrace $ and $h^y=\lbrace h_1^y,\dots ,h_{l_y}^y\rbrace $, where $l_c$, $l_x$ and $l_y$ are the lengths of $c$, $x$ and $y$, respectively.
Recognition Network The recognition network models $q_{\phi }(z|x,y)$, $q_{\phi }(z_c|x,c)$, $q_{\phi }(z|z_{c^{\prime }}, x)$ based on $h^x$, $h^y$ and $h^c$.
Following traditional VAE, the above-mentioned three distributions are assumed to be multivariate Gaussian distribution with a diagonal covariance structure:
where $\mu $ denotes the mean of the distribution, $\sigma $ denotes the standard deviation of the distribution, and $I$ denotes the identity matrix.
Given $h^x$, $h^y$ and $h^c$, we propose a novel attention-based inferer (ABI) module to estimate the mean and standard deviation of $q_{\phi }(z_{c}|x,c)$, $q_{\phi }(z_{c^{\prime }}|x,y)$ and $q_{\phi }(z|x,y)$:
Briefly, through the attention mechanism, ABI can capture the semantic interaction between input sequences, and estimate the parameters of distributions based on it. We will introduce the specific structure of ABI in below.
Prior Network The prior network models $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ based on $h^x$. The distributions of $p_{\theta }(z_{c^{\prime }}|x)$ and $p_{\theta }(z|x, z_{c^{\prime }})$ are still assumed to be multivariate Gaussian, but with different parameters:
where $\mu ^{^{\prime }}$ denotes the mean of the distribution, $\sigma ^{^{\prime }}$ denotes the standard deviation of the distribution and $I$ denotes the identity matrix.
Then the attention-based inferer module is again employed to estimate the parameters of these distributions:
Neural Decoder Given the base event $x$, the semantic latent variable $z$, and the context-aware latent variable $z_{c^{\prime }}$, the neural decoder defines the generation probability of $y$ as following:
where $p(y_j|y_{<j}, z, z_{c^{\prime }}, x)=g(y_{j-1}, s_{j-1}, e_j)$, $g(\cdot )$ is an attention-based feed forward model, $e_j=\sum _i \alpha _{ji}h_i^{x}$ is the context vector and $s_{j-1}$ is the hidden state of the decoder. We obtain $g(\cdot )$ and $e_j$ the same way as BIBREF12 (BIBREF12). Whereas our decoder differs from BIBREF12 (BIBREF12) in that our model integrates the context-aware latent variable $z_{c^{\prime }}$ and semantic latent variable $z$ in the computation of $s_j=\mathrm {GRU}([E_{y_j};s_{j-1},z,z_{c^{\prime }}])$, where $E_{y_j}$ is the word embeddings of target words.
Note that through concatenating $z$ and $z_{c^{\prime }}$ with $E_{y_j}$ and $s_{j-1}$, $s_j$ could be affected by the context-aware latent variable $z_{c^{\prime }}$ and the semantic latent variable $z$. This allows the model to directly access the event background knowledge from $z_{c^{\prime }}$. In addition, the randomness of $z$ and $z_{c^{\prime }}$ increases the diversity of model generation.
Attention-based Inferer Attention mechanisms have shown strong ability in capturing semantic interactions BIBREF13. Inspired by the co-attention mechanism BIBREF14, we propose an attention-based inferer (ABI) to estimate the mean and standard deviation of a distribution belonging to $p_{\theta }(\cdot )$ or $q_{\phi }(\cdot )$ by capturing semantic interactions of input sequences.
Specifically, given two input sequences (e.g., representations of contexts and events) $a=\lbrace a_1,\dots ,a_{l_a}\rbrace $ and $b=\lbrace b_1,\dots ,b_{l_b}\rbrace $ with length $l_a$ and $l_b$, we first obtain the attention scores from each side through:
where $W_a \in \mathbb {R}^{d\times d_a}$ and $W_b \in \mathbb {R}^{d\times d_b}$ are parameter weights.
With these attention scores, the context vectors of both sequences are given by:
Then we perform a mean pooling operation on context vectors of both sequences:
To obtain the mean and standard deviation, the pooled context vectors $\bar{c^a}$ and $\bar{c^b}$, which carry the semantic interaction between the two sequences, are concatenated and projected into a latent semantic space through a nonlinear transformation:
Finally the mean and standard deviation are generated through a nonlinear transformation over $h_z$:
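A minimal sketch of such an attention-based inferer is given below; the bilinear scoring matrices, the tanh projection and the layer sizes are assumptions made for illustration, not the authors' exact design.

```python
# Attention-based inferer (ABI) sketch; design details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ABI(nn.Module):
    def __init__(self, d_a, d_b, d=100, d_z=40):
        super().__init__()
        self.W_a = nn.Linear(d_a, d, bias=False)
        self.W_b = nn.Linear(d_b, d, bias=False)
        self.proj = nn.Linear(d_a + d_b, d_z)
        self.to_mu = nn.Linear(d_z, d_z)
        self.to_logsigma = nn.Linear(d_z, d_z)

    def forward(self, a, b):                       # a: [l_a, d_a], b: [l_b, d_b]
        scores = self.W_a(a) @ self.W_b(b).t()     # [l_a, l_b] cross-attention scores
        c_a = F.softmax(scores, dim=1) @ b         # context vectors of a over b
        c_b = F.softmax(scores.t(), dim=1) @ a     # context vectors of b over a
        pooled = torch.cat([c_a.mean(0), c_b.mean(0)], dim=-1)   # mean pooling + concat
        h_z = torch.tanh(self.proj(pooled))        # nonlinear projection to latent space
        return self.to_mu(h_z), self.to_logsigma(h_z).exp()      # mean, std

mu, sigma = ABI(d_a=600, d_b=600)(torch.randn(9, 600), torch.randn(7, 600))
```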
<<</Architecture of CWVAE>>>
<<<Optimizing>>>
With the incorporation of $z_{c^{\prime }}$, the original log-likelihood could be decomposed as:
Then following traditional CVAE, the ELBO of CWVAE is defined as follows:
which is the objective function at the finetune stage.
In the pretrain stage, since we aim to learn background knowledge by minimizing the distance between $z_c$ and $z_{c^{\prime }}$, a context-aware regularization term is introduced in addition to $L^{ELBO}$:
where the context-aware regularization term is the KL distance between $z_c$ and $z_{c^{\prime }}$. By minimizing this term, we aim to pass event context knowledge from $z_c$ to the context-aware latent variable $z_{c^{\prime }}$.
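Since all latent variables are diagonal Gaussians, this regularization term has a closed form. A small sketch, where the direction of the KL and the variable naming are our assumptions:

```python
# Closed-form KL between two diagonal Gaussians, used as the context-aware
# regularization term; the KL direction is an assumption.
import torch

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * (torch.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1).sum(-1)

# pretrain objective (sketch): loss = neg_elbo + lambda_ * gaussian_kl(mu_c_prime, sd_c_prime, mu_c, sd_c)
```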
<<</Optimizing>>>
<<<Training Details>>>
To test the performance of CWVAE, we split the Event2Mind and Atomic dataset into training, development and test sets (80%, 10%, 10%) in the same way as BIBREF5 (BIBREF5) and BIBREF4 (BIBREF4), respectively. We initialize the embedding layer from 300d GloVe word embeddings. The neural encoder is chosen to be a biGRU with 300 hidden units. For the ABI module, the size of $W_a$ and $W_b$ is set to $100 \times d_a$ and $100 \times d_b$, respectively. The dimensions of $z_c$, $z_{c^{\prime }}$ and $z$ are all set to 40. The neural decoder is set to be a GRU with a 300d hidden state. The regularization coefficient $\lambda $ of the context-aware regularization term is set to 0.1. Models are trained using an Adam optimizer BIBREF15 with a learning rate of 0.001.
<<</Training Details>>>
<<</Context-aware Variational Autoencoder>>>
<<<Experiments>>>
<<<Auxiliary Dataset>>>
The auxiliary dataset is built upon three human-written story corpora: ROCStories BIBREF16, VIST BIBREF17 and WritingPrompts BIBREF18. ROCStories and VIST are composed of short stories with five sentences. We filter out stories of more than 1,000 words in WritingPrompts, and cut the remaining stories into five-sentence-paragraphs.
For each five-sentence-paragraph, we define the first three sentences as contexts of the base event, the fourth sentence as the base event, and the fifth sentence as the inference target. For example, as shown in Table TABREF25, the first three sentences describe a context in which Jason was unsatisfied with his job and applied for a new job. Hence, after the event “he got the job” happens, a plausible reaction to the event could be “jason was much happier at his new job”. In total, the auxiliary dataset contains 192,316 $(context, event, target)$ triples.
<<</Auxiliary Dataset>>>
<<<Baselines>>>
We compared our proposed model with the following four baseline methods:
RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.
Variational Seq2Seq combines a latent variable with the encoder-decoder structure through converting the last hidden state of RNN encoder into a Gaussian distributed latent variable BIBREF8.
VRNMT Proposed by BIBREF19 (BIBREF19), VRNMT combines CVAE with an attention-based encoder-decoder framework by introducing a latent variable to model the semantic distribution of targets.
CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.
Note that for each baseline method, we train a distinct model for each inference dimension.
<<</Baselines>>>
<<<Evaluation Metrics>>>
<<<Automatic Evaluation>>>
We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of the model regenerating the exact targets, which is particularly suitable for evaluating model performance on one-to-many problems BIBREF20. Further, we employ the BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-grams to evaluate the diversity of generations BIBREF6. The distinct score is normalized to $[0, 1]$ by dividing by the total number of generated tokens.
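As a concrete reference, the distinct-n metric as described (number of distinct n-grams divided by the total number of generated tokens) can be computed as follows.

```python
# Sketch of the distinct-n diversity metric.
def distinct_n(generations, n=1):
    ngrams, total_tokens = set(), 0
    for tokens in generations:                     # each generation is a list of tokens
        total_tokens += len(tokens)
        ngrams.update(zip(*[tokens[i:] for i in range(n)]))
    return len(ngrams) / max(total_tokens, 1)

print(distinct_n([["be", "productive"], ["be", "happy"]], n=1))   # 0.75
```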
<<</Automatic Evaluation>>>
<<<Human Evaluation>>>
Since automatic evaluation of generations is still a challenging task BIBREF22, we also conduct human evaluations of the model performance. Five human experts are employed to evaluate the coherence, diversity and fluency of generated targets. Experts are asked to vote on whether each generated target is fluent and coherent, and to give a 1-5 score for the diversity of generations. For both the Event2Mind and Atomic datasets, 100 events are randomly selected from the test set. For each method, the top 10 generated targets of each base event are used for evaluation. Finally, we report three overall averaged scores of coherence, diversity and fluency on both datasets, respectively.
<<</Human Evaluation>>>
<<</Evaluation Metrics>>>
<<<Overall Results>>>
We list the perplexity and BLEU score of CWVAE and baseline methods on Event2Mind and Atomic in Table TABREF31 and Table TABREF33, respectively, and show the distinct-1 and distinct-2 score on Event2Mind and Atomic in Table TABREF32 and Table TABREF34, respectively. We find that:
(1) As shown in Table TABREF32 and Table TABREF34, the comparison between RNN-based Seq2Seq and variational-based methods, including Variational Seq2Seq, VRNMT, CWVAE-unpretrained and CWVAE, shows that variational-based methods could increase the diversity of generations. This confirms one of our motivations that variational-based methods could capture the latent semantic distribution within targets and increase the diversity of If-Then reasoning.
(2) Comparing CWVAE-unpretrained with the other baseline methods shows that, in general, CWVAE improves the accuracy and diversity on both datasets. These results indicate the efficiency of CWVAE in capturing the latent semantic distribution of targets and generating more reasonable inferential results.
(3) Comparison between CWVAE and CWVAE-unpretrained shows that the pretrain stage could enhance the performance of CWVAE in both accuracy and diversity. This is mainly because event knowledge could offer guidance for If-Then reasoning. In the pretrain stage, CWVAE could capture the event background knowledge through the context-aware latent variable, and such knowledge could be adapted to our task through the finetune stage.
To further evaluate the effectiveness of our proposed approach, we also conduct human evaluations, the results of which are shown in Table TABREF39 and Table TABREF40. On both datasets, CWVAE-based methods achieve consistently better coherence, diversity and fluency performances. Compared with CWVAE-Unpretrained, the pretrain procedure improves the performance on coherence and fluency. The main reasons are twofold: first, CWVAE has an advantage in capturing the semantic distribution of targets; second, the event background knowledge learned in the pretrain stage is helpful for If-Then reasoning.
<<</Overall Results>>>
<<<Case Study>>>
Table TABREF41 provides an example of model generations given the base event “PersonX works tirelessly” and the inference dimension “xIntent”. The generations under CWVAE mainly contain four kinds of semantics: (1) be productive, (2) finish his work soon, (3) accomplish goal, (4) earn more money. In contrast, the semantics of the generations from the baseline RNN-based Seq2Seq model are relatively limited. Furthermore, the first three kinds of semantics overlap the three ground truth targets, and the fourth kind is in accordance with daily-life commonsense. Compared to the RNN-based Seq2Seq model, our approach can increase the diversity and rationality of generations while keeping the accuracy.
<<</Case Study>>>
<<</Experiments>>>
<<<Related Work>>>
<<<Event-Centered Commonsense Reasoning>>>
Understanding events and constructing event-centered commonsense knowledge are crucial to many NLP applications, such as intention recognition BIBREF23 and dialog generation BIBREF24. Recently a growing number of studies focus on event-centered commonsense reasoning, which mainly concentrates on two areas, script event prediction and story ending generation/choosing.
Script event prediction concerns the temporal relationships between script events BIBREF25, which requires models to choose a correct subsequent triple-organized event among the candidates BIBREF2. Prior work mainly focused on modeling event pairs BIBREF25, event chains BIBREF2 and event graphs BIBREF3 to predict the subsequent event. Story ending generation focuses on generating plausible story endings BIBREF16, which requires models to understand the story context and keep the generated endings logically consistent with it BIBREF26, BIBREF27. The above tasks mainly investigate the logical order of events, whereas the If-Then reasoning task focuses on inferring the mental state of event participants.
<<</Event-Centered Commonsense Reasoning>>>
<<<Variational AutoEncoder-Decoder Based Natural Language Generation>>>
VAE BIBREF10 has been widely applied in various text generation tasks, such as dialogue and machine translation. In dialogue generation, BIBREF9 (BIBREF9) adapts VAE to the encoder-decoder framework to model the latent semantic distribution of answers, which can increase the diversity of generations. For the task of machine translation, BIBREF19 (BIBREF19) and BIBREF28 (BIBREF28) employ a latent variable to capture the semantic interaction between the source and target sentence, and regard the latent variable as a supplement to the attention mechanism. BIBREF29 (BIBREF29) use the latent variable to model topic distributions in text generation. In this paper, we introduce an additional context-aware latent variable to effectively learn background knowledge and conduct If-Then reasoning under its guidance.
<<</Variational AutoEncoder-Decoder Based Natural Language Generation>>>
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we propose a novel context-aware VAE (CWVAE) framework with two training stages for If-Then commonsense reasoning. By introducing an additional context-aware latent variable, CWVAE is able to learn external background knowledge, and conduct If-Then reasoning under its guidance. In the pretrain stage, CWVAE learns event background knowledge, then in the finetune stage CWVAE adapts such knowledge to each inference dimension. Experimental results demonstrate that CWVAE outperforms baseline methods in both the accuracy and diversity of generations.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nContext-aware Variational Autoencoder\nArchitecture of CWVAE\nOptimizing\nTraining Details\nExperiments\nAuxiliary Dataset\nBaselines\nEvaluation Metrics\nAutomatic Evaluation\nHuman Evaluation\nOverall Results\nCase Study\nRelated Work\nEvent-Centered Commonsense Reasoning\nVariational AutoEncoder-Decoder Based Natural Language Generation\nConclusion"
],
"type": "outline"
}
|
1909.02480
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow
<<<Abstract>>>
Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length.
<<</Abstract>>>
<<<Introduction>>>
Neural sequence-to-sequence (seq2seq) models BIBREF0, BIBREF1, BIBREF2, BIBREF3 generate an output sequence $\mathbf {y} = \lbrace y_1, \ldots , y_T\rbrace $ given an input sequence $\mathbf {x} = \lbrace x_1, \ldots , x_{T^{\prime }}\rbrace $ using conditional probabilities $P_\theta (\mathbf {y}|\mathbf {x})$ predicted by neural networks (parameterized by $\theta $).
Most seq2seq models are autoregressive, meaning that they factorize the joint probability of the output sequence given the input sequence $P_\theta (\mathbf {y}|\mathbf {x})$ into the product of probabilities over the next token in the sequence given the input sequence and previously generated tokens:
Each factor, $P_\theta (y_{t} | y_{<t}, \mathbf {x})$, can be implemented by function approximators such as RNNs BIBREF0 and Transformers BIBREF3. This factorization takes the complicated problem of joint estimation over an exponentially large space of outputs $\mathbf {y}$, and turns it into a sequence of tractable multi-class classification problems predicting $y_t$ given the previous words, allowing for simple maximum log-likelihood training. However, this assumption of left-to-right factorization may be sub-optimal from a modeling perspective BIBREF4, BIBREF5, and generation of outputs must be done through a linear left-to-right pass through the output tokens using beam search, which is not easily parallelizable on hardware such as GPUs.
Recently, there has been work on non-autoregressive sequence generation for neural machine translation (NMT; BIBREF6, BIBREF7, BIBREF8) and language modeling BIBREF9. Non-autoregressive models attempt to model the joint distribution $P_\theta (\mathbf {y}|\mathbf {x})$ directly, decoupling the dependencies of decoding history during generation. A naïve solution is to assume that each token of the target sequence is independent given the input:
Unfortunately, the performance of this simple model falls far behind autoregressive models, as seq2seq tasks usually do have strong conditional dependencies between output variables BIBREF6. This problem can be mitigated by introducing a latent variable $\mathbf {z}$ to model these conditional dependencies:
where $p_{\theta }(\mathbf {z}|\mathbf {x})$ is the prior distribution over latent $\mathbf {z}$ and $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ is the “generative” distribution (a.k.a decoder). Non-autoregressive generation can be achieved by the following independence assumption in the decoding process:
BIBREF6 proposed a $\mathbf {z}$ representing fertility scores specifying the number of output words each input word generates, significantly improving the performance over Eq. (DISPLAY_FORM4). But the performance still falls behind state-of-the-art autoregressive models due to the limited expressiveness of fertility to model the interdependence between words in $\textbf {y}$.
In this paper, we propose a simple, effective, and efficient model, FlowSeq, which models expressive prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ using a powerful mathematical framework called generative flow BIBREF10. This framework can elegantly model complex distributions, and has obtained remarkable success in modeling continuous data such as images and speech through efficient density estimation and sampling BIBREF11, BIBREF12, BIBREF13. Based on this, we posit that generative flow also has potential to introduce more meaningful latent variables $\mathbf {z}$ in the non-autoregressive generation in Eq. (DISPLAY_FORM5).
FlowSeq is a flow-based sequence-to-sequence model, which is (to our knowledge) the first non-autoregressive seq2seq model utilizing generative flows. It allows for efficient parallel decoding while modeling the joint distribution of the output sequence. Experimentally, on three benchmark datasets for machine translation – WMT2014, WMT2016 and IWSLT-2014, FlowSeq achieves comparable performance with state-of-the-art non-autoregressive models, and almost constant decoding time w.r.t. the sequence length compared to a typical left-to-right Transformer model, which is super-linear.
<<</Introduction>>>
<<<Background>>>
As noted above, incorporating expressive latent variables $\mathbf {z}$ is essential to decouple the dependencies between tokens in the target sequence in non-autoregressive models. However, in order to model all of the complexities of sequence generation to the point that we can read off all of the words in the output in an independent fashion (as in Eq. (DISPLAY_FORM6)), the prior distribution $p_{\theta }(\mathbf {z}|\mathbf {x})$ will necessarily be quite complex. In this section, we describe generative flows BIBREF10, an effective method for arbitrary modeling of complicated distributions, before describing how we apply them to sequence-to-sequence generation in §SECREF3.
<<<Flow-based Generative Models>>>
Put simply, flow-based generative models work by transforming a simple distribution (e.g. a simple Gaussian) into a complex one (e.g. the complex prior distribution over $\mathbf {z}$ that we want to model) through a chain of invertible transformations.
Formally, a set of latent variables $\mathbf {\upsilon } \in \Upsilon $ are introduced with a simple prior distribution $p_{\Upsilon }(\upsilon )$. We then define a bijection function $f: \mathcal {Z} \rightarrow \Upsilon $ (with $g = f^{-1}$), whereby we can define a generative process over variables $\mathbf {z}$:
An important insight behind flow-based models is that given this bijection function, the change of variable formula defines the model distribution on $\mathbf {z}\in \mathcal {Z}$ by:
Here $\frac{\partial f_{\theta }(\mathbf {z})}{\partial \mathbf {z}}$ is the Jacobian matrix of $f_{\theta }$ at $\mathbf {z}$.
Eq. (DISPLAY_FORM9) provides a way to calculate the (complex) density of $\mathbf {z}$ by calculating the (simple) density of $\upsilon $ and the Jacobian of the transformation from $\mathbf {z}$ to $\upsilon $. For efficiency purposes, flow-based models generally use certain types of transformations $f_{\theta }$ where both the inverse functions $g_{\theta }$ and the Jacobian determinants are tractable to compute. A stacked sequence of such invertible transformations is also called a (normalizing) flow BIBREF10:
where $f = f_1 \circ f_2 \circ \cdots \circ f_K$ is a flow of $K$ transformations (omitting $\theta $s for brevity).
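A minimal sketch of evaluating the density through such a flow is shown below; the toy elementwise affine step is a placeholder for FlowSeq's actual layers and is purely illustrative.

```python
# Change-of-variables sketch: log p(z) = log p(upsilon) + sum of log|det Jacobian| terms.
import torch

def flow_log_prob(z, steps):
    log_det_sum = torch.zeros(z.shape[0])
    for f in steps:                                # each f maps z -> (z', log|det df/dz|)
        z, log_det = f(z)
        log_det_sum = log_det_sum + log_det
    base = torch.distributions.Normal(0.0, 1.0)    # simple base density p(upsilon)
    return base.log_prob(z).sum(-1) + log_det_sum

# toy invertible step: elementwise affine transform with scale s and bias b
s, b = torch.tensor(2.0), torch.tensor(0.5)
step = lambda z: (s * z + b, z.shape[-1] * torch.log(s.abs()) * torch.ones(z.shape[0]))
print(flow_log_prob(torch.randn(4, 8), [step]).shape)   # torch.Size([4])
```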
<<</Flow-based Generative Models>>>
<<<Variational Inference and Training>>>
In the context of maximal likelihood estimation (MLE), we wish to minimize the negative log-likelihood of the parameters:
where $D=\lbrace (\mathbf {x}^i, \mathbf {y}^i)\rbrace _{i=1}^{N}$ is the set of training data. However, the likelihood $P_{\theta }(\mathbf {y}| \mathbf {x})$ after marginalizing out latent variables $\mathbf {z}$ (LHS in Eq. (DISPLAY_FORM5)) is intractable to compute or differentiate directly. Variational inference BIBREF14 provides a solution by introducing a parametric inference model $q_{\phi }(\mathbf {z}|\mathbf {y}, \mathbf {x})$ (a.k.a posterior) which is then used to approximate this integral by sampling individual examples of $\mathbf {z}$. These models then optimize the evidence lower bound (ELBO), which considers both the “reconstruction error” $\log P_\theta (\mathbf {y}|\mathbf {z},\mathbf {x})$ and KL-divergence between the posterior and the prior:
Both inference model $\phi $ and decoder $\theta $ parameters are optimized according to this objective.
<<</Variational Inference and Training>>>
<<</Background>>>
<<<FlowSeq>>>
We first overview FlowSeq's architecture (shown in Figure FIGREF13) and training process here before detailing each component in following sections. Similarly to classic seq2seq models, at both training and test time FlowSeq first reads the whole input sequence $\mathbf {x}$ and calculates a vector for each word in the sequence, the source encoding.
At training time, FlowSeq's parameters are learned using a variational training paradigm overviewed in §SECREF10. First, we draw samples of latent codes $\mathbf {z}$ from the current posterior $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$. Next, we feed $\mathbf {z}$ together with source encodings into the decoder network and the prior flow to compute the probabilities of $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$ and $p_{\theta }(\mathbf {z}|\mathbf {x})$ for optimizing the ELBO (Eq. (DISPLAY_FORM12)).
At test time, generation is performed by first sampling a latent code $\mathbf {z}$ from the prior flow by executing the generative process defined in Eq. (DISPLAY_FORM8). In this step, the source encodings produced from the encoder are used as conditional inputs. Then the decoder receives both the sampled latent code $\mathbf {z}$ and the source encoder outputs to generate the target sequence $\mathbf {y}$ from $P_{\theta }(\mathbf {y}|\mathbf {z}, \mathbf {x})$.
<<<Source Encoder>>>
The source encoder encodes the source sequences into hidden representations, which are used in computing attention when generating latent variables in the posterior network and prior network as well as the cross-attention with decoder. Any standard neural sequence model can be used as its encoder, including RNNs BIBREF0 or Transformers BIBREF3.
<<</Source Encoder>>>
<<<Posterior>>>
<<<Generation of Latent Variables.>>>
The latent variables $\mathbf {z}$ are represented as a sequence of continuous random vectors $\mathbf {z}=\lbrace \mathbf {z}_1, \ldots , \mathbf {z}_T\rbrace $ with the same length as the target sequence $\mathbf {y}$. Each $\mathbf {z}_t$ is a $d_{\mathrm {z}}$-dimensional vector, where $d_{\mathrm {z}}$ is the dimension of the latent space. The posterior distribution $q_{\phi } (\mathbf {z}|\mathbf {y}, \mathbf {x})$ models each $\mathbf {z}_t$ as a diagonal Gaussian with learned mean and variance:
where $\mu _{t}(\cdot )$ and $\sigma _{t}(\cdot )$ are neural networks such as RNNs or Transformers.
<<</Generation of Latent Variables.>>>
<<<Zero initialization.>>>
While we perform standard random initialization for most layers of the network, we initialize the last linear transforms that generate the $\mu $ and $\log \sigma ^2$ values with zeros. This ensures that the posterior distribution starts as a simple normal distribution, which we found helps train very deep generative flows more stably.
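A sketch of this zero initialization, assuming the posterior's final projection outputs the concatenated $[\mu ; \log \sigma ^2]$; the layer sizes are placeholders.

```python
# Zero-init of the last projection so that mu = 0 and log sigma^2 = 0 at the start.
import torch.nn as nn

posterior_head = nn.Linear(512, 2 * 80)    # hidden size and latent size are placeholders
nn.init.zeros_(posterior_head.weight)
nn.init.zeros_(posterior_head.bias)
```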
<<</Zero initialization.>>>
<<<Token Dropout.>>>
The motivation of introducing the latent variable $\mathbf {z}$ into the model is to model the uncertainty in the generative process. Thus, it is preferable that $\mathbf {z}$ capture contextual interdependence between tokens in $\mathbf {y}$. However, there is an obvious local optimum where the posterior network generates a latent vector $\mathbf {z}_t$ that only encodes the information about the corresponding target token $y_t$, and the decoder simply generates the “correct” token at each step $t$ with $\mathbf {z}_t$ as input. In this case, FlowSeq reduces to the baseline model in Eq. (DISPLAY_FORM4). To escape this undesired local optimum, we apply token-level dropout to randomly drop an entire token when calculating the posterior, to ensure the model also has to learn how to use contextual information. This technique is similar to the “masked language model” in previous studies BIBREF15, BIBREF16, BIBREF17.
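A sketch of token-level dropout on the target embeddings read by the posterior; the drop rate and tensor layout are assumptions.

```python
# Token-level dropout: zero out whole tokens rather than individual features.
import torch

def token_dropout(y_emb, p_drop=0.2):       # y_emb: [T, d] target embeddings
    keep = (torch.rand(y_emb.shape[0], 1) > p_drop).float()
    return y_emb * keep                     # dropped tokens become zero vectors

print(token_dropout(torch.randn(6, 4)).shape)   # torch.Size([6, 4])
```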
<<</Token Dropout.>>>
<<</Posterior>>>
<<<Decoder>>>
As the decoder, we take the latent sequence $\mathbf {z}$ as input, run it through several layers of a neural sequence model such as a Transformer, then directly predict the output tokens in $\mathbf {y}$ individually and independently. Notably, unlike standard seq2seq decoders, we do not perform causal masking to prevent attending to future tokens, making the model fully non-autoregressive.
<<</Decoder>>>
<<<Flow Architecture for Prior>>>
The flow architecture is based on Glow BIBREF11. It consists of a series of steps of flow, combined in a multi-scale architecture (see Figure FIGREF13). Each step of flow consists of three types of elementary flows – actnorm, invertible multi-head linear, and coupling. Note that all three functions are invertible and conducive to calculation of log determinants (details in Appendix SECREF6).
<<<Actnorm.>>>
The activation normalization layer (actnorm; BIBREF11) is an alternative to batch normalization BIBREF18 that has mainly been used in the context of image data to alleviate problems in model training. Actnorm performs an affine transformation of the activations using a scale and bias parameter per feature for sequences:
Both $\mathbf {z}$ and $\mathbf {z}^{\prime }$ are tensors of shape $[T\times d_{\mathrm {z}}]$ with time dimension $t$ and feature dimension $d_{\mathrm {z}}$. The parameters are initialized such that over each feature $\mathbf {z}_{t}^{\prime }$ has zero mean and unit variance given an initial mini-batch of data.
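A sketch of actnorm for the $[T \times d_{\mathrm {z}}]$ layout with data-dependent initialization; the exact parameterization (bias-then-scale, log-scale storage) is our assumption.

```python
# Actnorm sketch: per-feature scale and bias, initialized from the first batch.
import torch
import torch.nn as nn

class ActNorm(nn.Module):
    def __init__(self, d_z):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(d_z))
        self.bias = nn.Parameter(torch.zeros(d_z))
        self.initialized = False

    def forward(self, z):                              # z: [T, d_z]
        if not self.initialized:                       # data-dependent initialization
            with torch.no_grad():
                self.bias.copy_(-z.mean(0))
                self.log_scale.copy_(-torch.log(z.std(0) + 1e-6))
            self.initialized = True
        out = (z + self.bias) * self.log_scale.exp()
        log_det = z.shape[0] * self.log_scale.sum()    # T * sum_i log|scale_i|
        return out, log_det

out, log_det = ActNorm(64)(torch.randn(10, 64))
```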
<<</Actnorm.>>>
<<<Invertible Multi-head Linear Layers.>>>
To incorporate general permutations of variables along the feature dimension to ensure that each dimension can affect every other ones after a sufficient number of steps of flow, BIBREF11 proposed a trainable invertible $1\times 1$ convolution layer for 2D images. It is straightforward to apply similar transformations to sequential data:
where $\mathbf {W}$ is the weight matrix of shape $[d_{\mathrm {z}} \times d_{\mathrm {z}}]$. The log-determinant of this transformation is:
The cost of computing $\mathrm {det}(\mathbf {W})$ is $O(d_{\mathrm {z}}^3)$.
Unfortunately, $d_{\mathrm {z}}$ in Seq2Seq generation is commonly large, e.g. 512, significantly slowing down the model for computing $\mathrm {det}(\mathbf {W})$. To apply this to sequence generation, we propose a multi-head invertible linear layer, which first splits each $d_{\mathrm {z}}$-dimensional feature vector into $h$ heads with dimension $d_h = d_{\mathrm {z}}/h$. Then the linear transformation in (DISPLAY_FORM26) is applied to each head, with $d_h\times d_h$ weight matrix $\mathbf {W}$, significantly reducing the dimension. For splitting of heads, one step of flow contains one linear layer with either row-major or column-major splitting format, and these steps with different linear layers are composed in an alternating pattern.
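A sketch of the multi-head invertible linear layer; sharing a single $d_h \times d_h$ matrix across heads and the orthogonal initialization are simplifying assumptions.

```python
# Multi-head invertible linear sketch: log-det cost drops from O(d_z^3) to O(d_h^3).
import torch
import torch.nn as nn

class MultiHeadInvLinear(nn.Module):
    def __init__(self, d_z, heads=8):
        super().__init__()
        self.h, self.d_h = heads, d_z // heads
        q = torch.linalg.qr(torch.randn(self.d_h, self.d_h))[0]   # random orthogonal init
        self.W = nn.Parameter(q)

    def forward(self, z):                                   # z: [T, d_z]
        T = z.shape[0]
        out = (z.reshape(T, self.h, self.d_h) @ self.W).reshape(T, -1)
        log_det = T * self.h * torch.slogdet(self.W)[1]     # T*h copies of log|det W|
        return out, log_det

out, log_det = MultiHeadInvLinear(d_z=64, heads=8)(torch.randn(10, 64))
```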
<<</Invertible Multi-head Linear Layers.>>>
<<<Affine Coupling Layers.>>>
To model interdependence across time steps, we use affine coupling layers BIBREF19:
where $\mathrm {s}(\mathbf {z}_a, \mathbf {x})$ and $\mathrm {b}(\mathbf {z}_a, \mathbf {x})$ are outputs of two neural networks with $\mathbf {z}_a$ and $\mathbf {x}$ as input. These are shown in Figure FIGREF21 (c). In experiments, we implement $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ with one Transformer decoder layer BIBREF3: multi-head self-attention over $\mathbf {z}_a$, followed by multi-head inter-attention over $\mathbf {x}$, followed by a position-wise feed-forward network. The input $\mathbf {z}_a$ is fed into this layer in one pass, without causal masking.
As in BIBREF19, the $\mathrm {split}()$ function splits the input tensor $\mathbf {z}$ into two halves, while the $\mathrm {concat}$ operation performs the corresponding reverse concatenation operation. In our architecture, three types of split functions are used, based on the split dimension and pattern. Figure FIGREF21 (b) illustrates the three splitting types. The first type of split groups $\mathbf {z}$ along the time dimension on alternate indices. In this case, FlowSeq mainly models the interactions between time-steps. The second and third types of splits perform on the feature dimension, with continuous and alternate patterns, respectively. For each type of split, we alternate $\mathbf {z}_a$ and $\mathbf {z}_b$ to increase the flexibility of the split function. Different types of affine coupling layers alternate in the flow, similar to the linear layers.
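A sketch of one affine coupling step with the first (time-dimension) split type; the small MLP standing in for the Transformer-based $\mathrm {s}(\cdot )$ and $\mathrm {b}(\cdot )$ networks and the sigmoid-gated scale are assumptions.

```python
# Affine coupling sketch with an alternate-time-step split.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, d_z):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_z, d_z), nn.ReLU(), nn.Linear(d_z, 2 * d_z))

    def forward(self, z):                               # z: [T, d_z], T assumed even
        z_a, z_b = z[0::2], z[1::2]                     # split on alternate time steps
        s, b = self.net(z_a).chunk(2, dim=-1)
        scale = torch.sigmoid(s + 2.0)                  # stabilized scale (an assumption)
        z_b = z_b * scale + b                           # transform z_b conditioned on z_a
        out = torch.stack([z_a, z_b], dim=1).reshape(z.shape)   # re-interleave (concat)
        return out, torch.log(scale).sum()              # log|det Jacobian|

out, log_det = AffineCoupling(64)(torch.randn(10, 64))
```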
<<</Affine Coupling Layers.>>>
<<<Multi-scale Architecture.>>>
We follow BIBREF19 in implementing a multi-scale architecture using the squeezing operation on the feature dimension, which has been demonstrated to be helpful for training deep flows. Formally, each scale is a combination of several steps of the flow (see Figure FIGREF21 (a)). After each scale, the model drops half of the dimensions with the third type of split in Figure FIGREF21 (b) to reduce computational and memory cost, outputting the tensor with shape $[T \times \frac{d}{2}]$. Then the squeezing operation transforms the $T \times \frac{d}{2}$ tensor into a $\frac{T}{2} \times d$ one as the input of the next scale. We pad each sentence with EOS tokens to ensure $T$ is divisible by 2. The right component of Figure FIGREF13 illustrates the multi-scale architecture.
<<</Multi-scale Architecture.>>>
<<</Flow Architecture for Prior>>>
<<<Predicting Target Sequence Length>>>
In autoregressive seq2seq models, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS token. However, for FlowSeq to predict the entire sequence in parallel, it needs to know its length in advance to generate the latent sequence $\mathbf {z}$. Instead of predicting the absolute length of the target sequence, we predict the length difference between source and target sequences using a classifier with a range of $[-20, 20]$. Numbers in this range are predicted by max-pooling the source encodings into a single vector, running this through a linear layer, and taking a softmax. This classifier is learned jointly with the rest of the model.
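A sketch of the length-difference classifier (max-pooled source encodings, one linear layer over the 41 classes covering $[-20, 20]$); the model dimension is a placeholder, and the softmax is omitted since it does not change the argmax.

```python
# Length-difference prediction sketch.
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    def __init__(self, d_model=512, max_diff=20):
        super().__init__()
        self.max_diff = max_diff
        self.proj = nn.Linear(d_model, 2 * max_diff + 1)   # classes for diffs in [-20, 20]

    def forward(self, src_enc, src_len):                   # src_enc: [T_src, d_model]
        logits = self.proj(src_enc.max(dim=0).values)      # max-pool over source positions
        diff = logits.argmax(dim=-1) - self.max_diff
        return src_len + int(diff)                         # predicted target length

print(LengthPredictor()(torch.randn(13, 512), 13))
```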
<<</Predicting Target Sequence Length>>>
<<<Decoding Process>>>
At inference time, the model needs to identify the sequence with the highest conditional probability by marginalizing over all possible latent variables (see Eq. (DISPLAY_FORM5)), which is intractable in practice. We propose three approximating decoding algorithms to reduce the search space.
<<<Argmax Decoding.>>>
Following BIBREF6, one simple and effective method is to select the best sequence by choosing the highest-probability latent sequence $\mathbf {z}$:
where identifying $\mathbf {y}^*$ only requires independently maximizing the local probability for each output position (see Eq. DISPLAY_FORM6).
<<</Argmax Decoding.>>>
<<<Noisy Parallel Decoding (NPD).>>>
A more accurate approximation of decoding, proposed in BIBREF6, is to draw samples from the latent space and compute the best output for each latent sequence. Then, a pre-trained autoregressive model is adopted to rank these sequences. In FlowSeq, different candidates can be generated by sampling different target lengths or different samples from the prior, and both of the strategies can be batched via masks during decoding. In our experiments, we first select the top $l$ length candidates from the length predictor in §SECREF29. Then, for each length candidate we use $r$ random samples from the prior network to generate output sequences, yielding a total of $l\times r$ candidates.
<<</Noisy Parallel Decoding (NPD).>>>
<<<Importance Weighted Decoding (IWD)>>>
The third approximating method is based on the lower bound of importance weighted estimation BIBREF20. Similarly to NPD, IWD first draws samples from the latent space and computes the best output for each latent sequence. Then, IWD ranks these candidate sequences with $K$ importance samples:
IWD does not rely on a separate pre-trained model, though it significantly slows down the decoding speed. The detailed comparison of these three decoding methods is provided in §SECREF45.
<<</Importance Weighted Decoding (IWD)>>>
<<</Decoding Process>>>
<<<Discussion>>>
Different from the architecture proposed in BIBREF9, the architecture of FlowSeq is not using any autoregressive flow BIBREF21, BIBREF22, yielding a truly non-autoregressive model with efficient generation. Note that the FlowSeq remains non-autoregressive even if we use an RNN in the architecture because RNN is only used to encode a complete sequence of codes and all the input tokens can be fed into the RNN in parallel. This makes it possible to use highly-optimized implementations of RNNs such as those provided by cuDNN. Thus while RNNs do experience some drop in speed, it is less extreme than that experienced when using autoregressive models.
<<</Discussion>>>
<<</FlowSeq>>>
<<<Experiments>>>
<<<Experimental Setups>>>
<<<Translation Datasets>>>
We evaluate FlowSeq on three machine translation benchmark datasets: WMT2014 DE-EN (around 4.5M sentence pairs), WMT2016 RO-EN (around 610K sentence pairs) and a smaller dataset IWSLT2014 DE-EN (around 150K sentence pairs). We use scripts from fairseq BIBREF23 to preprocess WMT2014 and IWSLT2014, where the preprocessing steps follow BIBREF3 for WMT2014. We use the data provided in BIBREF7 for WMT2016. For both WMT datasets, the source and target languages share the same set of BPE embeddings while for IWSLT2014 we use separate embeddings. During training, we filter out sentences longer than 80 for WMT dataset and 60 for IWSLT, respectively.
<<</Translation Datasets>>>
<<<Modules and Hyperparameters>>>
We implement the encoder, decoder and posterior networks with standard (unmasked) Transformer layers BIBREF3. For the WMT datasets, the encoder consists of 6 layers, and the decoder and posterior are composed of 4 layers with 8 attention heads. For IWSLT, the encoder has 5 layers, and the decoder and posterior have 3 layers with 4 attention heads. The prior flow consists of 3 scales with the number of steps $[48, 48, 16]$ from bottom to top. To dissect the impact of model dimension on translation quality and speed, we perform experiments on two versions of FlowSeq with $d_{model}/d_{hidden} = 256/512$ (base) and $d_{model}/d_{hidden} = 512/1024$ (large). More model details are provided in Appendix SECREF7.
<<</Modules and Hyperparameters>>>
<<<Optimization>>>
Parameter optimization is performed with the Adam optimizer BIBREF24 with $\beta =(0.9, 0.999)$ and $\epsilon =1e-6$. Each mini-batch consists of 2048 sentences. The learning rate is initialized to $5e-4$, and exponentially decays with rate $0.999995$. The gradient clipping cutoff is $1.0$. For all the FlowSeq models, we apply $0.1$ label smoothing and average the 5 best checkpoints to create the final model.
At the beginning of training, the posterior network is randomly initialized, producing noisy supervision to the prior. To mitigate this issue, we first set the weight of the $\mathrm {KL}$ term in ELBO to zero for 30,000 updates to train the encoder, decoder and posterior networks. Then the $\mathrm {KL}$ weight linearly increases to one for another 10,000 updates, which we found essential to accelerate training and achieve stable performance.
<<</Optimization>>>
<<<Knowledge Distillation>>>
Previous work on non-autoregressive generation BIBREF6, BIBREF8 has used translations produced by a pre-trained autoregressive NMT model as the training data, noting that this can significantly improve the performance. We analyze the impact of distillation in § SECREF45.
<<</Knowledge Distillation>>>
<<</Experimental Setups>>>
<<<Main Results>>>
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
Table TABREF39 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block are results using knowledge distillation. Without using knowledge distillation, FlowSeq base model achieves significant improvement (more than 9 BLEU points) over CMLM-base and LV NAR. It demonstrates the effectiveness of FlowSeq on modeling the complex interdependence in target languages.
Regarding the effect of knowledge distillation, we make two main observations: i) Similar to the findings in previous work, knowledge distillation still benefits the translation quality of FlowSeq. ii) Compared to previous models, the benefit of knowledge distillation for FlowSeq is less significant, yielding less than 3 BLEU points of improvement on the WMT2014 DE-EN corpus, and even no improvement on the WMT2016 RO-EN corpus. The reason might be that FlowSeq does not rely much on knowledge distillation to alleviate the multi-modality problem.
Table TABREF40 illustrates the BLEU scores of FlowSeq and baselines with advanced decoding methods such as iterative refinement, IWD and NPD rescoring. The first block in Table TABREF40 includes the baseline results from the autoregressive Transformer. For the sampling procedure in IWD and NPD, we sampled from a reduced-temperature model BIBREF11 to obtain high-quality samples. We vary the temperature within $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\rbrace $ and select the best temperature based on the performance on development sets. The analysis of the impact of sampling temperature and other hyper-parameters on samples is in § SECREF50. For FlowSeq, NPD obtains better results than IWD, showing that FlowSeq still falls behind the autoregressive Transformer in modeling the data distribution. Compared with CMLM BIBREF8 with 10 iterations of refinement, which is a contemporaneous work that achieves state-of-the-art translation performance, FlowSeq obtains competitive performance on both the WMT2014 and WMT2016 corpora, with only slight degradation in translation quality. Leveraging iterative refinement to further improve the performance of FlowSeq is left to future work.
<<</Main Results>>>
<<<Analysis on Decoding Speed>>>
In this section, we compare the decoding speed (measured in average time in seconds required to decode one sentence) of FlowSeq at test time with that of the autoregressive Transformer model. We use the test set of WMT14 EN-DE for evaluation and all experiments are conducted on a single NVIDIA TITAN X GPU.
<<<How does batch size affect the decoding speed?>>>
First, we investigate how the decoding batch size affects the decoding speed. We vary the decoding batch size within $\lbrace 1, 4, 8, 32, 64, 128\rbrace $. Figure FIGREF44 shows that for both FlowSeq and Transformer, decoding is faster when using a larger batch size. However, FlowSeq has much larger gains in decoding speed w.r.t. the increase in batch size, gaining a speed-up of 594% for the base model and 403% for the large model when using a batch size of 128. We hypothesize that this is because the operations in FlowSeq are more friendly to batching, while the Transformer model with beam search at test time is less efficient in benefiting from batching.
<<</How does batch size affect the decoding speed?>>>
<<<How does sentence length affect the decoding speed?>>>
Next, we examine if sentence length is a major factor affecting the decoding speed. We bucket the test data by the target sentence length. From Fig. FIGREF44, we can see that as the sentence length increases, FlowSeq achieves almost constant decoding time while Transformer has a linearly increasing decoding time. The relative decoding speed up of FlowSeq versus Transformer linearly increases as the sequence length increases. The potential of decoding long sequences with constant time is an attractive property of FlowSeq.
<<</How does sentence length affect the decoding speed?>>>
<<</Analysis on Decoding Speed>>>
<<<Analysis of Rescoring Candidates>>>
In Fig. FIGREF49, we analyze how different sampling hyperparameters affect the performance of rescoring. First, we observe that the number of samples $r$ for each length is the most important factor. The performance is always improved with a larger sample size. Second, a larger number of length candidates does not necessarily increase the rescoring performance. Third, we find that a larger sampling temperature (0.3 - 0.5) can increase the diversity of translations and leads to better rescoring BLEU. However, the latent samples become noisy when a large temperature (1.0) is used.
<<</Analysis of Rescoring Candidates>>>
<<<Analysis of Translation Diversity>>>
Following BIBREF28, we analyze the output diversity of FlowSeq. BIBREF28 proposed pairwise-BLEU and BLEU computed in a leave-one-out manner to calibrate the diversity and quality of translation hypotheses. A lower pairwise-BLEU score implies a more diverse hypothesis set. And a higher BLEU score implies a better translation quality. We experiment on a subset of test set of WMT14-ENDE with ten references each sentence BIBREF29. In Fig. FIGREF52, we compare FlowSeq with other multi-hypothesis generation methods (ten hypotheses each sentence) to analyze how well the generation outputs of FlowSeq are in terms of diversity and quality. The right corner area of the figure indicates the ideal generations: high diversity and high quality. While FlowSeq still lags behind the autoregressive generations, by increasing the sampling temperature it provides a way of generating more diverse outputs while keeping the translation quality almost unchanged. More analysis of translation outputs and detailed results are provided in the Appendix SECREF9 and SECREF10.
<<</Analysis of Translation Diversity>>>
<<</Experiments>>>
<<<Conclusion>>>
We propose FlowSeq, an efficient and effective model for non-autoregressive sequence generation by using generative flows. One potential direction for future work is to leverage iterative refinement techniques such as masked language models to further improve translation quality. Another exciting direction is to, theoretically and empirically, investigate the latent space in FlowSeq, hence providing deep insights of the model, even enhancing controllable text generation.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nFlow-based Generative Models\nVariational Inference and Training\nFlowSeq\nSource Encoder\nPosterior\nGeneration of Latent Variables.\nZero initialization.\nToken Dropout.\nDecoder\nFlow Architecture for Prior\nActnorm.\nInvertible Multi-head Linear Layers.\nAffine Coupling Layers.\nMulti-scale Architecture.\nPredicting Target Sequence Length\nDecoding Process\nArgmax Decoding.\nNoisy Parallel Decoding (NPD).\nImportance Weighted Decoding (IWD)\nDiscussion\nExperiments\nExperimental Setups\nTranslation Datasets\nModules and Hyperparameters\nOptimization\nKnowledge Distillation\nMain Results\nAnalysis on Decoding Speed\nHow does batch size affect the decoding speed?\nHow does sentence length affect the decoding speed?\nAnalysis of Rescoring Candidates\nAnalysis of Translation Diversity\nConclusion"
],
"type": "outline"
}
|
1910.02754
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
On Leveraging the Visual Modality for Neural Machine Translation
<<<Abstract>>>
Leveraging the visual modality effectively for Neural Machine Translation (NMT) remains an open problem in computational linguistics. Recently, Caglayan et al. posit that the observed gains are limited mainly due to the very simple, short, repetitive sentences of the Multi30k dataset (the only multimodal MT dataset available at the time), which renders the source text sufficient for context. In this work, we further investigate this hypothesis on a new large scale multimodal Machine Translation (MMT) dataset, How2, which has 1.57 times longer mean sentence length than Multi30k and no repetition. We propose and evaluate three novel fusion techniques, each of which is designed to ensure the utilization of visual context at different stages of the Sequence-to-Sequence transduction pipeline, even under full linguistic context. However, we still obtain only marginal gains under full linguistic context and posit that visual embeddings extracted from deep vision models (ResNet for Multi30k, ResNext for How2) do not lend themselves to increasing the discriminativeness between the vocabulary elements at token level prediction in NMT. We demonstrate this qualitatively by analyzing attention distribution and quantitatively through Principal Component Analysis, arriving at the conclusion that it is the quality of the visual embeddings rather than the length of sentences, which need to be improved in existing MMT datasets.
<<</Abstract>>>
<<<Introduction>>>
A number of works have explored integrating the visual modality for Neural Machine Translation (NMT) models, though there have been relatively modest gains or no gains at all from incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality.
Regarding the seemingly low utility of the visual modality in machine translation, BIBREF6 hypothesize that the highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, recently BIBREF0 have demonstrated that neural models are capable of leveraging the visual modality for translations, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) which inhibits gains from the visual modality from emerging, due to the presence of short, simple and repetitive sentences, which renders the source text sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of the mean sentence length, when compared to Multi30k.
To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of visual modality for NMT under full source-side linguistic context.
<<</Introduction>>>
<<<Proposed Fusion Techniques>>>
In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model.
<<<Step-Wise Decoder Fusion>>>
Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step i.e. we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5.
<<</Step-Wise Decoder Fusion>>>
<<<Multimodal Attention Modulation>>>
Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\overline{h_{s}}$; we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text based attention scores. The score function is a content based scoring mechanism as usual.
This formulation differs from BIBREF3 in that we use both the natural language as well as the visual modality to compute attention over the source sentence, rather than having attention over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders.
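A sketch of the interpolated attention distribution described above; the dot-product score function and a fixed interpolation weight are simplifying assumptions standing in for the content-based score and the learned balance.

```python
# Multimodal attention modulation sketch: text- and visual-based attention over the
# same source encodings, interpolated with a weight gamma (an assumption).
import torch
import torch.nn.functional as F

def multimodal_attention(src_enc, h_t, v_t, gamma=0.5):
    # src_enc: [S, d] source encodings, h_t: [d] decoder state, v_t: [d] visual encoding
    a_text = F.softmax(src_enc @ h_t, dim=0)     # a_th(s)
    a_vis = F.softmax(src_enc @ v_t, dim=0)      # a_tv(s)
    a = gamma * a_text + (1.0 - gamma) * a_vis   # interpolated attention a_t(s)
    return a @ src_enc                           # attended source context

print(multimodal_attention(torch.randn(7, 512), torch.randn(512), torch.randn(512)).shape)
```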
<<</Multimodal Attention Modulation>>>
<<<Visual-Semantic (VS) Regularizer>>>
In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision has not been explored much for multimodal translation in terms of loss functions.
Our proposed technique is the inclusion of visual-semantic supervision to the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\text{ot}}^{\text{tgt}}$. The purpose of this term is to provide the model with sequence level supervision. We leverage this idea by including a Cosine distance term, $L_{\text{cosine}}^{\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function:
where $\gamma $ is a hyperparameter balancing the effect of the loss components (a separate hyperparameter from the one in Section 2.2).
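The combined objective itself is not reproduced in this excerpt; the sketch below shows the cosine-distance term as described above and one plausible form of the overall loss, with all function and variable names being illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def visual_semantic_loss(visual_enc, tgt_word_embs, tgt_mask):
        """Cosine distance between the sentence-level visual encoding and the
        averaged target (or predicted) word embeddings, as described above.
        Shapes: visual_enc (batch, dim); tgt_word_embs (batch, len, dim);
        tgt_mask (batch, len) with 1 for real tokens."""
        lengths = tgt_mask.sum(dim=1, keepdim=True).clamp(min=1)
        sent_emb = (tgt_word_embs * tgt_mask.unsqueeze(-1)).sum(dim=1) / lengths
        return (1.0 - F.cosine_similarity(visual_enc, sent_emb, dim=-1)).mean()

    def total_loss(ce_loss, ot_loss, vs_loss, gamma):
        """One plausible way to combine the terms; the weighting in the omitted
        equation may differ."""
        return ce_loss + gamma * (ot_loss + vs_loss)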
<<</Visual-Semantic (VS) Regularizer>>>
<<</Proposed Fusion Techniques>>>
<<<Results and Analysis>>>
Throughout our experiments, we use the 300-hour subset of the How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of the English subtitles. The How2 dataset provides 2048-dimensional pre-trained ResNeXt embeddings BIBREF11 for each of the video clips aligned to the sentences.
Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of a bidirectional LSTM encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use an embedding size of 300 and a hidden size of 512. Whenever the visual modality is used, we encode each of the visual features to 300-dimensional vectors through an encoder (consisting of a Linear layer followed by Batch Normalization and ReLU non-linearity), which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence-level supervision as in BIBREF9, we utilize the Geomloss library, which provides a batched implementation of the Sinkhorn algorithm for the Optimal Transport computation. For all the translation experiments, we preprocess the data by lowercasing and removing punctuation BIBREF2, and construct the vocabulary at the word level. The Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used throughout to train our models.
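For concreteness, the visual encoder described above can be sketched as follows; only the layer choices follow the text, and the variable name is illustrative.

    import torch.nn as nn

    # Sketch of the visual encoder described above: a Linear layer followed by
    # Batch Normalization and ReLU, projecting the 2048-d pre-trained ResNeXt
    # features to 300-d vectors; it is trained end-to-end with the Seq2Seq model.
    visual_encoder = nn.Sequential(
        nn.Linear(2048, 300),
        nn.BatchNorm1d(300),
        nn.ReLU(),
    )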
<<<Experimental Results>>>
The performances of the models are summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations:
The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to a slightly higher gain when compared to the Decoder-Fusion and Attention Modulation techniques for the En-Pt language pair.
Further, the gains from incorporating the visual modality are smaller for Multimodal Attention and VS Regularization in the case of the reversed language pair Pt-En (Table TABREF10), even though the visual modality is common to both languages. This can possibly be attributed to the How2 dataset creation process, wherein the videos were first aligned with English sentences and the Portuguese translations were created afterwards, implying a reduction in correspondence with the visual modality due to errors introduced in the translation process.
<<</Experimental Results>>>
<<<Discussion>>>
To analyze the reasons for modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect the dataset as well as the proposed mechanisms.
<<<PCA of Visual Features>>>
We first investigate and compare the visual feature quality of the How2 dataset with respect to that of the Multi30k dataset. To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we can analyze how well the features are able to discriminate between the sentences, and consequently between the individual words, as a measure of their utility for NMT.
Figure FIGREF14 (Top) shows the variance explained by the top 100 principal components, obtained by applying PCA to the How2 and Multi30k training set visual features. The original feature dimension is 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These "common" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the top 10, 20, 50 and 100 principal components, respectively. It is clear that the visual features of the How2 dataset are much more dominated by the "common" dimensions than those of the Multi30k dataset. Further, this analysis is still at the sentence level, i.e., the visual features are much less discriminative among individual sentences, which further aggravates the problem at the token level. This suggests that the existing visual features are not discriminative enough to yield benefits from the visual modality in NMT, since they do not distinguish among the vocabulary elements at the token level during prediction. Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or SentencePiece BIBREF16, this problem will only be aggravated.
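The variance analysis above can be reproduced with a short scikit-learn sketch along the following lines; the function name and the assumption that the features are available as numpy arrays are illustrative.

    import numpy as np
    from sklearn.decomposition import PCA

    def cumulative_explained_variance(features, top_k=(10, 20, 50, 100)):
        """Fit PCA on the sentence-level visual features (n_sentences x 2048) and
        report the cumulative variance explained by the top-k components, as in
        the analysis above."""
        pca = PCA(n_components=max(top_k))
        pca.fit(features)
        cumulative = np.cumsum(pca.explained_variance_ratio_)
        return {k: float(cumulative[k - 1]) for k in top_k}

    # e.g., compare cumulative_explained_variance(how2_feats) against
    # cumulative_explained_variance(multi30k_feats) for the two training sets.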
<<</PCA of Visual Features>>>
<<<Comparison of Attention Components>>>
In this section, we analyze the visual and text-based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 & FIGREF19 show the comparison of visual and text-based attention for two sentences: one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention has not learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths.
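A small sketch of the sparsity measurement described above is given below; it assumes the per-sentence visual attention vectors have been dumped from the model, and all names are illustrative.

    import numpy as np

    def visual_attention_sparsity(attention_vectors):
        """Mean and standard deviation of the maximum visual attention weight per
        test sentence. `attention_vectors` is assumed to be a list of 1-D arrays,
        one per sentence, each summing to 1; values near (0.99, 0.015) mean the
        visual attention collapses onto a single source position."""
        max_weights = np.array([a.max() for a in attention_vectors])
        return float(max_weights.mean()), float(max_weights.std())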
<<</Comparison of Attention Components>>>
<<</Discussion>>>
<<</Results and Analysis>>>
<<<Conclusions and Future Work>>>
To conclude, we investigated the utility of the visual modality for NMT, under full linguistic context, on a new large-scale MMT dataset named How2. Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT; however, unlike BIBREF0, we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than to the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA-based analysis of the visual features as well as qualitatively by analyzing attention components. We hope that our work will lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT.
<<</Conclusions and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nProposed Fusion Techniques\nStep-Wise Decoder Fusion\nMultimodal Attention Modulation\nVisual-Semantic (VS) Regularizer\nResults and Analysis\nExperimental Results\nDiscussion\nPCA of Visual Features\nComparison of Attention Components\nConclusions and Future Work"
],
"type": "outline"
}
|
2004.02393
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Learning to Recover Reasoning Chains for Multi-Hop Question Answering via Cooperative Games
<<<Abstract>>>
We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs. We propose a cooperative game approach to deal with this problem, in which how the evidence passages are selected and how the selected passages are connected are handled by two models that cooperate to select the most confident chains from a large set of candidates (from distant supervision). For evaluation, we created benchmarks based on two multi-hop QA datasets, HotpotQA and MedHop; and hand-labeled reasoning chains for the latter. The experimental results demonstrate the effectiveness of our proposed approach.
<<</Abstract>>>
<<<Introduction>>>
NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators.
Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7.
Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models.
Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available.
We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy.
We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach.
<<</Introduction>>>
<<<Task Definition>>>
Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities.
Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs.
Related Work Although there is recent interest in predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, these works all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising distantly supervised signals. From this perspective, the most relevant studies in the NLP field include BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery.
<<</Task Definition>>>
<<<Method>>>
The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker.
<<<Passage Ranking Model>>>
The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages.
<<<Passage Scoring>>>
For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity.
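An interface-level sketch of this scoring step is shown below; note that the actual matching module is MatchLSTM (detailed in the appendix), for which a simple bilinear matcher is substituted here purely to make the data flow concrete, and all names and sizes are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PassageScorer(nn.Module):
        """Interface-level sketch of the Ranker's passage scoring: encode the
        question and each passage with a 2-layer bidirectional GRU, match them,
        and normalize the scores into a selection distribution."""
        def __init__(self, emb_dim=300, hid_dim=128):
            super().__init__()
            self.encoder = nn.GRU(emb_dim, hid_dim, num_layers=2,
                                  bidirectional=True, batch_first=True)
            self.match = nn.Bilinear(2 * hid_dim, 2 * hid_dim, 1)  # MatchLSTM stand-in

        def encode(self, token_embs):                       # (1, seq_len, emb_dim)
            _, state = self.encoder(token_embs)
            return torch.cat([state[-2], state[-1]], dim=-1)  # (1, 2 * hid_dim)

        def forward(self, q_embs, passage_embs):
            q_vec = self.encode(q_embs)
            scores = [self.match(q_vec, self.encode(p)) for p in passage_embs]
            return F.softmax(torch.cat(scores, dim=0).squeeze(-1), dim=0)  # P(p_i | q)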
<<</Passage Scoring>>>
<<<Conditional Selection>>>
To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability.
The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage.
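One step of this conditional selection can be sketched as follows; `scorer` and `ff` stand in for the passage-scoring module and the feed-forward projection described above, and all names are illustrative.

    import torch

    def conditional_select(scorer, query_state, passage_states, ff):
        """Sketch of one step of conditional selection: score the passages given
        the current query state, sample one, and project the concatenation of the
        query and the selected passage encoding back to the query space."""
        probs = scorer(query_state, passage_states)        # P^t(p_i | Q^{t-1})
        idx = torch.multinomial(probs, num_samples=1).item()
        selected = passage_states[idx]
        next_query = ff(torch.cat([query_state, selected], dim=-1))  # Q^{t+1}
        return idx, torch.log(probs[idx]), next_query      # log-prob feeds REINFORCE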
<<</Conditional Selection>>>
<<<Reward via Distant Supervision>>>
We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\mathcal {C}$. The model receives immediate reward at each step of selection.
In this paper we only consider chains consisting of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain satisfies the condition that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections:
For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$:
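The exact reward equations are omitted in this excerpt; the sketch below shows the general shape of the step-wise reward and the policy-gradient (REINFORCE) objective based on the description above, with illustrative names.

    import torch

    def step_reward(selected_passage, gold_set):
        """1 if the selected passage appears in the corresponding position of some
        distant-supervision chain (e.g., the head set P_H or the tail set P_T),
        else 0. This follows the verbal description above; the exact reward
        equations are not reproduced here."""
        return 1.0 if selected_passage in gold_set else 0.0

    def reinforce_loss(step_log_probs, step_rewards):
        """REINFORCE-style objective: negative sum of log-probabilities of the
        sampled selections, each weighted by its immediate reward."""
        rewards = torch.tensor(step_rewards, dtype=torch.float)
        return -(torch.stack(step_log_probs) * rewards).sum()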
<<</Reward via Distant Supervision>>>
<<</Passage Ranking Model>>>
<<<Cooperative Reasoner>>>
To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner works with less noisy data and is thus more likely to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternately as a cooperative game:
Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss:
Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\textrm {nd}}$ step is defined as:
The extension to the 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, and similarly for $p_t$.
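A self-contained sketch of the Reasoner's entity prediction and its cross-entropy training signal is given below; the real model is MatchLSTM-based, so the linear scorer and all names here are illustrative stand-ins, and the omitted reward equation is only indicated via the top-1 prediction.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EntityReasoner(nn.Module):
        """Illustrative stand-in for the Reasoner: score each candidate entity
        mention in the selected passage and train with cross-entropy against the
        true linking entity."""
        def __init__(self, hid_dim=256):
            super().__init__()
            self.score = nn.Linear(hid_dim, 1)

        def forward(self, entity_reprs):                 # (num_entities, hid_dim)
            return self.score(entity_reprs).squeeze(-1)  # one logit per entity

        def loss(self, entity_reprs, gold_index):
            logits = self(entity_reprs).unsqueeze(0)     # (1, num_entities)
            return F.cross_entropy(logits, torch.tensor([gold_index]))

        def top1(self, entity_reprs):
            # the top-1 entity feeds the extra reward for the Ranker's 2nd step
            return int(self(entity_reprs).argmax())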
<<</Cooperative Reasoner>>>
<<</Method>>>
<<<Experiments>>>
<<<Settings>>>
<<<Datasets>>>
We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers.
For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances.
For MedHop, there is no annotated evidence. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in the form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question.
During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2).
<<</Datasets>>>
<<<Baselines and Evaluation Metric>>>
We compare our model with (1) a random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) a distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still outputs the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embeddings for HotpotQA, and BIBREF25 for MedHop.
<<</Baselines and Evaluation Metric>>>
<<</Settings>>>
<<<Results>>>
<<<HotpotQA>>>
We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing additional inductive bias of orders, the conditional selection model further improves with a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner has the ability of ignoring entity links that are irrelevant to the reasoning chain.
Table TABREF22 demonstrates the effect of selecting directions, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting tail first performs better. The cooperative game mainly improves the head selection.
<<</HotpotQA>>>
<<<MedHop>>>
Results in table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduce too much noise so the Distant Supervised Ranker improves only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement.
<<</MedHop>>>
<<</Results>>>
<<</Experiments>>>
<<<Conclusions>>>
In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts a cooperative game approach in which a Ranker and a Reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTask Definition\nMethod\nPassage Ranking Model\nPassage Scoring\nConditional Selection\nReward via Distant Supervision\nCooperative Reasoner\nExperiments\nSettings\nDatasets\nBaselines and Evaluation Metric\nResults\nHotpotQA\nMedHop\nConclusions"
],
"type": "outline"
}
|
2004.01694
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Set of Recommendations for Assessing Human-Machine Parity in Language Translation
<<<Abstract>>>
The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese to English news translation, showing that the finding of human-machine parity was owed to weaknesses in the evaluation design - which is currently considered best practice in the field. We show that the professional human translations contained significantly fewer errors, and that perceived quality in human evaluation depends on the choice of raters, the availability of linguistic context, and the creation of reference translations. Our results call for revisiting current best practices to assess strong machine translation systems in general and human-machine parity in particular, for which we offer a set of recommendations based on our empirical findings.
<<</Abstract>>>
<<<Introduction>>>
Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation.
This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5.
Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation.
Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis.
<<</Introduction>>>
<<<Background>>>
We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators.
<<<Human Evaluation of Machine Translation>>>
The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans.
Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively.
As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B.
This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12.
<<</Human Evaluation of Machine Translation>>>
<<<Assessing Human–Machine Parity>>>
BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation.
In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations.
<<<Choice of Raters>>>
The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations.
<<<Evaluation Protocol>>>
We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts.
We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context.
Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country.
The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$).
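The TrueSkill scoring step can be sketched with the trueskill Python package as below; the tuple format of the judgements and the exact replay scheme are assumptions, and the clustering step used for significance grouping (BIBREF22) is omitted.

    import random
    from trueskill import Rating, rate_1vs1

    def trueskill_scores(judgements, systems, iterations=1000, seed=0):
        """Sketch of the adapted TrueSkill scoring: replay the recorded pairwise
        rankings and update per-system ratings. `judgements` is assumed to be a
        list of (winner, loser, is_tie) tuples derived from the Appraise rankings."""
        random.seed(seed)
        ratings = {s: Rating() for s in systems}
        for _ in range(iterations):
            for winner, loser, is_tie in random.sample(judgements, len(judgements)):
                ratings[winner], ratings[loser] = rate_1vs1(
                    ratings[winner], ratings[loser], drawn=is_tie)
        return {s: r.mu for s, r in ratings.items()}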
<<</Evaluation Protocol>>>
<<<Results>>>
Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts).
It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24.
<<</Results>>>
<<</Choice of Raters>>>
<<<Linguistic Context>>>
MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents.
<<<Discussion>>>
Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation.
Document-level evaluation exposes errors to raters that are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34.
<<</Discussion>>>
<<</Linguistic Context>>>
<<<Reference Translations>>>
The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality.
We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6).
<<<Quality>>>
Because the translations are created by humans, a number of factors could lead to compromises in quality:
If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news.
If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology.
Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator.
In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33.
In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level.
The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency.
To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32.
From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬).
Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking.
However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories.
<<</Quality>>>
<<<Directionality>>>
Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English.
According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33.
We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on.
Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input).
We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$).
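The bootstrap TTR computation can be sketched as follows; the resampling scheme shown (token-level resampling with replacement) is an assumption, as the exact procedure is not fully specified here, and all names are illustrative.

    import random

    def bootstrap_ttr(tokens, sample_size, n_resamples=1000, seed=0):
        """Sketch of the type-token ratio comparison described above: resample
        tokens with replacement and compute the TTR of each resample. Means and
        percentile-based confidence intervals can then be derived from the
        returned list."""
        random.seed(seed)
        ttrs = []
        for _ in range(n_resamples):
            sample = [random.choice(tokens) for _ in range(sample_size)]
            ttrs.append(len(set(sample)) / len(sample))
        return ttrs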
Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text.
<<</Directionality>>>
<<</Reference Translations>>>
<<</Assessing Human–Machine Parity>>>
<<<Translations>>>
We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:
H$_A$: The professional human translations in the dataset of BIBREF3.
H$_B$: Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35.
MT$_1$: The machine translations produced by BIBREF3's best system (Combo-6), for which the authors found parity with H$_A$.
MT$_2$: The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's dataset.
Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated.
<<</Translations>>>
<<</Background>>>
<<<Choice of Raters>>>
Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8.
Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it.
<<</Choice of Raters>>>
<<<Linguistic Context>>>
Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence.
While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16.
<<</Linguistic Context>>>
<<<Reference Translations>>>
Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality.
<<</Reference Translations>>>
<<<Recommendations>>>
Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.
<<<(R1) Choose professional translators as raters.>>>
In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.
<<</(R1) Choose professional translators as raters.>>>
<<<(R2) Evaluate documents, not sentences.>>>
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).
<<</(R2) Evaluate documents, not sentences.>>>
<<<(R3) Evaluate fluency in addition to adequacy.>>>
Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.
<<</(R3) Evaluate fluency in addition to adequacy.>>>
<<<(R4) Do not heavily edit reference translations for fluency.>>>
In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).
<<</(R4) Do not heavily edit reference translations for fluency.>>>
<<<(R5) Use original source texts.>>>
Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.
Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation.
We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity.
<<</(R5) Use original source texts.>>>
<<</Recommendations>>>
<<<Conclusion>>>
We compared professional human Chinese to English translations to the output of a strong MT system. In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors.
Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves.
Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nHuman Evaluation of Machine Translation\nAssessing Human–Machine Parity\nChoice of Raters\nEvaluation Protocol\nResults\nLinguistic Context\nDiscussion\nReference Translations\nQuality\nDirectionality\nTranslations\nChoice of Raters\nLinguistic Context\nReference Translations\nRecommendations\n(R1) Choose professional translators as raters.\n(R2) Evaluate documents, not sentences.\n(R3) Evaluate fluency in addition to adequacy.\n(R4) Do not heavily edit reference translations for fluency.\n(R5) Use original source texts.\nConclusion"
],
"type": "outline"
}
|
2003.00576
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization
<<<Abstract>>>
Traditional preneural approaches to single document summarization relied on modeling the intermediate structure of a document before generating the summary. In contrast, current state-of-the-art neural summarization models do not preserve any intermediate structure, resorting to encoding the document as a sequence of tokens. The goal of this work is two-fold: to improve the quality of generated summaries and to learn interpretable document representations for summarization. To this end, we propose incorporating latent and explicit sentence dependencies into single-document summarization models. We use structure-aware encoders to induce latent sentence relations, and inject an explicit coreferring-mention graph across sentences to incorporate explicit structure. On the CNN/DM dataset, our model outperforms standard baselines and provides intermediate latent structures for analysis. We present an extensive analysis of our summaries and show that modeling document structure reduces copying long sequences and incorporates richer content from the source document while maintaining comparable summary lengths and an increased degree of abstraction.
<<</Abstract>>>
<<<Introduction>>>
Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph-based sentence centrality BIBREF0, AMR parses BIBREF1, discourse-based compression and anaphora constraints BIBREF2. On the other hand, state-of-the-art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Although effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures.
Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model.
Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5).
We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling.
<<</Introduction>>>
<<<StructSum Model>>>
Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\text{loss}_t = - \mathrm {log}\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail.
<<<Encoder>>>
Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. The word encoder takes a sequence of words in a sentence $\mathbf {s}_i = \lbrace w\rbrace $ as input and produces contextual hidden representation for each word $\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\mathbf {h}_{\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules.
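As a concrete illustration of this hierarchical encoding, the sketch below (in PyTorch, which the paper does not specify) stacks a word-level BiLSTM, a max-pooling step over word states, and a sentence-level BiLSTM. All dimensions, module names, and the max-pooling choice are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of the hierarchical encoder described above (assumed PyTorch).
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                 bidirectional=True)
        self.sent_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True,
                                 bidirectional=True)

    def forward(self, doc_tokens):
        # doc_tokens: (num_sentences, max_words) word ids for one document
        word_emb = self.embed(doc_tokens)                  # (n, q, emb_dim)
        word_hidden, _ = self.word_lstm(word_emb)          # (n, q, 2*hidden)
        # Max-pool word states to obtain one vector per sentence
        sent_input = word_hidden.max(dim=1).values         # (n, 2*hidden)
        sent_hidden, _ = self.sent_lstm(sent_input.unsqueeze(0))
        return word_hidden, sent_hidden.squeeze(0)         # token and sentence states

# Example: 3 sentences of up to 5 words from a toy vocabulary of 100 types
enc = HierarchicalEncoder(vocab_size=100)
tokens = torch.randint(0, 100, (3, 5))
word_states, sent_states = enc(tokens)
print(word_states.shape, sent_states.shape)  # (3, 5, 512) and (3, 512)
```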
<<</Encoder>>>
<<<Latent Structure (LS) Attention>>>
We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. We parameterize with a neural network the unnormalized pair-wise scores between sentences and use Kirchhoff's matrix-tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences.
We decompose the representation of sentence $\mathbf {s}_i$ into a semantic vector $\mathbf {g}_{\mathbf {s}_i}$ and structure vector $\mathbf {d}_{\mathbf {s}_i}$ as $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {g}_{\mathbf {s}_i}; \mathbf {d}_{\mathbf {s}_i}]$. Using the structure vectors $\mathbf {d}_{\mathbf {s}_i}, \mathbf {d}_{\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\mathbf {s}_i$ being the root node $r_i$:
where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root.
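The marginal computation referred to above can be sketched as follows, using the standard matrix-tree formulation adopted in structured-attention work: build a Laplacian from the exponentiated edge scores, replace its first row with the exponentiated root scores, and read edge and root marginals off the inverse. The indexing, variable names, and use of a plain matrix inverse are our simplifications, not the authors' code.

```python
# A sketch of matrix-tree edge and root marginals from pairwise and root scores.
import torch

def tree_marginals(f, r):
    """f: (n, n) pairwise scores f_ij (i = parent, j = child)
       r: (n,)   root scores r_i
       returns: a (n, n) edge marginals, a_root (n,) root marginals"""
    n = f.size(0)
    A = torch.exp(f) * (1.0 - torch.eye(n))      # zero out self-edges
    # Laplacian: L_jj = sum_i A_ij, L_ij = -A_ij for i != j
    L = -A + torch.diag(A.sum(dim=0))
    L_bar = L.clone()
    L_bar[0, :] = torch.exp(r)                   # first row holds root scores
    L_inv = torch.inverse(L_bar)
    d0 = torch.zeros(n); d0[0] = 1.0             # indicator of the first index
    # Edge marginals a_ij and root marginals a_i^r
    a = A * ((1 - d0).unsqueeze(0) * torch.diag(L_inv).unsqueeze(0)
             - (1 - d0).unsqueeze(1) * L_inv.t())
    a_root = torch.exp(r) * L_inv[:, 0]
    return a, a_root

torch.manual_seed(0)
f, r = torch.randn(4, 4), torch.randn(4)
a, a_root = tree_marginals(f, r)
print(a_root.sum())   # root probabilities sum to ~1
```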
Using these probabilistic attention weights and the semantic vectors $\lbrace \mathbf {g}_{\mathbf {s}}\rbrace $, we compute the attended sentence representations as:
where $\mathbf {p}_{\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\mathbf {c}_{\mathbf {s}_i}$ is the context vector gathered from possible children, and $\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\textit {l}_{\mathbf {s}_i}$ incorporates the implicit structural information.
<<</Latent Structure (LS) Attention>>>
<<<Explicit Structure (ES) Attention>>>
BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. We then build a coreference based sentence graph by adding a link between sentences $(\mathbf {s}_i, \mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\mathbf {K}$ where $k_{ij}$ is given by:
where $m_i$ denotes the set of unique mentions in sentence $\mathbf {s}_i$, $(m_i \cap m_j)$ denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\epsilon = 5 \times 10^{-4}$ is a smoothing hyperparameter.
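Since the exact equation for $k_{ij}$ is given in the paper's formula and not reproduced here, the sketch below shows one plausible reading of the description: count shared coreference clusters between sentence pairs, add the smoothing term, and normalize per sentence. The data structures and the exact placement of the smoothing are assumptions for illustration.

```python
# A sketch of a coreference-based weighted sentence adjacency matrix.
import numpy as np

def coref_adjacency(mention_sets, eps=5e-4):
    """mention_sets: one set of coreference cluster ids per sentence."""
    n = len(mention_sets)
    counts = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                counts[i, j] = len(mention_sets[i] & mention_sets[j])
    K = counts + eps                 # smooth so isolated sentences get tiny weights
    np.fill_diagonal(K, 0.0)
    return K / K.sum(axis=1, keepdims=True)   # normalize per sentence

# Toy document: sentences 0 and 2 both mention the "obama" cluster
mentions = [{"obama", "white_house"}, {"senate"}, {"obama"}]
print(coref_adjacency(mentions).round(3))
```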
<<<Incorporating explicit structure>>>
Given contextual sentence representations $\lbrace \mathbf {h}_{\mathbf {s}}\rbrace $ and our explicit coreference based weighted adjacency matrix $\mathbf {K}$, we learn an explicit-structure aware representation as follows:
where $F_u$ and $F_e$ are linear projections and $\mathbf {e}_{\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information.
Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {l}_{\mathbf {s}_i};\mathbf {e}_{\mathbf {s}_i}]$ to form encoder sentence representations of the source document. To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\mathbf {h}_{w_{ij}} = [\mathbf {h}_{w_{ij}};\mathbf {h}_{\mathbf {s}_i}]$ where $\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder.
<<</Incorporating explicit structure>>>
<<</Explicit Structure (ES) Attention>>>
<<</StructSum Model>>>
<<<Experiments>>>
<<<Dataset:>>>
We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences.
<<</Dataset:>>>
<<<Baselines:>>>
We choose the following baselines based on their relatedness to the task and wide applicability:
BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it.
BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure.
BIBREF7 : We compare with the DiffMask experiment from this work, which introduces a separate content selector that tags words and phrases to be copied. The DiffMask variant is end-to-end like ours and hence is included in our baselines.
Our baselines exclude Reinforcement Learning (RL) based systems as they are not directly comparable, but our approach can be easily introduced into any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18.
<<</Baselines:>>>
<<<Hyperparameters:>>>
Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criteria. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths.
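A minimal sketch of how the quoted optimizer settings plug in, assuming PyTorch. The LSTM below is only a stand-in for the full summarization model, and the single dummy update is for illustration; beam-search decoding is not shown.

```python
# Illustrative training-step configuration with the stated hyperparameters.
import torch

model = torch.nn.LSTM(256, 512)          # stand-in for the full model
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.15,
                                initial_accumulator_value=0.1)

loss = model(torch.randn(5, 1, 256))[0].sum()   # dummy forward pass and loss
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
optimizer.step()
optimizer.zero_grad()
```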
<<</Hyperparameters:>>>
<<</Experiments>>>
<<<Results>>>
Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1,2 and L BIBREF20 F1 metric to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences w.r.t reference compared to baselines. We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences.
We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge.
Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries.
Modeling structure and adding inductive biases also helps a model to converge faster where the combined LS+ES Attention model took 126K iterations for training in comparison to 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3.
<<</Results>>>
<<<Analysis>>>
We present below analysis on the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3 and the reference.
<<<Analysis of Copying>>>
Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3. We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length.
In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average, reducing the bias of copying sentences in their entirety. This average is closer to the reference (5.07 words), without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary.
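The copy-length statistic discussed here can be approximated as below: find maximal contiguous summary spans that occur verbatim in the source and keep those longer than three tokens. The greedy string matching is an illustrative approximation, not the authors' exact procedure.

```python
# A sketch of measuring lengths of contiguous spans copied from the source.
def copied_span_lengths(source_tokens, summary_tokens, min_len=4):
    source = " " + " ".join(source_tokens) + " "
    spans, i = [], 0
    while i < len(summary_tokens):
        # Extend the span starting at i as long as it still occurs in the source
        j = i + 1
        while (j <= len(summary_tokens)
               and " " + " ".join(summary_tokens[i:j]) + " " in source):
            j += 1
        span_len = j - 1 - i
        if span_len >= min_len:
            spans.append(span_len)
        i += max(span_len, 1)
    return spans

src = "the cat sat on the mat and looked at the dog".split()
summ = "the cat sat on the mat today".split()
print(copied_span_lengths(src, summ))   # [6]
```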
<<</Analysis of Copying>>>
<<<Content Selection and Abstraction>>>
A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models.
BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline.
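A small sketch of the novel n-gram measure used above: the fraction of summary n-grams that never appear in the source document. Treating n-grams as sets is one plausible variant; counting duplicates would be another.

```python
# A sketch of the novel n-gram proportion used to quantify abstraction.
def novel_ngram_ratio(source_tokens, summary_tokens, n=3):
    def ngrams(tokens):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    summary_ngrams = ngrams(summary_tokens)
    if not summary_ngrams:
        return 0.0
    novel = summary_ngrams - ngrams(source_tokens)
    return len(novel) / len(summary_ngrams)

src = "the quick brown fox jumps over the lazy dog".split()
summ = "the quick brown fox leaps over a sleepy dog".split()
print(round(novel_ngram_ratio(src, summ), 2))   # 0.71
```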
<<</Content Selection and Abstraction>>>
<<<Layout Bias>>>
Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs.
To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work.
<<</Layout Bias>>>
<<<Document Structures>>>
Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information.
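The edge-overlap comparison described above can be computed roughly as below, treating both the induced tree and the explicit graph as sets of undirected sentence pairs; whether edges are treated as directed is an assumption of this sketch.

```python
# A sketch of precision/recall over edges between latent trees and explicit graphs.
def edge_prf(latent_edges, explicit_edges):
    latent = {frozenset(e) for e in latent_edges}
    explicit = {frozenset(e) for e in explicit_edges}
    overlap = latent & explicit
    p = len(overlap) / len(latent) if latent else 0.0
    r = len(overlap) / len(explicit) if explicit else 0.0
    return p, r

latent_tree = [(0, 2), (2, 1), (2, 3)]      # edges of an induced sentence tree
coref_graph = [(0, 2), (1, 3)]              # edges of the explicit graph
print(edge_prf(latent_tree, coref_graph))   # (0.333..., 0.5)
```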
Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1,2,3 differ from sentences 5,6,7 on the topic being discussed and our model has clustered the two sets separately.
<<</Document Structures>>>
<<</Analysis>>>
<<<Related Work>>>
Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and building a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to do unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21.
Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. This model has been the foundation for a lot of follow up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of the followup work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. A lot of recent work looks into incorporating structure into neural models. BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective to neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries but we choose to model the structure of the source sentence to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 present a long document summarization model applicable to scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization.
<<</Related Work>>>
<<<Conclusion and Future Work>>>
To summarize, our contributions are three-fold. We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis.
StructSum has demonstrated performance gain and higher quality output summaries; with a potential direction to study the role of latent structures in the interpretability of models in the future. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nStructSum Model\nEncoder\nLatent Structure (LS) Attention\nExplicit Structure (ES) Attention\nIncorporating explicit structure\nExperiments\nDataset:\nBaselines:\nHyperparameters:\nResults\nAnalysis\nAnalysis of Copying\nContent Selection and Abstraction\nLayout Bias\nDocument Structures\nRelated Work\nConclusion and Future Work"
],
"type": "outline"
}
|
1909.02635
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Effective Use of Transformer Networks for Entity Tracking
<<<Abstract>>>
Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions. While self-attention-based pre-trained language encoders like GPT and BERT have been successfully applied across a range of natural language understanding tasks, their ability to handle the nuances of procedural texts is still untested. In this paper, we explore the use of pre-trained transformer networks for entity tracking tasks in procedural text. First, we test standard lightweight approaches for prediction with pre-trained transformers, and find that these approaches underperform even simple baselines. We show that much stronger results can be attained by restructuring the input to guide the transformer model to focus on a particular entity. Second, we assess the degree to which transformer networks capture the process dynamics, investigating such factors as merged entities and oblique entity references. On two different tasks, ingredient detection in recipes and QA over scientific processes, we achieve state-of-the-art results, but our models still largely attend to shallow context clues and do not form complex representations of intermediate entity or process state.
<<</Abstract>>>
<<<Introduction>>>
Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15.
This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines.
Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition.
We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both the datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions.
<<</Introduction>>>
<<<Background: Process Understanding>>>
Procedural text is a domain of text involved with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts.
BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entities undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19. Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in “fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process.
BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes whereby ingredients mix together and are aliased as intermediate compositions.
We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond to either the presence (1) or absence (0) or the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O).
State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective.
With such tasks in place, a strong model will ideally learn to form robust entity-centric representation at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem.
<<</Background: Process Understanding>>>
<<<Studying Basic Transformer Representations for Entity Tracking>>>
<<<Post-conditioning Models>>>
The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage.
Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\textstyle g_{e}\!=\!\!\!\sum \limits _{\text{ent toks}}\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction:
<<<Task Specific Input Token>>>
We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token denoted by $h_{ \texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities:
The aim of the [CLS] token is to encode information related to the general semantics of entities participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$.
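A minimal sketch of this prediction head, assuming PyTorch: the [CLS] hidden state is concatenated with a summed entity embedding and passed through one linear layer. The hidden size (768) and the binary output are illustrative choices, not the exact configuration.

```python
# A sketch of the post-conditioning GPT_indep prediction head.
import torch
import torch.nn as nn

class PostConditionIndep(nn.Module):
    def __init__(self, hidden_dim=768, emb_dim=768, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim + emb_dim, num_classes)

    def forward(self, h_cls, entity_token_embs):
        # h_cls: (batch, hidden_dim) transformer output at the [CLS] position
        # entity_token_embs: (batch, num_entity_tokens, emb_dim) BPE embeddings
        g_e = entity_token_embs.sum(dim=1)        # entity representation
        c_et = torch.cat([h_cls, g_e], dim=-1)
        return torch.log_softmax(self.classifier(c_et), dim=-1)

head = PostConditionIndep()
log_probs = head(torch.randn(4, 768), torch.randn(4, 3, 768))
print(log_probs.shape)   # torch.Size([4, 2])
```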
<<</Task Specific Input Token>>>
<<<Entity Based Attention>>>
Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. Finally, using a feed-forward network followed by a softmax layer gives us the class probabilities:
The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$.
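The bilinear-attention variant can be sketched as follows; the parameterization of the bilinear weight, the feed-forward sizes, and the use of log-softmax are illustrative assumptions rather than the exact implementation.

```python
# A sketch of the post-conditioning GPT_attn head with bilinear attention.
import torch
import torch.nn as nn

class PostConditionAttn(nn.Module):
    def __init__(self, hidden_dim=768, num_classes=2):
        super().__init__()
        self.bilinear = nn.Parameter(torch.randn(hidden_dim, hidden_dim) * 0.02)
        self.ffn = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, num_classes))

    def forward(self, g_e, X):
        # g_e: (batch, hidden_dim) entity vector; X: (batch, m, hidden_dim) token states
        scores = torch.einsum("bd,de,bme->bm", g_e, self.bilinear, X)
        attn = torch.softmax(scores, dim=-1)
        c_et = torch.einsum("bm,bmd->bd", attn, X)   # entity-specific context vector
        return torch.log_softmax(self.ffn(c_et), dim=-1)

model = PostConditionAttn()
out = model(torch.randn(2, 768), torch.randn(2, 50, 768))
print(out.shape)   # torch.Size([2, 2])
```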
<<</Entity Based Attention>>>
<<</Post-conditioning Models>>>
<<<Results and Observations>>>
We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. These include (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct an LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT.
Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively.
As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation.
For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures.
<<</Results and Observations>>>
<<</Studying Basic Transformer Representations for Entity Tracking>>>
<<<Entity-Conditioned Models>>>
The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers.
Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for.
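A sketch of the entity-first input construction just described: the target entity follows a [START] token and a [SEP] token, and a [CLS] token is appended after every process step to anchor that step's prediction. The token strings are placeholders for whatever special symbols the actual tokenizer uses, and whitespace tokenization stands in for BPE.

```python
# A sketch of entity-first input formatting for the document-level model.
def entity_first_input(entity, steps):
    tokens = ["[START]"] + entity.split() + ["[SEP]"]
    cls_positions = []
    for step in steps:
        tokens.extend(step.split())
        tokens.append("[CLS]")
        cls_positions.append(len(tokens) - 1)   # predict from these positions
    return tokens, cls_positions

steps = ["Preheat the oven .", "Mix the butter and sugar .", "Add the flour ."]
tokens, cls_positions = entity_first_input("butter", steps)
print(tokens)
print(cls_positions)   # one [CLS] index per step
```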
<<<Sentence Level vs. Document Level>>>
As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task.
<<</Sentence Level vs. Document Level>>>
<<<Training Details>>>
In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning.
<<<Domain Specific LM fine-tuning>>>
For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus.
<<</Domain Specific LM fine-tuning>>>
<<<Supervised Task Fine-Tuning>>>
After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For fine-tuning for the task, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network.
In addition, we observed that adding the language model loss during task-specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, the final fine-tuning objective combines the task-specific classification loss with an auxiliary language modeling loss.
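A sketch of such a combined objective, assuming the LM loss is added to the classification loss with a scalar weight; the weight value and the exact loss functions here are illustrative, not the authors' settings.

```python
# A sketch of jointly minimizing the task loss and an auxiliary LM loss.
import torch
import torch.nn.functional as F

def fine_tuning_loss(class_logits, labels, lm_logits, lm_targets, lm_weight=0.5):
    task_loss = F.cross_entropy(class_logits, labels)
    lm_loss = F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                              lm_targets.view(-1))
    return task_loss + lm_weight * lm_loss

logits = torch.randn(8, 2)                 # per-instance class scores
labels = torch.randint(0, 2, (8,))
lm_logits = torch.randn(8, 20, 40000)      # next-token scores over the vocabulary
lm_targets = torch.randint(0, 40000, (8, 20))
print(fine_tuning_loss(logits, labels, lm_logits, lm_targets))
```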
<<</Supervised Task Fine-Tuning>>>
<<</Training Details>>>
<<<Experiments: Ingredient Detection>>>
We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants.
<<<Systems to Compare>>>
We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$.
<<<Neural Process Networks>>>
The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1 $ while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively.
<<</Neural Process Networks>>>
<<</Systems to Compare>>>
<<<Results>>>
Table TABREF20 compares the overall performances of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Comparing to the baselines (Majority through First) and post-conditioned models, we see that the early entity conditioning is critical to achieve high performance.
Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increased compared to these baseline models. Interestingly, the ELMo-based model under-performs the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long-term contexts.
Comparing the four variants of structuring input in proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process.
<<</Results>>>
<<<Ablations>>>
We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations.
<<<Ingredient Specificity>>>
In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence.
<<</Ingredient Specificity>>>
<<<Context Importance>>>
We compare with a “no context” model (w/o context) which ignores the previous context and only uses the current recipe step in determining the ingredient's presence. Table TABREF23 shows that such a model is able to perform surprisingly well, nearly as well as the first occurrence baseline.
This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task.
<<</Context Importance>>>
<<</Ablations>>>
<<</Experiments: Ingredient Detection>>>
<<<State Change Detection (ProPara)>>>
Next, we now focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task.
Figure FIGREF2b shows an example of a short instance from the ProPara dataset. The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity can not be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence.
We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created/destroyed/moved from/to)?
<<</State Change Detection (ProPara)>>>
<<</Entity-Conditioned Models>>>
<<<Challenging Task Phenomena>>>
Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed as part of combination of entities leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the affects.
<<<Ingredient Detection>>>
For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but in a combined nature with other ingredients and henceforth no explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplifies this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred by some other name.
<<<Intermediate Compositions>>>
Formally, we pick the set of examples where the ground truth is a transition from $0 \rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\rightarrow 1$ pattern indicative of the First Occ baseline.
<<</Intermediate Compositions>>>
<<<Hypernymy and Synonymy>>>
We observe the model is able to capture ingredients based on their hypernyms (nuts $\rightarrow $ pecans, salad $\rightarrow $ lettuce) and rough synonymy (bourbon $\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients.
<<</Hypernymy and Synonymy>>>
<<<Impact of external data>>>
One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data. We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and in turn the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains.
<<</Impact of external data>>>
<<</Ingredient Detection>>>
<<<State Change Detection>>>
For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone” the weak acid is also considered to move to the limestone.
<<</State Change Detection>>>
<<</Challenging Task Phenomena>>>
<<<Analysis>>>
The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we perform analysis of the model's behavior with respect to the input to understand what cues it is picking up on.
<<<Gradient based Analysis>>>
One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics.
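The gradient-based attribution used here can be sketched as follows: take the gradient of the predicted class score with respect to the input embeddings and use its norm per token as an importance score. The tiny embedding-plus-linear model below stands in for the fine-tuned transformer.

```python
# A sketch of gradient-based saliency over input token embeddings.
import torch
import torch.nn as nn

emb = nn.Embedding(100, 16)
clf = nn.Linear(16, 2)

token_ids = torch.tensor([[5, 17, 42, 8]])
x = emb(token_ids)                       # (1, seq, dim)
x.retain_grad()                          # keep gradients on this non-leaf tensor
score = clf(x.mean(dim=1))[0, 1]         # score of the "present" class
score.backward()
saliency = x.grad.norm(dim=-1).squeeze(0)
print(saliency)                          # one importance value per input token
```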
In an ideal scenario, we would want the model to track constituent entities by shifting its “focus” to their newly formed compositions with other entities, often aliased by other names such as mixture, blend or paste. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this.
<<</Gradient based Analysis>>>
<<<Input Ablations>>>
We can study which inputs are important more directly by explicitly removing certain words from the input process paragraph and evaluating the current model on the resulting input. We mainly ran experiments to examine the importance of (i) verbs and (ii) other ingredients.
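The following sketch illustrates one way such an ablation could be implemented, assuming tokens come with POS tags and a known ingredient list; the function and the example tokens are ours, not the paper's code.

```python
# Remove selected cue words from a tokenized step before re-running the model.
def ablate(tokens, pos_tags, target_entity, other_ingredients,
           drop_verbs=False, drop_other_ingredients=False):
    kept = []
    for tok, pos in zip(tokens, pos_tags):
        if drop_verbs and pos.startswith("VB"):
            continue
        if drop_other_ingredients and tok in other_ingredients and tok != target_entity:
            continue
        kept.append(tok)
    return kept

tokens = ["melt", "the", "butter", "and", "add", "the", "flour"]
pos    = ["VB",   "DT",  "NN",     "CC",  "VB",  "DT",  "NN"]
print(ablate(tokens, pos, "butter", {"flour"}, drop_verbs=True))
# -> ['the', 'butter', 'and', 'the', 'flour']
```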
Table TABREF40 presents these ablation studies. We observe only a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs drops the performance to $79.08$, and omitting both drops it further to $77.79$. This shows the model’s greater reliance on verb semantics than on tracking the other ingredients.
<<</Input Ablations>>>
<<</Analysis>>>
<<<Conclusion>>>
In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using the transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encoding of the process paragraph, guiding the self-attention in a target entity oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground: Process Understanding\nStudying Basic Transformer Representations for Entity Tracking\nPost-conditioning Models\nTask Specific Input Token\nEntity Based Attention\nResults and Observations\nEntity-Conditioned Models\nSentence Level vs. Document Level\nTraining Details\nDomain Specific LM fine-tuning\nSupervised Task Fine-Tuning\nExperiments: Ingredient Detection\nSystems to Compare\nNeural Process Networks\nResults\nAblations\nIngredient Specificity\nContext Importance\nState Change Detection (ProPara)\nChallenging Task Phenomena\nIngredient Detection\nIntermediate Compositions\nHypernymy and Synonymy\nImpact of external data\nState Change Detection\nAnalysis\nGradient based Analysis\nInput Ablations\nConclusion"
],
"type": "outline"
}
|
2004.00139
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Swiss German Dictionary: Variation in Speech and Writing
<<<Abstract>>>
We introduce a dictionary containing forms of common words in various Swiss German dialects normalized into High German. As Swiss German is, for now, a predominantly spoken language, there is a significant variation in the written forms, even between speakers of the same dialect. To alleviate the uncertainty associated with this diversity, we complement the pairs of Swiss German - High German words with the Swiss German phonetic transcriptions (SAMPA). This dictionary becomes thus the first resource to combine large-scale spontaneous translation with phonetic transcriptions. Moreover, we control for the regional distribution and insure the equal representation of the major Swiss dialects. The coupling of the phonetic and written Swiss German forms is powerful. We show that they are sufficient to train a Transformer-based phoneme to grapheme model that generates credible novel Swiss German writings. In addition, we show that the inverse mapping - from graphemes to phonemes - can be modeled with a transformer trained with the novel dictionary. This generation of pronunciations for previously unknown words is key in training extensible automated speech recognition (ASR) systems, which are key beneficiaries of this dictionary.
<<</Abstract>>>
<<<Introduction>>>
Swiss German refers to any of the German varieties that are spoken in about two thirds of Switzerland BIBREF0. Besides at least one of those dialectal varieties, Swiss German people also master standard (or 'High') German which is taught in school as the official language of communication.
Swiss German varies strongly. Many differences exist in the dialectal continuum of the German-speaking part of Switzerland. Besides pronunciation, it also varies a lot in writing. Standard German used to be the exclusive language for writing in Switzerland. Writing in Swiss German has only come up rather recently (notably in text messaging). Because of this, there are no orthographic conventions for Swiss German varieties. Even people speaking the same dialect can, and often do, write phonetically identical words differently.
In this paper, we present a dictionary of written standard German words paired with their pronunciation in Swiss German words. Additionally Swiss German spontaneous writings, i.e. writings as they may be used in text messages by native speakers, are paired with Swiss German pronunciations.
The primary motivation for building this dictionary is rendering Swiss German accessible for technologies such as Automatic Speech Recognition (ASR).
This is the first publicly described Swiss German dictionary shared for research purposes. Furthermore, this is the first dictionary that combines pronunciations of Swiss German with spontaneous writings.
<<</Introduction>>>
<<<Related Work>>>
This dictionary complements previously developed resources for Swiss German, which share some common information. Spontaneous noisy writing has already been recorded in text corpora BIBREF1, BIBREF2, BIBREF3, some of which are also normalized. These resources contain relatively large lexicons of words used in context, but they do not contain any information about pronunciation. The features of speech are represented in other resources, such as BIBREF4, BIBREF5, BIBREF6, which, on the other hand, contain relatively small lexicons (small set of words known to vary across dialects). The ArchiMob corpus does contain a large lexicon of speech and writing (Dieth transcription), but the spoken part is available in audio sources only, without phonetic transcription.
This dictionary is the first resource to combine all the relevant information together. A relatively large lexicon has been constructed in which phonetic transcriptions (in the SAMPA alphabet) are mapped to various spontaneous writings controlling for the regional distribution. Some of the representations in this dictionary are produced manually, while others are added using automatic processing.
Automatic word-level conversion between various writings in Swiss German has been addressed in several projects, mostly for the purpose of writing normalization BIBREF7, BIBREF2, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF0, BIBREF12. The task of normalization consists of mapping multiple variants of a single lexical item into a single writing, usually identical to standard German (an example would be the Swiss German words aarbet and arbäit, which both map to standard German arbeit ('work')). Early data sets were processed manually (SMS). This was followed by character-level statistical machine translation models BIBREF13, BIBREF14 and, more recently, by neural sequence-to-sequence technology. The solution by lusettietal18 employs soft-attention encoder-decoder recurrent networks enhanced with synchronous multilevel decoding. ruzsicsetal19 develop these models further to integrate linguistic (PoS) features.
A slightly different task of translating between standard German and Swiss dialects was first addressed with finite state technology BIBREF15. More recently, honnet-etal17 test convolutional neural networks on several data sets.
We continue the work on using neural networks for modeling word-level conversion. Unlike previous work, which dealt with written forms only, we train models for mapping phonetic representations to various possible writings. The proposed solution relies on the latest framework for sequence-to-sequence tasks — transformer networks BIBREF16.
<<</Related Work>>>
<<<Dictionary Content and access>>>
We pair 11'248 standard German written words with their phonetical representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.)
This dictionary comes in two versions, as we used two differently sized sets of SAMPA characters. Our extended set, which includes 137 phones, allows for a detailed and adequate representation of the diverse pronunciation in Switzerland. The smaller set of 59 phones is computationally easier to handle. The phone reduction was mainly done by splitting up combined SAMPA characters such as diphthongs. UI s t r $ \lbrace $ tt @ and U I s t r $ \lbrace $ t t @, for example, are both representations of the Stans pronunciation of the standard German word austreten ('step out'). The latter representation belongs to the dictionary based on the smaller phoneset. Table TABREF2 shows an example of five dictionary entries based on the bigger phoneset.
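A rough sketch of how such a phone-set reduction can be carried out is shown below; the two mapping entries are illustrative only and do not reproduce the dictionary's full reduction table.

```python
# Split combined SAMPA symbols (e.g. a diphthong or a geminate) into component phones.
SPLIT = {"UI": ["U", "I"], "tt": ["t", "t"]}   # illustrative entries only

def reduce_phones(sampa_tokens):
    reduced = []
    for phone in sampa_tokens:
        reduced.extend(SPLIT.get(phone, [phone]))
    return reduced

# "UI s t r { tt @" (extended set) -> "U I s t r { t t @" (reduced set)
print(" ".join(reduce_phones("UI s t r { tt @".split())))
```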
For a subset of 9000 of 11'248 standard German words, we have manually annotated GSWs for Visp (9000) and for Zurich (2 x 9000, done by two different annotators). For a subsubset of 600 of those standard German words we have manually annotated GSWs for the four other dialects of St. Gallen, Basel, Bern, and Stans. The remaining writing variants are generated using automatic methods described below.
The dictionary is freely available for research purposes under the creative commons share-alike non-commercial licence via this website http://tiny.uzh.ch/11X.
<<</Dictionary Content and access>>>
<<<Construction of the dictionary>>>
In the following we present the steps of construction of our dictionary, also detailing how we chose the six dialects to represent Swiss German and how, starting with a list of standard German words, we retrieved the mapping SAMPAs and GSWs.
<<<Discretising continuous variation>>>
To be able to represent Swiss German by only a few dialects which differ considerably, it is necessary to discretize linguistic varieties, because, as mentioned earlier, regional language variation in Switzerland is continuous. For this identification of different varieties we used a dialectometric analysis BIBREF17. This analysis is based on lexical, phonological and morphological data of the German-speaking areas of Switzerland BIBREF4. As we worked with word lists and not sentences, we discounted syntactical influences on area boundaries that are also described in that analysis. We represent six differentiated linguistic varieties. We initially considered working with ten linguistic varieties because this number of areas was the 'best cut' in the dialectometric analysis BIBREF17. Yet, due to time constraints and considerable overlap between some of the linguistic varieties, we reduced this number to six. We also made some adjustments to the chosen varieties in order to correspond better to the perception of speakers and in favor of more densely populated areas.
One way to represent the six individualized linguistic varieties would have been to annotate the dialectal centers, i.e. those places that have the average values of dialectal properties within the area where the variety is spoken. However, we chose to represent the linguistic varieties by the most convenient urban places. Those were the dialects of the cities of Zurich, St. Gallen, Basel, Bern, Visp, and Stans.
<<</Discretising continuous variation>>>
<<<Manual annotation>>>
<<<SAMPAs>>>
For each standard German word in our dictionary we manually annotated its phonetic representation in the six chosen dialects. The information about the pronunciation of Swiss German words is partially available also from other sources but not fully accessible BIBREF4 BIBREF7.
To help us with pronunciation our annotators first used their knowledge as native speakers (for Zurich and Visp). Secondly, they consulted dialect specific grammars BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 as well as dialect specific lexica BIBREF23 BIBREF24 BIBREF25. They also considered existing Swiss German dictionaries BIBREF7 BIBREF4, listened to recordings BIBREF0 and conferred with friends and acquaintances originating from the respective locations.
<<</SAMPAs>>>
<<<GSWs>>>
9000 GSWs for Visp German and 2 x 9000 GSWs for Zurich German were annotated by native speakers of the respective dialect. Our annotators created the GSWs while looking at standard German words and without looking at the corresponding SAMPAs for Visp and Zurich. Through this independence from SAMPAs we are able to avoid biases concerning the phonetics as well as the meaning of the word in generating GSWs.
At a later stage of our work, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans in order to improve our phoneme-to-grapheme (p2g) model (see next section). For the manual annotation of these dialects we had no native speakers. Therefore, when writing the GSWs, our annotators relied on the corresponding SAMPAs of these dialects, which they had created beforehand.
<<</GSWs>>>
<<</Manual annotation>>>
<<<Automatic annotation>>>
In order to account for the mentioned variety of everyday Swiss German writing, we aimed for more than one GSW per SAMPA. The heterogeneous writing style makes the SAMPA$\,\rightarrow \,$GSW mapping a one-to-many relation instead of the regular one-to-one relation that speakers of standard languages are accustomed to. To save time in generating the many GSWs, we opted for an automatic process.
We first tried to automate the generation of GSWs with a rule-based program. Via SAMPAs together with phoneme-to-grapheme mappings we tried to obtain all possible GSWs. Yet, this yielded mostly implausible writings and failed to cover all the writings we had already produced manually. We then set up a phoneme-to-grapheme (p2g) model to generate the most likely spellings.
<<<Transformer-based Phoneme to Grapheme (p2g)>>>
The process of generating written forms from a given SAMPA can be viewed as a sequence-to-sequence problem, where the input is a sequence of phonemes and the output is a sequence of graphemes.
We decided to use a Transformer-based model for the phoneme-to-grapheme (p2g) task. The reason for this is twofold. First, the Transformer has shown great success in seq2seq tasks and it has outperformed LSTM and CNN-based models. Second, it is computationally more efficient than LSTM and CNN networks.
The Transformer consists of an encoder and a decoder part. The encoder generates a contextual representation for each input SAMPA that is then fed into the decoder together with the previously decoded grapheme. They both have N identical layers. In the encoder, each layer has a multi-head self-attention layer and a position-wise fully-connected feed-forward layer. While in the decoder, in addition to these two layers, we also have an additional multi-headed attention layer that uses the output of the encoder BIBREF16.
We are using a PyTorch implementation of the Transformer. As a result of the small size of the dataset, we are using a smaller model with only 2 layers and 2 heads. The dimension of the key (d_k) and value (d_v) is 32, the dimension of the model (d_model) and the word vectors (d_word_vec) is 50, and the hidden inner dimension (d_inner_hid) is 400. The model is trained for 55 epochs with a batch size of 64 and a dropout of 0.2. For decoding the output of the model, we are using beam search with beam size 10. We experimented with different beam sizes, but we saw that it does not have a significant influence on the result.
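For reference, the hyperparameters listed above can be collected as follows; how they are passed to the particular PyTorch Transformer implementation is not specified in the paper, so this is only a summary in code form, not the authors' configuration file.

```python
# Hyperparameters of the p2g Transformer as quoted in the text.
p2g_config = {
    "n_layers": 2,          # encoder/decoder layers
    "n_head": 2,            # attention heads
    "d_k": 32, "d_v": 32,   # key/value dimensions
    "d_model": 50,          # model dimension
    "d_word_vec": 50,       # embedding dimension
    "d_inner_hid": 400,     # feed-forward inner dimension
    "dropout": 0.2,
    "epochs": 55,
    "batch_size": 64,
    "beam_size": 10,        # beam width used at decoding time
}
print(p2g_config)
```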
The training set is made of 24'000 phonemes-to-graphemes pairs, which are the result of transcribing 8'000 High German words into two Zurich forms and one Visp form. Those transcriptions were made independently by three native speakers. Due to the scarcity of data, we decided not to distinguish between dialects. Hence, a single model receives a sequence of SAMPA symbols and learns to generate a matching sequence of characters.
<<</Transformer-based Phoneme to Grapheme (p2g)>>>
<<<Test set and evaluation>>>
Our team of Swiss German annotators evaluated a test-set of 1000 words. We aimed to exclude only very far-off forms (tagged '0'), i.e. forms that would very likely be judged as wrong by Swiss German speakers. The accepted writings (tagged '1') might include some that seem off to the Swiss German reader.
In order to consistently rate the output, the criteria shown in table TABREF4 were followed. A GSW was tagged '0' if there was at least one letter added, missing, or changed without comprehensible phonetic reason. GSWs were also tagged '0' if there were at least two mistakes that our annotators saw as minor. 'Minor mistakes' are substitutions of related sounds or spellings, added or omitted geminates, and changes in vowel length.
For each of the 1000 words in the test-set, five GSW predictions in all six dialects were given to our annotators. For Visp and Zurich, they tagged 1000x5 GSW predictions each with 1 or 0. For St. Gallen, Basel, Bern, and Stans, they evaluated 200x5.
In Table TABREF13 we show the result from this evaluation. We count the number of correct GSWs (labeled as '1') among the top 5 candidates generated by the p2g model, where the first candidate is the most relevant, then the second one and so on.
The evaluation was done at a stage where our model was trained only on GSWs for Zurich and Visp (see sec. SECREF8). The number of correct predictions is lower for the dialects of St. Gallen, Basel, Bern, and Stans, mainly because we used some special SAMPA characters for those dialects and the model did not have the corresponding Latin character strings. After the evaluation, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans to improve the model.
<<</Test set and evaluation>>>
<<<Grapheme to Phoneme (g2p) and its benefits for ASR>>>
Automatic speech recognition (ASR) systems are the main use case for our dictionary. ASR systems convert spoken language into text. Today, they are widely used in different domains, from customer and help centers to voice-controlled assistants and devices. The main resources needed for an ASR system are audio, transcriptions and a phonetic dictionary. The quality of an ASR system is highly dependent on the quality of the dictionary. With our resource we provide such a phonetic dictionary.
To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: out-of-vocabulary words can be a problem for ASR systems, and for those words we need a model that can generate pronunciations from a written form in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we are using the same architecture as for the p2g model. The only difference is the input and output vocabulary. Sequitur and our model use the dictionary with the same train (19'898 samples), test (2'412 samples) and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialects, because most of the samples are from these two dialects. In Table TABREF15 we show the result of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first column we have the result using the whole test set with all the dialects, and in the 2nd and 3rd columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialects. Here we can clearly see that our model performs better than the Sequitur model. We have fewer matches for the Visp dialect than for Zurich because most of our data is from the Zurich dialect.
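A minimal sketch of this evaluation is given below: the edit distance between predicted and reference phone sequences plus an exact-match count; the two example word pairs are made up for illustration and are not taken from the dictionary.

```python
# Levenshtein distance over phone sequences, plus exact-match counting.
def edit_distance(a, b):
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

pred = [["a", "r", "b", "E", "t"], ["f", "r", "o", "g", "@"]]
gold = [["a", "r", "b", "E", "t"], ["f", "r", "O", "g", "@"]]
exact = sum(p == g for p, g in zip(pred, gold))
dists = [edit_distance(p, g) for p, g in zip(pred, gold)]
print(exact, dists)   # -> 1 [0, 1]
```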
<<</Grapheme to Phoneme (g2p) and its benefits for ASR>>>
<<</Automatic annotation>>>
<<</Construction of the dictionary>>>
<<<Discussion>>>
One of our objectives was to map phonetic words with their writings. There are some mismatches between SAMPA and GSWs in our dictionary, especially when the GSWs were done manually and independently from the SAMPA. Those mismatches occur where there is no straightforward correspondence of a standard German and Swiss German word.
Two kinds of such a missing correspondence can be distinguished. First, there are ambiguous standard German words. And that is necessarily so, as our dictionary is based on a list of standard German words without sentential or any other context. An example of a (morphologically) ambiguous word is standard German liebe. As we did not differentiate upper- and lower-case, it can mean either (a) 'I love' or (b) 'the love'. As evident from table 1, liebe (a) and liebi (b) were mixed in our dictionary. The same is the case for standard German frage, which means either (a) 'I ask' or (b) 'the question'. Swiss German fröge, froge, fregu (a) and fraag, froog (b) were mixed. (For both examples, see table 1.)
The second case of missing straightforward correspondence is the distance between standard German and Swiss German. For one, lexical preferences in Swiss German differ from those in standard German. To express that food is 'tasty' in standard German, the word lecker is used. This is also possible in Swiss German, yet the word fein is much more common. Another example is that the standard German word rasch ('swiftly') is uncommon in Swiss German – synonyms of the word are preferred. Both of these show up in the variety of options our annotators chose for those words (see table 1). Also, the same standard German word may have several dialectal versions in Swiss German. For example, there are a short and a long version of the standard German word grossvater, namely grospi and grossvatter.
A second aim was to represent the way Swiss German speaking people write spontaneously. However, as our annotators wrote the spontaneous GSWs mostly while looking at standard German words, our GSWs might be biased towards standard German orthography. Yet, there is potentially also a standard German influence in the way Swiss German is actually written.
We partly revised our dictionary in order to adapt to everyday writing: we introduced explicit boundary marking into our SAMPAs by inserting an _ in the SAMPA where there would usually be a space in writing. An example where people would conventionally add a space are the Swiss German forms corresponding to standard German preterite forms such as 'ging'. The corresponding Swiss German past participles – here isch gange – would (most often) be written separately. So entries like b i n k a N @ in table 1 were changed to b i n _ k a N @.
<<</Discussion>>>
<<<Conclusion>>>
In this work we introduced the first Swiss German dictionary. Through its dual nature - both spontaneous written forms in multiple dialects and accompanying phonetic representations - we believe it will become a valuable resource for multiple tasks, including automated speech recognition (ASR). This resource was created using a combination of manual and automated work, in a collaboration between linguists and data scientists that leverages the best of two worlds - domain knowledge and data-driven focus on likely character combinations.
Through the combination of complementary skills we overcame the difficulty posed by the important variations in written Swiss German and generated a resource that adds value to downstream tasks. We show that the SAMPA-to-written-Swiss-German direction is useful for speech recognition and can replace the previous state of the art. Moreover, the written-form-to-SAMPA direction is promising and has applications in areas like text-to-speech.
We make the dictionary freely available for researchers to expand and use.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nDictionary Content and access\nConstruction of the dictionary\nDiscretising continuous variation\nManual annotation\nSAMPAs\nGSWs\nAutomatic annotation\nTransformer-based Phoneme to Grapheme (p2g)\nTest set and evaluation\nGrapheme to Phoneme (g2p) and its benefits for ASR\nDiscussion\nConclusion"
],
"type": "outline"
}
|
1912.07025
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Indiscapes: Instance Segmentation Networks for Layout Parsing of Historical Indic Manuscripts
<<<Abstract>>>
Historical palm-leaf manuscript and early paper documents from Indian subcontinent form an important part of the world's literary and cultural heritage. Despite their importance, large-scale annotated Indic manuscript image datasets do not exist. To address this deficiency, we introduce Indiscapes, the first ever dataset with multi-regional layout annotations for historical Indic manuscripts. To address the challenge of large diversity in scripts and presence of dense, irregular layout elements (e.g. text lines, pictures, multiple documents per image), we adapt a Fully Convolutional Deep Neural Network architecture for fully automatic, instance-level spatial layout parsing of manuscript images. We demonstrate the effectiveness of proposed architecture on images from the Indiscapes dataset. For annotation flexibility and keeping the non-technical nature of domain experts in mind, we also contribute a custom, web-based GUI annotation tool and a dashboard-style analytics portal. Overall, our contributions set the stage for enabling downstream applications such as OCR and word-spotting in historical Indic manuscripts at scale.
<<</Abstract>>>
<<<Introduction>>>
The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever.
Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation.
In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics.
We make the following contributions:
We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3).
We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16).
We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11).
<<</Introduction>>>
<<<Related Work>>>
A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1).
A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. lines) being perceived as contiguous blobs.
The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38.
<<</Related Work>>>
<<<Indiscapes: The Indic manuscript dataset>>>
The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system.
Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations.
For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11).
<<<Annotation Challenges>>>
A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources.
Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components.
Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing.
Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting.
<<</Annotation Challenges>>>
<<<Annotation Tool>>>
Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future.
<<</Annotation Tool>>>
<<</Indiscapes: The Indic manuscript dataset>>>
<<<Indic Manuscript Layout Parsing>>>
To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19).
<<<Network Architecture>>>
The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12).
Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network.
Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI).
Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap.
<<</Network Architecture>>>
<<<Implementation Details>>>
The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \times 28$ in order to match output dimensions of the mask sub-network.
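As a small illustration of the input preparation, the sketch below zero-pads an already resized image onto the 1024 x 1024 canvas; the actual width-limited resizing would be done with an image library and is omitted here, so this is only a partial, assumed view of the preprocessing.

```python
import numpy as np

def pad_to_square(img, size=1024):
    """Zero-pad an image of shape (H, W) or (H, W, C), with H, W <= size, to size x size."""
    h, w = img.shape[:2]
    canvas = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)
    canvas[:h, :w] = img
    return canvas

print(pad_to_square(np.ones((600, 900, 3), dtype=np.uint8)).shape)  # (1024, 1024, 3)
```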
<<<Training>>>
The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\mathcal {L}_{RPN}$ for RPN classification network. Within the task branches, we use categorical cross entropy loss $\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\mathcal {L} = \lambda _{RPN} \mathcal {L}_{RPN} + \lambda _{r} \mathcal {L}_{r} + \lambda _{bb} \mathcal {L}_{bb} + \lambda _{mask} \mathcal {L}_{mask}$. The weighting factors ($\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$.
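A minimal sketch of the weighted multi-task loss and the optimizer settings described above follows; the four loss terms are dummies standing in for the RPN and branch losses, so only the weighting, gradient clipping and SGD configuration are meant to be illustrative, not the actual training loop.

```python
import torch

lambdas = {"rpn": 1.0, "cls": 1.0, "bbox": 1.0, "mask": 2.0}   # mask loss is prioritised

def total_loss(l_rpn, l_cls, l_bbox, l_mask):
    return (lambdas["rpn"] * l_rpn + lambdas["cls"] * l_cls +
            lambdas["bbox"] * l_bbox + lambdas["mask"] * l_mask)

w = torch.nn.Parameter(torch.randn(4))                  # stand-in for network weights
opt = torch.optim.SGD([w], lr=1e-3, momentum=0.9, weight_decay=1e-3)

loss = total_loss(w[0] ** 2, w[1] ** 2, w[2] ** 2, w[3] ** 2)   # dummy per-task losses
opt.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_([w], max_norm=0.5)       # gradient norm clipping at 0.5
opt.step()
```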
<<</Training>>>
<<<Inference>>>
During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks.
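The mask post-processing can be sketched as follows: binarise the predicted soft masks at 0.4 and suppress any mask whose IoU with an already kept, higher-scoring mask exceeds 0.5. Rescaling to the original image size is omitted, and the demo inputs are random arrays rather than real network outputs.

```python
import numpy as np

def mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def postprocess(soft_masks, scores, bin_thr=0.4, nms_thr=0.5):
    masks = [m >= bin_thr for m in soft_masks]   # binarise each predicted mask
    keep = []
    for i in np.argsort(scores)[::-1]:           # visit masks in decreasing score order
        if all(mask_iou(masks[i], masks[j]) <= nms_thr for j in keep):
            keep.append(i)
    return [masks[i] for i in keep]

rng = np.random.default_rng(0)
final = postprocess([rng.random((28, 28)) for _ in range(5)], scores=rng.random(5))
print(len(final))
```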
<<</Inference>>>
<<</Implementation Details>>>
<<<Evaluation>>>
For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$.
The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $d$. Suppose there are $r_i$ regions of class $i$ and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \leqslant r \leqslant r_i$. The per-class IoU score for class $i$ and document $d$ is computed as ${cwIoU}^d_i = \frac{\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in ground-truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \frac{\sum _d {cwIoU}^d_i}{N_i}$. In a similar manner, we define class-wise pixel accuracy ${pwAcc}^d_i$ at document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \frac{\sum _d {pwAcc}^d_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and, in this sense, differs from existing approaches BIBREF24.
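A short sketch of the class-wise averaging defined above: region IoUs are first averaged within each document, and the per-document scores are then averaged over the documents that contain the class. The toy numbers below are purely illustrative.

```python
from collections import defaultdict

def class_wise_iou(per_doc_region_ious):
    """per_doc_region_ious: {doc_id: {class_name: [IoU of each region of that class]}}"""
    per_class_doc_scores = defaultdict(list)
    for doc in per_doc_region_ious.values():
        for cls, ious in doc.items():
            if ious:                                          # document contains the class
                per_class_doc_scores[cls].append(sum(ious) / len(ious))
    return {cls: sum(s) / len(s) for cls, s in per_class_doc_scores.items()}

docs = {"d1": {"line": [0.8, 0.6], "hole": [0.9]},
        "d2": {"line": [0.7]}}
print(class_wise_iou(docs))   # approximately {'line': 0.7, 'hole': 0.9}
```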
<<</Evaluation>>>
<<</Indic Manuscript Layout Parsing>>>
<<<Results>>>
We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IOUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages.
From the quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best, while Physical degradations are difficult to parse due to their relatively small footprint and inconsistent patterns. The results show that performance for Penn in Hand (PIH) documents is better than for Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in the latter is the cause. In our approach, two (or more) objects may share the same bounding box in terms of overlap and it is not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem since our model achieves good performance even for very dense Bhoomi document line layouts.
<<</Results>>>
<<<Conclusion>>>
Via this paper, we propose Indiscapes, the first dataset with layout annotations for historical Indic manuscripts. We believe that the availability of layout annotations will play a crucial role in reducing the overall complexity for OCR and other tasks such as word-spotting, style-and-content based retrieval. In the long-term, we intend to expand the dataset, not only numerically but also in terms of layout, script and language diversity. As a significant contribution, we have also adapted a deep-network based instance segmentation framework custom modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible, and could be reused for similar manuscripts from Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep-network could be provided to annotators for correction, thus reducing annotation efforts. Finally, we plan to have our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nIndiscapes: The Indic manuscript dataset\nAnnotation Challenges\nAnnotation Tool\nIndic Manuscript Layout Parsing\nNetwork Architecture\nImplementation Details\nTraining\nInference\nEvaluation\nResults\nConclusion"
],
"type": "outline"
}
|
1911.01188
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Analysing Coreference in Transformer Outputs
<<<Abstract>>>
We analyse coreference phenomena in three neural machine translation systems trained with different data settings with or without access to explicit intra- and cross-sentential anaphoric information. We compare system performance on two different genres: news and TED talks. To do this, we manually annotate (the possibly incorrect) coreference chains in the MT outputs and evaluate the coreference chain translations. We define an error typology that aims to go further than pronoun translation adequacy and includes types such as incorrect word selection or missing words. The features of coreference chains in automatic translations are also compared to those of the source texts and human translations. The analysis shows stronger potential translationese effects in machine translated outputs than in human translations.
<<</Abstract>>>
<<<Introduction>>>
In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations.
Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors.
In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include the total number of chains, the average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and chain members in MT output are annotated as to whether or not they are correct. The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but nevertheless have an influence on the correctness of coreference chains.
The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally we summarise and draw conclusions in Section SECREF5.
<<</Introduction>>>
<<<Background and Related Work>>>
<<<Coreference>>>
Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences) contributing to text connectedness. An addressee is following the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed.
<<</Coreference>>>
<<<Translation studies>>>
Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there are a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21, in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains in their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though without paying attention to chain features (e.g. their number or size).
Chain features are considered in a contrastive analysis by BIBREF6. Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains.
Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations.
<<</Translation studies>>>
<<<Coreference in MT>>>
As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29.
But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods were used to post-process the translations, some even including a second decoding pass BIBREF32, BIBREF33, BIBREF34, BIBREF35.
Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually the context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea had been implemented before by BIBREF36, who concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow the use of a wider context. Finally, our work is also related to BIBREF40 and BIBREF41, whose oracle translations are similar to the data-based approach we introduce in Section SECREF4.
<<</Coreference in MT>>>
<<</Background and Related Work>>>
<<<Systems, Methods and Resources>>>
<<<State-of-the-art NMT>>>
Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration.
We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). The third system includes a modification that takes into account full coreference chains throughout a document by augmenting the sentence to be translated with this information; it is trained on the same number of sentence pairs as S1. A variant of the S3 system participated in the news machine translation shared task held at WMT 2019 BIBREF43.
<<<S1>>>
is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus.
<<</S1>>>
<<<S2>>>
uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus, but it is diluted by a factor of 4 to give more importance to high-quality translations.
<<</S2>>>
<<<S3>>>
S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows.
Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans:
We enrich pronominal mentions, with the exception of "I", with the head (main noun phrase) of the chain. The head is cleaned by removing articles and Saxon genitives, and we only consider heads with fewer than 4 tokens in order to avoid enriching a word with a full sentence.
We enrich nominal mentions, including proper names, with the gender of the head.
The head itself is enriched with she/he/it/they depending on its gender and animacy.
The enrichment is done with the addition of tags as shown in the examples:
I never cook with <b_crf> salt <e_crf> it.
<b_crf> she <e_crf> Biles arrived late.
In the first example, heuristic 1 is used: salt is the head of the chain and is prepended to the pronoun it. The second example shows a sentence where heuristic 2 has been applied, so the proper name Biles now carries information about the gender of the person it refers to.
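For concreteness, the sketch below shows how such tags could be inserted. It assumes the coreference information has already been extracted and is available as plain Python dictionaries; the field names and the cleaning rules are our simplifications of the heuristics above, not the DocTrans implementation itself.

```python
# Simplified sketch of the three enrichment heuristics; the mention format and the
# cleaning rules are assumptions, not the actual DocTrans code.
B, E = "<b_crf>", "<e_crf>"
GENDER_PRONOUN = {"male": "he", "female": "she", "neutral": "it"}  # number/animacy handling omitted

def clean_head(head_tokens):
    """Drop articles and Saxon genitives; keep only heads shorter than 4 tokens."""
    tokens = [t for t in head_tokens if t.lower() not in {"a", "an", "the", "'s"}]
    return tokens if 0 < len(tokens) < 4 else None

def enrich(tokens, mentions):
    """mentions: dicts with 'start' (token index), 'kind' ('pronoun' or 'nominal'),
    'head_tokens' and 'gender'; an assumed, simplified representation."""
    out = []
    for i, tok in enumerate(tokens):
        m = next((m for m in mentions if m["start"] == i), None)
        if m and m["kind"] == "pronoun" and tok != "I":          # heuristic 1
            head = clean_head(m["head_tokens"])
            if head:
                out += [B] + head + [E]
        elif m and m["kind"] == "nominal":                       # heuristics 2 and 3
            out += [B, GENDER_PRONOUN.get(m["gender"], "it"), E]
        out.append(tok)
    return " ".join(out)

print(enrich(["I", "never", "cook", "with", "it", "."],
             [{"start": 4, "kind": "pronoun", "head_tokens": ["salt"], "gender": "neutral"}]))
# -> I never cook with <b_crf> salt <e_crf> it .
```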
Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised and truecased with Moses scripts and segmented with subword-nmt BPE, using separate vocabularies of 50k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way.
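A minimal sketch of the BPE step using the Python API of subword-nmt is shown below; the file names are placeholders, and the exact options used for the systems are not specified in the paper.

```python
# Learn and apply a 50k BPE model for one language side with subword-nmt.
# File names are placeholders; Moses tokenisation and truecasing are assumed to be done.
import codecs
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

with codecs.open("train.tok.tc.de", encoding="utf-8") as infile, \
     codecs.open("bpe.codes.de", "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, num_symbols=50000)     # separate vocabulary per language

with codecs.open("bpe.codes.de", encoding="utf-8") as codes:
    bpe = BPE(codes)

print(bpe.process_line("Die Großeltern des Kindes identifizierten ihn ."))
```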
<<</S3>>>
<<</State-of-the-art NMT>>>
<<<Test data under analysis>>>
As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20.
Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type also have more potential tokens to enter into a coreference relation, potentially indicating a shining through effect. The same does not happen with the news test set.
<<</Test data under analysis>>>
<<<Manual annotation process>>>
The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull.
In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference.
The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor.
<<</Manual annotation process>>>
<<</Systems, Methods and Resources>>>
<<<Results and Analyses>>>
<<<Chain features>>>
First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations.
Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain.
<<</Chain features>>>
<<<MT quality at system level>>>
We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance.
The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves over the baseline S1 by +1.1 BLEU points for TED$_{coref}$, but loses 0.2 BLEU points for news$_{coref}$.
However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$, this is not the case for TED$_{coref}$.
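The paper does not state which implementation was used to compute these scores; as an illustration, corpus-level BLEU for the full test set and for the coref subset could be obtained along the following lines (the sentence lists and subset indices are placeholders).

```python
# Hedged sketch: corpus-level BLEU with sacrebleu for the full set and the coref subset.
import sacrebleu

hyps = ["Die Polizei sagt , ein 6 Jahre alter Junge sei in Philadelphia erschossen worden ."]
refs = ["Die Polizei sagt , ein 6 Jahre alter Junge wurde in Philadelphia angeschossen ."]

print(sacrebleu.corpus_bleu(hyps, [refs]).score)         # BLEU over all sentences

coref_idx = [0]                                          # indices of coref-augmented sentences
print(sacrebleu.corpus_bleu([hyps[i] for i in coref_idx],
                            [[refs[i] for i in coref_idx]]).score)
```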
<<</MT quality at system level>>>
<<<Error analysis>>>
The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres and does not get solved by the addition of more data. Additionally, news texts struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3, compared to around 40% for TED talks.
<<<Predefined error categories>>>
Within our predefined closed categories (gender, number, case and ambiguous), gender errors are among the most frequent. They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi in example SECREF28, where a masculine form of the definite article is used instead of a feminine one.
.src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour.
S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen.
. src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception.
S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen.
The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin).
We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”).
. src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party.
ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente.
S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente.
Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge).
. src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown].
S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown].
Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es.
. src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars...
S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird...
Ambiguous cases often contain a combination of errors, or they are difficult to categorise due to the ambiguity of the source pronouns. The pronoun it in example SECREF28, which may refer either to the noun trouble or even to the whole clause Democracy is in trouble, is translated with the pronoun sie (feminine). Under the first reading, the pronoun would be correct, but the following verb should then be in the plural. Under the singular reading, a demonstrative pronoun dies (or possibly the personal pronoun es) would be needed.
. src: Democracy is in trouble... and [it] comes in part from a deep dilemma...
S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her...
<<</Predefined error categories>>>
<<<Additional error types>>>
At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain.
. src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied...
S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]...
Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either.
. src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale].
ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können.
S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können.
<<</Additional error types>>>
<<<Types of erroneous mentions>>>
Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks.
It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news.
<<</Types of erroneous mentions>>>
<<</Error analysis>>>
<<</Results and Analyses>>>
<<<Summary and Conclusions>>>
We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number.
System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks.
We also characterised the originals and translations according to coreference features such as the total number of chains and mentions, the average chain length and the size of the longest chain. We see that NMT translations increase the number of mentions by about $30\%$ with respect to the human references, showing an even more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and an analysis of the purpose of the additional mentions added by the NMT systems. It would also be interesting to evaluate the quality of the automatically computed coreference chains used for S3.
<<</Summary and Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground and Related Work\nCoreference\nTranslation studies\nCoreference in MT\nSystems, Methods and Resources\nState-of-the-art NMT\nS1\nS2\nS3\nTest data under analysis\nManual annotation process\nResults and Analyses\nChain features\nMT quality at system level\nError analysis\nPredefined error categories\nAdditional error types\nTypes of erroneous mentions\nSummary and Conclusions"
],
"type": "outline"
}
|
1910.06701
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
NumNet: Machine Reading Comprehension with Numerical Reasoning
<<<Abstract>>>
Numerical reasoning, such as addition, subtraction, sorting and counting is a critical skill in human's reading comprehension, which has not been well considered in existing machine reading comprehension (MRC) systems. To address this issue, we propose a numerical MRC model named as NumNet, which utilizes a numerically-aware graph neural network to consider the comparing information and performs numerical reasoning over numbers in the question and passage. Our system achieves an EM-score of 64.56% on the DROP dataset, outperforming all existing machine reading comprehension models by considering the numerical relations among numbers.
<<</Abstract>>>
<<<Introduction>>>
Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document.
However, most of existing MRC models are still weak in numerical reasoning such as addition, subtraction, sorting and counting BIBREF6, which are naturally required when reading financial news, scientific articles, etc. BIBREF6 proposed a numerically-aware QANet (NAQANet) model, which divides the answer generation for numerical MRC into three types: (1) extracting spans; (2) counting; (3) addition or subtraction over numbers. NAQANet makes a pioneering attempt to answer numerical questions but still does not explicitly consider numerical reasoning.
To tackle this problem, we introduce a novel model NumNet that integrates numerical reasoning into existing MRC models. A key problem to answer questions requiring numerical reasoning is how to perform numerical comparison in MRC systems, which is crucial for two common types of questions:
(1) Numerical Comparison: The answers of the questions can be directly obtained via performing numerical comparison, such as sorting and comparison, in the documents. For example, in Table TABREF1, for the first question, if the MRC system knows the fact that “$49>47>36>31>22$”, it could easily extract that the second longest field goal is 47-yard.
(2) Numerical Condition: The answers of the questions cannot be directly obtained through simple numerical comparison in the documents, but often require numerical comparison for understanding the text. For example, for the second question in Table TABREF1, an MRC system needs to know which age group made up more than 7% of the population to count the group number.
Hence, our NumNet model considers numerical comparing information among numbers when answering numerical questions. As shown in Figure FIGREF3, NumNet first encodes both the question and passages through an encoding module consisting of convolution layers, self-attention layers and feed-forward layers as well as a passage-question attention layer. After that, we feed the question and passage representations into a numerically-aware graph neural network (NumGNN) to further integrate the comparison information among numbers into their representations. Finally, we utilize the numerically-aware representation of passages to infer the answer to the question.
The experimental results on a public numerical MRC dataset DROP BIBREF6 show that our NumNet model achieves significant and consistent improvement as compared to all baseline methods by explicitly performing numerical reasoning over numbers in the question and passage. In particular, we show that our model could effectively deal with questions requiring sorting with multi-layer NumGNN. The source code of our paper is available at https://github.com/ranqiu92/NumNet.
<<</Introduction>>>
<<<Related Work>>>
<<<Machine Reading Comprehension>>>
Machine reading comprehension (MRC) has become an important research area in NLP. In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail BIBREF7, SQuAD BIBREF4, RACE BIBREF5, TriviaQA BIBREF8 and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader BIBREF9, BiDAF BIBREF3, Interactive AoA Reader BIBREF2, Gated Attention Reader BIBREF1, R-Net BIBREF10, DCN BIBREF11, QANet BIBREF12, and achieve promising results in most existing public MRC datasets.
Despite the success of neural network-based MRC models, researchers began to analyze the data and rethink to what extent we have solved the problem of MRC. Some works BIBREF0, BIBREF13, BIBREF14 classify the reasoning skills required to answer the questions into the following types: (1) Exact matching/Paraphrasing; (2) Summary; (3) Logic reasoning; (4) Utilizing external knowledge; (5) Numerical reasoning. They found that most existing MRC models are focusing on dealing with the first three types of questions. However, all these models suffer from problems when answering the questions requiring numerical reasoning. To the best of our knowledge, our work is the first one that explicitly incorporates numerical reasoning into the MRC system. The most relevant work to ours is NAQANet BIBREF6, which adapts the output layer of QANet BIBREF12 to support predicting answers based on counting and addition/subtraction over numbers. However, it does not consider numerical reasoning explicitly during encoding or inference.
<<</Machine Reading Comprehension>>>
<<<Arithmetic Word Problem Solving>>>
Recently, understanding and solving arithmetic word problems (AWP) has attracted the growing interest of NLP researchers. BIBREF15 proposed a simple method to address arithmetic word problems, but mostly focusing on subsets of problems which only require addition and subtraction. After that, BIBREF16 proposed an algorithmic approach which could handle arithmetic word problems with multiple steps and operations. BIBREF17 further formalized the AWP problem as that of generating and scoring equation trees via integer linear programming. BIBREF18 and BIBREF19 proposed sequence to sequence solvers for the AWP problems, which are capable of generating unseen expressions and do not rely on sophisticated manual features. BIBREF20 leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency. However, all the existing AWP systems are only trained and validated on small benchmark datasets. BIBREF21 found that the performance of these AWP systems sharply degrades on larger datasets. Moreover, from the perspective of NLP, MRC problems are more challenging than AWP since the passages in MRC are mostly real-world texts which require more complex skills to be understood. Above all, it is nontrivial to adapt most existing AWP models to the MRC scenario. Therefore, we focus on enhancing MRC models with numerical reasoning abilities in this work.
<<</Arithmetic Word Problem Solving>>>
<<</Related Work>>>
<<<Methodology>>>
In this section, we will introduce the framework of our model NumNet and provide the details of the proposed numerically-aware graph neural network (NumGNN) for numerical reasoning.
<<<Framework>>>
An overview of our model NumNet is shown in Figure FIGREF3. Our model is composed of an encoding module, a reasoning module and a prediction module. Our major contribution is the reasoning module, which leverages a NumGNN between the encoding module and prediction module to explicitly consider the numerical comparison information and perform numerical reasoning. As NAQANet has been shown effective for handling the numerical MRC problem BIBREF6, we leverage it as our base model and mainly focus on the design and integration of the NumGNN in this work.
<<<Encoding Module>>>
Without loss of generality, we use the encoding components of QANet and NAQANet to encode the question and passage into vector-space representations. Formally, the question $Q$ and passage $P$ are first encoded as:
and then the passage-aware question representation and the question-aware passage representation are computed as:
where $\texttt {QANet-Emb-Enc}(\cdot )$ and $\texttt {QANet-Att}(\cdot )$ denote the “stacked embedding encoder layer” and “context-query attention layer” of QANet respectively. The former consists of convolution, self-attention and feed-forward layers. The latter is a passage-question attention layer. $\bar{\mathbf {Q}}$ and $\bar{\mathbf {P}}$ are used by the following components.
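The displayed equations at these two points did not survive text extraction. Based on the definitions in the surrounding sentence, they presumably take a form along the following lines; the exact notation, and in particular the argument order of QANet-Att, is our reconstruction rather than a quotation of the paper.

```latex
% Reconstructed from context; notation and argument order are assumptions.
\begin{align}
\mathbf{Q} &= \texttt{QANet-Emb-Enc}(Q), &
\mathbf{P} &= \texttt{QANet-Emb-Enc}(P), \\
\bar{\mathbf{Q}} &= \texttt{QANet-Att}(\mathbf{P}, \mathbf{Q}), &
\bar{\mathbf{P}} &= \texttt{QANet-Att}(\mathbf{Q}, \mathbf{P}).
\end{align}
```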
<<</Encoding Module>>>
<<<Reasoning Module>>>
First we build a heterogeneous directed graph $\mathcal {G}=(\mathbf {V};\mathbf {E})$, whose nodes ($\mathbf {V}$) are corresponding to the numbers in the question and passage, and edges ($\mathbf {E}$) are used to encode numerical relationships among the numbers. The details will be explained in Sec. SECREF19.
Then we perform reasoning on the graph based on a graph neural network, which can be formally denoted as:
where $\mathbf {W}^M$ is a shared weight matrix, $\mathbf {U}$ is the representations of the nodes corresponding to the numbers, $\texttt {QANet-Mod-Enc}(\cdot )$ is the “model encoder layer” defined in QANet which is similar to $\texttt {QANet-Emb-Enc}(\cdot )$, and the definition of $\texttt {Reasoning}(\cdot )$ will be given in Sec. SECREF23.
Finally, as $\mathbf {U}$ only contains the representations of numbers, to tackle span-style answers containing non-numerical words, we concatenate $\mathbf {U}$ with $\mathbf {M}^P$ to produce numerically-aware passage representation $\mathbf {M}_0$. Formally,
where $[\cdot ;\cdot ]$ denotes matrix concatenation, $\mathbf {W}[k]$ denotes the $k$-th column of a matrix $\mathbf {W}$, $\mathbf {0}$ is a zero vector, $I(i)$ denotes the node index corresponding to the passage word $w_i^p$ which is a number, $\mathbf {W}_0$ is a weight matrix, and $\mathbf {b}_0$ is a bias vector.
<<</Reasoning Module>>>
<<<Prediction Module>>>
Following NAQANet BIBREF6, we divide the answers into four types and use a unique output layer to calculate the conditional answer probability $\Pr (\text{answer}|\text{type})$ for each type:
Passage span: The answer is a span of the passage, and the answer probability is defined as the product of the probabilities of the start and end positions.
Question span: The answer is a span of the question, and the answer probability is also defined as the product of the probabilities of the start and end positions.
Count: The answer is obtained by counting, and it is treated as a multi-class classification problem over ten numbers (0-9), which covers most of the Count type answers in the DROP dataset.
Arithmetic expression: The answer is the result of an arithmetic expression. The expression is obtained in three steps: (1) extract all numbers from the passage; (2) assign a sign (plus, minus or zero) to each number; (3) sum the signed numbers.
Meanwhile, an extra output layer is also used to predict the probability $\Pr (\text{type})$ of each answer type. At training time, the final answer probability is defined as the joint probability over all feasible answer types, i.e., $\sum _{\text{type}}\Pr (\text{type})\Pr (\text{answer}|\text{type})$. Here, the answer type annotation is not required and the probability $\Pr (\text{type})$ is learnt by the model. At test time, the model first selects the most probable answer type greedily and then predicts the best answer accordingly.
Without loss of generality, we leverage the definition of the five output layers in BIBREF6, with $\mathbf {M_0}$ and $\mathbf {Q}$ as inputs. Please refer to the paper for more details due to space limitation.
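As an illustration of the arithmetic-expression answer type described above, the sketch below turns per-number sign predictions into a final answer; the variable names and the example scores are ours, not the model's actual outputs.

```python
# Minimal sketch: derive an arithmetic-expression answer from per-number sign scores.
numbers = [7.25, 15.0, 100.0]              # numbers extracted from the passage (step 1)
sign_logits = [[0.1, 2.3, -1.0],           # scores for (zero, plus, minus) per number
               [1.9, 0.2, -0.5],
               [-0.3, 0.1, 1.4]]

SIGN_VALUES = (0, 1, -1)                   # class index -> sign (step 2)
signs = [SIGN_VALUES[max(range(3), key=lambda k: row[k])] for row in sign_logits]
answer = sum(s * n for s, n in zip(signs, numbers))   # step 3: sum the signed numbers
print(signs, answer)                       # [1, 0, -1] -92.75
```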
<<</Prediction Module>>>
<<<Comparison with NAQANet>>>
The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., $\mathbf {M}_0$ is simply set as $\mathbf {M}^P$. As a result, numbers are treated as common words in NAQANet except in the prediction module, thus NAQANet may struggle to learn the numerical relationships between numbers and may not generalize well to unseen numbers. However, as discussed in Sec. SECREF1, numerical comparison is essential for answering questions requiring numerical reasoning. In our model, the numerical relationships are explicitly represented with the topology of the graph and a NumGNN is used to perform numerical reasoning. Therefore, our NumNet model can handle questions requiring numerical reasoning more effectively, which is verified by the experiments in Sec. SECREF4.
<<</Comparison with NAQANet>>>
<<</Framework>>>
<<<Numerically-aware Graph Construction>>>
We regard all numbers from the question and passage as nodes in the graph for reasoning. The sets of nodes corresponding to the numbers occurring in the question and passage are denoted as $\mathbf {V}^Q$ and $\mathbf {V}^P$ respectively. We denote all the nodes as $\mathbf {V}=\mathbf {V}^Q\cup \mathbf {V}^P$, and the number corresponding to a node $v\in \mathbf {V}$ as $n(v)$.
Two sets of edges are considered in this work:
Greater Relation Edge ($\overrightarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overrightarrow{e}_{ij}=(v_i, v_j)$ pointing from $v_i$ to $v_j$ will be added to the graph if $n(v_i)>n(v_j)$, which is denoted as solid arrow in Figure FIGREF3.
Lower or Equal Relation Edge ($\overleftarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overleftarrow{e}_{ij}=(v_j, v_i)$ will be added to the graph if $n(v_i)\le n(v_j)$, which is denoted as dashed arrow in Figure FIGREF3.
Theoretically, $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ are complementary to each other. However, as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity. Therefore, it is more reasonable to use both $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ in order to encode the equality information among nodes.
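A small sketch of this graph construction, following the two edge definitions above literally, is given below; the node and relation encodings are our own choice.

```python
# Sketch of the numerically-aware graph: one node per number occurrence, and for every
# ordered node pair either a ">" edge v_i -> v_j (if n(v_i) > n(v_j)) or a "<=" edge
# v_j -> v_i (if n(v_i) <= n(v_j)). Node origins ("q"/"p") are kept so that the
# 2 x 4 relation types used later can be derived.
def build_numeric_graph(question_numbers, passage_numbers):
    nodes = [(n, "q") for n in question_numbers] + [(n, "p") for n in passage_numbers]
    edges = []
    for i, (ni, oi) in enumerate(nodes):
        for j, (nj, oj) in enumerate(nodes):
            if i == j:
                continue
            if ni > nj:
                edges.append((i, j, f">:{oi}-{oj}"))    # greater-relation edge
            else:
                edges.append((j, i, f"<=:{oj}-{oi}"))   # lower-or-equal relation edge
    return nodes, edges

nodes, edges = build_numeric_graph([7.0], [49.0, 47.0, 36.0])
print(len(nodes), len(edges))                           # 4 nodes, 12 typed directed edges
```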
<<</Numerically-aware Graph Construction>>>
<<<Numerical Reasoning>>>
Having built the graph $\mathcal {G}=(\mathbf {V},\mathbf {E})$, we leverage the NumGNN to perform reasoning, which corresponds to the function $\texttt {Reasoning}(\cdot )$ in Eq. DISPLAY_FORM10. The reasoning process is as follows:
<<<Initialization>>>
For each node $v^P_i\in \mathbf {V}^P$, its representation is initialized as the corresponding column vector of $\mathbf {M}^P$. Formally, the initial representation is $\mathbf {v}_i^P=\mathbf {M}^P[I^P(v_i^P)]$, where $I^P(v^P_i)$ denotes the word index corresponding to $v_i^P$. Similarly, the initial representation $\mathbf {v}_j^Q$ for a node $v^Q_j\in \mathbf {V}^Q$ is set as the corresponding column vector of $\mathbf {M}^Q$. We denote all the initial node representations as $\mathbf {v}^0=\lbrace \mathbf {v}_i^P\rbrace \cup \lbrace \mathbf {v}_j^Q\rbrace $.
<<</Initialization>>>
<<<One-step Reasoning>>>
Given the graph $\mathcal {G}$ and the node representations $\mathbf {v}$, we use a GNN to perform reasoning in three steps:
(1) Node Relatedness Measure: As only a few numbers are relevant for answering a question generally, we compute a weight for each node to by-pass irrelevant numbers in reasoning. Formally, the weight for node $v_i$ is computed as:
where $\mathbf {W}_v$ is a weight matrix, and $b_v$ is a bias.
(2) Message Propagation: As the role a number plays in reasoning is not only decided by itself, but also related to the context, we propagate messages from each node to its neighbors to help to perform reasoning. As numbers in question and passage may play different roles in reasoning and edges corresponding to different numerical relations should be distinguished, we use relation-specific transform matrices in the message propagation. Formally, we define the following propagation function for calculating the forward-pass update of a node:
where $\widetilde{\mathbf {v}}^{\prime }_i$ is the message representation of node $v_i$, $\texttt {r}_{ji}$ is the relation assigned to edge $e_{ji}$, $\mathbf {W}^{\texttt {r}_{ji}}$ are relation-specific transform matrices, and $\mathcal {N}_i=\lbrace j|(v_j,v_i)\in \mathbf {E}\rbrace $ is the neighbors of node $v_i$.
For each edge $e_{ji}$, $\texttt {r}_{ji}$ is determined by the following two attributes:
Number relation: $>$ or $\le $;
Node types: the two nodes of the edge corresponding to two numbers that: (1) both from the question ($\text{q-q}$); (2) both from the passage ($\text{p-p}$); (3) from the question and the passage respectively ($\text{q-p}$); (4) from the passage and the question respectively ($\text{p-q}$).
Formally, $\texttt {r}_{ij}\in \lbrace >,\le \rbrace \times \lbrace \text{q-q},\text{p-p},\text{q-p},\text{p-q}\rbrace $.
(3) Node Representation Update: As the message representation obtained in the previous step only contains information from the neighbors, it needs to be fused with the node representation to combine with the information carried by the node itself, which is performed as:
where $\mathbf {W}_f$ is a weight matrix, and $\mathbf {b}_f$ is a bias vector.
We denote the entire one-step reasoning process (Eq. DISPLAY_FORM26-DISPLAY_FORM30) as a single function
As the graph $\mathcal {G}$ constructed in Sec. SECREF19 has encoded the numerical relations via its topology, the reasoning process is numerically-aware.
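The displayed equations for this step are not reproduced here; the sketch below is a generic relation-specific message-passing step in the spirit of the three-stage description above (relatedness weighting, relation-specific propagation, fusion), with randomly initialised stand-ins for the learned parameters, and should not be read as the paper's exact formulation. Running it $K$ times gives the multi-step reasoning described next.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_step_reasoning(v, edges, W_v, W_rel, W_f):
    alpha = sigmoid(v @ W_v)                           # (1) node relatedness weights
    msg = np.zeros_like(v)
    indeg = np.full(len(v), 1e-6)
    for src, dst, rel in edges:                        # (2) relation-specific propagation
        msg[dst] += alpha[src] * (W_rel[rel] @ v[src])
        indeg[dst] += 1.0
    msg /= indeg[:, None]
    fused = np.concatenate([v, msg], axis=-1)          # (3) fuse message with node state
    return np.maximum(0.0, fused @ W_f)                # ReLU update

rng = np.random.default_rng(0)
dim = 8
# three number nodes and a few typed, directed edges (illustrative values)
edges = [(0, 1, ">:p-p"), (1, 0, "<=:p-p"), (0, 2, ">:p-q"), (2, 0, "<=:q-p")]
W_v = rng.normal(size=dim)
W_rel = {r: 0.1 * rng.normal(size=(dim, dim)) for r in {e[2] for e in edges}}
W_f = 0.1 * rng.normal(size=(2 * dim, dim))
v = rng.normal(size=(3, dim))
for _ in range(3):                                     # K reasoning steps
    v = one_step_reasoning(v, edges, W_v, W_rel, W_f)
print(v.shape)                                         # (3, 8)
```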
<<</One-step Reasoning>>>
<<<Multi-step Reasoning>>>
By single-step reasoning, we can only infer relations between adjacent nodes. However, relations between multiple nodes may be required for certain tasks, e.g., sorting. Therefore, it is essential to perform multi-step reasoning, which can be done as follows:
where $t\ge 1$. Suppose we perform $K$ steps of reasoning, $\mathbf {v}^K$ is used as $\mathbf {U}$ in Eq. DISPLAY_FORM10.
<<</Multi-step Reasoning>>>
<<</Numerical Reasoning>>>
<<</Methodology>>>
<<<Experiments>>>
<<<Dataset and Evaluation Metrics>>>
We evaluate our proposed model on DROP dataset BIBREF6, which is a public numerical MRC dataset. The DROP dataset is constructed by crowd-sourcing, which asks the annotators to generate question-answer pairs according to the given Wikipedia passages, which require numerical reasoning such as addition, counting, or sorting over numbers in the passages. There are $77,409$ training samples, $9,536$ development samples and $9,622$ testing samples in the dataset.
In this paper, we adopt two metrics, Exact Match (EM) and numerically-focused F1, to evaluate our model, following BIBREF6. The numerically-focused F1 is set to 0 when the predicted answer does not match the golden answer for questions whose golden answer is numeric.
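A simplified sketch of this metric is given below: standard bag-of-words F1, overridden to 0 when a numeric golden answer is not matched. The official DROP evaluation script additionally handles multi-span answers, dates and answer normalisation, which are omitted here.

```python
# Simplified sketch of the numerically-focused F1 (single-span answers only).
def _as_number(s):
    try:
        return float(s.replace(",", ""))
    except ValueError:
        return None

def bag_f1(pred_tokens, gold_tokens):
    if not pred_tokens or not gold_tokens:
        return 0.0
    common = sum(min(pred_tokens.count(t), gold_tokens.count(t)) for t in set(gold_tokens))
    if common == 0:
        return 0.0
    p, r = common / len(pred_tokens), common / len(gold_tokens)
    return 2 * p * r / (p + r)

def numerically_focused_f1(pred, gold):
    gold_num = _as_number(gold)
    if gold_num is not None and _as_number(pred) != gold_num:
        return 0.0                                   # numeric gold answer, mismatched prediction
    return bag_f1(pred.lower().split(), gold.lower().split())

print(numerically_focused_f1("47", "47"))            # 1.0
print(numerically_focused_f1("forty seven", "47"))   # 0.0
```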
<<</Dataset and Evaluation Metrics>>>
<<<Baselines>>>
For comparison, we select several public models as baselines including semantic parsing models:
[topsep=2pt, itemsep=0pt]
Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations;
OpenIE BIBREF6, KDG with open information extraction based sentence representations;
SRL BIBREF6, KDG with semantic role labeling based sentence representations;
and traditional MRC models:
[topsep=2pt, itemsep=0pt]
BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage;
QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage;
BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently;
and numerical MRC models:
[topsep=2pt, itemsep=0pt]
NAQANet BIBREF6, a numerical version of QANet model.
NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix.
<<</Baselines>>>
<<<Experimental Settings>>>
In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. The dimensions of all the representations (e.g., $\mathbf {Q}$, $\mathbf {P}$, $\mathbf {M}^Q$, $\mathbf {M}^P$, $\mathbf {U}$, $\mathbf {M}_0^{\prime }$, $\mathbf {M}_0$ and $\mathbf {v}$) are set to 128. If not specified, the reasoning step $K$ is set to 3. Since other parameters have little effect on the results, we simply follow the settings used in BIBREF6.
We use the Adam optimizer BIBREF24 with $\beta _1=0.8$, $\beta _2=0.999$, $\epsilon =10^{-7}$ to minimize the objective function. The learning rate is $5 \times 10^{-4}$, L2 weight decay $\lambda $ is $10^{-7}$ and the maximum norm value of gradient clipping is 5. We also apply exponential moving average with a decay rate $0.9999$ on all trainable variables. The model is trained with a batch size of 16 for 40 epochs. Passages and questions are trimmed to 400 and 50 tokens respectively during training, and trimmed to $1,000$ and 100 tokens respectively during prediction .
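The optimisation setup can be reproduced roughly as follows; PyTorch is used here purely for illustration, and the placeholder model stands in for the actual NumNet network.

```python
# Sketch of the described optimisation setup: Adam(beta1=0.8, beta2=0.999, eps=1e-7),
# L2 weight decay 1e-7, gradient-norm clipping at 5, and an EMA of the parameters (0.9999).
import torch

model = torch.nn.Linear(128, 128)                  # placeholder for the actual NumNet model
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.8, 0.999),
                             eps=1e-7, weight_decay=1e-7)
ema = {n: p.detach().clone() for n, p in model.named_parameters()}

def train_step(loss):
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
    with torch.no_grad():                          # exponential moving average of all parameters
        for n, p in model.named_parameters():
            ema[n].mul_(0.9999).add_(p, alpha=1 - 0.9999)

x = torch.randn(16, 128)                           # batch size 16
train_step(model(x).pow(2).mean())
```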
<<</Experimental Settings>>>
<<<Overall Results>>>
The performance of our NumNet model and other baselines on DROP dataset are shown in Table TABREF47. From the results, we can observe that:
(1) Our NumNet model achieves better results on both the development and testing sets on DROP dataset as compared to semantic parsing-based models, traditional MRC models and even numerical MRC models NAQANet and NAQANet+. The reason is that our NumNet model can make full use of the numerical comparison information over numbers in both question and passage via the proposed NumGNN module.
(2) Our implemented NAQANet+ has a much better performance compared to the original version of NAQANet. It verifies the effectiveness of our proposed enhancements for baseline.
<<</Overall Results>>>
<<<Effect of GNN Structure>>>
In this part, we investigate the effect of different GNN structures on the DROP development set. The results are shown in Table TABREF51. “Comparison”, “Number” and “ALL” correspond to the comparing-question subset, the number-type answer subset, and the entire development set, respectively. If we replace the proposed numerically-aware graph (Sec. SECREF19) with a fully connected graph, our model falls back to a traditional GNN, denoted as “GNN” in the table. Moreover, “- question num” denotes that the numbers in the question are not included in the graph, and “- $\le $ type edge” and “- $>$ type edge” denote that edges of the $\le $ and $>$ types are not adopted, respectively.
As shown in Table TABREF51, our proposed NumGNN leads to statistically significant improvements compared to traditional GNN on both EM and F1 scores especially for comparing questions. It indicates that considering the comparing information over numbers could effectively help the numerical reasoning for comparing questions. Moreover, we find that the numbers in the question are often related to the numerical reasoning for answering the question, thus considering numbers in questions in NumGNN achieves better performance. And the results also justify that encoding “greater relation” and “lower or equal relation” simultaneously in the graph also benefits our model.
<<</Effect of GNN Structure>>>
<<<Effect of GNN Layer Number>>>
The number of NumGNN layers represents the numerical reasoning ability of our models: a $K$-layer version has the ability to perform $K$-step numerical inference. In this part, we additionally perform experiments to understand the effect of the number of NumGNN layers. From Figure FIGREF52, we can observe that:
(1) The 2-layer version of NumNet achieves the best performance for the comparing questions. From careful analysis, we find that most comparing questions only require at most 2-step reasoning (e.g., “Who was the second oldest player in the MLB, Clemens or Franco?”), and therefore the 3-layer version of NumNet is more complex but brings no gains for these questions.
(2) The performance of our NumNet model on the overall development set is improved consistently as the number of GNN layers increases. The reason is that some of the numerical questions require reasoning over many numbers in the passage, which could benefit from the multi-step reasoning ability of multi-layer GNN. However, further investigation shows that the performance gain is not stable when $K\ge 4$. We believe it is due to the intrinsic over smoothing problem of GNNs BIBREF25.
<<</Effect of GNN Layer Number>>>
<<<Case Study>>>
In Table TABREF53, we further give some examples to show why incorporating comparing information over numbers in the passage could help numerical reasoning in MRC. For the first case, we observe that NAQANet+ gives a wrong prediction, and we find that NAQANet+ gives the same prediction for the question “Which age group is smaller: under the age of 18 or 18 and 24?”. The reason is that NAQANet+ cannot determine which of $10.1\%$ and $56.2\%$ is larger. For the second case, NAQANet+ cannot recognize that the second longest field goal is the 22-yard one and also gives a wrong prediction. For both cases, our NumNet model gives the correct answer through numerical reasoning, which indicates the effectiveness of our NumNet model.
<<</Case Study>>>
<<<Error Analysis>>>
To investigate how well our NumNet model handles sorting/comparison questions and better understand the remaining challenges, we perform an error analysis on a random sample of NumNet predictions. We find that:
(1) Our NumNet model can answer about 76% of sorting/comparison questions correctly, which indicates that our NumNet model has achieved numerical reasoning ability to some extent.
(2) Among the incorrectly answered sorting/comparison questions, the most frequent ones (26%) are those whose golden answers are multiple nonadjacent spans (row 1 in Table TABREF54), and the second most frequent ones (19%) are those involving comparison with an intermediate number that does not literally occur in the document/question but has to be derived from counting or an arithmetic operation (row 1 in Table TABREF54).
<<</Error Analysis>>>
<<<Discussion>>>
By combining the numerically-aware graph and the NumGNN together, our NumNet model achieves the numerical reasoning ability. On one hand, the numerically-aware graph encodes numbers as nodes and relationships between them as the edges, which is required for numerical comparison. On the other hand, through one-step reasoning, our NumGNN could perform comparison and identify the numerical condition. After multiple-step reasoning, our NumGNN could further perform sorting.
However, since the numerically-aware graph is pre-defined, our NumNet is not applicable to the case where an intermediate number has to be derived (e.g., from arithmetic operation) in the reasoning process, which is a major limitation of our model.
<<</Discussion>>>
<<</Experiments>>>
<<<Conclusion and Future Work>>>
Numerical reasoning skills such as addition, subtraction, sorting and counting are naturally required by machine reading comprehension (MRC) problems in practice. Nevertheless, these skills are not explicitly taken into account by most existing MRC models. In this work, we propose a numerical MRC model named NumNet which performs explicit numerical reasoning while reading the passages. To be specific, NumNet encodes the numerical relations among numbers in the question and passage into the topology of a graph, and leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. Our NumNet model outperforms strong baselines by a large margin on the DROP dataset. In the future, we will explore the following directions: (1) As we use a pre-defined reasoning graph in our model, it is incapable of handling reasoning processes which involve intermediate numbers that are not present in the graph. How to incorporate dynamic graphs into our model is an interesting problem. (2) Compared with methods proposed for arithmetic word problems (AWPs), our model has better natural language understanding ability. However, the methods for AWPs can handle much richer arithmetic expressions. Therefore, how to combine both of their abilities to develop a more powerful numerical MRC model is an interesting future direction. (3) Symbolic reasoning plays a crucial role in human reading comprehension. Our work integrates numerical reasoning, which is a special case of symbolic reasoning, into traditional MRC systems. How to incorporate more sophisticated symbolic reasoning abilities into MRC systems is also a valuable future direction.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nMachine Reading Comprehension\nArithmetic Word Problem Solving\nMethodology\nFramework\nEncoding Module\nReasoning Module\nPrediction Module\nComparison with NAQANet\nNumerically-aware Graph Construction\nNumerical Reasoning\nInitialization\nOne-step Reasoning\nMulti-step Reasoning\nExperiments\nDataset and Evaluation Metrics\nBaselines\nExperimental Settings\nOverall Results\nEffect of GNN Structure\nEffect of GNN Layer Number\nCase Study\nError Analysis\nDiscussion\nConclusion and Future Work"
],
"type": "outline"
}
|
2001.10179
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device
<<<Abstract>>>
In recent years, NLP research has witnessed record-breaking accuracy improvements by DNN models. However, power consumption is one of the practical concerns for deploying NLP systems. Most of the current state-of-the-art algorithms are implemented on GPUs, which is not power-efficient, and the deployment cost is also very high. On the other hand, the CNN Domain Specific Accelerator (CNN-DSA) has been in mass production, providing low-power and low-cost computation. In this paper, we implement the Super Characters method on the CNN-DSA. In addition, we modify the Super Characters method to utilize the multi-modal data, i.e., text plus tabular data, in the CL-Aff shared task.
<<</Abstract>>>
<<<Introduction>>>
The need to classify sentiment based on multi-modal input arises in many different problems in customer-related marketing fields. Super Characters BIBREF0 is a two-step method for sentiment analysis. It first converts text into images; then it feeds the images into CNN models to classify the sentiment. Sentiment classification performance on large text contents from customer online comments shows that the Super Characters method is superior to other existing methods. The Super Characters method also shows that models pretrained on a larger dataset help improve accuracy when finetuning the CNN model on a smaller dataset. Compared with the from-scratch trained Super Characters model, the finetuned one improves the accuracy from 95.7% to 97.8% on the well-known Chinese dataset of Fudan Corpus. Squared English Word (SEW) BIBREF1 is an extension of the Super Characters method to Latin languages. With the wide availability of low-power CNN accelerator chips BIBREF2 BIBREF3, the Super Characters method has great potential for large-scale deployment thanks to its low power consumption and fast inference speed. In addition, it is easy to deploy as well. Recent work also extends its applications to chatbots BIBREF4, image captioning BIBREF5, and tabular data machine learning BIBREF6.
The CL-AFF Shared Task BIBREF7 is part of the Affective Content Analysis workshop at AAAI 2020. It builds upon the OffMyChest dataset BIBREF8, which contains 12,860 samples of training data and 5,000 samples of testing data. Each sample is a multi-modal input containing both text and tabular data. The text input is an English sentence from Reddit. The tabular data is the corresponding log information for each sentence, such as wordcount, created_utc time, etc. Each sample has six sets of binary classification labels: EmotionDisclosure?(Yes$|$No), InformationDisclosure?(Yes$|$No), Support?(Yes$|$No), EmmotionSupport?(Yes$|$No), InformationSupport?(Yes$|$No), and GeneralSupport?(Yes$|$No). In this paper, we apply Super Characters to this data set to classify the multi-modal input.
<<</Introduction>>>
<<<Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution>>>
For multi-modal sentiment analysis, we can simply split the image into two parts, one for the text input and the other for the tabular data, so that both can be embedded into the Super Characters image. The CNN accelerator chip comes with a Model Development Kit (MDK) for CNN model training: the two-dimensional Super Characters images are fed into the MDK to obtain the fixed-point model. The Software Development Kit (SDK) is then used to load the model into the chip and send commands to the CNN accelerator, such as reading an image or forward-passing the image through the network to get the inference result. The advantage of using the CNN accelerator is low power: it consumes only 300mW for a 3x224x224 RGB input image at a speed of 140fps. Compared with other solutions using GPUs or FPGAs, this solution implements the heavy-lifting DNN computations in the CNN accelerator chip, and the host computer is only responsible for memory read/write to generate the designed Super Characters image. This has shown good results on system implementations for NLP applications BIBREF9.
<<</Super Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution>>>
<<<Experiments>>>
<<<Data Exploration>>>
The training data set has 12,860 samples with 16 columns. The first ten columns are attributes, including sentenceid, author, nchar, created_utc, score, subreddit, label, full_text, wordcount, and id. The other six columns are labels for the tasks of Emotion_disclosure, Information_disclosure, Support, Emmotion_support, Information_support, and General_support. Each task is a binary classification problem based on the ten attributes, so 60 models have to be trained for 10-fold validation. The test data set has 5,000 samples with only the ten attribute columns. The system runs assign labels to these test samples based on the 10-fold training.
The training data contains 3,634 unique ids among its 12,860 samples, while the testing data contains only 2,443 unique ids among its 5,000 samples, meaning that some of the records may come from the same discussion thread. There are 7,556 unique authors in training and 3,769 in testing, which means some authors are active and may have published more than one comment.
Based on this, we have considered including author names in the multi-modal model as well, since a comment may be biased by the personality of its author. The maximum length of an author's name is 20 characters if SEW BIBREF1 is used to project the names onto a two-dimensional embedding. The nchar column, which indicates the number of characters in the full_text, has a maximum value of 9,993, and the maximum wordcount is 481. The column “label" has 37 unique values, which are different combinations of strings like “husband", “wife", “boyfriend", “girlfriend", and their abbreviations like “bf", “gf". The column “subreddit" is a categorical attribute with values in (“offmychest", “CasualConversation"). After converting the Unix time in the column “created_utc", we found that the records were generated from 2017 to 2018. The column “score" contains integers ranging from -44 to 1838 with 251 unique values.
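A minimal sketch of this exploration in pandas is given below; the CSV file name is a placeholder and the column names follow the attribute list above.

import pandas as pd

# Placeholder file name; the columns follow the attributes described in the text.
train = pd.read_csv("offmychest_train.csv")

print(train.shape)                      # expected: (12860, 16)
print(train["id"].nunique())            # unique ids, e.g. 3,634 reported above
print(train["author"].nunique())        # unique authors, e.g. 7,556 reported above
print(train["nchar"].max(), train["wordcount"].max())
print(train["label"].nunique())         # 37 unique label strings reported above
print(pd.to_datetime(train["created_utc"], unit="s").dt.year.value_counts())
print(train["score"].min(), train["score"].max(), train["score"].nunique())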
<<</Data Exploration>>>
<<<Design SuperCharacters Image>>>
The sentence length distribution is given in Figure FIGREF3, and the layout design for the full_text is based on it. Since we present the English words using the SEW BIBREF1 method, the size of each English word on the SuperCharacters image is best calculated as (224/N)*(224/N) pixels when the whole image is set to 224x224, where N is an integer. The dimension is set to 224x224 because of the chip specification.
<<<Design Option One>>>
In this design setting, we only include the full_text information and ignore the other attributes. If N=7, each row holds 7 words and each word occupies (224/7)*(224/7)=32*32 pixels. In this setting we can hold up to 49 words of full_text. For records with more than 49 words, the full_text is truncated to the first 49 words. In this case, only 0.86% of the training data and 1.98% of the testing data have to be cut at 49 words. An example of this design setting is in Figure FIGREF4.
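A minimal sketch of how a Design Option One image could be rendered with Pillow follows; the font path and the per-cell rendering are simplifying assumptions, since the actual SEW rendering draws each word as a squared two-dimensional glyph filling its cell.

from PIL import Image, ImageDraw, ImageFont

def render_super_characters(words, n=7, size=224, font_path="DejaVuSans.ttf"):
    # Each word occupies a (size // n) x (size // n) cell; at most n*n words fit,
    # so the text is truncated to the first n*n words (49 for N=7).
    cell = size // n
    words = words[: n * n]
    img = Image.new("L", (size, size), color=255)
    draw = ImageDraw.Draw(img)
    try:
        font = ImageFont.truetype(font_path, cell // 2)
    except OSError:
        font = ImageFont.load_default()   # fallback if the font file is absent
    for i, word in enumerate(words):
        row, col = divmod(i, n)
        # The real SEW rendering squares each word into a 2D glyph filling its
        # cell; here we simply draw the word at the cell's top-left corner.
        draw.text((col * cell, row * cell), word, fill=0, font=font)
    return img

img = render_super_characters("i just need to get this off my chest".split())
img.save("design_option_one_example.png")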
<<</Design Option One>>>
<<<Design Option Two>>>
If N=8, each row holds 8 words and each word occupies (224/8)*(224/8)=28*28 pixels. If we set the cut length to 40, there are 5 rows for the full_text, and the other 3 rows are not used for text; instead, the 224*(3*28) pixels of space are used for the tabular data given in the attributes other than full_text. For records with more than 40 words, the full_text is truncated to the first 40 words. In this case, only 2.03% of the training data and 4.14% of the testing data have to be cut at 40 words. We have the option to use the bottom part of the image to embed the other attributes. The id and sentenceid should be unrelated to the prediction, so these two attributes are not included. One example containing the full_text, author, wordcount, created_utc, subreddit, score, nchar, and label is given in Figure FIGREF4.
However, the 10-fold training accuracy of this design is not good. This is partially because some of the attributes do not contribute to prediction but add more noise instead. For example, the creation time may not be very related to the prediction tasks, yet it occupies a good portion of the embedding area of the image. In addition, since most of the wordcounts are below twenty, the two-dimensional embedding of the full_text would have better resolution if the cut length were smaller than 40, so the font size would be larger and easier for the CNN to learn.
<<</Design Option Two>>>
<<<Design Option Three>>>
This design setting sets the cut length of the full_text sentence to 42 and leaves the space of the last row for some important attributes, including subreddit, wordcount, score, and label. An example of this design setting is in Figure FIGREF4.
<<</Design Option Three>>>
<<<Design Option Four>>>
This is data augmentation for Design Option Three. For a small data set, we need more data with the same semantic meaning, generated from the raw labeled data without adding any noise. In Super Characters, the text is projected into the image, so adding some spaces at the front should not change the semantic meaning while increasing the number of generated Super Characters images. For each sentence whose length is less than 42, we add one space at the front and then generate the Super Characters image; this process iterates until the length of the padded sentence reaches 42. An example of this design setting is in Figure FIGREF4.
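A minimal sketch of this space-prepending augmentation, assuming whitespace tokenization and a cut length of 42; the empty string stands for a blank cell in the rendered image.

def augment_with_leading_spaces(sentence, cut_length=42):
    # One augmented copy per added leading space, until the padded word count
    # reaches the cut length; the sentence's semantics are unchanged.
    words = sentence.split()
    variants = []
    for n_spaces in range(1, max(0, cut_length - len(words)) + 1):
        variants.append([""] * n_spaces + words)   # "" acts as a blank cell
    return variants

variants = augment_with_leading_spaces("i finally told my wife how i feel")
print(len(variants))   # 42 - 8 = 34 augmented copies for this 8-word sentence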
<<</Design Option Four>>>
<<</Design SuperCharacters Image>>>
<<<Experimental Results>>>
After comparison, only Design Option One and Design Option Four are kept for the entire 10-fold training and validation.
Submissions are limited to a maximum of 10 system runs, so only the first five 10-fold models of both Design Option One and Design Option Four are tested against the 5,000 testing samples and submitted. The details of these 10 system runs are given in Tables TABREF10$-$TABREF15.
In general, Design Option Four is a little better than Design Option One, but the results are still not good: they are only slightly better than constantly predicting one class. The results on this OffMyChest data are not as good as on the AffCon19 CLAFF shared task, and compared with Super Characters on the Wikipedia data set, the accuracy on this data is also lower.
Several methods could be used to further improve the accuracy. First, a pretrained model may help, since the number of training examples in this shared task is relatively small for learning the complex definitions of these 6 tasks. Second, other data augmentation methods could be introduced to further boost the accuracy, for example, replacing words with their synonyms. Third, the data set is skewed; it could be balanced by upsampling.
<<</Experimental Results>>>
<<</Experiments>>>
<<<Conclusion>>>
In this paper, we proposed a modified version of Super Characters in order to make it work on multi-modal data. In the case of this AffCon CLAFF shared task, the multi-modal data includes text data and tabular data. In addition, we deploy the models on low-power CNN chips, which demonstrates the feasibility of applying DNN models with consideration of real-world practical concerns such as power and speed. The Super Characters method is relatively new and is starting to attract attention in application scenarios. Models pretrained on a large corpus would be very helpful for the Super Characters method, as the success of pretraining has been observed for NLP models like ELMo and BERT. For fine-tuning on small datasets, data augmentation should further boost the generalization capability.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nSuper Characters for Multi-modal Sentiment Analysis and Low-Power Hardware Solution\nExperiments\nData Exploration\nDesign SuperCharacters Image\nDesign Option One\nDesign Option Two\nDesign Option Three\nDesign Option Four\nExperimental Results\nConclusion"
],
"type": "outline"
}
|
1911.03842
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
<<<Abstract>>>
Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze the presence of gender bias in dialogue and examine the subsequent effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate bias: counterfactual data augmentation, targeted data collection, and conditional training. We focus on the multi-player text-based fantasy adventure dataset LIGHT as a testbed for our work. LIGHT contains gender imbalance between male and female characters with around 1.6 times as many male characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We show that (i) our proposed techniques mitigate gender bias by balancing the genderedness of generated dialogue utterances; and (ii) they work particularly well in combination. Further, we show through various metrics---such as quantity of gendered words, a dialogue safety classifier, and human evaluation---that our models generate less gendered, but still engaging chitchat responses.
<<</Abstract>>>
<<<Introduction>>>
Since machine learning algorithms learn to model patterns present in training datasets, what they learn is affected by data quality. Analysis has found that model predictions directly reflect the biases found in training datasets, such as image classifiers learning to associate ethnicity with specific activities BIBREF1. Recent work in natural language processing has found similar biases, such as in word embeddings BIBREF2, BIBREF3, BIBREF4, object classification BIBREF5, natural language inference BIBREF6, and coreference resolution BIBREF7. Less work has focused on the biases present in dialogue utterances BIBREF8, BIBREF9, despite bias being clearly present in human interactions, and the rapid development of dialogue agents for real-world use-cases, such as interactive assistants. In this work we aim to address this by focusing on mitigating gender bias.
We use the dialogue dataset from the LIGHT text adventure world BIBREF0 as a testbed for our investigation into de-biasing dialogues. The dataset consists of a set of crowd-sourced locations, characters, and objects, which form the backdrop for the dialogues between characters. In the dialogue creation phase, crowdworkers are presented with personas for characters—which themselves were written by other crowdworkers—that they should enact; the dialogues the crowdworkers generate from these personas form the dialogue dataset. Dialogue datasets are susceptible to reflecting the biases of the crowdworkers as they are often collected solely via crowdsourcing. Further, the game's medieval setting may encourage crowdworkers to generate text which accentuates the historical biases and inequalities of that time period BIBREF10, BIBREF11. However, despite the fact that the dialogues take place in a fantasy adventure world, LIGHT is a game and thus we are under no obligation to recreate historical biases in this environment, and can instead use creative license to shape it into a fun world with gender parity.
We use the dialogues in LIGHT because we find that it is highly imbalanced with respect to gender: there are over 60% more male-gendered characters than female. We primarily address the discrepancy in the representation of male and female genders, although there are many characters that are gender neutral (like “trees") or for which the gender could not be determined. We did not find any explicitly identified non-binary characters. We note that this is a bias in and of itself, and should be addressed in future work. We show that training on gender biased data leads existing generative dialogue models to amplify gender bias further. To offset this, we collect additional in-domain personas and dialogues to balance gender and increase the diversity of personas in the dataset. Next, we combine this approach with Counterfactual Data Augmentation and methods for controllable text generation to mitigate the bias in dialogue generation. Our proposed techniques create models that produce engaging responses with less gender bias.
<<</Introduction>>>
<<<Sources of Bias in Dialogue Datasets>>>
<<<Bias in Character Personas>>>
Recent work in dialogue incorporates personas, or personality descriptions that ground speaker's chat, such as I love fishing BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. Personas have been shown to increase engagingness and improve consistency. However, they can be a starting point for bias BIBREF17, BIBREF18, BIBREF9, as bias in the personas propagates to subsequent conversations.
<<<Qualitative Examination.>>>
Analyzing the personas in LIGHT qualitatively, we find many examples of bias. For example, the character girl contains the line I regularly clean and cook dinner. Further examples are given in Table TABREF1.
<<</Qualitative Examination.>>>
<<<Quantitative Examination.>>>
We quantitatively analyze bias by first examining whether the existing personas are offensive, and second, evaluating their gender balance. To assess the pervasiveness of unsafe content present in personas, we asked three independent annotators to examine each character's persona for potentially offensive content. If annotators selected that the content was offensive or maybe offensive, they were asked to place it in one of four categories – racist, sexist, classist, other – and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these personas are removed from the dataset.
We further examined gender bias in personas. Annotators were asked to label the gender of each character based on their persona description (choosing “neutral" if it was not explicit in the persona). This annotation is possible because some personas include lines such as I am a young woman, although the majority of personas do not mention an explicit gender. Annotators found nearly 50% more male-gendered characters than female-gendered characters (Table TABREF5).
While annotators labeled personas as explicitly male, female, or gender-neutral, gender bias may still exist in personas beyond explicit sentences such as I am a young man. For example, personas can contain gendered references such as I want to follow in my father's footsteps rather than mother's footsteps. These relational nouns BIBREF19, BIBREF20 such as father encode a specific relationship that can be gender biased. In this example, that relationship would be between the character and a man, rather than a woman. We analyzed the frequency of references to other gendered characters in the personas by counting the appearance of gendered words using the list compiled by BIBREF21 (for example he vs. she), and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women.
<<</Quantitative Examination.>>>
<<</Bias in Character Personas>>>
<<<Bias in Dialogue Utterances>>>
After analyzing the bias in LIGHT personas, we go on to analyze the bias in dialogues created from those personas and how to quantify it.
<<<Measuring Bias.>>>
Sexism is clearly present in many datasets BIBREF9, but finding a good way to measure sexism, especially at scale, can be challenging. A simple answer would be to rely on crowdworkers operating under their own notions of “sexism” to annotate the dialogues. However, in our experience, crowdworkers hold a range of views, often different from ours, as to what counts as sexism, making mere human evaluation far from sufficient. Note that the original LIGHT personas and dialogues were generated by crowdworkers, leaving little reason to believe that crowdworkers will be proficient at spotting the sexism that they themselves embued the dataset with in the first place. Therefore, we supplement our crowdworker-collected human annotations of gender bias with additional quantitative measurements: we measure the ratio of gendered words (taken from the union of several existing gendered word lists that were each created through either automatic means, or by experts BIBREF21, BIBREF22, BIBREF23), and we run an existing dialogue safety classifier BIBREF24 to measure offensiveness of the dialogues.
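A minimal sketch of the gendered-word ratio measurement; the word sets below are tiny illustrative stand-ins for the merged lists cited above.

import re

# Tiny illustrative stand-ins for the merged gendered word lists cited above.
MALE_WORDS = {"he", "him", "his", "man", "men", "king", "father", "brother"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "queen", "mother", "sister"}

def gendered_word_counts(utterance):
    tokens = re.findall(r"[a-z']+", utterance.lower())
    male = sum(t in MALE_WORDS for t in tokens)
    female = sum(t in FEMALE_WORDS for t in tokens)
    return male, female

def male_bias(dialogues):
    # Fraction of gendered tokens that are male-gendered, over a corpus.
    male = female = 0
    for utt in dialogues:
        m, f = gendered_word_counts(utt)
        male, female = male + m, female + f
    return male / max(male + female, 1)

print(male_bias(["The king spoke to his brother.", "She thanked the queen."]))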
<<</Measuring Bias.>>>
<<</Bias in Dialogue Utterances>>>
<<</Sources of Bias in Dialogue Datasets>>>
<<<Methodology: Mitigating Bias in Generative Dialogue>>>
We explore both data augmentation and algorithmic methods to mitigate bias in generative Transformer dialogue models. We describe first our modeling setting and then the three proposed techniques for mitigating bias. Using (i) counterfactual data augmentation BIBREF25 to swap gendered words and (ii) additional data collection with crowdworkers, we create a gender-balanced dataset. Further, (iii) we describe a controllable generation method which moderates the male and female gendered words it produces.
<<<Models>>>
Following BIBREF0, in all of our experiments we fine-tune a large, pre-trained Transformer encoder-decoder neural network on the dialogues in the LIGHT dataset. The model was pre-trained on Reddit conversations, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models were trained to generate a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments, resulting in approximately $2,200$ million training examples. The model is an 8-layer encoder, 8-layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of BIBREF26. For generation, we decode sequences with beam search with beam size 5.
<<</Models>>>
<<<Counterfactual Data Augmentation>>>
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather.
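A minimal sketch of this swap; the pairs below are a small illustrative subset of the list from BIBREF21, and a full implementation would also handle casing and context-dependent forms (e.g., her can map to his or him).

GENDER_PAIRS = [("grandmother", "grandfather"), ("she", "he"),
                ("her", "his"), ("woman", "man"), ("queen", "king")]
SWAP = {a: b for a, b in GENDER_PAIRS}
SWAP.update({b: a for a, b in GENDER_PAIRS})   # swap in both directions

def counterfactual(utterance):
    # Replace every gendered token with its counterpart of the opposite gender.
    return " ".join(SWAP.get(tok, tok) for tok in utterance.lower().split())

original = "my grandmother was a queen and she ruled well"
augmented = counterfactual(original)
# Training data would contain both the original and the augmented copy.
print(augmented)   # -> "my grandfather was a king and he ruled well"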
<<</Counterfactual Data Augmentation>>>
<<<Positive-Bias Data Collection>>>
To create a more gender-balanced dataset, we collect additional data using a Positive-Bias Data Collection (Pos. Data) strategy.
<<<Gender-swapping Existing Personas>>>
There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character.
<<</Gender-swapping Existing Personas>>>
<<<New and Diverse characters>>>
As discussed in Section SECREF2, it is insufficient to simply balance references to men and women in the dataset, as there may be bias in the form of sexism. While it is challenging to detect sexism, we attempt to offset this type of bias by collecting a set of interesting and independent characters. We do this by seeding workers with examples like adventurer with the persona I am a woman passionate about exploring a world I have not yet seen. I embark on ambitious adventures. We give the additional instruction to attempt to create diverse characters. Even with this instruction, crowdworkers still created roughly 3x as many male-gendered characters as female-gendered characters. We exclude male-gendered characters created in this fashion.
In combination with the gender swapped personas above, this yields a new set of 2,676 character personas (compared to 1,877 from the original dataset), for which the number of men and women and the number of references to male or female gendered words is roughly balanced: see Table TABREF5.
<<</New and Diverse characters>>>
<<<New dialogues>>>
Finally, we collect additional dialogues with these newly created gender balanced character personas, favoring conversations that feature female gendered characters to offset the imbalance in the original data. We added further instructions for annotators to be mindful of gender bias during their conversations, and in particular to assume equality between genders – social, economic, political, or otherwise – in this fantasy setting. In total, we collect 507 new dialogues containing 6,658 new dialogue utterances in total (about 6% of the size of the full LIGHT dataset).
<<</New dialogues>>>
<<</Positive-Bias Data Collection>>>
<<<Conditional Training>>>
Bias in dialogue can manifest itself in various forms, but one form is the imbalanced use of gendered words. For example, LIGHT contains far more male-gendered words than female-gendered words rather than an even split between words of both genders. To create models that can generate a gender-balanced number of gendered words, we propose Conditional Training (CT) for controlling generative model output BIBREF27, BIBREF28, BIBREF29, BIBREF30. Previous work proposed a mechanism to train models with specific control tokens so models learn to associate the control token with the desired text properties BIBREF28, then modifying the control tokens during inference to produce the desired result.
Prior to training, each dialogue response is binned into one of four bins – $\text{F}^{0/+}\text{M}^{0/+}$ – where $\text{F}^{0}$ indicates that there are zero female gendered words in the response and $\text{F}^{+}$ indicates the presence of at least one female gendered word. The gendered words are determined via an aggregation of existing lists of gendered nouns and adjectives from BIBREF21, BIBREF22, BIBREF23. The bins are used to train a conditional model by appending a special token (indicating the bin for the target response) to the end of the input which is given to the encoder. At inference time, the bins can be manipulated to produce dialogue outputs with various quantities of gendered words.
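A minimal sketch of the binning and control-token mechanism; the token format and the tiny word lists are illustrative assumptions.

def genderedness_bin(response, male_words, female_words):
    tokens = response.lower().split()
    f = "F+" if any(t in female_words for t in tokens) else "F0"
    m = "M+" if any(t in male_words for t in tokens) else "M0"
    return f + m                       # one of F0M0, F0M+, F+M0, F+M+

def add_control_token(context, target, male_words, female_words):
    # Append the bin of the *target* response to the encoder input; at test
    # time the bin can be set manually (e.g. always F0M0) to control output.
    return context + " <" + genderedness_bin(target, male_words, female_words) + ">"

male = {"he", "his", "king"}; female = {"she", "her", "queen"}
print(add_control_token("who rules this land ?", "the queen rules it", male, female))
# -> "who rules this land ? <F+M0>"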
<<</Conditional Training>>>
<<</Methodology: Mitigating Bias in Generative Dialogue>>>
<<<Results>>>
We train generative Transformer models using each of these methods – Counterfactual Data Augmentation that augments with swaps of gendered words (CDA, §SECREF19), adding new dialogues (Positive-Bias Data Collection, §SECREF20), and controllable generation to control the quantity of gendered words (CT, §SECREF24) – and finally combine all of these methods together (ALL).
<<<Bias is Amplified in Generation>>>
Existing Transformer generative dialogue models BIBREF31, BIBREF32, BIBREF0 are trained to take as input the dialogue context and generate the next utterance. Previous work has shown that machine learning models reflect the biases present in data BIBREF4, BIBREF3, and that these biases can be easy to learn compared to more challenging reasoning BIBREF2, BIBREF33. Generative models often use beam search or top-k sampling BIBREF34 to decode, and these methods are well-known to produce generic text BIBREF35, which makes them susceptible to statistical biases present in datasets.
As shown in Table TABREF11, we find that existing models actually amplify bias. When the trained model generates gendered words (i.e., words from our gendered word list), it generates male-gendered words the vast majority of the time – even on utterances for which it is supposed to generate only female-gendered words (i.e., the gold label only contains female-gendered words), it generates male-gendered words nearly $78\%$ of the time.
Additionally, following BIBREF8, we run an offensive language classifier on the gold responses and the model generated utterances (Table TABREF16) and find that the model produces more offensive utterances than exist in the dataset.
<<</Bias is Amplified in Generation>>>
<<<Genderedness of Generated Text>>>
We analyze the performance of the various techniques by dividing the test set using the four genderedness bins – $\text{F}^{0}\text{M}^{0}$, $\text{F}^{0}\text{M}^{+}$, $\text{F}^{+}\text{M}^{0}$, and $\text{F}^{+}\text{M}^{+}$ – and calculate the F1 word overlap with the gold response, the percentage of gendered words generated (% gend. words), and the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model). We compare to the gold labels from the test set and a baseline model that does not use any of the bias mitigation techniques. Results for all methods are displayed in Table TABREF11.
Each of the methods we explore improve in % gendered words, % male bias, and F1 over the baseline Transformer generation model, but we find combining all methods in one – the ALL model is the most advantageous. While ALL has more data than CDA and CT, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the CT and ALL models benefit from knowing the data split ($\text{F}^{0}\text{M}^{0}$, for example), and both models yield a genderedness ratio closest to ground truth.
<<</Genderedness of Generated Text>>>
<<<Conditional Training Controls Gendered Words>>>
Our proposed CT method can be used to control the use of gendered words in generated dialogues. We examine the effect of such training by generating responses on the test set by conditioning the ALL model on a singular bin for all examples. Results are shown in Figure FIGREF12. Changing the bin radically changes the genderedness of generated text without significant changes to F1.
Examples of generated text from both the baseline and the ALL model are shown in Table TABREF31. The baseline model generates male-gendered words even when the gold response contains no gendered words or only female-gendered words, even generating unlikely sequences such as “my name is abigail. i am the king of this kingdom.".
<<</Conditional Training Controls Gendered Words>>>
<<<Safety of Generated Text>>>
Using a dialogue safety classifier BIBREF24, we find that our proposed de-biased models are rated as less offensive compared to the baseline generative Transformer and the LIGHT data (see Table TABREF16).
<<</Safety of Generated Text>>>
<<<Human Evaluation>>>
Finally, we use human evaluation to compare the quality of our de-biasing methods. We use the dialogue evaluation system Acute-Eval BIBREF36 to ask human evaluators to compare two conversations from different models and decide which model is more biased and which model is more engaging. Following Acute-Eval, we collect 100 human and model paired chats. Conversations from a human and baseline model are compared to conversations from a human and the ALL model with all generations set to the $\text{F}^{0}\text{M}^{0}$ gender-neutral control bin. Evaluators are asked which model is more engaging and for which model they find it more difficult to predict the gender of the speaker. We found that asking about difficulty of predicting a speaker's gender was much more effective than asking evaluators to evaluate sexism or gender bias. Figure FIGREF17 shows that evaluators rate the ALL model harder to predict the gender of (statistically significant at $p < 0.01$) while engagingness does not change. Our proposed methods are able to mitigate gender bias without degrading dialogue quality.
<<</Human Evaluation>>>
<<</Results>>>
<<<Conclusion>>>
We analyze gender bias in dialogue and propose a general purpose method for understanding and mitigating bias in character personas and their associated dialogues. We present techniques using data augmentation and controllable generation to reduce gender bias in neural language generation for dialogue. We use the dataset LIGHT as a testbed for this work. By integrating these methods together, our models provide control over how gendered dialogue is and decrease the offensiveness of the generated utterances. Overall, our proposed methodology reduces the effect of bias while maintaining dialogue engagingness.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nSources of Bias in Dialogue Datasets\nBias in Character Personas\nQualitative Examination.\nQuantitative Examination.\nBias in Dialogue Utterances\nMeasuring Bias.\nMethodology: Mitigating Bias in Generative Dialogue\nModels\nCounterfactual Data Augmentation\nPositive-Bias Data Collection\nGender-swapping Existing Personas\nNew and Diverse characters\nNew dialogues\nConditional Training\nResults\nBias is Amplified in Generation\nGenderedness of Generated Text\nConditional Training Controls Gendered Words\nSafety of Generated Text\nHuman Evaluation\nConclusion"
],
"type": "outline"
}
|
1911.06191
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Microsoft Research Asia's Systems for WMT19
<<<Abstract>>>
We at Microsoft Research Asia made submissions to 11 language directions in the WMT19 news translation tasks. We won first place for 8 of the 11 directions and second place for the other three. Our basic systems are built on Transformer, back translation and knowledge distillation. We integrate several of our recent techniques to enhance the baseline systems: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA).
<<</Abstract>>>
<<<Introduction>>>
We participated in the WMT19 shared news translation task in 11 translation directions. We achieved first place for 8 directions: German$\leftrightarrow $English, German$\leftrightarrow $French, Chinese$\leftrightarrow $English, English$\rightarrow $Lithuanian, English$\rightarrow $Finnish, and Russian$\rightarrow $English, and second place (ranked by teams) for the other three directions: Lithuanian$\rightarrow $English, Finnish$\rightarrow $English, and English$\rightarrow $Kazakh.
Our basic systems are based on Transformer, back translation and knowledge distillation. We experimented with several techniques we proposed recently. In brief, the innovations we introduced are:
<<<Multi-agent dual learning (MADL)>>>
The core idea of dual learning is to leverage the duality between the primal task (mapping from domain $\mathcal {X}$ to domain $\mathcal {Y}$) and the dual task (mapping from domain $\mathcal {Y}$ to $\mathcal {X}$) to boost the performance of both tasks. MADL BIBREF0 extends the dual learning BIBREF1, BIBREF2 framework by introducing multiple primal and dual models. It was integrated into our submitted systems for German$\leftrightarrow $English and German$\leftrightarrow $French translations.
<<</Multi-agent dual learning (MADL)>>>
<<<Masked sequence-to-sequence pretraining (MASS)>>>
Pre-training and fine-tuning have achieved great success in language understanding. MASS BIBREF3, a pre-training method designed for language generation, adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. It was integrated into our submitted systems for Chinese$\rightarrow $English and English$\rightarrow $Lithuanian translations.
<<</Masked sequence-to-sequence pretraining (MASS)>>>
<<<Neural architecture optimization (NAO)>>>
As is well known, the evolution of neural network architectures plays a key role in advancing neural machine translation. Neural architecture optimization (NAO), our newly proposed method BIBREF4, leverages the power of a gradient-based method to conduct optimization and guide the creation of better neural architectures in a continuous and more compact space, given the historically observed architectures and their performances. It was applied to English$\leftrightarrow $Finnish translations in our submitted systems.
<<</Neural architecture optimization (NAO)>>>
<<<Soft contextual data augmentation (SCA)>>>
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is relatively limited. SCA BIBREF5 softly augments a randomly chosen word in a sentence with a contextual mixture of multiple related words, i.e., it replaces the one-hot representation of a word by a distribution over the vocabulary provided by a language model. It was applied in Russian$\rightarrow $English translation in our submitted systems.
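A minimal sketch of the soft replacement in PyTorch, with toy dimensions; it only illustrates the expectation over the embedding table, not our actual implementation.

import torch

vocab_size, embed_dim = 1000, 512
embedding = torch.nn.Embedding(vocab_size, embed_dim)

def soft_embedding(lm_logits_at_position):
    # lm_logits_at_position: (vocab_size,) scores from a language model for the
    # position being augmented. The soft word is the probability-weighted
    # average of all word embeddings instead of a single row of the table.
    probs = torch.softmax(lm_logits_at_position, dim=-1)   # (V,)
    return probs @ embedding.weight                        # (embed_dim,)

lm_logits = torch.randn(vocab_size)          # stand-in for real LM scores
soft_vec = soft_embedding(lm_logits)
print(soft_vec.shape)                        # torch.Size([512])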
<<</Soft contextual data augmentation (SCA)>>>
<<</Introduction>>>
<<<Our Techniques>>>
<<<Masked sequence-to-sequence pre-training (MASS)>>>
MASS is a pre-training method for language generation. For machine translation, it can leverage monolingual data in two languages to pre-train a translation model. Given a sentence $x \in \mathcal {X}$, we denote $x^{\setminus u:v}$ as a modified version of $x$ whose fragment from position $u$ to $v$ is masked, with $0<u<v<m$, where $m$ is the number of tokens of sentence $x$. We denote $k=v-u+1$ as the number of tokens being masked from position $u$ to $v$. We replace each masked token by a special symbol $[\mathbb {M}]$, and the length of the masked sentence is not changed. $x^{u:v}$ denotes the sentence fragment of $x$ from $u$ to $v$.
MASS pre-trains a sequence to sequence model by predicting the sentence fragment $x^{u:v}$ taking the masked sequence $x^{\setminus u:v}$ as input. We use the log likelihood as the objective function:
where $\mathcal {X}$, $\mathcal {Y}$ denote the source and target domain. In addition to the zero/low-resource setting BIBREF7, we also extend MASS to the supervised setting, where bilingual sentence pairs $(x, y) \in (\mathcal {X}, \mathcal {Y})$ can be leveraged for pre-training. The log likelihood in the supervised setting is as follows:
where $[\cdot ;\cdot ]$ represents the concatenation operation. $P(y|x^{\setminus u:v};\theta )$ and $P(x|y^{\setminus u:v};\theta )$ denote the probability of translating a masked sequence to another language, which encourage the encoder to extract meaningful representations of unmasked input tokens in order to predict the masked output sequence. $P(x^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ and $P(y^{u:v}|[x^{\setminus u:v}; y^{\setminus u:v}];\theta )$ denote the probability of generating the masked source/target segment given both the masked source and target sequences, which encourage the model to extract cross-lingual information. $P(y^{u:v}|x^{\setminus u:v};\theta )$ and $P(x^{u:v}|y^{\setminus u:v};\theta )$ denote the probability of generating the masked fragment given only the masked sequence in another language. More details about MASS can be found in BIBREF3.
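In the notation above, and as a reconstruction based on the definitions in this subsection rather than a verbatim copy of the display equations (see BIBREF3 for the exact formulation), the unsupervised objective can be read as

$L(\theta ;\mathcal {X},\mathcal {Y}) = \frac{1}{|\mathcal {X}|}\sum _{x\in \mathcal {X}}\log P\big (x^{u:v}\mid x^{\setminus u:v};\theta \big ) + \frac{1}{|\mathcal {Y}|}\sum _{y\in \mathcal {Y}}\log P\big (y^{u:v}\mid y^{\setminus u:v};\theta \big ),$

while the supervised objective is the analogous sum, over bilingual pairs $(x, y)$, of the logarithms of the six conditional probabilities enumerated above.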
<<</Masked sequence-to-sequence pre-training (MASS)>>>
<<</Our Techniques>>>
<<<Submitted Systems>>>
<<<English@!START@$\leftrightarrow $@!END@German>>>
We submit constrained systems to both English to German and German to English translations, with the same techniques.
<<<Dataset>>>
We concatenate “Europarl v9”, “News Commentary v14”, “Common Crawl corpus” and “Document-split Rapid corpus” as the basic bilingual dataset (denoted as $\mathcal {B}_0$). Since “Paracrawl” data is noisy, we select 20M bilingual sentence pairs from this corpus using the script filter_interactive.py. The two parts of bilingual data are concatenated together (denoted as $\mathcal {B}_1$). We clean $\mathcal {B}_1$ by normalizing the sentences, removing non-printable characters, and tokenizing. We share a vocabulary for the two languages and apply BPE for word segmentation with 35000 merge operations. (We tried different BPE merge operations but found no significant differences.) For monolingual data, we use $120M$ English sentences (denoted as $\mathcal {M}_{\text{en}}$) and $120M$ German sentences (denoted as $\mathcal {M}_{\text{de}}$) from Newscrawl, and preprocess them in the same way as bilingual data. We use newstest 2016 as the validation set and newstest 2018 as the test set.
<<</Dataset>>>
<<<Model Configuration>>>
We use the PyTorch implementation of Transformer. We choose the Transformer_big setting, in which both the encoder and decoder are of six layers. The dropout rate is fixed as $0.2$. We set the batchsize as 4096 and the parameter –update-freq as 16. We apply Adam BIBREF10 optimizer with learning rate $5\times 10^{-4}$.
<<</Model Configuration>>>
<<<Training Pipeline>>>
The pipeline consists of three steps:
1. Pre-train two English$\rightarrow $German translation models (denoted as $\bar{f}_1$ and $\bar{f}_2$) and two German$\rightarrow $English translation models (denoted as $\bar{g}_1$ and $\bar{g}_2$) on $\mathcal {B}_1$; pre-train another English$\rightarrow $German (denoted as $\bar{f}_3$) and German$\rightarrow $English (denoted as $\bar{g}_3$) on $\mathcal {B}_0$.
2. Apply back translation following BIBREF11, BIBREF12 (a sketch of this step is given after the list). We back-translate $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ using $\bar{f}_3$ and $\bar{g}_3$ with beam search, add noise to the translated sentences BIBREF12, merge the synthetic data with $\mathcal {B}_1$, and train one English$\rightarrow $German model $f_0$ and one German$\rightarrow $English model $g_0$ for seven days on eight V100 GPUs.
3. Apply MADL to $f_0$ and $g_0$. That is, the $F_\alpha $ in Eqn.(DISPLAY_FORM8) is specified as the combination of $f_0,\bar{f}_1,\bar{f}_2$ with equal weights; and $G_\beta $ consists of $g_0,\bar{g}_1,\bar{g}_2$. During training, we will only update $f_0$ and $g_0$. To speed up training, we randomly select $20M$ monolingual English and German sentences from $\mathcal {M}_{\text{en}}$ and $\mathcal {M}_{\text{de}}$ respectively instead of using all monolingual sentences. The eventual output models are denoted as $f_1$ and $g_1$ respectively. This step takes 3 days on four P40 GPUs.
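A minimal sketch of the noised back-translation step (step 2) follows; the noise probabilities and the placeholder reverse model are illustrative assumptions, not our exact settings.

import random

class PlaceholderReverseModel:
    # Stand-in for the pre-trained reverse model (e.g. \bar{g}_3); a real system
    # would run beam-search decoding here.
    def translate(self, sentence):
        return sentence

def add_noise(tokens, drop_prob=0.1, blank_prob=0.1, max_shuffle_dist=3):
    # Word dropout, word blanking, and a small local shuffle, in the spirit of
    # noised back translation BIBREF12; the probabilities are assumptions.
    kept = []
    for tok in tokens:
        r = random.random()
        if r < drop_prob:
            continue
        kept.append("<BLANK>" if r < drop_prob + blank_prob else tok)
    keys = [i + random.uniform(0, max_shuffle_dist) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept))]

def back_translate(mono_target_sentences, reverse_model):
    # Pair each monolingual target sentence with a noised synthetic source; the
    # result is later merged with the bitext B_1.
    synthetic = []
    for tgt in mono_target_sentences:
        src = reverse_model.translate(tgt)
        synthetic.append((" ".join(add_noise(src.split())), tgt))
    return synthetic

print(back_translate(["ein kleines beispiel"], PlaceholderReverseModel()))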
<<</Training Pipeline>>>
<<<Results>>>
The results are summarized in Table TABREF24, which are evaluated by sacreBLEU. The baseline is the average accuracy of models using only bitext, i.e., $\bar{f}_1$ and $\bar{f}_2$ for English$\rightarrow $German translation and $\bar{g}_1$ and $\bar{g}_2$ for German$\rightarrow $English, and BT is the accuracy of the model after back-translation training. As can be seen, back translation improves accuracy. For example, back-translation boosts the BLEU score from $45.6$ to $47.4$ on news18 English$\rightarrow $German translation, which is $1.8$ point improvement. MADL further boosts BLEU to $50.4$, obtaining another 3-point improvement, demonstrating the effectiveness of our method.
For the final submission, we accumulate many translation models (trained using bitext, back translation, and MADL, with different random seeds) and do knowledge distillation on the source sentences from WMT14 to WMT19 test sets. Take English$\rightarrow $German translation as an example. Denote the English inputs as $\mathcal {T}=\lbrace s_i\rbrace _{i=1}^{N_T}$, where $N_T$ is the size of the test set. For each $s$ in $\mathcal {T}$, we translate $s$ to $d^\prime $ using $M$ English$\rightarrow $German models and eventually obtain
where $f^{(j)}$ is the $j$-th translation model we accumulated, $\mathcal {T}$ is the combination of inputs from WMT14 to WMT19. After obtaining $\mathcal {E}$, we randomly select $N_TM$ bitext pairs (denoted as $\mathcal {B}_2$) from $\mathcal {B}_1$ and finetune model $f_1$ on $\mathcal {B}_2\cup \mathcal {E}$. We stop tuning when the BLEU scores of WMT16 (i.e., the validation set) drops.
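A minimal sketch of how $\mathcal {E}$ and the sampled bitext $\mathcal {B}_2$ could be assembled; the translate functions below are placeholders for the accumulated models $f^{(j)}$.

import random

def build_distillation_set(test_sources, translate_fns, bitext):
    # E: every test-set source sentence paired with each accumulated model's
    # translation (translate_fns stand in for the models f^(j)).
    distilled = [(s, f(s)) for s in test_sources for f in translate_fns]
    # B_2: a random sample of original bitext pairs, here of the same size as E.
    sampled = random.sample(bitext, min(len(distilled), len(bitext)))
    return distilled + sampled          # f_1 is then fine-tuned on this mixture

# Toy usage with placeholder "models":
fns = [lambda s: s.upper(), lambda s: s.lower()]
data = build_distillation_set(["Guten Morgen"], fns, [("hallo", "hello")] * 5)
print(len(data))    # 2 distilled pairs + 2 sampled bitext pairs = 4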
We eventually obtain $44.9$ BLEU score for English$\rightarrow $German and $42.8$ for German$\rightarrow $English on WMT19 test sets and are ranked in the first place in these two translation tasks.
<<</Results>>>
<<</English@!START@$\leftrightarrow $@!END@German>>>
<<<German@!START@$\leftrightarrow $@!END@French>>>
For German$\leftrightarrow $French translation, we follow a similar process as the one used to English$\leftrightarrow $German tasks introduced in Section SECREF17. We merge the “commoncrawl”, “europarl-v7” and part of “de-fr.bicleaner07” selected by filter_interactive.py as the bilingual data. We collect $20M$ monolingual sentences for French and $20M$ for German from newscrawl. The data pre-processing rule and training procedure are the same as that used in Section SECREF17. We split $9k$ sentences from the “dev08_14” as the validation set and use the remaining ones as the test set.
The results of German$\leftrightarrow $French translation on the test set are summarized in Table TABREF27.
Again, our method achieves significant improvement over the baselines. Specifically, MADL boosts the baseline of German$\rightarrow $French and French$\rightarrow $German by 2 and $1.5$ points respectively.
Our submitted German$\rightarrow $French is a single system trained by MADL, achieving $37.3$ BLEU on WMT19. The French$\rightarrow $German is an ensemble of three independently trained models, achieving $35.0$ BLEU score. Our systems are ranked in the first place for both German$\rightarrow $French and French$\rightarrow $German in the leaderboard.
<<</German@!START@$\leftrightarrow $@!END@French>>>
<<<Chinese@!START@$\rightarrow $@!END@English>>>
<<<MASS Pre-training>>>
We pre-train MASS (Transformer_big) with both monolingual and bilingual data. We use 100M Chinese and 300M English monolingual sentences for the unsupervised setting (Equation DISPLAY_FORM10), and a total of 18M and 56M bilingual sentence pairs for the supervised setting (Equation DISPLAY_FORM11). We share the encoder and decoder for all the losses in Equations DISPLAY_FORM10 and DISPLAY_FORM11. We then fine-tune the MASS pre-trained model on both the 18M and 56M bilingual sentence pairs to get the baseline translation models for both Chinese$\rightarrow $English and English$\rightarrow $Chinese.
<<</MASS Pre-training>>>
<<<Back Translation and Knowledge Distillation>>>
We randomly choose 40M monolingual sentences for Chinese and English respectively for back translation BIBREF11, BIBREF1 and knowledge distillation BIBREF15, BIBREF16. We iterate back translation and knowledge distillation multiple times, to gradually boost the performance of the model.
<<</Back Translation and Knowledge Distillation>>>
<<<WMT19 Submission>>>
For the WMT19 submission, we conduct fine-tuning and speculation to further boost the accuracy by using the source sentences in the WMT19 test set. We first filter the bilingual as well as the pseudo-generated data according to their relevance to the source sentences. We use the filtering method in BIBREF17 and continue to train the model on the filtered data. Second, we conduct speculation on the test source sentences following the practice in BIBREF17. The final BLEU score of our submission is 39.3, ranked in the first place in the leaderboard.
<<</WMT19 Submission>>>
<<</Chinese@!START@$\rightarrow $@!END@English>>>
<<<English@!START@$\leftrightarrow $@!END@Lithuanian>>>
For English$\leftrightarrow $Lithuanian translation, we follow a similar process to that for the Chinese$\rightarrow $English task introduced in Section SECREF28. We use all the WMT bilingual data, which is 2.24M sentence pairs after filtration. We use the same English monolingual data as used in Chinese-English. We select 100M Lithuanian monolingual sentences from the official commoncrawl and use all the wiki and news Lithuanian monolingual data provided by WMT. In addition, we crawl 5M Lithuanian news sentences from the LRT website. We share the BPE vocabulary between English and Lithuanian, and the vocabulary size is 65K.
All the bilingual and monolingual data are used for MASS pre-training, and all the bilingual data are used for fine-tuning. For iterative back translation and knowledge distillation, we split 24M English monolingual sentences as well as 12M Lithuanian monolingual sentences into 5 parts through sampling with replacement, to train different models independently so as to increase diversity in re-ranking/ensemble. Each model uses 8M English monolingual sentences and 6M Lithuanian monolingual sentences. For our WMT19 submission, unlike zh-en, the speculation technique is not used.
The BLEU scores on newsdev19 are shown in Table TABREF41. Our final submissions for WMT19 achieves 20.1 BLEU points for English$\rightarrow $Lithuanian translation (ranked in the first place) and 35.6 for Lithuanian$\rightarrow $English translation (ranked in the second place).
<<</English@!START@$\leftrightarrow $@!END@Lithuanian>>>
<<<English@!START@$\leftrightarrow $@!END@Finnish>>>
<<<Preprocess>>>
We use the official English-Finnish data from WMT19, including both bilingual data and monolingual data. After de-duplicating, the bilingual data contains $8.8M$ aligned sentence pairs. We share the vocabulary for English and Finnish with $46k$ BPE units. We use the WMT17 and WMT18 English-Finnish test sets as two development datasets, and tune hyper-parameters based on the concatenation of them.
<<</Preprocess>>>
<<<Architecture search>>>
We use NAO to search sequence-to-sequence architectures for the English-Finnish translation tasks, as introduced in subsection SECREF12. We use PyTorch for our implementation. Due to time limitations, we do not target finding better neural architectures than Transformer; instead, we target models with performance comparable to Transformer, while providing diversity in the reranking process. The whole search process takes $2.5$ days on 16 P40 GPU cards, and the discovered neural architecture, named NAONet, is visualized in the Appendix.
<<</Architecture search>>>
<<<Train single models>>>
The final system for English-Finnish is obtained through reranking of three strong model checkpoints, respectively from the Transformer model decoding from left to right (L2R Transformer), the Transformer model decoding from right to left (R2L Transformer), and NAONet decoding from left to right. All the models have 6-6 layers in the encoder/decoder, and are obtained using the same process, which is detailed below.
Step 1: Base models. Train two models $P_1(x|y)$ and $P_1(y|x)$ based on all the bilingual dataset ($8.8$M), respectively for English$\rightarrow $Finnish and Finnish$\rightarrow $English translations.
Step 2: Back translation. Do the normal back translation BIBREF11, BIBREF1 using the two models from Step 1. Specifically, we choose a $10M$ monolingual English corpus, use $P_1(y|x)$ to generate the $10M$ pseudo bitext with beam search (beam size is set to 5), and mix it with the bilingual data to continue the training of $P_1(x|y)$. The mixing ratio is set to $1:1$ through up-sampling. The model obtained through such a process is denoted as $P_2(x|y)$. The same process is applied in the opposite direction and the new model $P_2(y|x)$ is attained.
Step 3: Back translation + knowledge distillation. In this step we generate more pseudo bitext by sequence level knowledge distillation BIBREF15 apart from using back translation. To be more concrete, as the first step, similar to Step 2, we choose $15M$ monolingual English and Finnish corpus, and generate the translations using $P_2(y|x)$ and $P_2(x|y)$, respectively. The resulting pseudo bitext is respectively denoted as $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$. Then we concatenate all the bilingual data, $D_{x\rightarrow y}$ and $D_{y\rightarrow x}$, and use the whole corpus to train a new English-Finnish model from scratch. The attained model is denoted as $P_3(y|x)$.
Step 4: Finetune. In this step we try a very simple data selection method to handle the domain mismatch problem in WMT. We remove all the bilingual corpus from Paracrawl which is generally assumed to be quite noisy BIBREF18 and use the remaining bilingual corpus ($4.5M$) to finetune $P_3(y|x)$ for one epoch. The resulting model is denoted as $P_4(y|x)$ which is set as the final model checkpoint.
To investigate the effects of the four steps, we record the resulting BLEU scores on WMT17 and WMT18 test sets in Table TABREF46, taking the L2R Transformer model as an example. Furthermore, we report the final BLEU scores of the three models after the four steps in Table TABREF47. All the results are obtained via beam size 5 and length penalty $1.0$. The similar results for Finnish-English translation are shown in Table TABREF48.
<<</Train single models>>>
<<<Re-ranking>>>
We use n-best re-ranking to deliver the final translation results using the three model checkpoints introduced in the last subsection. The beam size is set as 12. The weights of the three models, as well as the length penalty in generation, are tuned on the WMT-18 test sets. The results are shown in the second row of Table TABREF50.
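A minimal sketch of weighted n-best re-ranking with a length penalty follows; the linear score combination below is an illustrative assumption rather than the exact formula used here.

def rerank(candidates, model_scores, weights, length_penalty=1.0):
    # candidates: list of token lists from the n-best output (beam size 12 above).
    # model_scores[m][i]: log-probability of candidate i under model m, e.g.
    # the L2R Transformer, the R2L Transformer and NAONet. Model weights and
    # the length penalty would be tuned on a held-out test set.
    def score(i):
        combined = sum(w * model_scores[m][i] for m, w in enumerate(weights))
        return combined / (len(candidates[i]) ** length_penalty)
    return max(range(len(candidates)), key=score)

cands = [["hyvaa", "huomenta"], ["hyvaa", "paivaa", "kaikille"]]
scores = [[-1.2, -2.0], [-1.5, -1.9], [-1.0, -2.4]]   # 3 models x 2 candidates
print(rerank(cands, scores, weights=[0.4, 0.3, 0.3]))  # prints the best index, 0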
We would also like to investigate the influence of NAONet on the re-ranking results. To achieve that, in re-ranking we replace NAONet with another L2R Transformer model, trained with the same process as in subsection SECREF45 and differing only in the random seed, while keeping the other two models unchanged. The results are illustrated in the last row of Table TABREF50. From the comparison of the two rows in Table TABREF50, we can see that the new architecture NAONet discovered via NAO brings more diversity to the ranking, thus leading to better results. We also report similar results for Finnish-English tasks in Table TABREF51.
Our systems achieve $27.4$ BLEU for English$\rightarrow $Finnish and $31.9$ for Finnish$\rightarrow $English, ranked in the first place and second place (by teams), respectively.
<<</Re-ranking>>>
<<</English@!START@$\leftrightarrow $@!END@Finnish>>>
<<<Russian@!START@$\rightarrow $@!END@English>>>
<<<Our system>>>
Our final system for Russian$\rightarrow $English translation is a combination of the Transformer network BIBREF9, back translation BIBREF11, knowledge distillation BIBREF15, soft contextual data augmentation BIBREF5, and model ensemble. We use Transformer_big as the network architecture. We first train two models, English$\rightarrow $Russian and Russian$\rightarrow $English respectively, on bilingual pairs as baseline models. Based on these two models, we perform back translation and knowledge distillation on monolingual data, generating 40M synthetic sentence pairs. Combining both bilingual and synthetic data, we get a large training corpus with 56M pairs in total. We upsample the bilingual pairs and shuffle the combined corpus to ensure the balance between bilingual and synthetic data. Finally, we train the Russian$\rightarrow $English model from scratch. During the training, we also use soft contextual data augmentation to further enhance training. Following the above procedure, 5 different models are trained and ensembled for the final submission.
<<</Our system>>>
<<</Russian@!START@$\rightarrow $@!END@English>>>
<<<English@!START@$\rightarrow $@!END@Kazakh>>>
<<<Result>>>
Our final submission achieves 10.6 BLEU score, ranked second by teams in the leaderboard.
<<</Result>>>
<<</English@!START@$\rightarrow $@!END@Kazakh>>>
<<</Submitted Systems>>>
<<<Conclusions>>>
This paper describes Microsoft Research Asia's neural machine translation systems for the WMT19 shared news translation tasks. Our systems are built on Transformer, back translation and knowledge distillation, enhanced with our recently proposed techniques: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA). Due to time and GPU limitations, we only apply each technique to a subset of translation tasks. We believe that combining them will further improve translation accuracy, and we will conduct such experiments in the future. Furthermore, some other techniques such as deliberation learning BIBREF20, adversarial learning BIBREF21, and reinforcement learning BIBREF22, BIBREF23 could also help and are worthy of exploration.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMulti-agent dual learning (MADL)\nMasked sequence-to-sequence pretraining (MASS)\nNeural architecture optimization (NAO)\nSoft contextual data augmentation (SCA)\nOur Techniques\nMasked sequence-to-sequence pre-training (MASS)\nSubmitted Systems\nEnglish@!START@$\\leftrightarrow $@!END@German\nDataset\nModel Configuration\nTraining Pipeline\nResults\nGerman@!START@$\\leftrightarrow $@!END@French\nChinese@!START@$\\rightarrow $@!END@English\nMASS Pre-training\nBack Translation and Knowledge Distillation\nWMT19 Submission\nEnglish@!START@$\\leftrightarrow $@!END@Lithuanian\nEnglish@!START@$\\leftrightarrow $@!END@Finnish\nPreprocess\nArchitecture search\nTrain single models\nRe-ranking\nRussian@!START@$\\rightarrow $@!END@English\nOur system\nEnglish@!START@$\\rightarrow $@!END@Kazakh\nResult\nConclusions"
],
"type": "outline"
}
|
2002.12328
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Few-shot Natural Language Generation for Task-Oriented Dialog
<<<Abstract>>>
As a crucial component in task-oriented dialog systems, the Natural Language Generation (NLG) module converts a dialog act represented in a semantic form into a response in natural language. The success of traditional template-based or statistical models typically relies on heavily annotated data, which is infeasible for new domains. Therefore, it is pivotal for an NLG system to generalize well with limited labelled data in real applications. To this end, we present FewShotWoz, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems. Further, we develop the SC-GPT model. It is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability, and fine-tuned with only a few domain-specific labels to adapt to new domains. Experiments on FewShotWoz and the large Multi-Domain-WOZ datasets show that the proposed SC-GPT significantly outperforms existing methods, measured by various automatic metrics and human evaluations.
<<</Abstract>>>
<<<Introduction>>>
Task-oriented dialog systems are becoming increasingly popular, as they can assist users in various daily activities such as ticket booking and restaurant reservations. In a typical task-oriented dialog system, the Natural Language Generation (NLG) module plays a crucial role: it converts a system action (often specified in a semantic form selected by a dialog policy) into a final response in natural language. Hence, the response should be adequate to represent semantic dialog actions, and fluent to engage users' attention. As the ultimate interface to interact with users, NLG has a significant impact on the users' experience.
Existing methods for NLG can be broadly summarized into two major categories. $({1})$ Template-based methods require domain experts to handcraft templates for each domain, and the system fills in slot-values afterward BIBREF0, BIBREF1. Thus, the produced responses are often adequate to contain the required semantic information, but not always fluent and natural, hurting users' experiences. $({2})$ Statistical language models such as neural networks BIBREF2 learn to generate fluent responses via training from labelled corpus. One canonical model is semantically conditioned LSTM (SC-LSTM) BIBREF3, which encodes dialog acts with one-hot representations and uses them as an extra feature to inform the sentence generation process. Despite its good performance on simple domains, it requires large amounts of domain-specific annotated data which is not available for many domains in real-world applications. Even worse, this renders severe scalability issues when the number of possible combinations of dialog acts grows exponentially with the number of slots in more complex domains.
We revisit the current research benchmarks for NLG, and notice that each dialog domain is extensively labelled to favor model training. However, this is in contrast to the real-world application scenarios, where only very limited amounts of labelled data are available for new domains. To simulate such a few-shot learning setting, we have developed a new benchmark dataset, called FewShotWOZ, based on the MultiWOZ BIBREF4 and Cambridge NLG datasets BIBREF5. FewShotWOZ consists of dialog utterances from 7 domains. For each domain, we provide less than 50 labeled utterances for fine-tuning. We believe that FewShotWOZ can better inspire research to address the challenge of learning data-hungry statistical models with very limited amounts of labelled data in real-world scenarios.
To deal with the challenge of few-shot learning, we develop the SC-GPT model. SC-GPT is a multi-layer Transformer neural language model, trained in three steps: $({1})$ Pre-trained on plain text, similar to GPT-2 BIBREF6; $({2})$ Continuously pre-trained on large amounts of dialog-act labeled utterances corpora to acquire the ability of controllable generation; $({3})$ Fine-tuned for a target domain using very limited amounts of domain labels. Unlike GPT-2, SC-GPT generates semantically controlled responses that are conditioned on the given semantic form, similar to SC-LSTM but requiring much less domain labels to generalize to new domains.
In summary, our key contributions are three-fold:
A new benchmark FewShotWOZ is introduced to simulate the few-shot adaptation setting where only a handful of training data from each domain is available.
We propose a new model SC-GPT. To our best knowledge, this work is the first study of exploiting state-of-the-art pre-trained language models for NLG in task-oriented dialog systems.
On the MultiWOZ dataset, SC-GPT creates a new SOTA, outperforming previous models by 4 points in BLEU. On FewShotWOZ, SC-GPT outperforms several strong baselines such as SC-LSTM and HDSA BIBREF7, showing that SC-GPT adapts to new domain much more effectively, requiring much smaller amounts of in-domain labels. We release our code and dataset for reproducible research.
<<</Introduction>>>
<<<Background>>>
A typical task-oriented spoken dialog system uses a pipeline architecture, as shown in Figure FIGREF2 (a), where each dialog turn is processed using a four-step procedure. $({1})$ Transcriptions of user’s input are first passed to the natural language understanding (NLU) module, where the user’s intention and other key information are extracted. $({2})$ This information is then formatted as the input to dialog state tracking (DST), which maintains the current state of the dialog. $({3})$ Outputs of DST are passed to the dialog policy module, which produces a dialog act based on the facts or entities retrieved from external resources (such as a database or a knowledge base). $({4})$ The dialog act emitted by the dialog policy module serves as the input to the NLG, through which a system response in natural language is generated. In this paper, we focus on the NLG component of task-oriented dialog systems, how to produce natural language responses conditioned on dialog acts.
Specifically, a dialog act $\mathcal {A}$ is defined as the combination of intent $\mathcal {I}$ and slot-value pairs $\lbrace (s_i, v_i)\rbrace ^P_{i=1}$:

$\mathcal {A} = \left[ \mathcal {I}, (s_1, v_1), \cdots , (s_P, v_P) \right]$
where $P$ is the number of pairs, which varies in different dialog acts.
Intents are usually used to distinguish different types of system actions. Typical examples include inform, request, confirm, select, etc.
Slot-value pairs indicate the category and content of the information to express in the utterance, respectively.
The goal of NLG is to translate $\mathcal {A}$ into a natural language response $\mathbf {x} = [x_1, \cdots , x_T]$, where $T$ is the sequence length. In Figure FIGREF2 (b), we show an example of the dialog act: $\textit {\texttt {confirm}~(name=Hilton, area=center)}$, and the corresponding natural language response is “Let me confirm that you are searching for Hilton in the center area”.
<<</Background>>>
<<<Semantically Conditioned GPT>>>
We tackle this generation problem using conditional neural language models. Given training data of $N$ samples $\mathcal {D}=\lbrace (\mathcal {A}_n, \mathbf {x}_n)\rbrace _{n=1}^{N}$, our goal is to build a statistical model parameterized by $\theta $ to characterize $p_{\theta }(\mathbf {x} | \mathcal {A})$. To leverage the sequential structure of the response, one may further decompose the joint probability of $\mathbf {x}$ using the chain rule, casting an auto-regressive generation process as follows:

$p_{\theta }(\mathbf {x} | \mathcal {A}) = \prod _{t=1}^{T} p_{\theta }(x_t | x_{<t}, \mathcal {A})$

where $x_{<t}$ indicates all tokens before $t$.
Learning $\theta $ is performed via maximizing the log-likelihood (MLE) of the conditional probabilities in (DISPLAY_FORM13) over the entire training dataset:

$\mathcal {L}_{\theta }(\mathcal {D}) = \sum _{n=1}^{N} \sum _{t=1}^{T_n} \log p_{\theta }(x_{t,n} | x_{<t,n}, \mathcal {A}_n)$
In this paper, we employ the Transformers BIBREF8 to parameterize the conditionals in (DISPLAY_FORM13). To enable strong generalization and controllable ability for the learned model, we propose the following three-stage procedure as the training recipe.
<<<Massive Plain Language Pre-training.>>>
Large models trained on massive training corpus usually generalize better to new domains. Inspired by this, we inherit the GPT-2 architecture BIBREF6 as the backbone language model. GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers. GPT-2 is pre-trained on extremely massive text data OpenWebText BIBREF6. It has demonstrated superior performance on characterizing human language data distribution and knowledge transfer. Given text prompts, GPT-2 can often generate realistic sentences.
<<</Massive Plain Language Pre-training.>>>
<<<Dialog-Act Controlled Pre-training.>>>
To enable the guidance of dialog act in response generation, we propose to continuously pre-train the GPT-2 model on large amounts of annotated (dialog act, response) pairs. The pre-training dataset includes annotated training pairs from Schema-Guided Dialog corpus, MultiWOZ corpus, Frame corpus, and Facebook Multilingual Dialog Corpus. The total size of the pre-training corpus is around 400k examples.
We firstly pre-process the dialog act $\mathcal {A}$ into a sequence of control codes, spelling out the intent and its slot-value pairs as a plain token sequence (in the same surface form as the example dialog act shown in Figure FIGREF2 (b)).
Meanwhile, the output sequence $\mathbf {x}^{\prime }$ is pre-processed by appending the response $\mathbf {x}$ with a special start token [BOS] and an end token [EOS]. Finally, the sequentialized dialog act $\mathcal {A}^{\prime }$ is concatenated with its augmented response $\mathbf {x}^{\prime }$, and then fed into GPT-2. During training, the prediction loss is only computed for $\mathbf {x}^{\prime }$, while $\mathcal {A}^{\prime }$ provides the attended conditions. Since the dialog act represents the semantics of the generated sentences, we follow the naming convention of SC-LSTM and term our model Semantically Conditioned Generative Pre-training (SC-GPT). The overall architecture of SC-GPT is illustrated in Figure FIGREF12.
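A minimal sketch of how one such training pair could be assembled for a GPT-2-style model is shown below; the linearization string, the [BOS]/[EOS] handling (in practice they would be registered as special tokens), and the use of -100 to mask the loss on the condition tokens follow common Huggingface conventions and are assumptions rather than the released implementation.

    from transformers import GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")

    def build_example(dialog_act, response):
        # Sequentialized dialog act A' followed by the augmented response x'.
        da_ids = tokenizer.encode(dialog_act)
        resp_ids = tokenizer.encode("[BOS] " + response + " [EOS]")
        input_ids = da_ids + resp_ids
        # The LM loss is only computed on the response part; -100 is ignored.
        labels = [-100] * len(da_ids) + resp_ids
        return input_ids, labels

    # Example:
    # build_example("confirm ( name = Hilton , area = center )",
    #               "Let me confirm that you are searching for Hilton in the center area")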
<<</Dialog-Act Controlled Pre-training.>>>
<<<Fine-tuning.>>>
For a new domain, a dialog act usually contains novel intents or slot-value pairs, and annotated training samples are often limited. We fine-tune SC-GPT on limited amounts of domain-specific labels for adaptation. The fine-tuning follows the same procedure of dialog-act controlled pre-training, as described above, but uses only a few dozens of domain labels.
It is worth noticing that the above recipe has several favorable properties:
Flexibility. SC-GPT operates on a sequence of tokens without delexicalization, which means that SC-GPT does not assume a fixed one-hot or tree-structured dialog act representation. Hence, it has great flexibility in extending to novel dialog acts.
Controllability. In contrast to GPT-2, which generates natural sentences without high-level semantic guidance, SC-GPT can generate sentences with adequate intent and slot-value information while maintaining fluency.
Generalizability. SC-GPT is able to generalize significantly better than SC-LSTM, due to the pre-training on massive plain text corpora and annotated dialog datasets.
<<</Fine-tuning.>>>
<<</Semantically Conditioned GPT>>>
<<<Dataset: FewShotWOZ>>>
<<<Revisiting NLG Benchmarks.>>>
The three commonly used NLG datasets in developing and evaluating task-oriented dialog systems are E2E NLG BIBREF9, BAGEL BIBREF10, and RNNLG BIBREF5, as summarized in Table TABREF23. We observe two issues from their shared statistics: $({1})$ All the datasets contain a large number of labelled training samples for each domain, ranging from hundreds to tens of thousands. However, the cost of labeling is high in practice: labeling 50 utterances takes about 5 hours per domain. Creating such an extensively annotated dataset for each new domain is prohibitively expensive. $({2})$ The percentage of distinct delexicalised dialog acts between training and testing data is quite small. For example, the delexicalised dialog acts in testing are 100% covered by the training set for the E2E NLG dataset. This renders difficulties in evaluating the model's generalization ability for new domains.
<<</Revisiting NLG Benchmarks.>>>
<<<FewShotWOZ.>>>
To build a setting for more pragmatic NLG scenarios, we introduce a new dataset FewShotWOZ to better reflect real application complexity, and encourage the community to develop algorithms that are capable of generalizing with only a few domain-specific labels for each (new) domain. The dataset statistics are shown in the last column of Table TABREF23. We see that FewShotWOZ is different from the other datasets in three aspects: $({1})$ More domains. FewShotWOZ contains seven domains in total, which is larger than any existing NLG datasets. $({2})$ Less training instances. Importantly, FewShotWOZ has a much smaller number of training instances per domain, aiming to evaluate the few-shot learning ability. $({3})$ Lower training/testing overlap. FewShotWOZ has only 8.82% overlap, significantly smaller than the other datasets, which amount to more than 90% overlap. The average number of intents per instance in $\mathtt {Attraction}$/ $\mathtt {Taxi}$/ $\mathtt {Train}$ domain is 2, 1.33, and 2.05, respectively. In contrast, there is only one intent for each example in the other datasets. The NLG task defined on FewShotWOZ requires the models to learn to generalize over new compositions of intents. The details of FewShotWOZ is shown in Table TABREF26.
<<</FewShotWOZ.>>>
<<<Collection Protocols.>>>
We construct FewShotWOZ via re-organizing data samples from RNNLG and MultiWOZ datasets BIBREF4. For each domain in RNNLG, we first group utterances according to their delexicalised dialog acts, and keep only one utterance as the target sentence. To ensure diversity, we consider three domains from MultiWOZ: $\mathtt {Attraction}$, $\mathtt {Taxi}$, and $\mathtt {Train}$. Since MultiWOZ is a cross-domain dataset, the dialog act of an utterance may exist in multiple domains. We choose to keep utterances whose dialog act appears only in one domain. Similar delexicalising processing is applied to ensure that each dialog act has only one target utterance. Finally, to simulate the few-shot learning in practice, we randomly sample 50 training examples for each domain, except the $\mathtt {Taxi}$ domain, which has 40 examples.
<<</Collection Protocols.>>>
<<</Dataset: FewShotWOZ>>>
<<<Related Work>>>
<<<Pre-trained Models.>>>
Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. The closest lines of research to ours are GPT-2 BIBREF6, CTRL BIBREF15 and Grover BIBREF17. GPT-2 first investigated massive Transformer-based auto-regressive language models with large-scale text data for pre-training. After fine-tuning, GPT-2 achieves drastic improvements on several generation tasks. One drawback of GPT-2 is the lack of high-level semantic controlling ability in language generation. To alleviate this issue, CTRL BIBREF15 was introduced to train the model based on pre-defined codes such as text style, content description, and task-specific behavior, while Grover BIBREF17 was proposed to generate news articles conditioned on authors, dates, etc. Although conceptually similar to our SC-GPT, CTRL and Grover cannot be readily applied to NLG in task-oriented dialog systems, as the conditioning codes are quite different. Another controllable generation work for GPT-2 is PPLM BIBREF18, which provides a decoding scheme to guide the generation process using key-words or classifiers, without re-training the model. In this paper, we focus on pre-training an NLG model conditioned on finer-grained semantic dialog acts, which are more desirable for dialog systems.
<<</Pre-trained Models.>>>
<<<Dialog.>>>
Various dialog systems have been developed BIBREF2, including task-oriented dialog systems such as Rasa, Microsoft Bot Framework, and Conversational Learner, and chit-chat systems such as XiaoIce BIBREF19, DialoGPT BIBREF20, and Meena BIBREF21. In this paper, we focus on task-oriented systems, particularly the NLG module. With the blooming of deep learning, neural sequential models have shown powerful capability and flexibility in NLG. Extensive efforts have been made, including new architecture choices such as RNNs BIBREF22, attention RNNs BIBREF23, SC-LSTM BIBREF3 and its variants BIBREF24, BIBREF25, as well as learning objectives BIBREF26. However, they all require large amounts of annotated data to reach satisfactory performance. A more realistic scenario is to require much less labeling and improve the sample efficiency of models. This is especially important when deploying the models to new domains, where dialog acts need to be labelled from scratch. Our paper aims to formally set up such a research scenario by proposing a new dataset FewShotWOZ, and a new model SC-GPT.
<<</Dialog.>>>
<<</Related Work>>>
<<<Experiments>>>
In this section, we evaluate the proposed SC-GPT on the FewShotWOZ and MultiWOZ datasets to answer two research questions: $({1})$ Is SC-GPT an effective model for strong generalization and controllability in dialog response generation? $({2})$ Does FewShotWOZ meet the goal of effectively evaluating the generalization of NLG models in the few-shot learning setting?
<<<Experimental Setup>>>
<<<Implementation details.>>>
The model was built upon the Huggingface PyTorch Transformer library BIBREF27. We use GPT2-Medium with 345M parameters as the initial checkpoint, and byte pair encodings BIBREF28 for the tokenization. A linear learning-rate scheduler with an initial rate of 5e-5 was used for both pre-training and fine-tuning. Adam BIBREF29 with weight decay was used to optimize the parameters. For pre-training, the model was trained with a mini-batch size of 8 on a machine with 8 Nvidia V100 GPUs until observing no significant progress on validation loss or up to 20 epochs, whichever came first. For fine-tuning on FewShotWOZ, models were trained on each domain separately for five epochs.
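For concreteness, this configuration roughly corresponds to a setup like the following with the Huggingface transformers API; the warmup steps, weight-decay value, and total step count are assumptions, not the exact training scripts.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer, get_linear_schedule_with_warmup

    model = GPT2LMHeadModel.from_pretrained("gpt2-medium")    # 345M-parameter checkpoint
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")  # byte pair encodings

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)
    total_steps = 10000  # depends on corpus size, mini-batch size 8, and number of epochs
    scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
                                                num_training_steps=total_steps)

    # Per batch: loss = model(input_ids, labels=labels).loss
    #            loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()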
<<</Implementation details.>>>
<<<Automatic metrics.>>>
Following BIBREF3, BLEU scores and the slot error rate (ERR) are reported. The BLEU score evaluates how natural the generated utterance is compared with human references. ERR measures the exact matching of the slot tokens in the candidate utterances: $\text{ERR}=(p+q)/M$, where $M$ is the total number of slots in the dialog act, and $p$ and $q$ are the numbers of missing and redundant slots in the given realisation, respectively. For each dialog act, we generate five utterances and select the top one with the lowest ERR as the final output.
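The metric and the best-of-five selection can be sketched as follows; matching slot values by simple string search and omitting the redundant-slot count $q$ are simplifying assumptions.

    def slot_error_rate(slot_values, utterance):
        # Simplified ERR: counts only missing slots p out of M (q omitted in this sketch).
        missing = sum(1 for v in slot_values if v.lower() not in utterance.lower())
        return missing / max(len(slot_values), 1)

    def pick_best(candidates, slot_values):
        # Five utterances are generated per dialog act; keep the one with the lowest ERR.
        return min(candidates, key=lambda u: slot_error_rate(slot_values, u))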
<<</Automatic metrics.>>>
<<<Human evaluation.>>>
We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruited master-level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from the comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which the generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as one written by a human. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected a total of 5800 judgements.
<<</Human evaluation.>>>
<<<Baselines.>>>
We compare with three baseline methods. $({1})$ SC-LSTM BIBREF3 is a canonical model and a strong baseline that uses an additional dialog act vector and a reading gate to guide the utterance generation. $({2})$ GPT-2 BIBREF6 is used to directly fine-tune on the domain-specific labels, without pre-training on the large-scale corpus of (dialog act, response) pairs. $({3})$ HDSA BIBREF7 is a state-of-the-art model on MultiWOZ. It leverages dialog act structures to enable transfer in the multi-domain setting, showing superior performance than SC-LSTM.
<<</Baselines.>>>
<<</Experimental Setup>>>
<<<FewShotWOZ>>>
Table TABREF33 reports the automatic evaluation performance of different methods on FewShotWOZ. SC-LSTM fails to learn the generation effectively in this few-shot learning setting. The generated utterances are poor in quality and suffer from inaccurate slot rendering. In addition, GPT-2 performs consistently better than SC-LSTM in all the domains. It reveals the feasibility of using a pre-trained language model for NLG, even though only limited annotations are available for fine-tuning. Importantly, SC-GPT performs significantly better than GPT-2 and SC-LSTM in terms of both BLEU and ERR. In all the domains, SC-GPT reduces the ERR to a significantly lower level, revealing its strong controllability power. This verifies the importance of pre-training on large annotated dialog data, as SC-GPT learns how to generate utterances specified by the dialog acts accurately.
Table TABREF34 shows the human assessment on FewShotWOZ. The results exhibit the same trend as the automatic evaluation. SC-GPT outperforms GPT-2 and SC-LSTM significantly in both metrics; it can better control the generation to convey information in the dialog act while maintaining good fluency. Note that the gap between SC-GPT and human annotation is still large, indicating that the proposed FewShotWOZ exhibits an under-explored research area and provides a large space to encourage future research for improvement.
<<</FewShotWOZ>>>
<<<MultiWOZ>>>
The results on MultiWOZ are shown in Table TABREF42. Following BIBREF7, Entity F1 BIBREF30 is used to evaluate the entity coverage accuracy (including all slot values, days, numbers, and references). Again, SC-GPT achieves the best performance on BLEU score. Note that GPT-2 performs similarly to SC-GPT on the full MultiWOZ dataset; this is because MultiWOZ contains 57k utterances, which is large enough for GPT-2 to achieve good performance. The results also confirm that with enough annotated data, the conditional language model formulation performs significantly better than HDSA, a strong competitor that leverages graph/tree-structure information to encode dialog acts.
To study how SC-GPT performs with different training data sizes, we further conduct experiments with varying percentages of training data on MultiWOZ, ranging from 0.1% (50 examples) to 50%. As shown in Table TABREF43, the observations are consistent with FewShotWOZ. SC-GPT performs consistently better than GPT-2, HDSA, and SC-LSTM for a wide range of dataset sizes, and the improvement is more substantial when fewer in-domain labels are used for fine-tuning.
Table TABREF44 shows the human assessment results on MultiWOZ. The results are consistent with the automatic evaluation. It is interesting to see that $({1})$ the gap between the new state-of-the-art method (SC-GPT) and human performance on FewShotWOZ (as shown in Table TABREF34) is much larger than that on MultiWOZ; $({2})$ the human rating on the naturalness of SC-GPT is even higher than that of humans on MultiWOZ, while there is a visible gap on FewShotWOZ. These results demonstrate that FewShotWOZ presents a challenging few-shot learning setting, that SC-GPT serves as a simple and strong baseline in this setting, and that the two combined provide a platform for researchers to develop NLG models that are able to generalize to new domains and generate semantically conditioned and fluent responses.
<<</MultiWOZ>>>
<<<Analysis>>>
We perform a detailed analysis to investigate SC-GPT's flexibility, controllability and generalizability. The test set is split into two subsets - seen and unseen. If the dialog act of an example appears in the training set, the example is marked as seen; otherwise, it is marked as unseen. Table TABREF48 compares different models on the seen and unseen subsets in the $\mathtt {restaurant}$ domain. SC-GPT yields higher BLEU and lower ERR, and the improvement is more significant on the unseen set. For example, SC-GPT reduces ERR to 4.96, an order of magnitude lower than SC-LSTM and only 1/3 of GPT-2. This demonstrates that SC-GPT generalizes well to novel dialog acts, and is able to precisely ground in them to compose fluent responses. This is further confirmed by the qualitative comparison in Table TABREF45, where we compare the generated utterance examples of different models. While the baseline methods are prone to over-generating or missing important slots, SC-GPT can successfully generate fluent natural language utterances that share precise semantic conditions with the ground-truth references.
We further simulate the process of deploying SC-GPT for a new domain, using the examples provided in the RASA dialog toolkit. We first fine-tune SC-GPT using a few training examples (only 16 instances in this new domain), and then generate utterances based on novel dialog acts that are unseen in the training data, as shown in Table TABREF49. In practice, it is desirable for an NLG system to deal with an extending domain whose dialog acts change dynamically. We simulate this setting by editing the original input dialog acts, such as inserting or deleting a slot, or substituting a slot value.
Since SC-LSTM is infeasible in the setting of an extending domain, we compare SC-GPT with GPT-2. Results show that SC-GPT produces better utterances than GPT-2. SC-GPT can generate reasonably good natural language responses with different combinations of editing operations, showing its high flexibility to generalize to new dialog acts with very limited training data, and produce controllable responses.
<<</Analysis>>>
<<</Experiments>>>
<<<Conclusion and Future Work>>>
In this paper, we have made two major contributions towards developing a more pragmatic NLG module for task-oriented dialog systems: $({1})$ A new benchmark FewShotWOZ is introduced to simulate the few-shot learning scenarios with scarce labelled data in real-world applications. $({2})$ A new model SC-GPT is proposed to endow the NLG module with strong semantically controlling and generalization ability. Empirical results on both FewShotWOZ and MultiWOZ show that SC-GPT achieves the best overall performance in both automatic and human evaluations.
There are two interesting directions for future work. The first is to design mechanisms to generate more interpersonal responses which are proven to help improve user experiences BIBREF31, BIBREF19. The other is to generalize the generative pre-training idea to all four modules in the dialog system pipeline for end-to-end training. Since these four modules process information in order, one may organize their input/output as segments, and pre-train a segment-level auto-regressive model.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nSemantically Conditioned GPT\nMassive Plain Language Pre-training.\nDialog-Act Controlled Pre-training.\nFine-tuning.\nDataset: FewShotWOZ\nRevisiting NLG Benchmarks.\nFewShotWOZ.\nCollection Protocols.\nRelated Work\nPre-trained Models.\nDialog.\nExperiments\nExperimental Setup\nImplementation details.\nAutomatic metrics.\nHuman evaluation.\nBaselines.\nFewShotWOZ\nMultiWOZ\nAnalysis\nConclusion and Future Work"
],
"type": "outline"
}
|
1908.09951
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
An Emotional Analysis of False Information in Social Media and News Articles
<<<Abstract>>>
Fake news is risky since it has been created to manipulate the readers' opinions and beliefs. In this work, we compared the language of false news to that of real news from an emotional perspective, considering a set of false information types (propaganda, hoax, clickbait, and satire) from social media and online news article sources. Our experiments showed that false information has different emotional patterns in each of its types, and emotions play a key role in deceiving the reader. Based on that, we proposed an emotionally-infused LSTM neural network model to detect false news.
<<</Abstract>>>
<<<Introduction>>>
With the complicated political and economic situations in many countries, some agencies are publishing suspicious news to influence public opinion on specific issues BIBREF0. This phenomenon has been spreading recently with the wide usage of social media and online news sources. Many anonymous accounts have started to appear on social media platforms, as well as new online news agencies that do not present a clear identity of their owners. Twitter has recently detected a campaign organized by agencies from two different countries to affect the results of the 2016 U.S. presidential elections. The initial disclosures by Twitter have included 3,841 accounts. A similar attempt was made by Facebook, as they detected coordinated efforts to influence U.S. politics ahead of the 2018 midterm elections.
False information is categorized into 8 types according to BIBREF1. Some of these types are intended to deceive, while others are not. In this work, we are interested in analyzing 4 main types, i.e. hoaxes, propagandas, clickbaits, and satires. These types can be classified into two main categories - misinformation and disinformation - where misinformation covers false information that is published without the intent to deceive (e.g. satire). Disinformation can be seen as a specific kind of false information with the aim to mislead the reader (e.g. hoax, propaganda, and clickbait). Propagandas are fabricated stories spread to harm the interest of a particular party. Hoaxes are similar to propagandas, but the main aim of the writer is not to manipulate the readers' opinions but to convince them of the validity of a paranoia-fueled story BIBREF2. Clickbait is another type of disinformation that refers to the deliberate use of misleading headlines, thumbnails, or story snippets to redirect attention (to attract traffic). Satire is the only misinformation type among the four, where the writer's main purpose is not to mislead the reader, but rather to deliver the story in an ironic way (to entertain or to be sarcastic).
The topic of fake news is gaining attention due to its risky consequences. A vast set of campaigns has been organized to tackle fake news. The founder of the Wikipedia encyclopedia created the news site WikiTribune to encourage evidence-based journalism.
Another way of addressing this issue is fact-checking websites. Websites like politifact.com, snopes.com and factchecking.org aim to debunk false news by manually assessing the credibility of claims that have circulated massively on online platforms. These campaigns are not limited to English; other languages such as Arabic have also been targeted by sites like fatabyyano.net.
<<<Hypothesis>>>
Trusted news recounts its content in a naturalistic way, without attempting to affect the opinion of the reader. On the other hand, false news takes advantage of the sensitivity of the presented issue to affect the readers' emotions, which in turn may affect their opinions as well. A set of works has been done previously to investigate the language of false information. The authors in BIBREF3 have studied rumours in Twitter. They have investigated a corpus of true and false tweet rumours from different aspects. From an emotional point of view, they found that false rumours inspired fear, disgust, and surprise in their replies, while the true ones inspired joy and anticipation. Some kinds of false information are similar to other language phenomena. For example, satire, by its definition, shows similarity with ironic language. The work in BIBREF4 showed that affective features work well in the detection of irony. In addition, they confirmed that positive words are more relevant for identifying sarcasm and negative words for irony BIBREF5. The results of these works motivate us to investigate the impact of emotions on false news types. These are the research questions we aim to answer:
RQ1 Can emotional features help detecting false information?
RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?
RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?
RQ4 What are the top-N emotions that discriminate false information types in both textual sources?
In this work, we investigate suspicious news in two different sources: Twitter and online news articles. Concerning the news articles source, we focus on the beginning part of them, since they are fairly long, and the emotional analysis could be biased by their length. We believe that the beginning part of false news articles can present a unique emotional pattern for each false information type since the writer in this part is normally trying to trigger some emotions in the reader.
Throughout the emotional analysis, we go beyond the superficial analysis of words. We hope that our findings in this work will contribute to fake news detection.
The key contributions of this article are:
Model: We propose an approach that combines emotional information from documents in a deep neural network. We compare the obtained results with a set of baselines. The results show that our approach is promising.
Analysis: We show a comprehensive analysis on two false information datasets collected from social media and online news articles, based on a large set of emotions. We compare the differences in both sources from an affective perspective, and obtain valuable insights on how emotions can contribute to detecting false news.
The rest of the paper is structured as follows. After a brief review of related work in Section SECREF2, Section SECREF3 introduces our emotionally-infused model. Then, we present the evaluation framework in Section SECREF4. Section SECREF5 reports the experiments and the results, followed by an analysis of the false information types from an emotional perspective in Section SECREF6. Finally, the conclusions of this work are summarized in Section SECREF7.
<<</Hypothesis>>>
<<</Introduction>>>
<<<Related Work>>>
Previous work on the analysis of false information is rather limited with regard to the proposed approaches. In this section, we present some recent works on the language analysis and detection of false information. Recent attempts tried to analyze the language of false news to gain a better understanding of it. The work in BIBREF6 studied false information in Twitter from a linguistic perspective. The authors found that real tweets contain significantly fewer bias markers, hedges, subjective terms, and less harmful words. They also found that propaganda news targets morals more than satires and hoaxes but less than clickbaits. Furthermore, satirical news contains more loyalty and fewer betrayal morals compared to propaganda. In addition, they built a model that combined a set of features (graph-based, cue words, and syntax) and achieved a good performance compared to other baselines (71% vs. 59% macro-F1). A similar work BIBREF2 characterized the language of false information (propaganda, hoax, and satire) in online news articles. The authors studied the language from different perspectives: the existence of weak and strong subjectivity, hedges, and the degree of dramatization using a lexicon from Wiktionary. They also employed the LIWC dictionary to detect the presence of personal pronouns, swear words, sexual words, etc. The results showed that false news types tend to use first and second personal pronouns more than truthful news. Moreover, the results showed that false news generally uses words to exaggerate (subjectives, superlatives, and modal adverbs), and specifically, the satire type uses more adverbs. Hoax stories tend to use fewer superlatives and comparatives, and propagandas use relatively more assertive verbs. Moving away from these false information types, the work in BIBREF3 focused on analyzing rumours in Twitter (from a factuality perspective: true or false). They analyzed about 126,000 rumours and found that falsehood spread significantly farther, faster, deeper, and more broadly than truth in many domains. In addition, they found that false rumours are more novel than truthful ones, which made people more likely to share them. From an emotional perspective, they found that false rumours triggered "fear", "disgust", and "surprise" in replies, while truthful ones triggered "anticipation", "sadness", "joy", and "trust". Another work BIBREF7 studied the problem of detecting hoaxes by analyzing content-related features in Wikipedia. The work showed that some features like hoax articles' length as well as the ratio of wiki markups (images, references, links to other articles and to external URLs, etc.) are important to discriminate hoaxes from legitimate articles. Many approaches have been proposed for fake news detection. In general, they are divided into social media-based and news claims-based approaches. The authors in BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 proposed supervised methods using recurrent neural networks or by extracting manual features like regular expressions, content-based and network-based features, etc. As an example, the work by BIBREF13 assessed the credibility of tweets by analyzing trending topics. They used message-based, user-based, and propagation-based features, and they found that some features related to the user information, like the user's age, number of followers, status counts, etc., helped the most to discriminate truthful from deceitful tweets. Other news claims-based approaches BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18 have mainly focused on inferring the credibility of claims by retrieving evidence from Google or Bing search engines. These approaches employed a range of features, from manual features (e.g. cosine similarity between the claims and the results, Alexa Rank of the evidence source, etc.) to fully automatic approaches using deep learning networks. A recent trend approaches the detection of fake news from a stance perspective: the aim is to predict how other articles are oriented towards a specific fact BIBREF19, BIBREF20, BIBREF21.
<<</Related Work>>>
<<<Emotionally-infused Model>>>
In this section we describe the Emotionally-Infused Network we propose (EIN).
<<<Emotional Lexicons>>>
Several emotional models well-grounded in psychology have been proposed, such as the ones by Magda Arnold BIBREF22, Paul Ekman BIBREF23, Robert Plutchik BIBREF24, and Gerrod Parrott BIBREF25. On the basis of each of them, many emotional resources (lexicons) were built in the literature. In this work, we consider several emotional resources to increase the coverage of emotional words in texts as well as to have a wider range of emotions in the analysis. Concretely, we use EmoSenticNet, EmoLex, SentiSense, LIWC and Empath:
EmoSenticNet BIBREF26 is a lexical resource that assigns WordNet-Affect emotion labels to SenticNet concepts. It has a total of 13,189 entries annotated with Ekman's six basic emotions.
EmoLex BIBREF27 is a word-emotion association lexicon that is labeled using Plutchik's eight emotions. This lexicon contains 14,181 words.
SentiSense BIBREF28 is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. SentiSense has 5,496 words labeled with emotions from a set of 14 emotional categories, which is an edited version of the merge between Arnold, Plutchik, and Parrott models.
LIWC BIBREF29 is a linguistic dictionary that contains 4,500 words categorized to analyze psycholinguistic patterns in text. Linguistic Inquiry and Word Count (LIWC) has 4 emotional categories: "sadness", "anger", "positive emotion", and "negative emotion".
Empath BIBREF30 is a tool that uses deep learning and word embeddings to build a semantically meaningful lexicon for concepts. Empath uses Parrott's model for the emotional representation, but we use only the primary emotions (6 emotions) in Parrott's hierarchy ("love", "joy", "surprise", "anger", "sadness", "fear").
In our study we consider the 17 emotions shown in Figure FIGREF14.
<<</Emotional Lexicons>>>
<<<Model>>>
We choose a Long Short-Term Memory (LSTM) network BIBREF31 that takes the sequence of words as input and predicts the false information type. The input of our network is based on word embeddings (content-based) and emotional features (see Figure FIGREF24).
<<</Model>>>
<<<Input Representation>>>
Our network consists of two branches. In the content-based branch, we use an embedding layer followed by an LSTM layer. Then, we add an attention layer BIBREF32 to make this branch focus on (highlight) particular words over others. The attention mechanism assigns a weight to each word vector resulting from the LSTM layer, with a focus on the classification class. The input representation for this branch is as follows: the input sentence $S$ of length $n$ is represented as $[S\textsubscript {1}, S\textsubscript {2} .. S\textsubscript {n}]$, where $S\textsubscript {i} \in {\rm I\!R}^d$ is the d-dimensional word embedding vector of the $i$-th word in the input sentence. The output vectors of the words are passed to the LSTM layer, where the LSTM learns the hidden state $h\textsubscript {t}$ by capturing the previous timesteps (past features). The produced hidden state $h\textsubscript {t}$ at each time step is passed to the attention layer, which computes a "context" vector $c\textsubscript {t}$ as the weighted mean of the state sequence $h$ by:

$c\textsubscript {t} = \sum _{j=1}^{T} \alpha \textsubscript {tj} h\textsubscript {j}$

where $T$ is the total number of timesteps in the input sequence and $\alpha \textsubscript {tj}$ is a weight computed at each time step $j$ for each state $h\textsubscript {j}$. This output vector is then concatenated with the output of the dense\textsubscript{a} layer from the emotional-based branch (see Figure FIGREF24) and passed to the dense\textsubscript{b} layer, which precedes a final Softmax function to predict the output classes.
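A minimal Keras sketch of this two-branch architecture is given below (the emotional-feature input it consumes is described in the next paragraph); layer sizes, dropout rates, and the exact attention formulation are illustrative assumptions rather than the tuned values of Table TABREF44.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    class WeightedMeanAttention(layers.Layer):
        # Context vector c_t as a softmax-weighted mean of the LSTM states.
        def build(self, input_shape):
            self.w = self.add_weight(name="att_w", shape=(input_shape[-1], 1),
                                     initializer="glorot_uniform")
        def call(self, h):                                         # h: (batch, T, units)
            scores = tf.squeeze(tf.tensordot(h, self.w, axes=[[2], [0]]), -1)
            alpha = tf.nn.softmax(scores, axis=-1)                 # attention weights
            return tf.reduce_sum(h * tf.expand_dims(alpha, -1), axis=1)

    def build_ein(max_len, vocab_size, emb_matrix, n_emotion_feats, n_classes):
        words = layers.Input(shape=(max_len,), dtype="int32")
        emotions = layers.Input(shape=(n_emotion_feats,))
        x = layers.Embedding(vocab_size, 300,
                             embeddings_initializer=tf.keras.initializers.Constant(emb_matrix))(words)
        x = layers.LSTM(128, return_sequences=True)(x)
        x = WeightedMeanAttention()(x)
        x = layers.Dropout(0.3)(x)                                 # Drop_d
        e = layers.Dense(64, activation="relu")(emotions)          # dense_a
        e = layers.Dropout(0.3)(e)                                 # Drop_c
        z = layers.Concatenate()([x, e])
        z = layers.Dense(64, activation="relu", name="dense_b")(z) # dense_b
        out = layers.Dense(n_classes, activation="softmax")(z)
        model = Model([words, emotions], out)
        model.compile(optimizer="adam", loss="categorical_crossentropy")
        return model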
On the other hand, the input representation for the emotional-based branch is defined as follows: we have $N$ emotional lexicons $L\textsubscript {n}$ where $n\in [1, 5]$, and each lexicon has $M$ emotions depending on the emotion model that the lexicon uses (e.g. Plutchik, Arnold, etc.). The emotion vector $E\textsubscript {m}$ of an input document using the $n$-th emotional lexicon is $L\textsubscript {n}E\textsubscript {m}$. In our implementation, the emotional vector $E\textsubscript {m}$ of a lexicon $L\textsubscript {n}$ is built using word frequency and normalized by the input sentence's length. Each input sentence is represented by concatenating the emotion vectors of all the lexicons:

$v = \left[ L\textsubscript {1}E, L\textsubscript {2}E, \cdots , L\textsubscript {N}E \right]$

where $v \in {\rm I\!R}^q$ and $q$ is the total number of emotion categories across the $N$ lexicons:

$q = \sum _{n=1}^{N} M\textsubscript {n}$
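A sketch of how the normalized-frequency emotion vector $v$ could be built from the lexicons is shown below; representing each lexicon as a mapping from an emotion label to its word set is an assumption about the data structure.

    def emotion_vector(tokens, lexicons):
        # lexicons: list of dicts, each mapping an emotion label to a set of words.
        # For each lexicon and each emotion, count matching tokens, normalize by the
        # sentence length, and concatenate everything into a single vector v.
        length = max(len(tokens), 1)
        v = []
        for lexicon in lexicons:
            for emotion in sorted(lexicon):
                v.append(sum(1 for t in tokens if t in lexicon[emotion]) / length)
        return v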
<<</Input Representation>>>
<<</Emotionally-infused Model>>>
<<<Evaluation Framework>>>
<<<Datasets>>>
Annotated data is a crucial source of information to analyze false information. Previous work lacks available datasets of false information, as the majority of works focus on annotating datasets from a factuality perspective. However, to analyze the existence of emotions across different sources of news, we rely on two publicly available datasets and a list containing suspicious Twitter accounts.
<<<News Articles>>>
Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources: for the trusted news (real news), the authors sampled news articles from the English Gigaword corpus; for the false news, they collected articles from seven different unreliable news sites. These news articles include satires, hoaxes, and propagandas but not clickbaits. Since we are also interested in analyzing clickbaits, we slice a sample from an available clickbait dataset BIBREF33 that was originally collected from two sources: Wikinews articles' headlines and other online sites that are known to publish clickbaits. The satire, hoax, and propaganda news articles are considerably long (some of them reach the length of 5,000 words). This length could affect the quality of the analysis, as we mentioned before, so we focus on analyzing the initial part of the article. Our intuition is that this is where emotion-bearing words will be more frequent. Therefore, we shorten long news articles to a maximum length of N words (N=300). We choose the value of N based on the length of the shortest articles. Moreover, we process the dataset by removing very short articles, redundant articles, and articles that do not have textual content.
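The truncation and filtering steps could be sketched as follows; the minimum-length threshold is an assumed value.

    def preprocess_articles(articles, max_words=300, min_words=20):
        # Keep only the initial part of each article and drop very short,
        # duplicated, or empty texts.
        seen, cleaned = set(), []
        for text in articles:
            words = text.split()
            if len(words) < min_words:
                continue
            snippet = " ".join(words[:max_words])
            if snippet in seen:
                continue
            seen.add(snippet)
            cleaned.append(snippet)
        return cleaned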
<<</News Articles>>>
<<<Twitter>>>
For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 built a dataset by collecting tweets from these accounts and made it available. For the real news, we merge this list with another 32 Twitter accounts from BIBREF34. In this work we could not use the previous dataset, so we decided to collect tweets again. For each of these accounts, we collected the last M tweets posted (M=1000). By investigating these accounts manually, we found that many tweets just contain links without textual news. Therefore, to ensure the quality of the crawled data, we chose a high value for M (also to have enough data). After the collection process, we processed these tweets by removing duplicated tweets, very short tweets, and tweets without textual content. Table TABREF35 shows a summary of both datasets.
<<</Twitter>>>
<<</Datasets>>>
<<<Baselines>>>
Emotions have been used in many natural language processing tasks and have shown their effectiveness BIBREF35. We aim at investigating their effectiveness in detecting false information. In addition to EIN, we created a model (Emotion-based Model) that uses emotional features only and compared it to two baselines. Our aim is to investigate whether the emotional features independently can detect false news. The two baselines of this model are the Majority Class baseline (MC) and the Random selection baseline (RAN).
For the EIN model, we compare it to different baselines: a) The first one is bag-of-words with a support vector machine classifier (BOW-SVM). We test different classifiers, and we choose SVM since it gives the highest result in the 10-fold Cross Validation (CV); b) We use another baseline that is based on word embeddings where for each input document we extract an average word embedding vector by taking the mean of the embeddings for the document's words. Similarly, we test different classifiers and the Logistic Regression classifier shows the best performance (WE-LR); c) The last baseline is the same as our neural architecture but without the emotional features branch: an LSTM layer followed by attention and dense layers.
<<</Baselines>>>
<<</Evaluation Framework>>>
<<<Experiments and Results>>>
<<<Emotion-based Model>>>
In our experiments, we use $20\%$ of each of the datasets for testing and we apply 10-fold cross-validation on the remaining part for selecting the best classifier as well as for tuning it. We tested many classifiers and finally chose Random Forest for both datasets since it obtained the best results. Table TABREF39 presents the classification results on both datasets.
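A minimal scikit-learn sketch of this model-selection setup is given below; the hyperparameters are left at their defaults for illustration.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split, cross_val_score

    def select_and_evaluate(X, y):
        # X: emotion feature vectors, y: class labels (real, satire, hoax, propaganda, clickbait).
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                            stratify=y, random_state=0)
        clf = RandomForestClassifier(random_state=0)
        cv_macro_f1 = cross_val_score(clf, X_train, y_train, cv=10, scoring="f1_macro").mean()
        clf.fit(X_train, y_train)                  # refit on the training part before testing
        return cv_macro_f1, clf.score(X_test, y_test)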
The results on both datasets show that emotional features clearly detect false news, compared to the baselines (RQ1). The emotional features perform better on the news articles dataset than on the tweets. We are also interested in investigating how good the emotional features are at detecting each class compared to the RAN baseline. We choose the RAN baseline since it shows better results with regard to the macro-F1 score. To do so, we investigated the True Positive (TP) classification ratio for each class in each dataset.
The clickbait class shows the highest TPs compared to the other classes. From this we can infer that clickbaits exploit emotions much more than the other classes to deceive the reader. It is worth mentioning that for the hoax class the proposed approach is better than the random baseline only by a small margin ($4\%$ difference). This could be justified by the fact that hoaxes, by definition, try to convince the reader of the credibility of a false story. Hence, the writer tries to deliver the story in a normal way without allowing the reader to fall under suspicion. The number of instances for the false information classes in the news articles dataset is the same; therefore, there is no majority class that the classifier can be biased towards. This is not the case for the Twitter dataset, which is not balanced, and therefore the results are biased by the majority class (propaganda). In general, however, all the classes' TP ratios are larger than the corresponding ones obtained with the RAN baseline. From these results, we can conclude that suspicious news exploits emotions with the aim to mislead the reader. In the following, we present the results obtained by the proposed emotionally-infused model.
<<</Emotion-based Model>>>
<<<Emotionally-Infused Model>>>
In the neural model, to reduce the computational costs, instead of the cross-validation process we take another $20\%$ from the training part as a validation set (in addition to the $20\%$ that is reserved for testing). For the pretrained word embeddings, we use Google News Word2Vec 300-dimensional embeddings in the neural network as well as in the W2V-LR baseline. For the classical machine learning classifiers used in the baselines, we use the Scikit-Learn python library, and for the deep learning network, we use the Keras library with Tensorflow as backend. To tune the hyper-parameters of our deep learning network, we use the Hyperopt library, and to reduce the effect of overfitting, we use the early stopping technique.
In Table TABREF44 we summarize the parameters with respect to each dataset. We have to mention that we use Dropout after the dense layer in the emotional features branch (Dropc) as well as after the attention layer in the other one (Dropd) before the concatenation process. Since it is a multiclass classification process, we use categorical cross-entropy loss function. A summary of the models' parameters is presented in Table TABREF44.
Table TABREF47 summarizes the performance of the proposed model in comparison to the baselines. We report macro-averaged precision, recall, and F1, and also include the accuracy metric; to compare the models' results we consider the macro-averaged metrics since they show an averaged result over all the classes. The baselines that we propose clearly show high results, where the LSTM baseline has the best performance on the news articles dataset. On Twitter the scenario is different: the BOW-SVM baseline shows a higher performance than LSTM. We are interested in investigating the reason behind that. Therefore, we checked the coverage ratio of the used embeddings in the Twitter dataset. We have to mention that we excluded stop words when representing the input documents using the pre-trained Google News word embeddings. In the news articles dataset, we found that the coverage ratio of the embeddings is around $94\%$ while in Twitter it is around $70\%$. Therefore, we tuned the word embeddings during the training process to improve the documents' representation, since we have a larger dataset from Twitter. This process contributed $1.9\%$ to the final macro-F1 result in Twitter (the result without tuning is $53.51\%$). Even so, the result obtained with the LSTM baseline is still lower than the one obtained with BOW-SVM. This experiment gives us some intuition that the weaker performance on Twitter may be due to the embeddings. Therefore, we tried different embeddings but none of them improved the result. The second baseline (W2V-LR) confirmed the same issue regarding the embeddings: the W2V-LR macro-F1 result on the news articles dataset is competitive, while it is much lower on Twitter. The usage of LSTM is twofold: in addition to being a good baseline, it also shows how much the emotional features contribute to the emotionally-infused network.
EIN results outperform the baselines by a large margin (around 2% in Twitter and 7% in news articles), especially in the news articles dataset. The margin between EIN and the best baseline is lower in the Twitter dataset. The results also show that combining emotional features clearly boosts the performance. We can see the improvement by comparing the results of EIN to LSTM. EIN shows superior results on the news articles dataset with respect to the LSTM baseline (79.43%). A similar case appears in the Twitter dataset but with a lower margin (59.70%). The results of EIN on the Twitter dataset show that emotional features compensate for the weak coverage of the word embeddings, improving the performance and overcoming the BOW-SVM baseline.
We observed before that the clickbait TP ratio on the news articles dataset is the highest one, and this result points out that the clickbait class is less difficult to detect, specifically from an emotional perspective. Therefore, in order to assess how our model separates false information types, we employ dimensionality reduction using the t-distributed Stochastic Neighbor Embedding (t-SNE) technique BIBREF36 to project the documents' representations from a high-dimensional space to a 2D plane. Thus, we project the embeddings in EIN by extracting them from the outputs of the dense\textsubscript{b} layer (see Figure FIGREF48). We extract the embeddings twice, once from a random epoch (epoch 10) at the beginning of the training phase and once at the last epoch.
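The projection step can be sketched with Keras and scikit-learn as follows; the layer name refers to the dense\textsubscript{b} layer of the model sketched earlier and is an assumption about how the layer is registered.

    from sklearn.manifold import TSNE
    from tensorflow.keras import Model

    def project_embeddings(model, X_words, X_emotions, layer_name="dense_b"):
        # Read the document representations from the dense_b layer and map them to 2D.
        extractor = Model(inputs=model.inputs, outputs=model.get_layer(layer_name).output)
        embeddings = extractor.predict([X_words, X_emotions])
        return TSNE(n_components=2).fit_transform(embeddings)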
Our aim with the early-epoch projection is to validate what we have noticed: the clickbait class is less difficult to detect compared to the other classes. As we can notice in the 10-epoch plot, the clickbait class needs only a few epochs to be separated from the other types, which supports what we found previously in the manual investigation of the classes' TP ratios. Despite this clear separation, there is still some overlap with real-news records. This result points out that emotions in clickbaits play a key role in deceiving the reader. The figure also shows that the disinformation classes still need more training epochs for better separation: real-news records are totally overlapped with the false information classes, and the false information classes overlap with each other. On the other hand, at the last epoch the classes are clearly separated from each other and, more importantly, from the real news. Generally, however, there is still a small overlap between satires and hoaxes, as well as a few records from the propaganda class.
<<</Emotionally-Infused Model>>>
<<<EIN as Clickbaits Detector>>>
From the previous results in Section SECREF37 as well as from what we notice in Figure FIGREF48, EIN obtains a clear separability of the clickbait class. These observations motivate us to investigate EIN as clickbait detector. Concretely, we test EIN on the source of our clickbait instances BIBREF33 in the news articles dataset. As we mentioned previously, this dataset originally was built using two different text sources. For clickbaits, the authors have manually identified a set of online sites that publish many clickbait articles. Whereas for the negative class, they collected headlines from a corpus of Wikinews articles collected in other research work. They took 7,500 samples from each class for the final version of the dataset. The authors also proposed a clickbaits detector model (Stop_Clickbait) that employed a combination of features: sentence structure (sentence length, average length of words, the ratio of the number of stop words to the number of thematic words and the longest separation between the syntactically dependent words), word patterns (presence of cardinal number at the beginning of the sentence, presence of unusual punctuation patterns), clickbait language (presence of hyperbolic words, common clickbait phrases, internet slangs and determiners), and N-grams features (word, Part-Of-Speech, and syntactic n-grams). Using this set of features group, the authors tested different classifiers where SVM showed the state-of-the-art results. They considered Accuracy, Precision, Recall and F1 to compare their approach to a baseline (an online web browser extension for clickbaits detection called Downworthy).
In this experiment, we consider the third baseline (LSTM) to observe the improvement brought by the emotional features in the EIN model. Unlike the previous experiments, this is a binary classification task; therefore, we use binary cross-entropy as the loss function and replace the Softmax layer with a Sigmoid function. The new parameters for both the LSTM and EIN models are given in Table TABREF44.
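To make the change concrete, the fragment below sketches the modification in a Keras-style setup: the multi-class softmax output is replaced by a single sigmoid unit trained with binary cross-entropy. The input is assumed to be a document representation already produced by the encoder, and the layer sizes are illustrative; this is not the actual EIN code.

```python
# Minimal sketch of switching the output head from multi-class to binary.
from tensorflow.keras import layers, models

def build_binary_classifier(encoder_output_dim=128):
    inputs = layers.Input(shape=(encoder_output_dim,))   # encoded document (hypothetical)
    hidden = layers.Dense(64, activation="relu")(inputs)
    # Multi-class version would be: layers.Dense(n_classes, activation="softmax")
    output = layers.Dense(1, activation="sigmoid")(hidden)
    model = models.Model(inputs, output)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",   # instead of categorical_crossentropy
                  metrics=["accuracy"])
    return model
```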
In Table TABREF51 we present the results of the Stop_Clickbait approach, the LSTM baseline, and the EIN model. The results show that our baseline outperforms the proposed clickbait detector by a good margin. Furthermore, the results of EIN are superior to both the LSTM and the Stop_Clickbait detector. Taking emotions into account in the EIN deep learning approach improves the detection of false information, which is consistent with the fact that clickbaits employ emotions to deceive the reader.
<<</EIN as Clickbaits Detector>>>
<<</Experiments and Results>>>
<<<Discussion>>>
The results show that detecting suspicious news in Twitter is harder than detecting it in news articles. Overall, the results of EIN showed that emotional features improve the performance of our model, especially on the news articles dataset. We manually inspected the Twitter dataset and observed that the language of the tweets differs from that of the news articles. News in Twitter contains many abbreviations (amp, wrt, JFK, etc.), abbreviated profanity (WTF, LMFO, etc.), informal language, and typos, which reduces the coverage ratio of the word embeddings. We also noticed that suspicious news in Twitter is more related to sexual issues. To validate these observations, we computed the mean value of sexual words using a list of sexual terms BIBREF37, where the mean value is the average number of times a sexual/offensive word appears in a tweet, normalized by the length of the tweet. The mean value in Twitter is 0.003, while in news articles it is 0.0024. Similarly, suspicious news in Twitter contains more insulting words than news articles, with a mean value of 0.0027 in Twitter and 0.0017 in news articles.
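The mean-value statistic described above can be computed in a few lines; the sketch below assumes a plain word list loaded from the lexicon of BIBREF37 and whitespace tokenization, which may differ from the preprocessing actually used in this work.

```python
# Minimal sketch: average occurrences of lexicon words per document,
# normalized by document length, then averaged over the collection.
def mean_lexicon_value(documents, lexicon):
    lexicon = set(w.lower() for w in lexicon)
    scores = []
    for doc in documents:
        tokens = doc.lower().split()
        if not tokens:
            continue
        hits = sum(1 for t in tokens if t in lexicon)
        scores.append(hits / len(tokens))
    return sum(scores) / len(scores) if scores else 0.0

# Toy usage: compare the statistic on two small collections.
tweets = ["this is a wtf example tweet", "completely normal tweet"]
articles = ["a longer and calmer news article about politics"]
print(mean_lexicon_value(tweets, ["wtf"]), mean_lexicon_value(articles, ["wtf"]))
```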
In the following, we focus on analyzing false information from an emotional perspective, aiming to answer the remaining questions: RQ2, RQ3, and RQ4.
RQ2 Do the emotions have similar importance distributions in both Twitter and news articles sources?
Intuitively, emotions do not contribute equally to the classification process, since some words may indicate the presence of specific kinds of emotions rather than others. To investigate this point, we use Information Gain (IG) to identify the importance of emotions in discriminating between real news and all the other types of false news (multiclass task) in both the Twitter and news articles datasets (see Figure FIGREF54). Before going through the ranking of feature importance, we note that the emotion ranking shapes are very similar in Twitter and news articles. This indicates that, despite the language differences, both sources have a similar overall emotion distribution; in other words, false news employs a similar emotional pattern in both text sources. Since news language in Twitter is not presented as clearly as in news articles, this observation could help to build a cross-source system trained on suspicious news from news articles to detect the corresponding news in Twitter. Figure FIGREF54 also shows that "joy" is the most important emotion in both datasets, and that "despair" and "hate" contribute almost nothing to the classification process. The exact ranking of the features differs between the two sources: in the news articles dataset the most important emotions are "joy", "anticipation", "fear", and "disgust", respectively, whereas in Twitter the top ones are "joy", "sadness", "fear", and "disgust".
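As a reference for how such a ranking can be produced, the sketch below estimates the information gain (mutual information) of each emotion feature with respect to the class labels using scikit-learn. The emotion feature matrix and the integer labels are hypothetical placeholders, not the data used in this study.

```python
# Minimal sketch: rank emotion features by information gain (mutual information)
# with respect to the news-type labels.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_emotions_by_ig(emotion_features, labels, emotion_names):
    """emotion_features: (n_docs, n_emotions) matrix of emotion scores."""
    ig = mutual_info_classif(emotion_features, labels, random_state=0)
    order = np.argsort(ig)[::-1]
    return [(emotion_names[i], float(ig[i])) for i in order]

# Toy usage with random stand-in data (labels encoded as integers).
X = np.random.rand(200, 4)
y = np.random.randint(0, 4, 200)
print(rank_emotions_by_ig(X, y, ["joy", "fear", "disgust", "sadness"]))
```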
RQ3 Which of the emotions have a statistically significant difference between false information and truthful ones?
We measure statistically significant differences using the t-test on emotions across real and false news (binary task) in both datasets (Figure FIGREF55). These findings provide a deeper understanding of the EIN performance. The results show that "joy", "neg_emo", "ambiguous", "anticipation", "calmness", "disgust", "trust" and "surprise" have statistically significant differences between real and suspicious news in both datasets, while other emotions such as "despair" and "anger" show no significant difference in either dataset. These results are generally consistent with the IG results from research question RQ2: in the IG analysis, some emotions have a higher importance in one of the news sources ("sadness", "anger", and "fear" are more important in Twitter than in news articles, and the opposite holds for "hope"), and we observe the same pattern with the t-test.
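For reproducibility of this kind of analysis, the sketch below runs an independent two-sample t-test per emotion between the real and false groups with SciPy; the emotion score matrix and the group indicator are again hypothetical placeholders.

```python
# Minimal sketch: per-emotion t-test between real and false news.
import numpy as np
from scipy import stats

def emotion_ttests(emotion_features, is_false, emotion_names, alpha=0.05):
    """emotion_features: (n_docs, n_emotions); is_false: boolean array per document."""
    results = {}
    for j, name in enumerate(emotion_names):
        real = emotion_features[~is_false, j]
        false = emotion_features[is_false, j]
        t, p = stats.ttest_ind(real, false, equal_var=False)
        results[name] = (float(t), float(p), p < alpha)
    return results

# Toy usage with random stand-in data.
X = np.random.rand(300, 3)
flag = np.random.rand(300) > 0.5
print(emotion_ttests(X, flag, ["joy", "disgust", "surprise"]))
```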
RQ4 What are the top-N emotions that discriminate false information types in both textual sources?
False information types differ in the way they present the news to the reader. This raises a question: what are the most employed emotions in each type of false information? In Table TABREF57, we present the three emotions that contribute most to the classification of each type, which indicates which emotion types are used most in each kind of false information.
Table TABREF57 shows that clickbaits mostly express "surprise" and "negative emotion". This is in line with the definition of clickbaits as "attention redirection": the reader is exploited and convinced that something unexpected, carrying a negative emotion, awaits. Seeing "fear" among the top features in Twitter is interesting; a recent study, based on psychological interpretations, presents the hypothesis that curiosity is the best remedy for fear BIBREF38. Taking into account the definition of clickbaits as "attention redirection", our results provide evidence for this hypothesis. Furthermore, despite the language differences between the two datasets, we obtain almost the same results, which reinforces our findings. For hoaxes, it is not easy to interpret a specific emotional pattern in the results. A possible explanation is that hoaxes are written to convince the reader of the validity of a story; the writer therefore tries to present the story in a normal (truthful) way, similar to a real story, so the top emotions are not unique to the hoax type. What we do find is that the top hoax emotions in the two datasets are generally different, except for the emotion "like". Despite the natural narrative way of presenting the story, the analysis shows that the writer still uses "like" to smoothly grab the reader's attention. The propaganda type has a clearer emotional interpretation considering its definition. We find that propaganda expresses "joy" and "fear", and at the same time "calmness", in the news articles. "Joy" and "fear" are opposites from an emotional polarity perspective, with "joy" at the extreme of the positive emotions and "fear" at the extreme of the negative ones, while "calmness" is present at the same time. This emotional shifting between the two extremes is a clear attempt at opinion manipulation from an emotional perspective. We obtain a similar emotion set from Twitter, but with "hope" instead of "joy". Lastly, satire is defined as a type of parody presented in the typical format of mainstream journalism, in a way similar to the irony and sarcasm phenomena BIBREF39. The analysis shows that "disgust" and "positive emotion" are present in both datasets, while we get "negative emotion" in the news articles and "sadness" in Twitter (both on the negative side of emotions). We were interested in investigating the cause of the emotion "disgust", which appears in the results from both datasets, so we conducted a manual analysis of the satire texts in both datasets to shed some light on the possible causes. We notice that satirical language in the news often employs the emotion "disgust" to convey a sense of humor. Figure FIGREF58 shows some examples from the news articles dataset, highlighting the words that triggered the emotion "disgust".
<<</Discussion>>>
<<<Conclusions and Future Work>>>
In this article we have presented an emotionally-infused deep learning network that uses emotional features to identify false information in Twitter and news article sources. We performed several experiments to investigate the effectiveness of the emotional features in identifying false information, validating the performance of the model against an LSTM network and other baselines. The results on the two datasets showed that clickbaits use a simpler manipulation language in which emotions help to detect them, demonstrating that emotions play a key role in deceiving the reader. Based on this result, we evaluated our model on a clickbaits dataset and compared it to the state-of-the-art performance; our model showed superior results, with an F1 value close to 96%.
The overall results confirmed that emotional features boost the EIN model's performance, achieving better results on three different datasets (RQ1) and emphasizing the importance of emotional features in the detection of false information. In Twitter, false news content is deliberately sexually oriented and uses many insulting words; our analysis showed that emotions can also help to detect false information in Twitter. In the analysis section, we answered a set of questions regarding the distribution of emotions in false news. We found that emotions have a similar importance distribution in Twitter and news articles regardless of the differences in the language used (RQ2). The analysis showed that most of the emotions have a statistically significant difference between real and false news (RQ3), and that emotions play a different role in each type of false information, in line with its definition (RQ4). Clickbaits try to attract the reader's attention mainly by employing the "surprise" emotion. Propaganda manipulates the feelings of the readers by using extreme positive and negative emotions, while triggering a sense of "calmness" to confuse the readers and enforce a feeling of confidence. Satirical news instead uses the "disgust" emotion to convey a sense of humor. To sum up, the initial part of false news contains more emotions than the rest of the document, and our approach exploits this fact for detection.
To the best of our knowledge, this is the first work that analyzes the impact of emotions on the detection of false information considering both social media and news articles. As future work, the results of our approach as a clickbait detector motivate us to develop a clickbait detector as a web browser extension. We also plan to study how emotions flow inside the articles of each kind of false information, which the results of this work confirm is worth investigating.
<<</Conclusions and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nHypothesis\nRelated Work\nEmotionally-infused Model\nEmotional Lexicons\nModel\nInput Representation\nEvaluation Framework\nDatasets\nNews Articles\nTwitter\nBaselines\nExperiments and Results\nEmotion-based Model\nEmotionally-Infused Model\nEIN as Clickbaits Detector\nDiscussion\nConclusions and Future Work"
],
"type": "outline"
}
|
1911.11698
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Doc2Vec on the PubMed corpus: study of a new approach to generate related articles
<<<Abstract>>>
PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the "similar articles" section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method. The Doc2Vec algorithm was used to train models that vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra. The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm. While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, without any clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm.
<<</Abstract>>>
<<<Abstract>>>
Background PubMed is the biggest and most used bibliographic database worldwide, hosting more than 26M biomedical publications. One of its useful features is the “similar articles” section, allowing the end-user to find scientific articles linked to the consulted document in terms of context. The aim of this study is to analyze whether it is possible to replace the statistical model PubMed Related Articles (pmra) with a document embedding method.
Methods The Doc2Vec algorithm was used to train models that vectorize documents. Six of its parameters were optimised by following a grid-search strategy to train more than 1,900 models. The parameter combination leading to the best accuracy was used to train models on abstracts from the PubMed database. Four evaluation tasks were defined to determine what does or does not influence the proximity between documents for both Doc2Vec and pmra.
Results The two different Doc2Vec architectures have different abilities to link documents about a common context. The terminological indexing, word and stem contents of linked documents are highly similar between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more likely to bring closer documents having a similar size. In contrast, the manual evaluation shows much better results for the pmra algorithm.
Conclusions While the pmra algorithm links documents by explicitly using terminological indexing in its formula, Doc2Vec does not need prior indexing. It can infer relations between documents sharing a similar indexing, without any knowledge about them, particularly with the PV-DBOW architecture. In contrast, the human evaluation, without any clear agreement between evaluators, calls for future studies to better understand this difference between PV-DBOW and the pmra algorithm.
<<</Abstract>>>
<<<Background>>>
<<<PubMed>>>
PubMed is the largest database of bio-medical articles worldwide with more than 29,000,000 freely available abstracts. Each article is identified by a unique PubMed IDentifier (PMID) and is indexed with the Medical Subject Headings (MeSH) terminology. In order to facilitate the Information Retrieval (IR) process for the end-user, PubMed launched in 2007 a related articles search service, available both through its Graphical User Interface (GUI) and its Application Programming Interface (API). In the GUI, while the user is reading a publication, a panel presents titles of articles that may be related to the current reading. Through the API, the user must query eLink with a given PMID BIBREF0. The output is a list of other PMIDs, each associated with the similarity score computed by the pmra (pubmed related article) model BIBREF1.
<<</PubMed>>>
<<<The pmra model>>>
To do so, each document is tokenized into many topics $S_{i}$. Then, the probability $P(C|D)$ that the user will find the document C relevant when reading the document D is calculated. For this purpose, the authors introduced the concept of eliteness. Briefly, a topic $S_{i}$ is considered an elite topic for a given document if a word $W_{i}$ representing $S_{i}$ is used with a high frequency in this document. This approach brings closer documents sharing a maximum of elite topics. In the article presenting the pmra model, the authors claim that “the deployed algorithm in PubMed also takes advantage of MeSH terms, which we do not discuss here”. We can thus assume that a similarity score is computed from the MeSH terms associated with both documents D and C. Such an indexing is highly time-consuming and has to be performed manually.
<<</The pmra model>>>
<<<Documents embedding>>>
Nowadays, embedding models make it possible to represent a text as a vector of fixed dimensions. The primary purpose of this mathematical representation of documents was to be able to use texts as input of deep neural networks. However, these models have been used by the IR community as well: once all fitted in the same multidimensional space, the cosine distance between two document vectors can estimate the proximity between these two texts. In 2013, Mikolov et al. released a word embedding method called Word2Vec (W2V) BIBREF2. Briefly, this algorithm uses unsupervised learning to train a model which embeds a word as a vector while preserving its semantic meaning. Following this work, Mikolov and Le released in 2014 a method to vectorize complete texts BIBREF3. This algorithm, called Doc2Vec (D2V), is highly similar to W2V and comes with two architectures. The Distributed Memory Model of Paragraph Vectors (PV-DM) first trains a W2V model. This word embedding is common to all texts from the given corpus C on which it was trained. Then, each document $D_{x}$ from C is assigned a randomly initialised vector of fixed length, which is concatenated with the vectors of the words composing $D_{x}$ during training (word and document vectors share the same number of dimensions). This concatenation is used by a final classifier to predict the next token of a randomly selected window of words. The accuracy of this task can be calculated and used to compute a loss function, used to back-propagate errors to the model, which leads to a modification of the document’s representation. The Distributed Bag of Words version of Paragraph Vector (PV-DBOW) is highly similar to PV-DM, the main difference being the goal of the final classifier: instead of concatenating the document vector with word vectors, the goal here is to predict the words of this window using only the mathematical representation of the document.
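For readers unfamiliar with the library, the sketch below shows how PV-DM and PV-DBOW models are typically trained with Gensim (4.x API assumed); the corpus, tags and hyper-parameter values are illustrative only, not the ones selected in this study.

```python
# Minimal sketch: training PV-DM and PV-DBOW Doc2Vec models with Gensim.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    TaggedDocument(words="myocardial infarction and troponin levels".split(), tags=["PMID1"]),
    TaggedDocument(words="deep learning for biomedical text mining".split(), tags=["PMID2"]),
]

# dm=1 -> PV-DM, dm=0 -> PV-DBOW (illustrative hyper-parameters only).
pv_dm = Doc2Vec(corpus, dm=1, vector_size=100, window=5, epochs=20, min_count=1)
pv_dbow = Doc2Vec(corpus, dm=0, vector_size=100, window=5, epochs=20, min_count=1)

# Embed an unseen abstract and retrieve the closest training documents.
vector = pv_dbow.infer_vector("cardiac biomarkers in emergency medicine".split())
print(pv_dbow.dv.most_similar([vector], topn=2))
```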
<<</Documents embedding>>>
<<<Related Work>>>
Doc2Vec has been used for many cases of similar document retrieval. In 2016, Lee et al. used D2V to cluster positive and negative sentiments with an accuracy of 76.4% BIBREF4. The same year, Lau and Baldwin showed that D2V provides a robust representation of documents, estimated with two tasks: document similarity to retrieve 12 different classes and sentence similarity scoring BIBREF5. Recently, studies started to use document embeddings on the PubMed corpus. In 2017, Gargiulo et al. used a combination of word vectors coming from the abstract to bring closer similar documents from PubMed BIBREF6. The same year, Wang and Koopman used the PubMed database to compare D2V and their own document embedding method BIBREF7. Their accuracy measurement task consisted of retrieving documents having a small cosine distance with the embedding of a query. Recently, Chen et al. released BioSentVec, a set of sentence vectors created from PubMed with the sent2vec algorithm BIBREF8, BIBREF9. However, their evaluation task was based on public sentence similarity datasets, whereas the goal here is to embed entire abstracts as vectors and to use them to search for similar articles versus the pmra model. In 2008, the related articles feature of PubMed was compared (using a manual evaluation) with one that uses both a TF-IDF BIBREF10 representation of the documents and Lin’s distance BIBREF11 to compare their MeSH terms BIBREF12. Thus, no study has so far been designed to compare document embeddings and the pmra algorithm. The objectives of this study were to measure the ability of these two models to infer the similarity between documents from PubMed and to investigate what impacts this proximity the most. To do so, different evaluation tasks were defined to cover a wide range of aspects of document analogy, from their context to their morphological similarities.
<<</Related Work>>>
<<</Background>>>
<<<Methods>>>
<<<Material>>>
During this study, the optimisation of the model’s parameters and one of the evaluation tasks require the MeSH terms associated with the abstracts from PubMed. Briefly, MeSH is a medical terminology used to index documents on PubMed in order to perform keyword-based queries. The MEDOC program was used to create a MySQL database filled with 26,345,267 articles from the PubMed bulk downloads of October 5th, 2018 BIBREF13. Then, 16,048,372 articles having both an abstract and at least one associated MeSH term were selected for this study. For each, the PMID, title, abstract and MeSH terms were extracted. The titles and abstracts were lowercased, tokenized and concatenated to compose the PubMed documents corpus.
<<</Material>>>
<<<Optimisation>>>
Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW). The hs option defines whether hierarchical softmax or negative sampling is used during the training. Finally, the vector_size parameter affects the number of dimensions composing the resulting vector.
A list of possible values was defined for each of these six parameters. All possible combinations of these parameters were sent to slave nodes on a cluster, each node training a D2V model with a unique combination of parameters on 85% of 100,000 documents randomly selected from the corpus. Every article from the remaining 15% was then sent to each trained model and queried for the top-ten closest articles. For each model, a final accuracy score was calculated as the average percentage of MeSH terms shared between each document $D_{i}$ from the 15,000 held-out texts and its top-ten closest documents. The combination of parameters with the highest score was kept for both PV-DBOW and PV-DM.
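The scoring used to compare parameter combinations can be summarised as in the sketch below, where each held-out document is queried for its ten nearest neighbours and scored by the percentage of MeSH terms it shares with them. The data-access helpers (`held_out_docs`, `mesh_terms`) are hypothetical, and a Gensim-style model interface is assumed.

```python
# Minimal sketch of the grid-search scoring: average percentage of shared MeSH
# terms between each held-out document and its top-10 closest documents.
# `mesh_terms` maps a PMID to its set of MeSH descriptors (hypothetical helper data).
def model_accuracy(model, held_out_docs, mesh_terms, topn=10):
    scores = []
    for pmid, tokens in held_out_docs:          # held_out_docs: list of (pmid, token list)
        vector = model.infer_vector(tokens)
        neighbours = model.dv.most_similar([vector], topn=topn)
        query_mesh = mesh_terms[pmid]
        for neighbour_pmid, _ in neighbours:
            shared = query_mesh & mesh_terms[neighbour_pmid]
            # percentage computed w.r.t. the query's MeSH set (one possible reading of the score)
            scores.append(100.0 * len(shared) / max(len(query_mesh), 1))
    return sum(scores) / len(scores)
```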
<<</Optimisation>>>
<<<Training>>>
The final models were trained on a server powered by four XEON E7 processors (144 threads) and 1 TB of RAM. Among the total corpus (16,048,372 documents), 1% (160,482) was extracted as a test set (named TeS) and was discarded from the training. The final models were trained on 15,887,890 documents representing the training set called TrS.
<<</Training>>>
<<<Evaluation>>>
The goal being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as shown in Figure FIGREF9. These tasks were designed to cover every kind of similarity, from the most general (the context) to character-level similarity.
Indeed, a reliable algorithm to find related documents should be able to bring closer texts sharing a similar context, some important ideas (word stems), or an amount of non-stemmed vocabulary (e.g. verb tenses are taken into account), and it should not be based on raw character similarity (two documents sharing the same proportion of the letter "A" or having a similar length should not be brought together if they do not exhibit higher-level similarity).
<<<String length>>>
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ was compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS, after some pre-processing steps (stopwords and spaces were removed from both documents).
<<</String length>>>
<<<Words co-occurrences>>>
A matrix of word co-occurrences was constructed on the total PubMed corpus. Briefly, each document was lowercased and tokenized. A matrix was filled with the number of times two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected, and the number of times each of them co-occurs was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between D and C regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
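A simplified version of this scoring is sketched below: 500 random word pairs are drawn between the query document and its closest document, and their counts are looked up in a precomputed co-occurrence table. Storing the counts in a dictionary keyed by word pairs is an assumption about the data structure, not the implementation used in the study.

```python
# Minimal sketch: proximity score between a query document D and its closest
# document C based on a precomputed word co-occurrence table.
import itertools
import random

def cooccurrence_score(doc_d_words, doc_c_words, cooc_counts, stopwords, n_pairs=500):
    """cooc_counts: dict mapping frozenset({w1, w2}) -> number of co-occurrences."""
    d = [w for w in doc_d_words if w not in stopwords]
    c = [w for w in doc_c_words if w not in stopwords]
    pairs = list(itertools.product(d, c))
    sampled = random.sample(pairs, min(n_pairs, len(pairs)))
    counts = [cooc_counts.get(frozenset(pair), 0) for pair in sampled]
    return sum(counts) / len(counts) if counts else 0.0
```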
<<</Words co-occurrences>>>
<<<Stems co-occurrences>>>
The evaluation task explained above was also applied to 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only word roots). This allows assessing the influence of conjugation forms or other suffixes.
<<</Stems co-occurrences>>>
<<<MeSH similarity>>>
It is possible to compare the ability of both pmra and D2V to bring closer articles which were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term found associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
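The scoring rule can be written compactly as below. It assumes MeSH annotations are available as (descriptor, is_major_topic, qualifier set) triples, which is an assumption about how the indexing is stored, and it applies the major-topic bonus based on the query document's flag, since the text does not specify which document's flag is checked.

```python
# Minimal sketch of the MeSH-based similarity score between a query document D
# and one retrieved document C. Each annotation is assumed to be a tuple:
# (descriptor, is_major_topic, set_of_qualifiers).
def mesh_score(annotations_d, annotations_c):
    c_by_descriptor = {desc: (major, quals) for desc, major, quals in annotations_c}
    score = 0
    for desc, major, quals in annotations_d:
        if desc not in c_by_descriptor:
            continue
        score += 1                               # shared MeSH descriptor
        if major:
            score += 3                           # descriptor flagged as major topic (query side)
        c_quals = c_by_descriptor[desc][1]
        score += len(quals & c_quals)            # +1 per shared qualifier
    return score
```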
<<</MeSH similarity>>>
<<<Manual evaluation>>>
Among all documents contained in the TeS, 10 articles $D_{x}$ were randomly selected. All of them were sent to the pmra and to the most accurate of the two D2V architectures, according to the automatic evaluations explained above. Each model was then queried for the ten closest articles for each $D_{x_i} \in D_{x}$, and the relevance between $D_{x_i}$ and each of the top-ten documents was blindly assessed on a three-modality scale used in other standard Information Retrieval test sets: bad (0), partial (1) or full relevance (2) BIBREF15. In addition, evaluators were asked to rank publications according to their relevant proximity with the query, the first being the closest from their perspective. Two medical doctors and two medical data librarians took part in this evaluation.
<<</Manual evaluation>>>
<<</Evaluation>>>
<<</Methods>>>
<<<Results>>>
<<</Results>>>
<<<Discussion>>>
In this study, the ability of D2V to infer similarity between biomedical abstracts has been compared to that of the pmra, the algorithm currently used in PubMed.
Regarding the string length task, even if the trend lines' slopes are very close to zero, a slight negative correlation is observed between the difference in number of characters and the scores calculated by PV-DBOW and pmra. This result should be put into perspective. Indeed, it was expected that two abstracts differing in their number of characters are more likely to be different in terms of context: the longer text can treat more subjects with different words (explaining D2V's results) or be associated with more MeSH labels (explaining pmra's).
The analysis of word or stem content did not show any particular correlation between common words/stems and the scores computed by either the D2V models or pmra. The opposite could have been expected, given the way pmra links documents (using common terms between documents). The score contributed to the pmra model by the MeSH terms should be quite important in the final scoring formula. However, among all possible couples of words between two documents, only 500 were randomly selected, due to computational limits. A random sampling effect could have led to these results.
D2V takes into account many language features such as bi- or trigrams, synonyms, other related meanings and stopwords. No prior knowledge or analysis of the documents is needed. The pmra is based (in addition to words) on the manual MeSH indexing of the document, even if this aspect was not discussed in Lin and Wilbur’s publication. This indexing step is highly time-consuming and employs more than 50 people to assign labels to documents from PubMed. The result displayed in Figure FIGREF23 could have been expected for the pmra algorithm, since this model uses the MeSH terms in the statistical formula used to link documents, as well as elite or eliteness terms. It was thus expected that two documents sharing a lot of indexing labels would be seen as close by the pmra. However, these MeSH descriptors were only used to select the appropriate parameters used to train the D2V models. The fact that D2V still manages, with the PV-DBOW architecture, to find documents that are close to each other regarding the MeSH indexing demonstrates its ability to capture an article’s subject solely from its abstract and title.
Regarding the manual evaluation, the D2V PV-DBOW model was rated far below the pmra model: its results were judged as not accurate more than three times as often as those of PubMed's model. Regarding the ranking of the results, the average position for pmra is centred around 7, while D2V's is around 14. However, the real significance of these results should be put into perspective. Indeed, the agreement between the four annotators is only moderate and no general consensus can be extracted.
This study also has some limitations. First, the MeSH indexing of documents on PubMed can occur on full-text data, while both the optimisation of the hyper-parameters and one evaluation task are based on the abstracts' indexing. However, this bias should have a limited impact on the results: the indexing being based on the main topics of the documents, these subjects should also be cited in the abstract. Regarding this manual indexing, a bias is introduced by the indexers; it is well known in the information retrieval community that intra- and inter-indexer biases exist.
As the parameter optimisation step relied only on MeSH terms, it assumed that a model trained on articles’ abstracts can be optimised with MeSH terms which are selected according to the full text of the articles. In other words, this optimisation assumed that an abstract is enough to semantically represent the whole text. But this is not completely true; if it were, MeSH terms would not have been selected on full texts in the first place. Also, the principle that a PubMed related-articles feature has to return articles which have a lot of MeSH terms in common has been followed throughout this work.
To go further, as mentioned in the paper presenting D2V, the concatenation of vectors from both PV-DM and PV-DBOW for a single document could lead to a better accuracy. A third model could be designed by merging the two presented here. Another open question in the text embedding community concerns the part-of-speech tagging of the text before sending it to the model (during both training and use). This supplementary information could lead to a better understanding of the text, particularly through the disambiguation of homonyms.
<<</Discussion>>>
<<<Conclusion>>>
This study showed that Doc2Vec PV-DBOW, an unsupervised text embedding technique, can infer similarity between biomedical articles' abstracts. It requires no prior knowledge about the documents, such as text indexing, and is not impacted by raw word content or document structure. This algorithm was able to link documents sharing MeSH labels in a similar way to the pmra. A manual evaluation returned very low scores for the D2V PV-DBOW model, but with only moderate agreement between evaluators. More investigation should be carried out to understand this difference between the evaluation based on the MeSH indexing (performed by humans) and the manual evaluation.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nAbstract\nBackground\nPubMed\nThe pmra model\nDocuments embedding\nRelated Work\nMethods\nMaterial\nOptimisation\nTraining\nEvaluation\nString length\nWords co-occurrences\nStems co-occurrences\nMeSH similarity\nManual evaluation\nResults\nDiscussion\nConclusion"
],
"type": "outline"
}
|
2002.02492
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Consistency of a Recurrent Language Model With Respect to Incomplete Decoding
<<<Abstract>>>
Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition. We study the related issue of receiving infinite-length sequences from a recurrent language model when using common decoding algorithms. To analyze this issue, we first define inconsistency of a decoding algorithm, meaning that the algorithm can yield an infinite-length sequence that has zero probability under the model. We prove that commonly used incomplete decoding algorithms - greedy search, beam search, top-k sampling, and nucleus sampling - are inconsistent, despite the fact that recurrent language models are trained to produce sequences of finite length. Based on these insights, we propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model. Empirical results show that inconsistency occurs in practice, and that the proposed methods prevent inconsistency.
<<</Abstract>>>
<<<Introduction>>>
Neural sequence models trained with maximum likelihood estimation (MLE) have become a standard approach to modeling sequences in a variety of natural language applications such as machine translation BIBREF0, dialogue modeling BIBREF1, and language modeling BIBREF2. Despite this success, MLE-trained neural sequence models have been shown to exhibit issues such as length bias BIBREF3, BIBREF4 and degenerate repetition BIBREF5. These issues are suspected to be related to the maximum likelihood objective's local normalization, which results in a discrepancy between the learned model's distribution and the distribution induced by the decoding algorithm used to generate sequences BIBREF6, BIBREF7. This has prompted the development of alternative decoding methods BIBREF8, BIBREF5 and training objectives BIBREF9, BIBREF10. In this paper, we formalize and study this discrepancy between the model and the decoding algorithm.
We begin by formally defining recurrent neural language models, a family that encompasses neural models used in practice, such as recurrent neural networks BIBREF11, BIBREF12, BIBREF13, and transformers BIBREF14. Next, we formally define a decoding algorithm – a function that induces a distribution over sequences given a recurrent language model and a context distribution – which is used to obtain probable sequences from a model. In this paper, we show that the distribution induced by a decoding algorithm can contradict this intended use; instead, the decoding algorithm may return improbable, infinite-length sequences.
Our main finding is that a sequence which receives zero probability under a recurrent language model's distribution can receive nonzero probability under the distribution induced by a decoding algorithm. This occurs when the recurrent language model always ranks the sequence termination token outside of the set of tokens considered at each decoding step, yielding an infinite-length, zero probability sequence. This holds whenever the decoding algorithm is incomplete, in the sense that the algorithm excludes tokens from consideration at each step of decoding, which is the case for common methods such as greedy search, beam search, top-$k$ sampling BIBREF15, and nucleus sampling BIBREF5. We formalize our main finding using the notion of consistency BIBREF16 – whether a distribution assigns probability mass only to finite sequences – and prove that a consistent recurrent language model paired with an incomplete decoding algorithm can induce an inconsistent sequence distribution.
Based on the insight that inconsistency occurs due to the behavior of the termination token under incomplete decoding, we develop two methods for addressing inconsistency. First, we propose consistent sampling methods which guarantee that the termination token is not excluded from selection during decoding. Second, we introduce a self-terminating recurrent language model which ensures that the termination token is eventually ranked above all others, guaranteeing consistency under incomplete decoding.
To empirically measure inconsistency, we decode sequences from trained recurrent language models and measure the proportion of sequences with lengths far exceeding the maximum training sequence length. Our experiments on the Wikitext2 dataset BIBREF17 suggest that inconsistency occurs in practice when using incomplete decoding methods, while the proposed consistent sampling methods and self-terminating model parameterization prevent inconsistency and maintain language modeling quality.
The theoretical analysis reveals defects of existing decoding algorithms, providing a way to develop future models, inference procedures, and learning algorithms. We present methods related to sampling and model parameterization, but there are more directions which we leave to the future; we close with directions related to sequence-level learning.
<<</Introduction>>>
<<<Background>>>
We begin our discussion by establishing background definitions. First, we define a sequence which is the main object of our investigation.
Definition 2.1 (Sequence) A sequence $Y$ is an ordered collection of items from a predefined finite vocabulary $V$. A sequence of finite length always ends with a special token $\left<\text{eos}\right>\in V$ that only appears at the end of a sequence.
Each model we consider generates a sequence conditioned on context information, such as a prefix in sentence completion. To consider this, we define a context distribution.
Definition 2.2 (Context distribution) A context distribution $p(C)$ is a probability distribution defined over a set $\mathcal {C}$. An element $C\in \mathcal {C}$ is called a context.
<<<Recurrent Language Models>>>
A recurrent language model is an autoregressive model of a sequence distribution, where each conditional probability is parameterized with a neural network. Importantly, we assume that all tokens in a sequence are dependent on each other under a recurrent language model. This allows us to avoid cases in which the model degenerates to a Markovian language model, such as an $n$-gram model with a finite $n$.
Definition 2.3 (Recurrent language model) A recurrent language model $p_\theta $ is a neural network that computes the following conditional probability at each time step
where $h_t = f_{\theta }(y_t, h_{t-1})$ and $h_0 = g_{\theta }(C)$, and $u,c,\theta $ are parameters. A recurrent language model thereby computes the probability of a sequence $Y=(y_1, \ldots , y_T)$ by
where $y_{<t}=(y_1,\ldots ,y_{t-1})$. This distribution satisfies
Practical variants of the recurrent language model differ by the choice of transition function $f_{\theta }$ BIBREF11, BIBREF13, BIBREF12, BIBREF14. The use of softmax BIBREF18 implies that every unique token in the vocabulary is considered at every location of a sequence.
Remark 2.1 Under the conditional distribution of a recurrent language model, every token $v\in V$ is assigned a positive probability. This implies that $0 < p_\theta (v\,|\,y_{<t}, C) < 1.$ In addition, it follows that any finite sequence is probable by a recurrent language model under any context, i.e., $p_{\theta }(Y\,|\,C) > 0$ for any sequence $Y$ of finite length.
<<</Recurrent Language Models>>>
<<<Decoding Algorithms>>>
Because it is intractable to decode the most probable sequence, it is necessary in practice to use an approximate decoding algorithm.
Definition 2.4 (Decoding algorithm) A decoding algorithm $\mathcal {F}(p_{\theta }, C)$ is a function that generates a sequence $\tilde{Y}$ given a recurrent language model $p_{\theta }$ and context $C$. Let $q_{\mathcal {F}}$ denote the distribution induced by the decoding algorithm $\mathcal {F}$.
We consider two families of decoding algorithms. In our analysis we only consider decoding algorithms that decode in a single pass, forward in time, without modifying previously selected tokens.
<<<Stochastic decoding.>>>
The first family consists of stochastic algorithms. Among them, ancestral sampling is asymptotically unbiased and can be used for finding the most probable sequence, although it requires a substantial number of samples to achieve a low-variance estimate.
Definition 2.5 (Ancestral sampling) Ancestral sampling $\mathcal {F}_{\text{anc}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from $p_{\theta }(y_t\,|\,\tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
In order to avoid the high variance, two approximate stochastic decoding algorithms have recently been proposed and tested with recurrent language models. Top-$k$ sampling considers only a subset of the $k$ most probable tokens from the vocabulary at a time, while nucleus sampling considers only the minimal subset of most probable tokens whose total probability is higher than a predefined threshold.
Definition 2.6 (Top-$k$ sampling BIBREF15) Top-$k$ sampling $\mathcal {F}_{\text{top-k}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution:
Definition 2.7 (Nucleus sampling BIBREF5) Nucleus sampling $\mathcal {F}_{\text{nuc-}\mu }$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively sampling from the following proposal distribution. Let $v_1,\ldots ,v_{|V|}$ denote tokens in $V$ such that $p_{\theta }(v_i\,|\,y_{<t},C) \ge p_{\theta }(v_j\,|\,y_{<t},C)$ for all $i < j$, and define
where $V_{\mu } = \left\lbrace v_1, \cdots , v_{k_\mu } \right\rbrace $ with $k_\mu = \min \big \lbrace k \ \big |\ \sum _{i=1}^{k} p_{\theta }(v_i\,|\,y_{<t},C) \ge \mu \big \rbrace $.
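As a concrete reference for these two proposal distributions, the sketch below filters a next-token distribution with top-$k$ or nucleus truncation and renormalises it. It is a generic PyTorch illustration under the definitions above, not code from this paper, and the tensor interface is an assumption.

```python
# Minimal sketch: top-k and nucleus truncation of a next-token distribution.
import torch

def truncate_distribution(probs, k=None, mu=None):
    """probs: 1D tensor of next-token probabilities. Exactly one of k / mu is used."""
    if k is not None:
        topk = torch.topk(probs, k)
        mask = torch.zeros_like(probs)
        mask[topk.indices] = 1.0
    else:  # nucleus: smallest prefix of sorted tokens with total mass >= mu
        sorted_probs, sorted_idx = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=0)
        cutoff = int(torch.searchsorted(cumulative, torch.tensor(mu)).item()) + 1
        mask = torch.zeros_like(probs)
        mask[sorted_idx[:cutoff]] = 1.0
    truncated = probs * mask
    return truncated / truncated.sum()          # renormalised proposal distribution

# Example: sample one token from a toy distribution with nucleus threshold mu = 0.9.
probs = torch.tensor([0.5, 0.2, 0.15, 0.1, 0.05])
proposal = truncate_distribution(probs, mu=0.9)
next_token = torch.multinomial(proposal, 1)
```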
<<</Stochastic decoding.>>>
<<<Deterministic decoding.>>>
The other family consists of deterministic decoding algorithms, where a token is selected deterministically according to a rule at each decoding step. The most naive algorithm, called greedy decoding, simply takes the most probable token at each step.
Definition 2.8 (Greedy decoding) Greedy decoding $\mathcal {F}_{\text{greedy}}$ generates a sequence from a recurrent language model $p_{\theta }$ given context $C$ by recursively selecting the most likely token from $p_{\theta }(y_t | \tilde{y}_{<t}, C)$ until $\tilde{y}_t = \left<\text{eos}\right>$:
In contrast to greedy decoding, beam search operates on the level of partial sequences or prefixes.
Definition 2.9 (Prefix) A prefix $\rho _t$ is an ordered collection of items from $V$. The score of a prefix is
where $\rho _t[\tau ]$ is a token at time $\tau $ from $\rho _t$.
Starting from a set of empty prefixes, at each iteration a new prefix set is formed by expanding each prefix, then choosing the highest scoring expanded prefixes.
Definition 2.10 (Beam search) Beam search with width $k$, $\mathcal {F}_{\text{beam}-k}$, generates a sequence from a recurrent language model $p_{\theta }$ by maintaining a size-$k$ prefix set $\mathrm {P}_t^{\text{top}}$. Starting with $P_0^{top}=\varnothing $, at each iteration $t\in \lbrace 1,2,\ldots \rbrace $ beam search forms a new prefix set $\mathrm {P}_t^{\text{top}}$ by expanding the current set, $\mathrm {P}_t = \bigcup _{\rho \in \mathrm {P}_{t-1}^{\text{top}}} \lbrace \rho \circ v\, |\, v\in V\rbrace $ (where $\rho \circ v$ is concatenation), then choosing the $k$ highest scoring elements,
Any $\rho \in \mathrm {P}_t^{\text{top}}$ ending with $\left<\text{eos}\right>$ is restricted from being expanded further, and is added to a set $S$. Beam search ends when $S$ contains $k$ sequences, and returns the highest scoring sequence in $S$.
<<</Deterministic decoding.>>>
<<<Incompleteness.>>>
Other than ancestral sampling, the decoding algorithms above are incomplete in that they only consider a strict subset of the full vocabulary $V$ at each time step, aside from the trivial case of $k=|V|$.
Definition 2.11 (Incomplete Decoding) A decoding algorithm $\mathcal {F}$ is incomplete when for each context $C$ and prefix $y_{<t}$, there is a strict subset $V^{\prime }_t\subsetneq V$ such that
<<</Incompleteness.>>>
<<</Decoding Algorithms>>>
<<</Background>>>
<<<Consistency of a Decoding Algorithm>>>
<<<Definition of consistency.>>>
A recurrent language model $p_{\theta }$ may assign a positive probability to an infinitely long sequence, in which case we call the model inconsistent. This notion of consistency was raised and analyzed earlier, for instance by BIBREF19 and BIBREF16, in terms of whether the distribution induced by $p_{\theta }$ is concentrated on finite sequences. We extend their definition to account for the context $C$.
Definition 3.1 (Consistency of a recurrent language model) A recurrent language model is consistent under a context distribution $p(C)$ if $p_{\theta }(|Y|=\infty ) = 0$. Otherwise, the recurrent language model is said to be inconsistent.
Any sequence decoded from a consistent model for a given probable context is guaranteed to terminate.
Lemma 3.1 If a recurrent language model $p_{\theta }$ is consistent, $p_{\theta }(|Y|=\infty \,|\,C)=0$ for any probable context $C$.
Next, we establish a practical condition under which a recurrent language model is consistent.
Lemma 3.2 A recurrent language model $p_{\theta }$ is consistent if $\Vert h_t\Vert _p$ is uniformly bounded for some $p\ge 1$.
[Proof sketch] If $\Vert h_t\Vert _p$ is bounded, then each $u_v^\top h_t$ is bounded, hence $p_{\theta }(\left<\text{eos}\right>| y_{<t}, C)>\xi >0$ for a constant $\xi $. Thus $p_{\theta }(|Y|=\infty ) \le \lim _{t\rightarrow \infty } (1 - \xi )^t = 0$, meaning that $p_{\theta }$ is consistent.
Although this condition is practical because layer normalization or bounded activation functions BIBREF11, BIBREF12, BIBREF14 result in bounded $h_t$, we show that even if a recurrent language model is consistent, a decoding algorithm may produce an infinite-length sequence. We formalize this discrepancy using the consistency of a decoding algorithm.
Definition 3.2 (Consistency of a decoding algorithm) A decoding algorithm $\mathcal {F}$ is consistent with respect to a consistent recurrent language model $p_{\theta }$ under a context distribution $p(C)$ if the decoding algorithm $\mathcal {F}$ preserves the consistency of the model $p_{\theta }$, that is, $q_{\mathcal {F}}(|Y|=\infty )=0$.
When a consistent recurrent language model $p_{\theta }$ and a decoding algorithm $\mathcal {F}$ induce a consistent distribution $q_{\mathcal {F}}$, we say that $p_{\theta }$ paired with $\mathcal {F}$ is consistent. For instance, any consistent recurrent language model paired with ancestral sampling is consistent, because the induced distribution $q_{\mathcal {F}_{\text{anc}}}$ is the same as the distribution of the original model. We also have an analogue of Lemma UNKREF21.
Lemma 3.3 A consistent decoding algorithm with respect to a consistent recurrent language model decodes only probable sequences. That is, if $q_{\mathcal {F}}(Y\,|\,C)>0$, then $p_{\theta }(Y\,|\,C)>0$ for any probable context $C$.
<<</Definition of consistency.>>>
<<<Inconsistency of incomplete decoding.>>>
Any incomplete decoding algorithm (Definition UNKREF18) can be inconsistent regardless of the context distribution, because there is a recurrent language model that places $\left<\text{eos}\right>$ outside of $V^{\prime }_t$ at every step of decoding. To show this, we construct a consistent recurrent language model whose distribution induced by an incomplete decoding algorithm is inconsistent.
Theorem 3.4 (Inconsistency of an incomplete decoding algorithm) There exists a consistent recurrent language model $p_{\theta }$ from which an incomplete decoding algorithm $\mathcal {F}$, that considers only up to $(|V|-1)$-most likely tokens according to $p_{\theta }(y_t\,|\,y_{<t},C)$ at each step $t$, finds a sequence $\tilde{Y}$ whose probability under $p_{\theta }$ is 0 for any context distribution.
We prove this theorem by constructing a $\tanh $ recurrent network. We define the recurrent function $f_{\theta }$ as
where $e(y_{t}) \in \mathbb {R}^{|V|}$ is a one-hot representation of $y_t$, $W_h \in \mathbb {R}^{d \times d}$ where every entry is positive, and $I$ is an identity matrix of size $|V| \times |V|$. $h_0 = g_{\theta }(C)$ is constructed to consist of positive values only. Because each element of $|h_t|$ is bounded by 1, the constructed recurrent language model $p_{\theta }$ is consistent by Lemma UNKREF23.
For $v \ne \left<\text{eos}\right>$, we set $u_v$ (see Definition UNKREF4) to be
where all elements of $\bar{u}_v$ are positive and $e(v)$ is a one-hot representation of $v$. $c_v$ is set to zero. Next, let
where all elements of $\bar{u}_{\left<\text{eos}\right>}$ are negative.
This defines a valid recurrent language model (Definition UNKREF4), since the conditional distribution at each time $t$ is influenced by all the previous tokens. More specifically, the logit of a token $v$ depends on $\sum _{t^{\prime }=1}^t {1}(y_{t^{\prime }} = v)$, where 1 is an indicator function.
This recurrent language model always outputs positive logits for non-$\left<\text{eos}\right>$ tokens, and outputs negative logits for the $\left<\text{eos}\right>$ token. This implies $p(\left<\text{eos}\right>|\,y_{<t}, C) < p(v\,|\,y_{<t}, C)$ for all $v \in V \backslash \left\lbrace \left<\text{eos}\right>\right\rbrace $. This means that $\left<\text{eos}\right>$ is always ranked last at each time step, so an incomplete decoding algorithm that considers at most $(|V|-1)$ most probable tokens at each time step from $p_{\theta }(y_t\,|\,y_{<t}, C)$ cannot decode $\left<\text{eos}\right>$ and thus always decodes an infinitely long sequence.
The log-probability of this infinitely long sequence $\hat{Y}$ is
For any $v\in V$,
where $b_v = \sum _{v^{\prime }\ne v} \exp (-\Vert u_{v^{\prime }}\Vert _1)$. The last inequality holds because $x/(x+b_v)$ is increasing in $x>0$. Therefore, the log-probability $\log p_{\theta }(\hat{Y}\,|\,C)$ diverges as $|\hat{Y}| \rightarrow \infty $, and thus $p_{\theta }(\hat{Y}\,|\,C) = 0$, which implies the decoding algorithm $\mathcal {F}$ is inconsistent by Lemma UNKREF25. Greedy decoding, beam search, top-$k$ sampling, and nucleus sampling are all inconsistent according to this theorem; there are consistent models $p_{\theta }$ that induce inconsistent distributions when paired with these decoding algorithms.
<<</Inconsistency of incomplete decoding.>>>
<<</Consistency of a Decoding Algorithm>>>
<<<Fixing the inconsistency>>>
In this section, we consider two ways to prevent inconsistency arising from incomplete decoding algorithms. First, we introduce consistent versions of top-$k$ and nucleus sampling. Second, we introduce the self-terminating recurrent language model, which is consistent when paired with any of the decoding algorithms considered in this paper.
<<<Consistent Sampling Algorithms>>>
The proof of Theorem UNKREF27 suggests that inconsistency of incomplete decoding algorithms arises from the fact that $\left<\text{eos}\right>$ may be excluded indefinitely from the set of top-ranked tokens. We propose a simple modification to top-$k$ and nucleus sampling that forces $\left<\text{eos}\right>$ to be included at each step of decoding. First, we give a condition for when a particular model $p_{\theta }$ paired with a decoding algorithm $\mathcal {F}$ is consistent.
Theorem 4.1 Let $p_{\theta }$ be a consistent recurrent language model. If a decoding algorithm $\mathcal {F}$ satisfies $q_{\mathcal {F}}(\left<\text{eos}\right>|\,y_{<t}, C) \ge p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ for every prefix $y_{<t}$ and context $C$, then the decoding algorithm $\mathcal {F}$ is consistent with respect to the model $p_{\theta }$.
Let $P^{\prime }_{t-1}$ denote a set of all prefixes $y_{<t}$ of length $t-1$. For $t\ge 1$,
Taking the limit $t\rightarrow \infty $ and expectation over $C$ on both sides, we have
from which the decoding algorithm is consistent.
We define consistent variants of top-$k$ and nucleus sampling which satisfy this condition.
Definition 4.1 (Consistent top-$k$ sampling) Consistent top-$k$ sampling is top-$k$ sampling with the following modified proposal distribution:
where $V^{\prime } = \left\lbrace \left<\text{eos}\right>\right\rbrace \cup \underset{v^{\prime }}{\arg \text{top-k}}\ p_{\theta }(v^{\prime }\,|\,y_{<t}, C)$.
Definition 4.2 (Consistent nucleus sampling) Consistent nucleus sampling is nucleus sampling with the following modified proposal distribution:
The induced probability of $\left<\text{eos}\right>$ under these two algorithms is always equal to or larger than the model's probability. By Theorem UNKREF29, these algorithms are consistent with respect to any consistent recurrent language model.
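A minimal sketch of the consistent variant is given below: the termination token is added to the candidate set before renormalisation, so its proposal probability is never smaller than under the model. The `eos_id` argument and the use of PyTorch are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: consistent top-k truncation, which always keeps <eos> in the
# candidate set so that q(<eos> | y_<t, C) >= p_theta(<eos> | y_<t, C).
import torch

def consistent_top_k(probs, k, eos_id):
    """probs: 1D tensor of next-token probabilities under the model."""
    keep = set(torch.topk(probs, k).indices.tolist())
    keep.add(eos_id)                            # force the termination token in
    mask = torch.zeros_like(probs)
    mask[list(keep)] = 1.0
    truncated = probs * mask
    return truncated / truncated.sum()          # <eos> mass can only grow after renormalisation
```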
<<</Consistent Sampling Algorithms>>>
<<<A Self-Terminating Recurrent Language Model>>>
Although these consistent sampling algorithms can be used with any recurrent language model, their stochastic nature may not be suitable for finding a single, highly probable sequence. To avoid this limitation, we propose the self-terminating recurrent language model (STRLM).
Definition 4.3 (Self-terminating recurrent language model) A self-terminating recurrent language model computes the following conditional probability at each time step:
where
with $\sigma : \mathbb {R} \rightarrow [0,1-\epsilon ]$ and $\epsilon \in (0,1)$. $h_t$ is computed as in the original recurrent language model.
The underlying idea is that the probability of $\left<\text{eos}\right>$ increases monotonically. The model is consistent when paired with greedy decoding.
Theorem 4.2 Greedy decoding is consistent with respect to any self-terminating recurrent language model.
Let $p_{t}^{\left<\text{eos}\right>}$ denote $p_{\theta }(\left<\text{eos}\right>|\,y_{<t}, C)$ and $a_{t}^{\left<\text{eos}\right>}$ denote $u_{\left<\text{eos}\right>}^\top h_t + c_{\left<\text{eos}\right>}$. By Definition UNKREF33 we have
Take $B=-\log 2 / \log (1-\epsilon )$. We then have $p_{t}^{\left<\text{eos}\right>}>1/2$ for all $t > B$, which implies that $\left<\text{eos}\right>$ is always the most probable token after time step $B$. Hence, the sequence length is less than $B$ with probability 1. Beam search is also consistent with respect to any self-terminating recurrent language model according to a similar argument; see Appendix for the proof.
<<</A Self-Terminating Recurrent Language Model>>>
<<</Fixing the inconsistency>>>
<<<Empirical Validation>>>
The theoretical results rely on the existence of a model that results in inconsistency; it remains to be shown that inconsistency with respect to incomplete decoding occurs with recurrent language models encountered in practice. Moreover, while the proposed consistent sampling methods and self-terminating recurrent language model carry theoretical guarantees in terms of consistency, we must check whether they retain language modeling quality. To do so, we perform two experiments using a sequence completion task. In each experiment, we use the beginning of a sequence as context, then decode continuations from a trained recurrent language model and measure the proportion of non-terminated sequences in order to approximately measure inconsistency. The first experiment (§SECREF45) shows that inconsistency occurs in practice, and the second experiment (§SECREF47) shows the effectiveness of the proposed approaches.
<<<Sequence completion.>>>
We evaluate recurrent language models on a sequence completion task, which has previously been used to evaluate the effectiveness of sequence models, e.g. BIBREF20, BIBREF21, BIBREF2, BIBREF5, BIBREF10. Sequence completion is a general setting for studying the behavior of language models, encompassing machine translation BIBREF0, story generation BIBREF15, and dialogue modeling BIBREF1. The task consists of decoding a continuation $\hat{Y}\sim \mathcal {F}(p_{\theta }, C)$ given a length-$k$ prefix $C=(c_1,\ldots ,c_k)$, resulting in a completion $(c_1,\ldots ,c_k,\hat{y}_1\ldots ,\hat{y}_T)$.
<<</Sequence completion.>>>
<<<Dataset.>>>
We use the Wikitext2 dataset BIBREF17 consisting of paragraphs from Wikipedia, since it has frequently been used to evaluate language models BIBREF22, BIBREF23, BIBREF24. We split each paragraph into sentences using Spacy, resulting in roughly 100k sequences (78,274 train, 8,464 valid, 9,708 test). We split each sequence, using the first $k$ tokens as a context and the remaining tokens as a continuation. To ensure that each sequence contains a prefix, we prepend padding tokens to make it length $k$. Special $\left<\text{bos}\right>$ and $\left<\text{eos}\right>$ tokens are then inserted at the beginning and end of every sequence. Our experiments use $k=10$. We model sequences at the word level with a vocabulary size of 33,182. The average training sequence length is 24 tokens, with a maximum of 137.
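A minimal sketch of the preprocessing just described (sentence splitting aside); the exact order of padding versus marker insertion, and the token strings, are assumptions for illustration.

```python
def make_example(tokens, k=10, bos="<bos>", eos="<eos>", pad="<pad>"):
    """Turn a tokenised sentence into a length-k context and a continuation."""
    seq = list(tokens)
    if len(seq) < k:                      # prepend padding so a length-k prefix exists
        seq = [pad] * (k - len(seq)) + seq
    seq = [bos] + seq + [eos]             # special markers at the beginning and end
    return seq[:k], seq[k:]               # (context C, continuation Y)

context, continuation = make_example("the cat sat on the mat".split())
```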
<<</Dataset.>>>
<<<Context distribution.>>>
We define empirical context distributions with prefixes from the train, valid, and test sets,
where $\mathcal {D}=\lbrace (C^{(n)},Y^{(n)})\rbrace _{n=1}^{N}$ is a dataset split.
<<</Context distribution.>>>
<<<Evaluation metrics.>>>
We use finite sequences to approximately measure the consistency of a model paired with a decoding algorithm, since decoding an infinite-length sequence is impossible. We use the proportion of decoded continuations that are longer than a predefined limit,
where $\hat{Y}^{(n)}\sim \mathcal {F}(p_{\theta }, C^{(n)})$ for each context $C^{(n)}$ in $\mathcal {D}$. We call $r_L$ the non-termination ratio of the decoding algorithm $\mathcal {F}$ for an underlying model and context distribution. A value of $r_L$ greater than zero means that some sequences did not terminate within $L$ steps. When $L$ is infinity, this implies that the model paired with the decoding algorithm is inconsistent. In practice, we use a finite $L$ that is substantially larger than the maximum training sequence length, and we interpret a non-zero $r_L$ as evidence that the model paired with the decoding algorithm is inconsistent. We use $L=1500$, which is more than 10 times the maximum training sequence length.
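Measured concretely, $r_L$ is simply the fraction of decoded continuations that hit the length limit; a small sketch, assuming decoding has already produced a list of continuation lengths:

```python
def non_termination_ratio(decoded_lengths, limit=1500):
    """r_L: fraction of continuations that reach the limit L without emitting <eos>.
    A non-zero value at large L is taken as evidence of inconsistency."""
    return sum(length >= limit for length in decoded_lengths) / len(decoded_lengths)
```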
In each experiment, we report the mean and standard deviation of metrics across 10 independent initializations. Unless specified otherwise, we report metrics using the test context distribution, since the train, valid, and randomly generated context distributions had similar results.
<<</Evaluation metrics.>>>
<<<Training.>>>
We train recurrent language models for sequence completion with maximum likelihood, using the following loss on each sequence $Y=(c_1,\ldots ,c_k,y_1,\ldots ,y_T)$:
This amounts to running the full training sequence through a recurrent model and zeroing the loss for the first $k$ tokens, so that the first $k$ steps correspond to learning a $g_{\theta }$ that encodes the context. Each model is trained on a single Nvidia P40 GPU for up to 100 epochs, stopping early when validation perplexity does not decrease for 10 consecutive epochs.
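A sketch of this masked maximum-likelihood objective for a single sequence, assuming per-step logits of shape (T, V) and target token ids of shape (T,); whether the continuation loss is summed or averaged is an implementation detail not specified above.

```python
import torch
import torch.nn.functional as F

def completion_loss(logits, targets, k):
    """Zero the loss on the first k (context) positions so that only the
    continuation tokens contribute to the maximum-likelihood objective."""
    per_token = F.cross_entropy(logits, targets, reduction="none")  # (T,)
    mask = torch.ones_like(per_token)
    mask[:k] = 0.0                                                  # context positions ignored
    return (per_token * mask).sum()
```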
<<</Training.>>>
<<<Models.>>>
We consider recurrent neural networks with hyperbolic tangent activations ($\tanh $-RNN) BIBREF11 and LSTM units (LSTM-RNN) BIBREF13. We perform an initial hyper-parameter sweep and select the best set of hyper-parameters for each of $\tanh $-RNN and LSTM-RNN based on the validation perplexities. With this best set of hyperparameters, we train each of these models with 10 different initializations. The choice of $\tanh $ and LSTM RNNs implies that all of the recurrent language models that we train are consistent according to Lemma UNKREF23. Our LSTM models achieve similar test perplexity ($91.86 \pm 0.4$) to those reported in previous work BIBREF24; see Appendix for further details.
Additionally, we train self-terminating $\tanh $-RNN and LSTM-RNN variants (Definition UNKREF33) at various values of $\epsilon $, which controls a lower bound on the termination probability at each step. We use $\sigma (x)=(1-\epsilon )\text{sigmoid}(x)$. We use the hyper-parameters selected in the preceding grid search.
<<</Models.>>>
<<<Inconsistency of Recurrent Language Models>>>
In this experiment, we demonstrate evidence of inconsistency with incomplete decoding methods (Theorem UNKREF27).
Table TABREF43 shows non-termination ratios for the recurrent language models using the incomplete decoding algorithms considered in this work, along with ancestral sampling. Decoding with ancestral sampling always resulted in sequences that terminated within $L$ steps, since the induced distribution is the same as that of the consistent model. On the other hand, the non-zero non-termination ratios for the incomplete decoding algorithms suggest inconsistency with respect to each algorithm, providing evidence for Theorem UNKREF27.
In particular, greedy search, beam search, and nucleus sampling yielded non-terminating sequences with both the $\tanh $ and LSTM RNNs. Using greedy decoding, roughly 6% of all contexts resulted in a non-terminating continuation with the $\tanh $-RNN, and roughly 1% with the LSTM-RNN. Nucleus sampling also produced non-terminating sequences with the $\tanh $-RNN (2.49%, nuc-0.2) and LSTM-RNN (0.76%, nuc-0.2), with the amount of non-termination decreasing as $\mu $ increased (see Definition UNKREF11), likely due to $\left<\text{eos}\right>$ having a higher chance of being included in $V_{\mu }$. Top-$k$ sampling resulted in non-terminating sequences with the $\tanh $-RNN, but not with the LSTM, implying that $\left<\text{eos}\right>$ was ranked within the top $k$ positions on at least one timestep during each decoding. Beam search produced non-terminating sequences with both the $\tanh $-RNN (beam-2,4) and LSTM-RNN (beam-2) models. This means that $\left<\text{eos}\right>$ was outside of the top tokens (determined by the beam width) considered at each step, since in our experiments we terminated the beam search when a single beam prefix contained $\left<\text{eos}\right>$. With the LSTM-RNN, a larger beam width (beam-4) prevented non-termination.
<<</Inconsistency of Recurrent Language Models>>>
<<<Consistency of the Proposed Methods>>>
In this experiment, we evaluate the consistent variants of top-$k$ and nucleus sampling (§SECREF28) as well as the self-terminating recurrent language model (§SECREF32) in terms of consistency and language modeling quality.
<<<Consistent sampling.>>>
Table TABREF44 shows that consistent nucleus and top-$k$ sampling (§SECREF28) resulted in only terminating sequences, except for a few cases that we attribute to the finite limit $L$ used to measure the non-termination ratio. The example continuations in Table TABREF46 show that the sampling tends to preserve language modeling quality on prefixes that led to termination with the baseline (first row). On prefixes that led to non-termination with the baseline (second & third rows), the quality tends to improve since the continuation now terminates. Since the model's non-$\left<\text{eos}\right>$ token probabilities at each step are only modified by a multiplicative constant, the sampling process can still enter a repetitive cycle (e.g. when the constant is close to 1), though the cycle is guaranteed to eventually terminate.
<<</Consistent sampling.>>>
<<<Self-terminating RNN.>>>
As seen in Table TABREF50, the self-terminating recurrent language models with $\epsilon \in \lbrace 10^{-2},10^{-3}\rbrace $ are consistent with respect to greedy decoding, at the expense of perplexity compared to the vanilla model. The value of $\epsilon $ from Definition UNKREF33, which controls a lower-bound on termination probability at each step, influences both $r_L$ and perplexity. When $\epsilon $ is too large ($\epsilon =10^{-2}$), perplexity degrades. When $\epsilon $ is too small ($\epsilon =10^{-4}$), the lower-bound grows slowly, so $\left<\text{eos}\right>$ is not guaranteed to be top-ranked within $L$ steps, and the metrics resemble the baseline's. An $\epsilon $ of $10^{-3}$ balanced consistency and language modeling quality, with a zero non-termination ratio and perplexity within 3 points of the baseline.
For the example decoded sequences in Table TABREF46, generation quality is similar when both the self-terminating and baseline models terminate (first row). For prefixes that led to non-termination with the baseline, the self-terminating variant can yield a finite sequence with reasonable quality (second row). This suggests that some cases of degenerate repetition BIBREF5, BIBREF10 may be attributed to inconsistency. However, in other cases the self-terminating model enters a repetitive (but finite) cycle that resembles the baseline (third row), showing that consistency does not necessarily eliminate degenerate repetition.
<<</Self-terminating RNN.>>>
<<</Consistency of the Proposed Methods>>>
<<</Empirical Validation>>>
<<<Future Directions>>>
The methods we proposed in this paper have focused on how to resolve inconsistency from the viewpoint of decoding algorithms or model parameterization. Another approach is to address the issue of inconsistency in the learning phase.
One interesting direction is to investigate whether maximum likelihood learning is a cause of inconsistency. Given a training set $\left\lbrace (C^{(n)}, Y^{(n)}) \right\rbrace _{n=1}^N$ drawn from a data distribution, maximum likelihood learning solves:
where $\Omega (\theta )$ is a regularizer and $\lambda $ is a regularization weight.
Inconsistency may arise from the lack of decoding in solving this optimization problem. Maximum likelihood learning fits the model $p_{\theta }$ using the data distribution, whereas a decoded sequence from the trained model follows the distribution $q_{\mathcal {F}}$ induced by a decoding algorithm. Based on this discrepancy, we make a strong conjecture: we cannot be guaranteed to obtain a good consistent sequence generator using maximum likelihood learning and greedy decoding. Sequence-level learning, however, uses a decoding algorithm during training BIBREF25, BIBREF26. We hypothesize that sequence-level learning can result in a good sequence generator that is consistent with respect to incomplete decoding.
<<</Future Directions>>>
<<<Conclusion>>>
We extended the notion of consistency of a recurrent language model put forward by BIBREF16 to incorporate a decoding algorithm, and used it to analyze the discrepancy between a model and the distribution induced by a decoding algorithm. We proved that incomplete decoding is inconsistent, and proposed two methods to prevent this: consistent decoding and the self-terminating recurrent language model. Using a sequence completion task, we confirmed that empirical inconsistency occurs in practice, and that each method prevents inconsistency while maintaining the quality of generated sequences. We suspect the absence of decoding in maximum likelihood estimation as a cause behind this inconsistency, and suggest investigating sequence-level learning as an alternative in the future.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nRecurrent Language Models\nDecoding Algorithms\nStochastic decoding.\nDeterministic decoding.\nIncompleteness.\nConsistency of a Decoding Algorithm\nDefinition of consistency.\nInconsistency of incomplete decoding.\nFixing the inconsistency\nConsistent Sampling Algorithms\nA Self-Terminating Recurrent Language Model\nEmpirical Validation\nSequence completion.\nDataset.\nContext distribution.\nEvaluation metrics.\nTraining.\nModels.\nInconsistency of Recurrent Language Models\nConsistency of the Proposed Methods\nConsistent sampling.\nSelf-terminating RNN.\nFuture Directions\nConclusion"
],
"type": "outline"
}
|
2001.06354
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Modality-Balanced Models for Visual Dialogue
<<<Abstract>>>
The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history, while others still need the conversation context to predict the correct answers. We demonstrate that due to this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns in the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better at the primary normalized discounted cumulative gain (NDCG) task metric which allows multiple correct answers. Hence, this observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities for a more balanced multimodal model. We present multiple methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics.
<<</Abstract>>>
<<<Introduction>>>
When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information.
We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores.
Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics).
<<</Introduction>>>
<<<Related Work>>>
<<<Visual Question Answering (VQA)>>>
Visual question answering is a task in which a machine is asked to answer a question about an image. The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features.
<<</Visual Question Answering (VQA)>>>
<<<Visual Dialog>>>
The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model must extract relevant information from the history, which introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively, while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended over with respect to a question. Our joint model with fused features draws much of its information from the history, and we find that it is complementary to our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions.
<<</Visual Dialog>>>
<<</Related Work>>>
<<<Models>>>
In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective.
<<<Features>>>
Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects ($k$=36 in our experiment) and $d_{v}$ is the dimension of the visual features ($d_{v}$ = 2048 for the ResNet backbone).
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17,
and we take the last hidden state as the question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$.
History Features: History $H_r$ is a history feature at round $r$ encoded from the concatenation of a question and its ground-truth answer, such that
where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM,
We also take the last hidden state as the history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$.
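As a rough sketch of these encoders (the question, the history, and later the candidate answers are each run through an LSTM and represented by the final hidden state); the dimensions follow the training details given later (300-d word vectors, 512-d hidden states), but the module itself is illustrative rather than the authors' code.

```python
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Encode a token sequence with an LSTM and return the last hidden state."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):          # token_ids: (batch, T)
        emb = self.embed(token_ids)        # (batch, T, emb_dim)
        _, (h_n, _) = self.lstm(emb)       # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)              # e.g. q_r, H_r, or a_rl
```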
<<</Features>>>
<<<Image-Only Model>>>
We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB:
where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space.
where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of the projected visual features and the question feature, $d_m$ is the dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying a linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$, by weighted-summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is a trainable parameter and $V_i$ is the $i$-th row vector of the visual feature matrix $V$. The visual representation vector and the question feature vector are combined via element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$, which is further used to rank answers.
where $\textrm {fc}_*$ is a fully-connected layer.
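A sketch of the MFB-style fusion and question-guided attention over object features described above. It follows the general MFB recipe (project both inputs to $d_m \cdot m$ dimensions, multiply elementwise, sum-pool over the $m$ factors, then power- and $\ell_2$-normalise); the grouping order in the sum-pool, layer names, and default sizes are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFBAttention(nn.Module):
    """Question-guided attention over k object features via multi-modal
    factorized bilinear pooling (a sketch of the image-only model's core)."""
    def __init__(self, d_v=2048, d_q=512, d=512, d_m=512, m=5):
        super().__init__()
        self.proj_v = nn.Linear(d_v, d)        # Linear_{d_v x d}
        self.proj_q = nn.Linear(d_q, d)
        self.M = nn.Linear(d, d_m * m)         # factorised bilinear parameters
        self.N = nn.Linear(d, d_m * m)
        self.L = nn.Linear(d_m, 1)             # attention logits
        self.d_m, self.m = d_m, m

    def forward(self, V, q):                   # V: (batch, k, d_v), q: (batch, d_q)
        Vp = self.proj_v(V)                    # (batch, k, d)
        qp = self.proj_q(q).unsqueeze(1)       # (batch, 1, d)
        z = self.M(Vp) * self.N(qp)            # (batch, k, d_m * m)
        z = z.reshape(*z.shape[:2], self.d_m, self.m).sum(dim=-1)  # sum-pool over m factors (assumed grouping)
        z = torch.sign(z) * torch.sqrt(torch.abs(z) + 1e-8)        # power normalisation
        z = F.normalize(z, dim=-1)             # l2 normalisation -> \hat{z}_r
        alpha = torch.softmax(self.L(z).squeeze(-1), dim=-1)       # (batch, k)
        v_r = (alpha.unsqueeze(-1) * V).sum(dim=1)                 # weighted sum of object features
        return v_r, alpha
```

Following the description above, the final feature $f_{v_{r}}^{q_{r}}$ would then be obtained by linearly projecting $v_{r}$ and $q_r$, taking their element-wise product, and applying one more projection.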
<<<Answer Selection>>>
For each round, there are 100 candidate answers. The $l$-th answer at round $r$,
is encoded in the same way as the question and history.
where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$.
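A one-line sketch of this scoring step, assuming the fused feature has shape (batch, d) and the 100 encoded candidates have shape (batch, 100, d):

```python
import torch

def score_candidates(fused, answer_reps):
    """Dot-product scores s_rl between f_{v_r}^{q_r} and each candidate answer."""
    return torch.einsum("bd,bld->bl", fused, answer_reps)   # (batch, 100)
```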
<<</Answer Selection>>>
<<</Image-Only Model>>>
<<<Image-History Joint Model>>>
We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15.
where $w_s \in \mathbb {R}^{3d}$ is a trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. From the similarity matrix, the new fused history representation is:
Similarly, the new fused visual representation is:
These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation:
where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section.
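Because the fusion equations themselves are not reproduced above, the following is only a sketch of the similarity and cross-attention step, assuming a BiDAF-style trilinear similarity $S_{ij} = w_s^{\top }[V_i; H_j; V_i \circ H_j]$ and simple softmax cross-attention for the fused features; the paper's exact combination may differ.

```python
import torch
import torch.nn as nn

class VisualHistoryFusion(nn.Module):
    """Sketch of the visual/history similarity matrix and cross-attention fusion."""
    def __init__(self, d=512):
        super().__init__()
        self.w_s = nn.Linear(3 * d, 1, bias=False)   # w_s in R^{3d}

    def forward(self, V, H):                          # V: (batch, k, d), H: (batch, r, d)
        k, r = V.size(1), H.size(1)
        Ve = V.unsqueeze(2).expand(-1, -1, r, -1)     # (batch, k, r, d)
        He = H.unsqueeze(1).expand(-1, k, -1, -1)     # (batch, k, r, d)
        S = self.w_s(torch.cat([Ve, He, Ve * He], dim=-1)).squeeze(-1)  # (batch, k, r)
        H_fused = torch.softmax(S, dim=1).transpose(1, 2) @ V  # history rows attend to objects
        V_fused = torch.softmax(S, dim=2) @ H                  # objects attend to history rows
        return V_fused, H_fused
```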
<<<Round Dropout>>>
To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from the entire history, excluding the image caption feature, and throw them away.
where $N_h^r$ is the number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$.
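A small sketch of round dropout under the description above; whether dropped rounds are removed outright, as here, or masked to zero to keep tensor shapes fixed is an implementation choice we assume.

```python
import random

def round_dropout(history_feats, max_drop=3):
    """Randomly drop up to `max_drop` history rounds, always keeping the
    image-caption feature at index 0."""
    caption, rounds = history_feats[0], list(history_feats[1:])
    n_drop = random.randint(0, min(max_drop, len(rounds)))
    keep = sorted(random.sample(range(len(rounds)), len(rounds) - n_drop))
    return [caption] + [rounds[i] for i in keep]
```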
<<</Round Dropout>>>
<<</Image-History Joint Model>>>
<<<Combining Image-Only & Image-History Joint Models>>>
Since each of our models has different abilities, we exploit their complementary strengths by combining them in two ways. The first is our novel consensus dropout fusion, which integrates the two models at training time. The other is to build an ensemble model from the two models at test time.
<<<Consensus Dropout Fusion>>>
In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23).
<<<Consensus>>>
We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach.
where $L_{I}$ and $L_{J}$ are the logits from the image-only model and the image-history joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits.
<<</Consensus>>>
<<<Instance Dropout>>>
To allow the image-only model to have a stronger effect, producing more balanced results over all metrics, we apply dropout to instances of the joint model's logit. To be specific, when we add the two logits, we multiply $L_{J}$ by $I_{drop}$,
where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work.
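A sketch of the fused logit computation, assuming logits of shape (N·R, number of candidates) and an instance-level Bernoulli mask; the inverted-dropout rescaling is our assumption of how the standard dropout formulation would be applied here.

```python
import torch

def consensus_dropout_fusion(logits_image, logits_joint, p=0.25, training=True):
    """L_IJ = L_I + dropout(L_J): instance-level dropout on the joint model's
    logits gives the image-only model a stronger influence on the consensus."""
    if training:
        keep = (torch.rand_like(logits_joint[:, :1]) > p).float()  # xi: one mask per instance
        logits_joint = logits_joint * keep / (1.0 - p)             # inverted-dropout scaling (assumed)
    return logits_image + logits_joint
```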
<<</Instance Dropout>>>
<<</Consensus Dropout Fusion>>>
<<<Ensemble>>>
We also integrate our two models via an ensemble. We train each model separately and combine them at test time. To be specific, we take logits from the pre-trained models and select the answer with the highest sum of logits.
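A minimal sketch of this test-time combination:

```python
import torch

def ensemble_predict(logit_list):
    """Sum the pre-trained models' candidate logits and pick the top answer."""
    total = torch.stack(logit_list, dim=0).sum(dim=0)   # (batch, num_candidates)
    return total.argmax(dim=-1)
```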
<<</Ensemble>>>
<<</Combining Image-Only & Image-History Joint Models>>>
<<</Models>>>
<<<Experimental Setup>>>
<<<Dataset>>>
We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context.
<<</Dataset>>>
<<<Metrics>>>
For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values.
<<</Metrics>>>
<<<Training Details>>>
In our models, the size of the word vectors is 300, the dimension of the visual features is 2048, and the hidden size of the LSTM units used for the encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001, decrease it by 0.0001 per epoch until the 8th epoch, and decay it by 0.5 from the 9th epoch onwards. For round dropout, we set the maximum number of history features to be dropped to 3, and we tune the $p$ value of our instance dropout in the consensus dropout fusion module to 0.25. Cross-entropy is used to calculate the loss.
<<</Training Details>>>
<<</Experimental Setup>>>
<<<Analysis and Results>>>
In this section, we first discuss how many questions can be answered from the image alone and how many need the image and history jointly, by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from the two models (image-only and image-history joint) and show that they are complementary. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model.
<<<Human Evaluation: Is Image Alone Enough?>>>
We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy).
<<</Human Evaluation: Is Image Alone Enough?>>>
<<<Reduced Question-Answer Rounds>>>
We next run our joint model with various lengths of history. To be specific, we make our joint model use only the $k$ previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of the metrics and the number of history features. As the number of history features the joint model uses increases, the NDCG score decreases while the other metrics increase, and vice versa when the number of history features decreases. If we see the primary Visual Dialog metric of NDCG as a barometer of the model's ability to generalize, and the other metrics as indicators of preciseness, this means that a reduced history gives the model more generalization ability at the cost of preciseness. Following this tendency, the image-only model has the highest NDCG score.
<<</Reduced Question-Answer Rounds>>>
<<<Complementary Relation>>>
If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list the correct answers from each model and count the answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is no single correct answer, so we roughly calculate the intersection by taking the minimum of the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not account for the entire score of either model on either metric. This could mean the image-only and joint models have room to be improved by combining them.
<<</Complementary Relation>>>
<<<Model Combination Results>>>
Considering the complementary relation between the image-only model and the joint model, combining the two models would be a good approach to take the best from both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26).
<<<Consensus Dropout Fusion Results>>>
As shown in Table TABREF46, consensus dropout fusion improves the NDCG score by around 1.0 over the joint model while still yielding comparable scores on the other metrics. Unlike the ensemble approach, consensus dropout fusion does not require a large increase in the number of model parameters.
<<</Consensus Dropout Fusion Results>>>
<<<Ensemble Model Results>>>
As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model, and the scores on the other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are complementary.
<<</Ensemble Model Results>>>
<<</Model Combination Results>>>
<<<Final Visual Dialog Test Results>>>
For the evaluation on the test-standard dataset of VisDial v1.0, we try an ensemble of 6 image-only models and an ensemble of 6 consensus dropout fusion models. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows a much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to the results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over the metrics while still having a competitive NDCG score compared to DAN BIBREF25, ranking 3rd on the NDCG metric and achieving a high balance rank based on the metric average.
<<<Ensemble on More Models>>>
We also run an ensemble model built from our image-only, joint, and consensus dropout fusion models (6 of each, 18 models in total) and evaluate it on the test-standard dataset of VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) lie between those of our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other when combined in an ensemble, as we expected.
<<</Ensemble on More Models>>>
<<</Final Visual Dialog Test Results>>>
<<</Analysis and Results>>>
<<<Ablation Study>>>
Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout helps the model avoid over-fitting to certain patterns in the history features by intentionally dropping some of them during training.
Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rate affects the performance of the model. As shown in Table TABREF53, as the dropout rate increases, the NDCG score also increases while the scores of the non-NDCG metrics decrease. By changing the dropout rate, we can modulate the influence of each model (image-only and joint) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics.
Ensemble Combination: We try different combinations from image-only and joint models to build ensemble models. The total number of models amounts to 3, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, scores of the I+J ensemble model are comparable to same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model are from the complementary relation between image-only and image-history joint model.
Output Examples: Due to space constraints and the AAAI rule disallowing supplementary material, we provide detailed examples in the appendix of this arXiv version, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models, are also provided.
<<</Ablation Study>>>
<<<Conclusion>>>
We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nVisual Question Answering (VQA)\nVisual Dialog\nModels\nFeatures\nImage-Only Model\nAnswer Selection\nImage-History Joint Model\nRound Dropout\nCombining Image-Only & Image-History Joint Models\nConsensus Dropout Fusion\nConsensus\nInstance Dropout\nEnsemble\nExperimental Setup\nDataset\nMetrics\nTraining Details\nAnalysis and Results\nHuman Evaluation: Is Image Alone Enough?\nReduced Question-Answer Rounds\nComplementary Relation\nModel Combination Results\nConsensus Dropout Fusion Results\nEnsemble Model Results\nFinal Visual Dialog Test Results\nEnsemble on More Models\nAblation Study\nConclusion"
],
"type": "outline"
}
|
1910.08210
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
RTFM: Generalising to Novel Environment Dynamics via Reading
<<<Abstract>>>
Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.
<<</Abstract>>>
<<<Introduction>>>
Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.
Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.
Our contributions are two-fold. First, we propose a grounded policy learning problem that we call Read to Fight Monsters (RTFM). In RTFM, the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever-changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produce a combinatorially large number of environment dynamics to train and evaluate on.
Second, we propose txt2π to model the joint reasoning problem in RTFM. We show that txt2π generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on RTFM. Through curriculum learning, where we adapt txt2π trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that txt2π attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of RTFM in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future.
<<</Introduction>>>
<<<Related Work>>>
<<<Language-conditioned policy learning.>>>
A growing body of research learns policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high-level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.
<<</Language-conditioned policy learning.>>>
<<<Language grounding.>>>
Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation not only to new goal descriptions but also to new environment dynamics.
<<</Language grounding.>>>
<<</Related Work>>>
<<<RTFM>>>
We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.
To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on.
In RTFM, the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations.
During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).
In order to win the game (e.g. Figure FIGREF3), the agent must
identify the target team from the goal (e.g. Order of the Forest)
identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx)
identify which monster is in the world (e.g. goblin), and its element (e.g. fire)
identify the modifiers that are effective against this element (e.g. fanatical, shimmering)
find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)
pick up the correct item (e.g. fanatical sword)
engage the correct monster in combat (e.g. fire goblin).
If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise.
RTFM presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.
We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.
In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given an set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of .
<<</RTFM>>>
<<<Model>>>
We propose the txt2π model, which builds representations that capture three-way interactions between the goal, the document describing environment dynamics, and environment observations. We begin with the definition of the () layer, which forms the core of our model.
<<<() layer>>>
Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In RTFM, the agent must not only filter concepts in the visual domain using language but also filter concepts in the text domain using visual observations. To support this, the layer builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer.
We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:
Unlike FiLM, we additionally modulate text features using visual features:
The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions.
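Since the layer's display equations are not reproduced above, the sketch below follows the standard FiLM recipe (a text-derived scale and shift applied to convolved visual features) plus the reverse direction described in the text (text features modulated by a pooled visual summary); how the text-modulated features are broadcast back into the spatial sum is our assumption, not the paper's exact parameterisation.

```python
import torch
import torch.nn as nn

class BidirectionalFiLM(nn.Module):
    """Sketch of the layer described above: visual features modulated by text,
    text features modulated by a pooled visual summary, returning the summed
    modulated features plus a max-pooled summary over spatial dimensions."""
    def __init__(self, d_text, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.gamma = nn.Linear(d_text, c_out)        # text -> scale for visual features
        self.beta = nn.Linear(d_text, c_out)         # text -> shift for visual features
        self.text_gamma = nn.Linear(c_out, d_text)   # visual summary -> scale for text
        self.text_beta = nn.Linear(c_out, d_text)    # visual summary -> shift for text
        self.text_to_vis = nn.Linear(d_text, c_out)  # project modulated text back (assumed)

    def forward(self, vis, text):                    # vis: (B, c_in, H, W), text: (B, d_text)
        v = self.conv(vis)
        v_mod = self.gamma(text)[:, :, None, None] * v + self.beta(text)[:, :, None, None]
        pooled = v.amax(dim=(2, 3))                  # visual summary that modulates the text
        t_mod = self.text_gamma(pooled) * text + self.text_beta(pooled)
        fused = v_mod + self.text_to_vis(t_mod)[:, :, None, None]  # sum of modulated features
        summary = fused.amax(dim=(2, 3))             # max-pooled summary over spatial dims
        return fused, summary
```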
<<</() layer>>>
<<<The model>>>
We model interactions between observations from the environment, the goal, and the document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. Since the environment here is textual, we consider the grid of word embeddings as the visual features. The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model.
Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20.
We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21.
We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have
$_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as
where $_{\rm policy}$ and $_{\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details.
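As a rough illustration of the text-side building blocks used above (a self-attention summary over BiLSTM states, and dot-product attention over the document conditioned on a query such as the goal summary), here is a minimal PyTorch sketch; variable names and shapes are hypothetical and this is not the authors' implementation.

import torch
import torch.nn.functional as F

def self_attention_summary(H, scorer):
    # H: (batch, T, d) BiLSTM states; scorer: a Linear(d, 1) module.
    # Returns a fixed-length summary as an attention-weighted sum of the states.
    alpha = F.softmax(scorer(H).squeeze(-1), dim=-1)       # (batch, T)
    return torch.einsum('bt,btd->bd', alpha, H)

def conditional_attention(H_doc, query):
    # Dot-product attention over the document states conditioned on a query
    # vector such as the goal summary. H_doc: (batch, T, d); query: (batch, d).
    alpha = F.softmax(torch.einsum('btd,bd->bt', H_doc, query), dim=-1)
    return torch.einsum('bt,btd->bd', alpha, H_doc)

# Hypothetical usage with separate BiLSTM encoders for goal and document:
# goal_summary = self_attention_summary(goal_bilstm(goal_emb)[0], goal_scorer)
# doc_given_goal = conditional_attention(doc_bilstm(doc_emb)[0], goal_summary)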
<<</The model>>>
<<</Model>>>
<<<Experiments>>>
We consider variants of by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information.
We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for model details on our model and baselines, and appendix SECREF10 for training details.
<<<Comparison to baselines and ablations>>>
We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks—it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details).
<<</Comparison to baselines and ablations>>>
<<<Curriculum learning for complex environments>>>
Due to the long sequence of co-references the agent must perform in order to solve the full ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents), we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full and in which the model was trained on $10\times 10$ versions of the full . We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, performance of the final model trails that of human players, who can consistently solve . This highlights the difficulties of the problem and suggests that there is significant room for improvement in developing better language grounded policy learners.
<<<Attention maps.>>>
Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that attention mechanisms in help identify relevant information in the document.
<<</Attention maps.>>>
<<<Analysis of trajectories and failure modes.>>>
We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, the agent occasionally gets stuck evading monsters indefinitely, causing it to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials.
<<</Analysis of trajectories and failure modes.>>>
<<</Curriculum learning for complex environments>>>
<<</Experiments>>>
<<<Conclusion>>>
We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail the performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nLanguage-conditioned policy learning.\nLanguage grounding.\n\nModel\n() layer\nThe model\nExperiments\nComparison to baselines and ablations\nCurriculum learning for complex environments\nAttention maps.\nAnalysis of trajectories and failure modes.\nConclusion"
],
"type": "outline"
}
|
1908.08593
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Revealing the Dark Secrets of BERT
<<<Abstract>>>
BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
<<</Abstract>>>
<<<Introduction>>>
Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN).
One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4.
However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions:
We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights.
We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%.
<<</Introduction>>>
<<<Related work>>>
There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers.
BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text.
Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models.
Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations.
<<</Related work>>>
<<<Methodology>>>
We pose the following research questions:
What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30)
What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36)
How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39)
The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable.
We use the following subset of GLUE tasks BIBREF4 for fine-tuning:
MRPC: the Microsoft Research Paraphrase Corpus BIBREF13
STS-B: the Semantic Textual Similarity Benchmark BIBREF14
SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15
QQP: the Quora Question Pairs dataset
RTE: the Recognizing Textual Entailment datasets
QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3
MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16
Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert).
In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens.
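For concreteness, self-attention maps of this kind can be obtained with the current HuggingFace transformers package (the authors used the earlier PyTorch BERT release, so the exact extraction code may have differed); the snippet below is an illustrative sketch rather than the paper's pipeline.

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, L, L), i.e. one L x L self-attention map per head,
# where L is the length of the tokenized input sequence.
attention_maps = torch.stack(outputs.attentions)   # (layers, batch, heads, L, L)
print(attention_maps.shape)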
<<</Methodology>>>
<<<Experiments>>>
In this section, we present the experiments conducted to address the above research questions.
<<<BERT's self-attention patterns>>>
Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention map types that are repeatedly encoded across different heads. Consistent with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes:
Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP];
Diagonal: formed by the attention to the previous/following tokens;
Vertical+Diagonal: a mix of the previous two types;
Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC);
Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure.
Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding.
To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set.
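A sketch of such a pattern classifier is given below. The 8 convolutional layers with ReLU activations follow the description above, but the channel width, kernel sizes, pooling, and the assumption that input maps are resized to a fixed resolution are ours, not details reported in the paper.

import torch.nn as nn

class AttentionMapClassifier(nn.Module):
    # Classifies an L x L self-attention map (resized to a fixed resolution)
    # into one of the five pattern classes described above.
    def __init__(self, num_classes=5, width=32, num_conv_layers=8):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(num_conv_layers):
            layers += [nn.Conv2d(in_ch, width, kernel_size=3, padding=1), nn.ReLU()]
            in_ch = width
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(width, num_classes)

    def forward(self, x):          # x: (batch, 1, H, W), e.g. maps resized to 64 x 64
        h = self.pool(self.conv(x)).flatten(1)
        return self.fc(h)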
<<<Results>>>
fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task.
We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens.
<<</Results>>>
<<</BERT's self-attention patterns>>>
<<<Relation-specific heads in BERT>>>
In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments.
The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation.
We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentence is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames.
To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences.
For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence.
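In terms of the per-layer attention tensors described earlier, this scoring scheme can be sketched as follows; function and variable names are ours and the snippet is an illustration of the described computation rather than the authors' code.

import numpy as np

def frame_relation_scores(attention_maps_per_sentence, linked_pairs_per_sentence):
    # attention_maps_per_sentence: list of arrays of shape (layers, heads, L, L)
    # linked_pairs_per_sentence:   list of non-empty lists of (i, j) token-index
    #                              pairs realising the annotated semantic link
    #                              (both directions can be listed if desired).
    per_sentence = []
    for att, pairs in zip(attention_maps_per_sentence, linked_pairs_per_sentence):
        # maximum absolute attention weight over the annotated pairs, per head
        weights = np.stack([np.abs(att[:, :, i, j]) for (i, j) in pairs], axis=-1)
        per_sentence.append(weights.max(axis=-1))      # (layers, heads)
    return np.mean(per_sentence, axis=0)               # average over all sentences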
<<</Relation-specific heads in BERT>>>
<<<Change in self-attention patterns after fine-tuning>>>
Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate the contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution.
<<</Change in self-attention patterns after fine-tuning>>>
<<<Attention to linguistic features>>>
In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes).
We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks.
For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature.
For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others.
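A sketch of this aggregation for a single example is shown below; names are illustrative and the snippet mirrors the description above rather than the authors' exact code.

import numpy as np

def feature_attention_score(att, feature_positions, seq_len):
    # att: (layers, heads, L, L) self-attention weights for one example.
    # feature_positions: token indices of the feature of interest
    #                    (e.g. all negation tokens in the sentence).
    # Returns a (layers, heads) map of length-normalised attention to the feature.
    per_token = [att[:, :, :, j].sum(axis=-1) / seq_len for j in feature_positions]
    return np.max(per_token, axis=0)   # take the strongest token if several occur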
<<</Attention to linguistic features>>>
<<<Token-to-token attention>>>
To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token.
<<</Token-to-token attention>>>
<<<Disabling self-attention heads>>>
Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers.
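The core operation of this disabling procedure can be sketched as a simple function over a layer's attention distributions; applying it in practice requires calling it inside the self-attention forward pass (e.g. by subclassing the attention module), since the probabilities are an intermediate quantity, and that wiring is omitted here.

import torch

def disable_heads(attention_probs, heads_to_disable):
    # attention_probs: (batch, num_heads, L, L) attention distributions of one layer.
    # Replaces each listed head's distribution with a uniform one (a = 1/L for
    # every token), which removes the learned pattern while keeping the
    # information flow of the original model.
    probs = attention_probs.clone()
    L = probs.size(-1)
    probs[:, heads_to_disable, :, :] = 1.0 / L
    return probs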
<<</Disabling self-attention heads>>>
<<</Experiments>>>
<<<Discussion>>>
In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it.
We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks.
<<</Discussion>>>
<<<Conclusion>>>
In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT.
Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition.
Another direction for future work is to study self-attention patterns in a different language. We think that it would allow us to disentangle attention maps potentially encoding linguistic information from heads that use simple heuristics like attending to the following/previous tokens.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nMethodology\nExperiments\nBERT's self-attention patterns\nResults\nRelation-specific heads in BERT\nChange in self-attention patterns after fine-tuning\nAttention to linguistic features\nToken-to-token attention\nDisabling self-attention heads\nDiscussion\nConclusion"
],
"type": "outline"
}
|
1911.02711
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis
<<<Abstract>>>
Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of review and summary information in an implicit manner, which limits their performance to some extent. In this paper, we propose a hierarchically-refined attention network for better exploiting multi-interaction between a review and its summary for sentiment analysis. In particular, the representation of a review is layer-wise refined by attention over the summary representation. Empirical results show that our model can better make use of user-written summaries for review sentiment analysis, and is also more effective compared to existing methods when the user summary is replaced with summary generated by an automatic summarization system.
<<</Abstract>>>
<<<Introduction>>>
Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applications BIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification.
To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time.
One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders.
To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification.
We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolute improvements compared to the previous best method on SNAP Amazon review benchmark.
<<</Introduction>>>
<<<Related Work>>>
The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11.
In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words.
Rationales were also introduced to the sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification.
There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent.
<<</Related Work>>>
<<<Method>>>
In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods.
<<<Problem Formulation>>>
The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a review and $X^s = x^s_1, x^s_2,...,x^s_m$ is its summary, and the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$, where $M$ is the total number of training examples.
<<</Problem Formulation>>>
<<<Model Overview>>>
Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference sublayer acts as a key component: it takes the hidden states of both the original review and the summary as input and calculates dot-product attention weights for the original review under additional supervision from the summary information. Multi-head attention BIBREF18 as well as residual connections are also adopted. The output layer predicts the potential sentiment label according to the hidden states from the previous layer.
<<</Model Overview>>>
<<<Summary Encoder>>>
The input to the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ are calculated from a sequence of $\mathbf {x}_t$ ($t \in [1,...,m]$).
A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_m^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_m^s}}\rbrace $, respectively. The two hidden states are concatenated to form a final representation:
We then apply an average-pooling operation over the hidden states and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_m)$ as the final representation of the summary text.
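A minimal PyTorch sketch of this summary encoder is given below; the embedding and hidden sizes follow the experimental settings reported later in the paper, but the module itself is illustrative rather than the authors' code.

import torch
import torch.nn as nn

class SummaryEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_size=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_size,
                              batch_first=True, bidirectional=True)

    def forward(self, summary_ids):
        # summary_ids: (batch, m) word indices of the summary
        states, _ = self.bilstm(self.emb(summary_ids))   # (batch, m, 2 * hidden_size)
        return states.mean(dim=1)                        # average pooling over time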
<<</Summary Encoder>>>
<<<Hierarchically-Refined Review Encoder>>>
The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer.
<<<Sequence Encoding Layer>>>
Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^w_n \rbrace $.
<<</Sequence Encoding Layer>>>
<<<Attention Inference Layer>>>
In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention. Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated by
where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed.
Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19:
$\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any.
According to the equations of the standard LSTM and Equation DISPLAY_FORM13, the tokens of the original review that are most relevant to the summary receive more focus through consulting the summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompasses key features of the summary representation. The multi-head attention mechanism ensures that multi-faceted semantic dependency features can be captured during this process, which is beneficial for scenarios where several key points exist in one review.
Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that a bi-directional LSTM works better than a self-attention network as a basic layer structure. This may result from the fact that the Transformer requires a larger amount of training data to be most effective.
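Below is a rough PyTorch sketch of one attention inference layer, using the summary representation as the query and the review hidden states as keys and values, followed by the residual connection and layer normalization. The head count and hidden size follow the settings above, but the exact way the per-token weights are combined with the value projections is our reading of the description, so treat this as a sketch rather than the reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionInferenceLayer(nn.Module):
    def __init__(self, hidden_size=512, num_heads=4):
        super().__init__()
        self.k = num_heads
        self.d_head = hidden_size // num_heads
        self.W_q = nn.Linear(hidden_size, hidden_size, bias=False)
        self.W_k = nn.Linear(hidden_size, hidden_size, bias=False)
        self.W_v = nn.Linear(hidden_size, hidden_size, bias=False)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, review_states, summary_repr):
        # review_states: (batch, n, hidden); summary_repr: (batch, hidden)
        B, n, d = review_states.shape
        q = self.W_q(summary_repr).view(B, self.k, 1, self.d_head)            # summary as query
        k = self.W_k(review_states).view(B, n, self.k, self.d_head).transpose(1, 2)
        v = self.W_v(review_states).view(B, n, self.k, self.d_head).transpose(1, 2)
        scores = torch.matmul(q, k.transpose(-2, -1)) / self.d_head ** 0.5    # (B, k, 1, n)
        alpha = F.softmax(scores, dim=-1)                                     # weights over review tokens
        # re-weight each review token's value projection by its relevance to the summary
        weighted = alpha.transpose(-2, -1) * v                                # (B, k, n, d_head)
        attended = weighted.transpose(1, 2).reshape(B, n, d)
        # residual connection around the attention, followed by layer normalization
        return self.norm(review_states + attended)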
<<</Attention Inference Layer>>>
<<</Hierarchically-Refined Review Encoder>>>
<<<Output Layer>>>
Finally, global average pooling is applied over the output of the previous layer, followed by a classifier layer:
where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned.
<<</Output Layer>>>
<<<Training>>>
Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between the predicted sentiment distribution $\mathbf {p}$ and the gold label $y_t$:
where $\mathbf {p}^{y_t}$ denotes the component of $\mathbf {p}$ that corresponds to the gold label $y_t$.
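In PyTorch, the output layer and this objective amount to pooling, a linear classifier, and standard cross-entropy; the sketch below is illustrative, with the classifier module standing in for the parameters $\mathbf {W}$ and $\mathbf {b}$ above.

import torch.nn.functional as F

def output_and_loss(H, labels, classifier):
    # H: (batch, n, hidden) hidden states from the last review encoder layer.
    # classifier: a Linear(hidden, 5) layer playing the role of W and b above.
    pooled = H.mean(dim=1)                   # global average pooling over tokens
    logits = classifier(pooled)
    loss = F.cross_entropy(logits, labels)   # labels: gold ratings in {0, ..., 4}
    return logits.argmax(dim=-1), loss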
<<</Training>>>
<<</Method>>>
<<<Experiments>>>
We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects.
<<<Datasets>>>
We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 million Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset are shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, that is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set.
<<</Datasets>>>
<<<Experimental Settings>>>
We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV.
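These optimisation settings map directly onto a PyTorch optimiser and scheduler; the snippet below is a sketch in which the decay rate is interpreted as a per-epoch multiplicative factor (an assumption) and a stand-in model is used.

import torch

model = torch.nn.Linear(10, 5)   # stand-in for the actual network described above
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4,
                             betas=(0.9, 0.999), eps=1e-8)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.97)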
We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Note that in our work PG-Net can be replaced by any other summarization model.
<<</Experimental Settings>>>
<<<Baselines>>>
<<<HSSC @!START@BIBREF6@!END@.>>>
This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder.
<<</HSSC @!START@BIBREF6@!END@.>>>
<<<SAHSSC @!START@BIBREF7@!END@.>>>
This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations.
<<</SAHSSC @!START@BIBREF7@!END@.>>>
<<<BiLSTM+Pooling.>>>
For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem.
<<</BiLSTM+Pooling.>>>
<<<BiLSTM+Self-attention @!START@BIBREF13@!END@.>>>
This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours.
<<</BiLSTM+Self-attention @!START@BIBREF13@!END@.>>>
<<<BiLSTM+Hard Attention>>>
To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary.
For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder.
<<</BiLSTM+Hard Attention>>>
<<</Baselines>>>
<<<Development Experiments>>>
We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29.
<<<Self-attention Baseline>>>
We compare different numbers of BiLSTM layers and hidden sizes in the BiLSTM self-attention baseline. As can be seen, stacking more BiLSTM layers does not give better results, and larger hidden sizes do not improve over a hidden size of 256 either.
<<</Self-attention Baseline>>>
<<<Hidden Size>>>
We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments.
<<</Hidden Size>>>
<<<Number of Layers>>>
As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance.
<<</Number of Layers>>>
<<</Development Experiments>>>
<<<Results>>>
Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard attention receives more supervision than soft attention, in the form of supervision signals derived from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user-written or automatically generated summaries.
A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents.
With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states.
Finally, the fact that our baseline and final models outperform the state-of-the-art joint-training methods when gold summaries are used shows the importance of making use of user-written summaries when they are available. Even with system-generated summaries, our models still outperform HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries.
<<<Review Length>>>
Figure FIGREF37 shows line graphs of the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to the two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when a review contains irrelevant sentimental words, which usually occur in longer reviews such as the example in Figure FIGREF3. The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information.
<<</Review Length>>>
<<<Case Study>>>
Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$.
As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from the word “fun") and (2) not difficult to learn (from the phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparison, the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness in leveraging abstractive summary information.
Figure FIGREF38 illustrates a 5-star-rating example with golden summary. The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary.
<<</Case Study>>>
<<</Results>>>
<<</Experiments>>>
<<<Conclusion>>>
We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nMethod\nProblem Formulation\nModel Overview\nSummary Encoder\nHierarchically-Refined Review Encoder\nSequence Encoding Layer\nAttention Inference Layer\nOutput Layer\nTraining\nExperiments\nDatasets\nExperimental Settings\nBaselines\nHSSC @!START@BIBREF6@!END@.\nSAHSSC @!START@BIBREF7@!END@.\nBiLSTM+Pooling.\nBiLSTM+Self-attention @!START@BIBREF13@!END@.\nBiLSTM+Hard Attention\nDevelopment Experiments\nSelf-attention Baseline\nHidden Size\nNumber of Layers\nResults\nReview Length\nCase Study\nConclusion"
],
"type": "outline"
}
|
1910.13890
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation
<<<Abstract>>>
Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings.
<<</Abstract>>>
<<<Introduction>>>
Neural machine translation (NMT) systems are conventionally trained by maximizing the log-likelihood on a training corpus in order to learn distributed representations of words according to their sentence context, which is highly demanding in terms of both training data and network capacity. Under conditions of lexical sparsity, the model may struggle to learn accurate representations of words. Such conditions arise when the amount of training examples is insufficient to observe words in different contexts, and particularly in translation of morphologically-rich languages, where the same word can have exponentially many different surface realizations due to syntactic conditions, many of which are rarely, if ever, observed in any set of collected examples. The standard approach to overcome this limitation is to replace the word representations in the model with subword units that are shared among words and are, in principle, more reliable as they are observed more frequently and in varying contexts BIBREF0, BIBREF1. One drawback of this approach, however, is that the subword vocabulary is estimated with word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion as well as the translation objective; this may result in morphological errors during splitting, yielding subword units that are semantically ambiguous because they are used in far too many lexical contexts BIBREF2. Moreover, words are generated by predicting multiple subword units, which makes generalizing to unseen word forms more difficult, since some of the subword units needed to reconstruct a given word may be unlikely in the given context. To alleviate the sub-optimal effects of explicit segmentation and to generalize better to new morphological forms, recent studies have explored extending the same approach to model translation directly at the level of characters BIBREF3, BIBREF4, which, in turn, has been shown to require comparably deeper networks, as the network then needs to learn longer-distance grammatical dependencies BIBREF5.
In this paper, we explore the benefit of explicitly modeling variations in the surface forms of words using methods from deep latent variable modeling in order to improve the translation accuracy in low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation, a process we believe follows a certain hierarchical procedure. Our model translates words one character at a time based on word representations learned compositionally from sub-lexical components, which are parameterized by a hierarchical latent variable model mimicking the process of morphological inflection, consisting of a continuous-space dense vector capturing the lexical semantics, and a set of (approximately) discrete features representing the morphosyntactic role of the word in a given sentence. Each word representation during decoding is reformulated based on the shared latent morphological features, aiding in learning more reliable representations of words under sparse settings by generalizing across their different surface forms. We evaluate our method in translating English into three morphologically-rich languages, each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT.
<<</Introduction>>>
<<<Evaluation>>>
<<<Models>>>
We evaluate our model by comparing it in machine translation against three baselines which constitute the conventional open-vocabulary NMT methods, including architectures using atomic parameterization either with subword units segmented with BPE BIBREF0 or characters, and the hierarchical parameterization method employed for generating all words in the output. We implement all architectures using Pytorch BIBREF6 within the OpenNMT-py framework BIBREF7.
<<</Models>>>
<<<Data and Languages>>>
In order to evaluate our model we design two sets of experiments. The experiments in §SECREF8 aim to evaluate different methods under low-resource settings, for languages with different morphological typology. We model the machine translation task from English into three languages with distinct morphological characteristics: Arabic (templatic), Czech (fusional), and Turkish (agglutinative). We use the TED Talks corpora BIBREF8 for training the NMT models for these experiments. In §SECREF10, we conduct more experiments in Turkish to demonstrate the case of increased data sparsity using multi-domain training corpora, where we extend the training set using corpora from EU Bookshop BIBREF9, Global Voices, Gnome, Tatoeba, Ubuntu BIBREF10, KDE4 BIBREF11, Open Subtitles BIBREF12 and SETIMES BIBREF13. The statistical characteristics of the training sets are given in Tables TABREF16 and TABREF17. We use the official evaluation sets of the IWSLT for validating and testing the accuracy of the models. In order to increase the number of unknown and rare words in the evaluation sets we measure accuracy on large test sets combining evaluation sets from many years (Table TABREF18 presents the evaluation sets used for development and testing). The accuracy of each model output is measured using BLEU BIBREF15 and chrF3 BIBREF16 metrics, whereas the significance of the improvements are computed using bootstrap hypothesis testing BIBREF17.
<<</Data and Languages>>>
<<<Training Settings>>>
All models are implemented using gated recurrent units (GRU) BIBREF18, and have a single-layer bi-RNN encoder. The source sides of the data used for training all NMT models, and the target sides of the data used in training the subword-level NMT models are segmented using BPE with 16,000 merge rules. We implement all decoders using a comparable number of GRU parameters, including 3-layer stacked-GRU subword and character-level decoders, where the attention is computed after the 1st layer BIBREF19 and a 3-layer hierarchical decoder which implements the attention mechanism after the 2nd layer. All models use an embedding dimension and GRU size of 512. The latent morphology model uses the same hierarchical GRU architecture, where the middle layer is augmented using 4 multi-layer perceptrons with 256 hidden units. We use a lemma vector dimension of 150, 10 inflectional features (See §SECREF21 for experiments conducted to tune the feature dimensions) and set the regularization constant to $\rho =0.4$. All models are trained using the Adam optimizer BIBREF20 with a batch size of 100, dropout rate of 0.2, learning rate of 0.0004 and learning rate decay of 0.8, applied when the perplexity does not decrease at a given epoch. Translations are generated with beam search with a beam size of 5, where the hierarchical models implement the hierarchical beam search BIBREF21.
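For quick reference, the hyperparameters reported above can be collected into a single configuration sketch. This is an illustrative Python dictionary only; the key names are ours and do not correspond to actual OpenNMT-py options.

# Illustrative summary of the reported hyperparameters (key names are ours,
# not actual OpenNMT-py flags).
training_config = {
    "encoder": "1-layer bi-GRU",
    "decoder_layers": 3,
    "bpe_merge_rules": 16000,
    "embedding_dim": 512,
    "gru_size": 512,
    "lemma_dim": 150,
    "num_inflectional_features": 10,
    "regularization_rho": 0.4,
    "optimizer": "adam",
    "batch_size": 100,
    "dropout": 0.2,
    "learning_rate": 4e-4,
    "learning_rate_decay": 0.8,  # applied when dev perplexity stops improving
    "beam_size": 5,
}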
<<</Training Settings>>>
<<<Results>>>
<<<The Effect of Morphological Typology>>>
The experimental results given in Table TABREF9 show the performance of each model in translating English into Arabic, Czech and Turkish. In Turkish, the sparsest target language in our benchmark, character-based decoding proves to be more advantageous than the subword-level and hierarchical models, as the reduced granularity of the vocabulary units might aid in better predicting words under conditions of high data sparsity. In Arabic, on the other hand, the hierarchical decoding model proves advantageous compared to the character-level decoder, as it might be useful in better learning syntactic dependencies, and it also outperforms the subword-level decoder. Using the latent morphology model provides improvements of 0.51 and 0.30 BLEU points in Arabic and Turkish over the best-performing baselines, respectively. The fact that our model works efficiently in both Arabic and Turkish suggests that it can handle the generation of both concatenative and non-concatenative morphological transformations. The results in the English-to-Czech translation direction do not indicate a specific advantage of either method for generating fusional morphology, where morphemes are already optimized at the surface level, although our model is still able to achieve translation accuracy comparable to the character-level model.
<<</The Effect of Morphological Typology>>>
<<<The Effect of Data Size>>>
The experiment conducted in the English-to-Turkish translation direction by increasing the amount of training data with multi-domain corpora demonstrates a more challenging case, where there is a greater possibility of observing rare words, either in the form of morphological inflections due to the complex agglutinative morphology of Turkish, or ambiguous terminology raising from the multi-domain characteristics. In this experiment, the character-level model experiences a drop in performance and its accuracy is much lower than the subword-level one, suggesting that its capacity cannot cope with the increased amount of sparsity. Empirical results suggest that with increased capacity, character-level models carry the potential to reach comparable performance to subword-level models BIBREF4. Our model reaches a much larger improvement of 0.82 BLEU points over the subword-level and 2.54 BLEU points over the character-level decoders, suggesting that it could make use of the increased sparsity in learning more accurate representations.
<<</The Effect of Data Size>>>
<<<Predicting Unseen Words>>>
In addition to general evaluation using automatic metrics, we perform a more focused analysis to illustrate the performance of different methods in predicting unseen words. We sample the sentences from the development sets which contain out-of-vocabulary words, and compute the average perplexity per character on these sentences using different NMT models, as suggested by BIBREF22. In general, the highest perplexities are obtained using the subword-based model, suggesting that generating unseen words using subword units is indeed increasing the difficulty of prediction, compared to the character-level which obtains the lowest perplexity. This result indicates that increased granularity aids in reducing the uncertainty during prediction. Similar to the results in §SECREF8, in Czech the values are almost comparable. Due to its stochastic nature, our model yields higher perplexity values compared to the hierarchical model, whereas the values range between subword and character-based models, possibly finding an optimal level of granularity between the two solutions.
<<</Predicting Unseen Words>>>
<<<Feature Variations>>>
In order to understand whether the latent inflectional features in fact capture information about variations related to morphological transformations, we try generating different surface forms of the same lemma by assigning different values to the inflectional features. We use the latent morphology model based decoder to translate the English word `go', and after sampling the lemma, we fix its value and vary the values of the inflectional features at random positions for generating different outputs. Table TABREF14 presents different sets of feature values and the corresponding outputs generated by the decoder.
The model generates different surface forms for different sets of features, confirming that latent variables encode information related to the infinitive form of the verb, as well as its formality conditions, prepositions, person, number and tense. We also observe that many trials based on different feature combinations may result in the same outputs, although some feature values may not be set in a single-word context. Varying the features individually does not necessarily yield distinct changes in the output, suggesting that some features may act jointly in determining the word form.
<<</Feature Variations>>>
<<</Results>>>
<<</Evaluation>>>
<<<Conclusion>>>
In this paper we presented a novel decoding architecture for NMT employing a hierarchical latent variable model to promote sparsity in lexical representations, which demonstrated promising application for morphologically-rich and low-resource languages. Our model generates words one character at a time by composing two latent features representing their lemmas and inflectional features. We evaluate our model against conventional open-vocabulary NMT solutions, such as subword and character-level decoding methods, in translating English into three morphologically-rich languages with different morphological typologies under low- to mid-resource settings. Our results show that our model can significantly outperform subword-level NMT models, while demonstrating better capacity than character-level models in coping with increased amounts of data sparsity. We also conduct ablation studies on the effect of feature variations on the predictions, which show that, despite being completely unsupervised, our model can in fact capture morphosyntactic information and generalize to different surface forms of words.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nEvaluation\nModels\nData and Languages\nTraining Settings\nResults\nThe Effect of Morphological Typology\nThe Effect of Data Size\nPredicting Unseen Words\nFeature Variations\nConclusion"
],
"type": "outline"
}
|
1909.01492
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
<<<Abstract>>>
Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -- a formal model verification method. We modify the conventional log-likelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries.
<<</Abstract>>>
<<<Introduction>>>
Deep models have been shown to be vulnerable against adversarial input perturbations BIBREF0, BIBREF1. Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work BIBREF2, BIBREF3, BIBREF4 also exposed the vulnerabilities of neural NLP models, e.g. with small character perturbations BIBREF5 or paraphrases BIBREF6, BIBREF7. These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models.
Common attempts to mitigate the issue are adversarial training BIBREF5 and data augmentation BIBREF3, BIBREF8, which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks BIBREF9, BIBREF10, BIBREF11. Rather than continuing the race with adversaries, formal verification BIBREF12, BIBREF13, BIBREF14 offers a different approach: it aims at providing provable guarantees to a given model specification. In the case of adversarial robustness, such a specification can be formulated as prediction consistency under any altered – but semantically invariant – input change.
In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16. The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) BIBREF17, BIBREF18, BIBREF19 to compute worst case bounds on specification satisfaction, as illustrated in Figure FIGREF1. Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy.
The contributions of this paper are twofold:
To the best of our knowledge, this paper is the first to introduce verification and verifiable training for neural networks in natural language processing (§SECREF3).
Through a series of experiments (§SECREF4), we demonstrate (a) the effectiveness of modeling input perturbations as a simplex and using simplex bounds with IBP for training and testing, (b) the weakness of adversarial training under exhaustive verification, (c) the effects of perturbation space on the performance of different methods, and (d) the impact of using GloVe and counter-fitted embeddings on the IBP verification bounds.
<<</Introduction>>>
<<<Related Work>>>
<<<Adversarial Examples in NLP.>>>
Creating adversarial examples for NLP systems requires identifying semantically invariant text transformations to define an input perturbation space. In this paper, given our specification, we study word- and character-level HotFlip attacks BIBREF5 – which consist of character and synonym replacements – on text classification tasks. We compare our verifiable approach to other defenses including adversarial training BIBREF20 and data augmentation BIBREF8, BIBREF3. Note that some existing adversarial perturbations such as syntactically controlled paraphrasing BIBREF7, exploiting backtranslation systems BIBREF6, or using targeted keyword attack BIBREF21 are beyond the specification in this paper.
<<</Adversarial Examples in NLP.>>>
<<<Formal Verification of Neural Networks.>>>
Formal verification provides a provable guarantee that models are consistent with a specification for all possible model inputs. Previous work can be categorised into complete methods that use Mixed-Integer Programming (MIP) BIBREF22, BIBREF23 or Satisfiability Modulo Theory (SMT) BIBREF14, BIBREF24, and incomplete methods that solve a convex relaxation of the verification problem BIBREF25, BIBREF26, BIBREF27. Complete methods perform exhaustive enumeration to find the worst case. Hence, complete methods are expensive and difficult to scale, though they provide exact robustness bounds. Incomplete methods provide loose robustness bounds, but can be more scalable and used inside the training loop for training models to be robust and verifiable BIBREF28, BIBREF26, BIBREF19, BIBREF17. Our work is the first to extend incomplete verification to text classification, considering input perturbations on a simplex and minimising worst case bounds to adversarial attacks in text classification. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models remains an open challenge.
<<</Formal Verification of Neural Networks.>>>
<<<Representations of Combinatorial Spaces.>>>
Word lattices and hypergraphs are data structures that have often been used to efficiently represent and process exponentially large numbers of sentences without exhaustively enumerating them. Applications include automatic speech recognition (ASR) output rescoring BIBREF29, machine translation of ASR outputs BIBREF30, paraphrase variants BIBREF31, and word segmentation alternatives BIBREF32. The specifications used to characterise the space of adversarial attacks are likewise a compact representation, and the algorithms discussed below operate on them without exhaustive enumeration.
<<</Representations of Combinatorial Spaces.>>>
<<</Related Work>>>
<<<Methodology>>>
We assume a fixed initial vector representation $\mathbf {z} _0$ of a given input sentence $z$ (e.g. the concatenation of pretrained word embeddings) and use a neural network model, i.e. a series of differentiable transformations $h_k$:
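The display equation is elided in this extraction; from the surrounding definitions it presumably reads

$\mathbf{z}_k = h_k(\mathbf{z}_{k-1}), \quad k = 1, \dots , K,$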
where $\mathbf {z} _k$ is the vector of activations in the $k$-th layer and the final output $\mathbf {z} _K$ consists of the logits for each class. Typically each $h_k$ will be an affine transformation followed by an activation function (e.g. ReLU or sigmoid). The affine transformation can be a convolution (with the inputs and outputs having an implied 2D structure) of a vector of activations at each point in a sequence; in what follows these activations will be concatenated along the sequence to form a vector $\mathbf {z} _k$.
<<<Verification>>>
Verification is the process of examining whether the output of a model satisfies a given specification. Formally, this means establishing whether the following holds true for a given normal model input $\mathbf {x} _0$: $\forall \mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0):~ \mathbf {z} _K \in \mathcal {X}_\mathrm {out}$, where $\mathcal {X}_\mathrm {out}$ characterizes a constraint on the outputs, and $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ defines a neighbourhood of $\mathbf {x} _0$ throughout which the constraint should be satisfied.
In our concrete use case, we consider a specification of robustness against adversarial attacks which are defined by bounded input perturbations (synonym flips up to $\delta $ words, or character flips up to $\delta $ characters) of the original sentence $x$. The attack space $\mathcal {X}_\mathrm {in} (\mathbf {x} _0)$ is the set of vector representations (embeddings) of all such perturbed sentences. Denoting by $z_{K,y}$ the logit of label $y$, we formulate the output constraint that for all classes $y: z_{K,y_\textrm {true}} \ge z_{K,y}$. This specification establishes that the prediction of all perturbed sentences $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ should correspond to the correct label $y_\textrm {true}$. This specification may equivalently be formulated as a set of half-space constraints on the logits: for each class $y$
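The half-space constraints themselves are elided in this extraction; a plausible reconstruction from the description that follows is

$(\mathbf{e}_{y_\textrm{true}} - \mathbf{e}_{y})^\top \mathbf{z}_K \ge 0,$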
where $\mathbf {e}_{i}$ is a one-hot vector with 1 in the $i$-th position. In other words, the true class logit should be greater or equal than those for all other classes $y$, which means the prediction remains constant.
<<</Verification>>>
<<<Verification as Optimisation>>>
Verifying the specification in Eq. (DISPLAY_FORM10) can be done by solving the following constrained optimisation problem to find the input that would most strongly violate it:
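The optimisation problem itself is elided in this extraction; it presumably has the form

$\max _{\mathbf{z}_0 \in \mathcal{X}_\mathrm{in}(\mathbf{x}_0)} \; \mathbf{c}^\top \mathbf{z}_K \quad \text{s.t.} \quad \mathbf{z}_k = h_k(\mathbf{z}_{k-1}), \; k = 1, \dots , K,$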
where $\mathbf {c} $ is a vector with entries $c_y = 1$, $c_{y_\textrm {true}} = -1$ and 0 everywhere else. If the optimal value of the above optimisation problem is smaller than 0, then the specification in Eq. (DISPLAY_FORM10) is satisfied, otherwise a counter-example has been found. In our case, this corresponds to a successful adversarial attack.
<<</Verification as Optimisation>>>
<<<Modeling Input Perturbations using Simplices>>>
In the interests of computational feasibility, we will actually attempt to verify the specification on a larger, but more tractable input perturbation space $\bar{\mathcal {X}}_\mathrm {in} \supseteq \mathcal {X}_\mathrm {in}$. Any data point that is verifiable on this larger input perturbation space is necessarily verifiable with respect to the original specification.
In the domain of image classification, $\mathcal {X}_\mathrm {in}$ is often modeled as an $L_\infty $-ball, corresponding to input perturbations in which each pixel may be independently varied within a small interval. However, using such interval bounds is unsuitable for our situation of perturbations consisting of a small number $\delta $ of symbol substitutions. Although we could construct an axis-aligned bounding box $\bar{\mathcal {X}}_\mathrm {in}$ in embedding space that encompasses all of $\mathcal {X}_\mathrm {in}$, it would over-approximate the perturbation space to such an extent that it would contain perturbations where all symbols in the sentence have been substituted simultaneously. To remedy this, we propose a tighter over-approximation in the form of a `simplex' in embedding space. We first define this for the special case $\delta =1$, in which $\mathcal {X}_\mathrm {in} = \lbrace \mathbf {x} _0\rbrace \cup \lbrace \mathbf {p} ^{(m)}_0 : 1\le m\le M\rbrace $ consists of the representations of all $M$ sentences $p^{(m)}$ derived from $x$ by performing a single synonym (or character) substitution, together with the unperturbed sentence $x$ itself. In this case we define $\bar{\mathcal {X}}_\mathrm {in}$ to be the convex hull $\mathcal {S}_1$ of $\mathcal {X}_\mathrm {in}$. Note we are not considering contextual embeddings BIBREF33 here. Each `vertex' $\mathbf {p} ^{(m)}_0$ is a sequence of embedding vectors that differs from $\mathbf {x} _0$ at only one word (or character) position.
For a larger perturbation radius $\delta >1$, the cardinality of $\mathcal {X}_\mathrm {in}$ grows exponentially, so manipulating its convex hull becomes infeasible. However, dilating $\mathcal {S}_1$ centered at $\mathbf {x} _0$, scaling it up by a factor of $\delta $, yields a simplex $\mathcal {S}_\delta $ with $M+1$ vertices that contains $\mathcal {X}_\mathrm {in}$.
More formally, we define a region in the input embedding space based on the $M$ `elementary' perturbations $\lbrace \mathbf {p} ^{(m)}_0: m = 1 \ldots M\rbrace $ of $\mathbf {x} _0$ defined earlier for the $\delta =1$ case. For perturbations of up to $\delta $ substitutions, we define $\bar{\mathcal {X}}_\mathrm {in}(\mathbf {x} _0)$ as the convex hull of $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $, where $\mathbf {z} ^{(0)}_0=\mathbf {x} _0$ denotes the original (unperturbed) sentence representation and, for $m\ge 1$, $\mathbf {z} ^{(m)}_0 = \mathbf {x} _0+\delta \cdot (\mathbf {p} ^{(m)}_0-\mathbf {x} _0)$. The convex hull is an over-approximation of $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$: it contains the representations of all sentences derived from $x$ by performing up to $\delta $ substitutions at distinct word (or character) positions.
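A minimal sketch, in Python, of how these simplex vertices could be constructed; it assumes the embeddings of the original sentence and of its $M$ elementary (single-substitution) perturbations are already available, and all names are ours.

import numpy as np

def simplex_vertices(x0, elementary_perturbations, delta):
    """Vertices z^(0), ..., z^(M) of the over-approximating simplex.

    x0: embedding of the unperturbed sentence, e.g. flattened to shape (T*d,)
    elementary_perturbations: list of M arrays, each the embedding of a sentence
        differing from x0 by a single synonym/character substitution
    delta: maximum number of substitutions to cover
    """
    vertices = [x0]  # z^(0) = x0
    for p_m in elementary_perturbations:
        # z^(m) = x0 + delta * (p^(m) - x0): dilate the delta=1 simplex around x0
        vertices.append(x0 + delta * (p_m - x0))
    return np.stack(vertices)  # shape (M + 1, T*d)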
<<</Modeling Input Perturbations using Simplices>>>
<<<Interval Bound Propagation>>>
To estimate the optimal value of the problem (DISPLAY_FORM12), given an input $\mathbf {z} _0$, we can propagate the upper/lower bounds on the activations $\mathbf {z} _k$ of each layer using interval arithmetic BIBREF17.
We begin by computing interval bounds on the first layer's activations. Recall that any input $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}$ will lie within the convex hull of certain vertices $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $. Then, assuming that the first layer $h_1$ is an affine transformation (e.g. linear or convolutional) followed by a monotonic activation function, the lower and upper bounds on the components $z_{1,i}$ of the first layer's activations $\mathbf {z} _1$ are as follows:
Note that these bounds are efficient to compute (by passing each perturbation $\mathbf {z} ^{(m)}_0$ through the first layer); in particular there is no need to compute the convex hull polytope.
For subsequent layers $k>1$, the bounds on the components $z_{k,i}$ of $\mathbf {z} _k$ are:
The above optimisation problems can be solved in closed form quickly for affine layers and monotonic activation functions, as illustrated in IBP. Finally, the lower and upper bounds of the output logits $\mathbf {z} _K$ can be used to construct an upper bound on the solution of (DISPLAY_FORM12):
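The display equations for these bounds are elided in this extraction. The following Python sketch illustrates the computations they describe, assuming an affine first layer followed by a monotonic activation (ReLU here, for illustration) and standard interval arithmetic for subsequent affine layers; all names are ours.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def first_layer_bounds(vertices, W1, b1):
    """Bounds on z_1: pass each simplex vertex through the first layer and take
    elementwise min/max (no convex hull polytope needs to be built)."""
    activations = relu(vertices @ W1.T + b1)   # shape (M + 1, d_1)
    return activations.min(axis=0), activations.max(axis=0)

def affine_interval_bounds(lower, upper, W, b):
    """Interval arithmetic for a subsequent affine layer z_k = W z_{k-1} + b.
    A monotonic activation can then be applied elementwise to both bounds."""
    mu = (upper + lower) / 2.0                 # interval centre
    r = (upper - lower) / 2.0                  # interval radius
    centre = W @ mu + b
    radius = np.abs(W) @ r
    return centre - radius, centre + radius

def worst_case_verified(lower_logits, upper_logits, y_true):
    """The specification holds if, for every class y != y_true, the upper bound
    of its logit minus the lower bound of the true logit is negative."""
    diffs = upper_logits - lower_logits[y_true]
    diffs[y_true] = -np.inf
    return bool(np.all(diffs < 0.0))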
<<<Verifiable Training.>>>
The upper bound in (DISPLAY_FORM17) is fast to compute (only requires two forward passes for upper and lower bounds through the network). Hence, we can define a loss to optimise models such that the models are trained to be verifiable. Solving (DISPLAY_FORM17) is equivalent to finding the worst-case logit difference, and this is achieved when the logit of the true class is equal to its lower bound, and all other logits equal to their upper bounds. Concretely, for each class $y \ne y_\textrm {true} $: $\hat{\mathbf {z}}_{K,y}(\delta ) = \overline{\mathbf {z}}_{K,y} (\delta ) $, and $\hat{\mathbf {z}}_{K,y_\textrm {true}}(\delta ) = \underline{\mathbf {z}}_{K,y_\textrm {true}} (\delta ) $. The training loss can then be formulated as
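The loss equation itself is elided in this extraction; a plausible reconstruction, consistent with the description below, is

$L = \kappa \, \ell (\mathbf{z}_K, y_\textrm{true}) + (1 - \kappa ) \, \ell (\hat{\mathbf{z}}_K(\delta ), y_\textrm{true}),$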
where $\ell $ is the cross-entropy loss, $\kappa $ a hyperparameter that controls the relative weights between the classification loss $L_\textrm {normal}$ and specification loss $L_\textrm {spec}$. If $\delta = 0$ then $\mathbf {z} _K = \hat{\mathbf {z}}_K(\delta )$, and thus $L$ reduces to a standard classification loss. Empirically, we found that a curriculum-based training, starting with $\kappa $=1 and linearly decreasing to 0.25, is effective for verifiable training.
<<</Verifiable Training.>>>
<<</Interval Bound Propagation>>>
<<</Methodology>>>
<<<Experiments>>>
We conduct verification experiments on two text classification datasets, Stanford Sentiment Treebank (SST) BIBREF15 and AG News corpus, processed in BIBREF16. We focus on word-level and character-level experiments on SST and character-level experiments on AG News. Our specification is that models should preserve their prediction against up to $\delta $ synonym substitutions or character typos, respectively.
<<<A Motivating Example>>>
We provide an example from Table TABREF29 to highlight different evaluation metrics and training methods. Given a sentence, “you ' ve seen them a million times .”, that is predicted correctly (called Nominal Accuracy) by a classification model, we want to further examine whether the model is robust against character typos (e.g., up to $\delta =3$ typos) to this example. One way is to use some heuristic to search for a valid example with up to 3 typos that can change the prediction the most (called adversarial example). We evaluate the model using this adversarial example and report the performance (called Adversarial Accuracy). However, even if the adversarial example is predicted correctly, one can still ask: is the model truly robust against any typos (up to 3) to this example? In order to have a certificate that the prediction will not change under any $\delta =3$ character typos (called verifiably robust), we could in theory exhaustively search over all possible cases and check whether any of the predictions is changed (called Oracle Accuracy). If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. To avoid this combinatorial growth, we can instead model all possible perturbations using the proposed simplex bounds and propagate the bounds through IBP at the cost of two forward passes. Following Eq. (DISPLAY_FORM12), we can check whether this example can be verified to be robust against all perturbations (called IBP-Verified Accuracy).
There are also a number of ways in which the training procedure can be enhanced to improve the verifiable robustness of a model against typos to the sentence. The baseline is to train the model with the original/normal sentence directly (called Normal Training). Another way is to randomly sample typo sentences among the 2,951 possible perturbations and add these sentences to the training data (called Data Augmentation Training). Yet another way is to find, at each training iteration, the adversarial example among the (subset of) 2,951 possible perturbations that can change the prediction the most; we then use the adversarial example alongside the training example (called Adversarial Training). Finally, as simplex bounds with IBP is efficient to run, we can train a model to be verifiable by minimising Eq. (DISPLAY_FORM19) (called Verifiable Training).
<<</A Motivating Example>>>
<<<Baselines>>>
In this section we detail our baseline models.
<<<Adversarial Training.>>>
In adversarial training BIBREF34, BIBREF20, the goal is to optimise the following saddle point problem:
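The saddle-point objective itself is elided in this extraction; it presumably takes the usual form

$\min _\theta \; \mathbb{E}_{(\mathbf{x}_0, y_\textrm{true})} \Big [ \max _{\mathbf{z}_0 \in \mathcal{X}_\mathrm{in}(\mathbf{x}_0)} \ell (\mathbf{z}_K, y_\textrm{true}) \Big ],$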
where the inner maximisation problem is to find an adversarial perturbation $\mathbf {z} _0\in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ that can maximise the loss. In the inner maximisation problem, we use HotFlip BIBREF5 with perturbation budget $\delta $ to find the adversarial example. The outer minimisation problem aims to update model parameters such that the adversarial risk of (DISPLAY_FORM24) is minimised. To balance between the adversarial robustness and nominal accuracy, we use an interpolation weight of 0.5 between the original cross-entropy loss and the adversarial risk.
<<</Adversarial Training.>>>
<<<Data Augmentation Training.>>>
In the data augmentation setup, we randomly sample a valid perturbation $z$ with perturbation budget $\delta $ from a normal input $x$, and minimise the cross-entropy loss given the perturbed sample $z$ (denoted as data augmentation loss). We also set the interpolation weight between the data augmentation loss and the original normal cross-entropy loss to 0.5.
<<</Data Augmentation Training.>>>
<<<Normal Training.>>>
In normal training, we use the likelihood-based training using the normal training input $x$.
<<</Normal Training.>>>
<<</Baselines>>>
<<<Setup>>>
We use a shallow convolutional network with a small number of fully-connected layers for SST and AG News experiments. The detailed model architectures and hyperparameter details are introduced in the supplementary material. Although we use shallow models for ease of verifiable training, our nominal accuracy is on par with previous work such as BIBREF15 (85.4%) and BIBREF35 (84.3%) in SST and BIBREF16 (87.18%) in AG News. During training, we set the maximum number of perturbations to $\delta =3$, and evaluate performance with the maximum number of perturbations from $\delta =1$ to 6 at test time.
For word-level experiments, we construct the synonym pairs using the PPDB database BIBREF36 and filter the synonyms with fine-grained part-of-speech tags using Spacy BIBREF37. For character-level experiments, we use synthetic keyboard typos from BIBREF3, and allow one possible alteration per character that is adjacent to it on an American keyboard. The allowable input perturbation space is much larger than for word-level synonym substitutions, as shown in Table TABREF48.
<<</Setup>>>
<<<Evaluation Metrics>>>
We use the following four metrics to evaluate our models: i) test set accuracy (called Acc.), ii) adversarial test accuracy (called Adv. Acc.), which uses samples generated by HotFlip attacks on the original test examples, iii) verifiable accuracy under IBP verification (called IBP-verified), that is, the ratio of test samples for which IBP can verify that the specification is not violated, and iv) exhaustively verified accuracy (called Oracle), computed by enumerating all possible perturbations given the perturbation budget $\delta $, where a sample is verifiably robust if the prediction is unchanged under all valid perturbations.
<<</Evaluation Metrics>>>
<<<Results>>>
Table TABREF28 shows the results of IBP training and baseline models under $\delta =3$ and $\delta =2$ perturbations on SST and AG News, respectively. Figures FIGREF31 and FIGREF36 show the character- and word-level results with $\delta $ between 1 and 6 under four metrics on the SST test set; similar figures for SST word-level (adversarial training, data augmentation) models and AG News dataset can be found in the supplementary material.
<<<Oracle Accuracy and Adversarial Accuracy.>>>
In Table TABREF28, comparing adversarial accuracy with exhaustive verification accuracy (oracle), we observe that although adversarial training is effective at defending against HotFlip attacks (74.9 / 76.8 / 85.5%), the oracle adversarial accuracy under exhaustive testing (25.8 / 74.6 / 81.6%) is much lower in SST-character / SST-word / AG-character level, respectively. For illustration, we show some concrete adversarial examples from the HotFlip attack in Table TABREF29. For some samples, even though the model is robust with respect to HotFlip attacks, its predictions are incorrect for stronger adversarial examples obtained using the exhaustive verification oracle. This underscores the need for verification, as robustness with respect to suboptimal adversarial attacks alone might give a false sense of security.
<<</Oracle Accuracy and Adversarial Accuracy.>>>
<<<Effectiveness of Simplex Bounds with IBP.>>>
Rather than sampling individual points from the perturbation space, IBP training covers the full space at once. The resulting models achieve the highest exhaustively verified accuracy at the cost of only moderate deterioration in nominal accuracy (Table TABREF28). At test time, IBP allows for constant-time verification with arbitrary $\delta $, whereas exhaustive verification requires evaluation over an exponentially growing search space.
<<</Effectiveness of Simplex Bounds with IBP.>>>
<<<Perturbation Space Size.>>>
In Table TABREF28, when the perturbation space is larger (SST character-level vs. SST word-level), (a) across models, there is a larger gap in adversarial accuracy and true robustness (oracle); (b) the difference in oracle robustness between IBP and adversarial training is even larger (73.1% vs. 25.8% and 76.5% vs. 74.6%).
<<</Perturbation Space Size.>>>
<<<Perturbation Budget.>>>
In Figures FIGREF31 and FIGREF36, we compare normal training, adversarial training, data augmentation, and verifiable training models with four metrics under various perturbation budgets on the SST dataset. Overall, as the perturbation budget increases, the adversarial accuracy, oracle accuracy, and IBP-verified accuracy decrease. We can observe that even for large perturbation budgets, verifiably trained models are still able to verify a sizable number of samples. Again, although adversarial accuracy flattens for larger perturbation budgets in the word level experiments, oracle verification can further find counterexamples to change the prediction. Note that exhaustive verification becomes intractable with large perturbation sizes.
<<</Perturbation Budget.>>>
<<<Computational Cost of Exhaustive Verification.>>>
The perturbation space in NLP problems is discrete and finite, and a valid option to verify the specification is to exhaustively generate predictions for all $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in} (\mathbf {x} _0)$, and then check if at least one does not match the correct label. Conversely, such an exhaustive (oracle) approach can also identify the strongest possible attack. But the size of $\mathcal {X}_\mathrm {in}$ grows exponentially with $\delta $, and exhaustive verification quickly becomes prohibitively expensive.
In Table TABREF48, we show the maximum perturbation space size in the SST and AG News test set for different perturbation radii $\delta $. This number grows exponentially as $\delta $ increases. To further illustrate this, Figure FIGREF49 shows the number of forward passes required to verify a given proportion of the SST test set for an IBP-trained model using exhaustive verification and IBP verification. IBP reaches verification levels comparable to an exhaustive verification oracle, but requires only two forward passes to verify any sample – one pass for computing the upper, and one for the lower bounds. Exhaustive verification, on the other hand, requires several orders of magnitude more forward passes, and there is a tail of samples with extremely large attack spaces.
<<</Computational Cost of Exhaustive Verification.>>>
<<</Results>>>
<<<Counter-Fitted Embeddings>>>
As shown in Figures FIGREF31 and FIGREF36, although IBP can verify arbitrary networks in theory, the verification bound is very loose except for models trained to be IBP-verifiable. One possible reason is the potentially large volume of the perturbation simplex. Since representations of substitution words/characters are not necessarily close to those of synonyms/typos in embedding space, the vertices of the simplex could be far apart, and thus cover a large area in representation space. Therefore, when propagating the interval bounds through the network, the interval bounds become too loose and fail to verify most of the examples if the models are not specifically trained. To test this hypothesis, we follow BIBREF38 and use fine-tuned GloVe embeddings trained to respect linguistic constraints; these representations (called counter-fitted embeddings) force synonyms to be closer and antonyms to be farther apart using word pairs from the PPDB database BIBREF36 and WordNet BIBREF39. We repeat the word-level experiments with these counter-fitted embeddings; Figures FIGREF36 and FIGREF36 show the experimental results. We observe that IBP verified accuracy is now substantially higher across models, especially for $\delta =1, 2, 3$. The examples which IBP can verify increase by up to 33.2% when using the counter-fitted embeddings (normal training, $\delta =1$). Moreover, adversarial and exhaustively verified accuracy are also improved, at the cost of a mild deterioration in nominal test accuracy. The IBP-trained model also further improves both its oracle accuracy and IBP verified accuracy. These results validate our hypothesis that reducing the simplex volume via soft linguistic constraints can provide even tighter bounds for IBP, resulting in larger proportions of verifiable samples.
<<</Counter-Fitted Embeddings>>>
<<</Experiments>>>
<<<Discussion>>>
Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification. On the other hand, exhaustive verification is computationally very expensive. Our results show that using the proposed simplex bounds with IBP can verify a sizable amount of test samples, and can be considered a potent verification method in an NLP context. We note however two limitations within the scope of this work: i) limited model depth: we only investigated models with few layers. IBP bounds are likely to become looser as the number of layers increases. ii) limited model types: we only studied models with CNN and fully connected layers.
We focused on the HotFlip attack to showcase specification verification in the NLP context, with the goal of understanding factors that impact its effectiveness (e.g. the perturbation space volume, see Section SECREF50). It is worth noting that symbol substitution is general enough to encompass other threat models such as lexical entailment perturbations BIBREF40, and could potentially be extended to the addition of pre/postfixes BIBREF2, BIBREF41.
Interesting directions of future work include: tightening IBP bounds to allow applicability to deeper models, investigating bound propagation in other types of neural architectures (e.g. those based on recurrent networks or self-attention), and exploring other forms of specifications in NLP.
<<</Discussion>>>
<<<Conclusion>>>
We introduced formal verification of text classification models against synonym and character flip perturbations. Through experiments, we demonstrated the effectiveness of the proposed simplex bounds with IBP both during training and testing, and found weaknesses of adversarial training compared with exhaustive verification. Verifiably trained models achieve the highest exhaustive verification accuracy on SST and AG News. IBP verifies models in constant time, which is exponentially more efficient than naive verification via exhaustive search.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nAdversarial Examples in NLP.\nFormal Verification of Neural Networks.\nRepresentations of Combinatorial Spaces.\nMethodology\nVerification\nVerification as Optimisation\nModeling Input Perturbations using Simplices\nInterval Bound Propagation\nVerifiable Training.\nExperiments\nA Motivating Example\nBaselines\nAdversarial Training.\nData Augmentation Training.\nNormal Training.\nSetup\nEvaluation Metrics\nResults\nOracle Accuracy and Adversarial Accuracy.\nEffectiveness of Simplex Bounds with IBP.\nPerturbation Space Size.\nPerturbation Budget.\nComputational Cost of Exhaustive Verification.\nCounter-Fitted Embeddings\nDiscussion\nConclusion"
],
"type": "outline"
}
|
1908.06006
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding
<<<Abstract>>>
The Hierarchical Attention Network (HAN) has made great strides, but it suffers a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN outperforms HAN everywhere, with only a modest increase in computation time. While results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available.
<<</Abstract>>>
<<<Introduction>>>
Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$.
One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters.
<<<Observed problem>>>
HAN was highly successful and established a new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that it has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”, “scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence).
As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive).
One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, regardless of the sentence scores.
<<</Observed problem>>>
<<<Context-aware HAN>>>
In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context.
The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6.
<<</Context-aware HAN>>>
<<</Introduction>>>
<<<HAN>>>
The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail.
<<<Notation>>>
Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$.
<<</Notation>>>
<<<Sentence encoder>>>
First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left:
$\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word:
Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations:
Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$:
$\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closer the annotation of a word is to this ideal representation, the more attention that word will be given.
The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$.
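A minimal PyTorch-style sketch of this sentence encoder, written from the equations above; dimensions and names are ours, bias terms and the masking of padded tokens are omitted, and the use of tanh follows the description of the alignment scores.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Bi-GRU word encoder followed by self-attention (a sketch)."""
    def __init__(self, d, d_s):
        super().__init__()
        self.bigru = nn.GRU(d, d_s, batch_first=True, bidirectional=True)
        self.W_s = nn.Linear(2 * d_s, 2 * d_s)          # dense layer on annotations
        self.u_s = nn.Parameter(torch.randn(2 * d_s))   # trainable "super-word"

    def forward(self, words):                   # words: (batch, T_i, d)
        h, _ = self.bigru(words)                # word annotations: (batch, T_i, 2*d_s)
        e = torch.tanh(self.W_s(h)) @ self.u_s  # alignment scores: (batch, T_i)
        alpha = torch.softmax(e, dim=1)         # attentional coefficients
        s = (alpha.unsqueeze(-1) * h).sum(1)    # sentence vector: (batch, 2*d_s)
        return s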
<<</Sentence encoder>>>
<<<Document encoder>>>
The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector.
<<</Document encoder>>>
<<</HAN>>>
<<<Proposed architecture: CAHAN>>>
As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes:
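The modified alignment equation is elided in this extraction; given the mention of $W_c\mathbf{c}_i$ in the complexity analysis below, it presumably reads

$e_{it} = \mathbf{u}_s^\top \tanh \left( W_s \mathbf{h}_{it} + W_c \, \mathbf{c}_i \right).$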
We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6.
<<<Summed context (CAHAN-SUM)>>>
We introduce two settings: (1) left-to-right and (2) bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes.
<<<Left-to-right (LR)>>>
In the LR case, the context vector is computed as the sum of the preceding sentence representations:
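The display equation is elided in this extraction; it presumably reads

$\overrightarrow{\mathbf{c}_i} = \sum _{j=1}^{i-1} \mathbf{s}_j.$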
<<</Left-to-right (LR)>>>
<<<Bidirectional (BI)>>>
In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations.
CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead.
<<</Bidirectional (BI)>>>
<<<Centroid version (@!START@$\mu $@!END@)>>>
$\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector:
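The centroid equation itself is elided in this extraction; a plausible reconstruction (the exact normalisation is our assumption) is

$\overrightarrow{\mathbf{c}_i} = \frac{1}{i-1} \sum _{j=1}^{i-1} \mathbf{s}_j.$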
<<</Centroid version (@!START@$\mu $@!END@)>>>
<<</Summed context (CAHAN-SUM)>>>
<<<Recurrent Context (CAHAN-RNN)>>>
Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case:
By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence.
<<</Recurrent Context (CAHAN-RNN)>>>
<<<Gated context>>>
In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions:
$\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector:
The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$.
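A sketch of the gate is given below; the exact placement of $\mathbf {\lambda }$ in the gated equation is not shown in this excerpt, so rescaling only the context term inside the tanh is an assumption. The two gate projections correspond to the $W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ products mentioned in the complexity analysis.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_alignment_scores(H, c, W_s, W_c, u_s, W_l1, W_l2):
    """Gated context-aware alignment scores (illustrative sketch).

    lam has one value per word and per hidden dimension, squashed to [0, 1],
    and decides how much of the (projected) context enters the comparison.
    """
    lam = sigmoid(H @ W_l1.T + c @ W_l2.T)            # gate, shape (T, 2*d_s)
    e = np.tanh(H @ W_s.T + lam * (c @ W_c.T)) @ u_s  # gated context term
    return e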
The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful, as only a single topic might be extractable from the sentence anyway.
From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading).
<<</Gated context>>>
<<<Complexity and sequentiality>>>
Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplications per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level).
To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively.
However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43).
<<</Complexity and sequentiality>>>
<<</Proposed architecture: CAHAN>>>
<<<Experimental setup>>>
<<<Datasets>>>
We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets.
<<</Datasets>>>
<<<Model configuration>>>
This subsection describes the preprocessing and hyperparameter setting we used.
<<<Preprocessing and word embeddings>>>
For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits.
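The sketch below reproduces this pipeline in Python, assuming gensim ≥ 4.0 for word2vec pre-training (where the dimensionality argument is vector_size); window size and other word2vec hyperparameters are not specified here and are left at their defaults.

import collections
import random
from gensim.models import Word2Vec

def build_corpus(documents, min_freq=5, val_ratio=0.1, seed=0):
    """Split the training set 90/10 and replace rare tokens with UNK (sketch).

    `documents` is a list of documents, each a list of sentences,
    each sentence a list of token strings.
    """
    random.Random(seed).shuffle(documents)
    n_val = int(len(documents) * val_ratio)
    val, train = documents[:n_val], documents[n_val:]
    counts = collections.Counter(t for doc in train + val for s in doc for t in s)
    unk = lambda s: [t if counts[t] >= min_freq else "UNK" for t in s]
    train = [[unk(s) for s in doc] for doc in train]
    val = [[unk(s) for s in doc] for doc in val]
    return train, val

def pretrain_embeddings(train, val, dim=200):
    """Pre-train word vectors on the training and validation splits."""
    sentences = [s for doc in train + val for s in doc]
    return Word2Vec(sentences=sentences, vector_size=dim, min_count=1)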
<<</Preprocessing and word embeddings>>>
<<<Hyperparameters>>>
We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100.
With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp.
<<</Hyperparameters>>>
<<</Model configuration>>>
<<<Training details>>>
We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted.
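The bucketing strategy can be sketched as follows; sorting documents by their number of sentences before slicing keeps each batch dense, and the batch order is then shuffled so that every epoch still mixes document lengths (the shuffling step is an assumption, not a reported detail).

import random

def bucketed_batches(documents, batch_size, seed=0):
    """Build dense batches of documents with a similar number of sentences (sketch)."""
    by_length = sorted(documents, key=len)  # len(doc) = number of sentences
    batches = [by_length[i:i + batch_size]
               for i in range(0, len(by_length), batch_size)]
    random.Random(seed).shuffle(batches)
    return batches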
Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial conditions, we use the same initialization weights for each model.
<<<SGD with cyclical learning rate>>>
To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20.
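The schedule itself is simple to implement; the sketch below assumes lr_min and lr_max are the bounds found by the range test and that steps_per_cycle corresponds to 12 epochs of iterations.

def triangular(step, steps_per_cycle):
    """Position within the current triangular cycle, in [0, 1]."""
    half = steps_per_cycle / 2.0
    pos = step % steps_per_cycle
    return pos / half if pos <= half else (steps_per_cycle - pos) / half

def cyclical_lr(step, steps_per_cycle, lr_min, lr_max):
    """Learning rate ramps up linearly for half a cycle, then back down."""
    return lr_min + (lr_max - lr_min) * triangular(step, steps_per_cycle)

def cyclical_momentum(step, steps_per_cycle, m_min=0.85, m_max=0.95):
    """Opposite schedule: momentum is lowest when the learning rate is highest."""
    return m_max - (m_max - m_min) * triangular(step, steps_per_cycle)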
We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50.
<<</SGD with cyclical learning rate>>>
<<</Training details>>>
<<</Experimental setup>>>
<<<Results>>>
As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations.
Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context.
<<<Summing vs. averaging>>>
In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant.
<<</Summing vs. averaging>>>
<<<Gating>>>
As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial.
Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12.
It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. Sentiment analysis may rely more on contextual information than topic classification.
<<</Gating>>>
<<<CAHAN-RNN-BI>>>
The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN.
<<</CAHAN-RNN-BI>>>
<<<Runtimes>>>
We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these numbers increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation.
CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms).
<<</Runtimes>>>
<<</Results>>>
<<<Related work>>>
In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next.
BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques.
BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism.
BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer.
BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information.
One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD.
Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors summarizing the $C$ preceding and following utterances, respectively, where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner.
<<</Related work>>>
<<<Discussion and next steps>>>
While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences.
Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent.
One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder.
<<</Discussion and next steps>>>
<<<Conclusion>>>
In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. Specifically, the bidirectional version of the document encoder, that processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the unidirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nObserved problem\nContext-aware HAN\nHAN\nNotation\nSentence encoder\nDocument encoder\nProposed architecture: CAHAN\nSummed context (CAHAN-SUM)\nLeft-to-right (LR)\nBidirectional (BI)\nCentroid version (@!START@$\\mu $@!END@)\nRecurrent Context (CAHAN-RNN)\nGated context\nComplexity and sequentiality\nExperimental setup\nDatasets\nModel configuration\nPreprocessing and word embeddings\nHyperparameters\nTraining details\nSGD with cyclical learning rate\nResults\nSumming vs. averaging\nGating\nCAHAN-RNN-BI\nRuntimes\nRelated work\nDiscussion and next steps\nConclusion"
],
"type": "outline"
}
|
1909.02776
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Features in Extractive Supervised Single-document Summarization: Case of Persian News
<<<Abstract>>>
Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either the abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive approaches, the system will not generate sentences. Instead, it learns how to score sentences within the text by using some textual features and subsequently selecting those with the highest-rank. Therefore, the core objective is ranking and it highly depends on the document. This dependency has been unnoticed by many state-of-the-art solutions. In this work, the features of the document are integrated into vectors of every sentence. In this way, the system becomes informed about the context, increases the precision of the learned model and consequently produces comprehensive and brief summaries.
<<</Abstract>>>
<<<Introduction>>>
From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Following the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems, to display a portion of each result entry that is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5.
Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity.
One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10.
As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost.
We addressed this issue by taking certain features of documents into account, such as their length, topical category and so on, in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also present a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately.
The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper.
<<</Introduction>>>
<<<Related works>>>
Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18.
Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8.
Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8.
A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms.
The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, the length of sentences BIBREF9, the ratio of nouns, verbs, adjectives and adverbs BIBREF30, the ratio of numerical entities BIBREF31, BIBREF32, and cue words BIBREF28.
Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33.
However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that the target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies depending on the properties of the context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.” JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. However, their following statement implies that the performance of weights is generally dependent on genre, which could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portions of text, for example, in the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section.
All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and is sometimes even neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech-based sentence features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e., they count the ratio of a syntactic unit, e.g., verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document.
Our work contributes to this line of research and includes document features in the learning and ranking processes.
<<</Related works>>>
<<<Incorporating Document Features>>>
As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation.
Every supervised summarization has two phases. The first is the “Learning Phase”, in which a corpus of ideal summaries is used to train the system to rank sentences. The second is the “Summarization Phase”, where the system applies the learning gained from the first phase in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections.
<<<Learning Phase>>>
The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next.
<<<Feature Extraction>>>
Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. We refer to some of them as “document-aware” because they implicitly represent some information about a document. However, other features have been used that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into the vectors of its sentences. The following sub-sections describe the features mentioned above in more detail.
<<<Document-unaware Features>>>
Ordinal position: It is shown that the inclusion of a sentence in the summary is related to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for the fifth and zero for the remaining sentences. In another study conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences with, for example, position=$\frac{1}{5}$ in the training set that do not convey the same sense of position. While a sentence with position=$\frac{1}{5}$ means “among the first” in a document with 40 sentences, it has a totally different meaning of “in the middle” in another document containing 10 sentences. Thus, a useful feature formula should account for differences between documents, which may change the meaning of the information within them. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6).
Length of sentence: the intuition behind this feature is that sentences that are too long or too short are less likely to be included in the summary. Like sentence position, this feature is also subject to a definition that makes it document-unaware. For example, in BIBREF9 it is defined as the number of words in a sentence. Such a definition does not take into account that a sentence with, say, 15 words may be considered long if all other sentences of the document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6).
The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by the total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs, are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be distinguished in the training set from another sentence with the same ratio of nouns that appeared in another document having fewer nouns. This feature does not represent how many nouns there are in the document, which is important in sentence ranking. The same discussion applies to the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts.
The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of the sentence. This feature should be weighted less if almost all sentences of a document contain numerical data; however, as defined, it does not consider the numbers and digits in the other sentences of the document.
Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature.
<<</Document-unaware Features>>>
<<<Document-aware Features>>>
Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is
in which index is an integer representing the order of sentences and T is the total number of sentences in the document. This feature ranges from 0 to 1: the closer a sentence is to the beginning or to the end, the higher the value this feature takes. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware.
Relative Length: the intuition behind this feature is explained in (SECREF5). It was argued there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, depending on the other sentences that appear in the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is:
in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa.
TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included the details and formula, which can be found in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware.
POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of its occurrences in the document, instead of in the sentence. The formal definitions of the new document-aware features are as follows:
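The formal definitions themselves are omitted in this excerpt; the following Python sketch follows the textual descriptions above. The TF-ISF variant shown is the standard term-frequency times inverse-sentence-frequency form, which is an assumption since the exact formula is only given in the cited references.

import math
from collections import Counter

def relative_length(sentence, doc_sentences):
    """Sentence word count divided by the document's average sentence length
    (values greater than 1 read as "long")."""
    avg = sum(len(s) for s in doc_sentences) / len(doc_sentences)
    return len(sentence) / avg

def pos_ratio(sentence_tags, doc_tags, unit="NOUN"):
    """Document-normalized POS ratio: occurrences of a POS unit in the sentence
    divided by its occurrences in the whole document."""
    in_doc = sum(1 for t in doc_tags if t == unit)
    in_sent = sum(1 for t in sentence_tags if t == unit)
    return in_sent / in_doc if in_doc else 0.0

def tf_isf(sentence, doc_sentences):
    """Assumed standard TF-ISF score; `sentence` is one of `doc_sentences`."""
    n = len(doc_sentences)
    sf = Counter(term for s in doc_sentences for term in set(s))
    tf = Counter(sentence)
    score = sum(tf[t] * math.log(n / sf[t]) for t in set(sentence))
    return score / max(len(sentence), 1)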
<<</Document-aware Features>>>
<<<Explicit Document Features>>>
In order to further investigate how effective document-specific features are in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definitions are described below, and their effect is examined in the results and discussion section (SECREF5):
Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features, such as cue words, may be weighted more heavily for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case, even lower values of other features should be considered important.
Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered.
Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included.
An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section.
<<</Explicit Document Features>>>
<<</Feature Extraction>>>
<<<Target Assignment>>>
Every feature vector needs a target value from which the system should learn how to rank sentences. The value of the target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. In such cases, a measure of similarity between the sentence whose target we are looking for and each sentence of the ideal summaries will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment.
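One plausible realization of this assignment is sketched below; the similarity measure and the aggregation over the five golden summaries are not fixed by the description above, so both are assumptions.

def assign_target(sentence, golden_summaries, similarity):
    """Target value in [0, 1] for one sentence (illustrative sketch).

    golden_summaries : list of summaries, each a list of tokenized sentences
    similarity       : function returning a value in [0, 1] for two token lists
    The sentence is matched against its best counterpart in each golden
    summary, and the scores are averaged over the summaries.
    """
    scores = [max(similarity(sentence, s) for s in summary)
              for summary in golden_summaries]
    return sum(scores) / len(scores)

def jaccard(a, b):
    """Example similarity measure: token-set overlap."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0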
<<</Target Assignment>>>
<<<Training Model>>>
Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for the target attribute, which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on the corresponding feature and its range of values.
In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might bias the regression toward lower target values. To avoid this, dataset balancing is needed; that is, a portion of the not-included sentences is set aside and not fed to the learner model.
Lastly, in this phase, the regression model should be fitted on the training set and evaluated on a test set as described in sections (SECREF4) and (SECREF5).
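A minimal sketch of the balancing and fitting steps is given below; the regression algorithm, the low-target threshold and the keep ratio are not fixed by the text above, so the random forest and the numeric values here are only placeholders.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def balance(X, y, low_thresh=0.1, keep_ratio=1.0, seed=0):
    """Downsample low-target rows so they do not dominate the regression (sketch)."""
    rng = np.random.default_rng(seed)
    low = np.where(y < low_thresh)[0]
    high = np.where(y >= low_thresh)[0]
    n_keep = min(len(low), int(keep_ratio * len(high)))
    kept = np.concatenate([high, rng.choice(low, size=n_keep, replace=False)])
    return X[kept], y[kept]

def fit_ranker(X_train, y_train):
    """Fit a regressor on the balanced global matrix."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return model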
<<</Training Model>>>
<<</Learning Phase>>>
<<<Summarization Phase>>>
Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22).
<<<Sentence Ranking>>>
In comparison with the learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond to the sentences of the input text. If any scaling was performed on features during learning, it should be applied here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence.
<<</Sentence Ranking>>>
<<<Sentence Selection>>>
By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document.
Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable.
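The selection step then reduces to keeping the top-ranked sentences and restoring their original order, as sketched below.

def select_summary(sentences, ranks, cutoff):
    """Keep the `cutoff` highest-ranked sentences, in their original order."""
    top = sorted(range(len(sentences)), key=lambda i: ranks[i], reverse=True)[:cutoff]
    return [sentences[i] for i in sorted(top)]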
<<</Sentence Selection>>>
<<</Summarization Phase>>>
<<<Evaluation Measures>>>
In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems.
Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. However, an exact zero for MSE is not desirable, because it is likely a sign of overfitting.
The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40.
ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison.
<<</Evaluation Measures>>>
<<</Incorporating Document Features>>>
<<<Experiments>>>
Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25).
A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research.
<<<Dataset>>>
We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences.
<<</Dataset>>>
<<<Extracting Features and Scaling>>>
All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging are performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform them into the same range. For the category feature, which is nominal, the one-hot-encoding method is applied and six flag features are used instead.
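With scikit-learn, the scaling and encoding steps can be sketched as follows; the split into numeric columns and a single nominal category column mirrors the description above.

import numpy as np
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

def scale_and_encode(numeric_features, categories):
    """Min-max scale numeric columns and one-hot encode the topical category (sketch).

    numeric_features : (n_sentences, n_numeric) array of raw feature values
    categories       : (n_sentences, 1) array of category labels (six in the dataset)
    """
    scaled = MinMaxScaler().fit_transform(numeric_features)
    flags = OneHotEncoder().fit_transform(categories).toarray()  # six flag features
    return np.hstack([scaled, flags])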
<<</Extracting Features and Scaling>>>
<<</Experiments>>>
<<<Results and Discussion>>>
In section (SECREF22), MSE, R2 and ROUGE scores are described as evaluation measures. The results of our experiments are reported below in terms of these measures. For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the R2 score is increased. This means that using document-aware features leads to a more accurate learned model, proving our hypothesis about the relationship between document features and target ranks.
ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in the figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2, confirm that document-aware features perform better than unaware features.
These results are also interpretable from the viewpoint of entropy-based decision tree methods. In the learning phase, the impurity of features over the whole dataset is measured, and features having higher information gain take place in the upper levels of the tree. But in the summarization phase, in which decisions have to be made within a single document, the impurity of those features may be low, causing less effective decisions and lower precision. By incorporating document features, we help the model use different features (thus different trees) for different documents.
Another insight gained from these charts is that a random summarizer resulted in scores of more than 50% in all measures, and without using document-aware features, the model achieves only a small improvement over a random summarizer.
<<</Results and Discussion>>>
<<<Conclusion>>>
This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent training examples. The rank of sentences is dependent on each other within a document. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. We also suggested using features that take into account the properties of the document. We named this kind of features document-aware. Conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, both in model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if available. Another direction for study is measuring the degree of entropy difference between the dataset and single documents, in a standard dataset.
Our source code is hosted on GitHub and is published for later reference, further experiments and reproducing results. A web interface and a Telegram bot are also implemented as a demo.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated works\nIncorporating Document Features\nLearning Phase\nFeature Extraction\nDocument-unaware Features\nDocument-aware Features\nExplicit Document Features\nTarget Assignment\nTraining Model\nSummarization Phase\nSentence Ranking\nSentence Selection\nEvaluation Measures\nExperiments\nDataset\nExtracting Features and Scaling\nResults and Discussion\nConclusion"
],
"type": "outline"
}
|
1909.09018
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Corporate IT-Support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach
<<<Abstract>>>
Comprehensive IT support teams in large scale organizations require considerable manpower to handle the engagement and requests of employees from different channels on a 24×7 basis. An automated email help desk for technical queries is proposed to provide instant, real-time quick solutions and email categorisation. Email topic modelling with various machine learning and deep-learning approaches is compared using different features for a scalable, generalised solution, along with sure-shot static rules. The email's title, body, attachment, OCR text, and some feature-engineered custom features are given as input elements. XGBoost cascaded hierarchical models and a Bi-LSTM model with word embeddings perform well, showing 77.3% overall accuracy on the real-world corporate email data set. By introducing thresholding techniques, the overall automation system architecture provides 85.6% accuracy for real-world corporate emails. The combination of quick fixes, static rules, and ML categorization as a low-cost inference solution reduces the human effort in the process of automation and real-time implementation by 81%.
<<</Abstract>>>
<<<Introduction>>>
In an organization, the Information Technology (IT) support help desk operation is an important unit which handles the IT services of a business. Many large scale organizations have a comprehensive IT support team to handle engagement and requests of employees on a 24$\times $7 basis. As with any routinized tasks, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agents spend time on these repetitive tasks, such as entering information into an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc.
The industry has now come to realize that many repetitive business processes and tasks can be automated by using Robotic Process Automation (RPA) bots, i.e., robotic process automation software bots BIBREF1. The idea is to take the repetitive workload and hand it over to the RPA bots so that the employees can focus on tasks and decision making that add more value to the organization. The RPA bot would also help to reduce human errors and make processes more efficient, which ultimately results in cost savings and productivity increases.
Our proposed automated approach is not only focused on automating repetitive tasks but also on analyzing historical data, enabling the IT support desk process to identify unforeseen insights and patterns. Analyzing the data from various sources such as email communications, service request information generated from support ticketing applications and even conversational data from chats has helped us to identify the types of Service Requests (SR) raised and their respective solutions, as well as fixes done by the support agents. This approach has helped us create a classification model to identify the issue types and provide quick fixes and resolutions from the collected data.
<<</Introduction>>>
<<<Related Work>>>
Wróblewska has conducted a project on the topic of RPA of unstructured data which was focused on building an Artificial Intelligence (AI) system dedicated to tasks regarding the processing of formal documents used in different kinds of business procedures BIBREF2. His approach was introduced to automate the debt collecting process. Possible applications of Machine Learning (ML) methods to improve the efficacy of these processes were described. In the case study done by Aguirre, it was concluded that companies should consider RPA to be more suitable for high volume standardized tasks that are rule-driven, with no requirement for subjective judgement, creativity or interpretation skills BIBREF3. Back office business processes such as accounts payable, accounts receivable, billing, travel and expenses, fixed assets and human resource administration are good candidates for RPA.
Extreme multi-class and multi-label text classification problems are solved by the methodology named Hierarchical Label Set Expansion (HLSE) BIBREF4. This paper presents the deep Learning architecture devoted to text classification, in which the data labels are regularized, the hierarchical label set is defined and different word embeddings are used BIBREF3, BIBREF5, BIBREF6.
The traditional model performed better than the the deep learning models for 8,841 emails collected over 3 years, because this particular classification task carried out by Haoran may not require the ordered sequence representation of tokens that deep learning models provide BIBREF7. This paper claims that a bagged voting model surpasses the performance of any individual models.
In their survey, Kamran and other researchers analyzed text feature extraction BIBREF8, BIBREF9, dimensionality reduction methods, existing algorithms and techniques, evaluation methods, and limitations BIBREF6 and advantages based on applications. Paramesh et al and Seongwook et al compare different classification algorithms such as multinomial naive bayes, logistic regression, K-Nearest neighbour and Support Vector Machines (SVM) on real-world IT infrastructure ticket classifier system data, using different evaluation metrics in their research BIBREF10, BIBREF11. They claimed that SVM performed well on all the data samples. Random forest (RF) or naive bayes (NB) performed best in terms of correctly uncovering human intuitions. Hartmann et al present in their study that RF exhibits high performance in sentiment classification research done on 41 social media data sets covering major social media platforms, where the SVM never outperforms the RF BIBREF12. Cognitive RPA can be efficiently undertaken as a low cost solution with Microsoft Azure Language Understanding Intelligent Service (LUIS) BIBREF8 and Azure Machine Learning Studio.
Section III of this paper elaborates the process of automation, Section IV explains the email classification approach, and Section V illustrates the results and their respective analysis. Finally, Section VI contains the conclusion of the results.
<<</Related Work>>>
<<<Method>>>
We propose a hybrid process automation, in which we introduce the automation architecture while retaining the manual process methodology as a fallback. Incoming emails that cannot be classified or understood by the knowledge base of the automation system are sent for manual classification.
<<<Manual Process>>>
Providing technical support for large firms around the world has many challenges, such as coordinating a vast amount of emails and matching experts with employees who are in need of that expertise. When a technical issue is raised by a base level employee who works with applications, it is sent to the middle level and then to the higher level management of the respective regional branch throughout the hierarchical business architecture. Once it is approved by the branch manager, the issue email is forwarded to the technical coordinator to categorize the issue based on the priority level and technical requirements. The technical coordinator is responsible for the issues raised from the regional branches all over the world.
Each regional branch is given a unique name such as New York, Sydney, London, Beijing or Toronto, referred to as Category1 (cat1). Category1 is identified by looking at the email address of the sender. Each regional branch has different plant applications that need different experts' consultation. Plant applications such as SAP, Darwin and infrastructure are referred to as Category2 (cat2). The possible topics of the issue emails, such as computer, manufacturing, userID, user unlock, financial, planning and purchasing issues generated by employees working in various plant applications across various regions, are referred to as Category3 (cat3).
A mapping table is created with the plants located in the regional offices and the issues created by the plants. Category1, Category2 and Category3 contain 84, 8 and 77 unique categories to be classified, respectively. Table I shows some examples for each category. Once all three categories are finalized by the technical coordinator, email tickets are created and assigned to the admin-groups. The respective technical people in the admin-groups provide consultancy and solve the issues. One technician can handle issues assigned to many different admin-groups allocated to him, and a particular admin category can also be handled by many technicians as a group.
<<</Manual Process>>>
<<<Proposed Automation System>>>
In addition to replacing the technical coordinator role with an AI bot that classifies the raised email-issue tickets into the respective admin-groups, we propose instant quick fixes for some emails in an automated manner. The high-level workflow is described in Fig. 1. The AI bot has three main stages:
Quick fixes
Static rules
Email classifier
All incoming emails are preprocessed for better quality of inputs. Signatures, greetings and Uniform Resource Locators (URLs) are removed. The key body is extracted from forwarded emails by digging into the email contents. If an email contains attachments, Optical Character Recognition (OCR) is used to extract the text contents from the attachments.
<<<Quickfixes>>>
Microsoft LUIS is used for instant quick fixes to provide solutions for prioritized emails. Fig. 2 shows the bot framework LUIS architecture that handles the quick fixes. The quick fixes are trained with the most frequently occurring samples that need quick solutions. LUIS is a model that artificial intelligence applications use to predict the intention behind spoken or written phrases. There are three main phases, categorized as the defining phase, the training phase and the publishing phase, and LUIS is extremely flexible with natural language. Intents define an action the user wants to perform and are supported by example utterances. Fig. 3 elaborates the intent matching breakdown mechanism. Entities are identified from the sentences, and a suitable entity is selected for generating tickets.
If an incoming email matches a known intent, cat1, cat2 and cat3 are allocated and tickets are created for the admin-groups. The issue is then addressed using automated messages through a chat bot solution. If the issue is solved, the ticket is closed by the quick fixes; if it is too complicated for the knowledge of the bot, a ticket is created for the admin-group for the assistance of consultants.
The emails identified by static rules and keywords are classified with the highest accuracy. The knowledge base of static rules and keywords is gathered using feature engineering and insights from the technical coordinator. The remaining emails are sent to a more complex ensemble machine learning model to be classified.
Different types of emails are treated in different ways for efficient execution and to reduce errors.
<<</Quickfixes>>>
<<<First mail>>>
Fig. 4 shows the flow of email categorization for new incoming emails. If an incoming email is a fresh new email, it is first cleaned, and OCR extracts the text from the attachments depending on their availability. Cat1 is assigned according to the knowledge base and the sender details. Depending on the priority, emails are passed through LUIS. Thereafter, if LUIS fails to solve the issue, the ML model assigns cat2, cat3 and the admin-group for ticket creation.
<<</First mail>>>
<<<Forwarded mail>>>
If an incoming email is a continuation of a previous email, it is directly handled by the LUIS question-and-answer automated support and then follows the normal categorization procedure. Fig. 5 illustrates the flow.
Fig. 6 explains the overall architecture. Static rules are denoted as T-codes. Every categorized email has to be assigned to the respective consultant, denoted as assignTo.
<<</Forwarded mail>>>
<<</Proposed Automation System>>>
<<</Method>>>
<<<Email classifier using machine learning>>>
<<<Preprocessing>>>
Preprocessing is necessary to increase the accuracy of a text classification model, because it prevents the classification model from focusing attention on unwanted sentences and intents. Emails are fed into the Microsoft-Bot services, which handle the headers and output the processed channel data in JavaScript Object Notation (JSON) format. The channel data summarizes information such as the sender, receiver, body, subject and important metadata. Regular expressions (regex) can be used to search strings by defining a search pattern; regex rules are created to remove unwanted words from the channel data queries for further processing of the emails.
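As a rough illustration of this regex-based cleaning step, the snippet below (Python) strips URLs, a leading greeting and a trailing signature block from an email body; the specific patterns are assumptions made for illustration, not the exact rules used in the deployed system.

import re

# Hypothetical cleaning patterns; the production regex set is not published.
URL_RE = re.compile(r"https?://\S+|www\.\S+")
GREETING_RE = re.compile(r"^(hi|hello|dear)\b.*?,\s*", flags=re.IGNORECASE)
SIGNATURE_RE = re.compile(r"(best regards|kind regards|thanks and regards)[\s\S]*$", flags=re.IGNORECASE)

def clean_email_body(body: str) -> str:
    """Remove URLs, a leading greeting line and a trailing signature block."""
    body = URL_RE.sub(" ", body)
    body = GREETING_RE.sub("", body.strip())
    body = SIGNATURE_RE.sub("", body)
    return re.sub(r"\s+", " ", body).strip()

print(clean_email_body("Hi team, SAP login fails, see http://example.com Best regards, John"))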
OCR has to be accurate in detecting text in an image. Microsoft-OCR is used for text recognition in this automation process; it extracts the recognized characters into a machine-usable character stream. The accuracy of text recognition depends on image quality factors such as blur, small text size, complex backgrounds, shadows and handwritten text. Since most of the image attachments are computer generated images and screenshots of error messages, the Microsoft-OCR capabilities fit the use case.
260,000 emails are taken from past history. The extracted and preprocessed data from the Microsoft-Bot and OCR services are saved as Comma-Separated Values (CSV) files and further processed before being fed to the machine learning model. Unwanted words are removed from the context using the nltk library stopwords and manually collected stopwords, and URLs and punctuation marks are removed. Every field, i.e. title, body, OCR, from, to, CC, Cat1, Cat2 and Cat3, is tokenized, lemmatized and normalized.
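A minimal sketch of this tokenization, stopword removal and lemmatization step is given below (Python/nltk); the custom stopword list is a placeholder, and the required nltk resources are assumed to be downloadable.

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

for resource in ("stopwords", "wordnet", "punkt", "punkt_tab"):   # resource names vary slightly across nltk versions
    nltk.download(resource, quiet=True)

CUSTOM_STOPWORDS = {"regards", "thanks", "please"}                # placeholder for the manually collected list
STOPWORDS = set(stopwords.words("english")) | CUSTOM_STOPWORDS
LEMMATIZER = WordNetLemmatizer()

def normalize(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text.lower())             # drop URLs
    text = re.sub(r"[^\w\s]", " ", text)                          # drop punctuation
    tokens = nltk.word_tokenize(text)
    return " ".join(LEMMATIZER.lemmatize(t) for t in tokens if t not in STOPWORDS)

print(normalize("Please unlock the user IDs for the SAP plants."))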
<<</Preprocessing>>>
<<<Feature selection>>>
The sender and receiver vary with time because of new employees' arrivals and old employees' resignations. In order to handle this fluctuating situation, the To, CC and From columns are dropped from the input data. Cat1 is known from the email address, and the possible Cat2 and Cat3 values for a specific Cat1 are described in Table I. Cat2 and Cat3 are merged and defined as the target category for classification, denoted as Unique-Category. Nearly 180 custom features are created based on plant availability and region mapping. Based on the mapping table (an extension of Table I), these custom features encode whether the plant application (cat2) and the technical issue (cat3) belong to the regional plant (cat1).
From the analysis of the existing samples and the domain knowledge of the technical coordinator, it is evident that the title of the email alone is not enough to predict the category; the attachment and body also play a major role.
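One such mapping-based custom feature can be sketched as follows; the mapping entries are invented placeholders that only illustrate checking whether a (region, plant application, issue) combination is valid.

# Hypothetical excerpt of the mapping table: cat1 -> cat2 -> allowed cat3 issues.
MAPPING = {
    "newyork": {"sap": {"userunlock", "financial"}, "darwin": {"planning"}},
    "sydney": {"sap": {"purchasing"}, "infrastructure": {"computer"}},
}

def mapping_feature(cat1: str, cat2: str, cat3: str) -> int:
    """1 if the plant application and issue are valid for this regional plant, else 0."""
    return int(cat3 in MAPPING.get(cat1, {}).get(cat2, set()))

print(mapping_feature("newyork", "sap", "userunlock"))  # 1
print(mapping_feature("sydney", "sap", "userunlock"))   # 0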
<<</Feature selection>>>
<<<Machine learning approach>>>
Even though a labelled data set was provided, K-Nearest Neighbour (KNN) based clustering was initially applied to the data set to observe the possibility of clusters BIBREF13. Since the number of unique categories of the target field (Unique-Cat) is 77 and there are many common words between categories, the clustering did not show promising categories or accuracies. Supervised multi-class multi-label classification algorithms such as random forest and XGBoost are therefore used as benchmarks.
<<<Random forest>>>
Random Forest is a bagging algorithm, an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that receives the majority vote across the trees BIBREF14.
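A minimal sketch of such a bagged-tree baseline on TF-IDF features is shown below; the toy texts, labels and hyper-parameters are illustrative only, not the tuned values of the reported system.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

texts = ["sap user locked", "cannot print invoice", "darwin planning error", "reset sap password"]
labels = ["sap_userunlock", "infrastructure_printer", "darwin_planning", "sap_userunlock"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["please reset my sap password"]))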
<<</Random forest>>>
<<<XGBoost>>>
XGBoost is a decision-tree-based ensemble machine learning algorithm that uses a gradient boosting framework. It is commonly used in classification problems involving unstructured data BIBREF5.
<<</XGBoost>>>
<<<Hierarchical Model>>>
Since the number of target labels is high, achieving high accuracy is difficult while keeping all the categories under the same feature selection method. Some categories perform well with a lower TF-IDF vectorizing range and higher n-gram features even though they showed lower accuracy in the overall single model. Therefore, hierarchical machine learning models are built: the first classification model classifies 31 categories, and the remaining categories are grouped and predicted as a single low-accuracy (low-accu) category. In the next model, the samples predicted as low-accu are classified again into the remaining 47 categories. Comparatively, this hierarchical model works well since different feature selection methods can be used for different categories BIBREF5.
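The two-level cascade can be sketched as below, assuming a generic scikit-learn classifier in place of the tuned XGBoost models; the category split and vectorizer settings are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

HIGH_CONF_CLASSES = {"sap_userunlock", "darwin_planning"}        # stands in for the 31-class group
LOW_ACCU = "low-accu"

texts = ["sap user locked", "darwin planning error", "odd financial query", "strange purchasing case"]
labels = ["sap_userunlock", "darwin_planning", "financial", "purchasing"]

stage1_labels = [y if y in HIGH_CONF_CLASSES else LOW_ACCU for y in labels]
stage1 = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), LogisticRegression(max_iter=1000))
stage1.fit(texts, stage1_labels)

low_idx = [i for i, y in enumerate(stage1_labels) if y == LOW_ACCU]
stage2 = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LogisticRegression(max_iter=1000))
stage2.fit([texts[i] for i in low_idx], [labels[i] for i in low_idx])

def predict(text):
    first = stage1.predict([text])[0]
    return stage2.predict([text])[0] if first == LOW_ACCU else first

print(predict("weird purchasing issue"))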
<<</Hierarchical Model>>>
<<</Machine learning approach>>>
<<<Deep learning approach>>>
<<<LSTM>>>
Long Short-Term Memory (LSTM) is an artificial neural network architecture that outperforms most classical machine learning algorithms on sequence data. In the deep learning approach, feature selection is handled implicitly by the learned weights. A bidirectional LSTM is used with GloVe word embeddings to predict the categories BIBREF15.
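A sketch of such a classifier is shown below (PyTorch); the vocabulary size, embedding dimension and hidden size are assumed values, and pretrained GloVe vectors would be copied into the embedding matrix.

import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Embedding -> bidirectional LSTM -> linear layer over the target categories."""
    def __init__(self, vocab_size=20000, emb_dim=100, hidden=128, num_classes=77):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)        # GloVe vectors can be loaded here
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                             # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))               # (batch, seq_len, 2*hidden)
        return self.out(h[:, -1, :])                          # logits from the final time step

model = BiLSTMClassifier()
print(model(torch.randint(0, 20000, (2, 50))).shape)          # torch.Size([2, 77])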
<<</LSTM>>>
<<<BERT>>>
Even though BERT is the state-of-the-art model, for the considered data set it did not provide a large enough accuracy gain for the expected automation BIBREF16. When considering a commercial model for inference, having a dedicated Kubernetes cluster with high-performance compute is costly, so complex models requiring high computation power are not considered a better solution.
<<</BERT>>>
<<</Deep learning approach>>>
<<<Threshold Selection>>>
In order to classify only high-confidence emails, thresholds are defined for each of the 73 categories. For an incoming email, the probability of each category is calculated, and the best category is selected as the one with the maximum probability among those 73 probabilities. Thresholding decisions are made by looking at the overall F-score. For low-accuracy categories (accuracy less than 75 percent) a higher threshold level is set; for middle-accuracy categories (accuracy less than 90 percent) the minimum probability of the correctly classified samples is taken; and higher-accuracy categories (accuracy greater than 90 percent) are left with a threshold of 0 so that all incoming emails are classified. The thresholding acts as a bottleneck that decreases the number of samples classified by the autonomous process, but it increases the accuracy of the classified samples. The proposed thresholds satisfy the expected manual workload reduction as well as the accuracy requirements.
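A sketch of the per-category thresholding logic is shown below; the categories and threshold values are placeholders chosen only to illustrate the accept-or-route-to-human decision.

import numpy as np

CATEGORIES = ["sap_userunlock", "financial", "purchasing"]                   # 73 in the real system
THRESHOLDS = {"sap_userunlock": 0.0, "financial": 0.85, "purchasing": 0.6}   # assumed values

def route(probabilities):
    """Return the predicted category if it clears its threshold, otherwise defer to a human."""
    probs = np.asarray(probabilities)
    best = CATEGORIES[int(probs.argmax())]
    return best if probs.max() >= THRESHOLDS[best] else "manual_review"

print(route([0.5, 0.3, 0.2]))   # 'sap_userunlock' (threshold 0, always accepted)
print(route([0.1, 0.7, 0.2]))   # 'manual_review' (0.7 < 0.85)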
In this paper, Random Forest, XGBoost, LSTM and bidirectional LSTM with embeddings are analyzed with different input features. Complex deep learning models such as transformers are not used, in order to keep the inference solution low cost. The train and test sets are split 80:20. Precision, recall and F-score are taken as evaluation metrics.
<<</Threshold Selection>>>
<<</Email classifier using machine learning>>>
<<<Results and Analysis>>>
Automation of quick email replies for technical queries increases the overall efficiency of day-to-day processes by 3 percent. Even though replacing the manual human email-assigner entirely with the AI bot is not possible, the automation ML model handles 61 percent of incoming emails correctly, which reduces massive human effort per day. For generalization purposes, the email's title, body and attachments are used to increase accuracy, while the sender, receiver and carbon copy information are ignored. Table II shows the accuracy percentages for different models with different feature selection methods. An accuracy of 77.3 percent was obtained without any thresholding techniques for the 73-class multi-class multi-label classification problem. With threshold adjustments for each category, it was increased to 85.6 percent. Increasing the threshold values reduces the number of emails classified by the ML model; it is necessary for the ML model to handle a limited number of high-confidence emails in order to ensure the promised accuracy levels. Feature engineering for custom feature selection and hierarchical cascade modelling increase the accuracy of the XGBoost machine learning model to the level of the LSTM models. By cascading model1 (mod1) with 83.2 accuracy for 31 classes and model2 (mod2) with 71.1 accuracy for the 47 low-accuracy classes, the overall hierarchical model exhibited 76.5 accuracy. All accuracy figures refer to F-score. Selected keywords were used as static rules for accurate classification. Since the accuracy is considerably satisfactory for the automation process, the system was deployed. The incorrectly classified emails are handled manually after proper notification by the technical consultant.
Fig. 7 shows the emails classified by the ML model, static rules and manual process on a daily basis. The number of incoming emails per day varies between 30 and 120. The figure clearly illustrates the effect of retraining: after 10 April, both the percentage of emails classified per day and the accuracy increased.
Fig. 8 shows the average monthly analysis of incoming emails after each retraining. The average number of monthly incoming emails is calculated as 1,467 by considering a 4-month period. Initial training was done in August 2018 with 170,000 samples, and the model was able to classify nearly 50 percent of incoming emails. After the second retraining in January 2019 with 200,000 samples, the model classified 58 percent of incoming emails per month. The third retraining was done in April 2019 with 260,000 samples, after which nearly 61 percent of incoming emails were handled by the ML model. Nearly 20 percent of incoming emails were handled by static rules. The automation bot was thus shown to handle 81 percent of the total incoming emails per month, including ML and static rules, leading to efficient human-machine interaction, instant problem solving and a fast process.
<<</Results and Analysis>>>
<<<Conclusion>>>
Quick fixes from the Microsoft LUIS bot framework provide instant solutions for the raised email queries. The input text features of emails, such as the title, body and attachment OCR text, together with the feature-engineered custom features, perform well on the considered real-world email data set. Sure-shot static rules and a hierarchical machine learning model with statistically calculated thresholds enhance the accuracy of the overall system to an acceptable level. A bidirectional LSTM with word embeddings is finally implemented together with the thresholding techniques. Less complex machine learning models lead to low-cost virtual machine solutions for serving. The Robotic Process Automation architecture reduces the human effort of the email support desk by 81 percent while maintaining a reasonable accuracy of 85.6 percent.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nMethod\nManual Process\nProposed Automation System\nQuickfixes\nFirst mail\nForwarded mail\nEmail classifier using machine learning\nPreprocessing\nFeature selection\nMachine learning approach\nRandom forest\nXGBoost\nHierarchical Model\nDeep learning approach\nLSTM\nBERT\nThreshold Selection\nResults and Analysis\nConclusion"
],
"type": "outline"
}
|
1911.03154
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation?
<<<Abstract>>>
Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements. In this paper, we propose a general framework to improve simultaneous translation with a pretrained consecutive neural machine translation (CNMT) model. Our framework contains two parts: prefix translation that utilizes a pretrained CNMT model to better translate source prefixes and a stopping criterion that determines when to stop the prefix translation. Experiments on three translation corpora and two language pairs show the efficacy of the proposed framework on balancing the quality and latency in simultaneous translation.
<<</Abstract>>>
<<<Introduction>>>
Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summit and multilateral negotiations. Different from the consecutive translation in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-media (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5.
Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage.
In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. We formulate simultaneous translation as two nested loops: an outer loop that updates input buffer with newly observed source tokens and an inner loop that translates source tokens in the buffer updated at each outer step. For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: Length and EOS (LE) controller that stops with heuristics, and Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The result shows our method consistently improves over the de-facto baselines, and achieves low latency and reasonable BLEU scores.
<<</Introduction>>>
<<<Background>>>
Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context:
where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$:
The distribution of next target word is defined as:
where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process.
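For reference, the greedy decoding loop described above can be sketched as follows; next_token_logprobs is a placeholder for the decoder of any CNMT toolkit, not a fairseq-py call.

import torch

def greedy_decode(next_token_logprobs, enc_states, bos_id, eos_id, max_len=200):
    """next_token_logprobs(prefix_ids, enc_states) returns log-probabilities over the vocabulary."""
    prefix = [bos_id]
    for _ in range(max_len):
        y_t = int(torch.argmax(next_token_logprobs(prefix, enc_states)))
        prefix.append(y_t)
        if y_t == eos_id:
            break
    return prefix[1:]   # drop the BOS symbol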
<<</Background>>>
<<<Simultaneous NMT>>>
In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step.
More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps:
The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$.
Then, the system starts inner loop translation and writes $w_s>=0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$.
The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps.
For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time.
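The two nested loops can be sketched schematically as below; read_new_tokens, next_token and should_stop are placeholders for the input stream, the CNMT prefix decoder and the stopping controller described next, and a maximum-length guard is omitted for brevity.

def simultaneous_translate(read_new_tokens, next_token, should_stop, eos="<eos>"):
    """Outer loop grows the source buffer; the inner loop extends the target buffer."""
    src, tgt = [], []
    while True:
        new_tokens = read_new_tokens()      # c_s >= 1 new source tokens, [] once the input ends
        is_terminal = not new_tokens
        src = src + new_tokens
        while True:                         # inner loop: prefix translation
            y = next_token(src, tgt)        # rebuild states over src, force-decode tgt, predict one token
            if y == eos:
                if is_terminal:
                    return tgt              # full sentence observed and finished
                break                       # do not commit EOS at a non-terminal outer step
            if not is_terminal and should_stop(src, tgt, y):
                break
            tgt.append(y)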
For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns:
Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens?
Stopping criterion: since the NMT model is trained with full sentences, how to design the stopping criterion for it when translating partial source sentences?
<<<Prefix Translation>>>
At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder:
where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion is met, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $.
However, a major problem comes from the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12:
or dynamically re-building all encoder hidden states BIBREF5:
On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5:
or rebuild all previous decoder hidden states from current encoder hidden states $\mathbf {h}^s$ with force decoding:
To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Besides, these attainable source tokens are all available source context at current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context.
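In a generic encoder-decoder implementation this amounts to re-encoding the current source prefix and force-decoding the committed target prefix before any new token is predicted; a schematic version is given below, where encode and decode_step are placeholders rather than calls of a specific toolkit.

def rebuild_states(encode, decode_step, src_prefix, tgt_prefix):
    """Recompute all encoder and decoder hidden states from the current source prefix."""
    enc_states = encode(src_prefix)                        # h^s rebuilt over x_1 .. x_{eta[s]}
    dec_states = []
    for t in range(len(tgt_prefix)):                       # force decoding of y_1 .. y_{tau[s-1]}
        dec_states.append(decode_step(enc_states, tgt_prefix[:t], dec_states))
    return enc_states, dec_states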
<<</Prefix Translation>>>
<<<Stopping Criterion>>>
In consecutive NMT, the decoding algorithm such as greedy decoding or beam search terminates when the translator predicts an EOS token or the length of the translation meets a predefined threshold:
where $\text{maxlen}$, $u$ and $v$ are all hyper-parameters. In fairseq-py, they set $\text{maxlen}=+\infty $, $u=0$ and $v=200$ at inference time by default. The decoding for most source sentences terminates when the translator predicts the EOS token. In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example. At outer step 2, the translator predicts “you EOS", emitting target token “you". However, “you" is not the expected translation for “你" in the context of “你好。". The right decision is that prefix translation at outer step 2 should stop without emitting any words.
To alleviate such problems and do better simultaneous translation with pretrained CNMT model, we propose two novel stopping criteria for prefix translation.
<<<Length and EOS Control>>>
In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at non-terminal outer step, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 as $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system.
More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying:
We call this stopping criterion the Length and EOS (LE) stopping controller.
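A sketch of the LE controller is given below; following the $u=1$, $v=-d$ setting it stops the inner loop once EOS is predicted or once the target prefix length reaches the source prefix length minus $d$ (this reading of the length condition is an assumption of the sketch, since the exact equation is omitted above).

def le_should_stop(src_prefix, tgt_prefix, predicted_token, d=2, eos="<eos>"):
    """Length-and-EOS controller for a non-terminal outer step."""
    if predicted_token == eos:
        return True
    return len(tgt_prefix) + 1 >= len(src_prefix) - d      # +1 counts the token about to be written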
<<</Length and EOS Control>>>
<<<Learning When to Stop>>>
Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network called the Trainable (TN) stopping controller to learn when to stop prefix translation for non-terminal outer steps. Fig. FIGREF22 shows the illustration.
At each inner decoding step $k$ for non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at current stage:
where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. The prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement with a feedforward network with two hidden layers, followed by a softmax layer.
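The controller can be sketched in PyTorch as below; the decoder state size and hidden layer width are assumed values, and only the two-hidden-layer feedforward shape with a softmax output follows the description above.

import torch
import torch.nn as nn

class TNController(nn.Module):
    """Stochastic stopping policy over {continue, stop}, fed with the current decoder state."""
    def __init__(self, decoder_size=512, mlp_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(decoder_size, mlp_size), nn.ReLU(),
            nn.Linear(mlp_size, mlp_size), nn.ReLU(),
            nn.Linear(mlp_size, 2),
        )

    def forward(self, decoder_state):                        # decoder_state: (batch, decoder_size)
        return torch.softmax(self.net(decoder_state), dim=-1)

controller = TNController()
probs = controller(torch.randn(1, 512))
stop = bool(torch.multinomial(probs, 1).item())              # sample the binary STOP decision
print(probs, stop)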
To train the TN controller, we freeze the NMT model with pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying:
In the following, we talk about the details of the reward function and the training detail with policy gradient.
<<<Reward>>>
To trade-off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as:
where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay. Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality:
where
is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives.
Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency:
where
$l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives.
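For reference, the standard average lagging computation from the simultaneous translation literature can be sketched as follows; how exactly AL and the target delay $d^*$ are combined into $r_t^D$ follows the paper's omitted equation and is not reproduced here.

def average_lagging(l, src_len, tgt_len):
    """l[t-1] = number of source tokens observed when emitting target token t."""
    lam = tgt_len / src_len
    t_e = next(t for t, seen in enumerate(l, start=1) if seen == src_len)   # earliest full-source step
    return sum(l[t - 1] - (t - 1) / lam for t in range(1, t_e + 1)) / t_e

print(average_lagging([2, 3, 4, 4], src_len=4, tgt_len=4))   # a schedule that reads 2 tokens before writing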
<<</Reward>>>
<<<Policy Gradient>>>
We train the TN controller with policy gradient BIBREF18, and the gradients are:
where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future rewards for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated reward. We try the variance reduction techniques by subtracting a baseline average reward estimated by a linear regression model from $R_t$ and find that it does not help to improve the performance. Therefore, we just normalize the reward in each mini batch without using baseline reward for simplicity.
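A generic REINFORCE-style estimator with per-batch reward normalization can be sketched as below; it matches the description above in spirit only, and the reward values are random placeholders.

import torch

def policy_gradient_loss(log_probs, rewards):
    """log_probs[t] = log pi_theta(a_t | state_t); rewards[t] = r_t; returns the surrogate loss."""
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), dim=0), [0])   # R_t = sum_{i >= t} r_i
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)              # normalize within the batch
    return -(log_probs * returns.detach()).sum()

log_probs = torch.log(torch.rand(5, requires_grad=True))     # stand-in for the sampled actions' log-probs
loss = policy_gradient_loss(log_probs, torch.randn(5))
loss.backward()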
<<</Policy Gradient>>>
<<</Learning When to Stop>>>
<<</Stopping Criterion>>>
<<</Simultaneous NMT>>>
<<<Experiments>>>
<<<Settings>>>
<<<Dataset>>>
We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on another language pair and on spoken language, we also conduct experiments with the proposed LE and TN methods on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for the source and target sides separately. Unless explicitly mentioned otherwise, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$.
<<</Dataset>>>
<<<Pretrained NMT Model>>>
We use Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for IWSLT16 dataset, and transformer_wmt_en_de for WMT15 and NIST dataset. Fairseq-py adds an EOS token for all source sentences during training and inference. Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation.
<<</Pretrained NMT Model>>>
<<<TN Controller>>>
To train the TN controller, we use a mini-batch size of 8,16,16 and sample 5,10,10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively. We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller.
<<</TN Controller>>>
<<<Baseline>>>
We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation:
test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$.
SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results $\rho =0.65,0.6,0.55,0.5,0.45,0.4$.
BIBREF4: the adaptation of BIBREF4's two-staged full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay.
We report the result with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with its corresponding upper bound, i.e. the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy).
<<</Baseline>>>
<<</Settings>>>
<<<Results>>>
We compare our method with the baselines on the test set of WMT15 EN$\rightarrow $DE and DE$\rightarrow $EN translation tasks. Fig. FIGREF40 shows the result. The points closer to the upper left corner indicate better overall performance, namely low latency and high quality. In all these figures, we observe that, as latency increases, all methods improve in quality. The TN stopping controller significantly outperforms all the baseline systems in both translation tasks, demonstrating that it indeed learns the appropriate timing to stop prefix translation. The LE controller outperforms the baselines on WMT15 EN$\rightarrow $DE translation in the high latency region and performs similarly or worse in the other cases.
We show the model's efficacy in trading off quality and latency on another language pair and on spoken language in Fig. FIGREF41. The TN controller obtains better performance on all translation tasks, especially in the low latency region. For example, on IWSLT16 EN$\rightarrow $DE translation, it is +$2.5$ to +$3.3$ BLEU ahead of the LE method. TN also obtains promising translation quality with acceptable latency: with a lag of $<7$ tokens, TN obtains 96.95%, 97.20% and 94.03% BLEU with respect to consecutive greedy decoding for IWSLT16 EN$\rightarrow $DE, IWSLT16 DE$\rightarrow $EN and NIST ZH$\rightarrow $EN translation, respectively.
<<</Results>>>
<<<Analyze>>>
We analyze the effect of different ways (Eq. DISPLAY_FORM10-DISPLAY_FORM13) to obtain the encoder and decoder hidden states at the beginning of prefix translation with the LE controller. Fig. FIGREF42 shows the result. We try three variants: a) dynamically rebuild all encoder/decoder hidden states (none); b) reuse decoder hidden states and rebuild all encoder hidden states (decoder); c) reuse previous encoder hidden states and rebuild all decoder hidden states (encoder). The left Y axis and X axis show BLEU-vs-AL curve. We observe that if reusing previous encoder hidden states (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) obtain negative AL. To analyze this, we plot the translation to reference length ratio versus AL curve with the right Y axis and X axis. It shows that with decoder, the decoding process stops too early and generates too short translations. Therefore, to avoid such problem and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation.
Since we make no assumption about $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the result with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality and worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality and latency balance with different $c_s$, demonstrating that the TN controller successfully learns the right timing to stop regardless of the input update schedule.
We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of initial delay changes with different target delays: with higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$.
In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this:
Fig. FIGREF54 shows the BLEU-vs-CW plots for our two proposed algorithms. The TN controller has higher CW than the LE controller. This is because the TN controller prefers to update the output buffer in consecutive bursts (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$) while the LE controller often updates its output buffer following the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than LE, the CW for TN ($< 6$) is acceptable for most speech translation scenarios.
<<</Analyze>>>
<<<Translation Examples>>>
Fig. FIGREF55 shows three translation examples with the LE and TN controllers on the test set of NIST ZH$\rightarrow $EN and WMT15 EN$\rightarrow $DE translation. In manual inspection of these examples and others, we find that the TN controller learns a conservative timing for stopping prefix translation. For example, in example 2, our method outputs translation “wu bangguo attended the signing ceremony” when observing “吴邦国 出席 签字 仪式 并”, instead of a more radical translation “wu bangguo attended the signing ceremony and”. Such strategy helps to alleviate the problem of premature translation, i.e., translating before observing enough future context.
<<</Translation Examples>>>
<<</Experiments>>>
<<<Related Work>>>
A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26.
Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work is significantly different from theirs in the way it uses a pretrained consecutive NMT model to perform simultaneous translation and in the design of the two stopping criteria.
<<</Related Work>>>
<<<Conclusion>>>
We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate the partial source sentence with the pretrained consecutive NMT model and stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing between translation quality and latency.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nSimultaneous NMT\nPrefix Translation\nStopping Criterion\nLength and EOS Control\nLearning When to Stop\nReward\nPolicy Gradient\nExperiments\nSettings\nDataset\nPretrained NMT Model\nTN Controller\nBaseline\nResults\nAnalyze\nTranslation Examples\nRelated Work\nConclusion"
],
"type": "outline"
}
|
1909.05360
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction
<<<Abstract>>>
We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the temporal relation labels jointly. Experiments show that the proposed method can improve both event extraction and temporal relation extraction over state-of-the-art systems, with the end-to-end F1 improved by 10% and 6.8% on two benchmark datasets respectively.
<<</Abstract>>>
<<<Introduction>>>
The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation).
As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event.
Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion.
Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task.
<<</Introduction>>>
<<<Related Work>>>
In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead.
Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution.
Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5.
In practice, we need to extract both events and the temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map the tabular representation into sequential predictions with heuristic rules, and construct a global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge, and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entities and relations, but fall short on ensuring prediction consistency. BIBREF24 combine the benefits of both neural networks and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe our proposed method in detail.
<<</Related Work>>>
<<<Joint Event-Relation Extraction Model>>>
In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$.
<<<Neural SSVM>>>
Our neural SSVM adapts the SSVM loss as:
where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$; $\Phi $ denotes model parameters, $n$ indexes instances, and $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of events $|\mathcal {E}|^n$ and relations $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of either one-hot vectors representing the true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$, or entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measurement between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are the hyper-parameters that balance the losses between events, relations and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we learn with a multi-tasking neural architecture. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model $\hat{y}^n$ with a margin $\Delta (y^n, \hat{y}^n)$, or else there will be some loss. The training objective is to minimize the loss.
The major difference between our neural-SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end.
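The max-margin term at the core of this loss can be sketched as below; the weighting of the event and relation parts by $C_{\mathcal {E}}$ and $C$ and the normalization by $M^n$ follow the partially omitted equation above and are not reproduced exactly.

import torch

def structured_hinge(score_gold, score_pred, hamming_distance):
    """Positive loss whenever the best predicted structure scores within the margin of the gold one."""
    return torch.clamp(hamming_distance + score_pred - score_gold, min=0.0)

print(structured_hinge(torch.tensor(3.0), torch.tensor(2.5), torch.tensor(1.0)))   # tensor(0.5000)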
<<</Neural SSVM>>>
<<<Multi-Tasking Neural Scoring Function>>>
The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt a RNN-based scoring function for both event and relation prediction in order to learn features in a data driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer.
The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events.
Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections.
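A sketch of this shared scorer is given below (PyTorch); the hidden sizes, the number of relation labels and the linguistic feature dimension are assumed values, and the BERT embeddings are treated as fixed inputs.

import torch
import torch.nn as nn

class SharedScorer(nn.Module):
    """Fixed BERT embeddings -> shared BiLSTM -> event scorer and relation scorer."""
    def __init__(self, emb_dim=768, hidden=200, num_relations=7, ling_dim=3):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.event_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        self.rel_mlp = nn.Sequential(nn.Linear(4 * hidden + ling_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_relations))

    def forward(self, bert_embeddings, pairs, ling_feats):
        h, _ = self.bilstm(bert_embeddings)                             # (1, seq_len, 2*hidden)
        event_scores = self.event_mlp(h)                                # per-token event / non-event scores
        i, j = pairs[:, 0], pairs[:, 1]
        pair_repr = torch.cat([h[0, i], h[0, j], ling_feats], dim=-1)   # [f_i; b_i; f_j; b_j; L_ij]
        return event_scores, self.rel_mlp(pair_repr)                    # per-pair relation label scores

scorer = SharedScorer()
ev, rel = scorer(torch.randn(1, 12, 768), torch.tensor([[2, 7], [3, 9]]), torch.zeros(2, 3))
print(ev.shape, rel.shape)                                              # (1, 12, 2) and (2, 7)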
<<</Multi-Tasking Neural Scoring Function>>>
<<<MAP Inference>>>
A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4.
<<<Objective Function>>>
The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14:
where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all events and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations.
<<</Objective Function>>>
<<<Constraints>>>
We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph.
<<<Event-Relation Consistency.>>>
Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property,
where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. $r^P_{i,j}$ indicates a positive relation: BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE, and $r^N_{i,j}$ indicates the negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A.
<<</Event-Relation Consistency.>>>
<<<Symmetry and Transitivity Constraint.>>>
We also explore the symmetry and transitivity constraints of relations. They are specified as follows:
Intuitively, the symmetry constraint forces two pairs of events with flipped orders to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint dictates that if the ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25.
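To illustrate how the objective and constraints interact, here is a small, self-contained sketch of MAP inference over a toy instance with two candidate tokens. It uses a reduced label set and exhaustive search instead of an ILP solver, and the placement of the weight $C_\mathcal{E}$ on the event term is an assumption; the actual system solves the full problem with Gurobi and also enforces transitivity, which only applies once three or more tokens are involved.

```python
from itertools import product

# Simplified label set; the full system uses six positive relations plus NONE.
REL_LABELS = ["BEFORE", "AFTER", "NONE"]
REVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE", "NONE": "NONE"}

def map_inference(event_scores, rel_scores, c_e=1.0):
    """Exhaustive MAP inference over a tiny instance with two candidate tokens.

    event_scores: {token: {0: score, 1: score}}
    rel_scores:   {(i, j): {label: score}} for the ordered pairs (0, 1) and (1, 0).
    Token ids double as positions in this toy example.
    """
    tokens = sorted(event_scores)
    best, best_score = None, float("-inf")
    for events in product([0, 1], repeat=len(tokens)):
        for r01, r10 in product(REL_LABELS, repeat=2):
            rels = {(0, 1): r01, (1, 0): r10}
            # Symmetry: flipping the pair order must reverse the relation.
            if REVERSE[r01] != r10:
                continue
            # Event-relation consistency: a positive relation iff both tokens are events.
            ok = all((lab != "NONE") == (events[i] == 1 and events[j] == 1)
                     for (i, j), lab in rels.items())
            if not ok:
                continue
            score = c_e * sum(event_scores[t][events[k]] for k, t in enumerate(tokens))
            score += sum(rel_scores[p][lab] for p, lab in rels.items())
            if score > best_score:
                best, best_score = (events, rels), score
    return best, best_score

event_scores = {0: {0: 0.1, 1: 0.9}, 1: {0: 0.2, 1: 0.8}}
rel_scores = {(0, 1): {"BEFORE": 0.7, "AFTER": 0.1, "NONE": 0.2},
              (1, 0): {"BEFORE": 0.1, "AFTER": 0.6, "NONE": 0.3}}
print(map_inference(event_scores, rel_scores))
```

Even at this scale the effect of the constraints is visible: assignments where only one token is an event are forced to take the NONE relation, and the symmetry constraint removes inconsistent pair orderings before scores are compared.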
<<</Symmetry and Transitivity Constraint.>>>
<<</Constraints>>>
<<</MAP Inference>>>
<<<Learning>>>
We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach which first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4.
<<</Learning>>>
<<</Joint Event-Relation Extraction Model>>>
<<<Implementation Details>>>
In this section we describe implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best.
<<<Baselines>>>
We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note that BIBREF3 does not report event and end-to-end temporal relation extraction performances, so we calculate the scores per our implementation.
<<</Baselines>>>
<<<End-to-End Event Temporal Relation Extraction>>>
<<<Single-Task Model.>>>
The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction.
<<</Single-Task Model.>>>
<<<Multi-Task Model.>>>
This is the same as the single-task model except that the BiLSTM layer is now shared for both event and relation tasks. Note that both single-task and multi-task models are not trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
<<</Multi-Task Model.>>>
<<<Pipeline Joint Model.>>>
This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards, inspired by BIBREF23.
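The candidate construction step can be sketched as follows; the function name and the optional distance cut-off are illustrative assumptions, but the key point is that pairs whose arguments are not both gold events default to the NONE label.

```python
def build_relation_candidates(predicted_events, gold_relations, max_dist=None):
    """Pair up predicted event tokens; pairs without a gold label become NONE.

    predicted_events: set of token indices predicted as events.
    gold_relations:   {(i, j): label} gold annotations for positive pairs.
    """
    candidates = []
    events = sorted(predicted_events)
    for a in range(len(events)):
        for b in range(a + 1, len(events)):
            i, j = events[a], events[b]
            if max_dist is not None and j - i > max_dist:
                continue  # optionally restrict to nearby pairs (illustrative)
            candidates.append(((i, j), gold_relations.get((i, j), "NONE")))
    return candidates

# Toy example: token 7 is wrongly predicted as an event, producing NONE pairs
# that teach the relation model to absorb event-prediction errors.
print(build_relation_candidates({2, 5, 7}, {(2, 5): "BEFORE"}))
# [((2, 5), 'BEFORE'), ((2, 7), 'NONE'), ((5, 7), 'NONE')]
```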
<<</Pipeline Joint Model.>>>
<<<Structured Joint Model.>>>
This is described in detail in Section SECREF3. However, we experience difficulties in training the model with the SSVM loss from scratch. This is due to the large number of non-event tokens, which the model is not capable of distinguishing at the beginning of training. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss.
To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES.
<<</Structured Joint Model.>>>
<<<Hyper-Parameters.>>>
All single-task, multi-task and pipeline joint models are trained by minimizing cross-entropy loss. We observe that model performances vary significantly with the dropout ratio, the hidden layer dimensions of the BiLSTM model and the entity weight in the loss function (with the relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embeddings, and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout set based on an 80/20 split of the training data for MATRES. To solve the ILP in the inference process, we leverage an off-the-shelf solver provided by the Gurobi optimizer; i.e., the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix.
<<</Hyper-Parameters.>>>
<<</End-to-End Event Temporal Relation Extraction>>>
<<</Implementation Details>>>
<<<Experimental Setup>>>
In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end.
<<<Temporal Relation Data>>>
Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations. Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research.
The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
<<</Temporal Relation Data>>>
<<<Evaluation Metrics>>>
To be consistent with previous research, we adopt two different evaluation metrics. The first one is the standard micro-average score. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second one is similar except that we exclude both NONE and VAGUE pairs following BIBREF6. Please refer to Figure 4 in the appendix for a visualization of the two metrics.
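A rough sketch of such a metric is given below; it is a reconstruction for illustration only and may differ in details from the official evaluation scripts.

```python
def micro_prf(gold, pred, excluded=("NONE",)):
    """Micro precision/recall/F1 over relation pairs, excluding some labels.

    gold, pred: {(i, j): label}. A pair counts toward recall if its gold label
    is not excluded, toward precision if its predicted label is not excluded,
    and as correct when the two labels match.
    """
    correct = sum(1 for p, g in gold.items()
                  if g not in excluded and pred.get(p) == g)
    n_pred = sum(1 for lab in pred.values() if lab not in excluded)
    n_gold = sum(1 for lab in gold.values() if lab not in excluded)
    precision = correct / n_pred if n_pred else 0.0
    recall = correct / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(0, 1): "BEFORE", (0, 2): "VAGUE", (1, 2): "NONE"}
pred = {(0, 1): "BEFORE", (0, 2): "AFTER", (1, 2): "AFTER"}
print(micro_prf(gold, pred))                              # exclude NONE only
print(micro_prf(gold, pred, excluded=("NONE", "VAGUE")))  # second metric
```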
<<</Evaluation Metrics>>>
<<</Experimental Setup>>>
<<<Results and Analysis>>>
The main results of this paper can be found in Table TABREF34. All of the best recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative.
Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs.
<<<End-to-End Relation Extraction>>>
<<<TB-Dense.>>>
The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model helps only marginally over the multi-task model. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%, which outperforms the baseline system by 10.0% and the single-task model by 5.1%.
<<</TB-Dense.>>>
<<<MATRES.>>>
Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%, which means that pipeline joint training does not bring additional gains over multi-task training for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix).
<<</MATRES.>>>
<<</End-to-End Relation Extraction>>>
<<<Event Extraction>>>
<<</Event Extraction>>>
<<<Relation Extraction with Gold Events>>>
<<</Relation Extraction with Gold Events>>>
<<<Discussion>>>
<<<Label Imbalance.>>>
One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss.
Figure FIGREF50 shows that increasing class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% in the overall F1 score. Performance of INCLUDES and IS_INCLUDED eventually degrades when the class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions on either TB-Dense or MATRES even when increasing the class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general.
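In PyTorch, such re-weighting is a one-line change to the loss; the label inventory and the 4x factor below are illustrative, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

# Hypothetical label inventory for TB-Dense-style relations.
labels = ["BEFORE", "AFTER", "INCLUDES", "IS_INCLUDED", "SIMULTANEOUS", "VAGUE", "NONE"]

# Up-weight the rare classes (here 4x, as explored in the ablation above).
weights = torch.ones(len(labels))
for rare in ("INCLUDES", "IS_INCLUDED", "VAGUE"):
    weights[labels.index(rare)] = 4.0

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, len(labels))        # scores for a batch of 8 candidate pairs
targets = torch.randint(len(labels), (8,))  # gold relation indices
print(criterion(logits, targets).item())
```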
<<</Label Imbalance.>>>
<<<Global Constraints.>>>
In Table TABREF51 we conduct an ablation study to understand the contributions of the event-relation prediction consistency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain from using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) We leveraged BERT contextualized embeddings as word representations, which could tackle transitivity in the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs.
<<</Global Constraints.>>>
<<<Error Analysis.>>>
By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples.
<<</Error Analysis.>>>
<<<Type 1.>>>
Both events in Ex 1 are assigned low scores by the event module ($\ll 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE.
<<</Type 1.>>>
<<<Type 2.>>>
In Ex 2 the event module assigns high scores to tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
<<</Type 2.>>>
<<<Type 3.>>>
The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task.
Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations.
<<</Type 3.>>>
<<</Discussion>>>
<<</Results and Analysis>>>
<<<Conclusion>>>
In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively.
Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nJoint Event-Relation Extraction Model\nNeural SSVM\nMulti-Tasking Neural Scoring Function\nMAP Inference\nObjective Function\nConstraints\nEvent-Relation Consistency.\nSymmetry and Transitivity Constraint.\nLearning\nImplementation Details\nBaselines\nEnd-to-End Event Temporal Relation Extraction\nSingle-Task Model.\nMulti-Task Model.\nPipeline Joint Model.\nStructured Joint Model.\nHyper-Parameters.\nExperimental Setup\nTemporal Relation Data\nEvaluation Metrics\nResults and Analysis\nEnd-to-End Relation Extraction\nTB-Dense.\nMATRES.\nEvent Extraction\nRelation Extraction with Gold Events\nDiscussion\nLabel Imbalance.\nGlobal Constraints.\nError Analysis.\nType 1.\nType 2.\nType 3.\nConclusion"
],
"type": "outline"
}
|
2003.12738
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Variational Transformers for Diverse Response Generation
<<<Abstract>>>
Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation. Previous work proposes to capture the variability of dialogue responses with a recurrent neural network (RNN)-based conditional variational autoencoder (CVAE). However, the autoregressive computation of the RNN limits the training efficiency. Therefore, we propose the Variational Transformer (VT), a variational self-attentive feed-forward sequence model. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of the VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables. Then, the proposed models are evaluated on three conversational datasets with both automatic metrics and human evaluation. The experimental results show that our models improve standard Transformers and other baselines in terms of diversity, semantic relevance, and human judgment.
<<</Abstract>>>
<<<Introduction>>>
Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in a wide range of NLP tasks. These architectures remove the computational temporal dependency during training and effectively address the long-standing vanishing gradient problem of recurrent models by processing all inputs simultaneously. Notably, Transformers apply a fully attentional strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. This acts as an effectively global receptive field across the whole sequence, which is absent in RNNs. Despite the powerful modeling capability of Transformers, they often fail to model the one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic responses (e.g., “I am not sure"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular, BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrate latent variables into the hidden states of the RNN decoder. However, the inherently sequential computation of the aforementioned models limits the efficiency of large-scale training.
In this paper, we introduce the Variational Transformer (VT), a variational self-attentive feed-forward sequence model, to address the aforementioned issues. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of VT: 1) the Global Variational Transformer (GVT), and 2) the Sequential Variational Transformer (SVT). The GVT is the extension of the CVAE in BIBREF2, which models the discourse-level diversity with a global latent variable, while the SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into the decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which attends to future tokens for computing posterior latent variables instead of using an additional encoder.
The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on three conversational datasets demonstrate that our models can generate more informative and coherent responses.
<<</Introduction>>>
<<<Related work>>>
<<<Neural Conversational Models>>>
Conversational systems have been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compared to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry" BIBREF9. To address this issue, there have been three main lines of work. The first adds additional information (e.g., persona) as input to guide the model to generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line, introducing a novel model, the Variational Transformer, to improve dialogue response generation.
<<</Neural Conversational Models>>>
<<<Conditional Variational Autoencoders>>>
Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 to text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into the CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which are enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of the CVAE, and explore combinations of the CVAE and the Transformer.
<<</Conditional Variational Autoencoders>>>
<<<Fully Attentional Networks>>>
Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 propose a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better results on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence of discrete latent variables. Then a parallel decoder decodes the target using the discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process.
<<</Fully Attentional Networks>>>
<<</Related work>>>
<<<Preliminaries>>>
<<<Conditional Variational Autoencoder for Dialogue Generation>>>
The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given $c$, according to:
The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate the posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming that $z$ follows a multivariate Gaussian distribution with a diagonal co-variance matrix, the evidence lower bound (ELBO) can be written as
where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior.
In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of the response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and the posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder uses $z$ and $c$ as the initial state to predict the response $x$.
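The following is a compact PyTorch sketch of the prior and recognition networks, the reparameterization trick, and the closed-form KL term between two diagonal Gaussians. The network sizes and the simple MLP parameterization are assumptions for illustration, not the exact implementation of any of the cited systems.

```python
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """MLP that outputs the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, in_dim), nn.Tanh(),
                                 nn.Linear(in_dim, 2 * z_dim))
    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, eps ~ N(0, I): keeps sampling differentiable.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ) for diagonal Gaussians.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

c_dim, x_dim, z_dim = 300, 300, 300
prior_net = GaussianNet(c_dim, z_dim)
recog_net = GaussianNet(c_dim + x_dim, z_dim)

c = torch.randn(4, c_dim)            # context representation
x = torch.randn(4, x_dim)            # response representation (training only)
mu_p, logvar_p = prior_net(c)
mu_q, logvar_q = recog_net(torch.cat([c, x], dim=-1))
z = reparameterize(mu_q, logvar_q)   # posterior sample used by the decoder
kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
print(z.shape, kl.item())
```

At test time only the prior branch is used, so the sampled z would come from the prior network instead of the recognition network.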
The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs: the powerful autoregressive RNN decoder first learns to ignore the latent variable and decodes the response by conditioning only on the previous tokens. Thus the latent variable fails to encode meaningful information, and the CVAE deteriorates to a seq2seq model. To alleviate this issue, KL annealing BIBREF24 and the bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16.
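KL annealing itself is only a weighting schedule on the KL term; a minimal sketch is shown below, where the linear ramp over 10k steps is an illustrative choice rather than the schedule used in the cited work.

```python
def kl_weight(step, total_annealing_steps=10000):
    """Linear KL annealing: ramp the KL term weight from 0 to 1 early in training."""
    return min(1.0, step / total_annealing_steps)

# Inside a hypothetical training loop:
# loss = rec_loss + kl_weight(step) * kl_loss + bow_loss
for step in (0, 2500, 5000, 20000):
    print(step, kl_weight(step))   # 0.0, 0.25, 0.5, 1.0
```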
<<</Conditional Variational Autoencoder for Dialogue Generation>>>
<<<CVAE with Transformer>>>
The aforementioned RNN-based CVAE framework integrates the latent variable into the initial state of the RNN decoder, while in the Transformer it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state.
The overall architecture of the GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to obtain fixed-dimensional representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence, as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and the meta features $m$ (which can be ignored when not available) to $e_{SOS}$, the embedding of the start-of-sequence token $SOS$:
Finally, the transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of token $SOS$ with latent information.
This design enhances the CVAE framework with the global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers from the vanishing latent variable problem as the RNN-based CVAE does, because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply KL annealing and the bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows:
<<</CVAE with Transformer>>>
<<</Preliminaries>>>
<<<Sequential Variational Transformer>>>
In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables into the decoding process. We introduce the Sequential Variational Transformer (SVT) with a novel variational decoder layer which generates latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which leaks future information to the recognition network for computing the posterior latent variables.
As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and prior latent variable respectively. We denote them as Posterior Path and Prior Path.
<<<Prior Path>>>
The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head attention sub-layer which performs encoder-decoder attention on the context encoder. The last sub-layer is composed of an MLP prior network which approximates a sequence of prior latent variables for each position, and a Position-wise Feed-Forward Network (FFN) which fuses the latent information $z$ with the observed information representation $o^P$ before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FFN, and the FFN passes the fused representation to the next layer. As in BIBREF0, each sub-layer in the variational decoder layer is followed by a residual connection and layer normalization, that is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$.
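The causal masking in the Prior Path can be made concrete with a short PyTorch sketch; the Posterior Path described next simply omits the mask so that every position can also attend to future tokens. The dimensions below are illustrative.

```python
import torch
import torch.nn as nn

def causal_mask(seq_len):
    """Additive attention mask for the Prior Path: position t may only attend
    to positions <= t (0 on/below the diagonal, -inf above it)."""
    return torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
x = torch.randn(2, 4, 64)                                # (batch, seq_len, dim)
out_prior, _ = attn(x, x, x, attn_mask=causal_mask(4))   # causal: Prior Path
out_posterior, _ = attn(x, x, x)                         # non-causal: Posterior Path
print(causal_mask(4))
print(out_prior.shape, out_posterior.shape)
```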
We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes:
where
<<</Prior Path>>>
<<<Posterior Path>>>
The only difference between the Posterior Path (dashed line in Figure FIGREF13) and the Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (causal) multi-head attention becomes non-causal multi-head attention, which allows each position to attend to subsequent positions. Then, the second multi-head attention sub-layer (sharing the same weights with the Prior Path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as:
where
During the training, the posterior path guides the learning of prior path via KL divergence constraint:
In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path will be blocked and the posterior latent variables will be replaced with the prior latent variables from Equation DISPLAY_FORM15.
During the decoding process, each response token $x_t$ is generated by conditioning on observed response tokens $x_{1:t-1}$, latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is:
<<</Posterior Path>>>
<<<Auxiliary Loss>>>
As we expect the latent variables to be a generation plan for the future sequence, we inject such a bias into the latent variables by using an auxiliary loss: Sequential-Bag-of-Word (SBOW), which was proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using the latent variable $z_t$. In our case, the succeeding-word prediction also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by:
where $f_{aux}$ is a feed-forward neural network with the softmax output.
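A rough sketch of the per-position SBOW term is given below: at each position the auxiliary head must assign probability to every succeeding gold token. The averaging over positions and the stand-in logits are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def sbow_loss(aux_logits, target_ids):
    """Sequential bag-of-words auxiliary loss (sketch).

    aux_logits: (T, vocab) -- output of the auxiliary head at each position t.
    target_ids: (T,) gold response token ids.
    At position t the model must cover every succeeding token x_{t:T}; we sum
    the corresponding negative log-likelihoods.
    """
    log_probs = F.log_softmax(aux_logits, dim=-1)
    T = target_ids.size(0)
    loss = 0.0
    for t in range(T):
        succeeding = target_ids[t:]                  # bag of words x_{t:T}
        loss = loss - log_probs[t, succeeding].sum()
    return loss / T

vocab, T = 100, 5
aux_logits = torch.randn(T, vocab)      # stand-in for f_aux(z_t, c, x_{<t})
targets = torch.randint(vocab, (T,))
print(sbow_loss(aux_logits, targets).item())
```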
<<</Auxiliary Loss>>>
<<<Learning>>>
The evidence lower bound (ELBO) objective of SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position:
We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows:
where,
<<</Learning>>>
<<</Sequential Variational Transformer>>>
<<<Experiments>>>
<<<Dataset>>>
We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26.
<<<MojiTalk>>>
dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total with unbalanced distribution. We use the preprocessed data and vocabulary released from BIBREF16 and follow the same split of train/validation/test set.
<<</MojiTalk>>>
<<<PersonaChat & Empathetic-Dialogues>>>
are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas which are established by four to six persona sentences, while in Empathetic-Dialogues (ED), the conversations are mostly about a situation that happened to one of the speakers, with the other speaker trying to understand the feeling and reply accordingly. Both datasets are about modeling social skills, and the goal is to make the user more engaged. Therefore, we combine the train/validation/test sets of the two datasets.
<<</PersonaChat & Empathetic-Dialogues>>>
<<</Dataset>>>
<<<Baselines>>>
We compare the proposed models with the following baselines:
<<<Seq2Seq.>>>
An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16.
<<</Seq2Seq.>>>
<<<CVAE.>>>
An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenate it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16.
<<</CVAE.>>>
<<<Transformer.>>>
A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered the base model for both the GVT and the SVT.
<<</Transformer.>>>
<<</Baselines>>>
<<<Hyper-parameters and Training Setup>>>
We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both the encoder and the decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with embedding dimension 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with 512 hidden dimensions. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both the GVT and the SVT. Then the models are trained end-to-end by the Adam optimizer with an initial learning rate of $2\times 10^{-4}$. KL annealing and the early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models.
<<</Hyper-parameters and Training Setup>>>
<<<Automatic Evaluation>>>
<<<PPL & KLD.>>>
The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and prior (KLD). A well-trained model should achieve a low reconstruction loss and a small but non-trivial KL distance BIBREF27.
<<</PPL & KLD.>>>
<<<Diversity.>>>
To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-grams ratio indicates more diverse generation.
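The Dist-n scores can be computed with a few lines of Python; the tokenized toy generations below are only for illustration.

```python
def distinct_n(sentences, n):
    """Dist-n: ratio of distinct n-grams to total n-grams over all generations."""
    total, distinct = 0, set()
    for tokens in sentences:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        distinct.update(ngrams)
    return len(distinct) / total if total else 0.0

generations = [["i", "am", "not", "sure"], ["i", "am", "happy", "for", "you"]]
for n in (1, 2, 3):
    print(f"Dist-{n}:", round(distinct_n(generations, n), 3))
```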
<<</Diversity.>>>
<<<Embeddings Similarity.>>>
This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of a ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28, which calculates the average of the word embeddings in a sentence using FastText BIBREF29, trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because they can handle the out-of-vocabulary issue. However, representing a sentence by simply taking the average of its word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model, BERT BIBREF25, to compute a contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representations of each to obtain the sentence embeddings. We denote such contextualized sentence embeddings as $\textbf {EMB}_\textbf {BERT}$.
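For the EMB_FT variant, the computation reduces to averaging word vectors and taking a cosine; the sketch below uses random stand-in vectors instead of actual FastText embeddings and simply zeros out OOV words, whereas FastText itself would back off to subword vectors.

```python
import numpy as np

def sentence_embedding(tokens, word_vectors, dim=300):
    """EMB_FT-style sentence vector: average of word vectors."""
    vecs = [word_vectors.get(t, np.zeros(dim)) for t in tokens]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy 300-d vectors standing in for pre-trained FastText embeddings.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=300) for w in ["i", "am", "glad", "happy", "sad"]}
gen = sentence_embedding(["i", "am", "glad"], vocab)
ref = sentence_embedding(["i", "am", "happy"], vocab)
print(round(cosine(gen, ref), 3))
```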
<<</Embeddings Similarity.>>>
<<</Automatic Evaluation>>>
<<<Human Evaluation>>>
In the human evaluation, we prepare multiple-choice questions for human evaluators, and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on-topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad", which means that none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with respect to the human standard.
<<</Human Evaluation>>>
<<</Experiments>>>
<<<Results>>>
<<<Quantitative Analysis>>>
The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity compared to RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores such as Dist-1, Dist-2, and Dist-3.
Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.
On the other hand, the SVT achieves the highest score in terms of the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED, we observe a performance drop of the SVT compared to other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk, which was collected from Twitter. We hypothesize that the sequential latent variables have no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgement BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of the SVT are closer to the human standard in terms of coherence, invoked emotion and engagedness.
<<</Quantitative Analysis>>>
<<<Qualitative Analysis>>>
Table TABREF42 compares the generations of the proposed models with the baselines given the same contexts. We observe that the Seq2Seq model and the vanilla Transformer tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk because their deterministic structure fails to capture the variability in dialogue responses. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, the GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, the SVT is able to generate more coherent and informative responses.
<<</Qualitative Analysis>>>
<<</Results>>>
<<<Conclusion>>>
This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform the baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nNeural Conversational Models\nConditional Variational Autoencoders\nFully Attentional Networks\nPreliminaries\nConditional Variational Autoencoder for Dialogue Generation\nCVAE with Transformer\nSequential Variational Transformer\nPrior Path\nPosterior Path\nAuxiliary Loss\nLearning\nExperiments\nDataset\nMojiTalk\nPersonaChat & Empathetic-Dialogues\nBaselines\nSeq2Seq.\nCVAE.\nTransformer.\nHyper-parameters and Training Setup\nAutomatic Evaluation\nPPL & KLD.\nDiversity.\nEmbeddings Similarity.\nHuman Evaluation\nResults\nQuantitative Analysis\nQualitative Analysis\nConclusion"
],
"type": "outline"
}
|
1909.03544
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER
<<<Abstract>>>
Contextualized embeddings, which capture appropriate word meaning depending on context, have recently been proposed. We evaluate two methods for precomputing such embeddings, BERT and Flair, on four Czech text processing tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). The first three tasks, POS tagging, lemmatization and dependency parsing, are evaluated on two corpora: the Prague Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We report state-of-the-art results for the above mentioned tasks and corpora.
<<</Abstract>>>
<<<Introduction>>>
Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word, which sums over all its occurrences and ignores the appropriate word meaning in various contexts, contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models.
Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of a deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing the state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from the preceding and following context at the same time, at the cost of increased processing requirements. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks.
Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks.
<<</Introduction>>>
<<<Related Work>>>
As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with a recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Žabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques.
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) ELMo BIBREF0, the only contextualized embeddings newly presented at the time, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time.
For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15.
<<</Related Work>>>
<<<Datasets>>>
<<<Prague Dependency Treebank 3.5>>>
The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes are presented in Table TABREF7.
A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2.
In evaluation, we compute:
[noitemsep,topsep=0pt]
POS tagging accuracy,
lemmatization accuracy,
unlabeled attachment score (UAS),
labeled attachment score (LAS).
<<</Prague Dependency Treebank 3.5>>>
<<<Universal Dependencies>>>
The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels.
To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics:
[noitemsep,topsep=0pt]
UPOS – universal POS tags accuracy,
XPOS – language-specific POS tags accuracy,
UFeats – universal subset of morphological features accuracy,
Lemmas – lemmatization accuracy,
UAS – unlabeled attachment score, LAS – labeled attachment score,
MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score.
<<</Universal Dependencies>>>
<<<Czech Named Entity Corpus>>>
The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities.
We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes.
<<</Czech Named Entity Corpus>>>
<<</Datasets>>>
<<<Neural Architectures>>>
All our neural architectures are recurrent neural networks (RNNs). POS tagging, lemmatization and dependency parsing are performed with UDPipe 2.0 (Section SECREF16), and NER is performed with our new sequence-to-sequence model (Section SECREF36).
<<<POS Tagging, Lemmatization, and Dependency Parsing>>>
We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19.
<<<POS Tagging and Lemmatization>>>
The tagger employs a standard bi-LSTM architecture. After embedding the input words, three bidirectional LSTM BIBREF21 layers are performed, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task.
We construct a lemma generation rule from a given form and lemma as follows (a simplified sketch of this procedure is given after the list):
[noitemsep,topsep=0pt]
We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class.
If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c).
All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma.
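The following is a simplified Python sketch of the rule construction and application. It encodes whole prefix/suffix rewrites around the longest common substring rather than the character-level shortest edit scripts, and it ignores the casing encoding, so it should be read as an illustration of the idea rather than the system's exact rule set.

```python
from difflib import SequenceMatcher

def lemma_rule(form, lemma):
    """Simplified lemma-generation rule: find the longest common substring
    (case-insensitively) and record how the form's prefix and suffix must be
    rewritten to obtain the lemma."""
    f, l = form.lower(), lemma.lower()
    match = SequenceMatcher(None, f, l).find_longest_match(0, len(f), 0, len(l))
    if match.size == 0:
        return ("whole", lemma)            # no overlap: use the lemma itself as the class
    return ("affix",
            (f[:match.a], l[:match.b]),    # rewrite the form prefix to the lemma prefix
            (f[match.a + match.size:], l[match.b + match.size:]))  # and likewise the suffix

def apply_rule(form, rule):
    if rule[0] == "whole":
        return rule[1]
    f = form.lower()
    (fp, lp), (fs, ls) = rule[1], rule[2]
    core = f[len(fp):len(f) - len(fs)] if fs else f[len(fp):]
    return lp + core + ls

rule = lemma_rule("koček", "kočka")        # Czech: "cats" (gen. pl.) -> lemma "kočka"
print(rule, apply_rule("koček", rule))
```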
<<</POS Tagging and Lemmatization>>>
<<<Dependency Parsing>>>
The dependency parsing is again predicted using UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees.
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing:
[noitemsep,topsep=0pt]
not using them at all;
adding predicted POS tags and lemmas as input;
perform joint training of POS tags, lemmatization, and dependency parsing. In this case, we share first two bidirectional LSTM layers between the tagger and the parser.
<<</Dependency Parsing>>>
<<<Input Embeddings>>>
In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task.
Our architecture can optionally employ the following additional inputs:
[noitemsep,topsep=0pt]
pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over the subwords belonging to the same word (a short sketch of this pooling is given after the list).
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096.
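The subword-to-word pooling mentioned in the BERT item above amounts to two averaging steps; the sketch below uses random stand-in hidden states and an assumed subword-to-word alignment, so only the pooling logic is meant to be faithful.

```python
import numpy as np

def word_embeddings_from_subwords(layer_outputs, word_ids):
    """Average the last four BERT layers, then pool subword vectors per word.

    layer_outputs: (n_layers, n_subwords, dim) hidden states (toy values here).
    word_ids: for each subword, the index of the word it belongs to
              (e.g. [0, 1, 1, 2] if word 1 was split into two word pieces).
    """
    last_four = np.mean(layer_outputs[-4:], axis=0)          # (n_subwords, dim)
    n_words = max(word_ids) + 1
    return np.stack([last_four[[k for k, w in enumerate(word_ids) if w == i]].mean(axis=0)
                     for i in range(n_words)])                # (n_words, dim)

# Toy example: 12 layers, 4 subwords, 768 dims; word 1 is split into 2 pieces.
layers = np.random.randn(12, 4, 768)
print(word_embeddings_from_subwords(layers, [0, 1, 1, 2]).shape)  # (3, 768)
```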
<<</Input Embeddings>>>
<<<POS Tags and Lemmas Decoding>>>
Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model.
<<</POS Tags and Lemmas Decoding>>>
<<</POS Tagging, Lemmatization, and Dependency Parsing>>>
<<<Named Entity Recognition>>>
We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels.
The system is an encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use hard attention on the word whose label(s) are being predicted.
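A minimal PyTorch-style sketch of this hard-attention decoding loop follows; the start symbol, the cap on labels per token and the tensor shapes are assumptions made for illustration only.

```python
# A minimal sketch of seq2seq label decoding with hard attention: for each input token, keep
# emitting labels until "<eow>" is produced, then move to the next token.
import torch

def decode_labels(encoder_states, decoder, output_layer, label_embed, eow_id, max_labels=8):
    # encoder_states: (num_tokens, enc_dim) outputs of the bidirectional LSTM encoder
    hidden, predictions = None, []
    for enc in encoder_states:                     # hard attention: one input token at a time
        token_labels = []
        prev = label_embed.weight[eow_id]          # start symbol (an assumption of this sketch)
        for _ in range(max_labels):
            inp = torch.cat([enc, prev]).view(1, 1, -1)
            out, hidden = decoder(inp, hidden)     # decoder: an nn.LSTM stepped one label at a time
            label = output_layer(out).argmax(-1).item()
            if label == eow_id:
                break                              # "<eow>": move to the next input token
            token_labels.append(label)
            prev = label_embed.weight[label]
        predictions.append(token_labels)
    return predictions
```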
We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search.
In this model, we use the following word- and character-level word embeddings:
[noitemsep,topsep=0pt]
pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model.
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot).
end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters (a short sketch of this encoder follows the list).
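The character-level encoder can be sketched as follows; this is an illustrative module, not the released implementation.

```python
# A minimal PyTorch sketch of character-level word embeddings: run a bidirectional GRU over
# the characters of each word and concatenate the final forward and backward states.
import torch
import torch.nn as nn

class CharWordEmbedding(nn.Module):
    def __init__(self, num_chars, dim=128):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, dim)
        self.gru = nn.GRU(dim, dim, bidirectional=True, batch_first=True)

    def forward(self, char_ids):                     # char_ids: (num_words, max_chars)
        _, h = self.gru(self.char_embed(char_ids))   # h: (2, num_words, dim)
        return torch.cat([h[0], h[1]], dim=-1)       # (num_words, 2 * dim)
```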
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16).
<<</Named Entity Recognition>>>
<<</Neural Architectures>>>
<<<Results>>>
<<<POS Tagging and Lemmatization on PDT 3.5>>>
The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, though not as large. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on a considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.
The BERT embeddings alone bring the highest improvement in performance. Furthermore, combining them with WE or Flair yields a further increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results.
Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained from the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$.
<<</POS Tagging and Lemmatization on PDT 3.5>>>
<<<Dependency Parsing on PDT 3.5>>>
The evaluation of the contextualized embedding methods, as well as of the various ways of POS tag utilization, is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, employing BERT embeddings results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that the BERT syntactic representations encompass the Flair embeddings.
When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of exploiting POS tags and lemmas is a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to parsing alone, adding Flair embeddings to BERT results in a moderate gain – we hypothesise that the increase is due to the complementary morphological information present in the Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger of the best configuration from Section SECREF43.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To the best of our knowledge, research on PDT parsing was performed mostly in the first decade of this century; therefore, even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves a nearly 50% error reduction in both UAS and LAS.
<<</Dependency Parsing on PDT 3.5>>>
<<<POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>>
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.
We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods.
In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with the state of the art, the Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in the evaluation, following the CoNLL 2018 shared task training and evaluation protocol. Our system reuses the segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47).
Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme.
<<</POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>>
<<</Results>>>
<<<Conclusion>>>
We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nDatasets\nPrague Dependency Treebank 3.5\nUniversal Dependencies\nCzech Named Entity Corpus\nNeural Architectures\nPOS Tagging, Lemmatization, and Dependency Parsing\nPOS Tagging and Lemmatization\nDependency Parsing\nInput Embeddings\nPOS Tags and Lemmas Decoding\nNamed Entity Recognition\nResults\nPOS Tagging and Lemmatization on PDT 3.5\nDependency Parsing on PDT 3.5\nPOS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies\nConclusion"
],
"type": "outline"
}
|
1909.12642
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
HateMonitors: Language Agnostic Abuse Detection in Social Media
<<<Abstract>>>
Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came at First position for the German sub-task A. We have also made our model public at this https URL .
<<</Abstract>>>
<<<Introduction>>>
In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to a derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on society.
Social media moderators are having a hard time combating the rampant spread of hate speech, as it is closely related to the other forms of abusive language. The evolution of new slang and multilingualism further adds to the complexity.
Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being a clear indication BIBREF1. Arun et al. BIBREF1 suggest that hate speech in India is very complicated, as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian languages.
For the first time, a shared task on abusive content detection has been released for the Hindi language at HASOC 2019. This will fuel hate speech and offensive language research for Indian languages. The inclusion of datasets for the English and German languages enables a performance comparison for the detection of abusive content in high- and low-resource languages.
In this paper, we focus on the detection of multilingual hate speech written in Hindi, English, and German and describe our submission (HateMonitors) for the HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and uses machine learning models for classification.
<<</Introduction>>>
<<<Related works>>>
Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorize abusive language into two sub-classes – hate speech and offensive language. Classifying abusive language into these two subtypes is challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 use predefined language elements and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, research in hate and offensive speech detection has gained momentum.
Silva et al. BIBREF9 performed a large scale study to understand the targets of such hate speech on two social media platforms: Twitter and Whisper. These targets could be refugees and immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People can also become the target of hate speech based on nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19. Public expressions of hate speech affect the devaluation of minority members BIBREF20 and the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22.
One of the key issues with the current state of hate and offensive language research is that the majority of the work is dedicated to the English language BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, and the works are mostly monolingual. Any online social media platform contains people of different ethnicities, which results in the spread of information in multiple languages. Hence, a robust classifier is needed which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27 and HatEval BIBREF28 have recently focused on the detection of abusive text in multiple languages.
<<</Related works>>>
<<<Dataset and Task description>>>
The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages.
<<<Datasets>>>
We present the statistics for the HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, while English and Hindi are more or less balanced for sub-task A. For sub-task B, the German dataset is balanced but the others are unbalanced. For sub-task C, both datasets are highly unbalanced.
<<</Datasets>>>
<<<Tasks>>>
Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask.
Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task.
Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset.
<<</Tasks>>>
<<</Dataset and Task description>>>
<<<System Description>>>
In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system.
<<<Feature Generation>>>
<<<Preprocessing:>>>
We preprocess the tweets before performing the feature extraction. The following steps were followed (a short sketch appears after the list):
We remove all the URLs.
Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters.
We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders.
Any numerical figure was normalized to a string `number'.
We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact.
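The steps above can be sketched as follows; the regular expressions and the language check are illustrative assumptions rather than the exact implementation.

```python
# A minimal sketch of the preprocessing pipeline (regexes are illustrative assumptions).
import re

def preprocess(text: str, lang: str) -> str:
    text = re.sub(r"https?://\S+", "", text)   # remove URLs
    if lang != "hi":                           # Devanagari has no case distinction
        text = text.lower()
    text = re.sub(r"\d+", "number", text)      # normalize numerical figures
    return text                                # mentions, punctuation and stop-words are kept
```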
<<</Preprocessing:>>>
<<<Feature vectors:>>>
The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier.
Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers (BERT) BIBREF29 has played a key role in the advancement of the natural language processing (NLP) domain. BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector of length 768.
LASER embeddings: Researchers at Facebook released language-agnostic sentence representations (LASER) BIBREF30, where the model is jointly trained on 93 languages. The model takes a sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31.
We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model.
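A minimal sketch of this feature extraction is shown below; it assumes the HuggingFace transformers and laserembeddings packages, which are illustrative substitutes rather than the exact libraries used for the submission.

```python
# A minimal sketch (assuming HuggingFace `transformers` and the `laserembeddings` package):
# concatenate a 768-dim BERT sentence vector with a 1024-dim LASER vector into 1792 features.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from laserembeddings import Laser

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True)
laser = Laser()

def features(sentence, lang="en"):
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**enc).hidden_states                    # embedding layer + 12 layers
    bert_vec = torch.stack(hidden[-11:]).mean(0)[0].mean(0)   # mean of last 11 layers, then tokens
    laser_vec = laser.embed_sentences([sentence], lang=lang)[0]
    return np.concatenate([bert_vec.numpy(), laser_vec])      # shape: (1792,)
```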
<<</Feature vectors:>>>
<<</Feature Generation>>>
<<<Our Model>>>
The amount of data in each category was insufficient to train a deep learning model, and building such deep models would lead to overfitting. So, we resorted to simpler models such as SVM and gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, the Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of the winning solutions of many competitions. Hence, we used LGBM as the model for the downstream tasks in this competition.
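A minimal sketch of the classifier is shown below; the hyperparameters are placeholders, not the submitted configuration.

```python
# A minimal sketch: train a LightGBM classifier on the concatenated BERT + LASER features.
import lightgbm as lgb

def train_subtask(features, labels):
    # features: array of shape (n_posts, 1792); labels: the sub-task annotations
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.1)
    clf.fit(features, labels)
    return clf
```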
<<</Our Model>>>
<<</System Description>>>
<<<Results>>>
The performance of our models across the different languages for sub-task A is shown in Table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C are shown in Tables TABREF20 and TABREF21, respectively.
<<</Results>>>
<<<Discussion>>>
In the results of sub-task A, the models are mainly affected by the imbalance of the dataset. The Hindi training dataset was more balanced than the English or German datasets; hence, the results were around 0.78. As the German dataset was highly imbalanced, the results drop to 0.62. In sub-task B, the highest F1 score for each language was reached by the profane class in Table TABREF20. The model got confused between the OFFN, HATE and PRFN labels, which suggests that these models are not able to capture the context in the sentence. Sub-task C was again a case of an imbalanced dataset, as the targeted (TIN) label gets the highest F1 score in Table TABREF21.
<<</Discussion>>>
<<<Conclusion>>>
In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We use an LGBM model on top of the embeddings to perform the downstream tasks. Our model for the German language got the first position. The results provide a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated works\nDataset and Task description\nDatasets\nTasks\nSystem Description\nFeature Generation\nPreprocessing:\nFeature vectors:\nOur Model\nResults\nDiscussion\nConclusion"
],
"type": "outline"
}
|
2003.00639
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation
<<<Abstract>>>
Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments.
<<</Abstract>>>
<<<Introduction>>>
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
<<</Introduction>>>
<<<Curriculum Plausibility>>>
Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
<<<Conversational Attributes>>>
<<<Specificity>>>
A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using the Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
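For concreteness, the attribute can be computed as in the following minimal sketch; treating unseen words as maximally specific is an assumption of the sketch.

```python
# A minimal sketch of the specificity attribute: mean Normalized IDF (NIDF) of response words.
import math
from collections import Counter

def build_nidf(responses):
    n_r = len(responses)
    df = Counter(w for r in responses for w in set(r.split()))   # document frequencies N_w
    idf = {w: math.log(n_r / c) for w, c in df.items()}
    lo, hi = min(idf.values()), max(idf.values())
    scale = (hi - lo) or 1.0
    return {w: (v - lo) / scale for w, v in idf.items()}         # NIDF in [0, 1]

def specificity(response, nidf):
    words = response.split()
    return sum(nidf.get(w, 1.0) for w in words) / max(len(words), 1)
```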
<<</Specificity>>>
<<<Repetitiveness>>>
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
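A minimal sketch of this attribute follows; normalizing the count of repeated tokens by the response length is an assumption of the sketch.

```python
# A minimal sketch of the repetitiveness attribute: the fraction of tokens that already
# occurred earlier in the same response.
def repetitiveness(response):
    words = response.split()
    seen, repeats = set(), 0
    for w in words:
        repeats += w in seen     # I(w_i in {w_0, ..., w_{i-1}})
        seen.add(w)
    return repeats / max(len(words), 1)
```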
<<</Repetitiveness>>>
<<<Query-relatedness>>>
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
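A minimal sketch of this attribute is given below; the word-embedding dictionary emb and the unigram probabilities p are hypothetical inputs assumed to be precomputed.

```python
# A minimal sketch of query-relatedness: cosine similarity between smooth-inverse-frequency
# (SIF) weighted average word embeddings of the query and the response.
import numpy as np

def sent_emb(sentence, emb, p, a=0.001):
    words = [w for w in sentence.split() if w in emb]
    if not words:
        return np.zeros(len(next(iter(emb.values()))))
    vecs = [a / (a + p.get(w, 0.0)) * emb[w] for w in words]
    return np.mean(vecs, axis=0)

def query_relatedness(query, response, emb, p):
    q, r = sent_emb(query, emb, p), sent_emb(response, emb, p)
    denom = np.linalg.norm(q) * np.linalg.norm(r)
    return float(q @ r / denom) if denom > 0 else 0.0
```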
<<</Query-relatedness>>>
<<<Continuity>>>
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
<<</Continuity>>>
<<<Model Confidence>>>
Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
<<</Model Confidence>>>
<<</Conversational Attributes>>>
<<<Dialogue Analysis>>>
<<<Distributions among Attributes>>>
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
<<</Distributions among Attributes>>>
<<<Attributes Independence>>>
So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives.
<<</Attributes Independence>>>
<<</Dialogue Analysis>>>
<<</Curriculum Plausibility>>>
<<<Curriculum Dialogue Learning>>>
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
<<<Single Curriculum Dialogue Learning>>>
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum.
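For concreteness, a minimal sketch of this sampling scheme is shown below, with the training set assumed to be pre-sorted by a single attribute.

```python
# A minimal sketch of single-curriculum sampling: draw each batch from the top f(t) fraction
# of the attribute-sorted training set, with f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2)).
import math
import random

def progress(t, T, c0=0.01):
    return min(1.0, math.sqrt(t * (1 - c0 ** 2) / T + c0 ** 2))

def sample_batch(sorted_data, t, T, batch_size=64):
    pool = sorted_data[: max(1, int(progress(t, T) * len(sorted_data)))]
    return random.sample(pool, min(batch_size, len(pool)))
```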
<<</Single Curriculum Dialogue Learning>>>
<<<Adaptive Multi-curricula Learning>>>
Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
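A minimal PyTorch sketch of the curriculum policy and its REINFORCE update follows; the network size and the use of the terminal reward for every visited state are simplifications made for illustration.

```python
# A minimal sketch of the curriculum scheduler: a policy over k = 5 curricula updated with
# REINFORCE using the terminal reward m_Gamma.
import torch
import torch.nn as nn

class CurriculumPolicy(nn.Module):
    def __init__(self, state_dim, num_curricula=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, num_curricula))

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

def reinforce_update(policy, optimizer, states, actions, reward):
    # reward: the scalar m_Gamma observed at the end of the validation interval
    log_probs = torch.stack([policy(s).log_prob(a) for s, a in zip(states, actions)])
    loss = -(reward * log_probs).sum()      # ascend the REINFORCE gradient estimate
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```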
<<</Adaptive Multi-curricula Learning>>>
<<</Curriculum Dialogue Learning>>>
<<<Experiments>>>
<<<Experiment Settings>>>
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
<<</Experiment Settings>>>
<<<Implementation and Reproducibility>>>
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
<<</Implementation and Reproducibility>>>
<<<Overall Performance and Human Evaluation>>>
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
<<</Overall Performance and Human Evaluation>>>
<<<Model Analysis>>>
<<<Single vs Multi-curricula>>>
To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance.
<<</Single vs Multi-curricula>>>
<<<Effects of Adaptive Multi-curricula Learning>>>
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
<<</Effects of Adaptive Multi-curricula Learning>>>
<<<Learning Efficiency>>>
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
<<</Learning Efficiency>>>
<<<Multi-curricula Learning Route>>>
To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior.
<<</Multi-curricula Learning Route>>>
<<<Examples with Different Learning Frequencies>>>
As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework.
<<</Examples with Different Learning Frequencies>>>
<<</Model Analysis>>>
<<</Experiments>>>
<<<Related Work>>>
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from the real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information to the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with the previous researches, which pay most attention to the underlying dialogue models, in this work, we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on the conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold need be carefully chosen to prevent the data size decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at evolving learning process according to the learning status of the underlying model, enables dialogue systems gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity.
<<</Related Work>>>
<<<Conclusion>>>
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nCurriculum Plausibility\nConversational Attributes\nSpecificity\nRepetitiveness\nQuery-relatedness\nContinuity\nModel Confidence\nDialogue Analysis\nDistributions among Attributes\nAttributes Independence\nCurriculum Dialogue Learning\nSingle Curriculum Dialogue Learning\nAdaptive Multi-curricula Learning\nExperiments\nExperiment Settings\nImplementation and Reproducibility\nOverall Performance and Human Evaluation\nModel Analysis\nSingle vs Multi-curricula\nEffects of Adaptive Multi-curricula Learning\nLearning Efficiency\nMulti-curricula Learning Route\nExamples with Different Learning Frequencies\nRelated Work\nConclusion"
],
"type": "outline"
}
|
1909.13668
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation
<<<Abstract>>>
Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training, and the generative capacity of the model.
<<</Abstract>>>
<<<Introduction>>>
Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation.
The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$:
where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution.
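For concreteness, a minimal PyTorch sketch of the closed-form KL term and the resulting ELBO for a diagonal Gaussian posterior and a standard normal prior (generic, not tied to any particular encoder or decoder) is shown below.

```python
# A minimal sketch: closed-form KL between q(z|x) = N(mu, diag(exp(logvar))) and p(z) = N(0, I).
import torch

def gaussian_kl(mu, logvar):
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)

def elbo(recon_logprob, mu, logvar):
    # recon_logprob: log p(x|z) under the decoder; the ELBO is maximized during training
    return recon_logprob - gaussian_kl(mu, logvar)
```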
With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13.
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alteration of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training.
In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments.
<<</Introduction>>>
<<<Kullback-Leibler Divergence in VAE>>>
We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantifies the overall performance of the communication in encoding a message at the sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17.
<<<Reconstruction vs. KL>>>
The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$.
BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where the encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper-bound (KL term) can be seen as a means to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower-bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased.
<<</Reconstruction vs. KL>>>
<<<Explicit KL Control via @!START@$\beta $@!END@-VAE>>>
Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constrained optimization to impose the explicit constraint $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (see the experiments section). Alternatively, it has been shown BIBREF21 that a similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$, at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22.
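As a minimal sketch (assuming the reconstruction and KL terms have already been computed as tensors), the training loss corresponding to this objective can be written as follows.

```python
# A minimal sketch of the explicit-KL objective: penalize the deviation of the KL term from
# the target rate C instead of the KL term itself.
def beta_c_vae_loss(recon_nll, kl, C, beta=1.0):
    # recon_nll: -log p(x|z); kl: D_KL(q(z|x) || p(z)); both are (batch-averaged) tensors
    return recon_nll + beta * (kl - C).abs()
```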
<<</Explicit KL Control via @!START@$\beta $@!END@-VAE>>>
<<</Kullback-Leibler Divergence in VAE>>>
<<<Experiments>>>
We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied.
<<<Corpora>>>
We use 5 different corpora covering different domains and sizes throughout this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab.
<<</Corpora>>>
<<<Models>>>
We examine three VAE architectures, covering a range of decoding strengths, to examine whether the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have a hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows a similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with a filter size of 20. All models were trained for 10 epochs, optimising the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 and the following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. To couple the encoder with the decoder we concatenate the latent variable to the word embeddings at each time step, without initialising the hidden state.
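The coupling scheme just described (concatenating the latent variable to the word embeddings at every time step, with no hidden-state initialisation) could be sketched as follows for the LSTM decoder; the module name and exact wiring are our assumptions, and the dimensions mirror those listed above.

```python
import torch
import torch.nn as nn

class LatentConditionedLSTMDecoder(nn.Module):
    """Sketch: condition an LSTM decoder by concatenating z to every input embedding."""
    def __init__(self, vocab_size, emb_dim=256, z_dim=64, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + z_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, z):
        # tokens: (batch, seq_len), z: (batch, z_dim)
        emb = self.embed(tokens)                               # (B, T, emb_dim)
        z_rep = z.unsqueeze(1).expand(-1, emb.size(1), -1)     # repeat z at each step
        h, _ = self.lstm(torch.cat([emb, z_rep], dim=-1))      # no hidden-state initialisation
        return self.out(h)                                     # next-token logits
```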
<<</Models>>>
<<<Rate and Distortion>>>
To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on the Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on the KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue.
The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion), while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on the CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier: with an increase of KL we increase an upper bound on $\text{I}({x};{z})$, which in turn allows smaller values of the reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$.
As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of the reconstruction step in Table TABREF12. As expected, we observe that increasing the rate has a consistently positive impact on BLEU and ROUGE scores.
<<</Rate and Distortion>>>
<<<Aggregated Posterior>>>
To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and the prior $p(z)$. Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior.
We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of the covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$. This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows.
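Both diagnostics used in this subsection can be sketched as below; we assume a standard normal prior, and the direction of the KL for the moment-matched Gaussian fit is our reading of the text.

```python
import numpy as np

def aggregated_posterior_stats(z_samples):
    """z_samples: (n, d) array of unbiased samples z ~ q_phi(z|x), with x drawn from data."""
    mu = z_samples.mean(axis=0)
    cov = np.cov(z_samples, rowvar=False)
    _, logdet = np.linalg.slogdet(cov)            # log det Cov[q_phi(z)]
    d = z_samples.shape[1]
    # KL( N(mu, Cov) || N(0, I) ): the moment-matched Gaussian fit of q(z) against the prior.
    kl_fit_to_prior = 0.5 * (np.trace(cov) + mu @ mu - d - logdet)
    return logdet, kl_fit_to_prior
```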
The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section.
<<</Aggregated Posterior>>>
<<<Text Generation>>>
To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$.
During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$.
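A single step of the three decoding schemes can be sketched as follows; this follows the standard formulations of Top-k and Nucleus Sampling rather than the authors' exact implementation.

```python
import torch

def sample_next_token(logits, k=None, p=None):
    """One decoding step. logits: (vocab,). Pass k for Top-k, p for Nucleus Sampling,
    neither for greedy decoding."""
    probs = torch.softmax(logits, dim=-1)
    if k is not None:                                    # Top-k: sample among the K most probable words
        topv, topi = probs.topk(k)
        return topi[torch.multinomial(topv / topv.sum(), 1)]
    if p is not None:                                    # Nucleus: smallest prefix with cumulative mass >= p
        sortv, sorti = probs.sort(descending=True)
        cutoff = (sortv.cumsum(0) >= p).nonzero()[0].item() + 1
        kept = sortv[:cutoff]
        return sorti[torch.multinomial(kept / kept.sum(), 1)]
    return probs.argmax()                                # Greedy: most probable word
```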
<<<Qualitative Analysis>>>
We follow the settings of the homotopy experiment BIBREF2, where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine what neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $.
Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reasons, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or $C$ values, we use the same samples of $z$ for decoding.
<<<Sensitivity of Decoder>>>
To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generated with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include a sampling procedure, meaning that decoding the same latent variable twice would yield two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting the decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: given that AU increases with the increase of $C$, one would expect the activation pattern of a latent variable to become more complex as it comprises more information, so a small change in the pattern has a greater effect on the decoder.
<<</Sensitivity of Decoder>>>
<<<Coherence of Sequences>>>
We observe that the model trained with large values of $C$ compromises the sequences' coherence during sampling. This is especially evident when we compare $C=3$ with $C=100$. Analysis of the Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases, LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior moves further away from the prior; hence, the latent codes seen during training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the loss of coherence in the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s.
<<</Coherence of Sequences>>>
<<</Qualitative Analysis>>>
<<<Quantitative Analysis>>>
Quantitative analysis of generated text without gold reference sequences (e.g., in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts in this direction, with proposals such as self-BLEU BIBREF32, forward cross-entropy (FCE) BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences is generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log-likelihood (NLL) of the trained LM on the set of human-generated sentences.
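The FCE protocol reduces to the following sketch; every callable passed in (prior sampling, decoding, LM training and NLL evaluation) is a placeholder for the components described above, not a real API.

```python
def forward_cross_entropy(sample_prior, decode, train_lm, lm_nll, human_sentences, n=100000):
    """Sketch of FCE: decode prior samples into a synthetic corpus, train an LSTM LM on it,
    then report the LM's negative log-likelihood on human-written test sentences."""
    synthetic_corpus = [decode(sample_prior()) for _ in range(n)]   # z ~ p(z), x = decoder(z)
    lm = train_lm(synthetic_corpus)                                 # LM with the decoder's parametrisation
    return lm_nll(lm, human_sentences)                              # lower is better
```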
We generated synthetic corpora using trained models from Table TABREF12 with different C and decoding schemes and using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s we measure the FCE score three times, each time we sampled a new training corpus from a $\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora.
In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus.
The average sentence length in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern in FCE scores was caused by differences in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which then translates into a better FCE score at test time. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated synthetic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher values of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE.
In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: first, the vocabulary size for the generated corpora, for all values of $C$, is close to that of the original corpus (the corpus we used to train the $\beta _C$-VAELSTM); second, the vocabularies of the corpora generated with the three values of $C$ are very close to each other. As a result, minimal replacement of words with the 〈unk〉 symbol is required, making the experiment more reflective of the quality of the generated text. Similarly, self-BLEU for NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE.
<<</Quantitative Analysis>>>
<<</Text Generation>>>
<<<Syntactic Test>>>
In this section, we explore whether any form of syntactic information is captured by the encoder and represented in the latent codes, despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24, which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomena. For example, a pair in the subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both p1 and p2 are akin to accuracy and correspond to how often the grammatical sentence was assigned a higher probability.
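The two accuracies can be computed as in the sketch below, where `encode` and `log_prob` are placeholders for the encoder and for the decoder's log-probability $\log p(x|z)$.

```python
def syntactic_accuracy(pairs, encode, log_prob):
    """pairs: list of (grammatical x_plus, ungrammatical x_minus) sentences.
    Returns (p1, p2) as defined in the text."""
    p1_hits, p2_hits = 0, 0
    for x_plus, x_minus in pairs:
        z_plus, z_minus = encode(x_plus), encode(x_minus)
        p1_hits += log_prob(x_plus, z_plus) > log_prob(x_minus, z_plus)    # condition on z+
        p2_hits += log_prob(x_plus, z_minus) > log_prob(x_minus, z_minus)  # condition on z-
    return p1_hits / len(pairs), p2_hits / len(pairs)
```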
As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$.
However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences.
As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes.
<<</Syntactic Test>>>
<<</Experiments>>>
<<<Discussion and Conclusion>>>
In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder.
The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C>0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied.
We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments.
In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences.
Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors.
<<</Discussion and Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nKullback-Leibler Divergence in VAE\nReconstruction vs. KL\nExplicit KL Control via @!START@$\\beta $@!END@-VAE\nExperiments\nCorpora\nModels\nRate and Distortion\nAggregated Posterior\nText Generation\nQualitative Analysis\nSensitivity of Decoder\nCoherence of Sequences\nQuantitative Analysis\nSyntactic Test\nDiscussion and Conclusion"
],
"type": "outline"
}
|
2003.01472
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Seshat: A tool for managing and verifying annotation campaigns of audio data
<<<Abstract>>>
We introduce Seshat, a new, simple and open-source software to efficiently manage annotations of speech corpora. The Seshat software allows users to easily customise and manage annotations of large audio corpora while ensuring compliance with the formatting and naming conventions of the annotated output files. In addition, procedures for checking the content of annotations against specific rules can be implemented in personalised parsers. Finally, we propose a double-annotation mode, for which Seshat automatically computes an associated inter-annotator agreement with the $\gamma$ measure, taking into account categorisation and segmentation discrepancies.
<<</Abstract>>>
<<<Introduction>>>
Large corpora of speech, obtained in the laboratory and in naturalistic conditions, become easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2.
In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers.
In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, in Section SECREF4, we explain the architecture of the software, as well as the several UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5, and Section SECREF6 presents two specific use cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7.
<<</Introduction>>>
<<<Related Work>>>
Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8.
Web-based annotations systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10 that allow to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators nor do they integrate annotation checking.
On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (that can also be used offline) BIBREF12 allow to work with speech data and to distribute the tasks to several annotators. But these systems require expertise in web hosting and technologies to deploy and modify them.
Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotators' management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community.
<<</Related Work>>>
<<<Overview of Seshat>>>
Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow:
Audio Corpus: A set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, FLAC and MP3.
Annotation Campaign: An object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation tasks' progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign.
Annotation Task: It is contained in an Annotation Campaign, references an audio file from the campaign's designated Audio Corpus, and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel).
Textgrid Checking Scheme: A set of rules defining the TextGrid files' structure and the content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same number of tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations.
Campaign Manager: Users with the rights to create Annotation Campaigns and Annotator user accounts, and to assign Annotation Tasks to Annotators.
Annotator: Users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software.
If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can then re-submit the concerned file to the platform based on this feedback.
Once they have connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology).
The Textgrid Checking Scheme that encompasses rules on the tier's naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme.
Seshat allows the campaign manager to create two types of tasks: single annotator, and double annotator. Regarding the first type, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check is passed. Regarding the second type, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare and review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are given feedback to focus on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts.
Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance alongside their own parser (see the short snippet of code for a parser of French phonetics with the SAMPA alphabet in Algorithm ).
<<</Overview of Seshat>>>
<<<Development>>>
<<<Engineering choices>>>
Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running on the browser, and a back-end, serving data to the front-end and interacting with the database.
<<<Back-end Choices>>>
The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format.
The files and server data are stored on a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to directly store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system.
<<</Back-end Choices>>>
<<<Front-end Choices>>>
The front-end handles all of the interactions between the users (campaign manager or annotator) and the databases. It is implemented as an app within their browser. We decided to base Seshat's front-end on the Angular TypeScript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the app. Angular TypeScript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability.
<<</Front-end Choices>>>
<<</Engineering choices>>>
<<<UX/UI Choices>>>
The interface and the features we selected for our implementation are the result of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and let them focus only on the content of the annotations.
<<</UX/UI Choices>>>
<<</Development>>>
<<<Using Seshat>>>
<<<Installation and Setup>>>
Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation.
Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you are not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface.
<<</Installation and Setup>>>
<<<Launching and monitoring an annotation campaign>>>
The campaign manager can easily define and monitor annotation campaigns. As shown in Figure FIGREF33, the online form enables choosing corpora and pre-defining and pre-configuring the annotation scheme (tiers and parsers). There are two types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created.
Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). Campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively the annotation differences before the merge phase of the double annotator task.
<<</Launching and monitoring an annotation campaign>>>
<<<Scripting API>>>
For those willing to interact with Seshat using code, it is possible to interact with Seshat using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and can be made from any programming language able to make HTTP requests. The CLI interface can be used via your terminal, and therefore can be interacted with using Bash scripts.
A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language.
<<</Scripting API>>>
<<<Annotation Parser Customisation>>>
We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotations checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phonemes strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system.
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires overloading two methods from Seshat's BaseCustomParser class (a short illustrative sketch follows the list):
check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything.
distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations.
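For illustration only, a bare-bones custom parser might look like the sketch below. The import path, the exception type, the label set and the 0/1 distance are all our assumptions; only the BaseCustomParser class and the two overloaded methods (written here as Python identifiers) come from the extension interface described above.

```python
from seshat.parsers import BaseCustomParser  # assumed import path


class SpeechActivityParser(BaseCustomParser):
    """Illustrative parser accepting two labels and using a categorical 0/1 distance."""
    VALID = {"Speech", "Noise"}

    def check_annotation(self, annotation: str) -> None:
        # Raise if and only if the annotation is invalid; return nothing otherwise.
        if annotation not in self.VALID:
            raise ValueError(f"Invalid label '{annotation}', expected one of {self.VALID}")

    def distance(self, annot_a: str, annot_b: str) -> float:
        # Content distance fed into the gamma computation.
        return 0.0 if annot_a == annot_b else 1.0
```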
<<</Annotation Parser Customisation>>>
<<<Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>>
It is necessary to have a measure of confidence in order to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks of audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings, which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures).
First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ measure computes the distance between all the individual units. The difference in position of two annotated units $u$ and $v$ is measured with the positional distance:
If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as:
This distance can be overridden by the custom parser as mentioned above. These two distances are summed with equal weights to obtain the distance between every pair of annotated units from the two annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept.
To get the value of $\gamma $, the disorder is chance-corrected using an expected disorder, which is obtained by randomly re-sampling the annotations of the annotators. This means that real annotations are drawn from the annotators, one position in the audio is randomly chosen, and the annotation is split at this random position with the two parts permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as:
This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The Campaign manager can retrieve these measures in Seshat by downloading a simple CSV file.
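Assuming the standard chance-correction of mathet2015unified (the equation itself is not reproduced in this text), the final step can be sketched as follows; the two disorder functions are placeholders for the alignment search and the resampling procedure described above.

```python
def gamma_agreement(annots_a, annots_b, best_alignment_disorder, expected_disorder):
    """Sketch of the chance-corrected gamma for one tier."""
    observed = best_alignment_disorder(annots_a, annots_b)   # disorder of the best alignment
    expected = expected_disorder(annots_a, annots_b)         # delta_e from random resampling
    return 1.0 - observed / expected                         # assumed form of the chance correction
```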
<<</Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>>
<<</Using Seshat>>>
<<<Use cases>>>
We present two use cases on which Seshat was developped: clinical interviews, and daylong child-centered recordings.
<<<Clinical interviews>>>
Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. The campaign manager created a campaign with multiple tiers to annotate the turn takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the utterance annotations, there is only one pre-defined class ('Utterance').
To this date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34.
Even though there are more categories for Turn-Taking than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the annotation task), the mean $\gamma $ for Turn-Taking ($\gamma = 0.64$) is slightly higher than that for Utterance ($\gamma = 0.61$), and the range of values for Turn-Taking is smaller than for Utterance. Indeed, the speech pathologists reported difficulty in annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us redefine the protocol and be more precise in the given instructions.
<<</Clinical interviews>>>
<<<In-the-wild child-centered recordings>>>
The Seshat software is also currently used to annotate audio files in a study of day-long audio recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 one-minute files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators were: (1) locating the portions of speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS').
These recordings have been done in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates.
<<</In-the-wild child-centered recordings>>>
<<</Use cases>>>
<<<Conclusion and Future work>>>
Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own annotation campaigns. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute inter-rater agreement that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities. In future work, we plan to implement automatic task assignment and the integration of a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate steps of annotation.
<<</Conclusion and Future work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nOverview of Seshat\nDevelopment\nEngineering choices\nBack-end Choices\nFront-end Choices\nUX/UI Choices\nUsing Seshat\nInstallation and Setup\nLaunching and monitoring an annotation campaign\nScripting API\nAnnotation Parser Customisation\nInter-rater agreement: the @!START@$\\gamma $@!END@ measure\nUse cases\nClinical interviews\nIn-the-wild child-centered recordings\nConclusion and Future work"
],
"type": "outline"
}
|
2004.01980
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Hooks in the Headline: Learning to Generate Headlines with Controlled Styles
<<<Abstract>>>
Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references.
<<</Abstract>>>
<<<Introduction>>>
Every good article needs a good title, which should not only condense the core meaning of the text, but also sound appealing to readers for more exposure and memorableness. However, even the best current Headline Generation (HG) systems can only fulfill the former requirement while performing poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”
To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.
SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style.
In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.
The main contributions of our paper are listed below:
To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.
Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.
Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box.
<<</Introduction>>>
<<<Related Work>>>
Our work is related to summarization and text style transfer.
<<<Headline Generation as Summarization>>>
Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.
Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles.
<<</Headline Generation as Summarization>>>
<<<Text Style Transfer>>>
Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem.
<<</Text Style Transfer>>>
<<</Related Work>>>
<<<Methods>>>
<<<Problem Formulation>>>
The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises of sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.
Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$.
<<</Problem Formulation>>>
<<<Seq2Seq Model Architecture>>>
For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG.
<<</Seq2Seq Model Architecture>>>
<<<Multitask Training Scheme>>>
To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10).
<<<Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>>
With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is
where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows:
where $L$ is the sequence length.
<<</Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>>
<<<DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>>
For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$:
where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes
where $\lambda $ is a hyper-parameter.
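Concretely, a plausible form of these objectives is $\mathcal {L}_T(\mathbf {\theta _{E_T}}, \mathbf {\theta _{G_T}}) = -\,\mathbb {E}_{\mathbf {t}\sim T}\big [\log p(\mathbf {t}|\mathbf {\tilde{t}})\big ]$ for the reconstruction error and $\mathcal {L} = \mathcal {L}_S + \lambda \, \mathcal {L}_T$ for the total loss; the exact weighting in the original formulation may differ slightly, but this matches the role of $\lambda $ described here.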
<<</DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>>
<<</Multitask Training Scheme>>>
<<<Parameter-Sharing Scheme>>>
More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below.
<<<Type 1. Style Layer Normalization>>>
Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$:
where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data.
Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers.
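A minimal sketch of such a style layer normalization module is shown below, assuming the normalization takes the usual form $\mathbf {z} = \gamma _s \odot (\mathbf {x} - \mu )/\sigma + \beta _s$; the module and argument names are ours, not the authors'.

import torch
import torch.nn as nn

class StyleLayerNorm(nn.Module):
    # Per-style scale (gamma) and shift (beta); the normalization itself is shared.
    def __init__(self, hidden_size: int, num_styles: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_styles, hidden_size))
        self.beta = nn.Parameter(torch.zeros(num_styles, hidden_size))

    def forward(self, x: torch.Tensor, style: int) -> torch.Tensor:
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True)
        return self.gamma[style] * (x - mu) / (sigma + self.eps) + self.beta[style]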
<<</Type 1. Style Layer Normalization>>>
<<<Type 2. Style-Guided Encoder Attention>>>
Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word conditioned not only on the previous words but also on the encoded input hidden states. The attention patterns should differ between the summarization and the reconstruction tasks due to their different inherent nature. We incorporate this intuition into the model by introducing style-guided encoder attention into the multi-head attention module, which is defined as follows:
where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the projection matrices for the affine transformations in the scaled dot-product attention; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the query projection matrix $\mathbf {W_q^s}$ for different styles, so that $\mathbf {Q}$ can differ and induce diverse attention patterns.
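A plausible formulation consistent with this notation is $\mathbf {Q} = \mathbf {\mathrm {query}}\,\mathbf {W_q^s}$, $\mathbf {K} = \mathbf {\mathrm {key}}\,\mathbf {W_k}$, $\mathbf {V} = \mathbf {\mathrm {value}}\,\mathbf {W_v}$, and $\mathrm {Attention}(\mathbf {Q},\mathbf {K},\mathbf {V}) = \mathrm {softmax}\big (\mathbf {Q}\mathbf {K}^\top / \sqrt{d_{\mathrm {model}}}\big )\,\mathbf {V}$, with the per-head splitting omitted for brevity.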
<<</Type 2. Style-Guided Encoder Attention>>>
<<</Parameter-Sharing Scheme>>>
<<</Methods>>>
<<<Experiments>>>
<<<Datasets>>>
We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence lengths in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively.
<<<Source Dataset>>>
The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.
We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treated the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstract-headline pairs.
We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headline for each article from the dumped news web pages, collecting 90,236 news abstract-headline pairs in total.
<<</Source Dataset>>>
<<<Three Target Style Corpora>>>
<<<Humor and Romance>>>
For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets.
<<</Humor and Romance>>>
<<<Clickbait>>>
We also aim to learn the writing style of click-baity headlines, since they have been shown to be particularly effective at attracting readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset, and collected 500K headlines for our use.
Some examples from each style corpus are listed in Table TABREF32.
<<</Clickbait>>>
<<</Three Target Style Corpora>>>
<<</Datasets>>>
<<<Baselines>>>
We compared the proposed TitleStylist against the following five strong baseline approaches.
<<<Neural Headline Generation (NHG)>>>
We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data.
<<</Neural Headline Generation (NHG)>>>
<<<Gigaword-MASS>>>
We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.
<<</Gigaword-MASS>>>
<<<Neural Story Teller (NST)>>>
This baseline breaks the task down into two steps: it first generates headlines with the aforementioned NHG model, then applies style-shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can be found on the official website.
<<</Neural Story Teller (NST)>>>
<<<Fine-Tuned>>>
We first train the NHG model as mentioned above, then further fine-tune it on the target style corpus via DAE training.
<<</Fine-Tuned>>>
<<<Multitask>>>
We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and train the model on both the summarization and DAE tasks. The model architecture is the same as NHG.
<<</Multitask>>>
<<</Baselines>>>
<<<Evaluation Metrics>>>
To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation.
<<<Setup of Human Evaluation>>>
We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices.
<<</Setup of Human Evaluation>>>
<<<Setup of Automatic Evaluation>>>
Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation results are necessary to complement the human evaluation of model effectiveness.
<<<Summarization Quality>>>
We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit.
<<</Summarization Quality>>>
<<<Language Fluency>>>
We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
<<</Language Fluency>>>
<<</Setup of Automatic Evaluation>>>
<<</Evaluation Metrics>>>
<<<Experimental Details>>>
We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$; the batch size is set to 3072 tokens for each GPU, with the parameter update frequency set to 4. For the random corruption in DAE training, we follow the standard practice of randomly deleting or blanking words with a uniform probability of $0.2$ and randomly shuffling the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set to 0.5 in the experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus; the sampling strategy follows a uniform distribution with probability equal to $\lambda $.
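A sketch of the corruption step described above is given below; whether the 0.2 probability is split between deletion and blanking is our assumption, and the local shuffle uses the common noisy-key trick so that no token moves more than 5 positions.

import random

def corrupt(tokens, noise_prob=0.2, blank_token="<blank>", max_shuffle_dist=5):
    # Randomly delete or blank tokens (assumed 50/50 split of the 0.2 probability).
    noised = []
    for tok in tokens:
        if random.random() < noise_prob:
            if random.random() < 0.5:
                continue                 # delete the token
            noised.append(blank_token)   # or replace it with a blank
        else:
            noised.append(tok)
    # Shuffle word order locally: sorting by index plus bounded noise keeps
    # every token within max_shuffle_dist positions of its original place.
    keys = [i + random.uniform(0, max_shuffle_dist) for i in range(len(noised))]
    return [tok for _, tok in sorted(zip(keys, noised))]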
<<</Experimental Details>>>
<<</Experiments>>>
<<<Results and Discussion>>>
<<<Human Evaluation Results>>>
The human evaluation provides a comprehensive measurement of performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and on the last criterion in Table TABREF57. Note that the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods under automatic evaluation (Section SECREF58), so we excluded them from the human evaluation to save unnecessary work for the raters.
<<<Relevance>>>
We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity.
<<</Relevance>>>
<<<Attraction>>>
In terms of the attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles improves attraction, and that specializing some model parameters for different styles further enhances it. (3) Adapting the model to the “Clickbait” style creates the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. Note that although we induce the “Clickbait” style into our summarization system, we still ensure that the generated headlines are relevant rather than overly exaggerated, which can be verified by our relevance scores.
<<</Attraction>>>
<<<Fluency>>>
The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability.
<<</Fluency>>>
<<<Style Strength>>>
We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57.
<<</Style Strength>>>
<<</Human Evaluation Results>>>
<<<Automatic Evaluation Results>>>
Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as style strength into consideration, but it serves as important complementary proof that the model has an acceptable level of summarization ability.
Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.
From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, even though this model has been trained on a dataset more than 20 times larger. Both the NST and Fine-tuned baselines show very poor summarization performance; the reason could be that both cast the problem into two steps, summarization and style transfer, and the latter step involves no summarization objective, which prevents the model from maintaining its summarization capability.
In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that is entirely different from the news data. However, unsupervised reconstruction training on both types of data contributes to the summarization task, which sheds light on potential future work in summarization that incorporates unsupervised learning as augmentation.
We find in Table TABREF59 that TitleStylist-F achieves the best summarization performance. This implies that, compared with the Multitask baseline where the two tasks share all parameters, specializing the layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization.
It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to the baseline NHG, which is the starting point we want to improve on. The human evaluation further validates that these headlines are faithful to the news article.
We also reported the perplexity (PPL) of the generated headlines to evaluate language fluency, as shown in Table TABREF59. All outputs from the baselines NHG and Multitask and from our proposed TitleStylist show PPL similar to that of the test set used in the fine-tuning stage (42.5), indicating that they are all fluent expressions for news headlines.
<<</Automatic Evaluation Results>>>
<<<Extension to Multi-Style>>>
We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora, making the layer normalization and encoder-attention parameters specific to these four styles (fact, humor, romance, and clickbait) and sharing the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, allowing human annotators to choose both options if they deem them equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive with TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature.
<<</Extension to Multi-Style>>>
<<</Results and Discussion>>>
<<<Conclusion>>>
We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed a parameter-sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nHeadline Generation as Summarization\nText Style Transfer\nMethods\nProblem Formulation\nSeq2Seq Model Architecture\nMultitask Training Scheme\nSupervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@\nDAE Training for @!START@$\\mathbf {\\theta _{E_T}}$@!END@ and @!START@$\\mathbf {\\theta _{G_T}}$@!END@\nParameter-Sharing Scheme\nType 1. Style Layer Normalization\nType 2. Style-Guided Encoder Attention\nExperiments\nDatasets\nSource Dataset\nThree Target Style Corpora\nHumor and Romance\nClickbait\nBaselines\nNeural Headline Generation (NHG)\nGigaword-MASS\nNeural Story Teller (NST)\nFine-Tuned\nMultitask\nEvaluation Metrics\nSetup of Human Evaluation\nSetup of Automatic Evaluation\nSummarization Quality\nLanguage Fluency\nExperimental Details\nResults and Discussion\nHuman Evaluation Results\nRelevance\nAttraction\nFluency\nStyle Strength\nAutomatic Evaluation Results\nExtension to Multi-Style\nConclusion"
],
"type": "outline"
}
|
1911.03597
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Zero-Shot Paraphrase Generation with Multilingual Language Models
<<<Abstract>>>
Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention as the size of high-quality paraphrase corpora is limited. Round-trip translation, also known as the pivoting method, is a typical approach to this end. However, we notice that the pivoting process involves multiple machine translation models and is likely to incur semantic drift during the two-step translations. In this paper, inspired by Transformer-based language models, we propose a simple and unified paraphrasing model, which is purely trained on multilingual parallel data and can conduct zero-shot paraphrase generation in one step. Compared with the pivoting approach, paraphrases generated by our model are more semantically similar to the input sentence. Moreover, since our model shares the same architecture as GPT (Radford et al., 2018), we are able to pre-train the model on a large-scale unparallel corpus, which further improves the fluency of the output sentences. In addition, we introduce the mechanism of denoising auto-encoder (DAE) to improve the diversity and robustness of the model. Experimental results show that our model surpasses the pivoting method in terms of relevance, diversity, fluency and efficiency.
<<</Abstract>>>
<<<Introduction>>>
Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus.
A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as the input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\rightarrow $English) are needed to generate a paraphrase.
Although the pivoting approach works in general, there are several intrinsic defects. First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding.
To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation.
We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems.
<<</Introduction>>>
<<<Methodology>>>
<<<Transformer-based Language Model>>>
Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained by maximizing the likelihood:
where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer the reader to the original paper for details of each component. Formally, the decoding probability is given by
where $x_i$ denotes the token embedding, $p_i$ denotes the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices.
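One plausible formulation consistent with this notation is $\mathcal {L}(\theta ) = \sum _{i=1}^{n} \log P(x_i|x_{<i}; \theta )$ for the likelihood and $P(x_{i+1}|x_{\le i}) = \mathrm {softmax}(W_o h_i)$ with $h = \mathrm {Transformer}(W_e x + p)$ for the decoding probability; the exact parameterization in the original paper may differ.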
Although TLM is normally employed to model monolingual sequences, there is no barrier to using TLM to model sequences in multiple languages. In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages; the training objective becomes
This bilingual language model can be regarded as a decoder-only counterpart of the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such an architecture include fewer model parameters, easier optimization and potentially better performance on longer sequences. Furthermore, it naturally integrates with language model pre-training on monolingual corpora.
For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $".
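A minimal helper that produces training sequences in this format could look as follows; the function name and the exact token strings are illustrative, not taken from the paper's code.

def build_example(src_tokens, tgt_tokens, src_lang="en", tgt_lang="fr"):
    # e.g. <bos> <en> cat sat on the mat <delim> <fr> chat assis sur le tapis <eos>
    return (["<bos>", "<" + src_lang + ">"] + src_tokens
            + ["<delim>", "<" + tgt_lang + ">"] + tgt_tokens + ["<eos>"])

print(" ".join(build_example("cat sat on the mat".split(),
                             "chat assis sur le tapis".split())))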
At inference stage, the model predicts the next word as the conventional auto-regressive model:
<<</Transformer-based Language Model>>>
<<<Zero-shot Paraphrase Generation>>>
We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence.
Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted with teacher forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier to “$\langle $en$\rangle $", in order to guide the model to continue generating English words. At the same time, since the model has been trained on translation corpora, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate a paraphrase of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $".
It should be noted that our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting.
In the preliminary experiments on zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate words in the correct language as indicated by the language identifier. A similar phenomenon has been observed in the research on zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred to as the degeneracy problem by BIBREF13. To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows.
<<<Language Embeddings>>>
The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token:
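A plausible form consistent with this description (concatenating the output state with the language embedding before the softmax) is $P(x_i|x_{<i}) = \mathrm {softmax}\big (W_o\,[h_{i-1}; a_i]\big )$, where $[\cdot ;\cdot ]$ denotes concatenation; the exact parameterization in the original paper may differ.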
We empirically demonstrate that the language embedding added to each token can effectively guide the model to generate sentences in the required language. Note that we still let the model learn the output distribution for each language rather than simply restricting the vocabulary of the output space. This offers flexibility to handle code-switching cases commonly seen in real-world data, e.g., English words can also appear in French sentences.
<<</Language Embeddings>>>
<<<Pre-Training on Monolingual Corpora>>>
Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful to the low/zero-resource tasks since the knowledge learned from large-scale monolingual corpus can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data.
Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end of sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4).
In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora.
<<</Pre-Training on Monolingual Corpora>>>
<<<Denoising Auto-Encoder>>>
We adopt the idea of denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE is originally proposed to learn intermediate representations that are robust to partial corruption of the inputs in training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted as $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In the applications of text generation BIBREF18 and machine translation BIBREF19, DAE has shown to be able to learn representations that are more robust to input noises and also generalize to unseen examples.
Inspired by BIBREF19, we directly inject three different types of noises into input sentence that are commonly encountered in real applications.
1) Deletion: We randomly delete 1% tokens from source sentences, for example, “cat sat on the mat $\mapsto $ cat on the mat."
2) Insertion: We insert a random token into source sentences in 1% random positions, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat."
3) Reordering: We randomly swap 1% tokens in source sentences, and keep the distance between tokens being swapped within 5. “cat sat on the mat $\mapsto $ mat sat on the cat."
By introducing such noises into the input sentences while keeping the target sentences clean during training, our model can be more stable in generating paraphrases and generalize better to sentences unseen in the training corpus. The training objective with DAE becomes
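A plausible form of this objective is $\mathcal {L}_{\mathrm {DAE}}(\theta ) = -\,\mathbb {E}_{(X,Y)}\,\mathbb {E}_{\tilde{X}\sim q(\tilde{X}|X)}\big [\log P(Y|\tilde{X}; \theta )\big ]$, i.e., the model is trained to generate the clean target $Y$ from the corrupted source $\tilde{X}$; this is a reconstruction from the surrounding text rather than the paper's exact formula.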
Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$.
<<</Denoising Auto-Encoder>>>
<<</Zero-shot Paraphrase Generation>>>
<<</Methodology>>>
<<<Experiments>>>
<<<Datasets>>>
We adopt the mixture of two multilingual translation corpora as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10,000 sentences respectively from each language pair. The remaining data are used for training. For monolingual pre-training, we use the English Wikipedia corpus, which contains 2,500M words.
<<</Datasets>>>
<<<Experimental Settings>>>
We implement our model in Tensorflow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model consists of 12 Transformer blocks. The dimensions of the token embeddings, position embeddings and Transformer hidden states are 768, while that of the states in the position-wise feed-forward networks is 3072. The number of attention heads is 12. Models are trained using Adam optimization BIBREF24 with a learning rate up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. For inference, we use a top-k truncated random sampling strategy that only samples from the k candidate words with the highest probabilities.
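A minimal, framework-agnostic sketch of top-k truncated random sampling is given below; the temperature argument is our addition and may not match the authors' implementation.

import math, random

def top_k_sample(logits, k, temperature=1.0):
    # logits: list of floats over the vocabulary.
    scaled = [(l / temperature, i) for i, l in enumerate(logits)]
    top = sorted(scaled, reverse=True)[:k]          # keep the k best candidates
    m = max(v for v, _ in top)
    weights = [math.exp(v - m) for v, _ in top]     # softmax over the truncated set
    ids = [i for _, i in top]
    return random.choices(ids, weights=weights, k=1)[0]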
Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow $Chinese, while the multilingual models are trained with all the data between the four languages. The round-trip translation baseline is based on the Transformer-based neural translation model.
<<</Experimental Settings>>>
<<<Automatic Evaluation>>>
We evaluate the relevance between input and generated paraphrase as well as the diversity among multiple generated paraphrases from the same input. For relevance, we use the cosine similarity between the sentential representations BIBREF25. Specifically, we use the Glove-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as: $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation.
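As an illustration of the diversity metrics, a minimal Distinct-2 implementation could look as follows; inverse Self-BLEU would additionally require a BLEU scorer, which is omitted here.

def distinct_2(tokenized_outputs):
    # Ratio of unique bigrams to total bigrams across all generated outputs.
    bigrams = [tuple(toks[i:i + 2])
               for toks in tokenized_outputs
               for i in range(len(toks) - 1)]
    return len(set(bigrams)) / max(len(bigrams), 1)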
For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data-point is obtained at a specific sampling temperature. Since a good paraphrasing model should generate both relevant and diverse paraphrases, the model with curve lying towards the up-right corner is regarded as with good performance.
<<<Comparison with Baseline>>>
First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), both the bilingual and the multilingual models are better than the baseline in terms of relevance and diversity in most cases. In other words, at the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrases that are more semantically similar to the input sentence.
Note that in Figure FIGREF15 (a), there is a cross point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We particularly investigate generated paraphrases around this point and find that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). It means our bilingual model is semantically drifting faster than the baseline model as the Distinct-2 diversity increases. The round-trip translation performs two rounds of supervised translation, while the zero-shot paraphrasing performs a single round of unsupervised `translation' (paraphrasing). We suspect that unsupervised paraphrasing can be more sensitive to the decoding strategy. It also implies that the latent, language-agnostic representation may not be well learned in our bilingual model. Our multilingual model, on the other hand, alleviates this insufficiency. We further verify and analyze it as follows.
<<</Comparison with Baseline>>>
<<<Multilingual Models>>>
As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural remedy is to introduce multilingual corpora, which consist of various translation directions. Training over multilingual corpora forces the model to decouple the language type and the semantic representation.
Empirical results show that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrate a great improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in the setting with reasonable relevance scores.
<<</Multilingual Models>>>
<<<Monolingual Pre-Training>>>
As shown in Figure FIGREF15 (a)(b), the model with language model pre-training performs almost equally to its counterpart without pre-training. However, evaluation of fluency uncovers the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using an n-gram language model trained on 14k public domain books.
As depicted in Table TABREF25, models with language model pre-training stably achieve greater log-probabilities than the model without pre-training. In other words, language model pre-training brings better fluency.
<<</Monolingual Pre-Training>>>
<<</Automatic Evaluation>>>
<<<Human Evaluation>>>
200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate models in semantic relevance and fluency. A test example consists of one input sentence, one generated sentence from baseline model and one generated sentence from our model. We randomly permute a pair of generated sentences to reduce annotators' bias on a certain model. Each example is evaluated by two annotators.
As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators.
Both round-trip translation and our method perform well in terms of fluency. However, the large relevance gap between the two systems drew our attention. We investigated the test set in detail and found that the round-trip approach indeed generates more noise, as shown in the case studies.
<<</Human Evaluation>>>
<<<Case Studies>>>
We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using random sampling. For both the baseline and the multilingual model, we tune their sampling temperatures to keep Distinct-2 and the inverse Self-BLEU at 0.31 and 0.47, respectively.
In the case studies, we find that our method usually generates sentences with better relevance to the source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a good feature: it keeps the meaning and even the proper noun $guide$ unchanged while modifying the source sentence by both changing and reordering words. This feature may be introduced by the DAE perturbation strategies, which improve the model's robustness and diversity simultaneously. These results provide evidence that our method outperforms the baseline in both relevance and diversity.
<<</Case Studies>>>
<<</Experiments>>>
<<<Related Work>>>
Generating paraphrases based on deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models tried to improve generation quality and diversity with high-quality paraphrase corpora. BIBREF2 started the deep learning line of paraphrase generation by introducing a stacked residual LSTM network. A word constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopt a variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level generation.
Several works tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploits a Markov network model to extract paraphrase tables from a monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpora by clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such as round-trip translation BIBREF7 and back-translation BIBREF32 have been explored.
However, to the best of our knowledge, none of these paraphrase generation models has been trained directly from parallel translation corpora as a single-round end-to-end model.
<<</Related Work>>>
<<<Conclusions>>>
In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage huge amounts of off-the-shelf translation corpora. Moreover, we improve the generation fluency of our model with language model pre-training. Empirical results from both automatic and human evaluation demonstrate that our model surpasses the conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there are some interesting directions to be explored, for instance, how to obtain a better latent semantic representation with multi-modal data and how to further improve generation diversity without sacrificing relevance. We plan to tackle these challenging yet valuable problems in the future.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMethodology\nTransformer-based Language Model\nZero-shot Paraphrase Generation\nLanguage Embeddings\nPre-Training on Monolingual Corpora\nDenoising Auto-Encoder\nExperiments\nDatasets\nExperimental Settings\nAutomatic Evaluation\nComparison with Baseline\nMultilingual Models\nMonolingual Pre-Training\nHuman Evaluation\nCase Studies\nRelated Work\nConclusions"
],
"type": "outline"
}
|
2003.08132
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Gender Representation in Open Source Speech Resources
<<<Abstract>>>
With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora.
<<</Abstract>>>
<<<Introduction>>>
The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X to an output Y, many paired examples are needed to learn this mapping process. This need for data, coupled with the improvement in computing power and algorithm efficiency, has led to the era of big data. But data is not only needed in large quantities; it also needs to reach a certain level of quality. In this paper we argue that one of the main qualities of data is its transparency.
In recent years, concerns have been raised about the biases existing in the systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics which showed that data are socially constructed and hence encapsulate a handful of social representations and power structures, such as gender stereotypes. Gender-bias has also been found in machine translation tasks BIBREF0, as well as facial recognition BIBREF1 and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system for female speakers BIBREF2.
In this paper, we survey the gender representation within an open platform gathering speech and language resources to develop speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly we propose a reflection about general practices when releasing resources, basing ourselves on some recommendations from previous work.
Contributions. The contributions of our work are the following:
an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that it is not the case for non-elicited speech, nor for the speech time allocated to each gender category
an assessment of the global lack of meta-data within free open source corpora, alongside recommendations and guidelines for resources descriptions, based on previous work
<<</Introduction>>>
<<<OpenSLR>>>
Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions but also of software as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time distributed as a paying service. Therefore it is hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR but also text-to-speech (TTS) systems). In our work, we focus on speech data only.
Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the last version (e.g. TED LIUM BIBREF3) and we also removed subsets of bigger corpora (e.g. LibriTTS corpus BIBREF4). We make the distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accents/dialects of the same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, we define a corpus as monolingual and monodialectal, so resources containing different dialects or languages will be considered as containing different corpora.
We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phone calls, audiobooks, etc.), which is not surprising, given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice, as it does not expect a defined format nor has explicit requirements about data structures, hence attesting to what metadata resource creators consider important to share when releasing resources for free on the Web.
<<</OpenSLR>>>
<<<Methodology>>>
In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study.
Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus.
<<<Speaker Information and Lack of Meta-Data>>>
The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resource creators. So, on top of the further-mentioned general corpus characteristics (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not.
The provided attribute corresponds to whether gender information was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along with the resource; metadata, if a metadata file was included; indexed, if the gender was explicitly indexed within the data or if the data was structured in terms of gender; and manually, if the gender information is the result of manual research by ourselves, trying to either find a paper describing the resources, or relying on regularities that seem like speaker IDs and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size.
<<</Speaker Information and Lack of Meta-Data>>>
<<<Speech Time Information and Data Consistency>>>
The second difficulty regards the fact that speech time information is not standardised, making it impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours (e.g. panayotov2015librispeech,hernandez2018ted), some the number of utterances (e.g. BIBREF5) or sentences (e.g. googleuken2019), these terms never being clearly defined. We gathered all the information available, meaning that our final table contains some empty cells, and we found no consistency between speech duration and number of utterances, ruling out the possibility of approximating one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resource size. This method however has drawbacks, as not all corpora used the same file format, nor the same sampling rate. The sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above.
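The qualitative size label can be computed directly from the size in gigabytes, as in the small helper below; the boundary handling at exactly 5GB and 50GB is our assumption.

def size_category(size_gb: float) -> str:
    # small < 5GB, medium 5-50GB, large > 50GB
    if size_gb < 5:
        return "small"
    if size_gb <= 50:
        return "medium"
    return "large"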
<<</Speech Time Information and Data Consistency>>>
<<<Corpora Characteristics>>>
The final result consists of a table reporting all the characteristics of the corpora. The chosen features are the following (a schema sketch is given after the list):
the resource identifier (id) as defined on OpenSLR
the language (lang)
the dialect or accent if specified (dial)
the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f)
the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt_m, #utt_f)
the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f)
the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between “big", “medium", “small")
the sampling rate (sampling)
the speech task targeted for the resource (task)
is it elicited speech or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g TedTalks, audiobooks, etc.), other speech data are considered as elicited
the language status (lang_status): a language is considered either as high- or low-resourced. The language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?). It is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided).
the year of the release (year)
the authors of the resource (producer)
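As referenced above, a hypothetical sketch of one row of the final table, using these feature names with placeholder values rather than real data, is:

corpus_row = {
    "id": "SLR00",                     # placeholder OpenSLR identifier
    "lang": "English", "dial": None,
    "#spk": None, "#spk_m": None, "#spk_f": None,
    "#utt": None, "#utt_m": None, "#utt_f": None,
    "dur": None, "dur_m": None, "dur_f": None,
    "sizeGB": None, "size": "small",   # small/medium/large
    "sampling": 16000,                 # placeholder sampling rate
    "task": "ASR",
    "elicited": True,
    "lang_status": "high",
    "year": 2019, "producer": "unknown",
}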
<<</Corpora Characteristics>>>
<<</Methodology>>>
<<<Analysis>>>
<<<Gender Information Availability>>>
Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speaker's gender information was provided and if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. At last, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata" or “indexed" modalities.
When gender information was given, it was most of the time in terms of the number of speakers in each gender category, as only five corpora provide speech time per category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but relied on utterance counts when available. It is worth noticing, however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution.
Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora.
<<</Gender Information Availability>>>
<<<Gender Distribution Among Speakers>>>
<<<Elicited vs Non-Elicited Data>>>
Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved. We then look at whether the data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation, such as TEDTalks, interviews, radio broadcasts and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the resource of Spanish TEDTalks, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table .
In both cases (elicited and non-elicited speech, respectively), the gender difference is relatively small (5.6 and 5.8 percentage points, respectively), far from the 30-percentage-point difference observed in BIBREF2. A possible explanation is that, elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female-to-male ratio, consistent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in the smaller corpora.
What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such a gender gap is prevented by careful control during the data set creation process.
<<</Elicited vs Non-Elicited Data>>>
<<<High-resource vs Low-resource Languages>>>
Among the elicited corpora made available on OpenSLR, some are of low-resource languages and others of high-resource languages (mostly regional variations of high-resource languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we can notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora.
<<</High-resource vs Low-resource Languages>>>
<<<“How Can I Help?": Spoken Language Tasks>>>
Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that while gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report recommending gender-equal digital education, which states that nowadays most vocal assistants are given female voices, raising educational and societal problems BIBREF10. This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant." TTS systems often being used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users. This claim can however be nuanced by nass2005wired, who showed that other factors might be worth taking into account to design gendered voices, such as social identification and cultural gender stereotypes.
<<</“How Can I Help?": Spoken Language Tasks>>>
<<</Gender Distribution Among Speakers>>>
<<<Speech Time and Gender>>>
Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of number of speakers, male speech is more represented at the utterance level. But this disparity is only the effect of three corpora, containing 51,463 and 26,567 utterances korvas2014 and 8,376 utterances mena2019 for male speakers, while the mean number of utterances per corpus is 1,942 for male speakers and 1,983 for female speakers. Removing these three outliers, we observe that utterance counts are balanced between gender categories.
It is worth noticing that the high number of utterances in these outliers is surprising, considering that the three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem of the notion of utterance, which is never explicitly defined. Such differences in granularity thus prevent comparison between corpora.
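This comparison of utterance counts with and without the outlier corpora can be reproduced with a few lines of pandas, assuming the gathered counts are stored in a DataFrame. The column names and the outlier handling are assumptions for illustration, not the authors' code.

```python
import pandas as pd

def utterance_means(df: pd.DataFrame, outliers: list) -> pd.DataFrame:
    """Mean utterance counts per gender category, with and without known outlier corpora.
    Expects columns 'corpus', 'utt_m' and 'utt_f' (names are assumptions)."""
    all_mean = df[["utt_m", "utt_f"]].mean().rename("all corpora")
    trimmed = df.loc[~df["corpus"].isin(outliers), ["utt_m", "utt_f"]].mean().rename("without outliers")
    return pd.concat([all_mean, trimmed], axis=1)
```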
<<</Speech Time and Gender>>>
<<<Evolution over Time>>>
When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, with for example the work of bender2018data for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability in the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017.
<<</Evolution over Time>>>
<<</Analysis>>>
<<<Recommendations>>>
The social impact of big data and the ethical problems raised by NLP systems have already been discussed in previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics: Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and Reusability of data are however not yet achieved. Another attempt to integrate this discussion about data description within the NLP community has been made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter to help resource creators describe data from a legal and ethical point of view. hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency.
The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge demographic bias that may exist within our corpora. As pointed out by hovy2016social, language is always situated, and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to the exclusion of certain groups from the use of a technology, due to the fact that the technology fails to take them into account during its development process. This directly relates to our work on ASR performance on women's speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statements as a professional and research practice. We hope the present study will encourage researchers and resource creators to describe their data sets exhaustively, following the guidelines proposed by these authors.
<<<On the Importance of Meta-Data>>>
The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of meta-data is a problem in itself, as it prevents guaranteeing the generalisability of systems or linguistic findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one, as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems.
We observed that most of the available speech resources contain elicited speech and that, on average, researchers are careful to balance the speakers in terms of gender when crafting data. But this cannot be said of corpora containing non-elicited speech: apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevents us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category, and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TEDLIUM, as it is paramount for speaker adaptation.
bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. One piece of information we can add to their recommendations is the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. This would allow a quick check of the gender balance in terms of the quantity of data available for each category, without relying on the unreliable notion of utterance. This descriptive work is important for future corpora, but should also be carried out for the data sets already released, as they are likely to be used again by the community.
<<</On the Importance of Meta-Data>>>
<<<Transparency in Evaluation>>>
Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. While such an evaluation allows for an easy comparison of systems, it fails to account for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the paper reported ASR results, none of them reported gendered evaluation, even if gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step in this direction, but for an open and fair science, the next step should be to also take such information into account in the evaluation process. A recent work in this direction has been made by mitchell2019model, who proposed to describe model performance in model cards, thus encouraging a transparent report of model results.
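As a sketch of what such a gendered evaluation could look like, the function below aggregates per-utterance error and word counts by speaker category before computing WER. The data layout is an assumption; the error counts themselves could come from any standard WER implementation.

```python
from collections import defaultdict

def wer_by_category(utterances):
    """utterances: iterable of (n_errors, n_words, category) tuples,
    where category is e.g. the speaker's gender group.
    Returns WER for each category alongside the global WER."""
    errors, words = defaultdict(int), defaultdict(int)
    for n_err, n_words, cat in utterances:
        errors["all"] += n_err
        words["all"] += n_words
        errors[cat] += n_err
        words[cat] += n_words
    return {cat: errors[cat] / words[cat] for cat in errors if words[cat] > 0}

# Reporting wer_by_category(...) alongside the global WER makes performance gaps visible.
```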
<<</Transparency in Evaluation>>>
<<</Recommendations>>>
<<<Conclusion>>>
In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved, but interactions with other corpus characteristics reveal that gender misrepresentation needs more than just a number of speakers to be identified. In non-elicited data (meaning speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcasts), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voices with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender meta-data in the last two years. As our sample contains only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources; however, they allow us to open a discussion about general corpus description practices, pointing out a lack of meta-data, and to bring the discourse around the social implications of NLP systems up to date. We advocate for a more open science and technology by following guidelines such as the FAIR Data Principles or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\n\nIntroduction\nOpenSLR\nMethodology\nSpeaker Information and Lack of Meta-Data\nSpeech Time Information and Data Consistency\nCorpora Characteristics\nAnalysis\nGender Information Availability\nGender Distribution Among Speakers\nElicited vs Non-Elicited Data\nHigh-resource vs Low-resource Languages\n“How Can I Help?\": Spoken Language Tasks\nSpeech Time and Gender\nEvolution over Time\nRecommendations\nOn the Importance of Meta-Data\nTransparency in Evaluation\nConclusion"
],
"type": "outline"
}
|
2001.02380
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Neural Approach to Discourse Relation Signal Detection
<<<Abstract>>>
Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized words embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification.
<<</Abstract>>>
<<<Introduction>>>
The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3).
. [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.]
. [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.]
. [not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$
The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12.
At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15).
In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time.
Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals.
In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'.
In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research.
<<</Introduction>>>
<<<Previous Work>>>
<<<Data-driven Approaches>>>
A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank.
This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations.
Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance.
Finally we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable.
<<</Data-driven Approaches>>>
<<<Discourse Relation Signal Annotations>>>
Discourse relation signals are broadly classified into two categories: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that several relations, such as preparation and background, are signaled but unanchored: these are high-level discourse relations that capture and correspond to genre features, such as the question-answer layout of interviews, and are thus rarely anchored to tokens.
The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions.
Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used.
Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next.
<<</Discourse Relation Signal Annotations>>>
<<</Previous Work>>>
<<<Data>>>
<<<Anchored Signals in the GUM Corpus>>>
In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data.
The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type.
The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels:
signal class, denoting the signal's degree of complexity
signal type, indicating the linguistic system to which it belongs
specific signal, which gives the most fine-grained subtypes of signals within each type
It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels.
The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below.
In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words.
In order to get a better sense of how the annotations work, we consider example SECREF7.
. [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination]
In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations.
In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper.
<<</Anchored Signals in the GUM Corpus>>>
<<<A Taxonomy of Anchored Signals>>>
From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features.
At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn:
Whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token
Whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit
Whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure
Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain.
The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify.
<<</A Taxonomy of Anchored Signals>>>
<<</Data>>>
<<<Automatic Signal Extraction>>>
<<<A Contextless Frequentist Approach>>>
To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus.
More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation.
If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right.
Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be.
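For illustration, the ratio-based ranking with a frequency cut-off described above could be computed as follows. The EDU representation and the threshold of 10 follow the description in the text; everything else is an illustrative sketch rather than the authors' code.

```python
from collections import Counter

def distinctive_types(edus, relation, min_freq=10, top_n=10):
    """edus: iterable of (relation_label, tokens) pairs for head EDUs.
    Ranks lexical types by their ratio of occurrence inside `relation`
    relative to their overall corpus frequency, ignoring rare types."""
    in_rel, overall = Counter(), Counter()
    for rel, tokens in edus:
        overall.update(tokens)
        if rel == relation:
            in_rel.update(tokens)
    # setting min_freq=0 reproduces the unfiltered (overfitting-prone) ranking
    scored = {w: in_rel[w] / overall[w] for w in in_rel if overall[w] > min_freq}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```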
Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation:
. [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$
. [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$
These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph.
<<</A Contextless Frequentist Approach>>>
<<<A Contextualized Neural Model>>>
<<<Task and Model Architecture>>>
Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals.
Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below.
As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30.
Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training.
Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1,..,x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation:
where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = {W,b}$ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation.
In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29.
. $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not . $<$n$>$
Label: concession
In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels.
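The setup can be approximated with a schematic PyTorch sketch: the pretrained GloVe, FLAIR and character embeddings are collapsed into a single embedding lookup, and all hyperparameters are placeholders, so this is a simplified illustration of the encoder/classifier rather than the FLAIR configuration actually used.

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    """Bi-LSTM over an EDU pair encoded as '<s> ... <sep> ... <n>' (or nucleus first),
    followed by a softmax over relation labels."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_relations):
        super().__init__()
        # stand-in for the concatenated GloVe + FLAIR + character embeddings
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_relations)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)
        _, (h_n, _) = self.bilstm(emb)                 # final hidden states of both directions
        pooled = torch.cat([h_n[0], h_n[1]], dim=-1)   # (batch, 2 * hidden_dim)
        # softmax probabilities over relations (for training one would return logits instead)
        return torch.softmax(self.out(pooled), dim=-1)
```

Under this sketch, the example above would be mapped to the token indices of `<s> Sometimes this information is available , <sep> but usually not . <n>`, and the output row gives the probability assigned to each relation label, including concession.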
<<</Task and Model Architecture>>>
<<<Relation Classification Performance>>>
Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses).
Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM.
Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further.
<<</Relation Classification Performance>>>
<<<Signaling Metric>>>
The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem:
. To provide information on the analytical sample as a whole , $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included . (top-scoring word: provide; To is only faintly shaded)
Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow.
Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal.
To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36.
. Original: <s>  To    provide  information  ...  <sep>  ...  <n>
  Masked1:  <s>  <X>   provide  information  ...  <sep>  ...  <n>
  Masked2:  <s>  To    <X>      information  ...  <sep>  ...  <n>
  Masked3:  <s>  To    provide  <X>          ...  <sep>  ...  <n>
  Label: purpose
We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as:

${\Delta }_s(t_i) = p(rel \mid X_{mask=\phi }) - p(rel \mid X_{mask=i})$
where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set).
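As a concrete illustration, a minimal sketch of how ${\Delta }_s$ could be computed for one EDU pair is given below; the `predict_proba` function standing in for the trained classifier (returning a softmax distribution over relation labels for a token sequence) is an assumption of ours, not the authors' code, while the `<s>`, `<sep>`, `<n>` and `<X>` symbols follow the masking scheme shown above.

```python
from typing import Callable, Dict, List

SEPARATORS = {"<s>", "<sep>", "<n>"}
MASK = "<X>"  # mask symbol, as in the masking illustration above

def delta_softmax(tokens: List[str],
                  rel: str,
                  predict_proba: Callable[[List[str]], Dict[str, float]]) -> List[float]:
    """Return one Delta_s value per token: the drop in softmax probability of the
    true relation `rel` when that token is replaced by the mask symbol."""
    base = predict_proba(tokens)[rel]                      # p(rel | X_mask=phi)
    scores = []
    for i, tok in enumerate(tokens):
        if tok in SEPARATORS:                              # separators are never masked
            scores.append(0.0)
            continue
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        scores.append(base - predict_proba(masked)[rel])   # p(rel|phi) - p(rel|mask=i)
    return scores
```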
To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines).
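Before turning to the examples, the 50/50 shading scheme just described can be captured by a small helper function; this is a sketch of our own, since the rendering code itself is not given in the paper.

```python
def shade(delta_s: float, pair_max: float, doc_max: float) -> float:
    """Blend a token's importance relative to its EDU pair with its importance
    relative to the whole document; 0 = no shading, 1 = darkest shading."""
    rel_pair = delta_s / pair_max if pair_max > 0 else 0.0
    rel_doc = delta_s / doc_max if doc_max > 0 else 0.0
    return 0.5 * max(rel_pair, 0.0) + 0.5 * max(rel_doc, 0.0)
```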
. To provide information on the analytical sample as a whole , $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included . (most strongly shaded: To, then provide)
. Telling good jokes is an art that comes naturally to some people , $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ but for others it takes practice and hard work . (most strongly shaded: but, with hard also shaded)
. It is possible that these two children understood the task and really did believe that the puppet did not produce any poor descriptions , and in this regard , are not yet adult-like in their SI interpretations . $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ This is unlikely (most strongly shaded: unlikely)
The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall.
In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6).
Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next.
<<</Signaling Metric>>>
<<</A Contextualized Neural Model>>>
<<</Automatic Signal Extraction>>>
<<<Evaluation and Error Analysis>>>
<<<Evaluation Metric>>>
To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength).
The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals.
The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible.
For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines.
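A sketch of how recall@k can be computed from the model's output, assuming that for each EDU pair we have the token positions ranked by descending ${\Delta }_s$ and the set of gold signal token positions (the data layout is our assumption):

```python
def recall_at_k(ranked_positions, gold_positions, k):
    """ranked_positions: list of per-pair token indices sorted by descending Delta_s.
    gold_positions: list of per-pair sets of human-annotated signal token indices.
    Returns the fraction of pairs for which a gold token appears in the top k."""
    pairs = [(r, g) for r, g in zip(ranked_positions, gold_positions) if g]
    hits = sum(1 for ranked, gold in pairs if set(ranked[:k]) & set(gold))
    return hits / len(pairs) if pairs else 0.0

# recall@1, 2, 3 as in Figure FIGREF40:
# scores = [recall_at_k(ranked, gold, k) for k in (1, 2, 3)]
```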
The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16.
Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline.
A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline.
Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses.
<<</Evaluation Metric>>>
<<<Qualitative Analysis>>>
Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose.
. For the present analysis , these responses were recoded into nine mutually exclusive categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ capturing the following options : (most strongly shaded: capturing, with the colon also shaded)
. Professor Eastman said he is alarmed by what they found . $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ " Pregnant women in Australia are getting about half as much as what they require on a daily basis . (most strongly shaded: alarmed, with are and daily also shaded)
. Even so , estimates of the prevalence of perceived discrimination remains rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ At least one prior study by Kessler and colleagues [ 15 ] , however , using measures of perceived discrimination in a large American sample , reported that approximately 33 % of respondents reported some form of discrimination (most strongly shaded: At least, with however and form also shaded)
Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high-scoring words. However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators.
. The agreement was that Gorbachev agreed to a quite remarkable concession : $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ he agreed to let a united Germany join the NATO military alliance . (most strongly shaded: he and the second agreed; remarkable is unshaded)
. The opening of the joke — or setup — should have a basis in the real world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ so your audience can relate to it , (most strongly shaded: so, then your)
In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:
. Which previous Virginia Governor(s) do you most admire and why ? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ Thomas Jefferson . (most strongly shaded: the question mark)
From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence-final why, by contrast, were noticed by annotators but are not as unambiguous (the former could be a determiner, and the latter in sentence-final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues.
Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined).
. How do they treat those not like themselves ? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ then they 're either over-zealous , ignorant of other people or what to avoid those that contradict their fantasy land that caters to them and them only . (most strongly shaded: then and the question mark; distractors: How, do, only)
. God , I do n't know ! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ but nobody will go to fight for noses any more . (most strongly shaded: the exclamation mark; distractor: but)
In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast.
In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another.
<<</Qualitative Analysis>>>
<<<Performance on Signal Types>>>
To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type.
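The per-type scores described here could be computed roughly as follows; this is our reconstruction of the procedure (field names are assumptions), not the authors' evaluation script.

```python
from collections import defaultdict

def per_type_recognition(examples, tolerance=2):
    """examples: list of dicts with
         'ranked':       token indices sorted by descending Delta_s
         'gold_by_type': {signal_subtype: set of gold signal token indices}
    The system gets as many guesses as there are gold signal tokens, plus `tolerance`."""
    found, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        gold_tokens = set().union(*ex["gold_by_type"].values()) if ex["gold_by_type"] else set()
        guesses = set(ex["ranked"][:len(gold_tokens) + tolerance])
        for subtype, gold in ex["gold_by_type"].items():
            for idx in gold:                 # one token may count toward several subtypes
                total[subtype] += 1
                found[subtype] += int(idx in guesses)
    return {t: found[t] / total[t] for t in total}
```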
Three of the top four categories for which the model performs best are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words are actually noticed, both of which belong to the same stem (decline/declining):
. The report says the decline in iodine intake appears to be due to changes in the dairy industry , where chlorine-containing sanitisers have replaced iodine-containing sanitisers . $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ Iodine released from these chemicals into milk has been the major source of dietary iodine in Australia for at least four decades , but is now declining . (most strongly shaded: declining, with but and decline also shaded)
We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data).
Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them.
Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44.
. On a new website , " The Internet Explorer 6 Countdown " , Microsoft has launched an aggressive campaign to persuade users to stop using IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ Its goal is to decrease IE6 users to less than one percent . (most strongly shaded: Its, with percent, IE6 and is also shaded)
Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast).
Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date.
. NASA celebrates 30th anniversary of first shuttle launch ; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ Wednesday , April 13 , 2011 (most strongly shaded: Wednesday, then April and 13; the commas and 2011 are only faintly shaded)
<<</Performance on Signal Types>>>
<<</Evaluation and Error Analysis>>>
<<<Discussion>>>
This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work.
The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types.
Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results.
To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their boxplots indicates that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions.
For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to their overall high string frequency and low specificity.
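Boxplots of this kind can be reproduced from a table of per-instance scores; the sketch below assumes the scores have been exported to a CSV with `token` and `delta_s` columns (our layout, not an artifact of the paper).

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("delta_s_scores.csv")            # one row per token instance
tokens = ["and", "but", "if", "If", "to"]
data = [df.loc[df["token"] == t, "delta_s"] for t in tokens]

plt.boxplot(data, labels=tokens)
plt.axhline(0.0, linestyle="--", linewidth=0.5)   # boundary between signals and distractors
plt.ylabel("Delta_s")
plt.show()
```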
Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests?
Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25).
Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals.
In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset.
<<</Discussion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nPrevious Work\nData-driven Approaches\nDiscourse Relation Signal Annotations\nData\nAnchored Signals in the GUM Corpus\nA Taxonomy of Anchored Signals\nAutomatic Signal Extraction\nA Contextless Frequentist Approach\nA Contextualized Neural Model\nTask and Model Architecture\nRelation Classification Performance\nSignaling Metric\nEvaluation and Error Analysis\nEvaluation Metric\nQualitative Analysis\nPerformance on Signal Types\nDiscussion"
],
"type": "outline"
}
|
2002.00317
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Citation Text Generation
<<<Abstract>>>
We introduce the task of citation text generation: given a pair of scientific documents, explain their relationship in natural language text in the manner of a citation from one text to the other. This task encourages systems to learn rich relationships between scientific texts and to express them concretely in natural language. Models for citation text generation will require robust document understanding including the capacity to quickly adapt to new vocabulary and to reason about document content. We believe this challenging direction of research will benefit high-impact applications such as automatic literature review or scientific writing assistance systems. In this paper we establish the task of citation text generation with a standard evaluation corpus and explore several baseline models.
<<</Abstract>>>
<<<Introduction>>>
The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time?
Several lines of research seek to do so. Citation recommendation systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research.
We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices.
Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works.
In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research.
<<</Introduction>>>
<<<Task>>>
Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document.
If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models.
An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper.
This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let

$P(t \mid S^{\prime }, C; \theta )$
be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$.
The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we provide an expert error analysis in Section SECREF14.
For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold out 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4.
<<</Task>>>
<<<Models>>>
We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data.
<<<Neural Text Generation>>>
Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation.
To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $:

$p(y_{i+1} \mid X, \mho , y_1, \ldots , y_i; \theta )$
for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used to find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$.
To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above.
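A hedged sketch of how a single fine-tuning example might be assembled and scored using the HuggingFace implementation of GPT-2; the separator string, truncation lengths and placeholder document texts are our assumptions, and the paper's exact preprocessing may differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
SEP = "<|sep|>"                                   # stand-in for the separator token (mho)
tok.add_special_tokens({"additional_special_tokens": [SEP]})
model.resize_token_embeddings(len(tok))

def make_example(source_ctx, cited_ctx, citing_sentence, j=450, k=450):
    """Build X = s_1..s_j <sep> c_1..c_k <sep> followed by the target citing sentence;
    the loss is computed only over the citing sentence positions."""
    x = tok.encode(source_ctx)[:j] + tok.encode(SEP) + tok.encode(cited_ctx)[:k] + tok.encode(SEP)
    y = tok.encode(citing_sentence) + [tok.eos_token_id]
    input_ids = torch.tensor([x + y])
    labels = torch.tensor([[-100] * len(x) + y])  # -100 excludes context tokens from the loss
    return input_ids, labels

source_ctx = "..."        # e.g. abstract or introduction of the source document
cited_ctx = "..."         # e.g. abstract, introduction, or sampled sentences of the cited document
citing_sentence = "..."   # gold citing sentence t
input_ids, labels = make_example(source_ctx, cited_ctx, citing_sentence)
loss = model(input_ids, labels=labels).loss       # cross-entropy over the citing sentence
loss.backward()
```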
<<<Context>>>
The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document.
<<</Context>>>
<<</Neural Text Generation>>>
<<<Retrieval with Approximate Nearest Neighbors>>>
While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation?
To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training data which is closest to $(S,C)$.
We measure the closeness of two pairs of documents using cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as:

$d\big ((S,C),(N_S,N_C)\big ) = \alpha \, d_{\cos }(S, N_S) + \beta \, d_{\cos }(C, N_C)$
where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set.
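A sketch of the retrieval scoring under the stated setup of averaged, normalized SciBERT contextualized embeddings and a weighted combination of cosine distances; the pooling details and function names are our assumptions.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

scibert_tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
scibert = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

def embed_abstract(text: str) -> np.ndarray:
    """Average the contextualized token embeddings of an abstract and L2-normalize."""
    enc = scibert_tok(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = scibert(**enc).last_hidden_state[0]     # (seq_len, hidden_dim)
    vec = hidden.mean(dim=0).numpy()
    return vec / np.linalg.norm(vec)

def pair_distance(s_vec, c_vec, ns_vec, nc_vec, alpha=1.0, beta=1.0) -> float:
    """Weighted sum of cosine distances between (S, N_S) and (C, N_C);
    vectors are assumed unit length, so cosine similarity is a dot product."""
    return alpha * (1.0 - float(np.dot(s_vec, ns_vec))) + beta * (1.0 - float(np.dot(c_vec, nc_vec)))
```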
<<</Retrieval with Approximate Nearest Neighbors>>>
<<<Language Model Pretraining>>>
GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain.
Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained.
<<</Language Model Pretraining>>>
<<</Models>>>
<<<Evaluation>>>
We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings.
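As an illustration of this adaptation, the sketch below computes a greedy-matching, BERTScore-style F1 from SciBERT token embeddings; it omits refinements of the official implementation such as IDF weighting and tuned layer selection:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

@torch.no_grad()
def token_vectors(sentence):
    enc = tokenizer(sentence, return_tensors="pt", truncation=True)
    hidden = encoder(**enc).last_hidden_state[0]   # [num_tokens, dim]
    hidden = hidden[1:-1]                          # drop [CLS] and [SEP]
    return torch.nn.functional.normalize(hidden, dim=-1)

def scibert_score_f1(candidate, reference):
    c, r = token_vectors(candidate), token_vectors(reference)
    sim = c @ r.T                                  # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean()       # best match for each candidate token
    recall = sim.max(dim=0).values.mean()          # best match for each reference token
    return (2 * precision * recall / (precision + recall)).item()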
Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling.
Automatic evaluation results for the retrieval-based methods on the test data are shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However, we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem.
Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used.
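A minimal sketch of a paired bootstrap test of this kind, with resampling sizes matching the description above (the authors' exact procedure may differ in details):

import random

def paired_bootstrap(scores_a, scores_b, n_iterations=100, n_samples=1000, seed=0):
    # scores_a, scores_b: per-datapoint metric scores of two systems on the same test set.
    # Returns the fraction of bootstrap iterations in which system A outscores system B.
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_iterations):
        sample = [rng.randrange(len(scores_a)) for _ in range(n_samples)]
        mean_a = sum(scores_a[i] for i in sample) / n_samples
        mean_b = sum(scores_b[i] for i in sample) / n_samples
        wins += mean_a > mean_b
    return wins / n_iterations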
<<</Evaluation>>>
<<<Analysis>>>
In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples.
<<<Errors>>>
In order to better understand the performance of the models, we undertake a quantitative analysis of their outputs. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers on the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyzed the outputs of the abs $\times $ abs and IR systems.
In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both.
We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible).
The results of our analysis are presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source.
Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs while the IR system outputs are usually specific enough that a stronger believability judgement can be made.
We also observe an overall higher instance of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation.
Example citations and annotations are shown in Table TABREF15. We find that in the cases where the model generated outputs are unconvincing they are still on topic. All 10 cases in the Source, One Visible and 4 of the cases in Cited, One Visible that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus.
<<</Errors>>>
<<<Examples>>>
Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training.
We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty.
<<</Examples>>>
<<<Future Work>>>
The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improved modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques.
<<</Future Work>>>
<<</Analysis>>>
<<<Related Work>>>
The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization.
Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”.
We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an extant citation link to exist, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI.
Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft.
Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before finetuning a GPT2 model on the citation text generation task.
<<</Related Work>>>
<<<Conclusion>>>
We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTask\nModels\nNeural Text Generation\nContext\nRetrieval with Approximate Nearest Neighbors\nLanguage Model Pretraining\nEvaluation\nAnalysis\nErrors\nExamples\nFuture Work\nRelated Work\nConclusion"
],
"type": "outline"
}
|
2004.04228
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
<<<Abstract>>>
Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose an automatic evaluation protocol called QAGS (pronounced "kags") that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text.
<<</Abstract>>>
<<<Introduction>>>
Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries BIBREF2. However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability BIBREF3.
The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors. Standard metrics for evaluating generated text are predominantly based on counting $n$-grams, which weigh all $n$-grams equally and are insensitive to semantic errors. This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans BIBREF4, BIBREF5, in addition to being slow and costly.
We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models. In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input. Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text. (2) We then use question answering (QA) models to answer these questions given both the input and the generated text. (3) A quality score is computed based on the similarity of corresponding answers.
This approach leverages recent progress in QA and QG to ask and answer human readable, on-topic questions BIBREF6, BIBREF7. It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. text, images, or knowledge graphs.
We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries. Compared to commonly used automatic metrics such as ROUGE BIBREF8, QAGS shows dramatically higher correlations with human judgements of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2. QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outperforming recently proposed NLI models for this task BIBREF5.
Finally, we analyse the robustness of QAGS through an ablation study. QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked. Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics.
Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text. (2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets. We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics. (3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch. (4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent. (5) We will release models and code to compute QAGS.
<<</Introduction>>>
<<<Background: Automatically Evaluating Machine Generated Text>>>
Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion.
ROUGE BIBREF8 was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-$n$ (typically $n \in \lbrace 1, 2\rbrace $), which computes the F1 score for all reference $n$-grams in the generated summary. ROUGE-$L$, another commonly used variant, is the length of the longest common subsequence (possibly non-consecutive) between a summary and references.
BLEU BIBREF10 is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference $n$-grams in the generated summary. METEOR BIBREF11 extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible $n$-gram matching.
We identify two key deficiencies when using these $n$-gram based evaluation metrics to detect factual inconsistencies in generated text.
First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate.
Second, given a reference to compare against, $n$-gram based approaches weigh all portions of the text equally, even when only a small fraction of the $n$-grams carry most of the semantic content. Factual inconsistencies caused by minor changes may be drowned out by otherwise high $n$-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning.
<<</Background: Automatically Evaluating Machine Generated Text>>>
<<<A Framework for Automatically Evaluating Factual Consistency>>>
We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let $X$ and $Y$ be sequences of tokens coming from a vocabulary $V$ where $X$ is a source text and $Y$ is a summary of $X$. We define $p(Q|Y)$ as a distribution over all possible questions $Q$ given summary $Y$, and $p(A|Q, X)$ and $p(A|Q, Y)$ as distributions over all possible answers $A$ to a particular question $Q$ given either the source $X$ or the summary $Y$. We constrain the questions $Q$ and answers $A$ to also be sequences of tokens from $V$. Then the factual consistency of the summary $Y$ is
where $D$ is some function measuring the similarity of the two answer distributions. This expression is maximized when $Y$ contains a subset of the information in $X$ such that it produces the same answer for any question from $p(Q|Y)$. This happens trivially when $Y=X$, e.g. we take $X$ as its own summary, but we usually have other desiderata of $Y$ such that this solution is undesirable.
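(Written out from the definitions above – the displayed form is not reproduced here – this objective is presumably the expected answer-distribution similarity taken under the summary-conditioned question distribution, $\mathbb {E}_{Q \sim p(Q|Y)} [ D(p(A|Q,X), p(A|Q,Y)) ]$.)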
This framework addresses the two issues with $n$-gram based approaches. Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally.
In practice, exactly computing the expectation in Equation DISPLAY_FORM4 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from $p(Q|Y)$, but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions.
<<</A Framework for Automatically Evaluating Factual Consistency>>>
<<<QAGS>>>
Using this framework requires specifying the question distribution $p(Q|Y)$, the answer distribution $p(A|Q, Y)$ (or $X$), and the answer similarity function $D$. We apply this framework to summarization to develop QAGS and describe our instantiations of these components.
<<<Question Generation>>>
To instantiate $p(Q|Y)$, we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models BIBREF12, BIBREF13. We over-sample questions, and then filter out low quality questions as follows.
First, we train and generate from answer-conditional QG models: The model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, we extract named entities and noun phrases as answers candidates using spaCy.
Second, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. We also found it useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer.
<<</Question Generation>>>
<<<Question Answering>>>
We instantiate the answer distributions $p(A|Q,*)$ as extractive QA models, for simplicity. We use extractive QA because we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer.
<<</Question Answering>>>
<<<Answer Similarity>>>
We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining $D$ as
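(The displayed definition is not reproduced here; presumably $D$ is the token-level F1 between the highest-scoring answers under the two distributions, i.e. $D = \mathrm {F1}(\arg \max _A p(A|Q,X), \arg \max _A p(A|Q,Y))$.)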
<<</Answer Similarity>>>
<<<The QAGS Score>>>
Given these components, we obtain the QAGS score of a generation by (1) generating $K$ questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure FIGREF3.
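A minimal sketch of this scoring loop, assuming black-box question generation and extractive QA components (the functions generate_questions and answer are placeholders, not the authors' code; token_f1 is the standard extractive-QA answer comparison):

from collections import Counter

def token_f1(answer_a, answer_b):
    # Standard token-level F1 used for extractive QA answers.
    tokens_a, tokens_b = answer_a.split(), answer_b.split()
    common = Counter(tokens_a) & Counter(tokens_b)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(tokens_a)
    recall = overlap / len(tokens_b)
    return 2 * precision * recall / (precision + recall)

def qags_score(source, summary, generate_questions, answer, k=20):
    # generate_questions(summary, k) -> K questions conditioned on the summary;
    # answer(question, text) -> predicted answer span (possibly empty).
    questions = generate_questions(summary, k)
    scores = [token_f1(answer(q, source), answer(q, summary)) for q in questions]
    return sum(scores) / len(scores) if scores else 0.0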
<<</The QAGS Score>>>
<<</QAGS>>>
<<<Experiments>>>
<<<Human Evaluation>>>
We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency.
<<<Datasets>>>
We evaluate on two abstractive summarization datasets, CNN/Daily Mail BIBREF0, BIBREF14 and XSUM BIBREF1. Abstractive summarization is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models BIBREF15, BIBREF16, BIBREF5.
CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet point highlights. For summaries, we use 235 test outputs from BIBREF17.
XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM, and extractive summarization models perform poorly on this dataset.
We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM BIBREF2.
<<</Datasets>>>
<<<Annotation Protocol>>>
We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details.
We collect 3 annotations per summary. To obtain a single “correctness” score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences.
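A small sketch of this aggregation, assuming three binary judgments (1 = factually consistent) per summary sentence:

def summary_correctness(sentence_annotations):
    # sentence_annotations: one list of binary judgments per summary sentence.
    sentence_scores = []
    for votes in sentence_annotations:
        sentence_scores.append(1 if sum(votes) > len(votes) / 2 else 0)
    return sum(sentence_scores) / len(sentence_scores)

# Example: three sentences, three annotators each.
print(summary_correctness([[1, 1, 0], [0, 0, 1], [1, 1, 1]]))  # 0.666...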
Inter-annotator agreement as measured by Krippendorff's $\alpha $ is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement BIBREF19. While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation BIBREF4.
<<</Annotation Protocol>>>
<<</Human Evaluation>>>
<<<Experimental Details>>>
<<<Baselines>>>
We compare against a number of automatic evaluation metrics: ROUGE BIBREF8, METEOR BIBREF11, BLEU BIBREF10, and BERTScore BIBREF24. The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant.
<<</Baselines>>>
<<</Experimental Details>>>
<<<Results>>>
We present results in Table . QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order $n$-gram metrics work better. BERTScore matches the best $n$-gram metrics on CNN/DM, but the worst overall on XSUM.
On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM BIBREF25. When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers than when using the source article versus when using the summary.
On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric.
<<</Results>>>
<<<Ablations>>>
A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore whether this is true with QAGS by performing ablations on several factors.
<<<Model Quality>>>
We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities.
For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs.
To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table .
<<</Model Quality>>>
<<<Domain Effects>>>
Our approach relies on having a labeled dataset to train QG and QA models. However, for relatively niche domains, such a labeled QA/QG dataset may not exist. Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, the latter of which exactly match the genre of the summarization datasets.
Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS.
<<</Domain Effects>>>
<<<Number of Questions>>>
Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table show that increasing the number of questions used improves correlations with human judgments. We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. With just 5 questions, QAGS still substantially outperforms other automatic metrics, indicating its robustness.
<<</Number of Questions>>>
<<<Answer Similarity Metric>>>
Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1.
<<</Answer Similarity Metric>>>
<<</Ablations>>>
<<</Experiments>>>
<<<Re-ranking with QAGS>>>
Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text BIBREF26, BIBREF16. We compare against these methods by evaluating on the sentence ranking experiment from BIBREF16. The experiment uses 373 triplets, each consisting of a source sentence from CNN/DM and two summary sentences generated by the model from BIBREF27. One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence.
We present the results in Table . Results using two NLI models fine-tuned on MultiNLI BIBREF28, BERT NLI and ESIM BIBREF29, are from BIBREF16. FactCC BIBREF5 is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task.
<<</Re-ranking with QAGS>>>
<<<Qualitative Analysis>>>
<<<Interpreting QAGS>>>
The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table .
On the first example (Table , top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect.
The second example (Table , bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS.
<<</Interpreting QAGS>>>
<<<Error Analysis>>>
The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores.
Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question.
Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article.
Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than $n$-gram based approaches could be useful.
<<</Error Analysis>>>
<<<Limitations>>>
We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics.
The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article.
<<</Limitations>>>
<<</Qualitative Analysis>>>
<<<Related Work>>>
Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences BIBREF30. The primary evaluation metric then and now is ROUGE BIBREF8, though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries BIBREF31, BIBREF32, BIBREF33. Other metrics have focused on specific aspects of summarization quality, including content selection BIBREF34, relevance prediction BIBREF4, and many more.
There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. BIBREF35 use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. BIBREF16 investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. BIBREF5 train an NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristics. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many questions about the same sentence.
Most relatedly, BIBREF36 and BIBREF37 use QA models to evaluate summarization. We diverge from these works in two important ways. First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection.
<<</Related Work>>>
<<<Conclusion>>>
We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. Error analysis shows that future work should explore improved QA models. Our approach can also be applied to diverse modalities, such as translation and image captioning. Overall, we believe QAGS is useful in quantifying and incentivizing factually consistent text generation.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground: Automatically Evaluating Machine Generated Text\nA Framework for Automatically Evaluating Factual Consistency\nQAGS\nQuestion Generation\nQuestion Answering\nAnswer Similarity\nThe QAGS Score\nExperiments\nHuman Evaluation\nDatasets\nAnnotation Protocol\nExperimental Details\nBaselines\nResults\nAblations\nModel Quality\nDomain Effects\nNumber of Questions\nAnswer Similarity Metric\nRe-ranking with QAGS\nQualitative Analysis\nInterpreting QAGS\nError Analysis\nLimitations\nRelated Work\nConclusion"
],
"type": "outline"
}
|
1909.00161
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach
<<<Abstract>>>
Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. ::: This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the ``topic'' aspect includes ``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy'' and ``anger''; the ``situation'' aspect includes ``medical assistance'' and ``water shortage''. ii) We extend the existing evaluation setup (label-partially-unseen) -- given a dataset, train on some labels, test on all labels -- to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way. ::: Code & Data: this https URL
<<</Abstract>>>
<<<Introduction>>>
Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc.
Existing $\textsc {0shot-tc}$ studies have mainly the following three problems.
<<<First problem.>>>
The $\textsc {0shot-tc}$ problem was modeled with an overly restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of the classes are seen and their labeled instances are available to train a model, which we define here as Definition-Restrictive:
Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$.
In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition:
Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development.
<<</First problem.>>>
<<<Second problem.>>>
Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding either the aspect's specific interpretation or the meaning of the labels. This does not apply to $\textsc {0shot-tc}$, as we can no longer pre-define the size of the label space and we cannot presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines that catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role.
<<</Second problem.>>>
<<<Third problem.>>>
Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation.
In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects:
<<</Third problem.>>>
<<<Dataset.>>>
We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes.
<<</Dataset.>>>
<<<Evaluation.>>>
Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training.
<<</Evaluation.>>>
<<<Entailment approach.>>>
Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspect. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask themselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild.
Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released.
<<</Entailment approach.>>>
<<</Introduction>>>
<<<Related Work>>>
$\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$.
JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios.
There are a few works that study a specific zero-shot problem by indirect supervision from other problems. DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies the $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple apsects and evaluations.
Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc.
<<</Related Work>>>
<<<Benchmark the dataset>>>
In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection.
For each dataset, we insist on two principles: i) Label-partially-unseen: A part of labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance of unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups.
<<<Topic detection>>>
<<<Yahoo.>>>
We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with original split: 1.4M/60k in train/test (all labels are evenly distributed).
We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; for test, all 10 labels are included, with 10k instances for each. Then training sets are created from the remaining instances as follows.
For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$:
Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances.
Train-v1: 5 classes: { “Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances.
We always create two versions of train with non-overlapping labels so as to avoid the model over-fitting to one particular label set.
Label-fully-unseen shares the same test and dev sets with label-partially-unseen, except that it has no training set. It is worth mentioning that our label-partially-unseen and label-fully-unseen setups enable us to compare performance across the two; this shows the system's capabilities when different numbers of classes are seen.
<<</Yahoo.>>>
<<</Topic detection>>>
<<<Emotion detection>>>
<<<UnifyEmotion.>>>
This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. This dataset consists of text from multiple domains: tweets, emotional events, fairy tales and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (approx. 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1.
Since the labels in this dataset have an unbalanced distribution, we first list the fixed $\emph {test}$ and $\emph {dev}$ in Table TABREF9 and Table TABREF10, respectively. They are shared by the following label-partially-unseen and label-fully-unseen setups of train.
Label-partially-unseen has the following two versions of train:
Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included.
Train-v1: 4 classes: { “joy”, “disgust”, “surprise”, “guilt”} are included.
For label-fully-unseen, no training set is provided.
<<</UnifyEmotion.>>>
<<</Emotion detection>>>
<<<Situation detection>>>
The situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation such as the need for water or medical aid, or an issue situation such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$.
We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances and 11 situation types in total: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrorism”, “crime violence”, and an extra type “none” if none of the 11 types applies. This is a multi-label classification dataset, and label-wise weighted F1 is the official evaluation metric.
The train, test and dev are listed in Table TABREF22.
<<<Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>>
Our three datasets cover single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in the “emotion” and “situation” tasks if no predefined types apply – this makes the problem more realistic.
<<</Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>>
<<</Situation detection>>>
<<</Benchmark the dataset>>>
<<<Benchmark the evaluation>>>
How to evaluate a $\textsc {0shot-tc}$ system? This requires revisiting the original motivation of doing $\textsc {0shot-tc}$ research. As we discussed in the Introduction section, ideally, we aim to build a system that works like humans – figuring out if a piece of text can be assigned an open-defined label, without any constraints on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen.
<<<Label-partially-unseen.>>>
This is the most common setup in existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc, train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text.
<<</Label-partially-unseen.>>>
<<<Label-fully-unseen.>>>
In this setup, we push “zero-shot” to the extreme – no annotated data for any labels. So, we imagine learning a system through whatever approaches are available, then testing it on $\textsc {0shot-tc}$ datasets of open aspects.
This label-fully-unseen setup is more like the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since usually this kind of model fails to generalize in other domains and other tasks); therefore, we are encouraged to learn models with open data or test-agnostic data. In this way, the learned models behave more like humans.
<<</Label-fully-unseen.>>>
<<</Benchmark the evaluation>>>
<<<An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>>
As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ the entailment approach is that it always has a pre-defined, fixed set of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization.
<<<Convert labels into hypotheses.>>>
The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., the “topic” aspect maps to the interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”.
In this work, we explored only two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. In testing, once one of them results in an “entailment” decision, we decide that the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as converting “food” into “the people there are starving”. Here we only set baseline examples using automatic approaches; more exploration is left as future work, and we welcome the community to contribute.
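To make the conversion concrete, the sketch below illustrates the two automatic hypothesis-generation routes (label name vs. WordNet definition). The templates and function names are illustrative assumptions, not the exact wording or code used in this work.

```python
# Minimal sketch: turn a label into candidate hypotheses via its name and its
# WordNet definition. Templates are assumed for illustration only.
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

ASPECT_TEMPLATES = {
    "topic": "this text is about {}",
    "emotion": "this text expresses the feeling of {}",
    "situation": "the people there need {}",
}

def label_to_hypotheses(label, aspect):
    hyps = [ASPECT_TEMPLATES[aspect].format(label)]           # word-based hypothesis
    synsets = wn.synsets(label.replace(" ", "_"))
    if synsets:                                               # definition-based hypothesis
        hyps.append(ASPECT_TEMPLATES[aspect].format(synsets[0].definition()))
    return hyps

print(label_to_hypotheses("shelter", "situation"))
```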
<<</Convert labels into hypotheses.>>>
<<<Convert classification data into entailment data.>>>
For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train.
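The following sketch shows one plausible way to materialize this conversion for a single labeled example; the function and field names are hypothetical.

```python
# Convert one classification example into entailment (premise, hypothesis) pairs.
# `label_to_hypotheses` is the hypothesis generator sketched earlier; names are illustrative.
def to_entailment_pairs(text, gold_labels, candidate_labels, label_to_hypotheses, aspect):
    pairs = []
    for label in candidate_labels:
        for hyp in label_to_hypotheses(label, aspect):
            tag = "entailment" if label in gold_labels else "non-entailment"
            pairs.append({"premise": text, "hypothesis": hyp, "label": tag})
    return pairs
```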
<<</Convert classification data into entailment data.>>>
<<<Entailment model learning.>>>
In this work, we make use of a widely recognized state-of-the-art entailment technique – BERT BIBREF18 – and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into the binary case: “entailment” vs. “non-entailment”, by mapping the label “neutral” (where it exists) to “non-entailment”.
For our label-fully-unseen setup, we directly apply this pretrained entailment model to the test sets of all $\textsc {0shot-tc}$ aspects. For the label-partially-unseen setup, in which we intentionally provide annotated data, we first pretrain BERT on MNLI/FEVER/RTE, then fine-tune on the provided training data.
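As a concrete reference point, the snippet below sketches how such a binary entailment model could score a premise–hypothesis pair with HuggingFace Transformers; the checkpoint name, label order, and hyperparameters are our own assumptions rather than the exact setup used here.

```python
# Score a (premise, hypothesis) pair with a binary BERT entailment classifier.
# Checkpoint and label order are illustrative; fine-tuning on MNLI/FEVER/RTE is omitted.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

enc = tokenizer("Hundreds of homes were destroyed by the flood.",
                "The people there need shelter.",
                return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**enc).logits
prob_entailment = torch.softmax(logits, dim=-1)[0, 1]  # index 1 = "entailment" by our convention
```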
<<</Entailment model learning.>>>
<<<Harsh policy in testing.>>>
Since seen labels have annotated data for training, we adopt different policies for picking seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $; if only seen or only unseen labels are predicted as positive, we pick the one with the highest probability. ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change a seen label to “negative” if its probability of being positive exceeds that of the unseen label by less than $\alpha $. Finally, all labels predicted positive are selected; if there are no positive labels, we choose the “none” type.
We set $\alpha $ = 0.05 in our systems, tuned on dev.
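For the single-label case, the selection rule can be written compactly as below; this is a sketch of our reading of the policy, with illustrative names and tie-breaking details left as assumptions.

```python
# Harsher selection rule for single-label classification (alpha = 0.05, tuned on dev).
def pick_single_label(positive_probs, seen_labels, alpha=0.05):
    # positive_probs: {label: P(entailment)} for labels predicted positive
    seen = {l: p for l, p in positive_probs.items() if l in seen_labels}
    unseen = {l: p for l, p in positive_probs.items() if l not in seen_labels}
    if seen and unseen:
        best_seen = max(seen, key=seen.get)
        best_unseen = max(unseen, key=unseen.get)
        return best_seen if seen[best_seen] - unseen[best_unseen] > alpha else best_unseen
    pool = seen or unseen
    return max(pool, key=pool.get) if pool else "none"
```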
<<</Harsh policy in testing.>>>
<<</An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>>
<<<Experiments>>>
<<<Label-partially-unseen evaluation>>>
In this setup, there is annotated data for part of the labels as train. Therefore, we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines.
<<<Baselines.>>>
Majority: always predict the label with the largest number of instances.
ESA: A dataless classifier proposed in BIBREF0. It maps the words (in the text and in the label names) into the title space of Wikipedia articles, then compares the text with the label names. This method does not rely on train.
We implemented ESA based on the 08/01/2019 Wikipedia dump, which contains about 6.1M words and 5.9M articles.
Word2Vec BIBREF23: Both the text and the labels are represented by the element-wise sum of their word embeddings, and cosine similarity then determines the labels. This method does not rely on train either.
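This baseline amounts to a few lines of vector arithmetic, roughly as sketched below (the embedding source and tokenization are assumptions).

```python
# Word2Vec baseline sketch: summed word vectors + cosine similarity to label names.
import numpy as np

def embed(tokens, w2v, dim=300):
    vecs = [w2v[t] for t in tokens if t in w2v]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def predict_label(text_tokens, labels, w2v):
    text_vec = embed(text_tokens, w2v)
    return max(labels, key=lambda l: cosine(text_vec, embed(l.lower().split(), w2v)))
```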
Binary-BERT: We fine-tune BERT on train, which yields a binary entailment-or-not classifier; then we test it on test – picking the label with the maximal probability in single-label scenarios and choosing all labels with an “entailment” decision in multi-label cases.
<<</Baselines.>>>
<<<Discussion.>>>
The results of label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because the label names, i.e., topics such as “sports”, are closer to keywords such as “basketball” in the Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evacuation”, etc.) can hardly find close words in the text through Word2Vec embeddings. On the contrary, “ESA” more easily brings a class such as “shelter” close to keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly for the emotion detection problem. We suspect that emotion detection requires more entailment capability. For example, for the text snippet “when my brother was very late in arriving home from work”, its gold emotion “fear” requires some common-knowledge inference rather than just word semantic matching through Word2Vec and ESA.
The supervised method “Binary-BERT” is indeed strong in learning the seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes.
Our entailment models, especially the one pretrained on MNLI, generally achieve performance competitive with “Binary-BERT” on seen labels (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance on unseen labels by large margins. At this stage, fine-tuning an MNLI-based pretrained entailment model seems more powerful.
<<</Discussion.>>>
<<</Label-partially-unseen evaluation>>>
<<<Label-fully-unseen evaluation>>>
Regarding this label-fully-unseen evaluation, apart from our entailment models and three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline:
Wikipedia-based: We train a binary classifier based on BERT on a dataset collected from Wikipedia. Wikipedia is a general-purpose corpus, not targeting any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is a popular way of creating training data for text categorization, such as in BIBREF13. More specifically, we collected 100K articles along with the categories listed at the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Then each article and its positive/negative categories act as training pairs for the binary classifier.
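The article-to-training-pair construction can be sketched as follows; the data structures are hypothetical, and the crawling of Wikipedia articles and categories is omitted.

```python
# Build binary training pairs from one Wikipedia article: positive categories vs.
# three randomly sampled negative categories (collection of articles is omitted).
import random

def make_training_pairs(article_text, positive_categories, all_categories, n_neg=3):
    negatives = random.sample(
        [c for c in all_categories if c not in positive_categories], n_neg)
    return ([(article_text, c, 1) for c in positive_categories] +
            [(article_text, c, 0) for c in negatives])
```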
We notice that “Wikipedia-based” training indeed contributes a lot to the topic detection task; however, its performance on the emotion and situation detection problems is far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to the Wikipedia-based topic categorization task, whereas emotion and situation categorization are relatively further away.
Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robustly on the three $\textsc {0shot-tc}$ aspects (except for RTE on emotion). Recall that they are not trained on any text classification data and never see the domain and the aspects of the test data. This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performance on all three tasks.
An interesting phenomenon, comparing the label-partially-unseen results in Table TABREF30 and the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models rank in this order in the label-fully-unseen case: RTE $>$ FEVER $>$ MNLI; on the contrary, if we fine-tune them in the label-partially-unseen case, the MNLI-based model performs best. This could be because, on one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work.
<<</Label-fully-unseen evaluation>>>
<<<How do the generated hypotheses influence>>>
In Table TABREF24, we listed examples for converting class names into hypotheses. In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition).
This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both “emotion” and “situation” detection tasks. However, the other two entailment models (RTE and FEVER) mostly prefer “word”. iii) Since it is unrealistic to adopt only one entailment model, such as from {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of different hypothesis generation approaches. In this work, our ensemble method reaches the top performance when combining the “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future.
To better understand the impact of the generated hypotheses, we dive into the performance on each label, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model in the label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their overly abstract meaning. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels are mostly common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – which is reasonable based on our common knowledge about disasters.
<<</How do the generated hypotheses influence>>>
<<</Experiments>>>
<<<Summary>>>
In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups. Therefore, we are benchmarking $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle the broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without the annotated data of seen labels.
<<</Summary>>>
<<<Acknowledgments>>>
The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
<<</Acknowledgments>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nFirst problem.\nSecond problem.\nThird problem.\nDataset.\nEvaluation.\nEntailment approach.\nRelated Work\nBenchmark the dataset\nTopic detection\nYahoo.\nEmotion detection\nUnifyEmotion.\nSituation detection\nSummary of @!START@$\\textsc {0shot-tc}$@!END@ datasets.\nBenchmark the evaluation\nLabel-partially-unseen.\nLabel-fully-unseen.\nAn entailment model for @!START@$\\textsc {0shot-tc}$@!END@\nConvert labels into hypotheses.\nConvert classification data into entailment data.\nEntailment model learning.\nHarsh policy in testing.\nExperiments\nLabel-partially-unseen evaluation\nBaselines.\nDiscussion.\nLabel-fully-unseen evaluation\nHow do the generated hypotheses influence\nSummary\nAcknowledgments"
],
"type": "outline"
}
|
1909.08167
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis
<<<Abstract>>>
Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution.
<<</Abstract>>>
<<<Introduction>>>
Sentiment analysis aims to predict the sentiment polarity of user-generated data with emotional orientation, such as movie reviews. The exponential increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span many different domains, and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to a label-few target domain (T).
In recent years, one of the most popular frameworks for cross-domain sentiment analysis has been the domain-invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using the source domain's rich labeled data. The main difference among these methods is the technique applied to force the feature representations to be domain-invariant.
However, in this work, we discover that applying DIRL may harm domain adaptation in the situation where the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variables, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative about $\rm {Y}$. This will, in turn, harm the generalization of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the objective of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worth studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data cleaning method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of the source domain labeled data $\rm {P}_S(\mathbf {Y})$ in advance, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7.
To address the problem of DIRL resulting from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as examples, respectively.
In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts.
<<</Introduction>>>
<<<Preliminary and Related Work>>>
<<<Domain Adaptation>>>
For consistency of exposition, in this work we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also apply to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d.$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$:
The goal of domain adaptation is to build a classifier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$.
For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10, subsampling BIBREF11, and, of course, domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22.
<<</Domain Adaptation>>>
<<<Domain Invariant Representation Learning>>>
Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25.
Theorem 1 For a hypothesis $h$,
Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions.
Based on Theorem UNKREF3, and assuming that performing a feature transform on $\rm {X}$ will not increase the values of the first and third terms on the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ to $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction the metric-based DIRL methods. A representative work of this direction is the central-moment-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy (CMD) metric to evaluate the discrepancy between two distributions. Specifically, let $\rm {X}_S$ and $\rm {X}_T$ denote $M$-dimensional random vectors on the compact interval $[a; b]^M$ over distributions $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by:
Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X})$, and
is the $k$-th central moment, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$.
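For readers who prefer code, a bare-bones CMD implementation along the lines of the definition above might look as follows; the normalization constants over the interval $[a;b]$ used in the original CMD formulation are omitted, so this is an illustrative sketch rather than the exact loss.

```python
# Central moment discrepancy (CMD) sketch: match means and the first K central
# moments of source and target feature batches (interval normalization omitted).
import torch

def cmd_loss(x_s, x_t, K=5):
    mean_s, mean_t = x_s.mean(dim=0), x_t.mean(dim=0)
    loss = torch.norm(mean_s - mean_t, p=2)
    for k in range(2, K + 1):
        c_s = ((x_s - mean_s) ** k).mean(dim=0)
        c_t = ((x_t - mean_t) ** k).mean(dim=0)
        loss = loss + torch.norm(c_s - c_t, p=2)
    return loss
```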
The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss:
over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimizing the Jensen-Shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for conciseness, we write $\rm {P}$ as shorthand for $\rm {P}(G(\rm {X}))$.
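In practice, this min–max game is often implemented with a gradient reversal layer, as sketched below; this is a common implementation pattern rather than necessarily the exact code of DANN.

```python
# Gradient-reversal sketch of the domain-adversarial loss: D minimizes the loss,
# while reversed gradients push G to maximize it.
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def domain_adversarial_loss(D, feats_src, feats_tgt, lam=1.0):
    bce = nn.BCEWithLogitsLoss()
    logits_src = D(GradReverse.apply(feats_src, lam))
    logits_tgt = D(GradReverse.apply(feats_tgt, lam))
    return bce(logits_src, torch.ones_like(logits_src)) + \
           bce(logits_tgt, torch.zeros_like(logits_tgt))
```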
The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively:
Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$.
<<</Domain Invariant Representation Learning>>>
<<</Preliminary and Related Work>>>
<<<Problem of Domain-Invariant Representation Learning>>>
In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means the decrease of target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$.
When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem.
Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S\left(G(\rm {X})\right)=\rm {P}_T\left(G(\rm {X})\right)$, then $\rm {P}_S(\rm {Y}=i|G(\rm {X}))=\rm {P}_S(\rm {Y}=i)$.
Proofs appear in Appendix A.
<<<Remark.>>>
According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant tends to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Consider the extreme case in which every instance $x$ is mapped to a single point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples, which is clearly unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B.
When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain as strong a conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the objective of achieving superior classification performance and that of making features domain-invariant.
Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the remaining classes in $G(\rm {X})$, i.e.,:
In DIRL, we hope that:
Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force
in the region $x \in \mathcal {X}_i$. Taking the integral over $\mathcal {X}_i$ on both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, $G(\rm {X})$ cannot be fully class-separable when it is domain-invariant. Note that the objective of supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this indicates a conflict between supervised learning and domain-invariant representation learning.
Based on the above analysis, we conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and, at the same time, domain-invariant using the DIRL framework when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ exists in many cross-domain sentiment analysis tasks. Therefore, it is worth studying how to deal with this problem of DIRL.
<<</Remark.>>>
<<</Problem of Domain-Invariant Representation Learning>>>
<<<Weighted Domain Invariant Representation Learning>>>
According to the above analysis, we proposed a weighted version of DIRL to address the problem that the shift of $\rm {P}(\rm {Y})$ causes for DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then take the shift of $\rm {P}(\rm {Y})$ into account in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL to the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively.
<<<Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>>
The motivation behind this practice is to adjust the data distribution of the source or target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Since we only have labels for source domain data, we choose to adjust the data distribution of the source domain. To achieve this, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that:
and we denote by $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL amounts to aligning $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objectives of the supervised learning $\mathcal {L}_{sup}$ and the domain-invariant learning $\mathcal {L}_{inv}$, and the degree of conflict will decrease as $\rm {P}_S(\rm {Y})$ gets close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$ since this will make $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to resolve the conflict.
We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instantiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with
Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by:
Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data.
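A rough illustration of the class-weighted source statistic is sketched below for the first-moment term; this reflects our reading of the weighting (a class-weighted combination of per-class expectations), so it should be treated as an assumption-laden sketch rather than the paper's exact equation.

```python
# Illustrative class-weighted source mean for the revised CMD: a weighted
# combination of per-class expectations (our reading; names are hypothetical).
import torch

def weighted_source_mean(x_s, y_s, w, p_s_y):
    # x_s: [n, d] source features; y_s: [n] class ids; w: [L] trainable class weights;
    # p_s_y: [L] empirical source class priors.
    mean = torch.zeros(x_s.size(1), device=x_s.device)
    for i in range(len(w)):
        mask = (y_s == i)
        if mask.any():
            mean = mean + w[i] * p_s_y[i] * x_s[mask].mean(dim=0)
    return mean  # compared against x_t.mean(dim=0) inside the CMD-style loss
```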
As for those adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by:
During model training, $D$ is optimized in the direction to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning.
The general task loss in WDIRL is defined by:
where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$.
<<</Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>>
<<<Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>>
In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have:
where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice for this step is to estimate $\gamma (\rm {Y}=i)$ with the $\mathbf {w}_i$ obtained in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with:
In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$.
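Assuming the standard prior-shift correction implied by the relation above (i.e., the target posterior is proportional to $\mathbf {w}_i$ times the source posterior), the adjustment step can be sketched as follows; this is our hedged reading, not code from the paper.

```python
# Second-step adjustment sketch: reweigh source-domain posteriors by the learned
# class weights w and renormalize to get target-domain predictions.
import numpy as np

def adjust_posterior(p_s_y_given_x, w):
    # p_s_y_given_x: [L] source classifier posteriors for one example; w: [L] class weights.
    scores = np.asarray(w) * np.asarray(p_s_y_given_x)
    return scores / scores.sum()

def predict_target_label(p_s_y_given_x, w):
    return int(np.argmax(adjust_posterior(p_s_y_given_x, w)))
```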
<<</Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>>
<<</Weighted Domain Invariant Representation Learning>>>
<<<Experiment>>>
<<<Experiment Design>>>
Through the experiments, we empirically studied our analysis of DIRL and the effectiveness of our proposed solution in dealing with the problem DIRL suffers from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution, respectively. To perform the study, we carried out a performance comparison between the following models:
SO: the source-only model trained using source domain labeled data without any domain adaptation.
CMD: the central-moment-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$.
DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$.
$\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.
$\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.
$\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.
$\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.
$\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimate from target labeled data) to $\mathbf {w}$ and fixes this value during model training.
$\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training.
Intrinsically, SO can provide an empirical lower bound for the domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ can provide the empirical upper bounds of $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can assess the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can assess the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can assess the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can assess the general effectiveness of our proposed solution.
<<</Experiment Design>>>
<<<Dataset and Task Design>>>
We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews from four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded as a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams.
<<<Binary-Class.>>>
From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was rated up to 3 stars, and as class `2' if it was rated 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the maximum value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study.
<<</Binary-Class.>>>
<<<Multi-Class.>>>
We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$.
<<</Multi-Class.>>>
<<</Dataset and Task Design>>>
<<<Implementation Detail>>>
For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For the DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50-dimensional hidden layer with ReLU activation functions and a linear classification layer. The hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. The initial learning rate of $\mathbf {w}$ was set to 0.01, while that of the other parameters was set to 0.005 for all tasks.
The hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in the range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, the label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domains B and K. We chose the value that maximized the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation, in which we do not have labeled target domain data for training or development. However, we argue that this practice does not make model comparison unfair, since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we report its best performance achieved on testing data of the target domain during its training.
To initialize $\mathbf {w}$, we used label prediction of the source-only model. Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by:
Here, $\mathbb {I}$ denotes the indicator function. To offer an intuitive understanding of this strategy, we report the performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domains B and D form one group and domains E and K form another group, since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initializations of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy.
<<</Implementation Detail>>>
<<<Main Result>>>
Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation degrades the domain adaptation performance rather than improving it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method in addressing the problem of the DIRL framework in the studied situation. A similar conclusion can also be obtained by comparing the performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperform $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the degree of $\rm {P}(\rm {Y})$ shift increased. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly under varying degrees of $\rm {P}(\rm {Y})$ shift. Moreover, it can achieve nearly the upper-bound performance characterized by $\text{CMD}^{*}$. This again verifies the effectiveness of our solution.
Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (refer to Appendix D for results on the other tasks). From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperform $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively.
<<</Main Result>>>
<<</Experiment>>>
<<<Conclusion>>>
In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation, when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nPreliminary and Related Work\nDomain Adaptation\nDomain Invariant Representation Learning\nProblem of Domain-Invariant Representation Learning\nRemark.\nWeighted Domain Invariant Representation Learning\nAlign @!START@$\\rm {P}(\\rm {X}|\\rm {Y})$@!END@ with Class Weight\nAlign @!START@$\\rm {P}(\\rm {Y}|\\rm {X})$@!END@ with Class Weight\nExperiment\nExperiment Design\nDataset and Task Design\nBinary-Class.\nMulti-Class.\nImplementation Detail\nMain Result\nConclusion"
],
"type": "outline"
}
|
1909.04181
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
BERT-Based Arabic Social Media Author Profiling
<<<Abstract>>>
We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks.
<<</Abstract>>>
<<<Introduction>>>
The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.
In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude.
<<</Introduction>>>
<<<Data>>>
For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test sets by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels, and participants were expected to submit their predictions on test. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}.
<<</Data>>>
<<<Experiments>>>
As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce an additional in-house dataset labeled with dialect and gender tags to the task, as we explain below. As a baseline, we use a small gated recurrent units (GRU) model. We now introduce our tweet-level models.
<<<Tweet-Level Models>>>
<<<Baseline GRU.>>>
Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains one unidirectional GRU layer, with 500 units, and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results with 2 epochs.
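A rough PyTorch rendering of this baseline is given below; the embedding dimension and other unspecified details are assumptions, while the hidden size, vocabulary size, initialization, and dropout follow the description above.

```python
# Sketch of the GRU baseline (embedding dimension is an assumed value).
import torch.nn as nn

class GRUBaseline(nn.Module):
    def __init__(self, vocab_size=100_000, emb_dim=300, hidden=500, n_classes=3):
        # n_classes: 3 for age, 15 for dialect, 2 for gender
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        nn.init.normal_(self.emb.weight, mean=0.0, std=1.0)   # W ~ N(0, 1)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)  # one unidirectional layer
        self.drop = nn.Dropout(0.5)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):        # token_ids: [batch, seq_len <= 50]
        _, h_n = self.gru(self.emb(token_ids))
        return self.out(self.drop(h_n[-1]))
```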
<<</Baseline GRU.>>>
<<<BERT.>>>
For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedias of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in the entire model. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender.
<<</BERT.>>>
<<<Data Augmentation.>>>
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users and 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender.
<<</Data Augmentation.>>>
<<</Tweet-Level Models>>>
<<<User-Level Models>>>
Our aforementioned models predict profiling labels at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99, taking a softmax-based majority class as the user-level predicted label, and tune the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender.
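The tweet-to-user aggregation can be sketched as below; the fallback behaviour when no tweet clears the threshold is an assumption on our part.

```python
# Port tweet-level predictions to a user-level label: keep tweets whose top softmax
# score clears a threshold, then take the majority class (threshold tuned on DEV).
from collections import Counter

def user_level_label(tweet_predictions, threshold):
    # tweet_predictions: list of (label, softmax_confidence) for one user's 100 tweets
    kept = [label for label, conf in tweet_predictions if conf >= threshold]
    if not kept:                      # assumed fallback: ignore the threshold
        kept = [label for label, _ in tweet_predictions]
    return Counter(kept).most_common(1)[0][0]
```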
<<</User-Level Models>>>
<<<APDA@FIRE2019 submission>>>
First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT model for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from tweet to user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV.
Third submission. Finally, for our third submission, we use a majority vote of (1) the first submission, (2) the second submission, and (3) predictions from our user-level BERT model. These majority-class models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy.
<<</APDA@FIRE2019 submission>>>
<<</Experiments>>>
<<<Conclusion>>>
In this work, we described our submitted models to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nData\nExperiments\nTweet-Level Models\nBaseline GRU.\nBERT.\nData Augmentation.\nUser-Level Models\nAPDA@FIRE2019 submission\nConclusion"
],
"type": "outline"
}
|
1911.06171
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Unsupervised Pre-training for Natural Language Generation: A Literature Review
<<<Abstract>>>
Recently, unsupervised pre-training is gaining increasing popularity in the realm of computational linguistics, thanks to its surprising success in advancing natural language understanding (NLU) and the potential to effectively exploit large-scale unlabelled corpus. However, regardless of the success in NLU, the power of unsupervised pre-training is only partially excavated when it comes to natural language generation (NLG). The major obstacle stems from an idiosyncratic nature of NLG: Texts are usually generated based on certain context, which may vary with the target applications. As a result, it is intractable to design a universal architecture for pre-training as in NLU scenarios. Moreover, retaining the knowledge learned from pre-training when learning on the target task is also a non-trivial problem. This review summarizes the recent efforts to enhance NLG systems with unsupervised pre-training, with a special focus on the methods to catalyse the integration of pre-trained models into downstream tasks. They are classified into architecture-based methods and strategy-based methods, based on their way of handling the above obstacle. Discussions are also provided to give further insights into the relationship between these two lines of work, some informative empirical phenomenons, as well as some possible directions where future work can be devoted to.
<<</Abstract>>>
<<<Introduction>>>
Unsupervised pre-training has sparked a sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as an auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU.
The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed as the catastrophic forgetting problem BIBREF0.
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network.
The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review.
<<</Introduction>>>
<<<Background: Unsupervised Pre-training for NLU>>>
Learning fine-grained language representations is a perennial topic in natural language understanding. In retrospect, compelling evidence suggests that good representations can be learned through unsupervised pre-training.
Early work focused on word-level representations BIBREF1, BIBREF2, which encode each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm:
$P(X) = \prod _{t=1}^{T} P\left(x_{t} \mid C; \theta \right)$
where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt uni-directional Transformer BIBREF7 and bi-directional LSTM BIBREF8 language models, respectively. In this case, the context is defined as $x_{1:t}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive way of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of [MASK] token, XLNet BIBREF9 introduces permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence thereby is included in the context $C$.
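To make the MLM objective described above concrete, the following toy Python sketch corrupts a token sequence in the spirit of the masked language model; the 15% masking proportion and the mask/random split are illustrative simplifications rather than the exact recipe of any specific model.

import random

def corrupt_for_mlm(tokens, vocab, mask_prob=0.15, mask_token="[MASK]"):
    # Select a fixed proportion of positions; replace each selected token with
    # [MASK] or a random vocabulary token, and record the original token as a target.
    corrupted, targets = list(tokens), {}
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = token
            corrupted[i] = mask_token if random.random() < 0.8 else random.choice(vocab)
    return corrupted, targets

sentence = "unsupervised pre-training helps natural language generation".split()
x_mask, prediction_targets = corrupt_for_mlm(sentence, vocab=sentence)
# The model is trained to predict every token in prediction_targets given the same context x_mask.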
The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite being trained with a language modeling objective, the pre-trained representations have successfully pushed the state-of-the-art on multiple benchmarks.
<<</Background: Unsupervised Pre-training for NLU>>>
<<<Unsupervised Pre-training and Parameter Initialization for NLG>>>
NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation.
BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaption of GPT-2 compares unfavorably with other unsupervised baselines.
BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework.
<<</Unsupervised Pre-training and Parameter Initialization for NLG>>>
<<<Architecture-based Methods>>>
<<<Inducing Task-Specific Architecture in Pre-training>>>
Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this purpose can be categorized into three variants: Denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs).
<<<Denoising Autoencoder>>>
An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on the encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence. The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned a new index based on a Gaussian distribution $N(0, \sigma )$; the delete and replace operations of a token are determined by a Bernoulli distribution $B(p)$ with a Beta distribution as prior. The three functions are applied to the raw sequences in random order.
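The following toy Python sketch mirrors the three noising functions described above; the distribution parameters and implementation details are illustrative assumptions rather than the exact procedure of BIBREF17.

import random

def shuffle_tokens(tokens, sigma=1.0):
    # Assign each position a new key drawn as index + Gaussian noise N(0, sigma), then reorder.
    keys = [i + random.gauss(0, sigma) for i in range(len(tokens))]
    return [token for _, token in sorted(zip(keys, tokens))]

def delete_tokens(tokens, p=0.1):
    # Drop each token with Bernoulli probability p (keep the sentence non-empty).
    kept = [token for token in tokens if random.random() >= p]
    return kept or list(tokens)

def replace_tokens(tokens, vocab, p=0.1):
    # Replace each token with a random vocabulary word with probability p.
    return [random.choice(vocab) if random.random() < p else token for token in tokens]

def corrupt(tokens, vocab):
    # Apply the three noising functions in random order to obtain the DAE input.
    noising = [shuffle_tokens, delete_tokens, lambda t: replace_tokens(t, vocab)]
    random.shuffle(noising)
    for fn in noising:
        tokens = fn(tokens)
    return tokens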
<<</Denoising Autoencoder>>>
<<<Conditional Masked Language Model>>>
CMLM BIBREF18 extends the single model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in contrast to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens, and the unmasked tokens in the encoder side are masked when being fed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with index from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by encoder side masks, and encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by decoder side masks.
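A small Python illustration of this input construction (our own simplified sketch, with [MASK] as a placeholder symbol): given a masked span $[u, v]$, the encoder sees the sequence with that span masked, the decoder input masks everything outside the span, and only the span tokens are reconstructed.

def cmlm_inputs(tokens, u, v, mask_token="[MASK]"):
    # Encoder input X^{\u:v}: the span u..v is masked out.
    encoder_input = [mask_token if u <= i <= v else token for i, token in enumerate(tokens)]
    # Decoder input X^{u:v}: every token outside the span is masked.
    decoder_input = [token if u <= i <= v else mask_token for i, token in enumerate(tokens)]
    targets = tokens[u:v + 1]  # only the masked tokens are predicted
    return encoder_input, decoder_input, targets

enc_in, dec_in, targets = cmlm_inputs("pre training helps text generation".split(), u=1, v=2)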
<<</Conditional Masked Language Model>>>
<<<Sequence to Sequence Language Model>>>
Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of source sentence and target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the abilities to extract language representations and to generate text are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicts masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs to the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective.
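A small numpy sketch of this masking pattern is shown below (1 = attention allowed); it only illustrates the idea of constraining the attention span over the concatenated [source; target] input and is not the original implementation.

import numpy as np

def seq2seq_lm_mask(src_len, tgt_len):
    # Source tokens attend to all source tokens; each target token attends to
    # all source tokens and only to the leftward (already seen) target tokens.
    n = src_len + tgt_len
    mask = np.zeros((n, n), dtype=int)
    mask[:src_len, :src_len] = 1                              # source <-> source
    mask[src_len:, :src_len] = 1                              # target -> source
    mask[src_len:, src_len:] = np.tril(np.ones((tgt_len, tgt_len), dtype=int))
    return mask

print(seq2seq_lm_mask(src_len=3, tgt_len=2))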
<<</Sequence to Sequence Language Model>>>
<<</Inducing Task-Specific Architecture in Pre-training>>>
<<<Encoder-Agnostic Architectures for Adaptation>>>
Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some researches began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11). Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous layer decoder outputs as an integral input, and the attended results are computed as follows:
$\operatorname{PSA}(C, Y) = \operatorname{softmax}\left( \left(Y W_{q}\right) \begin{bmatrix} C W^{c}_{k} \\ Y W^{y}_{k} \end{bmatrix}^{\top } \right) \begin{bmatrix} C W^{c}_{v} \\ Y W^{y}_{v} \end{bmatrix}$
where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{d_{c} \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{d_{y} \times d_{model}}$ are part of the pre-trained model.
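A numpy sketch of this computation is given below; the scaled dot-product form and the matrix shapes follow the usual attention convention and are our assumptions for illustration, not the original implementation.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_self_attention(C, Y, Wq, Wk_y, Wv_y, Wk_c, Wv_c):
    # Queries come from the target only; the projected context is prepended to the
    # target keys and values so that the pre-trained self-attention can attend to it.
    Q = Y @ Wq
    K = np.vstack([C @ Wk_c, Y @ Wk_y])
    V = np.vstack([C @ Wv_c, Y @ Wv_y])
    scores = Q @ K.T / np.sqrt(Q.shape[-1])      # scaling assumed for illustration
    return softmax(scores) @ V

d_c, d_y, d_model = 8, 16, 16
C, Y = np.random.randn(4, d_c), np.random.randn(6, d_y)
weights = dict(Wq=np.random.randn(d_y, d_model), Wk_y=np.random.randn(d_y, d_model),
               Wv_y=np.random.randn(d_y, d_model), Wk_c=np.random.randn(d_c, d_model),
               Wv_c=np.random.randn(d_c, d_model))
output = pseudo_self_attention(C, Y, **weights)  # shape: (6, d_model)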
Except for the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change, in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation.
<<</Encoder-Agnostic Architectures for Adaptation>>>
<<</Architecture-based Methods>>>
<<<Strategy-based Methods>>>
<<<Fine-tuning Schedules for Adaption>>>
When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of components initialized in different ways, which can make the training process unstable. The pre-trained model may also suffer from an aggravated catastrophic forgetting problem as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former already possess some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such a setting is that fine-tuning with more accurate gradients can prevent the pre-trained parameters from deviating too far away from the original point, and the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warm-up steps:
$lr_{\operatorname{Enc/Dec}} = \tilde{l}r_{\operatorname{Enc/Dec}} \cdot \min \left( step^{-0.5},\; step \cdot {warmup}_{\operatorname{Enc/Dec}}^{-1.5} \right)$
where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively.
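A Python sketch of such a two-optimizer schedule is shown below; the specific scale values and warm-up steps are illustrative choices, not necessarily those used in BIBREF12.

def noam_lr(step, lr_tilde, warmup):
    # Transformer-style schedule: linear warm-up followed by inverse square-root decay.
    step = max(step, 1)
    return lr_tilde * min(step ** -0.5, step * warmup ** -1.5)

# Pre-trained encoder: smaller scale and longer warm-up, i.e. slower, smoother updates.
encoder_lr = [noam_lr(s, lr_tilde=2e-3, warmup=20000) for s in range(1, 100001)]
# Randomly initialized decoder: larger scale and shorter warm-up for faster convergence.
decoder_lr = [noam_lr(s, lr_tilde=0.1, warmup=10000) for s in range(1, 100001)]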
<<</Fine-tuning Schedules for Adaption>>>
<<<Proxy Tasks for Adaption>>>
Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distribution and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, but at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM in the cross-lingual situation. It leverages the paralleled machine translation corpus for further training of the LMs that are pre-trained on monolingual corpora. Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence from two languages, therefore the sentence representations learned from separate languages can be well aligned.
For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better.
<<</Proxy Tasks for Adaption>>>
<<<Knowledge Distillation for Adaption>>>
The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network.
Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function:
$\mathcal {L}_{kd}(\theta ) = -\sum _{t=1}^{|Y|} \sum _{w \in \mathcal {V}} P\left(y_{t}=w \mid Y^{masked}, X; \phi \right) \cdot \log P\left(y_{t}=w \mid Y_{<t}, X; \theta \right)$
where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations.
The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss:
$\mathcal {L}_{mse} = \left\Vert h^{bert}_{m} - h^{seq2seq}_{n} \right\Vert _{2}^{2}$
where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size as BERT, which is a stricter constraint.
Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize:
$\mathcal {L}(\theta ) = \alpha \mathcal {L}_{kd}(\theta ) + (1-\alpha ) \mathcal {L}_{seq2seq}(\theta )$
where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions.
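A small numpy sketch of these losses is given below; the averaging over positions and the convex-combination weighting mirror the formulas above and are meant only as an illustration of the distillation idea.

import numpy as np

def soft_label_cross_entropy(teacher_probs, student_logits):
    # Cross-entropy between the teacher's soft labels and the student's predictions,
    # summed over the vocabulary and averaged over target positions. Shapes: (T, |V|).
    shifted = student_logits - student_logits.max(axis=-1, keepdims=True)
    log_p = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return float(-(teacher_probs * log_p).sum(axis=-1).mean())

def total_loss(kd_loss, seq2seq_loss, alpha=0.5):
    # Weighted combination of the distillation loss and the standard generative loss.
    return alpha * kd_loss + (1.0 - alpha) * seq2seq_loss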
<<</Knowledge Distillation for Adaption>>>
<<</Strategy-based Methods>>>
<<<Discussions>>>
<<<The Relationship between Architecture- and Strategy-based Methods>>>
We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. Since the architecture of the pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), the architecture has to be designed in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a postprocessing point of view, with the aim of making the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not inherently independent, and the two types of methods can actually complement each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also be addressed by devising a general task-agnostic architecture.
<<</The Relationship between Architecture- and Strategy-based Methods>>>
<<<Experimental Phenomenons>>>
Existing research on unsupervised pre-training for NLG has been conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomena:
The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18.
Fixed representations yield better results than fine-tuning in some cases BIBREF24.
Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16.
The first two phenomena attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel in zero/low-shot settings, while the pre-trained models achieve only small gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations are not affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning.
The third phenomenon is somewhat counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is the more important element to pre-train, but this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive to the change of context when acting as conditional language generators.
<<</Experimental Phenomenons>>>
<<<Future Directions>>>
The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?
NLG tasks can be defined by the context features and mapping functions. The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge.
In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from matching the performance of its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9.
Due to the colossal scale of the pre-training corpora, a large number of parameters is essential to achieve favorable performance. As a result, pre-training for NLG systems usually requires at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15, and the resulting model size also hinders real-world applications. To alleviate the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or to parameter reduction techniques that prune the model size in a more direct way BIBREF33. However, this research has so far been limited to NLU scenarios, and similar endeavours are necessary for NLG applications.
Another important branch of research on unsupervised pre-training in NLP tries to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specifically, BIBREF36 analysed the characteristics of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that a deeper understanding of the way in which unsupervised pre-training contributes to better text generation, as well as of the intrinsic mechanisms of the pre-trained models, is also crucial for future work.
<<</Future Directions>>>
<<</Discussions>>>
<<<Conclusion>>>
Unsupervised pre-training has defined the state-of-the-art on a variety of NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the application of unsupervised pre-training. The major challenges lie in designing model architectures to cater for the assorted context, and in retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods that utilize large-scale corpora for NLG purposes, with a highlight on those aiming to facilitate the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed by detailed introductions and discussions of their pros and cons. Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize some scientific questions that have not yet been well understood, and suggest that future work pay attention to these questions.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground: Unsupervised Pre-training for NLU\nUnsupervised Pre-training and Parameter Initialization for NLG\nArchitecture-based Methods\nInducing Task-Specific Architecture in Pre-training\nDenoising Autoencoder\nConditional Masked Language Model\nSequence to Sequence Language Model\nEncoder-Agnostic Architectures for Adaptation\nStrategy-based Methods\nFine-tuning Schedules for Adaption\nProxy Tasks for Adaption\nKnowledge Distillation for Adaption\nDiscussions\nThe Relationship between Architecture- and Strategy-based Methods\nExperimental Phenomenons\nFuture Directions\nConclusion"
],
"type": "outline"
}
|
2002.06053
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery
<<<Abstract>>>
Text-based representations of chemicals and proteins can be thought of as unstructured languages codified by humans to describe domain-specific knowledge. Advances in natural language processing (NLP) methodologies in the processing of spoken languages accelerated the application of NLP to elucidate hidden knowledge in textual representations of these biochemical entities and then use it to construct models to predict molecular properties or to design novel molecules. This review outlines the impact made by these advances on drug discovery and aims to further the dialogue between medicinal chemists and computer scientists.
<<</Abstract>>>
<<<Introduction>>>
The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest.
The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts.
We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18.
Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines.
With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences.
<<<NLP Basics>>>
BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.
All NLP technology relates to specific AI architectures. In Table TABREF38, we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review.
<<</NLP Basics>>>
<<</Introduction>>>
<<<Biochemical Language Processing>>>
The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsch BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).
To make predictions about the structure and function of compounds or proteins, an understanding of these sequences is critical for bioinformatics tasks, with the final goal of accelerating drug discovery. Much like a linguist uses the tools of language to bring out hidden knowledge, we can process biochemical sequences to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of the NLP concepts to biochemical data in order to solve bio/cheminformatics problems.
<<<Textual Chemical Data>>>
Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or Drugbank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information.
Chemical structures can be represented in different forms that can be one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP.
<<<IUPAC name>>>
The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/).
<<</IUPAC name>>>
<<<Chemical Formula>>>
The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs. This representation gives information about which elements and how many of them are present in the compound.
<<</Chemical Formula>>>
<<<SMILES>>>
The Simplified Molecular Input Line Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an identical connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html).
SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28).
Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (“/", “\", “@", “@@").
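As an illustration, canonical and randomized SMILES of the same molecule can be generated with RDKit roughly as follows (assuming a recent RDKit version in which MolToSmiles exposes the doRandom flag); the randomized variants are what the augmentation strategies mentioned above rely on.

from rdkit import Chem

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
mol = Chem.MolFromSmiles(ampicillin)

canonical = Chem.MolToSmiles(mol, canonical=True)
# Several non-canonical SMILES of the same molecule, usable as augmented training samples.
randomized = {Chem.MolToSmiles(mol, canonical=False, doRandom=True) for _ in range(10)}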
<<</SMILES>>>
<<<DeepSMILES>>>
DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. “)") introducing longer sequences.
<<</DeepSMILES>>>
<<<SELFIES>>>
SELF-referencIng Embedding Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs" BIBREF32. Each symbol in a SELFIES sequence indicates a recursive Chomsky type-2 grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in favor of SELFIES.
<<</SELFIES>>>
<<<InChI>>>
InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33. The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. The InChI representation has several layers, each separated by the “/" symbol.
The software that generates InChI is publicly available and InChI does not suffer from ambiguity problems. However, the less complex structure of SMILES makes it easier to use, as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35. Interestingly, the translation model was able to translate from InChI to canonical SMILES, whereas it failed to translate from canonical SMILES to InChI. BIBREF35 suggested that the complex syntax of InChI made it difficult for the model to generate a correct sequence.
<<</InChI>>>
<<<SMARTS>>>
SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string, such as querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html).
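For example, a SMARTS pattern can be matched against a molecule parsed from SMILES with RDKit roughly as follows (the pattern and molecule are chosen purely for illustration):

from rdkit import Chem

mol = Chem.MolFromSmiles("CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C")  # ampicillin
phenyl = Chem.MolFromSmarts("c1ccccc1")          # SMARTS for an aromatic six-membered carbon ring

print(mol.HasSubstructMatch(phenyl))             # True: ampicillin contains a phenyl ring
print(mol.GetSubstructMatches(phenyl))           # atom indices of the matching substructure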
<<</SMARTS>>>
<<<SMIRKS>>>
SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product" notation, and thus make use of SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41.
<<</SMIRKS>>>
<<</Textual Chemical Data>>>
<<<Identification of Words/Tokens>>>
Similar to words in natural languages, we can assume that the “words" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages.
<<<@!START@$k$@!END@-mers (@!START@$n$@!END@-grams)>>>
One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. “LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42. 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-k)+1$ $k$-mers can be extracted. Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42 and in a drug-target interaction prediction task BIBREF43, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43.
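A minimal Python sketch of this sliding-window extraction, reproducing the 4-mer (LINGO) example above, is given below (the helper name is our own).

def kmers(sequence, k):
    # Return the (len(sequence) - k + 1) overlapping k-mers of a sequence.
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
lingos = kmers(ampicillin, 4)                        # ['CC1(', 'C1(C', '1(C(', ..., ')O)C']
protein_words = kmers("MSTNPKPQRKTKRNTNRRPQDVK", 3)  # 3-mer "words" of a toy protein sequence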
$k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem BIBREF49.
<<</@!START@$k$@!END@-mers (@!START@$n$@!END@-grams)>>>
<<<Longest Common Subsequences>>>
The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed similarly well to 4-mers in chemical similarity calculation BIBREF43.
<<</Longest Common Subsequences>>>
<<<Maximum Common Substructure>>>
BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being “words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words" followed Zipf's Law, which indicates the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words" are shorter compared to natural product “words".
<<</Maximum Common Substructure>>>
<<<Minimum Description Length>>>
Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53. BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm. BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic aminoacids in the words (i.e. grammar building), might prove effective.
<<</Minimum Description Length>>>
<<<Byte-Pair Encoding>>>
Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters BIBREF55. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words.
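A toy Python sketch of the BPE merge loop is shown below; real implementations (and the bio-word vocabulary of BIBREF56) operate on frequency dictionaries with end-of-word symbols, so this only illustrates the idea of repeatedly merging the most frequent symbol pair.

from collections import Counter

def merge_pair(symbols, a, b):
    # Replace every adjacent occurrence of (a, b) with the merged symbol a+b.
    merged, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
            merged.append(a + b)
            i += 2
        else:
            merged.append(symbols[i])
            i += 1
    return merged

def learn_bpe(corpus, num_merges):
    # Start from single characters and repeatedly merge the most frequent adjacent pair.
    sequences = [list(seq) for seq in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter((s[i], s[i + 1]) for s in sequences for i in range(len(s) - 1))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        sequences = [merge_pair(s, a, b) for s in sequences]
    return merges, sequences

merges, segmented = learn_bpe(["MSTNPKPQRKTKRNTNRRPQDVK", "MSTNPKAQRKTKRNTNRRAQDVK"], num_merges=5)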
<<</Byte-Pair Encoding>>>
<<<Pattern-based words>>>
Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences.
Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses" rather than “words" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as sentences of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence.
SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that three of the ligand representation approaches provide similar performances for protein family clustering.
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community.
<<</Pattern-based words>>>
<<</Identification of Words/Tokens>>>
<<<Text representation>>>
The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors.
Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics.
<<<Bag-of-words representation>>>
In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70. For instance, the SMILES of ampicillin “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C" can be represented as a bag of 8-mers as follows: {“CC1(C(N2", “C1(C(N2C", “1(C(N2C(", “(C(N2C(S", ..., “N)C(=O)O", “)C(=O)O)", “C(=O)O)C"}. We can vectorize it as $S = [1, 1, 1, 1, ..., 1, 1, 1]$ in which each number refers to the frequency of the corresponding 8-mer.
Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73.
<<</Bag-of-words representation>>>
<<<TF-IDF>>>
The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC" is lower than that of “(C(N2C(S", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S" in a compound may be more informative.
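As a sketch, such TF-IDF weights over LINGO “words" can be computed with scikit-learn by passing a custom analyzer that extracts overlapping 8-mers; the exact weighting and normalization options may differ from those used in the cited studies.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lingos(smiles, k=8):
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

corpus = [
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",  # ampicillin
    "CC(=O)OC1=CC=CC=C1C(=O)O",                             # aspirin
]
vectorizer = TfidfVectorizer(analyzer=lingos)    # each compound is a "document" of 8-mer words
X = vectorizer.fit_transform(corpus)
print(cosine_similarity(X[0], X[1]))             # TF-IDF weighted molecular similarity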
TF-IDF weighting was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking.
<<</TF-IDF>>>
<<<One-hot representation>>>
In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one million dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78.
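For example, a character-level one-hot encoding of a SMILES string can be sketched as follows (the vocabulary here is simply the set of characters in the input, for brevity):

import numpy as np

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"        # aspirin
vocab = sorted(set(smiles))                 # unique SMILES characters
index = {ch: i for i, ch in enumerate(vocab)}

one_hot = np.zeros((len(smiles), len(vocab)), dtype=np.int8)
for pos, ch in enumerate(smiles):
    one_hot[pos, index[ch]] = 1             # a single 1 per row, the remaining entries stay 0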
<<</One-hot representation>>>
<<<Distributed representations>>>
The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S" frequently appears together with the word “C(C2=O)N" in SMILES strings, this might suggest that they have related “meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play.
The distributed word embeddings models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-gram architecture in Word2Vec BIBREF72. For the vocabulary of size $V$, given the target word “2C(S", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$. The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word.
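Training such embeddings on chemical words is straightforward with off-the-shelf tools; a minimal sketch with gensim, treating the 8-mers of each SMILES as one sentence (the toy corpus and the hyperparameters are illustrative, not those of any cited study):

from gensim.models import Word2Vec

def kmers(smiles, k=8):
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

smiles_corpus = ["CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",
                 "CC(=O)OC1=CC=CC=C1C(=O)O"]
sentences = [kmers(s) for s in smiles_corpus]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)  # sg=1: Skip-Gram
word_vector = model.wv["(C(N2C(S"]            # embedding of one chemical word
# averaging the word vectors of a molecule gives a SMILESVec-style molecule vector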
The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms.
The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers.
FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89.
<<</Distributed representations>>>
<<</Text representation>>>
<<<Text generation>>>
Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language.
Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language" such as that of reactants and generate a novel text in another “language" such as that of products BIBREF28. Below, we present recent studies on text based molecule generation.
RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O", the model would predict the next character to be “)" with a higher probability than “(". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model.
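The core of such a character-level generative model is compact; the following PyTorch sketch illustrates the idea (the layer sizes, toy vocabulary and sampling loop are our assumptions, not the configurations of the cited works):

import torch
import torch.nn as nn

class SmilesLM(nn.Module):
    # predicts the next SMILES character given the previous ones
    def __init__(self, vocab_size, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

vocab = list("^$CNOS=()123cno")               # '^' start and '$' end tokens plus a toy alphabet
stoi = {c: i for i, c in enumerate(vocab)}
model = SmilesLM(len(vocab))                  # training would minimize next-character cross-entropy

# sampling: feed the start token and draw characters until the end token appears
x = torch.tensor([[stoi["^"]]])
state, generated = None, []
for _ in range(100):
    logits, state = model(x, state)
    nxt = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1)
    if vocab[nxt.item()] == "$":
        break
    generated.append(vocab[nxt.item()])
    x = nxt.view(1, 1)
candidate = "".join(generated)                # an untrained model yields random, mostly invalid strings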
The variational auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise into the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context-free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-directed generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108.
<<<Machine Translation>>>
Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.
The NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased mainly because the encoder mapped the source sequence into a single fixed-length vector. However, fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as “attention mechanism" BIBREF114.
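The additive attention that realizes this selective focus can be sketched in a few lines (a generic Bahdanau-style formulation; the module and tensor names are ours):

import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    # scores every encoder state (memory bank entry) against the current decoder state
    def __init__(self, dim):
        super().__init__()
        self.w_enc = nn.Linear(dim, dim, bias=False)
        self.w_dec = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, decoder_state, memory_bank):
        # memory_bank: (batch, src_len, dim); decoder_state: (batch, dim)
        scores = self.v(torch.tanh(self.w_enc(memory_bank)
                                   + self.w_dec(decoder_state).unsqueeze(1)))
        weights = torch.softmax(scores, dim=1)         # attention weights over source tokens
        context = (weights * memory_bank).sum(dim=1)   # weighted sum = context vector
        return context, weights.squeeze(-1)

attention = AdditiveAttention(dim=128)
context, weights = attention(torch.randn(2, 128), torch.randn(2, 20, 128))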
Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both inputs and output are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116 in contrast, performed the opposite task, the single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by BIBREF113 and LSTMs in the encoder and decoder. By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118.
A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChi and SMILES, to extract latent representations that can integrate the semantic “meaning" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words" that were extracted from protein sequences are translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words" such as biologically interpretable fragments.
Transformer is an attention-based encoder-decoder architecture that was introduced in NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in terms of adopting an encoder-decoder architecture, Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. Transformer architecture was also adopted to learn representations for chemicals in prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122 in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results.
<<</Machine Translation>>>
<<</Text generation>>>
<<</Biochemical Language Processing>>>
<<<Future Perspectives>>>
The increase in the biochemical data available in public databases combined with the advances in computational power and NLP methodologies have given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges.
<<<Challenges>>>
The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions.
<<<Benchmarking>>>
There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity, and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together the suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is also a similar attempt to build a benchmarking platform for tasks such as prediction of binding affinity and toxicity BIBREF82.
<<</Benchmarking>>>
<<<Reproducibility>>>
Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com) in which an image of the source code is saved and can be opened without requiring further installation could accelerate the reproduction process. A recent initiative to provide a unified-framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126.
<<</Reproducibility>>>
<<<Bias in data>>>
The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited and inactive molecules are frequently not reported. Moreover, some experimental measurements that are not reproducible across different labs or conditions limit their reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty.
<<</Bias in data>>>
<<<Interpretability>>>
The black box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. Explainable AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for the domain experts to assess the reliability of new methodologies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before). Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions into drug discovery problems will not only be useful for computational researchers but also for the medicinal chemistry community.
<<</Interpretability>>>
<<</Challenges>>>
<<<Opportunities>>>
The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and Glove BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of the word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated architectures such as Transformer (i.e. Generative Pre-Training or GPT) BIBREF135 and BERT BIBREF136, RoBERTa BIBREF137, GPT2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140 models. Such models with a focus on context might have significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141.
Unsupervised learning can be used on “big" textual data through using language models with attention BIBREF119 and using pre-trained checkpoints from language models BIBREF146. Encoder-decoder architectures have also had significant impact on solving text generation and machine translation problems and were successfully applied to molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147 will easily find application in bio/cheminformatics.
Recent NLP models are not domain-specific, and they can help with the generalization of models BIBREF138. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks BIBREF148, BIBREF138. Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation.
Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems.
With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation.
<<</Opportunities>>>
<<</Future Perspectives>>>
<<<Acknowledgement>>>
This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical.
<<</Acknowledgement>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nNLP Basics\nBiochemical Language Processing\nTextual Chemical Data\nIUPAC name\nChemical Formula\nSMILES\nDeepSMILES\nSELFIES\nInChI\nSMARTS\nSMIRKS\nIdentification of Words/Tokens\n@!START@$k$@!END@-mers (@!START@$n$@!END@-grams)\nLongest Common Subsequences\nMaximum Common Substructure\nMinimum Description Length\nByte-Pair Encoding\nPattern-based words\nText representation\nBag-of-words representation\nTF-IDF\nOne-hot representation\nDistributed representations\nText generation\nMachine Translation\nFuture Perspectives\nChallenges\nBenchmarking\nReproducibility\nBias in data\nInterpretability\nOpportunities\nAcknowledgement"
],
"type": "outline"
}
|
1912.07976
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction
<<<Abstract>>>
Aspect-based sentiment analysis (ABSA) is a multi-grained natural language processing task that consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of inferring aspect term polarity and ignores the significance of aspect term extraction. Besides, existing research pays little attention to the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper is the first to propose a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model is capable of extracting aspect terms and inferring aspect polarity synchronously; moreover, it can analyze both Chinese and English comments simultaneously, and an experiment on a multilingual mixed dataset demonstrates its effectiveness. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved state-of-the-art performance of aspect term extraction and aspect polarity classification on four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 Task 4 Restaurant and Laptop datasets surpass the previous state-of-the-art performance on the ATE and APC subtasks.
<<</Abstract>>>
<<<Introduction>>>
Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatically extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully-designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively.
Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore the research of aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking an aspect extraction method for the targeted task because there is not enough research support.
The APC task is a kind of classification problem. Research concerning the APC task is more abundant than that on the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity at the sentence level or document level. In the APC task, the polarities are most commonly classified into three categories: positive, negative, and neutral. It is obvious that the sentiment polarity classified based on aspects can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers.
Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from reviews or tweets. In most studies BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently of the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted IOB labels to annotate tokens.
Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance on the commonly used SemEval-2014 Task 4 datasets, the experimental results on four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual tasks. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspects and their polarities, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover unknown aspects and avoid the tedious and huge cost of manually annotating all aspects and polarities. This is of great significance for field-specific aspect-based sentiment analysis.
The main contributions of this article are as follows:
For the first time, this paper studies a multi-task model for the APC and ATE subtasks on multilingual reviews, which provides a new idea for research on Chinese aspect extraction.
This paper is the first to apply self-attention and local context focus techniques to the aspect term extraction task, and it fully explores their potential in this task.
The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves the performance of both the ATE and APC subtasks, and achieves new state-of-the-art performance, especially on the F1 score of the ATE task. Besides, we adopted the domain-adapted BERT model, trained on a domain-related corpus, in the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of the APC task on the three datasets, especially the Restaurant dataset.
We designed and applied dual labels for the input sequence applicable for the SemEval-2014 and Chinese review datasets of ABSA joint-task, the aspect term label, and the sentiment polarity label, respectively. The dual label improves the learning efficiency of the proposed model.
<<</Introduction>>>
<<<Related Works>>>
Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and focus on one of them. Accordingly, this section introduces the related works on ATE and APC in two parts.
<<<Aspect Term Extraction>>>
The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically.
Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improve the coherence of all aspects. BIBREF22 proposed a deep neural network-based model, namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms to be learned interactively and dually propagated during the training process.
For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. They chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction.
Up to now, the MHSA and pre-trained model has not been applied in the ATE task. This paper explores the potential of the new techniques of deep learning and new network architecture in the ATE task.
<<</Aspect Term Extraction>>>
<<<Aspect Polarity Classification>>>
Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. Research on the APC task has largely turned to deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques.
The most commonly applied deep neural network architectures for the APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and models them independently. The attention mechanism BIBREF28 has been adapted to the APC task in the last few years. ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model, IAN BIBREF7, deployed with an attention mechanism, equips two independent LSTM networks to capture the features of the context and aspect, interactively integrating and learning the inner correlation of the features of the context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture that deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to reduce the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which is equipped with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context.
BIBREF29 proposed methods for the Chinese-language APC task, which conducted the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed at the aspect sentiment classification subtask and the aspect-opinion pair identification subtask is proposed by BIBREF30, in which external knowledge is considered and put into the network to alleviate the problem of insufficient training data. The gated alternate neural network (GANN) BIBREF31 proposed for the APC task aims to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the gate truncation RNN (GTR) to learn aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review dataset show that the proposed model works well while conducting the ATE and APC subtasks simultaneously.
BERT-SPC is the BERT text pair classification model; it is a variant of BERT that was adapted to solve the ABSA task in BIBREF9 and achieved high performance. LCF-BERT BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level emotion analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model, being based on a large universal corpus, is easy to apply to most tasks and improves performance, it is not task-specific. For specific tasks, if the pre-trained BERT is adapted through the fine-tuning process on a task-related corpus, the task performance can be further improved.
<<</Aspect Polarity Classification>>>
<<</Related Works>>>
<<<Methodology>>>
Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction largely unaddressed. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted the domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC.
This section introduces the methodology of the APC module and the ATE module, respectively, and the contents are organized following the order of the network layer hierarchy.
<<<Task Definition>>>
<<</Task Definition>>>
<<<Model Architecture>>>
Aiming at the problem of insufficient research on the aspect term extraction task, a joint deep learning model is designed in this section. This model combines the aspect polarity classification task and the aspect term extraction task, and two independent BERT layers are adopted to model the global context and the local context respectively. For conducting multi-task training at the same time, the input sequences are tokenized into different tokens and each token is assigned two kinds of labels. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belonging to the aspect.
Fig FIGREF18 is the network architecture of LCF-ATEPC. The local context feature generator (LCFG) unit is on the left and the global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and an MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context features. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects is based on the features of the global context.
<<<BERT-Shared Layer>>>
The pre-trained BERT model is designed to improve performance for most NLP tasks, and the LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed at extracting local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedding layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of the LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features.
$O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively.
<<</BERT-Shared Layer>>>
<<</Model Architecture>>>
<<<Multi-Head Self-Attention>>>
Multi-head self-attention is based on multiple scaled-dot attention (SDA), which can be utilized to extract deep semantic features from the context, and the features are represented as self-attention scores. The MHSA can avoid the negative influence caused by the long-distance dependencies of the context when learning the features. Suppose $X_{SDA}$ is the input features learned by the LCFG. The scaled-dot attention is calculated as follows:
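The referenced equation is not reproduced in this text; in its standard form, consistent with the definitions below, it can be written as

$SDA(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right) V$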
$Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attention operations in parallel, concatenates the output features, and then transforms the features by multiplying them by a matrix $W^{M H}$. $h$ represents the number of attention heads and is equal to 12.
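The corresponding multi-head combination can be restated, consistently with the description above and below, as

$MHSA(X) = \tanh \left( \left[ H_{1}; H_{2}; \ldots ; H_{h} \right] W^{MH} \right), \qquad H_{i} = SDA\left(Q_{i}, K_{i}, V_{i}\right)$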
The “;” means feature concatenation of each head. $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrix for projection. Additionally, we apply a $\tanh $ activation function for the MHSA learning process, which significantly enhances the feature-capture capability.
<<</Multi-Head Self-Attention>>>
<<<Local Context Focus>>>
<<<Semantic-Relative Distance>>>
The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information.
SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token towards a targeted aspect as the SRD of all token-aspect pairs. The SRD is calculated as:
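Since the equation itself is omitted here, we restate a formulation consistent with the variable definitions that follow:

$SRD_{i} = \left| i - P_{a} \right| - \left\lfloor \frac{m}{2} \right\rfloor$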
where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of aspect. $m$ is the length of targeted aspect, and $SRD_{i}$ represents for the SRD between the $ i $-th token and the targeted aspect.
Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact.
<<</Semantic-Relative Distance>>>
<<<Context-features Dynamic Mask>>>
Apart from the features of the local context, the CDM layer will mask the non-local context's features learned by the $BERT^l$ layer. Although it is easy to directly mask the non-local context words in the input sequence, it is inevitable to discard the features of non-local context words. As the CDM layer is deployed, only a relatively small amount of the semantic context itself will be masked at the corresponding output position. The relative representation of context words and aspects with relatively few semantics is preserved in the corresponding output positions.
According to the CDM implementation, the features on all the positions of non-local context words will be set to zero vectors. In order to avoid the unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that the $O_{BERT^l}$ is the preliminary output features of $BERT^l$, then we get the local context feature output as follows,
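The CDM equations are not reproduced in this text; a reconstruction consistent with the definitions in the next paragraph is

$V_{i}^{m} = \begin{cases} E, & SRD_{i} \le \alpha \\ O, & SRD_{i} > \alpha \end{cases} \qquad M = \left[ V_{1}^{m}, V_{2}^{m}, \ldots , V_{n}^{m} \right] \qquad O_{CDM}^{l} = MHSA\left( O_{BERT^{l}} \cdot M \right)$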
To mask the features of the non-local context, we define a feature masking matrix $M$, and $ V_{i}^{m} $ is the mask vector for each token in the input sequence. $\alpha $ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with regard to the targeted aspect is less than the threshold $\alpha $ are the local context. The $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors.
Finally the local context features learned by the CDM layer are delivered as $O^{l}$.
<<</Context-features Dynamic Mask>>>
<<<Context-features Dynamic Weighting>>>
Although empirical results show that the CDM has achieved excellent performance compared with existing models, we design the CDW to explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more modest strategy compared to the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed by weighting according to their SRD concerning a targeted aspect.
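A reconstruction of the CDW weighting, consistent with the variable definitions below (the exact decay function is our assumption, following the published LCF formulation), is

$V_{i}^{w} = \begin{cases} E, & SRD_{i} \le \alpha \\ \frac{n - \left( SRD_{i} - \alpha \right)}{n} \cdot E, & SRD_{i} > \alpha \end{cases} \qquad W = \left[ V_{1}^{w}, V_{2}^{w}, \ldots , V_{n}^{w} \right] \qquad O_{CDW}^{l} = O_{BERT^{l}} \cdot W$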
where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context words. Consistently with CDM, $SRD_{i}$ is the SRD between the i-th context token and a targeted aspect. $n$ is the length of the input sequence. $\alpha $ is the SRD threshold. “$.$” denotes the vector dot-product operation.
$O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent, which means they are alternatives. Both the output features of the CDM and CDW layers are denoted as $O^{l}$. Besides, we tried to concatenate the learned features of the CDM and CDW layers and take a linear transformation as the features of the local context.
$W^{f}$, $O^{f}$ and $b^{f}$ are weight matrix and bias vector, respectively. The model can choose one of the three approaches to learn the local context features.
<<</Context-features Dynamic Weighting>>>
<<</Local Context Focus>>>
<<<Feature Interactive Learning>>>
LCF-ATEPC does not only rely on local context features for sentiment polarity classification, but combines and learns the local context features and the global context features to conduct polarity classification.
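This step can be restated (our reconstruction, using the symbols defined in the next sentence) as

$O_{dense}^{lg} = W^{lg} \cdot \left[ O^{l}; O^{g} \right] + b^{lg}$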
$O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weights and bias vectors, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on the $O_{dense}^{l g}$.
<<</Feature Interactive Learning>>>
<<<Aspect Polarity Classifier>>>
The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden state at the corresponding position of the first token in the input sequence. Then a softmax operation is applied to predict the sentiment polarity.
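With $X_{pool}$ denoting the head-pooled feature and $W^{p}$, $b^{p}$ an assumed learned projection (the original equation is not reproduced here), the prediction can be written as

$Y_{polarity} = \mathrm{softmax}\left( W^{p} X_{pool} + b^{p} \right) \in \mathbb {R}^{C}$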
where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier.
<<</Aspect Polarity Classifier>>>
<<<Aspect Term Extractor>>>
The aspect term extractor first performs token-level classification for each token. Suppose $T_{i}$ is the feature at the corresponding position of token $T$; then:
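Analogously to the polarity classifier (with an assumed projection $W^{t}$, $b^{t}$, since the original equation is omitted):

$Y_{term} = \mathrm{softmax}\left( W^{t} T_{i} + b^{t} \right) \in \mathbb {R}^{N}$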
where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by the aspect term extractor.
<<</Aspect Term Extractor>>>
<<<Training Details>>>
The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively. And the BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactored the input sequence form compared with BERT-BASE model. The input sequence of BERT-BASE is formed in “[CLS]” + sequence + “[SEP]”, while it is formed in “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC.
Since LCF-ATEPC is a multi-task learning model, we redesigned the form of the data input and adopted dual labels of sentiment polarity and token category. Figure FIGREF55 shows the input samples of the BERT-BASE and BERT-SPC models, respectively.
The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. The loss function for the APC task is:
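Since the original equation is not reproduced in this text, we restate it as a standard cross-entropy with $L_{2}$ regularization, consistent with the definitions below:

$\mathcal {L}_{apc} = -\sum _{c=1}^{C} \widehat{y}_{c} \log y_{c} + \lambda \sum _{\theta \in \Theta } \theta ^{2}$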
where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter set of LCF-ATEPC. The loss function for the ATE task is:
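Restated under the same assumptions as a token-level cross-entropy:

$\mathcal {L}_{ate} = -\sum _{j=1}^{k} \sum _{c=1}^{N} \widehat{t}_{j,c} \log t_{j,c} + \lambda \sum _{\theta \in \Theta } \theta ^{2}$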
where $N$ is the number of token classes and $k$ is the number of tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows:
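One consistent restatement is simply the sum of the two task losses:

$\mathcal {L}_{atepc} = \mathcal {L}_{apc} + \mathcal {L}_{ate}$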
<<</Training Details>>>
<<</Methodology>>>
<<<Experiments>>>
<<<Datasets and Hyperparameters Setting>>>
To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task 4 subtask 2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's capability of processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets. We reformatted the original datasets and annotated each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral, or negative, and conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been purged, with each aspect having binary positive or negative polarity. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments.
The table lists the details of these datasets.
The sample distribution of these datasets is not balanced. For example, most samples in the Restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority.
Apart from some hyperparameter settings adopted from previous research, we also conducted controlled trials and analyzed the experimental results to optimize the hyperparameter settings. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions given for experiments that use a different SRD.
<<</Datasets and Hyperparameters Setting>>>
<<<Compared Methods>>>
We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks.
ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets.
ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity.
GANN is a novel neural network model for the APC task, aimed at overcoming the shortcomings of traditional RNNs and CNNs. GANN applies the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained state-of-the-art APC performance on the Chinese review datasets.
AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification.
BERT-PT BIBREF37 is a BERT-adapted model for the Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC); it can be adapted to the aspect-level sentiment classification task.
BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model.
BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking.
BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset.
LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the BERT-SPC model and the local context focus mechanism.
LCF-ATE is the variation of the LCF-ATEPC model that is optimized only for the ATE task.
LCF-APC is the variation of LCF-ATEPC that is optimized only for the APC task during the training process.
<<</Compared Methods>>>
<<<Results Analysis>>>
The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English datasets was tested; then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of the domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied.
<<<Performance on Chinese Review Datasets>>>
Table TABREF70 reports the experimental results of the LCF-ATEPC models on the four Chinese review datasets.
<<</Performance on Chinese Review Datasets>>>
<<<Performance on SemEval-2014 task4>>>
Table TABREF72 lists the main experimental results of the LCF-ATEPC models and compares their performance with other ABSA-oriented models.
The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset. The results on the multilingual mixed dataset illustrate the effectiveness of the LCF-ATEPC models.
<<</Performance on SemEval-2014 task4>>>
<<</Results Analysis>>>
<<<Overall Performance Analysis>>>
Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve performance.
The CDM layer works better on the Twitter dataset because it contains a lot of non-standard grammar usage and language abbreviations, and the local context focus technique helps to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally hold a unified “global” view in a specific review. That is, if a customer is not satisfied with one aspect, he or she is likely to criticize the others; likewise, if a customer likes a restaurant, he or she tends to be tolerant of some small disamenities. In these cases the CDW mechanism performs better because it does not completely mask the local context of the other aspects. In the multi-task learning process, the convergence rates of the APC and ATE tasks are different, so the model does not achieve its optimal effect on both tasks at the same time.
We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical results, the joint model based on BERT-BASE achieved promising performance on all three datasets and even surpassed other proposed BERT-based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implemented the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of the ATE subtask on all three datasets to up to 99%.
ATEPC-Fusion is a supplementary scheme of the LCF mechanism that adopts a moderate approach to generating local context features. The experimental results show that its performance is also better than that of the existing BERT-based models.
<<<Effectiveness of Multi-task Learning>>>
Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize parameters for only a single task in the multi-task model, to explore the difference between the optimal performance of single-task and multi-task learning.
Table TABREF76 depicts the performance of the LCF-ATEPC model when performing a single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better on the single APC or ATE task than when conducting the ABSA multi-task. In general, the LCF-ATEPC model proposed in this paper is still superior to other ABSA-oriented multi-task models and even to the single-task models aimed at APC or ATE. When optimizing the model parameters through back-propagation for multiple tasks, the multi-task learning model needs to take into account the loss functions of the different subtasks, so sometimes multi-task learning cannot achieve the best effect that single-task learning does; this is the compromise of the multi-task learning model when dealing with multiple tasks.
<<</Effectiveness of Multi-task Learning>>>
<<<Domain-adaption for LCF-ATEPC>>>
The BERT-BASE model is trained on a large-scale general corpus, so fine-tuning during the training process is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique for integrating the pre-trained BERT-BASE model: by further training the BERT-BASE model on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-adapted pretrained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy benchmark on the classical Restaurant dataset reaches more than 90%, which means that LCF-ATEPC is the first ABSA-oriented model to obtain up to 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning.
<<</Domain-adaption for LCF-ATEPC>>>
<<<SRD Sensitivity on Different Datasets>>>
We tested the sensitivity of the SRD threshold on a typical Chinese and a typical English ABSA dataset: the Phone dataset and the Restaurant dataset, respectively. Besides, for the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figures FIGREF81 and FIGREF84 are evaluated in the multi-task learning process.
For the Chinese Phone dataset, the LCF-ATEPC-CDM model can achieve the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE task performance reaches the highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is approximately obtained when the SRD threshold is 7.
For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process.
<<</SRD Sensitivity on Different Datasets>>>
<<</Overall Performance Analysis>>>
<<</Experiments>>>
<<<Conclusion>>>
The ATE and APC subtasks were treated as independent tasks in previous studies. Moreover, the multi-task learning model for the ATE and APC subtasks has not attracted enough attention from researchers. Besides, research concerning the Chinese language-oriented ABSA task is not sufficient and urgently needs to be developed. To address the above problems, this paper proposes a multi-task learning model, LCF-ATEPC, for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies the pre-trained BERT to the ATE subtask for the first time. Not only for the Chinese language, the models proposed in this paper are multilingual and applicable to classic English review sentiment analysis tasks, such as SemEval-2014 task4. The proposed model can automatically extract aspects from reviews and infer the aspects' polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Works\nAspect Term Extraction\nAspect Polarity Classification\nMethodology\nTask Definition\nModel Architecture\nBERT-Shared Layer\nMulti-Head Self-Attention\nLocal Context Focus\nSemantic-Relative Distance\nContext-features Dynamic Mask\nContext-features Dynamic Weighting\nFeature Interactive Learning\nAspect Polarity Classifier\nAspect Term Extractor\nTraining Details\nExperiments\nDatasets and Hyperparameters Setting\nCompared Methods\nResults Analysis\nPerformance on Chinese Review Datasets\nPerformance on SemEval-2014 task4\nOverall Performance Analysis\nEffectiveness of Multi-task Learning\nDomain-adaption for LCF-ATEPC\nSRD Sensitivity on Different Datasets\nConclusion"
],
"type": "outline"
}
|
1909.09268
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Towards Neural Language Evaluators
<<<Abstract>>>
We review three limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries -- establish criteria for how a good metric should behave, and propose concrete ways to use recent Transformers-based Language Models to assess reference summaries against hypothesis summaries.
<<</Abstract>>>
<<<Introduction>>>
Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define the state-of-the-art models. In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU BIBREF0 and ROUGE BIBREF1. Both approaches rely on counting the matching n-grams in the candidate summary to n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have serious limitations and have already been criticized by the academic community. In this work we formulate three criticisms of BLEU and ROUGE, establish criteria that a sound metric should have, and propose concrete ways to use recent advances in NLP to design a data-driven metric addressing the weaknesses found in BLEU and ROUGE.
<<</Introduction>>>
<<<Related Work>>>
<<<BLEU, ROUGE and n-gram matching approaches>>>
BLEU (Bilingual Evaluation Understudy) BIBREF0 and ROUGE BIBREF1 have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depends on many factors, including their simplicity and intuitive interpretability. Yet the main factor is the claim that they highly correlate with human judgement BIBREF0. This has been criticised extensively in the literature, and the shortcomings of these methods have been widely studied. Reiter BIBREF2, in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison et al. BIBREF3 examine BLEU in the context of machine translation and find that BLEU correlates with human judgment neither on adequacy (whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor on fluency (the quality of language in a sentence). Sulem et al. BIBREF4 examine BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity and report that BLEU has very low or in some cases negative correlation with human judgment. Considering these results, it is a natural step to pursue new avenues for natural language evaluation, and with the advent of deep learning, using neural networks for this task is a promising step forward.
<<</BLEU, ROUGE and n-gram matching approaches>>>
<<<Transformers, BERT and GPT>>>
Language modeling has become an important NLP technique thanks to the ability to apply it to various NLP tasks, as explained in Radford et al. BIBREF5. There are two leading architectures for language modeling: Recurrent Neural Networks (RNNs) BIBREF6 and Transformers BIBREF7. RNNs handle the input tokens, words or characters, one by one through time to learn the relationship between them, whereas transformers receive a segment of tokens and learn the dependencies between them using an attention mechanism.
<<</Transformers, BERT and GPT>>>
<<<Model-based metrics>>>
While BLEU and ROUGE are defined in a discrete space, new evaluation metrics can be defined in a continuous space. BERTscore BIBREF8 uses word embeddings and cosine similarity to create a score array and uses greedy matching to maximize the similarity score. Sentence Mover's Similarity BIBREF9 uses the mover similarity, the Wasserstein distance, between sentence embeddings generated by averaging the word embeddings in a sentence. Both of these methods report stronger correlations with human judgment and better results when compared to BLEU and ROUGE. While they use word embeddings BIBREF10 to transfer their sentences into a continuous space, they still use distance metrics to evaluate those sentences. BLEND BIBREF11 uses an SVM to combine different existing evaluation metrics. Another proposed evaluation method is RUSE BIBREF12, which embeds both sentences separately and pools them to a given size; a pre-trained MLP is then used to predict on different tasks. This quality-estimator metric is then proposed for use in language evaluation. Our proposed methodology takes neural language evaluation beyond architecture specifications: we propose a framework in which an evaluator's success can be determined.
<<</Model-based metrics>>>
<<</Related Work>>>
<<<Challenges with BLEU and ROUGE>>>
In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries.
<<<High score, opposite meanings>>>
Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score.
<<</High score, opposite meanings>>>
<<<Low score, similar meanings>>>
In addition to not being sensitive to negation, BLEU and ROUGE scores can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same; however, the overlap between words in s1 and s2 will not necessarily be significant.
<<</Low score, similar meanings>>>
<<<High score, unintelligible sentences>>>
A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be "On a morning, I saw a man running in the street." and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram versions of ROUGE and BLEU will give these two sentences a score of 1.
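A toy computation makes the order-insensitivity concrete. The snippet below scores an exact permutation of a sentence with a bare unigram-overlap measure, the order-insensitive core shared by unigram BLEU (precision) and ROUGE-1 (recall); full BLEU/ROUGE add further machinery (brevity penalty, higher-order n-grams), and the permuted sentence here is a simplification of the paper's s2.

```python
from collections import Counter

def unigram_overlap(candidate: str, reference: str):
    """Clipped unigram precision and recall between two token sequences."""
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    return overlap / sum(c.values()), overlap / sum(r.values())

s1 = "on a morning i saw a man running in the street"
s2 = "street the in running man a saw i morning a on"   # exact permutation of s1
print(unigram_overlap(s2, s1))   # (1.0, 1.0) despite s2 being unintelligible
```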
<<</High score, unintelligible sentences>>>
<<<Experiments>>>
<<<Experiments with carefully crafted sentences>>>
To illustrate our argument, let's consider the following pairs of sentences:
In Pair 1: s1 is “For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown to be robust to criticism” and s2 is “For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown not to be robust to criticism”. They differ only by the negation added in s2.
In Pair 2: s1 is “On a morning, I saw a man running in the street.” and s2 is “In the early hours of the day, I observed one gentleman jogging along the road”. s2 is a paraphrase of s1.
<<</Experiments with carefully crafted sentences>>>
<<<Semantic similarity experiments>>>
To go beyond carefully crafted sentences, we assessed how well BLEU and ROUGE correlate with human judgement of similarity between pairs of paraphrased sentences and compared their performance to a RoBERTa model finetuned for semantic similarity (Table 2).
<<</Semantic similarity experiments>>>
<<</Experiments>>>
<<</Challenges with BLEU and ROUGE>>>
<<<Towards a robust data-driven approach>>>
<<<Metric Scorecard>>>
In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria on what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criterion is that a good evaluation metric should not give high scores to semantically distant sentences or low scores to semantically related sentences.
<<</Metric Scorecard>>>
<<<Implementing metrics satisfying scorecard>>>
<<<Semantic Similarity>>>
Starting from the RoBERTa large pre-trained model BIBREF13 , we finetune it to predict sentence similarity on the STS-B benchmark dataset. Given two sentences of text, s1 and s2, the systems need to compute how similar s1 and s2 are, returning a similarity score between 0 and 5. The dataset comprises naturally occurring pairs of sentences drawn from several domains and genres, annotated by crowdsourcing. The benchmark comprises 8628 sentence pairs with 5700 pairs in the training set, 1500 in the development set and 1379 in the test set.
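A sketch of the resulting scoring interface, using the Hugging Face Transformers library as an assumed implementation route (the regression head below is randomly initialized and must first be fine-tuned on STS-B before the score is meaningful):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=1)   # single-output regression head for STS-B

def similarity(s1: str, s2: str) -> float:
    """Return a 0-5 similarity score for a sentence pair (after STS-B fine-tuning)."""
    inputs = tokenizer(s1, s2, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()
```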
<<</Semantic Similarity>>>
<<<Logical Equivalence>>>
For logical inference, we start with a pretrained RoBERTa BIBREF13 model and finetune it using the Multi-Genre Natural Language Inference Corpus (Williams et al., 2018). It is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither (neutral). The training set includes 393k sentence pairs, development set includes 20k and test set includes 20k. The accuracy of the pre-trained model on the development set is 0.9060.
<<</Logical Equivalence>>>
<<<Sentence Intelligibility>>>
We start with a pretrained RoBERTa BIBREF13 model and finetune it using the Corpus of Linguistic Acceptability (CoLA). It consists of examples of expert English sentence acceptability judgments drawn from 22 books. Each example is a single string of English words annotated with whether it is a grammatically possible sentence of English. The training set for CoLA has 10k sentences and the development set includes 1k sentences. The current model gets 67.8 percent accuracy.
<<</Sentence Intelligibility>>>
<<<Rationale for Language Models>>>
The overall rationale for using language models fine tuned for specific aspects of the scorecard is that recent work has shown that language models are unsupervised multitask learners BIBREF5 and can rediscover the classical NLP pipeline. By fine tuning them on a specific task, we make them pay attention to the correct level of abstraction corresponding to the scorecard.
<<</Rationale for Language Models>>>
<<</Implementing metrics satisfying scorecard>>>
<<</Towards a robust data-driven approach>>>
<<<Conclusion>>>
In this work, we have shown three main limitations of BLEU and ROUGE and proposed a path forward outlining why and how state of the art language models can be used as summary evaluators. Future work includes extending the proposed scorecard, updating the models matching best the scorecard criteria and assessing published summarization models using that scorecard.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nBLEU, ROUGE and n-gram matching approaches\nTransformers, BERT and GPT\nModel-based metrics\nChallenges with BLEU and ROUGE\nHigh score, opposite meanings\nLow score, similar meanings\nHigh score, unintelligible sentences\nExperiments\nExperiments with carefully crafted sentences\nSemantic similarity experiments\nTowards a robust data-driven approach\nMetric Scorecard\nImplementing metrics satisfying scorecard\nSemantic Similarity\nLogical Equivalence\nSentence Intelligibility\nRationale for Language Models\nConclusion"
],
"type": "outline"
}
|
1910.00194
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations
<<<Abstract>>>
Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis. However, evaluation on word sense disambiguation (WSD) in prior work shows that using contextualized word representations does not outperform the state-of-the-art approach that makes use of non-contextualized word embeddings. In this paper, we explore different strategies of integrating pre-trained contextualized word representations and our best strategy achieves accuracies exceeding the best prior published accuracies by significant margins on multiple benchmark WSD datasets.
<<</Abstract>>>
<<<Introduction>>>
Word sense disambiguation (WSD) automatically assigns a pre-defined sense to a word in a text. Different senses of a word reflect different meanings a word has in different contexts. Identifying the correct word sense given a context is crucial in natural language processing (NLP). Unfortunately, while it is easy for a human to infer the correct sense of a word given a context, it is a challenge for NLP systems. As such, WSD is an important task and it has been shown that WSD helps downstream NLP tasks, such as machine translation BIBREF0 and information retrieval BIBREF1.
A WSD system assigns a sense to a word by taking into account its context, comprising the other words in the sentence. This can be done through discrete word features, which typically involve surrounding words and collocations trained using a classifier BIBREF2, BIBREF3, BIBREF4, BIBREF5. The classifier can also make use of continuous word representations of the surrounding words BIBREF6, BIBREF7. Neural WSD systems BIBREF8, BIBREF9 feed the continuous word representations into a neural network that captures the whole sentence and the word representation in the sentence. However, in both approaches, the word representations are independent of the context.
Recently, pre-trained contextualized word representations BIBREF10, BIBREF11, BIBREF12, BIBREF13 have been shown to improve downstream NLP tasks. Pre-trained contextualized word representations are obtained through neural sentence encoders trained on a huge amount of raw texts. When the resulting sentence encoder is fine-tuned on the downstream task, such as question answering, named entity recognition, and sentiment analysis, with much smaller annotated training data, it has been shown that the trained model, with the pre-trained sentence encoder component, achieves new state-of-the-art results on those tasks.
While demonstrating superior performance in downstream NLP tasks, pre-trained contextualized word representations are still reported to give lower accuracy compared to approaches that use non-contextualized word representations BIBREF10, BIBREF12 when evaluated on WSD. This seems counter-intuitive, as a neural sentence encoder better captures the surrounding context that serves as an important cue to disambiguate words. In this paper, we explore different strategies of integrating pre-trained contextualized word representations for WSD. Our best strategy outperforms prior methods of incorporating pre-trained contextualized word representations and achieves new state-of-the-art accuracy on multiple benchmark WSD datasets.
The following sections are organized as follows. Section SECREF2 presents related work. Section SECREF3 describes our pre-trained contextualized word representation. Section SECREF4 proposes different strategies to incorporate the contextualized word representation for WSD. Section SECREF5 describes our experimental setup. Section SECREF6 presents the experimental results. Section SECREF7 discusses the findings from the experiments. Finally, Section SECREF8 presents the conclusion.
<<</Introduction>>>
<<<Related Work>>>
Continuous word representations in real-valued vectors, or commonly known as word embeddings, have been shown to help improve NLP performance. Initially, exploiting continuous representations was achieved by adding real-valued vectors as classification features BIBREF14. BIBREF6 fine-tuned non-contextualized word embeddings by a feed-forward neural network such that those word embeddings were more suited for WSD. The fine-tuned embeddings were incorporated into an SVM classifier. BIBREF7 explored different strategies of incorporating word embeddings and found that their best strategy involved exponential decay that decreased the contribution of surrounding word features as their distances to the target word increased.
The neural sequence tagging approach has also been explored for WSD. BIBREF8 proposed bidirectional long short-term memory (LSTM) BIBREF15 for WSD. They concatenated the hidden states of the forward and backward LSTMs and fed the concatenation into an affine transformation followed by softmax normalization, similar to the approach to incorporate a bidirectional LSTM adopted in sequence labeling tasks such as part-of-speech tagging and named entity recognition BIBREF16. BIBREF9 proposed a self-attention layer on top of the concatenated bidirectional LSTM hidden states for WSD and introduced multi-task learning with part-of-speech tagging and semantic labeling as auxiliary tasks. However, on average across the test sets, their approach did not outperform SVM with word embedding features. Subsequently, BIBREF17 proposed the incorporation of glosses from WordNet in a bidirectional LSTM for WSD, and reported better results than both SVM and prior bidirectional LSTM models.
A neural language model (LM) is aimed at predicting a word given its surrounding context. As such, the resulting hidden representation vector captures the context of a word in a sentence. BIBREF10 designed context2vec, which is a one-layer bidirectional LSTM trained to maximize the similarity between the hidden state representation of the LSTM and the target word embedding. BIBREF12 designed ELMo, which is a two-layer bidirectional LSTM language model trained to predict the next word in the forward LSTM and the previous word in the backward LSTM. In both models, WSD was evaluated by nearest neighbor matching between the test and training instance representations. However, despite training on a huge amount of raw texts, the resulting accuracies were still lower than those achieved by WSD approaches with pre-trained non-contextualized word representations.
End-to-end neural machine translation (NMT) BIBREF18, BIBREF19 learns to generate an output sequence given an input sequence, using an encoder-decoder model. The encoder captures the contextualized representation of the words in the input sentence for the decoder to generate the output sentence. Following this intuition, BIBREF11 trained an encoder-decoder model on parallel texts and obtained pre-trained contextualized word representations from the encoder.
<<</Related Work>>>
<<<Pre-Trained Contextualized Word Representation>>>
The contextualized word representation that we use is BERT BIBREF13, which is a bidirectional transformer encoder model BIBREF20 pre-trained on billions of words of texts. There are two tasks on which the model is trained, i.e., masked word and next sentence prediction. In both tasks, prediction accuracy is determined by the ability of the model to understand the context.
A transformer encoder computes the representation of each word through an attention mechanism with respect to the surrounding words. Given a sentence $x^n_1$ of length $n$, the transformer computes the representation of each word $x_i$ through a multi-head attention mechanism, where the query vector is from $x_i$ and the key-value vector pairs are from the surrounding words $x_{i^{\prime }}$ ($1 \le i^{\prime } \le n$). The word representation produced by the transformer captures the contextual information of a word.
The attention mechanism can be viewed as mapping a query vector $\mathbf {q}$ and a set of key-value vector pairs $(\mathbf {k}, \mathbf {v})$ to an output vector. The attention function $A(\cdot )$ computes the output vector which is the weighted sum of the value vectors and is defined as:
where $\mathbf {K}$ and $\mathbf {V}$ are matrices, containing the key vectors and the value vectors of the words in the sentence respectively, and $\alpha (\mathbf {q}, \mathbf {k}, \rho )$ is a scalar attention weight between $\mathbf {q}$ and $\mathbf {k}$, re-scaled by a scalar $\rho $.
Two building blocks for the transformer encoder are the multi-head attention mechanism and the position-wise feed-forward neural network (FFNN). The multi-head attention mechanism with $H$ heads leverages the attention function in Equation DISPLAY_FORM1 as follows:
where $\oplus $ denotes concatenation of vectors, $\mathbf {W}_\text{MH} \in \mathbb {R}^{d_\text{model} \times Hd_\mathbf {v}}$, $\mathbf {W}^\mathbf {Q}_\eta , \mathbf {W}^\mathbf {K}_\eta \in \mathbb {R}^{d_\mathbf {k} \times d_\text{model}}$, and $ \mathbf {W}^\mathbf {V}_\eta \in \mathbb {R}^{d_\mathbf {v} \times d_\text{model}}$. The input vector $\mathbf {q} \in \mathbb {R}^{d_\text{model}}$ is the hidden vector for the ambiguous word, while input matrices $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{d_\text{model} \times n}$ are the concatenation of the hidden vectors of all words in the sentence. For each attention head, the dimension of both the query and key vectors is $d_\mathbf {k}$ while the dimension of the value vector is $d_\mathbf {v}$. The encoder model dimension is $d_\text{model}$.
The position-wise FFNN performs a non-linear transformation on the attention output corresponding to each input word position as follows:
in which the input vector $\mathbf {u} \in \mathbb {R}^{d_\text{model}}$ is transformed to the output vector with dimension $d_\text{model}$ via a series of linear projections with the ReLU activation function.
For the hidden layer $l$ ($1 \le l \le L$), the self-attention sub-layer output $\mathbf {f}^l_i$ is computed as follows:
where LayerNorm refers to layer normalization BIBREF21 and the superscript $l$ and subscript $\mathbf {h}$ indicate that each encoder layer $l$ has an independent set of multi-head attention weight parameters (see Equations DISPLAY_FORM2 and ). The input for the first layer is $\mathbf {h}^0_i = \mathbf {E}(x_i)$, which is the non-contextualized word embedding of $x_i$.
The second sub-layer consists of the position-wise fully connected FFNN, computed as:
where, similar to self-attention, an independent set of weight parameters in Equation DISPLAY_FORM3 is defined in each layer.
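Since the layer equations are not reproduced in this excerpt, the sketch below assumes the standard post-norm residual form used in the original Transformer and BERT (each sub-layer output added to its input, then layer-normalized):

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One transformer encoder layer: multi-head self-attention and a
    position-wise FFNN, each followed by a residual connection and LayerNorm."""
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffnn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                  nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n, d_model) -- hidden vectors h^{l-1}_i of the previous layer
        attn, _ = self.mhsa(h, h, h)
        f = self.norm1(h + attn)             # self-attention sub-layer output f^l_i
        return self.norm2(f + self.ffnn(f))  # position-wise FFNN sub-layer h^l_i
```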
<<</Pre-Trained Contextualized Word Representation>>>
<<<Incorporating Pre-Trained Contextualized Word Representation>>>
As BERT is trained on the masked word prediction task, which is to predict a word given the surrounding (left and right) context, the pre-trained model already captures the context. In this section, we describe different techniques of leveraging BERT for WSD, broadly categorized into nearest neighbor matching and linear projection of hidden layers.
<<<Nearest Neighbor Matching>>>
A straightforward way to disambiguate word sense is through 1-nearest neighbor matching. We compute the contextualized representation of each word in the training data and the test data through BERT. Given a hidden representation $\mathbf {h}^L_{i}$ at the $L$-th layer for word $x_i$ in the test data, nearest neighbor matching finds a vector $\mathbf {h^*}$ in the $L$-th layer from the training data such that
where the sense assigned to $x_i$ is the sense of the word whose contextualized representation is $\mathbf {h^*}$. This WSD technique is adopted in earlier work on contextualized word representations BIBREF10, BIBREF12.
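A minimal sketch of the matching step; cosine similarity is used here as an assumption, since the excerpt does not spell out the distance function.

```python
import torch
import torch.nn.functional as F

def nearest_neighbor_sense(test_vec, train_vecs, train_senses):
    """1-nn WSD: return the sense of the training token whose layer-L hidden
    vector is closest (by cosine similarity) to the test token's vector."""
    test = F.normalize(test_vec, dim=-1)      # (d,)
    train = F.normalize(train_vecs, dim=-1)   # (n_train, d)
    best = torch.argmax(train @ test).item()
    return train_senses[best]
```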
<<</Nearest Neighbor Matching>>>
<<<Linear Projection of Hidden Layers>>>
Apart from nearest neighbor matching, we can perform a linear projection of the hidden vector $\mathbf {h}_i$ by an affine transformation into an output sense vector, with its dimension equal to the number of senses for word $x_i$. The output of this affine transformation is normalized by softmax such that all its values sum to 1. Therefore, the predicted sense $\mathbf {s}_i$ of word $x_i$ is formulated as
where $\mathbf {s}_i$ is a vector of predicted sense distribution for word $x_i$, while $\mathbf {W}^{\text{lexelt}(x_i)}$ and $\mathbf {b}^{\text{lexelt}(x_i)}$ are respectively the projection matrix and bias vector specific to the lexical element (lexelt) of word $x_i$, which consists of its lemma and optionally its part-of-speech tag. We choose the sense corresponding to the element of $\mathbf {s}_i$ with the maximum value.
Training the linear projection model is done by the back-propagation algorithm, which updates the model parameters to minimize a cost function. Our cost function is the negative log-likelihood of the softmax output value that corresponds to the tagged sense in the training data. In addition, we propose two novel ways of incorporating BERT's hidden representation vectors to compute the sense output vector, which are described in the following sub-subsections.
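A sketch of the lexelt-specific projection; organizing the per-lexelt heads in an `nn.ModuleDict` is an implementation choice of this illustration, not necessarily the authors'.

```python
import torch
import torch.nn as nn

class LexeltProjection(nn.Module):
    """One affine projection per lexical element; the hidden vector of the
    ambiguous word is mapped to a softmax distribution over its senses."""
    def __init__(self, d_model: int, senses_per_lexelt: dict):
        super().__init__()
        self.heads = nn.ModuleDict(
            {lexelt: nn.Linear(d_model, n_senses)
             for lexelt, n_senses in senses_per_lexelt.items()})

    def forward(self, h: torch.Tensor, lexelt: str) -> torch.Tensor:
        # h: (d_model,) hidden vector for the target word x_i
        return torch.softmax(self.heads[lexelt](h), dim=-1)   # s_i

# Training minimizes the negative log-likelihood of the gold sense, e.g. with
# nn.CrossEntropyLoss applied to the pre-softmax logits.
```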
<<<Last Layer Projection>>>
Similar to the nearest neighbor matching model, we can feed the hidden vector of BERT in the last layer, $\mathbf {h}^L_i$, into an affine transformation followed by softmax. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is instantiated by $\mathbf {h}^L_i$. The last layer projection model is illustrated in Figure FIGREF7(a).
<<</Last Layer Projection>>>
<<<Weighted Sum of Hidden Layers>>>
BERT consists of multiple layers stacked one after another. Each layer carries a different representation of word context. Taking into account different hidden layers may help the WSD system to learn from different context information encoded in different layers of BERT.
To take into account all layers, we compute the weighted sum of all hidden layers, $\mathbf {h}^l_i$, where $1 \le l \le L$, corresponding to a word position $i$, through attention mechanism. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is replaced by the following equation:
where $\mathbf {m} \in \mathbb {R}^{d_\text{model}}$ is a projection vector to obtain scalar values with the key vectors. The model with weighted sum of all hidden layers is illustrated in Figure FIGREF7(b).
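A sketch of the layer-weighting step, assuming dot-product scoring against the learned vector m followed by a softmax over layers:

```python
import torch
import torch.nn as nn

class LayerWeighting(nn.Module):
    """Attention over the L hidden layers of BERT at one word position."""
    def __init__(self, d_model: int):
        super().__init__()
        self.m = nn.Parameter(torch.randn(d_model))

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (L, d_model) -- hidden vectors h^l_i for position i
        weights = torch.softmax(layer_states @ self.m, dim=0)      # (L,)
        return (weights.unsqueeze(-1) * layer_states).sum(dim=0)   # weighted sum
```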
<<</Weighted Sum of Hidden Layers>>>
<<<Gated Linear Unit>>>
While the contextualized representations in the BERT hidden layer vectors are features that determine the word sense, some features are more useful than the others. As such, we propose filtering the vector values by a gating vector whose values range from 0 to 1. This mechanism is known as the gated linear unit (GLU) BIBREF22, which is formulated as
where $\mathbf {W}^\mathbf {h}$ and $\mathbf {W}^\mathbf {g}$ are separate projection matrices and $\mathbf {b}^\mathbf {h}$ and $\mathbf {b}^\mathbf {g}$ are separate bias vectors. The symbols $\sigma (\cdot )$ and $\odot $ denote the sigmoid function and element-wise vector multiplication operation respectively.
GLU transforms the input vector $\mathbf {h}$ by feeding it to two separate affine transformations. The second transformation is used as the sigmoid gate to filter the input vector, which is summed with the vector after the first affine transformation.
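A sketch of the gating, implemented literally from the description above (the original GLU of Dauphin et al. instead multiplies the sigmoid gate with the first affine transform, so the exact variant used in the paper may differ from this reading):

```python
import torch
import torch.nn as nn

class GatedFilter(nn.Module):
    """sigma(W^g h + b^g) filters the input h element-wise; the result is
    summed with the first affine transform W^h h + b^h (per the prose above)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.lin_h = nn.Linear(d_model, d_model)   # W^h, b^h
        self.lin_g = nn.Linear(d_model, d_model)   # W^g, b^g

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.lin_h(h) + torch.sigmoid(self.lin_g(h)) * h
```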
<<</Gated Linear Unit>>>
<<</Linear Projection of Hidden Layers>>>
<<</Incorporating Pre-Trained Contextualized Word Representation>>>
<<<Experimental Setup>>>
We conduct experiments on various WSD tasks. The description and the statistics for each task are given in Table . For English, a lexical element (lexelt) is defined as a combination of lemma and part-of-speech tag, while for Chinese, it is simply the lemma, following the OntoNotes setup.
We exploit English BERT$_\text{BASE}$ for the English tasks and Chinese BERT for the Chinese task. We conduct experiments with different strategies of incorporating BERT as described in Section SECREF4, namely 1-nearest neighbor matching (1-nn) and linear projection. In the latter technique, we explore strategies including simple last layer projection, layer weighting (LW), and gated linear unit (GLU).
In the linear projection model, we train the model by the Adam algorithm BIBREF23 with a learning rate of $10^{-3}$. The model parameters are updated per mini-batch of 16 sentences. As update progresses, we pick the best model parameters from a series of neural network updates based on accuracy on a held-out development set, disjoint from the training set.
The state-of-the-art supervised WSD approach takes into account features from the neighboring sentences, typically one sentence to the left and one to the right apart from the current sentence that contains the ambiguous words. We also exploit this in our model, as BERT supports inputs with multiple sentences separated by a special [SEP] symbol.
For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training.
We also evaluate on Senseval-2 and Senseval-3 English lexical sample tasks, which come with pre-defined training and test data. For each word type, we pick 20% of the training instances to be our development set and keep the remaining 80% as the actual training data. Through this development set, we determine the number of epochs to use in training. We then re-train the model with the whole training dataset using the number of epochs identified in the initial training step.
While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre.
For Chinese WSD evaluation, we train IMS BIBREF5 on the Chinese OntoNotes dataset as our baseline. We also incorporate pre-trained non-contextualized Chinese word embeddings as IMS features BIBREF6, BIBREF7. The pre-trained word embeddings are obtained by training the word2vec skip-gram model on Chinese Gigaword Fifth Edition, which after automatic word segmentation contains approximately 2 billion words. Following BIBREF6, we incorporate the embedding features of words within a window surrounding the target ambiguous word. In our experiments, we take into account 5 words to the left and right.
<<</Experimental Setup>>>
<<<Results>>>
We present our experimental results and compare them with prior baselines.
<<<English All-Words Tasks>>>
For English all-words WSD, we compare our approach with three categories of prior approaches. Firstly, we compare our approach with the supervised SVM classifier approach, namely IMS BIBREF5. We compare our approach with both the original IMS without word embedding features and IMS with non-contextualized word embedding features, that is, word2vec with exponential decay BIBREF7. We also compare with SupWSD BIBREF27, which is an alternative implementation of IMS. Secondly, we compare our approach with the neural WSD approaches that leverage bidirectional LSTM (bi-LSTM). These include the bi-LSTM model with attention trained jointly with lexical semantic labeling task BIBREF9 (BiLSTMatt+LEX) and the bi-LSTM model enhanced with gloss representation from WordNet (GAS). Thirdly, we show comparison with prior contextualized word representations for WSD, pre-trained on a large number of texts, namely context2vec BIBREF10 and ELMo BIBREF12. In these two models, WSD is treated as nearest neighbor matching as described in Section SECREF4.
Table shows our WSD results in F1 measure. It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT's pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently.
We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved. While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector. It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§SECREF12 and gated linear unit (§SECREF14) gives the best F1 scores on all test sets.
All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems.
<<</English All-Words Tasks>>>
<<<English Lexical Sample Tasks>>>
For English lexical sample tasks, we compare our approach with the original IMS BIBREF5 and IMS with non-contextualized word embedding features. The embedding features incorporated into IMS include CW embeddings BIBREF28, obtained from a convolutional language model, fine-tuned (adapted) to WSD BIBREF6 (+adapted CW), and word2vec skip-gram BIBREF29 with exponential decay BIBREF7 (+w2v+expdecay). We also compare our approach with the bi-LSTM, on top of which sense classification is defined BIBREF8, and context2vec BIBREF10, which is a contextualized pre-trained bi-LSTM model trained on 2B words of text. Finally, we also compare with a prior multi-task and semi-supervised WSD approach learned through alternating structure optimization (ASO) BIBREF3, which also utilizes unlabeled data for training.
As shown in Table , our BERT-based WSD approach with linear projection model outperforms all prior approaches. context2vec, which is pre-trained on a large amount of texts, performs worse than the prior semi-supervised ASO approach on Senseval-3, while our best result outperforms ASO by a large margin.
Neural bi-LSTM performs worse than IMS with non-contextualized word embedding features. Our neural model with pre-trained contextualized word representations outperforms the best result achieved by IMS on both Senseval-2 and Senseval-3.
<<</English Lexical Sample Tasks>>>
<<<Chinese OntoNotes WSD>>>
We compare our approach with IMS without and with word embedding features as the baselines. The results are shown in Table .
Across all genres, BERT outperforms the baseline IMS with word embedding (non-contextualized word representation) features BIBREF6. The latter also improves over the original IMS without word embedding features BIBREF5. Linear projection approaches consistently outperform nearest neighbor matching by a significant margin, similar to the results on English WSD tasks.
The best overall result for the Chinese OntoNotes test set is achieved by the models with simple projection and hidden layer weighting.
<<</Chinese OntoNotes WSD>>>
<<</Results>>>
<<<Discussion>>>
Across all tasks (English all-words, English lexical sample, and Chinese OntoNotes), our experiments demonstrate the effectiveness of BERT over various prior WSD approaches. The best results are consistently obtained by linear projection models, which project the last hidden layer or the weighted sum of all hidden layers to an output sense vector.
We can view the BERT hidden layer outputs as contextual features, which serve as useful cues in determining the word senses. In fact, the attention mechanism in transformer captures the surrounding words. In prior work like IMS BIBREF5, these contextual cues are captured by the manually-defined surrounding word and collocation features. The features obtained by the hidden vector output are shown to be more effective than the manually-defined features.
We introduced two advanced linear projection techniques, namely layer weighting and gated linear unit (GLU). While BIBREF12 showed that the second biLSTM layer results in better WSD accuracy compared to the first layer (nearer to the individual word representation), we showed that taking into account different layers by means of the attention mechanism is useful for WSD. GLU as an activation function has been shown to be effective for better convergence and to overcome the vanishing gradient problem in the convolutional language model BIBREF22. In addition, the GLU gate vector, with values ranging from 0 to 1, can be seen as a filter for the features from the hidden layer vector.
Compared with two prior contextualized word representations models, context2vec BIBREF10 and ELMo BIBREF12, BERT achieves higher accuracy. This shows the effectiveness of the attention mechanism used in the transformer model to represent the context.
Our BERT WSD models outperform prior neural WSD models by a large margin. These prior neural WSD models perform comparably with IMS with embeddings as classifier features, in addition to the discrete features. While neural WSD approaches BIBREF8, BIBREF9, BIBREF17 exploit non-contextualized word embeddings which are trained on large texts, the hidden layers are trained only on a small amount of labeled data.
<<</Discussion>>>
<<<Conclusion>>>
For the WSD task, we have proposed novel strategies of incorporating BERT, a pre-trained contextualized word representation which effectively captures the context in its hidden vectors. Our experiments show that linear projection of the hidden vectors, coupled with gating to filter the values, gives better results than the prior state of the art. Compared to prior neural and feature-based WSD approaches that make use of non-contextualized word representations, using pre-trained contextualized word representation with our proposed incorporation strategy achieves significantly higher scores.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nPre-Trained Contextualized Word Representation\nIncorporating Pre-Trained Contextualized Word Representation\nNearest Neighbor Matching\nLinear Projection of Hidden Layers\nLast Layer Projection\nWeighted Sum of Hidden Layers\nGated Linear Unit\nExperimental Setup\nResults\nEnglish All-Words Tasks\nEnglish Lexical Sample Tasks\nChinese OntoNotes WSD\nDiscussion\nConclusion"
],
"type": "outline"
}
|
1910.02339
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Natural- to formal-language generation using Tensor Product Representations
<<<Abstract>>>
Generating formal-language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structure information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, creating new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning.
<<</Abstract>>>
<<<INTRODUCTION>>>
When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information.
In this paper we propose a novel neural architecture, TP-N2F, to solve natural- to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in natural-language, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) BIBREF6. During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR `binding' (following BIBREF7); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR `unbinding' (following BIBREF8, BIBREF9).
Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis.
<<</INTRODUCTION>>>
<<<Background: Review of Tensor-Product Representation>>>
The TPR mechanism is a method to create a vector space embedding of complex symbolic structures. The type of a symbol structure is defined by a set of structural positions or roles, such as the left-child-of-root position in a tree, or the second-argument-of-$R$ position of a given relation $R$. In a particular instance of a structural type, each of these roles may be occupied by a particular filler, which can be an atomic symbol or a substructure (e.g., the entire left sub-tree of a binary tree can serve as the filler of the role left-child-of-root). For now, we assume the fillers to be atomic symbols.
The TPR embedding of a symbol structure is the sum of the embeddings of all its constituents, each constituent comprising a role together with its filler. The embedding of a constituent is constructed from the embedding of a role and the embedding of the filler of that role: these are joined together by the TPR `binding' operation, the tensor (or generalized outer) product $\otimes $.
Formally, suppose a symbolic type is defined by the roles $\lbrace r_i \rbrace $, and suppose that in a particular instance of that type, ${S}$, role $r_i$ is bound by filler $f_i$. The TPR embedding of ${S}$ is the order-2 tensor $\mathbf {S} = \sum _i \mathbf {f}_i \otimes \mathbf {r}_i = \sum _i \mathbf {f}_i \mathbf {r}_i^\top $ where $\lbrace \mathbf {f}_i \rbrace $ are vector embeddings of the fillers and $\lbrace \mathbf {r}_i \rbrace $ are vector embeddings of the roles. In Eq. SECREF2, and below, for notational simplicity we conflate order-2 tensors and matrices.
As a simple example, consider the symbolic type string, and choose roles to be $r_1 = $ first_element, $r_2 = $ second_element, etc. Then in the specific string S = cba, the first role $r_1$ is filled by c, and $r_2$ and $r_3$ by b and a, respectively. The TPR for S is $\mathbf {c} \otimes \mathbf {r}_1 + \mathbf {b} \otimes \mathbf {r}_2 + \mathbf {a} \otimes \mathbf {r}_3$, where $\mathbf {a}, \mathbf {b}, \mathbf {c}$ are the vector embeddings of the symbols a, b, c, and $\mathbf {r}_i$ is the vector embedding of role $r_i$.
A TPR scheme for embedding a set of symbol structures is defined by a decomposition of those structures into roles bound to fillers, an embedding of each role as a role vector, and an embedding of each filler as a filler vector. Let the total number of roles and fillers available be $n_{\mathrm{R}}, n_{\mathrm{F}}$, respectively. Define the matrix of all possible role vectors to be $\mathbf{R} \in \mathbb{R}^{d_{\mathrm{R}}\times n_{\mathrm{R}}}$, with column $i$, $[\mathbf{R}]_{:i} = \mathbf{r}_i \in \mathbb{R}^{d_{\mathrm{R}}}$, comprising the embedding of $r_i$. Similarly let $\mathbf{F} \in \mathbb{R}^{d_{\mathrm{F}}\times n_{\mathrm{F}}}$ be the matrix of all possible filler vectors. The TPR $\mathbf{T} \in \mathbb{R}^{d_{\mathrm{F}}\times d_{\mathrm{R}}}$. Below, $d_{\mathrm{R}}, n_{\mathrm{R}}, d_{\mathrm{F}}, n_{\mathrm{F}}$ will be hyper-parameters, while $\mathbf{R}, \mathbf{F}$ will be learned parameter matrices.
Using summation in Eq. SECREF2 to combine the vectors embedding the constituents of a structure risks non-recoverability of those constituents given the embedding $\mathbf{T}$ of the structure as a whole. The tensor product is chosen as the binding operation in order to enable recovery of the filler of any role in a structure ${S}$ given its TPR $\mathbf{T}$. This can be done with perfect precision if the embeddings of the roles are linearly independent. In that case the role matrix $\mathbf{R}$ has a left inverse $\mathbf{U}$: $\mathbf{U}\mathbf{R} = \mathbf{I}$. Now define the unbinding (or dual) vector for role $r_j$, $\mathbf{u}_j$, to be the $j^{\mathrm{th}}$ column of $\mathbf{U}^\top $: $[\mathbf{U}^\top ]_{:j}$. Then, since $[\mathbf{I}]_{ji} = [\mathbf{U}\mathbf{R}]_{ji} = \mathbf{U}_{j:} \mathbf{R}_{:i} = [\mathbf{U}^\top _{:j}]^\top \mathbf{R}_{:i} = \mathbf{u}_j^\top \mathbf{r}_i = \mathbf{r}_i^\top \mathbf{u}_j$, we have $\mathbf{r}_i^\top \mathbf{u}_j = \delta _{ji}$. This means that, to recover the filler of $r_j$ in the structure with TPR $\mathbf{T}$, we can take its tensor inner product (or matrix-vector product) with $\mathbf{u}_j$: $\mathbf{T} \mathbf{u}_j = \left[ \sum _i \mathbf{f}_i \mathbf{r}_i^\top \right] \mathbf{u}_j = \sum _i \mathbf{f}_i \delta _{ij} = \mathbf{f}_j$.
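Continuing the sketch above, unbinding can be checked numerically: with linearly independent role vectors, the dual vectors obtained from the left inverse of the role matrix recover each filler exactly (a hedged illustration, not the authors' code).

```python
# Minimal sketch of TPR unbinding, continuing the binding example above.
R = np.stack(roles, axis=1)          # d_R x n_R role matrix
U = np.linalg.pinv(R)                # left inverse: U @ R = I (roles linearly independent)
u2 = U.T[:, 1]                       # unbinding (dual) vector for role r_2
recovered = T @ u2                   # tensor inner product recovers the filler of r_2
print(np.allclose(recovered, fillers["b"]))   # True: filler "b" is recovered
```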
In the architecture proposed here, we will make use of both TPR binding using the tensor product with role vectors $\mathbf{r}_i$ and TPR unbinding using the tensor inner product with unbinding vectors $\mathbf{u}_j$. Binding will be used to produce the order-2 tensor $\mathbf{T}_S$ embedding of the NL problem statement. Unbinding will be used to generate output relational tuples from an order-3 tensor $\mathbf{H}$. Because they pertain to different representations (of different orders in fact), the binding and unbinding vectors we will use are not related to one another.
<<</Background: Review of Tensor-Product Representation>>>
<<<TP-N2F Model>>>
We propose a general TP-N2F neural network architecture operating over TPRs to solve N2F tasks under a proposed role-level description of those tasks. In this description, natural-language input is represented as a straightforward order-2 role structure, and formal-language relational representations of outputs are represented with a new order-3 recursive role structure proposed here. Figure FIGREF3 shows an overview diagram of the TP-N2F model. It depicts the following high-level description.
As shown in Figure FIGREF3, while the natural-language input is a sequence of words, the output is a sequence of multi-argument relational tuples such as $(R\ A_1\ A_2)$, a 3-tuple consisting of a binary relation (or operation) $R$ with its two arguments. The “TP-N2F encoder” uses two LSTMs to produce a pair consisting of a filler vector and a role vector, which are bound together with the tensor product. These tensor products, concatenated, comprise the “context” over which attention will operate in the decoder. The sum of the word-level TPRs, flattened to a vector, is treated as a representation of the entire problem statement; it is fed to the “Reasoning MLP”, which transforms this encoding of the problem into a vector encoding the solution. This is the initial state of the “TP-N2F decoder” attentional LSTM, which outputs at each time step an order-3 tensor representing a relational tuple. To generate a correct tuple from decoder operations, the model must learn to give the order-3 tensor the form of a TPR for a $(R\ A_1\ A_2)$ tuple (detailed explanation in Sec. SECREF7). In the following sections, we first introduce the details of our proposed role-level description for N2F tasks, and then present how our proposed TP-N2F model uses TPR binding and unbinding operations to create a neural network implementation of this description of N2F tasks.
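Read as pseudocode, the pipeline in Figure FIGREF3 amounts to the following sketch; the three callables stand in for the learned modules described below, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Hedged, high-level pseudocode of the TP-N2F pipeline (encode -> reason -> decode).
def tp_n2f(question_tokens, encoder, reasoning_mlp, decoder):
    token_tprs = encoder(question_tokens)        # per-token order-2 TPRs (attention context)
    problem_vec = sum(token_tprs).flatten()      # sum of word-level TPRs, flattened to a vector
    init_state = reasoning_mlp(problem_vec)      # vector encoding the solution
    return decoder(init_state, token_tprs)       # sequence of (R, A1, A2) relational tuples
```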
<<<Role-level description of N2F tasks>>>
In this section, we propose a role-level description of N2F tasks, which specifies the filler/role structures of the input natural-language symbolic expressions and the output relational representations.
<<<Role-level description for natural-language input>>>
Instead of encoding each token of a sentence with a non-compositional embedding vector looked up in a learned dictionary, we use a learned role-filler decomposition to compose a tensor representation for each token. Given a sentence $S$ with $n$ word tokens $\lbrace w^0,w^1,...,w^{n-1}\rbrace $, each word token $w^t$ is assigned a learned role vector $\mathbf{r}^t$, soft-selected from the learned dictionary $\mathbf{R}$, and a learned filler vector $\mathbf{f}^t$, soft-selected from the learned dictionary $\mathbf{F}$ (Sec. SECREF2). The mechanism closely follows that of BIBREF7, and we hypothesize similar results: the role and filler approximately encode the grammatical role of the token and its lexical semantics, respectively. Then each word token $w^t$ is represented by the tensor product of the role vector and the filler vector: $\mathbf{T}^t = \mathbf{f}^t \otimes \mathbf{r}^t$. In addition to the set of all its token embeddings $\lbrace \mathbf{T}^0, \ldots , \mathbf{T}^{n-1} \rbrace $, the sentence $S$ as a whole is assigned a TPR equal to the sum of the TPR embeddings of all its word tokens: $\mathbf{T}_S = \sum _{t=0}^{n-1} \mathbf{T}^t$.
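As a concrete, hedged illustration of this word-level binding, the following PyTorch sketch soft-selects a filler and a role per token and sums the resulting outer products; all sizes, module names, and the way the softmax scores are produced are assumptions made for the illustration, not the paper's exact configuration.

```python
# Hedged sketch: soft-selected filler/role binding per token and the sentence TPR.
import torch

n_F, n_R, d_F, d_R, d_h = 50, 30, 32, 20, 64      # assumed dictionary / embedding sizes
F = torch.nn.Parameter(torch.randn(d_F, n_F))     # filler dictionary (columns = filler vectors)
R = torch.nn.Parameter(torch.randn(d_R, n_R))     # role dictionary (columns = role vectors)
W_uF = torch.nn.Linear(d_h, n_F)                  # produces filler softmax scores
W_uR = torch.nn.Linear(d_h, n_R)                  # produces role softmax scores

def token_tpr(h_filler, h_role):
    """h_*: hidden states for one token (stand-ins for the two LSTMs' states)."""
    f = F @ torch.softmax(W_uF(h_filler), dim=-1)   # soft-selected filler vector f^t
    r = R @ torch.softmax(W_uR(h_role), dim=-1)     # soft-selected role vector r^t
    return torch.outer(f, r)                        # order-2 token TPR T^t

# sentence TPR: sum of token TPRs (random stand-in hidden states for 5 tokens)
T_S = sum(token_tpr(torch.randn(d_h), torch.randn(d_h)) for _ in range(5))
```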
Using TPRs to encode natural language has several advantages. First, natural language TPRs can be interpreted by exploring the distribution of tokens grouped by the role and filler vectors they are assigned by a trained model (as in BIBREF7). Second, TPRs avoid the Bag-of-Words (BoW) confusion BIBREF8: the BoW encoding of Jay saw Kay is the same as the BoW encoding of Kay saw Jay, but the two encodings differ under TPR embedding, because the role filled by a symbol changes with its context.
<<</Role-level description for natural-language input>>>
<<<Role-level description for relational representations>>>
In this section, we propose a novel recursive role-level description for representing symbolic relational tuples. Each relational tuple contains a relation token and multiple argument tokens. Given a binary relation $rel$, a relational tuple can be written as $(rel\ arg_1\ arg_2)$ where $arg_1,arg_2$ indicate two arguments of relation $rel$. Let us adopt the two positional roles, $p_i^{rel} = $ arg$_i$-of-$rel$ for $i=1,2$. The filler of role $p_i^{rel}$ is $arg_i$. Now let us use role decomposition recursively, noting that the role $p_i^{rel}$ can itself be decomposed into a sub-role $p_i = $ arg$_i$-of-$\underline{\hspace{5.69054pt}}$ which has a sub-filler $rel$. Suppose that $arg_i, rel, p_i$ are embedded as vectors $\mathbf{a}_i, \mathbf{r}_{rel}, \mathbf{p}_i$. Then the TPR encoding of $p_i^{rel}$ is $\mathbf{r}_{rel} \otimes \mathbf{p}_i$, so the TPR encoding of filler $arg_i$ bound to role $p_i^{rel}$ is $\mathbf{a}_i \otimes (\mathbf{r}_{rel} \otimes \mathbf{p}_i)$. The tensor product is associative, so we can omit parentheses and write the TPR for the formal-language expression, the relational tuple $(rel\ arg_1\ arg_2)$, as: $\mathbf{H} = \mathbf{a}_1 \otimes \mathbf{r}_{rel} \otimes \mathbf{p}_1 + \mathbf{a}_2 \otimes \mathbf{r}_{rel} \otimes \mathbf{p}_2$. Given the unbinding vectors $\mathbf{p}^{\prime }_i$ for positional role vectors $\mathbf{p}_i$ and the unbinding vector $\mathbf{r}^{\prime }_{rel}$ for the vector $\mathbf{r}_{rel}$ that embeds relation $rel$, each argument can be unbound in two steps as shown in Eqs. SECREF7–SECREF7: $\mathbf{H} \cdot \mathbf{p}^{\prime }_i = [\mathbf{a}_1 \otimes \mathbf{r}_{rel} \otimes \mathbf{p}_1 + \mathbf{a}_2 \otimes \mathbf{r}_{rel} \otimes \mathbf{p}_2] \cdot \mathbf{p}^{\prime }_i = \mathbf{a}_i \otimes \mathbf{r}_{rel}$
$[\mathbf{a}_i \otimes \mathbf{r}_{rel}] \cdot \mathbf{r}^{\prime }_{rel} = \mathbf{a}_i$. Here $\cdot $ denotes the tensor inner product, which for the order-3 $\mathbf{H}$ and order-1 $\mathbf{p}^{\prime }_i$ in Eq. SECREF7 can be defined as $[\mathbf{H} \cdot \mathbf{p}^{\prime }_i]_{jk} = \sum _l [\mathbf{H}]_{jkl} [\mathbf{p}^{\prime }_i]_l$; in Eq. SECREF7, $\cdot $ is equivalent to the matrix-vector product.
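A small NumPy check of this two-step unbinding is given below; the dimensions and the use of orthonormal position vectors (so that each position's unbinding vector equals the position vector itself) are simplifying assumptions, not the paper's learned embeddings.

```python
# Hedged numeric check of the order-3 tuple TPR and its two-step unbinding.
import numpy as np
rng = np.random.default_rng(1)
d_a, d_rel, d_p = 5, 4, 3
a1, a2 = rng.normal(size=d_a), rng.normal(size=d_a)     # argument embeddings
r_rel = rng.normal(size=d_rel)                          # relation embedding
p = np.eye(d_p)[:2]                                     # orthonormal position vectors p_1, p_2
H = np.einsum("i,j,k->ijk", a1, r_rel, p[0]) + np.einsum("i,j,k->ijk", a2, r_rel, p[1])

B1 = np.einsum("ijk,k->ij", H, p[0])                    # step 1: unbind position 1 -> a1 (x) r_rel
rel_u = r_rel / (r_rel @ r_rel)                         # unbinding (dual) vector for r_rel
print(np.allclose(np.einsum("ij,j->i", B1, rel_u), a1)) # step 2 recovers a1: True
```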
Our proposed scheme can be contrasted with the TPR scheme in which $(rel\ arg_1\ arg_2)$ is embedded as $\mathbf{r}_{rel} \otimes \mathbf{a}_1 \otimes \mathbf{a}_2$ (e.g., BIBREF11, BIBREF12). In that scheme, an $n$-ary-relation tuple is embedded as an order-($n+1$) tensor, and unbinding an argument requires knowing all the other arguments (to use their unbinding vectors). In the scheme proposed here, an $n$-ary-relation tuple is still embedded as an order-3 tensor: there are just $n$ terms in the sum in Eq. SECREF7, using $n$ position vectors $\mathbf{p}_1, \dots , \mathbf{p}_n$; unbinding simply requires knowing the unbinding vectors for these fixed position vectors.
In the model, the order-3 tensor $\mathbf{H}$ of Eq. SECREF7 has a different status than the order-2 tensor $\mathbf{T}_S$ of Sec. SECREF5. $\mathbf{T}_S$ is a TPR by construction, whereas $\mathbf{H}$ is a TPR as a result of successful learning. To generate the output relational tuples, the decoder assumes each tuple has the form of Eq. SECREF7, and performs the unbinding operations which that structure calls for. In Appendix Sec. SECREF65, it is shown that, if unbinding each of a set of roles from some unknown tensor $\mathbf{H}$ gives a target set of fillers, then $\mathbf{H}$ must equal the TPR generated by those role/filler pairs, plus some tensor that is irrelevant because unbinding from it produces the zero vector. In other words, if the decoder succeeds in producing filler vectors that correspond to output relational tuples that match the target, then, as far as what the decoder can see, the tensor that it operates on is the TPR of Eq. SECREF7.
<<</Role-level description for relational representations>>>
<<<The TP-N2F Scheme for Learning the input-output mapping>>>
To generate formal relational tuples from natural-language descriptions, a learning strategy for the mapping between the two structures is particularly important. As shown in (SECREF8), we formalize the learning scheme as learning a mapping function $f_{\mathrm {mapping}}(\cdot )$, which, given a structural representation of the natural-language input, $\mathbf{T}_S$, outputs a tensor $\mathbf{H}_F$ from which the structural representation of the output can be generated. At the role level of description, there is nothing more to be said about this mapping; how it is modeled at the neural network level is discussed in Sec. SECREF10. $\mathbf{H}_F = f_{\mathrm {mapping}}(\mathbf{T}_S)$
<<</The TP-N2F Scheme for Learning the input-output mapping>>>
<<</Role-level description of N2F tasks>>>
<<<The TP-N2F Model for Natural- to Formal-Language Generation>>>
As shown in Figure FIGREF3, the TP-N2F model is implemented with three steps: encoding, mapping, and decoding. The encoding step is implemented by the TP-N2F natural-language encoder (TP-N2F Encoder), which takes the sequence of word tokens as inputs, and encodes them via TPR binding according to the TP-N2F role scheme for natural-language input given in Sec. SECREF5. The mapping step is implemented by an MLP called the Reasoning Module, which takes the encoding produced by the TP-N2F Encoder as input. It learns to map the natural-language-structure encoding of the input to a representation that will be processed under the assumption that it follows the role scheme for output relational-tuples specified in Sec. SECREF7: the model needs to learn to produce TPRs such that this processing generates correct output programs. The decoding step is implemented by the TP-N2F relational tuples decoder (TP-N2F Decoder), which takes the output from the Reasoning Module (Sec. SECREF8) and decodes the target sequence of relational tuples via TPR unbinding. The TP-N2F Decoder utilizes an attention mechanism over the individual-word TPRs $\mathbf{T}^t$ produced by the TP-N2F Encoder. The detailed implementations are introduced below.
<<<The TP-N2F natural-language Encoder>>>
The TP-N2F encoder follows the role scheme in Sec. SECREF5 to encode each word token $w^t$ by soft-selecting one of $n_{\mathrm{F}}$ fillers and one of $n_{\mathrm{R}}$ roles. The fillers and roles are embedded as vectors. These embedding vectors, and the functions for selecting fillers and roles, are learned by two LSTMs, the Filler-LSTM and the Role-LSTM. (See Figure FIGREF11.) At each time-step $t$, the Filler-LSTM and the Role-LSTM take a learned word-token embedding $\mathbf{w}^t$ as input. The hidden state of the Filler-LSTM, $\mathbf{h}_{\mathrm{F}}^t$, is used to compute softmax scores $u_k^{\mathrm{F}}$ over $n_{\mathrm{F}}$ filler slots, and a filler vector $\mathbf{f}^{t} = \mathbf{F} \mathbf{u}^{\mathrm{F}}$ is computed from the softmax scores (recall from Sec. SECREF2 that $\mathbf{F}$ is the learned matrix of filler vectors). Similarly, a role vector is computed from the hidden state of the Role-LSTM, $\mathbf{h}_{\mathrm{R}}^t$. $f_{\mathrm{F}}$ and $f_{\mathrm{R}}$ denote the functions that generate $\mathbf{f}^{t}$ and $\mathbf{r}^t$ from the hidden states of the two LSTMs. The token $w^t$ is encoded as $\mathbf{T}^t$, the tensor product of $\mathbf{f}^{t}$ and $\mathbf{r}^t$. $\mathbf{T}^t$ replaces the hidden vector in each LSTM and is passed to the next time step, together with the LSTM cell-state vector $\mathbf{c}^t$: see (SECREF10)–(SECREF10). After encoding the whole sequence, the TP-N2F encoder outputs the sum of all tensor products $\sum _t \mathbf{T}^t$ to the next module. We use an MLP, called the Reasoning MLP, for TPR mapping; it takes an order-2 TPR from the encoder and maps it to the initial state of the decoder. Detailed equations and implementation are provided in Sec. SECREF22 of the Appendix. $\mathbf{h}_{\mathrm{F}}^t = f_{\mathrm{Filler\text{-}LSTM}}(\mathbf{w}^t, \mathbf{T}^{t-1}, \mathbf{c}_{\mathrm{F}}^{t-1}) \qquad \mathbf{h}_{\mathrm{R}}^t = f_{\mathrm{Role\text{-}LSTM}}(\mathbf{w}^t, \mathbf{T}^{t-1}, \mathbf{c}_{\mathrm{R}}^{t-1})$
$\mathbf{T}^t = \mathbf{f}^t \otimes \mathbf{r}^t = f_{\mathrm{F}}(\mathbf{h}_{\mathrm{F}}^t) \otimes f_{\mathrm{R}}(\mathbf{h}_{\mathrm{R}}^t)$
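A possible PyTorch realization of one encoder step is sketched below; treating the flattened token TPR as the LSTM hidden state, the stated dimensions, and the soft-selection callables are all assumptions for illustration rather than the authors' implementation.

```python
# Hedged sketch of one TP-N2F encoder step: two LSTM cells whose hidden state is
# replaced by the (flattened) token TPR before the next step.
import torch
import torch.nn as nn

d_w, d_F, d_R = 100, 32, 20          # word-embedding / filler / role sizes (assumed)
d_T = d_F * d_R                      # flattened TPR size, used as the LSTM hidden size
filler_lstm = nn.LSTMCell(d_w, d_T)
role_lstm = nn.LSTMCell(d_w, d_T)

def encoder_step(w_t, T_prev, cF_prev, cR_prev, select_filler, select_role):
    """select_filler / select_role map hidden states to soft-selected f^t / r^t."""
    hF, cF = filler_lstm(w_t, (T_prev, cF_prev))            # Filler-LSTM update
    hR, cR = role_lstm(w_t, (T_prev, cR_prev))              # Role-LSTM update
    f_t, r_t = select_filler(hF), select_role(hR)           # (batch, d_F), (batch, d_R)
    T_t = torch.einsum("bf,br->bfr", f_t, r_t).flatten(1)   # token TPR, flattened
    return T_t, cF, cR
```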
<<</The TP-N2F natural-language Encoder>>>
<<<The TP-N2F Relational-Tuple Decoder>>>
The TP-N2F Decoder is an RNN that takes the output from the reasoning MLP as its initial hidden state for generating a sequence of relational tuples (Figure FIGREF13). This decoder contains an attentional LSTM called the Tuple-LSTM which feeds an unbinding module: attention operates on the context vector of the encoder, consisting of all individual encoder outputs $\lbrace \mathbf{T}^t \rbrace $. The hidden-state $\mathbf{H}$ of the Tuple-LSTM is treated as a TPR of a relational tuple and is unbound to a relation and arguments. During training, the Tuple-LSTM needs to learn a way to make $\mathbf{H}$ suitably approximate a TPR. At each time step $t$, the hidden state $\mathbf{H}^t$ of the Tuple-LSTM with attention (the version in BIBREF13) (SECREF12) is fed as input to the unbinding module, which regards $\mathbf{H}^t$ as if it were the TPR of a relational tuple with $m$ arguments possessing the role structure described in Sec. SECREF7: $\mathbf{H}^t \approx \sum _{i=1}^{m} \mathbf{a}_{i}^t \otimes \mathbf{r}_{rel}^t \otimes \mathbf{p}_i$. (In Figure FIGREF13, the assumed hypothetical form of $\mathbf{H}^t$, as well as that of $\mathbf{B}_i^t$ below, is shown in a bubble with dashed border.) To decode a binary relational tuple, the unbinding module decodes it from $\mathbf{H}^t$ using the two steps of TPR unbinding given in (SECREF7)–(SECREF7). The positional unbinding vectors $\mathbf{p}^{\prime }_{i}$ are learned during training and shared across all time steps. After the first unbinding step (SECREF7), i.e., the inner product of $\mathbf{H}^t$ with $\mathbf{p}^{\prime }_i$, we get tensors $\mathbf{B}_{i}^t$ (SECREF12). These are treated as the TPRs of two arguments $\mathbf{a}_i^t$ bound to a relation $\mathbf{r}_{rel}^t$. A relational unbinding vector $\mathbf{r}_{rel}^{\prime t}$ is computed by a linear function from the sum of the $\mathbf{B}_{i}^t$ and used to compute the inner product with each $\mathbf{B}_i^t$ to yield $\mathbf{a}_i^t$, which are treated as the embedding of argument vectors (SECREF12). Based on the TPR theory, $\mathbf{r}_{rel}^{\prime t}$ is passed to a linear function to get $\mathbf{r}_{rel}^t$ as the embedding of a relation vector. Finally, the softmax probability distribution over symbolic outputs is computed for relations and arguments separately. In generation, the most probable symbol is selected. (Detailed equations are in Appendix Sec. SECREF42) $\mathbf{H}^t = \mathrm{Atten}(f_{\mathrm{Tuple\text{-}LSTM}}(\mathbf{rel}^t, \mathbf{arg}_1^t, \mathbf{arg}_2^t, \mathbf{H}^{t-1}, \mathbf{c}^{t-1}), [\mathbf{T}^0, \ldots , \mathbf{T}^{n-1}])$
$\mathbf{B}_1^t = \mathbf{H}^t \cdot \mathbf{p}^{\prime }_1 \qquad \mathbf{B}_2^t = \mathbf{H}^t \cdot \mathbf{p}^{\prime }_2$
$\mathbf{r}_{rel}^{\prime t} = f_{\mathrm{linear}}(\mathbf{B}_1^t + \mathbf{B}_2^t) \qquad \mathbf{a}_1^t = \mathbf{B}_1^t \cdot \mathbf{r}_{rel}^{\prime t} \qquad \mathbf{a}_2^t = \mathbf{B}_2^t \cdot \mathbf{r}_{rel}^{\prime t}$
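The unbinding module can be sketched in PyTorch as follows; the tensor shapes, layer names, and the reshaping of the Tuple-LSTM state into an order-3 tensor are assumptions made for illustration.

```python
# Hedged sketch of the decoder's unbinding module: the (reshaped) Tuple-LSTM hidden
# state is treated as an order-3 TPR and unbound in two steps.
import torch
import torch.nn as nn

d_a, d_r, d_p = 32, 16, 8                      # argument / relation / position sizes (assumed)
p_unbind = nn.Parameter(torch.randn(2, d_p))   # learned positional unbinding vectors p'_1, p'_2
rel_unbinder = nn.Linear(d_a * d_r, d_r)       # f_linear: (B_1 + B_2) -> relational unbinding vector

def unbind(H):                                 # H: (batch, d_a, d_r, d_p)
    B = torch.einsum("bark,nk->nbar", H, p_unbind)        # step 1: B[i] ~ a_i (x) r_rel
    r_rel_u = rel_unbinder((B[0] + B[1]).flatten(1))      # relational unbinding vector r'_rel
    a1 = torch.einsum("bar,br->ba", B[0], r_rel_u)        # step 2: recover argument 1
    a2 = torch.einsum("bar,br->ba", B[1], r_rel_u)        # step 2: recover argument 2
    return r_rel_u, a1, a2
```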
<<</The TP-N2F Relational-Tuple Decoder>>>
<<</The TP-N2F Model for Natural- to Formal-Language Generation>>>
<<<Inference and The Learning Strategy of the TP-N2F Model>>>
During inference time, natural language questions are encoded via the encoder and the Reasoning MLP maps the output of the encoder to the input of the decoder. We use greedy decoding (selecting the most likely class) to decode one relation and its arguments. The relation and argument vectors are concatenated to construct a new vector as the input for the Tuple-LSTM in the next step.
TP-N2F is trained using back-propagation BIBREF14 with the Adam optimizer BIBREF15 and teacher-forcing. At each time step, the ground-truth relational tuple is provided as the input for the next time step. As the TP-N2F decoder decodes a relational tuple at each time step, the relation token is selected only from the relation vocabulary and the argument tokens from the argument vocabulary. For an input ${\mathcal {I}}$ that generates $N$ output relational tuples, the loss is the sum of the cross entropy loss ${\mathcal {L}}$ between the true labels $L$ and predicted tokens for relations and arguments as shown in (SECREF14): $\mathcal{L}_{\mathcal{I}} = \sum _{i=0}^{N-1} \mathcal{L}(rel^i, L_{rel}^i) + \sum _{i=0}^{N-1} \sum _{j=1}^{2} \mathcal{L}(arg_j^i, L_{arg_j}^i)$
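In PyTorch-style pseudocode, the loss in (SECREF14) could be computed as below; the logit shapes and the use of summed cross-entropy are assumptions based on the description above.

```python
# Hedged sketch of the tuple-level training loss (relation + two argument positions).
import torch.nn.functional as F

def tuple_loss(rel_logits, arg_logits, rel_labels, arg_labels):
    """rel_logits: (N, V_rel); arg_logits: (N, 2, V_arg); labels are index tensors."""
    loss = F.cross_entropy(rel_logits, rel_labels, reduction="sum")
    for j in range(2):
        loss = loss + F.cross_entropy(arg_logits[:, j], arg_labels[:, j], reduction="sum")
    return loss
```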
<<</Inference and The Learning Strategy of the TP-N2F Model>>>
<<</TP-N2F Model>>>
<<<EXPERIMENTS>>>
The proposed TP-N2F model is evaluated on two N2F tasks, generating operation sequences to solve math problems and generating Lisp programs. In both tasks, TP-N2F achieves state-of-the-art performance. We further analyze the behavior of the unbinding relation vectors in the proposed model. Results of each task and the analysis of the unbinding relation vectors are introduced in turn. Details of experiments and datasets are described in Sec. SECREF20 in the Appendix.
<<<Generating operation sequences to solve math problems>>>
Given a natural-language math problem, we need to generate a sequence of operations (operators and corresponding arguments) from a set of operators and arguments to solve the given problem. Each operation is regarded as a relational tuple by viewing the operator as relation, e.g., $(add, n1, n2)$. We test TP-N2F for this task on the MathQA dataset BIBREF16. The MathQA dataset consists of about 37k math word problems, each with a corresponding list of multi-choice options and the corresponding operation sequence. In this task, TP-N2F is deployed to generate the operation sequence given the question. The generated operations are executed with the execution script from BIBREF16 to select a multi-choice answer. As there are about 30% noisy data (where the execution script returns the wrong answer when given the ground-truth program; see Sec. SECREF20 of the Appendix), we report both execution accuracy (of the final multi-choice answer after running the execution engine) and operation sequence accuracy (where the generated operation sequence must match the ground truth sequence exactly). TP-N2F is compared to a baseline provided by the seq2prog model in BIBREF16, an LSTM-based seq2seq model with attention. Our model outperforms both the original seq2prog, designated SEQ2PROG-orig, and the best reimplemented seq2prog after an extensive hyperparameter search, designated SEQ2PROG-best. Table TABREF16 presents the results. To verify the importance of the TP-N2F encoder and decoder, we conducted experiments to replace either the encoder with a standard LSTM (denoted LSTM2TP) or the decoder with a standard attentional LSTM (denoted TP2LSTM). We observe that both the TPR components of TP-N2F are important for achieving the observed performance gain relative to the baseline.
<<</Generating operation sequences to solve math problems>>>
<<<Generating program trees from natural-language descriptions>>>
Generating Lisp programs requires sensitivity to structural information because Lisp code can be regarded as tree-structured. Given a natural-language query, we need to generate code containing function calls with parameters. Each function call is a relational tuple, which has a function as the relation and parameters as arguments. We evaluate our model on the AlgoLisp dataset for this task and achieve state-of-the-art performance. The AlgoLisp dataset BIBREF17 is a program synthesis dataset. Each sample contains a problem description, a corresponding Lisp program tree, and 10 input-output testing pairs. We parse the program tree into a straight-line sequence of tuples (same style as in MathQA). AlgoLisp provides an execution script to run the generated program and has three evaluation metrics: the accuracy of passing all test cases (Acc), the accuracy of passing 50% of test cases (50p-Acc), and the accuracy of generating an exactly matching program (M-Acc). AlgoLisp has about 10% noisy data (details in the Appendix), so we report results both on the full test set and the cleaned test set (in which all noisy testing samples are removed). TP-N2F is compared with an LSTM seq2seq with attention model, the Seq2Tree model in BIBREF17, and a seq2seq model with a pre-trained tree decoder from the Tree2Tree autoencoder (SAPS) reported in BIBREF18. As shown in Table TABREF18, TP-N2F outperforms all existing models on both the full test set and the cleaned test set. Ablation experiments with TP2LSTM and LSTM2TP show that, for this task, the TP-N2F Decoder is more helpful than the TP-N2F Encoder. This may be because Lisp code relies more heavily on structural representations.
<<</Generating program trees from natural-language descriptions>>>
<<<Interpretation of learned structure>>>
To interpret the structure learned by the model, we extract the trained unbinding relation vectors from the TP-N2F Decoder and reduce the dimension of vectors via Principal Component Analysis. K-means clustering results on the average vectors are presented in Figure FIGREF71 and Figure FIGREF72 (in Appendix A.6). Results show that unbinding vectors for operators or functions with similar semantics tend to be close to each other. For example, with 5 clusters in the MathQA dataset, arithmetic operators such as add, subtract, multiply, divide are clustered together, and operators related to square or volume of geometry are clustered together. With 4 clusters in the AlgoLisp dataset, partial/lambda functions and sort functions are in one cluster, and string processing functions are clustered together. Note that there is no direct supervision to inform the model about the nature of the operations, and the TP-N2F decoder has induced this role structure using weak supervision signals from question/operation-sequence-answer pairs. More clustering results are presented in the Appendix A.6.
<<</Interpretation of learned structure>>>
<<</EXPERIMENTS>>>
<<<Related work>>>
N2F tasks include many different subtasks such as symbolic reasoning or semantic parsing BIBREF19, BIBREF20, BIBREF21, BIBREF16, BIBREF17, BIBREF18. These tasks require models with strong structure-learning ability. TPR is a promising technique for encoding symbolic structural information and modeling symbolic reasoning in vector space. TPR binding has been used for encoding and exploring grammatical structural information of natural language BIBREF7, BIBREF9. TPR unbinding has also been used to generate natural language captions from images BIBREF8. Some researchers use TPRs for modeling deductive reasoning processes both on a rule-based model and deep learning models in vector space BIBREF22, BIBREF11, BIBREF12. However, none of these previous models takes advantage of combining TPR binding and TPR unbinding to learn structure representation mappings explicitly, as done in our model. Although researchers are paying increasing attention to N2F tasks, most of the proposed models either do not encode structural information explicitly or are specialized to particular tasks. Our proposed TP-N2F neural model can be applied to many tasks.
<<</Related work>>>
<<<CONCLUSION AND FUTURE WORK>>>
In this paper we propose a new scheme for neural-symbolic relational representations and a new architecture, TP-N2F, for formal-language generation from natural-language descriptions. To our knowledge, TP-N2F is the first model that combines TPR binding and TPR unbinding in the encoder-decoder fashion. TP-N2F achieves the state-of-the-art on two instances of N2F tasks, showing significant structure learning ability. The results show that both the TP-N2F encoder and the TP-N2F decoder are important for improving natural- to formal-language generation. We believe that the interpretation and symbolic structure encoding of TPRs are a promising direction for future work. We also plan to combine large-scale deep learning models such as BERT with TP-N2F to take advantage of structure learning for other generation tasks.
<<</CONCLUSION AND FUTURE WORK>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nINTRODUCTION\nBackground: Review of Tensor-Product Representation\nTP-N2F Model\nRole-level description of N2F tasks\nRole-level description for natural-language input\nRole-level description for relational representations\nThe TP-N2F Scheme for Learning the input-output mapping\nThe TP-N2F Model for Natural- to Formal-Language Generation\nThe TP-N2F natural-language Encoder\nThe TP-N2F Relational-Tuple Decoder\nInference and The Learning Strategy of the TP-N2F Model\nEXPERIMENTS\nGenerating operation sequences to solve math problems\nGenerating program trees from natural-language descriptions\nInterpretation of learned structure\nRelated work\nCONCLUSION AND FUTURE WORK"
],
"type": "outline"
}
|
1908.11860
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification
<<<Abstract>>>
Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Based Sentiment Analysis (ABSA), which has many applications e.g. in e-commerce, where data and insights from reviews can be leveraged to create value for businesses and customers. Recently, deep transfer-learning methods have been applied successfully to a myriad of Natural Language Processing (NLP) tasks, including ATSC. Building on top of the prominent BERT language model, we approach ATSC using a two-step procedure: self-supervised domain-specific BERT language model finetuning, followed by supervised task-specific finetuning. Our findings on how to best exploit domain-specific language model finetuning enable us to produce new state-of-the-art performance on the SemEval 2014 Task 4 restaurants dataset. In addition, to explore the real-world robustness of our models, we perform cross-domain evaluation. We show that a cross-domain adapted BERT language model performs significantly better than strong baseline models like vanilla BERT-base and XLNet-base. Finally, we conduct a case study to interpret model prediction errors.
<<</Abstract>>>
<<<Introduction>>>
Sentiment Analysis (SA) is an active field of research in Natural Language Processing and deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity.
In constrast to SA, the more fine-grained task of Aspect Based Sentiment Analysis (ABSA) BIBREF0, BIBREF1 aims at finding both the aspect of an entity like a restaurant and the sentiment associated with this aspect.
It is important to note that ABSA comes in two variants. We will use the sentence “I love their dumplings” to explain these variants in detail.
Both variants are implemented as a two-step procedure. The first variant is comprised of Aspect-Category Detection (ACD) followed by Aspect-Category Sentiment Classification (ACSC). ACD is a multilabel classification task, where a sentence can be associated with a set of predefined aspect categories like "food" and "service" in the restaurants domain. In the second step, ACSC, the sentiment polarity associated to the aspect is classified. For our example-sentence the correct result is (“food”, “positive”).
The second variant consists of Aspect-Target Extraction (ATE) followed by Aspect-Target Sentiment Classification (ATSC). ATE is a sequence labeling task, where terms like “dumplings” are detected. In the second step, ATSC, the sentiment polarity associated to the aspect-target is determined. In our example the correct result is the tuple ("dumplings", "positive").
In this work, we focus on ATSC. In the last years, specialized neural architectures BIBREF2, BIBREF3 have been developed that substantially improved modeling of this target-context relationship. More recently, the Natural Language Processing community experienced a substantial shift towards using pre-trained language models BIBREF4, BIBREF5, BIBREF6, BIBREF7 as a base for many down-stream tasks, including ABSA BIBREF8, BIBREF9, BIBREF10. We still see huge potential that comes with this trend, this is why we approach the ATSC task using the BERT architecture.
As shown by BIBREF9, for the ATSC task the performance of models that were pre-trained on general text corpora is improved substantially by finetuning the model on domain-specific corpora — in their case review corpora — that have not been used for pre-training BERT, or other language models.
We extend the work by Xu et al. by further investigating the behavior of finetuning the BERT language model in relation to ATSC performance. In particular, our contributions are:
The analysis of the influence of the amount of training-steps used for BERT language model finetuning on the Aspect-Target Sentiment Classification performance.
The findings on how to exploit BERT language model finetuning enables us to achieve new state-of-the-art performance on the SemEval 2014 restaurants dataset.
The analysis of cross-domain adaptation between the laptops and restaurants domain. Adaptation is tested by finetuning the BERT language model self-supervised on the target-domain and then supervised training on the ATSC task in the source-domain. In addition, the performance of training on the combination of both datasets is measured.
<<</Introduction>>>
<<<Related Works>>>
We separate our discussion of related work into two areas: First, neural methods applied to ATSC that have improved performance solely by model architecture improvements. Secondly, methods that additionally aim to transfer knowledge from semantically related tasks or domains.
<<<Architecture Improvements for Aspect-Target Sentiment Classification>>>
The datasets typically used for Aspect-Target Sentiment Classification are the SemEval 2014 Task 4 datasets BIBREF1 for the restaurants and laptops domain. Unfortunately, both datasets only have a small number of training examples. One common approach to compensate for insufficient training examples is to invent neural architectures that better model ATSC. For example, in the past a big leap in classification performance was achieved with the use of the Memory Network architecture BIBREF3, which uses memory to remember context words and explicitly models attention over both the target word and context. It was found that making full use of context words improves their model compared to previous models BIBREF2 that make use of left- and right-sided context independently.
BIBREF8 proposed Attention Encoder Networks (AEN), a modification to the transformer architecture. The authors split the Multi-Head Attention (MHA) layers into Intra-MHA and Inter-MHA layers in order to model target words and context differently, which results in a more lightweight model compared to the transformer architecture.
Another recent performance leap was achieved by BIBREF11, who model dependencies between sentiment words explicitly in sentences with more than one aspect-target by using a graph convolutional neural network. They show that their architecture performs particularly well if multiple aspects are present in a sentence.
<<</Architecture Improvements for Aspect-Target Sentiment Classification>>>
<<<Knowledge Transfer for Aspect-Target Sentiment Classification Analysis>>>
Another approach to compensate for insufficient training examples is to transfer knowledge across domains or across similar tasks.
BIBREF12 proposed Multi-Granularity Alignment Networks (MGAN). They use this architecture to transfer knowledge from both an aspect-category classification task and also across different domains. They built a large scale aspect-category dataset specifically for this.
BIBREF13 transfer knowledge from a document-level sentiment classification task trained on the amazon review dataset BIBREF14. They successfully apply pre-training by reusing the weights of a Long Short Term Memory (LSTM) network BIBREF15 that has been trained on the document-level sentiment task. In addition, they apply multi-task learning where aspect and document-level tasks are learned simultaneously by minimizing a joint loss function.
Similarly, BIBREF9 introduce a multi-task loss function to simultaneously optimize the BERT model's BIBREF7 pre-training objectives as well as a question answering task.
In contrast to the methods described above that aim to transfer knowledge from a different source task like question answering or document-level sentiment classification, this paper aims at transferring knowledge across different domains by finetuning the BERT language model.
<<</Knowledge Transfer for Aspect-Target Sentiment Classification Analysis>>>
<<</Related Works>>>
<<<Methodology>>>
We approach the Aspect-Target Sentiment Classification task using a two-step procedure. We use the pre-trained BERT architecture as a basis. In the first step we finetune the pre-trained weights of the language model further in a self-supervised way on a domain-specific corpus. In the second step we train the finetuned language model in a supervised way on the ATSC end-task.
In the following subsections, we discuss the BERT architecture, how we finetune the language model, and how we transform the ATSC task into a BERT sequence-pair classification task BIBREF10. Finally, we discuss the different end-task training and domain-specific finetuning combinations we employ to evaluate our model's generalization performance not only in-domain but also cross-domain.
<<<BERT>>>
The BERT model builds on many previous innovations: contextualized word representations BIBREF4, the transformer architecture BIBREF16, and pre-training on a language modeling task with subsequent end-to-end finetuning on a downstream task BIBREF5, BIBREF6. Due to being deeply bidirectional, the BERT architecture creates very powerful sequence representations that perform extremely well on many downstream tasks BIBREF7.
The main innovation of BERT is that instead of using the objective of next-word prediction a different objective is used to train the language model. This objective consists of 2 parts.
The first part is the masked language model objective, where the model learns to predict tokens, which have been randomly masked, from the context.
The second part is the next-sequence prediction objective, where the model needs to predict if a sequence $B$ would naturally follow the previous sequence $A$. This objective enables the model to capture long-term dependencies better. Both objectives are discussed in more detail in the next section.
As a base for our experiments we use the BERTBASE model, which has been pre-trained by the Google research team. It has the following parameters: 12 layers, 768 hidden dimensions per token and 12 attention heads. It has 110 Mio. parameters in total.
For finetuning the BERT language model on a specific domain we use the weights of BERTBASE as a starting point.
<<</BERT>>>
<<<BERT Language Model Finetuning>>>
As the first step of our procedure we perform language model finetuning of the BERT model using domain-specific corpora. Algorithmically, this is equivalent to pre-training. The benefit of domain-specific language model finetuning as an intermediate step towards ATSC has been shown by BIBREF9. As an extension to their paper, we investigate the limits of language model finetuning in terms of how end-task performance depends on the number of training steps.
The training input representation for language model finetuning consists of two sequences $s_A$ and $s_B$ in the format of $"\textrm {[CLS]} \ s_{A} \ \textrm {[SEP]} \ s_{B} \ \textrm {[SEP]}"$, where [CLS] is a dummy token used for downstream classification and [SEP] are separator tokens.
<<<Masked Language Model Objective>>>
The sequences $A$ and $B$ have tokens randomly masked out in order for the model to learn to predict them. The following example shows why domain-specific finetuning can alleviate the bias from pre-training on a Wikipedia corpus: "The touchscreen is an [MASK] device". In the fact-based context of Wikipedia the [MASK] could be "input" and in the review domain a typical guess could be the general opinion word "amazing".
<<</Masked Language Model Objective>>>
<<<Next-Sentence Prediction>>>
In order to train BERT to capture long-term dependencies better, the model is trained to predict if sequence $B$ follows sequence $A$. If this is the case, sequence A and sequence B are jointly sampled from the same document in the order they are occuring naturally. Otherwise the sequences are sampled randomly from the training corpus.
<<</Next-Sentence Prediction>>>
<<</BERT Language Model Finetuning>>>
<<<Aspect-Target Sentiment Classification>>>
The ATSC task aims at classifying sentiment polarity into the three classes positive, negative, neutral with respect to an aspect-target. The input to the classifier is a tokenized sentence $s=s_{1:n}$ and a target $t=s_{j:j+m}$ contained in the sentence, where $j < j+m \le n$. Similar to previous work by BIBREF10, we transform the input into a format compatible with BERT sequence-pair classification tasks: $"\textrm {[CLS]} \ s \ \textrm {[SEP]} \ t \ \textrm {[SEP]}"$.
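As an illustration (not the authors' code), the sequence-pair input can be produced with a standard BERT tokenizer, e.g. from the Hugging Face transformers library; whether the authors used this exact tooling is an assumption.

```python
# Hedged sketch of the "[CLS] s [SEP] t [SEP]" input construction for ATSC.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence = "I love their dumplings"
target = "dumplings"
# Sequence-pair encoding produces the [CLS]/[SEP] layout with matching segment ids.
encoded = tokenizer(sentence, target, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist()))
```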
In the BERT architecture the position of the token embeddings is structurally maintained after each Multi-Head Attention layer. Therefore, we refer to the last hidden representation of the [CLS] token as $h_{[CLS]} \in \mathbf {R}^{768 \times 1}$. The number of sentiment polarity classes is three. A distribution $p \in [0,1]^3$ over these classes is predicted using a fully-connected layer with 3 output neurons on top of $h_{[CLS]}$, followed by a softmax activation function: $p = \text{softmax}(W h_{[CLS]} + b),$
where $b \in \mathbf {R}^3$ and $W \in \mathbf {R}^{3 \times 768}$. Cross-entropy is used as the training loss. The way we use BERT for classifying the sentiment polaritites is equivalent to how BERT is used for sequence-pair classification tasks in the original paper BIBREF7.
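A minimal PyTorch sketch of this head, under the assumption that the 768-dimensional [CLS] representation has already been extracted from the BERT encoder:

```python
# Hedged sketch of the 3-way sentiment classification head on top of h_[CLS].
import torch
import torch.nn as nn

classifier = nn.Linear(768, 3)                       # W in R^{3x768}, b in R^3

def predict_polarity(h_cls):                         # h_cls: (batch, 768) [CLS] states
    return torch.softmax(classifier(h_cls), dim=-1)  # p in [0,1]^3 over pos/neg/neutral
```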
<<</Aspect-Target Sentiment Classification>>>
<<<Domain Adaptation through Language Model Finetuning>>>
In academia, it is common that the performance of a machine learning model is evaluated in-domain. This means that the model is evaluated on a test set that comes from the same distribution as the training set. In real-world applications this setting is not always valid, as the trained model is used to predict previously unseen data.
In order to evaluate the performance of a machine learning model more robustly, its generalization error can be evaluated across different domains, i.e. cross-domain. Additionally, the model itself can be adapted towards a target domain. This is known as Domain Adaptation, which is a special case of Transductive Transfer Learning in the taxonomy of BIBREF17. Here, it is typically assumed that supervised data for a specific task is only available for a source domain $S$, whereas only unsupervised data is available in the target domain $T$. The goal is to optimize performance of the task in the target domain while transferring task-specific knowledge from the source domain.
If we map this framework to our challenge, we define Aspect-Target Sentiment Classification as the transfer-task and BERT language model finetuning is used for domain adaptation. In terms of on which domain is finetuned on, the full transfer-procedure can be expressed in the following way:
Here, $D_{LM}$ stands for the domain on which the language model is finetuned and can take on the values of Restaurants, Laptops or (Restaurants $\cup $ Laptops). The domain for training $D_{Train}$ can take on the same values; for the joint case, the training datasets for laptops and restaurants are simply combined. The domain for testing $D_{Test}$ can only take on the values Restaurants or Laptops.
Combining finetuning and training steps gives us nine different evaluation scenarios, which we group into the following four categories:
<<</Domain Adaptation through Language Model Finetuning>>>
<<<In-Domain Training>>>
ATSC is trained on a domain-specific dataset and evaluated on the test set from the same domain. This can be expressed as
$D_{LM} \rightarrow T \rightarrow T,$ where $T$ is our target domain and can be either Laptops or Restaurants. It is expected that the performance of the model is best if $D_{LM} = T$.
<<</In-Domain Training>>>
<<<Cross-Domain Training>>>
ATSC is trained on a domain-specific dataset and evaluated on the test set from the other domain. This can be expressed as
$D_{LM} \rightarrow S \rightarrow T,$ where $S\ne T$ are source and target domain and can be either Laptops or Restaurants.
<<</Cross-Domain Training>>>
<<<Cross-Domain Adaptation>>>
As a special case of cross-domain Training we expect performance to be optimal if $D_{LM} = T$. This is the variant of Domain Adaptation and is written as
$T \rightarrow S \rightarrow T.$
<<</Cross-Domain Adaptation>>>
<<<Joint-Domain Training>>>
ATSC is trained on both domain-specific datasets jointly and evaluated on both test sets independently. This can be expressed as
$D_{LM} \rightarrow (S \cup T) \rightarrow T,$ where $S\ne T$ are source- and target domain and can either be Laptops or Restaurants.
<<</Joint-Domain Training>>>
<<</Methodology>>>
<<<Experiments>>>
In our experiments we aim to answer the following research questions (RQs):
RQ1: How does the number of training iterations in the BERT language model finetuning stage influence the ATSC end-task performance? At what point does performance start to improve, when does it converge?
RQ2: When trained in-domain, what ATSC end-task performance can be reached through fully exploited finetuning of the BERT language model?
RQ3: When trained cross-domain in the special case of domain adaptation, what ATSC end-task performance can be reached if BERT language model finetuning is fully exploited?
<<<Datasets for Classification and Language Model Finetuning>>>
We conduct experiments using the two SemEval 2014 Task 4 Subtask 2 datasets BIBREF1 for the laptops and the restaurants domain. The two datasets contain sentences with multiple marked aspect terms that each have a 3-level sentiment polarity (positive, neutral or negative) associated. In the original dataset the conflict label is also present. Here, conflicting labels are dropped for reasons of comparability with BIBREF9. Both datasets are small, detailed statistics are shown in tab:datasets.
For BERT language model finetuning we prepare three corpora for the two domains of laptops and restaurants. For the restaurants domain we use Yelp Dataset Challenge reviews and for the laptops domain we use Amazon Laptop reviews BIBREF14. For the laptop domain we filtered out reviews that appear in the SemEval 2014 laptops dataset to avoid training bias for the test data. To be compatible with the next-sentence prediction task used during fine tuning, we removed reviews containing less than two sentences.
For the laptop corpus, $1,007,209$ sentences are left after pre-processing. For the restaurants domain more reviews are available; we sampled $10,000,000$ sentences to have a sufficient amount of data for fully exploited language model finetuning. In order to compensate for the smaller amount of finetuning data in the laptops domain, we finetune for more epochs, 30 epochs in the case of the laptops domain compared to 3 epochs for the restaurants domain, so that the BERT model trains on about 30 million sentences in both cases. This means that 1 sentence can be seen multiple times with a different language model masking.
We also create a mixed corpus to jointly finetune both domains. Here, we sample 1 Mio. restaurant reviews and combine them with the laptop reviews. This results in about 2 Mio. reviews that are finetuned for 15 epochs. The exact statistics for the three finetuning corpora are shown in the top of tab:datasets.
To be able to reproduce our finetuning corpora, we make the code that is used to generate them available online.
<<</Datasets for Classification and Language Model Finetuning>>>
<<<Hyperparameters>>>
We use BERTBASE (uncased) as the base for all of our experiments, with the exception of XLNetBASE (cased), which is used as one of the baseline models.
For the BERT language model finetuning we use 32 bit floating point computations with the Adam optimizer BIBREF18. The batchsize is set to 32 while the learning rate is set to $3\cdot 10^{-5}$. The maximum input sequence length is set to 256 tokens, which amounts to about 4 sentences per sequence on average. As shown in tab:datasets, we finetune the language models on each domain so that the model trains on a total of about 30 Mio. sentences (7.5 Mio. sequences).
For training the BERT and XLNet models on the down-stream task of ATSC we use mixed 16 bit and 32 bit floating point computations, the Adam optimizer, and a learning rate of $3\cdot 10^{-5}$ and a batchsize of 32. We train the model for a total of 7 epochs. The validation accuracy converges after about 3 epochs of training on all datasets, but training loss still improves after that.
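For reference, the stated hyperparameters can be summarized as plain config dicts; this is a condensed restatement of the values in the text, not a complete training setup.

```python
# Hedged summary of the reported hyperparameters (trainer wiring omitted).
lm_finetuning = {"optimizer": "Adam", "lr": 3e-5, "batch_size": 32,
                 "max_seq_len": 256, "precision": "fp32"}
atsc_training = {"optimizer": "Adam", "lr": 3e-5, "batch_size": 32,
                 "epochs": 7, "precision": "mixed fp16/fp32"}
```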
It is important to note that all the results we report are the average of 9 runs with different random initializations. This is needed to measure the significance of improvements, as the standard deviation in accuracy amounts to roughly $1\%$ for all experiments, see fig:acc-dep-lmiterations.
<<</Hyperparameters>>>
<<<Compared Methods>>>
We compare in-domain results to current state of the art methods, which we will now describe briefly.
SDGCN-BERT BIBREF11 explicitly models sentiment dependencies for sentences with multiple aspects with a graph convolutional network. This method is current state-of-the-art on the SemEval 2014 laptops dataset.
AEN-BERT BIBREF8 is an attentional encoder network. When used on top of BERT embeddings this method performs especially well on the laptops dataset.
BERT-SPC BIBREF8 is BERT used in sentence-pair classification mode. This is exactly the same method as our BERT-base baseline and therefore, we can cross-check the authors results.
BERT-PT BIBREF9 uses multi-task fine-tuning prior to downstream classification, where the BERT language model is finetuned jointly with a question answering task. It performs state-of-the-art on the restaurants dataset prior to this paper.
To our knowledge, cross- and joint-domain training on the SemEval 2014 Task 4 datasets has not been analyzed so far. Thus, we compare our method to two very strong baselines: BERT and XLNet.
BERT-base BIBREF7 is using the pre-trained BERTBASE embeddings directly on the down-stream task without any domain specific language model finetuning.
XLNet-base BIBREF19 is a method also based on general language model pre-training similar to BERT. Instead of randomly masking tokens for pre-training like in BERT, a more general permutation objective is used, where all possible variants of masking are fully exploited.
Our models are BERT models whose language model has been finetuned on different domain corpora.
BERT-ADA Lapt is the BERT language model finetuned on the laptops domain corpus.
BERT-ADA Rest is the BERT language model finetuned on the restaurant domain corpus.
BERT-ADA Joint is the BERT language model finetuned on the corpus containing an equal amount of laptops and restaurants reviews.
<<</Compared Methods>>>
<<<Results Analysis>>>
The results of our experiments are shown in fig:acc-dep-lmiterations and tab:results respectively.
To answer RQ1, which is concerned with details on domain-specific language model finetuning, we can see in fig:acc-dep-lmiterations that first of all, language model finetuning has a substantial effect on ATSC end-task performance. Secondly, we see that in the laptops domain the performance starts to increase at about 10 Mio. finetuned sentences. This is an interesting insight as one would expect a relation closer to a logarithmic curve. One reason might be that it takes many steps to train knowledge into the BERT language model due to its vast amount of parameters. The model already converges at around 17 Mio. sentences. More finetuning does not improve performance significantly. In addition, we find that different runs have a high variance, the standard deviation amounts to about $1\%$ in accuracy, which justifies averaging over 9 runs to measure differences in model performance reliably.
To answer RQ2, which is concerned with in-domain ATSC performance, we see in tab:results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset and new state-of-the-art on the restaurants dataset with accuracies of $79.19\%$ and $87.14\%$, respectively. On the restaurants dataset, this corresponds to an absolute improvement of $2.2\%$ compared to the previous state-of-the-art method BERT-PT. Language model finetuning produces a larger improvement on the restaurants dataset. We think that one reason for that might be that the restaurants domain is underrepresented in the pre-training corpora of BERTBASE. Generally, we find that language model finetuning helps even if the finetuning domain does not match the evaluation domain. We think the reason for this might be that the BERT-base model is pre-trained more on knowledge-based corpora like Wikipedia than on text containing opinions. Another finding is that BERT-ADA Joint performs better on the laptops dataset than BERT-ADA Rest, although the unique amount of laptop reviews are the same in laptops- and joint-corpora. We think that confusion can be created when mixing the domains, but this needs to be investigated further. We also find that the XLNet-base baseline performs generally stronger than BERT-base and even outperforms BERT-ADA Lapt with an accuracy of $79.89\%$ on the laptops dataset.
To answer RQ3, which is concerned with domain adaptation, we can see in the grayed out cells in tab:results, which correspond to the cross-domain adaption case where the BERT language model is trained on the target domain, that domain adaptation works well with $2.2\%$ absolute accuracy improvement on the laptops test set and even $3.6\%$ accuracy improvement on the restaurants test set compared to BERT-base.
In general, the ATSC task generalizes well cross-domain, with about 2-$3\%$ drop in accuracy compared to in-domain training. We think the reason for this might be that syntactical relationships between the aspect-target and the phrase expressing sentiment polarity as well as knowing the sentiment-polarity itself are sufficient to solve the ATSC task in many cases.
For the joint-training case, we find that combining both training datasets improves performance on both test sets. This result is intuitive, as more training data leads to better performance if the domains do not confuse each other. Interestingly, in the joint-training case the BERT-ADA Joint model performs especially strongly when measured by the Macro-F1 metric. A reason for this might be that the SemEval 2014 datasets are imbalanced due to the dominance of the positive label. It seems that through finetuning the language model on both domains the model learns to classify the neutral class much better, especially in the laptops domain.
<<</Results Analysis>>>
<<</Experiments>>>
<<<Conclusion>>>
We performed experiments on the task of Aspect-Target Sentiment Classification by first finetuning a pre-trained BERT model on a domain specific corpus with subsequent training on the down-stream classification task.
We analyzed the behavior of the number of domain-specific BERT language model finetuning steps in relation to the end-task performance.
With the findings on how to best exploit BERT language model finetuning we were able to train high-performing models, one of which even sets a new state of the art on the SemEval 2014 Task 4 restaurants dataset.
We further evaluated our models cross-domain to explore the robustness of Aspect-Target Sentiment Classification. We found that in general, this task transfers well between the laptops and the restaurants domain.
As a special case we ran cross-domain adaptation experiments, where the BERT language model is specifically finetuned on the target domain. We achieve a significant improvement over unadapted models: a cross-domain adapted model performs even better than a BERT-base model that is trained in-domain.
Overall, our findings reveal promising directions for follow-up work. The XLNet-base model performs strongly on the ATSC task. Here, domain-specific finetuning could probably bring significant performance improvements. Another interesting direction for future work would be to investigate cross-domain behavior for an additional domain like hotels, which is more similar to the restaurants domain. Here, it could be interesting to find out whether the shared nature of these domains would result in more confusion or whether they would behave synergistically.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Works\nArchitecture Improvements for Aspect-Target Sentiment Classification\nKnowledge Transfer for Aspect-Target Sentiment Classification Analysis\nMethodology\nBERT\nBERT Language Model Finetuning\nMasked Language Model Objective\nNext-Sentence Prediction\nAspect-Target Sentiment Classification\nDomain Adaptation through Language Model Finetuning\nIn-Domain Training\nCross-Domain Training\nCross-Domain Adaptation\nJoint-Domain Training\nExperiments\nDatasets for Classification and Language Model Finetuning\nHyperparameters\nCompared Methods\nResults Analysis\nConclusion"
],
"type": "outline"
}
|
2002.09758
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Unsupervised Question Decomposition for Question Answering
<<<Abstract>>>
We aim to improve question answering (QA) by decomposing hard questions into easier sub-questions that existing QA systems can answer. Since collecting labeled decompositions is cumbersome, we propose an unsupervised approach to produce sub-questions. Specifically, by leveraging >10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop sub-questions. We answer sub-questions with an off-the-shelf QA model and incorporate the resulting answers in a downstream, multi-hop QA system. On a popular multi-hop QA dataset, HotpotQA, we show large improvements over a strong baseline, especially on adversarial and out-of-domain questions. Our method is generally applicable and automatically learns to decompose questions of different classes, while matching the performance of decomposition methods that rely heavily on hand-engineering and annotation.
<<</Abstract>>>
<<<Introduction>>>
Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?”
Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision.
Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach.
We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark.
Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA.
<<</Introduction>>>
<<<Method>>>
We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” $a_1, \dots , a_N$ to each sub-question may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [s_N, a_N])$.
Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$.
<<<Unsupervised Question Decomposition>>>
To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6).
<<<Creating Pseudo-Decompositions>>>
For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple questions $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse:
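The equation itself did not survive extraction here; one plausible reconstruction, consistent with the two requirements just stated, is:
$d^{\prime *} = \operatornamewithlimits{arg\,max}_{d^{\prime } \subset S, |d^{\prime }| = N} \left[ \sum _{s \in d^{\prime }} f(q, s) - \sum _{s_i \ne s_j \in d^{\prime }} f(s_i, s_j) \right]$
where the first term rewards similarity to $q$ and the second term penalizes pairwise similarity among the retrieved sub-questions.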
<<<Similarity-based Retrieval>>>
To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to:
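The reduced form is likewise missing from this extraction; with cosine similarity as $f$ and $N=2$, it plausibly takes the form:
$(s_1^*, s_2^*) = \operatornamewithlimits{arg\,max}_{s_1, s_2 \in S} \left[ \left( \mathbf {\hat{v}}_{s_1} + \mathbf {\hat{v}}_{s_2} \right)^{\top } \mathbf {\hat{v}}_{q} - \mathbf {\hat{v}}_{s_1}^{\top } \mathbf {\hat{v}}_{s_2} \right]$
i.e., the chosen pair should jointly point towards $q$ while the two sub-questions remain dissimilar to each other; the final pairwise term is the one whose exhaustive computation over all of $S$ motivates the approximation described next.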
The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$.
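As an illustration of this retrieval step, the sketch below sums word vectors, normalizes them, uses FAISS for the top-$K$ restriction, and then scores candidate pairs exactly. It is a minimal reading of the description above, assuming a word_vec callable that returns 300-dimensional FastText word vectors (e.g., from a loaded cc.en.300.bin model); it is not the authors' code.

import numpy as np
import faiss  # pip install faiss-cpu

def embed(text, word_vec, dim=300):
    # Sum FastText word vectors and L2-normalize, so inner product equals cosine similarity.
    v = np.zeros(dim, dtype="float32")
    for w in text.lower().split():
        v += word_vec(w)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def pseudo_decompose(q, simple_questions, word_vec, K=1000):
    # Restrict the exact pairwise search to the top-K simple questions by
    # similarity to q, then score every candidate pair with the N=2 objective.
    S = np.stack([embed(s, word_vec) for s in simple_questions])
    index = faiss.IndexFlatIP(S.shape[1])
    index.add(S)
    v_q = embed(q, word_vec)
    _, top = index.search(v_q[None, :], min(K, len(simple_questions)))
    cand = top[0]
    best_pair, best_score = None, -np.inf
    for a in cand:
        for b in cand:
            if a == b:
                continue
            score = (S[a] + S[b]) @ v_q - S[a] @ S[b]
            if score > best_score:
                best_pair, best_score = (a, b), score
    return simple_questions[best_pair[0]], simple_questions[best_pair[1]]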
<<</Similarity-based Retrieval>>>
<<<Random Retrieval>>>
For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions.
<<</Random Retrieval>>>
<<<Editing Pseudo-Decompositions>>>
Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval.
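A rough sketch of this post-processing step is shown below; the spaCy model name and the replacement policy (a random same-typed entity from $q$, only if one exists) follow the description above, but the code is illustrative rather than the authors' implementation.

import random
import spacy

nlp = spacy.load("en_core_web_sm")

def replace_entities(sub_question, multihop_question):
    # Collect the multi-hop question's entities, grouped by entity type.
    entities_by_type = {}
    for ent in nlp(multihop_question).ents:
        entities_by_type.setdefault(ent.label_, []).append(ent.text)
    # Swap out sub-question entities that never appear in the multi-hop question,
    # if and only if a same-typed replacement exists.
    edited = sub_question
    for ent in nlp(sub_question).ents:
        if ent.text not in multihop_question and entities_by_type.get(ent.label_):
            edited = edited.replace(ent.text, random.choice(entities_by_type[ent.label_]))
    return edited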
<<</Editing Pseudo-Decompositions>>>
<<</Creating Pseudo-Decompositions>>>
<<<Learning to Decompose>>>
Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section):
<<<No Learning>>>
We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA.
<<</No Learning>>>
<<<Sequence-to-Sequence (Seq2Seq)>>>
We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$.
<<</Sequence-to-Sequence (Seq2Seq)>>>
<<<Unsupervised Sequence-to-Sequence (USeq2Seq)>>>
We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing.
<<</Unsupervised Sequence-to-Sequence (USeq2Seq)>>>
<<</Learning to Decompose>>>
<<</Unsupervised Question Decomposition>>>
<<<Answering Sub-Questions>>>
To answer the generated sub-questions, we use an off-the-shelf QA model. The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$.
<<</Answering Sub-Questions>>>
<<<QA using Decompositions>>>
Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above.
<<</QA using Decompositions>>>
<<</Method>>>
<<<Experimental Setup>>>
<<<Question Answering Task>>>
We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans.
<<</Question Answering Task>>>
<<<Unsupervised Decomposition>>>
<<<Question Data>>>
We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts.
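The sketch below illustrates the mining-and-routing step: a heuristic filter for question-like Common Crawl sentences, followed by a FastText classifier that routes each question to the single-hop or multi-hop corpus. File names, label strings, and hyperparameters are placeholders, not the paper's exact settings.

import fasttext  # pip install fasttext

WH_WORDS = ("what", "who", "whom", "whose", "when", "where", "which", "why", "how")

def looks_like_question(sentence):
    s = sentence.strip().lower()
    return s.endswith("?") and s.startswith(WH_WORDS)

# Training file with one labeled question per line, e.g.:
# "__label__squad what is the capital of france ?"
# "__label__hotpot which magazine was started first , arthur's magazine or first for women ?"
classifier = fasttext.train_supervised(input="question_sources.txt", epoch=5, wordNgrams=2)

def route(sentence):
    # Returns the predicted source label, or None if the sentence is filtered out.
    if not looks_like_question(sentence):
        return None
    labels, _ = classifier.predict(sentence.strip().lower())
    return labels[0]  # e.g. "__label__squad" -> add to S; "__label__hotpot" -> add to Q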
<<</Question Data>>>
<<<Unsupervised Decomposition Models>>>
<<<Pre-training>>>
Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6.
<<</Pre-training>>>
<<<Seq2Seq>>>
We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions.
<<</Seq2Seq>>>
<<<USeq2Seq>>>
We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$).
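For concreteness, the denoising corruption (random masking, dropping, and local shuffling of tokens) could look roughly like the sketch below; the noise rates and window size are illustrative and are not the paper's hyperparameters.

import random

def add_noise(tokens, drop_p=0.1, mask_p=0.1, shuffle_window=3, mask_token="<mask>"):
    # Randomly drop or mask individual tokens.
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < drop_p:
            continue
        noisy.append(mask_token if r < drop_p + mask_p else tok)
    # Local shuffle: each surviving token moves at most shuffle_window positions.
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(noisy))]
    return [tok for _, tok in sorted(zip(keys, noisy), key=lambda pair: pair[0])]

# A model with parameters theta is then trained to reconstruct the original sequence,
# maximizing log p_theta(d | add_noise(d)) and log p_theta(q | add_noise(q)).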
<<</USeq2Seq>>>
<<</Unsupervised Decomposition Models>>>
<<</Unsupervised Decomposition>>>
<<<Single-hop Question Answering Model>>>
We train our single-hop QA model following prior work from BIBREF3 on HotpotQA.
<<<Model Architecture>>>
We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows:
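The formula itself is missing from this extraction; a plausible reconstruction, consistent with subtracting the paragraph-level “no answer” logit and taking a single softmax over spans from all paragraphs, is:
$p(s_p) = \frac{\exp \left( l(s_p) - n(p) \right)}{\sum _{p^{\prime } = 1}^{P} \sum _{s_{p^{\prime }}} \exp \left( l(s_{p^{\prime }}) - n(p^{\prime }) \right)}$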
We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3.
<<</Model Architecture>>>
<<<Training Data and Ensembling>>>
Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions.
<<</Training Data and Ensembling>>>
<<<Returned Text>>>
We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions.
<<</Returned Text>>>
<<<Sub-Answer Confidence>>>
Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question.
<<</Sub-Answer Confidence>>>
<<<Changing the Single-hop QA Model>>>
We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble).
<<</Changing the Single-hop QA Model>>>
<<</Single-hop Question Answering Model>>>
<<<Multi-hop Question Answering Model>>>
Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions.
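As an illustration of this input construction, the sketch below appends each (sub-question, sub-answer) pair in order to the multi-hop question; the exact separator tokens used by the authors are an assumption (a generic “[SEP]”-style marker is shown here).

def build_multihop_input(question, sub_qas, sep=" [SEP] "):
    # sub_qas: ordered list of (sub_question, sub_answer_sentence) pairs.
    parts = [question]
    for sub_q, sub_a in sub_qas:
        parts.extend([sub_q, sub_a])
    return sep.join(parts)

# Example (question taken from the paper's introduction):
# build_multihop_input(
#     "What profession do H. L. Mencken and Albert Camus have in common?",
#     [("What profession does H. L. Mencken have?", "Mencken was a journalist and essayist."),
#      ("Who was Albert Camus?", "Camus was a French philosopher, author, and journalist.")])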
<<<Varying the Base Model>>>
To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model.
<<</Varying the Base Model>>>
<<</Multi-hop Question Answering Model>>>
<<</Experimental Setup>>>
<<<Results on Question Answering>>>
We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions.
We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version.
<<<Main Results>>>
Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set).
More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set).
The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing.
After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question.
<<</Main Results>>>
<<<Question Type Breakdown>>>
To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. “Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types.
Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions).
<<</Question Type Breakdown>>>
<<<Answers to Sub-Questions are Crucial>>>
To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1).
<<</Answers to Sub-Questions are Crucial>>>
<<<How Do Decompositions Help?>>>
Decompositions help to answer questions by retrieving important supporting evidence to answer questions. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts” or sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22.
<<</How Do Decompositions Help?>>>
<<<Example Decompositions>>>
To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA.
<<</Example Decompositions>>>
<<</Results on Question Answering>>>
<<<Analysis>>>
<<<Unsupervised Decomposition Model>>>
<<<Intrinsic Evaluation of Decompositions>>>
We evaluate the quality of decompositions on other metrics aside from downstream QA. To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenshtein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table .
Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive.
<<</Intrinsic Evaluation of Decompositions>>>
<<<Quality of Decomposition Model>>>
Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate $N=5$ hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition.
<<</Quality of Decomposition Model>>>
<<</Unsupervised Decomposition Model>>>
<<</Analysis>>>
<<<Related Work>>>
Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering.
More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach.
Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44.
<<</Related Work>>>
<<<Conclusion>>>
We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nMethod\nUnsupervised Question Decomposition\nCreating Pseudo-Decompositions\nSimilarity-based Retrieval\nRandom Retrieval\nEditing Pseudo-Decompositions\nLearning to Decompose\nNo Learning\nSequence-to-Sequence (Seq2Seq)\nUnsupervised Sequence-to-Sequence (USeq2Seq)\nAnswering Sub-Questions\nQA using Decompositions\nExperimental Setup\nQuestion Answering Task\nUnsupervised Decomposition\nQuestion Data\nUnsupervised Decomposition Models\nPre-training\nSeq2Seq\nUSeq2Seq\nSingle-hop Question Answering Model\nModel Architecture\nTraining Data and Ensembling\nReturned Text\nSub-Answer Confidence\nChanging the Single-hop QA Model\nMulti-hop Question Answering Model\nVarying the Base Model\nResults on Question Answering\nMain Results\nQuestion Type Breakdown\nAnswers to Sub-Questions are Crucial\nHow Do Decompositions Help?\nExample Decompositions\nAnalysis\nUnsupervised Decomposition Model\nIntrinsic Evaluation of Decompositions\nQuality of Decomposition Model\nRelated Work\nConclusion"
],
"type": "outline"
}
|
1912.08320
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?
<<<Abstract>>>
Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing --- specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data --- give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place.
<<</Abstract>>>
<<<Introduction>>>
Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.
<<<Study overview>>>
All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.
As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting on the study and not the actual study itself, and many papers either do not discuss such details at all or do not provide sufficient detail for us to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information”, which is a concerning issue given how crucial such information is in understanding the validity of the training dataset and, by extension, the validity of the classifier.
<<</Study overview>>>
<<</Introduction>>>
<<<Literature review and motivation>>>
<<<A different kind of “black-boxing” in machine learning>>>
In the introduction, we noted that training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation, such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8.
In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9. They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation.
In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15.
<<</A different kind of “black-boxing” in machine learning>>>
<<<Content analysis>>>
Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.
Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways.
<<</Content analysis>>>
<<<Meta-research and methods papers in linguistics and crowdsourcing>>>
Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31.
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers, sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorff's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly.
<<</Meta-research and methods papers in linguistics and crowdsourcing>>>
<<<The data documentation movements>>>
Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47.
A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, “model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion.
<<</The data documentation movements>>>
<<</Literature review and motivation>>>
<<<Data and methods>>>
<<<Data: machine learning papers performing classification tasks on Twitter data>>>
Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets; or 2) that the results were so narrow they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Sampling to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more.
We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repositories, originally for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which mostly selected from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined.
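The keyword filter described above can be expressed as a small predicate over titles and abstracts, sketched below; treating “classif*” and “supervi*” as prefix matches is an interpretation of the wildcard notation, and the code is illustrative rather than the authors' query.

import re

ML_PATTERN = re.compile(r"machine learning|classif\w*|supervi\w*", re.IGNORECASE)
TWITTER_PATTERN = re.compile(r"\btwitter\b|\btweet\w*", re.IGNORECASE)

def in_scope(title, abstract):
    # A paper is kept if its title or abstract matches both keyword groups.
    text = f"{title} {abstract}"
    return bool(ML_PATTERN.search(text)) and bool(TWITTER_PATTERN.search(text))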
ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings. However, given that such papers are routinely discussed in both academic literature and the popular press means that issues with their reporting of training data is just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47), an analysis of the publishers and publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo.
<<</Data: machine learning papers performing classification tasks on Twitter data>>>
<<<Labeling team, training, and workflow>>>
Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of an university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics.
The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected. The five students independently reviewed and labeled the same papers each week, using a different web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a pure majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The papers labeled for the first two weeks were in a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined.
<<</Labeling team, training, and workflow>>>
<<<Second round verification and reconciliation>>>
After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes were made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final. For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57.
Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would be concerning had we had a “yes” for such a variable, but found no such cases. We recoded questions about pre-screening for crowdwork platforms (implied by using crowdworkers in original human annotation source) and the number of human annotators.
We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorff's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were considerably higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52.
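A minimal sketch of this agreement metric, as described above (not the authors' analysis code):

def percent_total_agreement(labels_per_item):
    # labels_per_item: one list of labeler-assigned labels per paper, for a single question.
    agreed = sum(1 for labels in labels_per_item if len(set(labels)) == 1)
    return agreed / len(labels_per_item)

def mean_percent_total_agreement(labels_by_question):
    # labels_by_question: {question_name: labels_per_item for that question}.
    rates = [percent_total_agreement(items) for items in labels_by_question.values()]
    return sum(rates) / len(rates)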
<<</Second round verification and reconciliation>>>
<<<Raw and normalized information scores>>>
We quantified the information about training data in papers by developing a raw and a normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms and whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators were (annotation source), whether annotators received training, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided.
For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, they received one point per question if they included information for each of the two questions about crowdworkers if the project used crowdworkers, and one point if they reported inter-annotator metrics if the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score. We only calculated scores for papers involving original human annotation. Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiV papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiV post-prints, we also determined the publisher. We detail these in appendix SECREF47.
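A simplified sketch of the raw and normalized scoring logic described above is given below; the boolean field names are assumptions made for illustration, and the actual scripts worked from the label values in our schema rather than clean booleans.

```python
import pandas as pd

BASE_ITEMS = [  # one point each; applicable to every paper with original annotation
    "reported_annotation_source",
    "reported_annotator_training",
    "reported_instructions",
    "reported_number_of_annotators",
    "reported_multiple_overlap",
    "reported_dataset_link",
]

def information_scores(row: pd.Series) -> pd.Series:
    raw = sum(bool(row[item]) for item in BASE_ITEMS)
    possible = len(BASE_ITEMS)
    if row["used_crowdworkers"]:
        raw += bool(row["reported_prescreening"]) + bool(row["reported_compensation"])
        possible += 2
    if row["multiple_annotators_per_item"]:
        raw += bool(row["reported_irr_metric"])
        possible += 1
    return pd.Series({"raw_score": raw, "normalized_score": raw / possible})

example = pd.Series({
    "reported_annotation_source": True, "reported_annotator_training": False,
    "reported_instructions": True, "reported_number_of_annotators": True,
    "reported_multiple_overlap": True, "reported_dataset_link": False,
    "used_crowdworkers": True, "reported_prescreening": True,
    "reported_compensation": False, "multiple_annotators_per_item": True,
    "reported_irr_metric": True,
})
print(information_scores(example))  # raw_score 6.0, normalized_score ~0.67
```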
<<</Raw and normalized information scores>>>
<<</Data and methods>>>
<<<Findings>>>
<<<Original classification task>>>
The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about whether and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make predictions over a defined set of classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models.
As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category, meaning they either did not give enough detail for us to make this determination or were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions for it, which is why the counts in Table 2 add up to 143 (and accounts for some other apparent disparities in later questions).
<<</Original classification task>>>
<<<Labels from human annotation>>>
One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case that we ultimately decided did count as human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would not be applicable. Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research.
<<</Labels from human annotation>>>
<<<Used original human annotation and external human annotation>>>
Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human-annotated dataset.
<<</Used original human annotation and external human annotation>>>
<<<Original human annotation source>>>
Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.”
As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. The experts/professionals category was far larger than we expected, although we took any claim of expertise at face value. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk.
<<</Original human annotation source>>>
<<<Number of human annotators>>>
Our instructions for the question about the number of human annotators were not precise, and this question had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to record that number and to leave the field blank when no information was given. Most of the disagreement came from differences in how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into a binary indicator of whether any information about the number of human annotators was present. Both of these aspects are important to discuss, although it is arguably more important to report the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics.
As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experience, papers discussing the number of annotators typically fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) who were either the papers' authors or recruited directly by the authors, and who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work.
<<</Number of human annotators>>>
<<<Formal definitions and instructions>>>
Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and how to handle edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators make the same misunderstandings. We defined two levels: no instructions beyond the text of a question, and instructions that include definitions for each label and/or concrete examples. The paper had to describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as “no information.” Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition but only implied that it informed the labeling, which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
<<</Formal definitions and instructions>>>
<<<Training for human annotators>>>
We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions. Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions.
The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with only 15% of papers reporting a training session. Because we had quite a strict definition of what constitutes training (versus what many may think of as “trained annotators”), this is expected. We are also not especially concerned with this low number, as there are many tasks that likely do not require specialized training, unlike our project, which required both specific domain expertise and familiarity with our complicated schema.
<<</Training for human annotators>>>
<<<Pre-screening for crowdwork platforms>>>
Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them.
<<</Pre-screening for crowdwork platforms>>>
<<<Multiple annotator overlap and reporting inter-annotator agreement>>>
Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required that papers state whether all or some of the items were labeled by multiple labelers; otherwise, “no information” was recorded. Then, for papers that did have multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we kept the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics, but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (about 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates.
<<</Multiple annotator overlap and reporting inter-annotator agreement>>>
<<<Reported crowdworker compensation>>>
Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found that none mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema.
<<</Reported crowdworker compensation>>>
<<<Link to dataset available>>>
Our final question was about whether the paper contained a link to the original human-annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with only 8 of the papers (10.81%) that used original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human-annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can.
<<</Link to dataset available>>>
<<</Findings>>>
<<<Paper information scores>>>
The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe widely varying ranges and distributions of information scores, which gives evidence to the claim that there is substantial variation in practices around human annotation, training data curation, and research documentation.
<<<Overall distributions of information scores>>>
Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and around the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and the other centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05.
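Summary statistics and histograms of this kind can be reproduced with the libraries listed in the appendix; the sketch below uses a small synthetic stand-in for the real per-paper scores, so the numbers it prints are placeholders rather than our results.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy scores standing in for the real per-paper results.
scores = pd.DataFrame({
    "raw_score": [1, 2, 2, 5, 5, 6, 3, 0, 7, 4],
    "normalized_score": [0.17, 0.25, 0.33, 0.71, 0.63, 0.75, 0.43, 0.0, 0.78, 0.5],
})

print(scores.agg(["mean", "median", "std"]))

fig, (ax_raw, ax_norm) = plt.subplots(1, 2, figsize=(9, 3.5))
ax_raw.hist(scores["raw_score"], bins=range(0, 11))
ax_raw.set(title="Raw information score", xlabel="score", ylabel="papers")
ax_norm.hist(scores["normalized_score"], bins=10, range=(0, 1))
ax_norm.set(title="Normalized information score", xlabel="score")
fig.tight_layout()
plt.show()
```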
<<</Overall distributions of information scores>>>
<<<Information scores by corpus and publication type>>>
Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation as preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The right-hand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating a wide range of factors potentially at play.
<<</Information scores by corpus and publication type>>>
<<<Information scores by publisher>>>
Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. In papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association of Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that it represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per-publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus. Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect it indicates differences between all academic authors and those who post ArXiv postprints.
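A hedged sketch of how such median-ordered, corpus-split boxplots can be produced with seaborn follows; the dataframe columns `publisher`, `corpus`, and `normalized_score` are illustrative names, not our exact ones.

```python
import seaborn as sns
import matplotlib.pyplot as plt

def boxplot_by_publisher(df):
    """Normalized information score by publisher, split by corpus,
    with publishers ordered by their median score."""
    order = (
        df.groupby("publisher")["normalized_score"]
        .median()
        .sort_values(ascending=False)
        .index
    )
    ax = sns.boxplot(
        data=df, x="publisher", y="normalized_score", hue="corpus", order=order
    )
    ax.set(xlabel="", ylabel="Normalized information score")
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    return ax

# Usage (with a dataframe of per-paper scores): boxplot_by_publisher(scores_df)
```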
<<</Information scores by publisher>>>
<<</Paper information scores>>>
<<<Concluding discussion>>>
<<<Implications>>>
Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers.
Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard BIBREF56.
From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward than others. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one `right' answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that particular study, but which would not make sense to require of the majority of papers we examined. If a checklist were created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although such tools often focus more on model building and tuning, taking data as given. We recommend that such projects include human annotation as a first-class element, with customization as needed.
Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On the one hand, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. On the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating the work of others.
<<</Implications>>>
<<<Limitations and future work>>>
Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not submitted for review to preprints in peer review and postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors.
Our study only examined a set of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We did not record the exact rates of inter-annotator agreement that papers reported. In particular, we did not record information about the reconciliation or adjudication process for projects that involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether the instructions or a screenshot of the labeling interface were included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered, but decided against, having our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes).
Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners.
<<</Limitations and future work>>>
<<</Concluding discussion>>>
<<<Appendix>>>
The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project.
<<<Dataset/corpus details>>>
<<<Keyword labels>>>
To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support of or opposition to a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords.
The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then two NLP methodologies of sentiment analysis and topic identification. The keyword "social networks" was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus.
<<</Keyword labels>>>
<<<Distribution of paper types in the corpus>>>
For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiv is a version of a paper published in a more traditional venue, and if so, whether the ArXiv version is a pre-print submitted prior to peer-review (and has different content than the published version) or a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal; others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher; and others upload the exact “camera-ready” version published by the publishers. ArXiv also lets authors upload new versions: some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version.
To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers. There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue.
The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section.
<<</Distribution of paper types in the corpus>>>
<<<Distribution of publishers in corpus>>>
For each paper in the Scopus sample and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpus. The distribution of papers by year is shown in table TABREF49.
<<</Distribution of publishers in corpus>>>
<<</Dataset/corpus details>>>
<<<Methods and analysis details>>>
<<<Inter-annotator agreement>>>
In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows, for each question, what percent of items were given the same label by all annotators (with the number of annotators recoded as the presence or absence of any information). Cases where no annotator answered a question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in this calculation; including them would have increased these rates even further, but doing so would be somewhat disingenuous.
We report percent complete agreement among all raters for each question: for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project, because our data does not necessarily meet the assumptions of the two other widely used statistical estimators for three or more raters. Fleiss's kappa and Krippendorff's alpha are widely used because they account for the possibility that raters made decisions based on random chance. However, this requires assuming a uniform prior probability of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions.
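To illustrate why chance-corrected estimators can behave counterintuitively on skewed distributions, the sketch below implements Fleiss's kappa from its standard formula and compares it with percent total agreement on a toy, heavily imbalanced example; it is purely illustrative and not part of our analysis pipeline.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss's kappa for an items x categories matrix of rating counts,
    assuming the same number of raters per item."""
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / (n_items * n_raters)  # category proportions
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# 20 items, 3 raters, heavily skewed: 19 unanimous "yes" items, 1 split item.
counts = np.array([[3, 0]] * 19 + [[2, 1]])
percent_total = np.mean((counts == counts.sum(axis=1, keepdims=True)).any(axis=1))
print(percent_total)         # 0.95 -- very high raw agreement
print(fleiss_kappa(counts))  # about -0.02 -- near zero despite near-perfect agreement
```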
The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity.
We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication.
The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or, if rates are quite high, only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This would not have boded well had we been conducting a single-stage, mechanical majority-rule reconciliation process, and relying on a single individual to annotate each paper would certainly have been unwise. For this reason, we did not rely on such easier reconciliation processes and required that all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations.
In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper.
<<</Inter-annotator agreement>>>
<<<Changes to the coding schema>>>
Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various cases of examples that illustrated difficult or edge cases.
The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question that had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55).
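A small sketch of how such backward-compatible consolidation can be applied to already-recorded labels is shown below, using an illustrative mapping; the granular label strings are paraphrased placeholders, not our exact spreadsheet values.

```python
import pandas as pd

# Illustrative mapping from granular training labels to the consolidated set.
TRAINING_CONSOLIDATION = {
    "live training session": "some training details",
    "ongoing meetings with feedback": "some training details",
    "pilot round with debriefing": "some training details",
    "no information": "no information",
    "unsure": "unsure",
}

labels = pd.Series(
    ["live training session", "no information", "pilot round with debriefing"],
    name="annotator_training",
)
print(labels.map(TRAINING_CONSOLIDATION))
```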
In addition, three questions were added halfway through the first round of the annotation process. First, a question was added about whether the paper used an external human-annotated dataset or not, which was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few incidences across our dataset. All papers had all questions answered in the second round.
<<</Changes to the coding schema>>>
<<</Methods and analysis details>>>
<<<Software used>>>
All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63.
<<</Software used>>>
<<<Coding schema, examples, and instructions>>>
A final version of our coding schema and instructions is below:
1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area.
Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough on its own. Linear regressions might be included if the regression is used to make a classification, but making predictions for a continuous variable is not. Predicting income or age brackets is classification; predicting raw income or age is not.
Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all.
Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations.
Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer.
Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier.
Example: if there is a supervised classification task that is part of a broader process, this counts; focus on that.
If no, skip the following questions.
2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation.
3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure.
If not, skip the following questions about human annotation.
Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q).
Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation.
Example: Generating (smart) simulated datasets from metadata is not human annotation.
Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved.
Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it.
Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf)
Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf)
4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset?
Yes
No
Unsure
Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes.
New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap.
If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf)
4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data?
Yes
No
Unsure
If they are using external human annotated data, skip the remaining questions:
5. Original human annotation source: Who were the human annotators? Drop-down options are:
Amazon Mechanical Turk (AMT, Turkers)
Any other crowdworking platform (Crowdflower / Figure8)
The paper's authors
Academic experts / professionals in the area
No information in the paper
Other
Unsure
For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column.
Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say
Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated.
6. Number of human annotators:
Put the number if stated, if not, leave blank.
7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include:
Some kind of training is mentioned
No information in the paper
Unsure
Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work.
Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.”
8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples.
No instructions beyond question text
Instructions include formal definition or examples
No information in paper (or not enough to decide)
Unsure
Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label”
9. Prescreening for crowdwork platforms
Leave blank if this is not applicable.
No prescreening (must state this)
Previous platform performance qualification (e.g. AMT Master)
Generic skills-based qualification (e.g. AMT Premium)
Location qualification
Project-specific prescreening: researchers had items with known ground truth and only invited workers who passed this screening task
No information
Unsure
10. Multiple annotator overlap: Did the annotators label at least some of the same items?
Yes, for all items
Yes, for some items
No
Unsure
No information
If it says there was overlap but not info to say all or some, put unsure.
11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorff's alpha, Cohen's kappa, F1 score, or other things.
Yes
No
Unsure
12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used.
Yes
No
Unsure
13. Link to dataset available: Is there a link in the paper to the dataset they used?
Yes
No
Unsure
<<</Coding schema, examples, and instructions>>>
<<</Appendix>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nStudy overview\nLiterature review and motivation\nA different kind of “black-boxing” in machine learning\nContent analysis\nMeta-research and methods papers in linguistics and crowdsourcing\nThe data documentation movements\nData and methods\nData: machine learning papers performing classification tasks on Twitter data\nLabeling team, training, and workflow\nSecond round verification and reconciliation\nRaw and normalized information scores\nFindings\nOriginal classification task\nLabels from human annotation\nUsed original human annotation and external human annotation\nOriginal human annotation source\nNumber of human annotators\nFormal definitions and instructions\nTraining for human annotators\nPre-screening for crowdwork platforms\nMultiple annotator overlap and reporting inter-annotator agreement\nReported crowdworker compensation\nLink to dataset available\nPaper information scores\nOverall distributions of information scores\nInformation scores by corpus and publication type\nInformation scores by publisher\nConcluding discussion\nImplications\nLimitations and future work\nAppendix\nDataset/corpus details\nKeyword labels\nDistribution of paper types in the corpus\nDistribution of publishers in corpus\nMethods and analysis details\nInter-annotator agreement\nChanges to the coding schema\nSoftware used\nCoding schema, examples, and instructions"
],
"type": "outline"
}
|
2002.10832
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
BERT Can See Out of the Box: On the Cross-modal Transferability of Text Representations
<<<Abstract>>>
Pre-trained language models such as BERT have recently contributed to significant advances in Natural Language Processing tasks. Interestingly, while multilingual BERT models have demonstrated impressive results, recent works have shown how monolingual BERT can also be competitive in zero-shot cross-lingual settings. This suggests that the abstractions learned by these models can transfer across languages, even when trained on monolingual data. In this paper, we investigate whether such generalization potential applies to other modalities, such as vision: does BERT contain abstractions that generalize beyond text? We introduce BERT-gen, an architecture for text generation based on BERT, able to leverage on either mono- or multi- modal representations. The results reported under different configurations indicate a positive answer to our research question, and the proposed model obtains substantial improvements over the state-of-the-art on two established Visual Question Generation datasets.
<<</Abstract>>>
<<<Introduction>>>
The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Recent research efforts BIBREF2 have shown how BERT encodes abstractions that generalize across languages, even when trained on monolingual data only. This contradicts the common belief BIBREF3, BIBREF4 that a shared vocabulary and joint training on multiple languages are essential to achieve cross-lingual generalization capabilities. In this work, we further investigate the generalization potentials of large pre-trained LMs, this time moving to a cross-modal setup: does BERT contain abstractions that generalize beyond text?
In the Artificial Intelligence community, several works have investigated the longstanding research question of whether textual representations encode visual information. On the one hand, a large body of research called language grounding considers that textual representations lack visual commonsense BIBREF5, and intend to ground the meaning of words BIBREF6, BIBREF7 and sentences BIBREF8, BIBREF9 in the perceptual world. In another body of work, textual representations have successfully been used to tackle multi-modal tasks BIBREF10 such as Zero-Shot Learning BIBREF11, Visual Question Answering BIBREF12 or Image Captioning BIBREF13. Following the latter line of research, in this paper we evaluate the potential of pre-trained language models to generalize in the context of Visual Question Generation (VQG) BIBREF14.
The Visual Question Generation task allows us to investigate the cross-modal capabilities of BERT: unlike Image Captioning (where the input is only visual) or VQA (where the input is visual and textual), VQG is a multi-modal task where input can be textual and/or visual. VQG data usually includes images and the associated captions, along with corresponding questions about the image; thus, different experimental setups can be designed to analyze the impact of each modality. Indeed, the questions can be generated using i) textual (the caption), ii) visual (the image), or iii) multi-modal (both the caption and the image) input.
From a practical standpoint, the VQG task has several applications: robots or AI assistants could ask questions rooted in multi-modal data (e.g. fusing conversational data with visual information from captors and cameras), in order to refine their interpretation of the situation they are presented with. It could also allow systems relying on knowledge-bases to gain visual common sense and deal with the Human Reporting Bias BIBREF15, which states that the content of images and text are intrinsically different, since visual common sense is rarely explicitly stated in text.
Recently, BERT-based Multi-Modal Language Models have been proposed BIBREF16, BIBREF17, BIBREF18, BIBREF19 to tackle multi-modal tasks, using different approaches to incorporate visual data within BERT. From these works, it is left to explore whether the cross-modal alignment is fully learned, or it is to some extent already encoded in the BERT abstractions. Therefore, in contrast with those approaches, we explicitly avoid using the following complex mechanisms:
Multi-modal supervision: all previous works exploit an explicit multi-modal supervision through a pre-training step; the models have access to text/image pairs as input, to align their representations. In contrast, our model can switch from text-only to image-only mode without any explicit alignment.
Image-specific losses: specific losses such as Masked RoI (Region of Interest) Classification with Linguistic Clues BIBREF19 or sentence-image prediction BIBREF18 have been reported helpful to align visual and text modalities. Instead, we only use the original MLM loss from BERT (and not its entailment loss).
Non-linearities: we explore a scenario in which the only learnable parameters, for aligning image representations to BERT, are those of simple linear projection layer. This allows us to assess whether the representations encoded in BERT can transfer out-of-the-box to another modality.
Furthermore, to the best of our knowledge, this paper is the first attempt to investigate multi-modal text generation using pre-trained language models. We introduce BERT-gen, a text generator based on BERT, that can be applied both in mono and multi-modal settings. We treat images similarly to text: while a sentence is seen as a sequence of (sub)word tokens, an image is seen as a sequence of objects associated to their corresponding positions (bounding boxes). We show how a simple linear mapping, projecting visual embeddings into the first layer, is enough to ground BERT in the visual realm: text and image object representations are found to be effectively aligned, and the attention over words transfers to attention over the relevant objects in the image.
Our contributions can be summarized as follows:
we introduce BERT-gen, a novel method for generating text using BERT, that can be applied in both mono and multi-modal settings;
we show that the semantic abstractions encoded in pre-trained BERT can generalize to another modality;
we report state-of-the-art results on the VQG task;
we provide extensive ablation analyses to interpret the behavior of BERT-gen under different configurations (mono- or multi- modal).
<<</Introduction>>>
<<<Related Work>>>
<<<Unsupervised Pre-trained Language Models>>>
Learning unsupervised textual representations that can be applied to downstream tasks is a widely investigated topic in the literature. Text representations have been learned at different granularities: words with Word2vec BIBREF20, sentences with SkipThought BIBREF21, paragraphs with ParagraphVector BIBREF22 and contextualized word vectors with ELMo BIBREF23. Other methods leverage a transfer-learning approach by fine-tuning all parameters of a pre-trained model on a target task, a paradigm which has become mainstream since the introduction of BERT BIBREF0. BERT alleviates the problem of the uni-directionality of most language models (i.e. where the training objective aims at predicting the next word) by proposing a new objective called Masked Language Model (MLM). Under MLM, some words, that are randomly selected, are masked; the training objective aims at predicting them.
<<</Unsupervised Pre-trained Language Models>>>
<<<Multi-modal Language Models>>>
Following the successful application of BERT BIBREF0, and its derivatives, across a great majority of NLP tasks, several research efforts have focused on the design of multi-modal versions of BERT. VideoBERT BIBREF24, a joint video and text model, is pre-trained on a huge corpus of YouTube videos, and applied to action classification and video captioning tasks on the YouCook II dataset BIBREF25. The video is treated as a “visual sentence" (each frame being a “visual word") that is processed by the BERT Transformer.
Concerning models jointly treating information from images and text, visual features extracted from the image are used as “visual words", and a [SEP] special token is employed to separate textual and visual tokens. In the literature, visual features are object features extracted with a Faster R-CNN BIBREF26 – with the notable exception of BIBREF27 who used pooling layers from a CNN. A first body of work exploit single-stream Transformers in which visual features are incorporated in a BERT-like Transformer: this is the case for VisualBERT BIBREF18, VL-BERT BIBREF19, Unicoder-VL BIBREF28 and B2T2 BIBREF29. Other works, such as ViLBERT BIBREF16 and LXMERT BIBREF17 have investigated two-stream approaches: these models employ modality-specific encoders built on standard Transformer blocks, which are then fused into a cross-modal encoder. Interestingly, none of the aforementioned models have been used for generation tasks such as VQG, tackled in this work.
<<</Multi-modal Language Models>>>
<<<Visual Question Generation>>>
The text-based Question Generation task has been largely studied by the NLP community BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36. However, its visual counterpart, Visual Question Generation (VQG), has been comparatively less explored than standard well-known multi-modal tasks such as Visual Question Answering (VQA) BIBREF37, BIBREF38, BIBREF39, BIBREF40, Visual Dialog BIBREF41, BIBREF42, or Image Captioning BIBREF43, BIBREF44, BIBREF45.
The VQG task was first introduced by BIBREF46 in their Neural Self Talk model: the goal is to gain knowledge about an image by iteratively generating questions (VQG) and answering them (VQA). The authors tackle the task with a simple RNN conditioned on the image, following Image Captioning works such as BIBREF45.
Suitable data for the VQG task can come from standard image datasets on which questions have been manually annotated, such as $VQG_{COCO}$, $VQG_{Flickr}$, $VQG_{Bing}$ BIBREF14, each consisting of 5000 images with 5 questions per image. Alternatively, VQG samples can be derived from Visual Question Answering datasets, such as $VQA1.0$ BIBREF47, by “reversing" them (taking images as inputs and questions as outputs).
A variety of approaches have been proposed. BIBREF14 use a standard Gated Recurrent Neural Network, i.e. a CNN encoder followed by a GRU decoder to generate questions. BIBREF48 aim at generating, for a given image, multiple visually grounded questions of varying types (what, when, where, etc.); similarly, BIBREF49 generate diverse questions using Variational Autoencoders. In BIBREF50, VQG is jointly tackled along its dual task (VQA), just as BIBREF46. In BIBREF51, BIBREF52, the image (processed by a CNN) and the caption (processed by a LSTM) are combined in a mixture module, followed by a LSTM decoder to generate the question, leading to state-of-the-art results on the VQG task on $VQA1.0$ data. More recently, BIBREF53 incorporate multiple cues – place information obtained from PlaceCNN BIBREF54, caption, tags – and combine them within a deep Bayesian framework where the contribution of each cue is weighted to predict a question, obtaining the current state-of-the-art results on $VQG_{COCO}$.
<<</Visual Question Generation>>>
<<</Related Work>>>
<<<Model>>>
In VQG, the objective is to generate a relevant question from an image and/or its caption. The caption $X_{txt}$ is composed of $M$ tokens $txt_1, ..., txt_M$; these tokens can be words or subwords (smaller than word) units depending on the tokenization strategy used. As BERT uses subword tokenization, throughout this paper we will refer to subwords as our tokenization units.
The proposed model is illustrated in Figure FIGREF11. In SECREF12, we detail how images are incorporated in the Transformer framework. In SECREF14, we present BERT-gen, a novel approach to use BERT for text generation.
<<<Representing an Image as Text>>>
In this work, we treat textual and visual inputs similarly, by considering both as sequences. Since an image is not a priori sequential, we consider the image $X_{img}$ as a sequence of object regions $img_1, ..., img_N$, as described below.
The images are first processed as in BIBREF17: a Faster-RCNN BIBREF26, pre-trained on Visual Genome BIBREF55, detects the $N=36$ most salient regions (those likely to include an object) per image. The weights of the Faster-RCNN are fixed during training, as we use the precomputed representations made publicly available by BIBREF56. Each image is thus represented by a sequence of $N=36$ semantic embeddings $f_1, ... f_{N}$ (one for each object region) of dimension 2048, along with the corresponding bounding box coordinates $b_1, ... b_{N}$ of dimension 4. With this approach, the BERT attention can be computed at the level of objects or salient image regions; had we represented images with traditional CNN features, the attention would instead correspond to a uniform grid of image regions without particular semantics, as noted in BIBREF56. To build an object embedding $o_j$ encoding both the object region semantics and its location in the image, we concatenate $f_j$ and $b_j$ ($j\in [1,N]$). Hence, an image is seen as a sequence of $N=36$ visual representations (each corresponding to an object region) $o_1,..., o_N$. Object region representations $o_i$ are ordered by the relevance of the object detected, and the model has access to their relative location in the image through the vectors $b_i$.
To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052\hspace{-1.00006pt}\times \hspace{-1.00006pt}768$. The $N$ object regions detected in an image, are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding.
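As a minimal illustration of this projection step (a PyTorch sketch using the dimensions given above; the tensor names, the random stand-in features and the bias term are assumptions, not the released implementation):

import torch
import torch.nn as nn

N_REGIONS, FEAT_DIM, BOX_DIM, BERT_DIM = 36, 2048, 4, 768

# Stand-ins for the precomputed Faster R-CNN outputs of one image.
region_feats = torch.randn(N_REGIONS, FEAT_DIM)   # f_1 ... f_N
region_boxes = torch.rand(N_REGIONS, BOX_DIM)     # b_1 ... b_N (bounding box coordinates)

# Object embeddings o_j = [f_j ; b_j], of dimension 2048 + 4 = 2052.
object_embs = torch.cat([region_feats, region_boxes], dim=-1)

# The single learnable cross-modal projection W: 2052 -> 768.
W = nn.Linear(FEAT_DIM + BOX_DIM, BERT_DIM)

# The image becomes a sequence of N "visual tokens" living in the BERT embedding space.
visual_tokens = W(object_embs)                    # shape: (36, 768)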
<<</Representing an Image as Text>>>
<<<BERT-gen: Text Generation with BERT>>>
We cast the VQG task as a classic sequence-to-sequence BIBREF57 modeling framework, modeling the probability of the question given the input as

$P(Y \mid X; \Theta , W) = \prod _{t=1}^{T} p(y_t \mid y_{<t}, X; \Theta , W)$
where the input $X=X_{txt}$ in caption-only mode, $X = X_{img}$ in image-only mode, and $X =X_{img} \oplus X_{txt}$ in a multi-modal setup; $Y = {y_1,..., y_T}$ is the question composed of $T$ tokens. $\Theta $ are the parameters of the BERT model; $W$ represents the weights of the linear layer used for projecting visual input to the BERT embedding layer.
As mentioned earlier, BERT is a Transformer BIBREF1 encoder pre-trained using the Masked Language Model (MLM) objective: tokens within the text are replaced with a [MASK] special token, and the model is trained to predict them. Since BERT was not trained with an unidirectional objective, its usage for text generation is not straightforward.
To generate text, BIBREF58 propose to stack a Transformer decoder, symmetric to BERT. However, the authors report training difficulties since the stacked decoder is not pre-trained, and propose a specific training regime, with the side-effect of doubling the number of parameters. BIBREF59 opt for an intermediate step of self-supervised training, introducing a unidirectional loss. As detailed below, we propose a relatively simpler, yet effective, method to use BERT out-of-the-box for text generation.
<<<Decoder>>>
We simply use the original BERT decoder as is, initially trained to generate the tokens masked during its pre-training phase. It consists of a feed-forward layer, followed by normalization, transposition of the embedding layer, and a softmax over the vocabulary.
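Spelled out as modules, such a head roughly corresponds to the sketch below (an approximation of BERT's masked-LM head; the GELU activation and the exact layer names are assumptions based on the standard BERT implementation):

import torch.nn as nn

class MLMDecoder(nn.Module):
    """Feed-forward layer, normalization, transposed embedding matrix, softmax."""
    def __init__(self, hidden_size, word_embeddings):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(hidden_size)
        self.word_embeddings = word_embeddings   # tied with BERT's input embedding matrix

    def forward(self, hidden_states):
        h = self.norm(self.act(self.dense(hidden_states)))
        logits = h @ self.word_embeddings.weight.t()   # "transposition of the embedding layer"
        return logits.softmax(dim=-1)                  # distribution over the vocabulary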
<<</Decoder>>>
<<<Next Token Prediction>>>
At inference time, to generate the first token of the question $y_1$, we concatenate [MASK] to the input tokens $X$, then encode $X \oplus \texttt {[MASK]}$ with the BERT encoder, and feed the output of the encoder to the decoder; $y_1$ is the output of the decoder for the [MASK] token. Subsequently, given $y_1$, we concatenate it to the input tokens and encode $X \oplus y_1 \oplus \texttt {[MASK]}$ to predict the next token $y_2$. This procedure is repeated until the generation of a special token [EOS] signaling the end of the sentence.
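A schematic version of this greedy decoding loop (a Python sketch; `encode_and_predict`, the token ids and `max_len` are placeholder names introduced here for illustration):

def generate(encode_and_predict, input_ids, mask_id, eos_id, max_len=30):
    """Greedy BERT-gen decoding: append [MASK], predict it, feed the prediction back."""
    generated = []
    for _ in range(max_len):
        sequence = input_ids + generated + [mask_id]   # X + y_1 ... y_t + [MASK]
        next_token = encode_and_predict(sequence)      # decoder output at the [MASK] slot
        if next_token == eos_id:                       # [EOS] ends the question
            break
        generated.append(next_token)
    return generated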
<<</Next Token Prediction>>>
<<<Attention Trick>>>
As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the representations of the previous tokens. To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens.
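One way to build such a mask, assuming attention scores are masked before the softmax (a sketch under that assumption, where 1 means "may attend"):

import torch

def generation_attention_mask(n_input, n_target):
    """Input tokens attend only to input tokens; target token y_t attends to the
    input tokens and to the already generated targets up to t (left-to-right)."""
    size = n_input + n_target
    mask = torch.zeros(size, size)
    mask[:, :n_input] = 1.0                                   # everyone sees the input
    mask[n_input:, n_input:] = torch.tril(torch.ones(n_target, n_target))
    return mask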
This novel method allows to use pre-trained encoders for text generation. In this work, we initialize our model with the parameters from BERT-base. Nonetheless, the methodology can be applied to any pre-trained Transformer encoders such as RoBERTa BIBREF60, or Ernie BIBREF61.
<<</Attention Trick>>>
<<<Modality-specific setups>>>
The proposed model can be used in either mono- or multi- modal setups. This is accomplished by activating or deactivating specific modules.
<<</Modality-specific setups>>>
<<</BERT-gen: Text Generation with BERT>>>
<<</Model>>>
<<<Experimental Protocol>>>
Our main objective is to measure whether the textual knowledge encoded in pre-trained BERT can be beneficial in a cross-modal task. Thus, we define the three following experimental setups, which we refer to as Step 1, 2, and 3:
<<<1. Caption only>>>
Deactivating the Visual embedding module (see Figure FIGREF11), the model has only access to textual input, i.e. the caption. The model is initialized with the BERT weights and trained according to Equation DISPLAY_FORM15.
<<</1. Caption only>>>
<<<2. Image only>>>
Conversely, deactivating the Textual embedding module (see Figure FIGREF11), the model has only access to the input image, not the caption. To indicate the position $t$ of $img_t$ in the sequence, we sum the BERT positional embedding of $t$ to the visual representation of $img_t$, just as we would do for a text token $txt_t$. The model is initialized with the weights learned during step 1. All BERT-gen $\Theta $ weights are frozen, and only the linear layer $W$ is learnable. Hence, if the model is able to learn to generate contextualized questions w.r.t. the image, it shows that a simple linear layer is enough to bridge the two modalities.
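In a standard PyTorch training script, this setup amounts to freezing everything except the projection layer, roughly as sketched below (`bert_gen` and `cross_modal_projection` are assumed to be separate modules; the learning rate follows the implementation details reported later):

import torch

def configure_step2(bert_gen, cross_modal_projection, lr=2e-5):
    """Step 2: freeze the BERT-gen parameters (Theta); learn only the projection W."""
    for param in bert_gen.parameters():
        param.requires_grad = False
    for param in cross_modal_projection.parameters():
        param.requires_grad = True
    return torch.optim.Adam(cross_modal_projection.parameters(), lr=lr)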
<<</2. Image only>>>
<<<3. Image + Caption>>>
The full model is given access to both image and caption inputs. In this setup, we separate the two different inputs by a special BERT token [SEP]. Thus, the input sequence for the model takes the form of $\texttt {[CLS]}, img_1,..., img_N, \texttt {[SEP]}, txt_1,..., txt_M$. In step 1, only BERT-gen $\Theta $ parameters are learned, as no image input was given. In step 2, $W$ is trained while keeping $\Theta $ frozen. Finally then, in step 3, we fine-tune the model using both image and text inputs: the model is initialized with the parameters $\Theta $ learned during step 1 and the $W$ learned during step 2, and we unfreeze all parameters.
<<</3. Image + Caption>>>
<<<Ablations>>>
Additionally, we report results obtained with: Image only (unfreeze), where the BERT-gen parameters $\Theta $ are not frozen; and Image+Caption (from scratch) where the model is learned without the intermediate steps 1 and 2: the BERT-gen parameters $\Theta $ are initialized with the weights from pre-trained BERT while $W$ is randomly initialized.
<<</Ablations>>>
<<<Datasets>>>
We conduct our experiments using two established datasets for Visual Question Generation:
<<<@!START@$VQG_{COCO}$@!END@>>>
Introduced by BIBREF14, it contains 2500 training images, 1250 validation images and 1250 test images from MS COCO BIBREF62; each image has 5 corresponding questions and 5 ground-truth captions.
<<</@!START@$VQG_{COCO}$@!END@>>>
<<<@!START@$VQA$@!END@>>>
The Visual Question Answering BIBREF47 dataset can be used to derive VQG data BIBREF50. The task is reversed: instead of answering the question based on the image (VQA), models are called to generate a relevant question given the image (VQG). Also based on MS COCO, it contains 82783 training images, 40504 validation images and 81434 testing images. In $VQA1.0$, each image has 3 associated questions. Since the test set of MS COCO does not contain ground-truth captions, we generated artificial captions for it using NeuralTalk2 BIBREF45: for fair comparison, we used exactly the same model as BIBREF52 (MDN-Joint).
<<</@!START@$VQA$@!END@>>>
<<</Datasets>>>
<<<Baselines>>>
We compare the proposed model to the following:
<<<Sample>>>
BIBREF46 Questions are generated by a RNN conditioned on the image: at each generation step, the distribution over the vocabulary is computed and used to sample the next generated word. This baseline enables to generate diverse questions over the same image, as the word selection process is non-deterministic.
<<</Sample>>>
<<<Max>>>
BIBREF46 Using the above model, selecting words with maximum probability from the computed distribution.
<<</Max>>>
<<<MDN-Joint>>>
BIBREF52 State-of-the-art model on $VQA1.0$, based on joint usage of caption and image information.
<<</MDN-Joint>>>
<<<MC-SBN>>>
BIBREF53 State-of-the-art on $VQG_{COCO}$. The model jointly leverages on multiple cues (the image, place information, caption, tags) to generate questions.
<<</MC-SBN>>>
<<</Baselines>>>
<<<Metrics>>>
We report the following metrics for all experiments, consistently with previous works:
<<<BLEU>>>
BIBREF63 A precision-oriented metric, originally proposed to evaluate machine translation. It is based on the counts of overlapping n-grams between the generated sequences and the human references.
<<</BLEU>>>
<<<ROUGE>>>
BIBREF64 The recall-oriented counterpart to BLEU metrics, again based on n-gram overlaps.
<<</ROUGE>>>
<<<METEOR>>>
BIBREF65 The harmonic mean between precision and recall w.r.t. unigrams. As opposed to the other metrics, it also accounts for stemming and synonymy matching.
<<</METEOR>>>
<<<CIDEr>>>
BIBREF66 Originally designed for Image Captioning, it uses human consensus among the multiple references, favoring rare words and penalizing frequent words. This feature is particularly relevant for our task, as the automatically generated questions often follow similar patterns such as “What is the [...] ?". Indeed, we verify experimentally (cf Table and Table ) that the CIDEr metric is the most discriminant in our quantitative results.
<<</CIDEr>>>
<<</Metrics>>>
<<<Implementation details>>>
All models are implemented in PyText BIBREF67. For all our experiments we used a single NVIDIA RTX 2080 Ti GPU, a batch size of 128 and 5 epochs. We used the Adam optimizer with the recommended parameters for BERT: the learning rate is set to $2e^{-5}$ with a warmup of $0.1$. The most computationally expensive experiment is step 3 described above: for this model, one epoch takes 30 seconds and 2 minutes on the $VQG_{COCO}$ and $VQA$ datasets, respectively. Metrics were computed using the Python package released by BIBREF33.
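For readers reproducing this setup outside PyText, the optimization schedule corresponds roughly to the following sketch (the linear-warmup reading of "warmup of $0.1$" and the helper from the transformers library are assumptions):

import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, steps_per_epoch, epochs=5, lr=2e-5, warmup=0.1):
    # Warm the learning rate up over the first 10% of steps, then decay linearly.
    total_steps = steps_per_epoch * epochs
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup * total_steps),
        num_training_steps=total_steps,
    )
    return optimizer, scheduler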
<<</Implementation details>>>
<<</Experimental Protocol>>>
<<<Results>>>
In Table , we report quantitative results for the VQG task on $VQA1.0$. The Caption only model already shows strong improvements for all metrics over state-of-the-art models. For this text-only model, the impressive performance can mostly be attributed to BERT, demonstrating once again the benefits obtained using pre-trained language models. In our second step (Image only), the BERT $\Theta $ parameters are frozen and only those of the cross-modal projection matrix $W$ are learned. Despite using a simple linear layer, the model is found to perform well, generating relevant questions given only visual inputs.
This suggests that the conceptual representations encoded in pre-trained language models such as BERT can effectively be used beyond text. Further, we report an additional Image only experiment, this time unfreezing the BERT parameters $\Theta $ – see Step 2 (unfreeze) in Table . As could be expected, since the model is allowed more flexibility, the performance is found to further improve.
Finally, in our third step (Image + Caption), we obtain the highest scores, for all metrics. This indicates that the model is able to effectively leverage the combination of textual and visual inputs. Indeed, complementary information from both modalities can be exploited by the self-attention mechanism, making visual and textual tokens interact to generate the output sequences. Again, we additionally report the results obtained bypassing the intermediate steps 1 and 2: for the model denoted as Step 3 (from scratch) (last row of Table ), $\Theta $ parameters are initialized with the original weights from pre-trained BERT, while the $W$ matrix is randomly initialized. Under this experimental condition, we observe lower performances, a finding that consolidates the importance of the multi-step training procedure we adopted.
In Table , we report quantitative VQG results on $VQG_{COCO}$. These are globally consistent with the ones above for $VQA1.0$. However, we observe two main differences. First, a bigger relative improvement over the state-of-the-art. As the efficacy of pre-trained models is boosted in small-data scenarios BIBREF68, this difference can be explained by the smaller size of $VQG_{COCO}$. Second, we note that the Caption only model globally outperforms all other models, especially on the discriminant CIDEr metric. This can be explained by the fact that, in $VQG_{COCO}$, the captions are human-written (whereas they are automatically generated for $VQA1.0$) and, thus, of higher quality; moreover, the smaller size of the dataset could play a role hindering the ability to adapt to the visual modality. Nonetheless, the strong performances obtained for Step 2 compared to the baselines highlight the effectiveness of our method to learn a cross-modal projection even with a relatively small number of training images.
<<<Human Evaluation>>>
To get more in-depth understanding of our models, we report human assessment results in Table . We randomly sampled 50 images from the test set of $VQA1.0$. Each image is paired with its caption, the human-written question used as ground-truth, and the output for our three models: Caption only, Image only and Image+Caption. We asked 3 human annotators to assess the quality of each question using a Likert scale ranging from 1 to 5, for the following criteria: readability, measuring how well-written the question is; caption relevance, how relevant the question is w.r.t. to the caption; and, image relevance, how relevant the question is toward the image. For caption and image relevance, the annotators were presented with only the caption and only the image, respectively.
We observe that all evaluated models produce well-written sentences, as readability does not significantly differ compared to the human-written questions. Unsurprisingly, the Caption only model shows a higher score for caption relevance, while the relatively lower image relevance score can be explained by the automatically generated and thus imperfect captions in the $VQA1.0$ dataset. Comparatively, the Image only model obtains lower caption relevance and higher image relevance scores; this indicates that the cross-modal projection is sufficient to bridge modalities, allowing BERT to generate relevant questions toward the image. Finally, the Image + Caption model obtains the best image relevance among our models, consistent with the quantitative results reported in Tables and .
<<</Human Evaluation>>>
<<</Results>>>
<<<Model Discussion>>>
<<<What does the model look at?>>>
To interpret the behavior of attention-based models, it is useful to look at which tokens are given higher attention BIBREF69. In Figure FIGREF44, we present two images $A$ and $B$, along with their captions and the three generated questions corresponding to our three experimental setups (Caption only, Image only and Image + Caption). For this analysis, we average the attention vectors of all the heads in the last layer, and highlight the textual and visual tokens most attended by the models.
For both images, the Caption only model attends to salient words in the caption. The Image only model remains at least as relevant: on image $A$, it generates a question about a table (with an unclear attention). Interestingly, for image $B$, the Image only model corrects a mistake from step 1: it is a woman holding an umbrella rather than a man, and the attention is indeed focused on the woman in the image. Finally, the Image + Caption model is able to generate fitting questions about the image, with relatively little relevance to the caption: for image $A$, the Image + Caption model generates “What time is it?" while paying attention to the clock; for image $B$, it generates “What is the color of the umbrella ?", focusing the attention on the umbrella. The captions of the two samples include no mention of clocks or umbrellas, further indicating effective alignment between visual and textual representations.
<<</What does the model look at?>>>
<<<Cross-modal alignment>>>
We carry out an additional experiment to analyze the text/vision alignment for each model. Figure FIGREF46 shows the cross-modal similarity $X_{sim}$ for different model scenarios, computed at each BERT-base layer from 1 to 12. We define the cross-modal similarity $X_{sim}$ as the cosine similarity between the vector representations of both modalities. These vectors are the two continuous space representations from a model when given as input either i) an image, or ii) its corresponding caption. We represent these captions and images vectors with the special BERT token [CLS], following previous works BIBREF70 where [CLS] is used to represent the entire sequence.
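In code, the measurement reduces to a cosine similarity between two [CLS] vectors (a sketch; how the per-layer hidden states are obtained for the image and caption inputs is left to the surrounding model):

import torch.nn.functional as F

def cross_modal_similarity(hidden_img, hidden_txt, layer):
    """hidden_*: per-layer hidden states of shape (n_layers, seq_len, dim);
    X_sim at a given layer is the cosine similarity of the two [CLS] vectors."""
    cls_img = hidden_img[layer, 0]   # [CLS] sits at the first position
    cls_txt = hidden_txt[layer, 0]
    return F.cosine_similarity(cls_img, cls_txt, dim=0).item()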
The reported values correspond to the average cross-modal similarity calculated for all the examples of $VQG_{COCO}$ test set. In addition to the setups described in Section SECREF4 (Caption-only, Image-only and Image + Caption), we also report $X_{sim}$ for Random Transformer, a BERT architecture with random weights. As expected, its $X_{sim}$ is close to zero.
All the other models are based on BERT. As suggested by BIBREF71, the first layers in BERT tend to encode lower-level language information. This might explain why the models show similar $X_{sim}$ scores up to the 9th layer, and diverge afterwards: the weights for those layers remain very similar between our fine-tuned models.
For the last layer ($l=12$), we observe that $\textit {Caption only} < \textit {Image only} < \textit {Image + Caption}$. The Caption only model has never seen images during training, and therefore is not able to encode semantic information given only images as input. Still, its reported $X_{sim} > 0$ can be attributed to the fact that, when fine-tuned on VQG during Step 1, BERT-gen encodes task-specific information in the [CLS] token embedding (e.g. a question ends with a “?" and often begins with “What/Where/Who"). $\textit {Image only} > \textit {Caption only}$ can be explained by the learning of the cross-modal projection $W$. However, since BERT is not fine-tuned, the model learns a “contortion" allowing it to align text and vision. Finally, Image + Caption $>$ Image only can be attributed to BERT fine-tuning, contributing to an increase in the observed gap, and its emergence in earlier layers.
<<</Cross-modal alignment>>>
<<</Model Discussion>>>
<<<Conclusion and Perspectives>>>
We investigated whether the abstractions encoded in a pre-trained BERT model can generalize beyond text. We proposed BERT-gen, a novel methodology that allows to directly generate text from out-of-the-box pre-trained encoders, either in mono- or multi- modal setups. Moreover, we applied BERT-gen to Visual Question Generation, obtaining state-of-the-art results on two established datasets. We showed how a simple linear projection is sufficient to effectively align visual and textual representations.
In future works, we plan to extend BERT-gen to other modalities, such as audio or video, exploring the potential interactions that can emerge in scenarios where more than two modalities are present.
<<</Conclusion and Perspectives>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nUnsupervised Pre-trained Language Models\nMulti-modal Language Models\nVisual Question Generation\nModel\nRepresenting an Image as Text\nBERT-gen: Text Generation with BERT\nDecoder\nNext Token Prediction\nAttention Trick\nModality-specific setups\nExperimental Protocol\n1. Caption only\n2. Image only\n3. Image + Caption\nAblations\nDatasets\n@!START@$VQG_{COCO}$@!END@\n@!START@$VQA$@!END@\nBaselines\nSample\nMax\nMDN-Joint\nMC-SBN\nMetrics\nBLEU\nROUGE\nMETEOR\nCIDEr\nImplementation details\nResults\nHuman Evaluation\nModel Discussion\nWhat does the model look at?\nCross-modal alignment\nConclusion and Perspectives"
],
"type": "outline"
}
|
2002.05058
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models
<<<Abstract>>>
Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models from sample-level comparison results using a skill rating system. While it can be trained in a fully self-supervised fashion, our model can be further fine-tuned with a small amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chit-chat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model.
<<</Abstract>>>
<<<Introduction>>>
Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation.
Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides.
The contribution of this paper is threefold:
We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set.
We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.
We conduct experiments on both the story generation task and the open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal models, and that our approach helps alleviate this problem.
<<</Introduction>>>
<<<Related Work>>>
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assign a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted, leading to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to obtain, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.
Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation.
Another related work of our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It is first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by competing generators against discriminators. Their approach is an approximation of skill rating as the original skill rating system requires game played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option, thus can not distinguish cases where the discriminator is confident enough or not. More importantly, their approach is only designed for evaluating GANs while our approach can be used for any NLG models.
<<</Related Work>>>
<<<Methodology>>>
We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.
<<<Learning to Compare>>>
The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.
The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator.
We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in term of the quality when two compared samples are both generated by machines or human written reference. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, allowing to enhance the generalization ability and introduce more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.
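A compact rendering of this construction (a Python sketch; the label symbols and the final shuffling step are illustrative choices, not taken from the original implementation):

import itertools
import random

def build_strong_pairs(real_samples, generated_samples):
    """Returns (sample_1, sample_2, label) triples with label in {'>', '<', '~'}."""
    pairs = []
    for s_pos in real_samples:
        for s_neg in generated_samples:
            pairs.append((s_pos, s_neg, '>'))   # human reference judged better
            pairs.append((s_neg, s_pos, '<'))
    for s_i, s_j in itertools.combinations(real_samples, 2):
        pairs.append((s_i, s_j, '~'))           # two references: indistinguishable
    # Generated samples paired below are assumed to come from the same checkpoint.
    for s_i, s_j in itertools.combinations(generated_samples, 2):
        pairs.append((s_i, s_j, '~'))
    random.shuffle(pairs)
    return pairs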
One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. Thus it requires the model to capture the quality relation in training examples and generalize well to successfully compare two samples rather than simply classifying them as indistinguishable, which provides relatively less information for evaluating NLG models.
To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint are of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically and sometimes it is hard to decide whether the model begins to overfit the training data and its quality starts to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iteration and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the margin between the quality two selected version of the model, the easier for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with larger margin (i.e. more training iterations between two selected checkpoints) during initial training stage and gradually decrease the margin to let the model gradually learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.
The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective, maximizing the log-likelihood of the true pairwise labels:

$\mathcal {L}(\phi ) = \sum _{(x_1, x_2) \in \mathcal {X}} \log D_\phi ^{Q(x_1, x_2)}(x_1, x_2)$

where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator, the architecture of the resulting comparative evaluator is illustrated by Figure 1. Note that the compared sample A and B are based on the same context, which ensures that they are comparable.
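In terms of current tooling, the evaluator is essentially a three-way sentence-pair classifier on top of BERT; a sketch using the transformers library (which the original implementation does not necessarily use) could look as follows:

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
evaluator = BertForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=3)   # classes: 0 = '>', 1 = '<', 2 = '~'

def compare(sample_a, sample_b):
    """Returns the predicted probabilities of A>B, A<B and A~B (after fine-tuning)."""
    inputs = tokenizer(sample_a, sample_b, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        return evaluator(**inputs).logits.softmax(dim=-1)[0]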
<<</Learning to Compare>>>
<<<Skill Rating>>>
In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “player”, the “player-vs-player” game is played by sampling one output sample from each NLG model conditioning on the same input and the game output is decided by the comparative evaluator.
Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating, and vice versa. We therefore use a simple rule that increases/decreases the skill rating of one player by a ratio (e.g. 0.1) of the change it would receive for a win/loss when it draws with another player with a higher/lower skill rating. In our experiments, skill rating is performed by randomly sampling two compared models, simulating a “game” between the two selected models by sampling one output from each model and comparing them with the comparative evaluator, and then updating the skill ratings of the selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the ordering of the compared models' skill ratings remaining unchanged after each model has been selected at least 50 times. While the sampling procedure could be optimized with Bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling.
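The tie rule is easiest to see on a plain Elo-style update rather than the full Glicko2 machinery (a simplified sketch; the K-factor and the zero-sum update are assumptions for illustration):

def update_ratings(r_a, r_b, outcome, k=32, tie_ratio=0.1):
    """outcome: 'a' if player A wins, 'b' if B wins, anything else for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    win_delta = k * (1.0 - expected_a)    # change A would receive for a win
    loss_delta = k * (0.0 - expected_a)   # change A would receive for a loss
    if outcome == 'a':
        delta_a = win_delta
    elif outcome == 'b':
        delta_a = loss_delta
    elif r_b == r_a:
        delta_a = 0.0
    else:
        # Tie: move by a fraction of the win/loss change, toward the stronger player.
        delta_a = tie_ratio * (win_delta if r_b > r_a else loss_delta)
    return r_a + delta_a, r_b - delta_a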
<<</Skill Rating>>>
<<</Methodology>>>
<<<Experiments>>>
We set up experiments in order to answer the following research questions:
RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models?
RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better?
RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping?
RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem?
<<<Experimental Settings>>>
<<<Datasets>>>
We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics are thus critical. For open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises of 13k dialogues with an average of 7.9 turns per dialog.
<<</Datasets>>>
<<<Compared Models and Metrics>>>
As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation.
Regarding the evaluation metrics (and the criteria used for hyperparameter selection and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.
The proposed comparative evaluator is employed for hyperparameter selection by performing skill rating among all models trained with different hyperparameter choices. For early stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparisons between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints, and to stop training when the winning rate of the latest checkpoint stays below its losing rate for 5 consecutive iterations.
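This early-stopping rule can be sketched roughly as below; checkpoints are represented simply as lists of generated samples, and `compare(a, b)` stands in for the comparative evaluator (returning 1 if the first sample wins, -1 if it loses, 0 for a tie). Function names, the patience counter, and the sampling scheme are illustrative assumptions, not the authors' code.

```python
import random

def worse_than_previous(latest, prev_ckpts, compare, n_pairs=1000):
    """Return True if the latest checkpoint loses to the previous ones overall.

    latest: list of generated samples; prev_ckpts: list of such lists.
    """
    prev_pool = [s for ckpt in prev_ckpts for s in ckpt]
    wins = losses = 0
    for _ in range(n_pairs):
        result = compare(random.choice(latest), random.choice(prev_pool))
        wins += result == 1
        losses += result == -1
    return wins < losses

def train_with_early_stopping(train_one_epoch, generate_samples, compare,
                              k=2, patience=5, max_epochs=100):
    checkpoints, bad_streak = [], 0
    for _ in range(max_epochs):
        train_one_epoch()
        samples = generate_samples()
        if len(checkpoints) >= k and worse_than_previous(
                samples, checkpoints[-k:], compare):
            bad_streak += 1          # latest checkpoint keeps losing
        else:
            bad_streak = 0
        checkpoints.append(samples)
        if bad_streak >= patience:
            break                    # stop training
    return checkpoints
```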
<<</Compared Models and Metrics>>>
<<<Detail of Parameterized Evaluators>>>
The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure a fair comparison, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator trained without strong supervision examples, without weak supervision examples, without fine-tuning on human preference annotations, and without transferring from BERT.
<<</Detail of Parameterized Evaluators>>>
<<<Human Evaluation Procedure>>>
As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics to support hyperparameter search and early stopping. Concretely, we perform 10 groups of evaluations for hyperparameter selection and early stopping with the five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with all other variants fixed.
We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach.
We sample 20 generated samples from each model (out of 5) of the 20 evaluation groups. We invite 20 human annotators, all graduate students with good English proficiency, to score these samples. Each annotator scores one sample from each model, so that each model is evaluated uniformly. The score scale is 1 to 5; a higher score indicates better overall sample quality. According to experimental results from BIBREF14, we do not ask annotators to provide separate scores for fluency or informativeness. To measure inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples: 20 samples are scored directly from 1 to 5, and the other 20 are each compared pairwise against 4 other generated samples and assigned a score of 1-5 based on how many times they are judged better than the reference sample. We obtain an inter-annotator agreement of $\kappa =0.53$ for direct scoring and $\kappa =0.76$ for pairwise comparison, which supports our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator.
<<</Human Evaluation Procedure>>>
<<</Experimental Settings>>>
<<<Experimental Designs & Results>>>
<<<RQ1: Sample-Level Correlation>>>
To test the correlation of different automated metrics with human preference, we employ each metric to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For the comparative evaluator, as the evaluation is performed pairwise and no absolute score is available, we use two different approaches to obtain an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references using the comparative evaluator; a sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model that always outputs that same sample, and use the skill rating of each sample as its score. To keep the computational budget roughly the same, we fix the number of plays in skill rating to 10,000.
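A minimal sketch of the first, reference-based scoring scheme follows; the 3/1/0 point rule comes from the description above, while the `compare` interface and names are the same illustrative assumptions as before.

```python
def reference_based_score(sample, references, compare):
    """Score one generated sample against a fixed pool of reference samples.

    compare(a, b) -> 1 if a is judged better, -1 if worse, 0 for a tie.
    """
    points = 0
    for ref in references:          # e.g. 50 references sampled per task
        result = compare(sample, ref)
        if result == 1:
            points += 3             # beats the reference
        elif result == 0:
            points += 1             # draws with the reference
        # a loss contributes 0 points
    return points
```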
The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics, including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing them with a set of randomly selected samples and using sample-level skill rating perform almost equally well. This is not surprising: the employed skill rating is designed to handle the inherent variance of players (i.e. NLG models), and this variance does not exist when we regard a sample as a model that always generates the same sample.
<<</RQ1: Sample-Level Correlation>>>
<<<RQ2: Model-Level Correlation>>>
As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate its correlation with human scores. For the comparative evaluator, we propose three different approaches to obtain a model-level score: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as the model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as the model-level score, and 3) we use the proposed skill rating system to get a model-level skill rating for each compared model.
Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation.
<<</RQ2: Model-Level Correlation>>>
<<<RQ3&4: Automated Metrics for Model Training>>>
We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) they succeeded in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and the average human-annotated score for their selected models.
The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal results when used for hyperparameter search and for selecting the best performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator yields non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate that this problem exists and that the proposed method can, to some extent, alleviate it.
<<</RQ3&4: Automated Metrics for Model Training>>>
<<</Experimental Designs & Results>>>
<<<Qualitative Analysis>>>
We present several comparison examples in the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (i.e. “I don't know”) should be considered as of worse quality. The second example suggests that our approach handles the diversity in possible responses well, as it regards both positive response and negative response as valid responses. Hopefully, these examples may provide us with some insights about why the proposed metric correlates better with human preference.
<<</Qualitative Analysis>>>
<<<Ablation Study>>>
To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:
w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.
w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models.
w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training.
w/o human preference annotation: Training without human-annotated preference data (i.e. only with strong and weak supervision).
w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty.
w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT.
We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective, as our model correlates much better than the adversarial evaluator. The tie option is also very important, as it prevents the comparative evaluator from making uncertain decisions and models the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for the different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be because examples constructed by the weak supervision approach can contain a lot of noise. We can also see that our model correlates well with human preference even without training on human preference annotations, which is very important in practice as human annotations are not always available. Finally, we find that transferring natural language understanding ability from BERT is very important for the final performance.
<<</Ablation Study>>>
<<</Experiments>>>
<<<Discussion and Conclusion>>>
In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed approach is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our model is allowed to admit its uncertainty through the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison.
By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices.
<<</Discussion and Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nMethodology\nLearning to Compare\nSkill Rating\nExperiments\nExperimental Settings\nDatasets\nCompared Models and Metrics\nDetail of Parameterized Evaluators\nHuman Evaluation Procedure\nExperimental Designs & Results\nRQ1: Sample-Level Correlation\nRQ2: Model-Level Correlation\nRQ3&4: Automated Metrics for Model Training\nQualitative Analysis\nAblation Study\nDiscussion and Conclusion"
],
"type": "outline"
}
|
2002.06675
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language
<<<Abstract>>>
Ainu is an unwritten language that has been spoken by the Ainu people, one of the ethnic groups in Japan. It is recognized as critically endangered by UNESCO, and archiving and documentation of its language heritage is of paramount importance. Although a considerable amount of voice recordings of Ainu folklore has been produced and accumulated to save their culture, only a quite limited part of them has been transcribed so far. Thus, we started a project of automatic speech recognition (ASR) for the Ainu language in order to contribute to the development of annotated language archives. In this paper, we report speech corpus development and the structure and performance of end-to-end ASR for Ainu. We investigated four modeling units (phone, syllable, word piece, and word) and found that the syllable-based model performed best in terms of both word and phone recognition accuracy, which were about 60% and over 85% respectively in the speaker-open condition. Furthermore, word and phone accuracy of 80% and 90% has been achieved in a speaker-closed setting. We also found that multilingual ASR training with additional speech corpora of English and Japanese further improves the speaker-open test accuracy.
<<</Abstract>>>
<<<Introduction>>>
Automatic speech recognition (ASR) technology has made dramatic progress and has been brought to a practical level of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese have. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue.
The Ainu are an indigenous people of northern Japan and Sakhalin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and an exceptionally large amount of oral recordings has been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not so many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR, and this article is the first report of this project.
We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. In this work, we investigate the modeling unit and utilization of corpora of other languages.
<<</Introduction>>>
<<<Overview of the Ainu Language>>>
This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language.
<<<Background>>>
The Ainu people had a total population of about 20,000 in the mid-19th century BIBREF7, and they used to live widely distributed over an area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, decreased rapidly through the assimilation policy after the late 19th century. At present, there are fewer than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders, which resulted in the collection of speech data with a total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not yet transcribed or fully studied.
<<</Background>>>
<<<The Ainu Language and its Writing System>>>
The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. Among its features such as closed syllables and personal verbal affixes, one important feature is that there are many compound words. For example, a word atuykorkamuy (means “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”).
Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in a reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters correspond to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below.
<<</The Ainu Language and its Writing System>>>
<<<Types of Ainu Recordings>>>
The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm while uwepeker is not. In this study we focus on the prose tales as the first step.
<<</Types of Ainu Recordings>>>
<<<Previous Work>>>
There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency tree bank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was attempted by ainutrans with 2.5 hours of Ainu folklore data, even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not yet an accuracy level suitable for practical use.
It appears that there has not been a substantial Ainu speech recognition study yet that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum.
<<</Previous Work>>>
<<</Overview of the Ainu Language>>>
<<<Ainu Speech Corpus>>>
In this section we explain the content of the data sets and how we modified it for our ASR corpus.
<<<Numbers of Speakers and Episodes>>>
The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2.
<<</Numbers of Speakers and Episodes>>>
<<<Data Annotation>>>
For efficient training of ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed as seen in the example below.
Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information.
To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For the ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10, i.e., stretches of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1.
<<</Data Annotation>>>
<<</Ainu Speech Corpus>>>
<<<End-to-end Speech Recognition>>>
In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce the four modeling units examined in this work, i.e., phone, syllable, word piece, and word. We also discuss the multilingual training that we adopt for tackling the low-resource problem.
<<<End-to-end Modeling>>>
End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model.
CTC augments the output symbol set with the “blank” symbol `$\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeating symbols and then removing all blank symbols, as in the following example:
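The worked example that originally followed this sentence did not survive extraction; the sketch below illustrates the contraction operation, with an arbitrary blank token and symbol names of our choosing.

```python
from itertools import groupby

BLANK = "<phi>"  # stands for the blank symbol phi

def ctc_collapse(frame_outputs):
    """Contract frame-wise CTC outputs: collapse repeats, then drop blanks.

    e.g. ['a', 'a', '<phi>', 'b', '<phi>', '<phi>', 'b'] -> ['a', 'b', 'b']
    """
    collapsed = [symbol for symbol, _ in groupby(frame_outputs)]
    return [symbol for symbol in collapsed if symbol != BLANK]
```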
The probability of an output sequence $\mathbf {L}$ for an input acoustic feature sequence $\mathbf {X}$, where $|\mathbf {L}| < |\mathbf {X}|$, is defined as follows.
$\mathcal {B}$ is a function to contract the outputs of RNNs, so $\mathcal {B}^{-1}(\mathbf {L})$ means the set of symbol sequences which is reduced to $\mathbf {L}$. The model is trained to maximize (1).
The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”. In a naive encoder-decoder model, the encoder converts the input sequence into a single context vector, which is the last hidden state of the encoder RNN, from which the decoder infers output symbols. In an attention-based model, the context vector $\mathbf {c}_l$ at the $l$-th decoding step is the sum of the products of all encoder outputs $h_1, ... , h_\mathrm {T}$ and the $l$-th attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ as shown in (2). Here, $\mathrm {T}$ is the length of the encoder output.
The attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters used to generate these weights are determined through end-to-end training.
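(Equation (2) itself did not survive extraction; from the description above it can plausibly be reconstructed as the attention-weighted sum $\mathbf {c}_l = \sum _{t=1}^{\mathrm {T}} \alpha _{t,l} h_t$.)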
In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously, as shown in Figure 1 BIBREF11. Long Short-Term Memory (LSTM) BIBREF12 is used for the RNNs in the encoder and the decoder.
<<</End-to-end Modeling>>>
<<<Modeling Units>>>
In conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities of triphone states for each acoustic feature, and these are converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as a unit BIBREF13, BIBREF14. A word-unit-based end-to-end model can take long context into consideration at inference time, but it suffers from a data sparsity problem due to its large vocabulary size. Though a phone-unit-based model does not have such a problem, it cannot capture such long context. Which unit to adopt depends on the size of the available corpora. In addition to both of these, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely.
In this paper, we investigate the modeling unit for end-to-end Ainu speech recognition since the optimal unit for a corpus of this size is not obvious BIBREF17. It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3 and the details of each unit are described below.
<<<Phone>>>
As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\langle $wb$\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3.
<<</Phone>>>
<<<Syllable>>>
A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure.
A word with a single letter is unchanged.
Two consecutive Cs and Vs are given a syllable boundary between them.
R$^*${CC, VV}R$^*$ $\rightarrow $ R$^*${C-C, V-V}R$^*$
(R $\in $ {C, V})
Put a syllable boundary after the segment-initial V if it is followed by at least two phones.
VCR$^+$ $\rightarrow $ V-CR$^+$
Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left.
(CV)$^*${CV, CVC} $\rightarrow $ (CV-)$^*${CV, CVC}
In addition, `=' and `$\langle $wb$\rangle $' are added, as explained in Section 4.2.1, through the model training process.
This procedure does not always generate a morphologically relevant syllable segmentation. For example, a word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us.
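A rough re-implementation of the segmentation procedure above is sketched below; it is not the authors' code, handles only plain phone strings (no `=' or `$\langle $wb$\rangle $' symbols), and takes the vowel set {a, e, i, o, u} from Section 2.2.

```python
VOWELS = set("aeiou")

def syllabify(word):
    """Split an Ainu word (a plain phone string) into syllables."""
    if len(word) <= 1:               # a single-letter word is unchanged
        return [word]

    # Boundary between two consecutive consonants or two consecutive vowels.
    segments, current = [], word[0]
    for prev, ch in zip(word, word[1:]):
        if (prev in VOWELS) == (ch in VOWELS):   # CC or VV pair
            segments.append(current)
            current = ch
        else:
            current += ch
    segments.append(current)

    syllables = []
    for seg in segments:
        # Boundary after a segment-initial V followed by at least two phones.
        if seg[0] in VOWELS and len(seg) >= 3:
            syllables.append(seg[0])
            seg = seg[1:]
        # Peel off CV from the left until only CV or CVC is left.
        while len(seg) > 3:
            syllables.append(seg[:2])
            seg = seg[2:]
        syllables.append(seg)
    return syllables

print(syllabify("isermakus"))   # -> ['i', 'ser', 'ma', 'kus'], as in the text
```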
<<</Syllable>>>
<<<Word Piece>>>
The byte pair encoding (BPE) BIBREF18 and the unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary becomes the intended size. The latter decides the segmentation to maximize the likelihood of occurrence of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\langle $wb$\rangle $' and other units are often merged to constitute a single piece as seen in Table 3.
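For reference, a unigram word-piece model of this kind could be trained and applied with SentencePiece roughly as follows; the corpus path and model prefix are placeholders, and only the vocabulary size of 500 follows the setting reported in Section 5.2.

```python
import sentencepiece as spm

# Train a unigram-LM word-piece model on the (placeholder) Ainu training text.
spm.SentencePieceTrainer.Train(
    "--input=ainu_train_text.txt --model_prefix=ainu_wp "
    "--vocab_size=500 --model_type=unigram"
)

sp = spm.SentencePieceProcessor()
sp.Load("ainu_wp.model")
print(sp.EncodeAsPieces("atuykorkamuy"))  # word from Section 2.2, for illustration
```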
<<</Word Piece>>>
<<<Word>>>
The original text can be segmented into words separated by spaces. To make the vocabulary smaller for the ease of training, `=' is treated as a word and infrequent words are replaced with a special label `$\langle $unk$\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\langle $unk$\rangle $'.
<<</Word>>>
<<</Modeling Units>>>
<<<Multilingual Training>>>
When a sufficient amount of data is not available for the target language, ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between the Ainu and Japanese languages BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' of `strike' in English). Hence, multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours of speech from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers.
In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for other languages, and they are trained using data for all languages. Figure 2 shows the architecture for the multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained.
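The sharing scheme in Figure 2 can be summarized with the simplified PyTorch-style sketch below, where the encoder is shared across languages and each language keeps its own output head. In the actual system the attention module is also shared and each branch is a full attention decoder; here the heads are reduced to linear projections, and all vocabulary sizes are arbitrary while the encoder sizes follow Section 5.2.

```python
import torch
import torch.nn as nn

class SharedEncoderASR(nn.Module):
    """Shared encoder with per-language output heads (simplified from Figure 2)."""

    def __init__(self, feat_dim=120, hidden=320, vocab_sizes=None):
        super().__init__()
        # Encoder shared by all languages (five BiLSTM layers, 320 cells).
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=5,
                               bidirectional=True, batch_first=True)
        # One output head per language; a stand-in for the per-language
        # attention decoders of the actual system.
        self.heads = nn.ModuleDict({
            lang: nn.Linear(2 * hidden, size)
            for lang, size in (vocab_sizes or {}).items()
        })

    def forward(self, feats, lang):
        enc_out, _ = self.encoder(feats)   # shared for every language
        return self.heads[lang](enc_out)   # only the selected head is used

# usage sketch (vocabulary sizes here are arbitrary)
model = SharedEncoderASR(vocab_sizes={"ainu": 500, "jnas": 40, "wsj": 40})
feats = torch.randn(2, 100, 120)           # (batch, frames, feature dim)
logits = model(feats, lang="ainu")         # Ainu branch; JNAS/WSJ heads untouched
```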
<<</Multilingual Training>>>
<<</End-to-end Speech Recognition>>>
<<<Experimental Evaluation>>>
In this section the setting and results of ASR experiments are described and the results are discussed.
<<<Data Setup>>>
The ASR experiments were performed in speaker-open condition as well as speaker-closed condition.
In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. As a result, the total sizes of the development and test sets turn out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours 48 minutes, respectively. The ASR model is trained with the remaining data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments under their speaker-open conditions were not conducted.
<<</Data Setup>>>
<<<Experimental Setting>>>
The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 three 40-dimensional log-mel filter banks features at contiguous time frames. The window length and the frame shift were set to be 25ms and 10ms. The encoder was composed of five BiLSTM layers and the attention-based decoder had a single layer of LSTM. Each LSTM had 320 cells and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015 with biases of zero. The fully connected layers were initialized following $\mathcal {U}{(-0.1, 0.1)}$. The weight decay BIBREF27 whose rate was $10^{-5}$ and the dropout BIBREF28 following $\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of 31st and 36th epoch BIBREF30. The mini-batch size was 30 and the utterances (IPUs) were sorted in an ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds.
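A small sketch of the frame-stacking step is given below; non-overlapping stacking of three consecutive frames is assumed, since the paper does not state whether the stacked windows overlap or whether the frame rate is reduced afterwards.

```python
import numpy as np

def stack_frames(features, stack=3):
    """Concatenate `stack` contiguous frames into one vector.

    features: (T, 40) log-mel features -> (T // stack, 40 * stack) output.
    """
    usable = (len(features) // stack) * stack
    return features[:usable].reshape(-1, stack * features.shape[1])

logmel = np.random.randn(998, 40)   # dummy 40-dim log-mel filter bank features
stacked = stack_frames(logmel)      # shape (332, 120)
```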
The loss function of the model was a linear sum of the loss from CTC and the attention-based decoder,
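(A plausible reconstruction of the formula, which appears to have been lost in extraction, following the standard hybrid CTC/attention objective is $\mathcal {L} = \lambda \mathcal {L}_{\mathrm {CTC}} + (1 - \lambda ) \mathcal {L}_{\mathrm {att}}$.)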
where $\lambda $ was set to be 0.5. Through all experiments, the phone labels are used to train the auxiliary CTC task because it is reported that the hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31.
Strictly speaking, the number of distinct units for each modeling scheme depends on the training set, but there are roughly 25 phone, 500 syllable, and 5,000 word units including the special symbols that represent the start and end of a sentence. Words occurring less than twice were replaced with `$\langle $unk$\rangle $'. The vocabulary size for word piece modeling was set to 500. These settings were based on the results of preliminary experiments with the development set.
For the multilingual training, we made three training scripts by concatenating the script of Ainu and those of the other languages (JNAS, WSJ, and JNAS+WSJ). The model was trained with these scripts up to the 30th epoch. From the 31st to the 40th epoch, the model was fine-tuned with the Ainu script. Phone units are used for JNAS and WSJ throughout the experiments.
<<</Experimental Setting>>>
<<<Results>>>
Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the numbers of tokens in the ground truth transcriptions for speaker-wise evaluation sets.
The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the settings, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size.
The PERs of the word unit model were larger than those of other units. This is because the word model often outputs the `$\langle $unk$\rangle $' symbols while other unit models are able to output symbols similar in sound as below.
In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\langle $wb$\rangle $' symbol.)
WERs are generally much larger than PERs, and this gap is further aggravated for the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words and the model may be confused about whether the output should be multiple words or a single compound word. The actual outputs frequently contain errors as in the example below, whose WER is 57% even though its PER is zero.
The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language.
<<</Results>>>
<<</Experimental Evaluation>>>
<<<Summary>>>
In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which WERs in the speaker-closed and speaker-open settings were respectively about 20% and 40% while PERs were about 6% and 14%. Multilingual training using the JNAS improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques.
<<</Summary>>>
<<<Acknowledgement>>>
The data sets used in this study are provided by the Ainu Museum and Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advices on the Ainu language.
<<</Acknowledgement>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nOverview of the Ainu Language\nBackground\nThe Ainu Language and its Writing System\nTypes of Ainu Recordings\nPrevious Work\nAinu Speech Corpus\nNumbers of Speakers and Episodes\nData Annotation\nEnd-to-end Speech Recognition\nEnd-to-end Modeling\nModeling Units\nPhone\nSyllable\nWord Piece\nWord\nMultilingual Training\nExperimental Evaluation\nData Setup\nExperimental Setting\nResults\nSummary\nAcknowledgement"
],
"type": "outline"
}
|
1909.08041
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Revealing the Importance of Semantic Retrieval for Machine Reading at Scale
<<<Abstract>>>
Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separated progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity, for development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multihop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of neural retrieval modules at both paragraph-level and sentence-level, and illustrate that intermediate semantic retrieval modules are vital for not only effectively filtering upstream information and thus saving downstream computation, but also for shaping upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at: this https URL
<<</Abstract>>>
<<<Introduction>>>
Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.
Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements in representation learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, evaluations were done mainly on the final downstream task with much less consideration of intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper bound of the downstream score can be improved, rather than on finding more exact information. This convention is misaligned with the nature of MRS, where equal effort should be put into emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for paragraph-level and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, on which retrieval performance can also be evaluated accurately since intermediate annotations of evidences are provided. Our system achieves state-of-the-art results with 45.32% answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% FEVER score (3% absolute improvement over previously published systems).
We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification.
<<</Introduction>>>
<<<Related Work>>>
Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.
Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA.
Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer the retrieval in MRS as Semantic Retrieval since it emphasizes on semantic understanding.
<<</Related Work>>>
<<<Method>>>
In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.
To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denotes a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia.
The system procedure is listed below:
(1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from whole Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set acceptable enough for downstream processing.
(2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval.
(3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence-level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4) and obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\mathbf {E}$.
(4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$.
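The four steps can be summarized with the following sketch; the retriever, scorer, and downstream functions are placeholders for the term-based retriever, the two BERT-based neural retrieval models, and the task-specific model, the `.sentences` attribute on paragraphs is assumed, and the threshold/top-k arguments mirror the notation above rather than concrete values.

```python
def mrs_pipeline(query, wiki, term_retrieve, para_scorer, sent_scorer,
                 downstream, k_p, h_p, k_s, h_s):
    # (1) Term-based retrieval: cheap, high-recall candidate set P_I.
    p_i = term_retrieve(query, wiki)

    # (2) Paragraph-level neural retrieval: keep top-k_p above threshold h_p.
    scored_p = sorted(((para_scorer(query, p), p) for p in p_i),
                      key=lambda x: x[0], reverse=True)
    p_n = [p for score, p in scored_p[:k_p] if score > h_p]

    # (3) Sentence-level neural retrieval over the sentences of P_N.
    candidates = [s for p in p_n for s in p.sentences]
    scored_s = sorted(((sent_scorer(query, s), s) for s in candidates),
                      key=lambda x: x[0], reverse=True)
    evidence = [s for score, s in scored_s[:k_s] if score > h_s]

    # (4) Downstream QA / verification over the query and selected evidence.
    prediction = downstream(query, evidence)
    return prediction, evidence
```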
In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6.
<<<Modeling and Training>>>
Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:
We applied an affine layer and sigmoid activation on the last layer output of the [$\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function:
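(The objective itself appears to have been dropped during extraction; given the binary cross-entropy setup and the notation in the next sentence, a plausible reconstruction is $\mathcal {J}_{retri} = -\sum _{i \in \mathbf {T}^{p/s}_{pos}} \log (\hat{p}_i) - \sum _{i \in \mathbf {T}^{p/s}_{neg}} \log (1 - \hat{p}_i)$.)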
where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples.
QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$" and “$\mathit {no}$" tokens between [$\mathit {CLS}$] and the $Query$ as:
where the supervision was given to the second or the third token when the answer is “yes" or “no", such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as:
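(The formula did not survive extraction; a plausible reconstruction consistent with the notation below is $\mathcal {J}_{qa} = -\sum _{i} \big ( \log (\hat{y}^s_i) + \log (\hat{y}^e_i) \big )$.)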
where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference.
Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27).
It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we trained the sub-module in isolation without considering the properties of its precedent upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules being isolated. We release our code and the organized data both for reproducibility and providing an off-the-shelf testbed to facilitate future research on MRS.
<<</Modeling and Training>>>
<<</Method>>>
<<<Experimental Setup>>>
MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification.
<<<Tasks and Datasets>>>
HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation on models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis on the explainable predictions and its relations with the upstream retrieval.
FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification.
As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks.
<<</Tasks and Datasets>>>
<<<Metrics>>>
Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscript $a$ and $s$ indicate that the scores are for answer span and supporting facts.
For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the Fever Score for joint performance. Fever score will award one point for each example with the correct predicted label only if all ground truth facts were contained in the predicted facts set with at most 5 elements. We also used Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upperbound of final FEVER Score at one intermediate layer assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set.
<<</Metrics>>>
<<</Experimental Setup>>>
<<<Results on Benchmarks>>>
We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA.
As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new state-of-the-art results on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to doubling of the joint EM over previous best results. The scores for answer predictions are also higher than all previous best results with $\sim $8 absolute points increase on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluations.
Similarly for FEVER, we show the F1 for evidence, the Label Accuracy, and the FEVER Score (same as the benchmark evaluation) for the models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with a $\sim $4 and $\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on evidence F1, 22 points greater than that of the second best system, demonstrating its ability on semantic retrieval.
Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) merely as an auxiliary task for providing extra model explainability. In nie2019combining, although a similar three-stage system was used for FEVER, only one neural retrieval module was applied at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream task. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream task, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matter.
<<</Results on Benchmarks>>>
<<<Analysis and Ablations>>>
Intuitively, both the paragraph-level and the sentence-level retrieval sub-modules help speed up the downstream processing. More importantly, since downstream modules were trained on data sampled from upstream modules, both neural retrieval sub-modules also play an implicit but important role in controlling the immediate retrieval distribution, i.e., the distribution of the set $\mathbf {P_N}$ and the set $\mathbf {S}$ (as shown in Fig. FIGREF2), and in providing better inference data and training data for the downstream modules.
<<<Ablation Studies>>>
<<<Setups:>>>
To reveal the importance of the neural retrieval modules at both the paragraph and sentence levels for maintaining the performance of the overall system, we removed either of them and examined the consequences. Because the removal of a module in the pipeline might change the distribution of the input to the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-trained the downstream QA or verification module by sampling data from both the ground truth set and the set retrieved directly by the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA.
<<</Setups:>>>
<<<Results:>>>
Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at the paragraph and sentence levels on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision of sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also leads to substantial decreases in all the downstream scores on both the QA and the verification task, in spite of their higher upper-bound and recall scores. This indicates that the negative effects on downstream modules induced by the omission of paragraph-level retrieval cannot be amended by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper bound of the final score risks jeopardizing the performance of the overall system.
Next, the removal of the sentence-level retrieval module induces a $\sim $2-point drop on EM and F1 in the QA task, and a $\sim $15-point drop on FEVER Score in the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module also helps pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffers much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11-point drop on answer EM compared to a $\sim $9-point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval, whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and observe a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label.
<<</Results:>>>
<<</Ablation Studies>>>
<<<Sub-Module Change Analysis>>>
To further study the effects of upstream semantic retrieval towards downstream tasks, we change training or inference data between intermediate layers and then examine how this modification will affect the downstream performance.
<<<Effects of Paragraph-level Retrieval>>>
We fixed $h_p=0$ (the value achieving the best performance) and re-trained all the downstream parameters, tracking their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means a potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to answer each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks at $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to feed downstream modules too much information at paragraph granularity.
<<</Effects of Paragraph-level Retrieval>>>
<<<Effects of Sentence-level Retrieval>>>
Similarly, to study the effects of the neural sentence-level retrieval module on the downstream QA and verification modules, we fixed $k_s$ at 5 and set $h_s$ to range from 0.1 to 0.9 with an interval of 0.1. Then, we re-trained the downstream QA and verification modules with each $h_s$ value and experimented on both HotpotQA and FEVER.
Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes stricter about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks when $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to tolerate a certain amount of noise at the sentence level and benefit from a higher recall.
Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 as the upstream sentence-level threshold $h_s$ is modified. We observed that the general trend is similar to that of the QA task, where both the Label Accuracy and FEVER Score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix.
<<</Effects of Sentence-level Retrieval>>>
<<</Sub-Module Change Analysis>>>
<<<Answer Breakdown>>>
We further sampled 200 examples from HotpotQA and manually tagged them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24, and the performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent is 'Event' (2%). It is also interesting to note that the model performs best on Yes/No questions, reaching an accuracy of 70.6% as shown in Table TABREF23.
<<</Answer Breakdown>>>
<<<Examples>>>
Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that it is very difficult to filter out the distracting sentence at the sentence level, either by the sentence retrieval module or by the QA module.
The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) a paragraph-level retrieval module is imperative; (2) the downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) modifications at the paragraph-level retrieval stage can cause cascading effects on the downstream task.
<<</Examples>>>
<<</Analysis and Ablations>>>
<<<Conclusion>>>
We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nMethod\nModeling and Training\nExperimental Setup\nTasks and Datasets\nMetrics\nResults on Benchmarks\nAnalysis and Ablations\nAblation Studies\nSetups:\nResults:\nSub-Module Change Analysis\nEffects of Paragraph-level Retrieval\nEffects of Sentence-level Retrieval\nAnswer Breakdown\nExamples\nConclusion"
],
"type": "outline"
}
|
1909.09270
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Named Entity Recognition with Partially Annotated Training Data
<<<Abstract>>>
Supervised machine learning assumes the availability of fully-labeled data, but in many cases, such as low-resource languages, the only data available is partially annotated. We study the problem of Named Entity Recognition (NER) with partially annotated training data in which a fraction of the named entities are labeled, and all other tokens, entities or otherwise, are labeled as non-entity by default. In order to train on this noisy dataset, we need to distinguish between the true and false negatives. To this end, we introduce a constraint-driven iterative algorithm that learns to detect false negatives in the noisy set and downweigh them, resulting in a weighted training set. With this set, we train a weighted NER model. We evaluate our algorithm with weighted variants of neural and non-neural NER models on data in 8 languages from several language and script families, showing strong ability to learn from partial data. Finally, to show real-world efficacy, we evaluate on a Bengali NER corpus annotated by non-speakers, outperforming the prior state-of-the-art by over 5 points F1.
<<</Abstract>>>
<<<Introduction>>>
Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather.
We study the problem of using partial annotations to train a Named Entity Recognition (NER) system. In this setting, all (or most) identified entities are correct, but not all entities have been identified, and crucially, there are no reliable examples of the negative class. The sentence shown in Figure FIGREF2 shows examples of both a gold and a partially annotated sentence. Such partially annotated data is relatively easy to obtain: for example, a human annotator who does not speak the target language may recognize common entities, but not uncommon ones. With no reliable examples of the negative class, the problem becomes one of estimating which unlabeled instances are true negatives and which are false negatives.
To address the above-mentioned challenge, we present Constrained Binary Learning (CBL) – a novel self-training based algorithm that focuses on iteratively identifying true negatives for the NER task while improving its learning. Towards this end, CBL uses constraints that incorporate background knowledge required for the entity recognition task.
We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.
<<</Introduction>>>
<<<Related Work>>>
The supervision paradigm in this paper, partial supervision, falls broadly under the category of semi-supervision BIBREF0, and is closely related to weak supervision BIBREF1 and incidental supervision BIBREF2, in the sense that data is constructed through some noisy process. However, all of the most related work shares a key difference from ours: reliance on a small amount of fully annotated data in addition to the noisy data.
FernandesBr11 introduces a transductive version of structured perceptron for partially annotated sequences. However, their definition of partial annotation is labels removed at random, so examples from all classes are still available if not contiguous.
Fidelity Weighted Learning BIBREF3 uses a teacher/student model, in which the teacher has access to (a small amount) of high quality data, and uses this to guide the student, which has access to (a large amount) of weak data.
HedderichKl18, following GoldbergerBe17, add a noise adaptation layer on top of an LSTM, which learns how to correct noisy labels, given a small amount of training data. We compare against this model in our experiments.
In the world of weak supervision, Snorkel BIBREF4, BIBREF5, is a system that combines automatic labeling functions with data integration and noise reduction methods to rapidly build large datasets. They rely on high recall and consequent redundancy of the labeling functions. We argue that in certain realistic cases, high-recall candidate identification is unavailable.
We draw inspiration from the Positive-Unlabeled (PU) learning framework BIBREF6, BIBREF7, BIBREF8, BIBREF9. Originally introduced for document classification, PU learning addresses problems where examples of a single class (for example, sports) are easy to obtain, but a full labeling of all other classes is prohibitively expensive.
Named entity classification as an instance of PU learning was introduced in Grave14, which uses constrained optimization with constraints similar to ours. However, they only address the problem of named entity classification, in which mentions are given, and the goal is to assign a type to a named-entity (like `location', `person', etc.) as opposed to our goal of identifying and typing named entities.
Although the task is slightly different, there has been work on building `silver standard' data from Wikipedia BIBREF10, BIBREF11, BIBREF12, using hyperlink annotations as the seed set and propagating throughout the document.
Partial annotation in various forms has also been studied in the contexts of POS-tagging BIBREF13, word sense disambiguation BIBREF14, temporal relation extraction BIBREF15, dependency parsing BIBREF16, and named entity recognition BIBREF17.
In particular, BIBREF17 study a similar problem with a few key differences: since they remove entity surfaces randomly, the dataset is too easy; and they do not use constraints on their output. We compare against their results in our experiments.
Our proposed method is most closely aligned with the Constraint Driven Learning (CoDL) framework BIBREF18, in which an iterative algorithm reminiscent of self-training is guided by constraints that are applied at each iteration.
<<</Related Work>>>
<<<Constrained Binary Learning>>>
Our method assigns instance weights to all negative elements (tokens tagged as O), so that false negatives have low weights, and all other instances have high weights. We calculate weights according to the confidence predictions of a classifier trained iteratively over the partially annotated data. We refer to our method as Constrained Binary Learning (CBL).
We will first describe the motivation for this approach before moving on to the mechanics. We start with partially annotated data (which we call set $T$) in which some, but not all, positives are annotated (set $P$), and no negative is labeled. By default, we assume that any instance not labeled as positive is labeled as negative as opposed to unlabeled. This data (set $N$) is noisy in the sense that many true positives are labeled as negative (these are false negatives). Clearly, training on $T$ as-is will result in a noisy classifier.
Two possible approaches are: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. The former method affords more training data, but runs the risk of adding noise, which could be worse than the original partial annotations. The latter is more forgiving because of an asymmetry in the penalties: it is important to remove all false negatives in $N$, but inadvertently removing true negatives from $N$ is typically not a problem, especially in NER, where negative examples dominate. Further, a binary model (only two labels) is sufficient in this case, as we need only detect entities, not type them.
We choose the latter method, but instead of removing false negatives, we adopt an instance-weighting approach, in which each instance is assigned a weight $v_i \ge 0$ according to confidence in the labeling of that instance. A weight of 0 means that the loss this instance incurs during training will not update the model.
With this in mind, CBL takes two phases: first, it learns a binary classifier $\lambda $ using a constrained iterative process modeled after the CODL framework BIBREF18, and depicted in Figure FIGREF5. The core of the algorithm is the train-predict-infer loop. The training process (line 4) is weighted, using weights $V$. At the start, these can be all 1 (Raw), or can be initialized with prior knowledge. The learned model is then used to predict on all of $T$ (line 5). In the inference step (line 6), we take the predictions from the prior round and the constraints $C$ and produce a new labeling on $T$, and a new set of weights $V$. The details of this inference step are presented later in this section. Although our ultimate strategy is simply to assign weights (not change labels), in this inner loop, we update the labels on $N$ according to classifier predictions.
In the second phase of CBL, we use the $\lambda $ trained in the previous phase to assign weights to instances as follows:
Where $P_{\lambda }(y_i=\text{O} \mid x_i)$ is understood as the classifier's confidence that instance $x_i$ takes the negative label. In practice it is sufficient to use any confidence score from the classifier, not necessarily a probability. If the classifier has accurately learned to detect entities, then for all the false negatives in $N$, $P_{\lambda }(y_i=\text{O}|x_i)$ is small, which is the goal.
Ultimately, we send the original multiclass partially annotated dataset along with final weights $V$ to a standard weighted NER classifier to learn a model. No weights are needed at test time.
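A compact sketch of the two phases described above. The training, prediction and constrained-inference steps are passed in as placeholder callables (they are not interfaces from the authors' code), and the returned dictionary is the final instance weighting sent to the weighted NER model:

def constrained_binary_learning(T, P, init_weights, n_iters,
                                train_weighted, predict, constrained_inference):
    # Phase 1: constrained train-predict-infer loop on a binary (entity vs. O)
    # view of the partially annotated data.
    V = dict(init_weights)                          # all 1.0 for the "Raw" start
    labels = {i: (1 if i in P else 0) for i in T}   # default: unlabeled tokens are O
    scores = {}
    for _ in range(n_iters):
        model = train_weighted(T, labels, V)        # weighted binary NER model
        scores = predict(model, T)                  # confidence that each token is O
        labels, V = constrained_inference(scores, P)  # e.g. the entity-ratio ILP
    # Phase 2: trust P fully; weight each token in N by the confidence in O,
    # so likely false negatives receive weights near zero.
    return {i: (1.0 if i in P else scores[i]) for i in T}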
<<<NER with CBL>>>
So far, we have given a high-level view of the algorithm. In this section, we will give more low-level details, especially as they relate to the specific problem of NER. One contribution of this work is the inference step (line 6), which we address using a constrained Integer Linear Program (ILP) and describe in this section. However, the constraints are based on a value we call the entity ratio. First, we describe the entity ratio, then we describe the constraints and stopping condition of the algorithm.
<<<Entity ratio and Balancing>>>
We have observed that NER datasets tend to hold a relatively stable ratio of entity tokens to total tokens. We refer to this ratio as $b$, and define it with respect to some labeled dataset as:
where $N$ is the set of negative examples. Previous work has shown that in fully-annotated datasets the entity ratio tends to be about $0.09 \pm 0.05$, depending on the dataset and genre BIBREF19. Intuitively, knowledge of the gold entity ratio can help us estimate when we have found all the false negatives.
In our main experiments, we assume that the entity ratio with respect to the gold labeling is known for each training dataset. A similar assumption was made in ElkanNo08 when determining the $c$ value, and in Grave14 in the constraint determining the percentage of other examples. However, we also show in Section that knowledge of this ratio is not strictly necessary, and a flat value across all datasets produces similar performance.
With a weighted training set, it is also useful to define the weighted entity ratio.
When training an NER model on weighted data, one can change the weighted entity ratio to achieve different effects. To make balanced predictions on test, the entity ratio in the training data should roughly match that of the test data BIBREF20. To bias a model towards predicting positives or predicting negatives, the weighted entity ratio can be set higher or lower respectively. This effect is pronounced when using linear methods for NER, but not as clear in neural methods.
To change the entity ratio, we scale the weights in $N$ by a scaling constant $\gamma $. Targeting a particular $b^*$, we may write:
We can solve for $\gamma $:
To obtain weights, $v^*_i$, that attain the desired entity ratio, $b^*$, we scale all weights in $N$ by $\gamma $.
In the train-predict-infer loop, we balance the weights to a value near the gold ratio before training.
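The display equations for the weighted entity ratio and for $\gamma $ are not reproduced in this copy, so the sketch below assumes the natural reading: the weighted ratio is the weight mass on positive-labeled tokens over the total weight mass, and $\gamma $ rescales the weights in $N$ so that a target ratio $b^*$ is reached.

def balance_weights(V, positives, b_target):
    # V: instance weights keyed by token id; positives: set of token ids in P.
    # Assumed weighted entity ratio: W_P / (W_P + W_N), with W_P and W_N the
    # total weight on positive- and negative-labeled tokens respectively.
    w_pos = sum(v for i, v in V.items() if i in positives)
    w_neg = sum(v for i, v in V.items() if i not in positives)
    # Solve b_target = w_pos / (w_pos + gamma * w_neg) for gamma.
    gamma = w_pos * (1.0 - b_target) / (b_target * w_neg)
    return {i: (v if i in positives else gamma * v) for i, v in V.items()}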
<<</Entity ratio and Balancing>>>
<<<Constraints and Stopping Condition>>>
We encode our constraints with an Integer Linear Program (ILP), shown in Figure FIGREF17. Intuitively, the job of the inference step is to take predictions ($\hat{T}$) and use knowledge of the task to `fix' them.
In the objective function (Eqn. DISPLAY_FORM18), token $i$ is represented by two indicator variables $y_{0i}$ and $y_{1i}$, representing negative and positive labels, respectively. Associated prediction scores $C_0$ and $C_1$ are from the classifier $\lambda $ in the last round of predictions. The first constraint (Eqn. ) encodes the fact that an instance cannot be both an entity and a non-entity.
The second constraint (Eqn. ) enforces the ratio of positive to total tokens in the corpus to match a required entity ratio. $|T|$ is the total number of tokens in the corpus. $b$ is the required entity ratio, which increases at each iteration. $\delta $ allows some flexibility, but is small.
Constraint encodes that instances in $P$ should be labeled positive since they were manually labeled and are by definition trustworthy. We set $\xi \ge 0.99$.
This framework is flexible in that more complex language- or task-specific constraints could be added. For example, in English and many other languages with Latin script, it may help to add a capitalization constraint. In languages with rich morphology, certain suffixes may indicate or contraindicate a named entity. For simplicity, and because of the number of languages in our experiments, we use only a few constraints.
After the ILP has selected predictions, we assign weights to each instance in preparation for training the next round. The decision process for an instance is:
This is similar to Equation (DISPLAY_FORM6), except that the set of tokens that the ILP labeled as positive is larger than $P$. With new labels and weights, we start the next iteration.
The stopping condition for the algorithm is related to the entity ratio. One important constraint (Eqn. ) governs how many positives are labeled at each round. This number starts at $|P|$ and is increased by a small value at each iteration, thereby improving recall. Positive instances are chosen in two ways. First, all instances in $P$ are constrained to be labeled positive (Eqn. ). Second, the objective function ensures that high-confidence positives will be chosen. The stopping condition is met when the number of required positive instances (computed using gold unweighted entity ratio) equals the number of predicted positive instances.
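A simplified sketch of the inference ILP using the open-source PuLP package. The variable names, the ratio slack $\delta $ and the treatment of $P$ follow the description above, but details such as how $b$ grows per iteration are left out, so this is an illustration rather than the authors' implementation:

from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

def ilp_inference(C0, C1, P, b, delta=0.001, xi=0.99):
    # C0[i], C1[i]: classifier scores for token i being O / an entity.
    n = len(C0)
    prob = LpProblem("cbl_inference", LpMaximize)
    y0 = [LpVariable(f"y0_{i}", cat=LpBinary) for i in range(n)]
    y1 = [LpVariable(f"y1_{i}", cat=LpBinary) for i in range(n)]
    prob += lpSum(C0[i] * y0[i] + C1[i] * y1[i] for i in range(n))  # objective
    for i in range(n):
        prob += y0[i] + y1[i] == 1          # exactly one label per token
    prob += lpSum(y1) <= (b + delta) * n    # entity-ratio constraint with slack
    prob += lpSum(y1) >= (b - delta) * n
    for i in P:
        prob += y1[i] >= xi                 # trusted manual positives stay positive
    prob.solve()
    return [int(v.value()) for v in y1]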
<<</Constraints and Stopping Condition>>>
<<</NER with CBL>>>
<<</Constrained Binary Learning>>>
<<<Experiments>>>
We measure the performance of our method on 8 different languages using artificially perturbed labels to simulate the partial annotation setting.
<<<Data>>>
We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have a labelset of Person, Organization, Location, and Miscellaneous.
The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25.
<<</Data>>>
<<<Artificial Perturbation>>>
We create partial annotations by perturbing gold annotated data in two ways: lowering recall (to simulate missing entities), and lowering precision (to simulate noisy annotations).
To lower recall, we replace gold named entity tags with $O$ tags (for non-name). We do this by grouping named entity surface forms, and replacing tags on all occurrences of a randomly selected surface form until the desired amount remains. For example, if the token `Bangor' is chosen to be untagged, then every occurrence of `Bangor' will be untagged. We chose this slightly complicated method because the simplest idea (remove mentions randomly) leaves an artificially large diversity of surface forms, which makes the problem of discovering noisy entities easier.
To lower precision, we tag a random span (of a random start position, and a random length between 1 and 3) with a random named entity tag. We continue this process until we reach the desired precision. When both precision and recall are to be perturbed, the recall adjustment is made first, and then the number of random spans to be added is calculated by the entities that are left.
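A sketch of the perturbation procedure (data structures and stopping checks are simplified; each sentence is assumed to be a dict with token and entity lists, which is our own representation):

import random

def lower_recall(sentences, target_recall):
    # Untag whole surface-form groups until only the target fraction of the
    # original entities keeps its gold tag.
    surfaces = list({e["surface"] for s in sentences for e in s["entities"]})
    total = sum(len(s["entities"]) for s in sentences)
    random.shuffle(surfaces)
    kept = total
    for surface in surfaces:
        if total == 0 or kept / total <= target_recall:
            break
        for s in sentences:
            before = len(s["entities"])
            s["entities"] = [e for e in s["entities"] if e["surface"] != surface]
            kept -= before - len(s["entities"])
    return sentences

def lower_precision(sentences, n_random_spans, tagset):
    # Inject false positives: random spans of length 1-3 with random entity tags.
    for _ in range(n_random_spans):
        s = random.choice(sentences)
        start = random.randrange(len(s["tokens"]))
        end = min(start + random.randint(1, 3), len(s["tokens"]))
        s["entities"].append({"surface": " ".join(s["tokens"][start:end]),
                              "span": (start, end),
                              "tag": random.choice(tagset)})
    return sentences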
<<</Artificial Perturbation>>>
<<<NER Models>>>
In principle, CBL can use any NER method that can be trained with instance weights. We experiment with both non-neural and neural models.
<<<Non-neural Model>>>
For our non-neural system, we use a version of Cogcomp NER BIBREF24, BIBREF25 modified to use Weighted Averaged Perceptron. This operates on a weighted training set $D_w = \lbrace (x_i, y_i, v_i) \rbrace _{i=1}^N $, where $N$ is the number of training examples, and $v_i \ge 0$ is the weight on the $i$th training example. In this non-neural system, a training example is a word with context encoded in the features. We change only the update rule, where the learning rate $\alpha $ is multiplied by the weight:
We use a standard set of features, as documented in BIBREF24. In order to keep the language-specific resources to a minimum, we did not use any gazetteers for any language. One of the most important features is Brown clusters, trained for 100, 500, and 1000 clusters for the CoNLL languages, and 2000 clusters for the remaining languages. We trained these clusters on Wikipedia text for the four CoNLL languages, and on the same monolingual text used to train the word vectors (described in Section SECREF26).
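The display equation for the update is not shown in this copy; the sketch below assumes the straightforward rule in which the instance weight simply scales the usual structured-perceptron update, so an instance with $v_i = 0$ leaves the model untouched:

def weighted_perceptron_update(theta, gold_features, pred_features, alpha, v_i):
    # theta: feature weight vector (dict); alpha: learning rate; v_i: instance weight.
    for f, count in gold_features.items():
        theta[f] = theta.get(f, 0.0) + alpha * v_i * count
    for f, count in pred_features.items():
        theta[f] = theta.get(f, 0.0) - alpha * v_i * count
    return theta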
<<</Non-neural Model>>>
<<<Neural Model>>>
A common neural model for NER is the BiLSTM-CRF model BIBREF26. However, because the Conditional Random Field (CRF) layer calculates loss at the sentence level, we need a different method to incorporate token weights. We use a variant of the CRF that allows partial annotations by marginalizing over all possible sequences BIBREF27.
When using a standard BiLSTM-CRF model, the loss of a dataset ($D$) composed of sentences ($s$) is calculated as:
Where $P_\theta (\mathbf {y}^{(s)} | \textbf {x}^{(s)})$ is calculated by the CRF over outputs from the BiLSTM. In the marginal CRF framework, it is assumed that $\mathbf {y}^{(s)}$ is necessarily partial, denoted as $\mathbf {y}^{(s)}_p$. To incorporate partial annotations, the loss is calculated by marginalizing over all possible sequences consistent with the partial annotations, denoted as $C(\mathbf {y}_p^s)$.
However, this formulation assumes that all possible sequences are equally likely. To address this, BIBREF17 introduced a way to weigh sequences.
It's easy to see that this formulation is a generalization of the standard CRF if $q(.)=1$ for the gold sequence $\mathbf {y}$, and 0 for all others.
The product inside the summation depends on tag transition probabilities and tag emission probabilities, as well as token-level “weights" over the tagset. These weights can be seen as defining a soft gold labeling for each token, corresponding to confidence in each label.
For clarity, define the soft gold labeling over each token $x_i$ as $\mathbf {G}_i \in [0,1]^{L}$, where $L$ is the size of the labelset. Now, we may define $q(.)$ as:
Where $G_i^{y_i}$ is understood as the weight in $\mathbf {G}_i$ that corresponds to the label $y_i$.
We incorporate our instance weights in this model with the following intuitions. Recall that if an instance weight $v_i=0$, this indicates low confidence in the label on token $x_i$, and therefore the labeling should not update the model at training time. Conversely, if $v_i=1$, then this label is to be trusted entirely.
If $v_i=0$, we set the soft labeling weights over $x_i$ to be uniform, which is as good as no information. Since $v_i$ is defined as confidence in the O label, the soft labeling weight for O increases proportionally to $v_i$. Any remaining probability mass is distributed evenly among the other labels.
To be precise, for tokens in $N$, we calculate values for $\mathbf {G}_i$ as follows:
For example, consider phase 1 of Constrained Binary Learning, in which the labelset is collapsed to two labels ($L=2$). Assuming that the O label has index 0, then if $v_i=0$, then $\mathbf {G}_i = [0.5, 0.5]$. If $v_i=0.6$, then $\mathbf {G}_i = [0.6, 0.4]$.
For tokens in $P$ (which have some entity label with high confidence), we always set $\mathbf {G}_i$ with 1 in the given label index, and 0 elsewhere.
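The display equation for $\mathbf {G}_i$ on tokens in $N$ is not reproduced here. The sketch below uses one reading that matches both worked examples above (uniform at $v_i=0$ and $[0.6, 0.4]$ at $v_i=0.6$ for $L=2$): put $\max (v_i, 1/L)$ on the O label and split the remaining mass evenly over the other labels. Treat this exact form as our assumption, not the paper's equation.

def soft_gold_labeling(v_i, L, o_index=0, gold_label=None):
    # Tokens in P: all probability mass on the annotated gold label.
    if gold_label is not None:
        return [1.0 if j == gold_label else 0.0 for j in range(L)]
    # Tokens in N: confidence in O grows with v_i, never below uniform (assumed form).
    o_weight = max(v_i, 1.0 / L)
    rest = (1.0 - o_weight) / (L - 1)
    return [o_weight if j == o_index else rest for j in range(L)]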
We use pretrained GloVe BIBREF28 word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. The other languages are distributed with monolingual text BIBREF23, which we used to train our own skip-n-gram vectors.
<<</Neural Model>>>
<<</NER Models>>>
<<<Baselines>>>
We compare against several baselines, including two from prior work.
<<<Raw annotations>>>
The simplest baseline is to do nothing to the partially annotated data and train on it as is.
<<</Raw annotations>>>
<<<Instance Weights>>>
Although CBL works with no initialization (that is, all tokens with weight 1), we found that a good weighting scheme can boost performance for certain models. We design weighting schemes that give instances in $N$ weights corresponding to an estimate of the label confidence. For example, non-name tokens such as respectfully should have weight 1, but possible names, such as Russell, should have a low weight, or 0. We propose two weighting schemes: frequency-based and window-based.
For the frequency-based weighting scheme, we observed that names have relatively low frequency (for example, Kennebunkport, Dushanbe) and common words are rarely names (for example the, and, so). We weigh each instance in $N$ according to its frequency.
where $freq(x_i)$ is the frequency of the $i^{th}$ token in $N$ divided by the count of the most frequent token. In our experiments, we computed frequencies over $P+N$, but these could be estimated on any sufficiently large corpus. We found that the neural model performed poorly when the weights followed a Zipfian distribution (e.g. most weights very small), so for those experiments, we took the log of the token count before normalizing.
For the window-based weighting scheme, noting that names rarely appear immediately adjacent to each other in English text, we set weights for tokens within a window of size 1 of a name (identified in $P$) to be $1.0$, and for tokens farther away to be 0.
where $d_i$ is the distance of the $i^{th}$ token to the nearest named entity in $P$.
Finally, we combine the two weighting schemes as:
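The display equations for the two schemes and for their combination are not shown in this copy. The sketch below follows the prose descriptions; the combine function uses an elementwise maximum purely as a placeholder assumption, since the actual combination rule is not given here.

import math
from collections import Counter

def frequency_weights(tokens_in_N, all_tokens, use_log=False):
    # Common (high-frequency) tokens are rarely names, so they get weight near 1;
    # rare tokens, which might be unlabeled names, get low weight.
    counts = Counter(all_tokens)
    if use_log:  # log counts (shifted by 1) for the neural model, which dislikes Zipfian weights
        counts = {t: math.log(c + 1) for t, c in counts.items()}
    max_count = max(counts.values())
    return [counts[t] / max_count for t in tokens_in_N]

def window_weights(distances_to_nearest_name):
    # Tokens adjacent to a known name are trusted negatives; distant tokens
    # might be unlabeled names and are zeroed out.
    return [1.0 if d <= 1 else 0.0 for d in distances_to_nearest_name]

def combine(freq_w, win_w):
    # Placeholder combination (elementwise maximum) -- an assumption only.
    return [max(f, w) for f, w in zip(freq_w, win_w)]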
<<</Instance Weights>>>
<<<Self-training with Marginal CRF>>>
BIBREF17 propose a model based on marginal CRF BIBREF27 (described in Section SECREF26). They follow a self-training framework with cross-validation, using the trained model over all but one fold to update gold labeling distributions in the final fold. This process continues until convergence. They use a partial-CRF framework similar to ours, but taking predictions at face value, without constraints.
<<</Self-training with Marginal CRF>>>
<<<Neural Network with Noise Adaptation>>>
Following BIBREF30, we used a neural network with a noise adaptation layer. This extra layer attempts to correct noisy examples given a probabilistic confusion matrix of label noise. Since this method needs a small amount of labeled data, we selected 500 random tokens to be the gold training set, in addition to the partial annotations.
As with our BiLSTM experiments, we use pretrained GloVe word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. We omit results from the remaining languages because the scores were substantially worse even than training on raw annotations.
<<</Neural Network with Noise Adaptation>>>
<<</Baselines>>>
<<<Experimental Setup and Results>>>
We show results from our experiments in Table TABREF30. In all experiments, the training data is perturbed at 90% precision and 50% recall. These parameters are similar to the scores obtained by human annotators in a foreign language (see Section SECREF5). We evaluate each experiment with both non-neural and neural methods.
First, to get an idea of the difficulty of NER in each language, we report scores from models trained on gold data without perturbation (Gold). Then we report results from an Oracle Weighting scheme (Oracle Weighting) that takes partially annotated data and assigns weights with knowledge of the true labels. Specifically, mislabeled entities in set $N$ are given weight 0, and all other tokens are given weight 1.0. This scheme is free from labeling noise, but should still get lower scores than Gold because of the smaller number of entities. Since our method estimates these weights, we do not expect CBL to outperform the Oracle method. Next, we show results from all baselines. The bottom two sections are our results, first with no initialization (Raw), and CBL over that, then with Combined Weighting initialization, and CBL over that.
<<</Experimental Setup and Results>>>
<<<Analysis>>>
Regardless of initialization or model, CBL improves over the baselines. Our best model, CBL-Raw BiLSTM-CRF, improves over the Raw Annotations BiLSTM-CRF baseline by 11.2 points F1, and the Self-training prior work by 2.6 points F1, showing that it is an effective way to address the problem of partial annotation. Further, the best CBL version for each model is within 3 points of the corresponding Oracle ceiling, suggesting that this weighting framework is nearly saturated.
The Combined weighting scheme is surprisingly effective for the non-neural model, which suggests that the intuition about frequency as distinction between names and non-names holds true. It gives modest improvement in the neural model. The Self-training method is effective, but is outperformed by our best CBL method, a difference we discuss in more detail in Section SECREF43. The Noise Adaptation method outperforms the Raw annotations Cogcomp baseline in most cases, but does not reach the performance of the Self-training method, despite using some fully labeled data.
It is instructive to compare the neural and non-neural versions of each setup. The neural method is better overall, but is less able to learn from the knowledge-based initialization weights. In the non-neural method, the difference between Raw and Combined is nearly 20 points, but the difference in the neural model is less than 3 points. Combined versions of the non-neural method outperform the neural method on 3 languages: Dutch, Arabic, and Hindi. Further, in the neural method, CBL-Raw is always worse than CBL-Combined. This may be due to the way that weights are used in each model. In the non-neural model, a low enough weight completely cancels the token, whereas in the neural model it is still used in training. Since the neural model performs well in the Oracle setting, we know that it can learn from hard weights, but it may have trouble with the subtle differences encoded in frequencies. We leave it to future work to discover improved ways of incorporating instance weights in a BiLSTM-CRF.
In seeking to understand the details of the other results, we need to consider the precision/recall tradeoff. First, all scores in the Gold row had higher precision than recall. Then, training on raw partially annotated data biases a classifier strongly towards predicting few entities. All results from the Raw annotations row have precision more than double the recall (e.g. Dutch Precision, Recall, F1 were: 91.5, 32.4, 47.9). In this context, the problem this paper explores is how to improve the recall of these datasets without harming the precision.
<<</Analysis>>>
<<<Difference from Prior Work>>>
While our method has several superficial similarities with prior work, most notably BIBREF17, there are some crucial differences.
Our methods are similar in that they both use a model trained at each step to assign a soft gold-labeling to each token. Each algorithm iteratively trains models using weights from the previous steps.
One difference is that BIBREF17 use cross-validation to train, while we follow BIBREF18 and retrain with the entire training set at each round.
However, the main difference has to do with the focus of each algorithm. Recall the discussion in Section SECREF3 regarding the two possible approaches: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. Conceptually, the former is the approach taken by BIBREF17, while the latter is ours. Another way to look at this is as focusing on predicting correct tag labels (BIBREF17) versus focusing on predicting O tags with high confidence (ours).
Even though they use soft labeling (which they show to be consistently better than hard labeling), it is possible that the predicted tag distribution is incorrect. Our approach allows us to avoid much of the inevitable noise that comes from labelling with a weak model.
<<</Difference from Prior Work>>>
<<</Experiments>>>
<<<Bengali Case Study>>>
So far our experiments have shown effectiveness on artificially perturbed labels, but one might argue that these systematic perturbations don't accurately simulate real-world noise. In this section, we show how our methods work in a real-world scenario, using Bengali data partially labeled by non-speakers.
<<<Non-speaker Annotations>>>
In order to compare with prior work, we used the train/test split from ZPWVJKM16. We removed all gold labels from the train split, romanized it BIBREF31, and presented it to two non-Bengali speaking annotators using the TALEN interface BIBREF32. The instructions were to move quickly and annotate names only when there is high confidence (e.g. when you can also identify the English version of the name). They spent about 5 total hours annotating, without using Google Translate. This sort of non-speaker annotation is possible because the text contains many `easy' entities – foreign names – which are noticeably distinct from native Bengali words. For example, consider the following:
Romanized Bengali: ebisi'ra giliyyaana phinnddale aaja pyaalestaaina adhiinastha gaajaa theke aaja raate ekhabara jaaniyyechhena .
Translation: ABC's Gillian Fondley has reported today from Gaza under Palestine today.
The entities are Gillian Findlay, ABC, Palestine, and Gaza. While a fast-moving annotator may not catch most of these, `pyaalestaaina' could be considered an `easy' entity, because of its visual and aural similarity to `Palestine.' A clever annotator may also infer that if Palestine is mentioned, then Gaza may be present.
Annotators are moving fast and being intentionally non-thorough, so the recall will be low. Since they do not speak Bengali, there are likely to be some mistakes, so the precision may drop slightly also. This is exactly the noisy partial annotation scenario addressed in this paper. The statistics of this data can be seen in Table TABREF49, including annotation scores computed with respect to the gold training data for each annotator, as well as the combined score.
We show results in Table TABREF50, using the BiLSTM-CRF model. We compare against other low-resource approaches published on this dataset, including two based on Wikipedia BIBREF33, BIBREF12, another based on lexicon translation from a high-resource language BIBREF34. These prior methods operate under somewhat different paradigms than this work, but have the same goal: maximizing performance in the absence of gold training data.
Raw annotations is defined as before, and gives similar high-precision low-recall results. The Combined Weighting scheme improves over Raw annotations by 10 points, achieving a score comparable to the prior state of the art. Beyond that, CBL-Raw outperforms the prior best by nearly 6 points F1, although CBL-Combined again underwhelms.
To the best of our knowledge, this is the first result showing a method for non-speaker annotations to produce high-quality NER scores. The simplicity of this method and the small time investment for these results gives us confidence that this method can be effective for many low-resource languages.
<<</Non-speaker Annotations>>>
<<</Bengali Case Study>>>
<<<Conclusions>>>
We explore an understudied data scenario, and introduce a new constrained iterative algorithm to solve it. This algorithm performs well in experimental trials in several languages, on both artificially perturbed data, and in a truly low-resource situation.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nConstrained Binary Learning\nNER with CBL\nEntity ratio and Balancing\nConstraints and Stopping Condition\nExperiments\nData\nArtificial Perturbation\nNER Models\nNon-neural Model\nNeural Model\nBaselines\nRaw annotations\nInstance Weights\nSelf-training with Marginal CRF\nNeural Network with Noise Adaptation\nExperimental Setup and Results\nAnalysis\nDifference from Prior Work\nBengali Case Study\nNon-speaker Annotations\nConclusions"
],
"type": "outline"
}
|
2003.09586
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Analyzing Word Translation of Transformer Layers
<<<Abstract>>>
The Transformer translation model is popular for its effective parallelization and performance. Though a wide range of analyses of the Transformer have been conducted recently, the role of each Transformer layer in translation has not been studied to our knowledge. In this paper, we propose approaches to analyze the translation performed in encoder / decoder layers of the Transformer. Our approaches in general project the representations of an analyzed layer to the pre-trained classifier and measure the word translation accuracy. For the analysis of encoder layers, our approach additionally learns a weight vector to merge multiple attention matrices into one and transform the source encoding to the target side with the merged alignment matrix to align source tokens with target translations while bridging different input-output lengths. While analyzing decoder layers, we additionally study the effects of the source context and the decoding history in word prediction through bypassing the corresponding self-attention or cross-attention sub-layers. Our analysis reveals that the translation starts at the very beginning of the "encoding" (specifically at the source word embedding layer), and shows how translation evolves during the forward computation of layers. Based on observations gained in our analysis, we propose that increasing encoder depth while removing the same number of decoder layers can simply but significantly boost the decoding speed. Furthermore, simply inserting a linear projection layer before the decoder classifier which shares the weight matrix with the embedding layer can effectively provide small but consistent and significant improvements in our experiments on the WMT 14 English-German, English-French and WMT 15 Czech-English translation tasks (+0.42, +0.37 and +0.47 respectively).
<<</Abstract>>>
<<<Introduction>>>
Neural Machine Translation (NMT) has achieved great success in the last few years BIBREF0, BIBREF1, BIBREF2. The popular Transformer BIBREF2 model, which outperforms previous RNN/CNN based translation models BIBREF0, BIBREF1, is based on multi-layer self-attention networks and can be paralleled effectively.
Recently, a wide range of analyses BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 related to the Transformer have been conducted. For example, bisazza2018lazy perform a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder; they find no correlation between the accuracy of source morphology encoding and translation quality, and that morphological features are captured only in context and only to the extent that they are directly transferable to the target words. voita2019bottom study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (MT, LM and MLM). tang2019encoders find that encoder hidden states outperform word embeddings significantly in word sense disambiguation. However, how the Transformer translation model transforms individual source tokens into corresponding target tokens (word translations, as shown in Figure FIGREF1), and specifically, what the role of each Transformer layer in translation is and at which layer a target word is translated, has not been studied to our knowledge.
To detect roles of Transformer layers in translation, in this paper, we follow previous probing approaches BIBREF11, BIBREF12, BIBREF13, and propose to measure the word translation accuracy of output representations of individual Transformer layers by probing corresponding target translation tokens in these representations. In addition to analyzing the role of each encoder / decoder layer, we also analyze the contribution of the source context and the decoding history in translation by testing the effects of the self-attention sub-layer and the cross-attention sub-layer in decoder layers.
Our analysis reveals that the translation already starts at the source embedding layer, which offers an explanation for bisazza2018lazy. It also demonstrates how the word translation evolves across encoder / decoder layers and the effects of the source “encoding” and the decoding history on the translation of target tokens.
Based on the observations from our analysis, we find that: 1) the proper use of more encoder layers with fewer decoder layers can significantly boost decoding speed without harming quality; 2) inserting a linear projection layer before the decoder classifier can provide small but significant and consistent improvements in our experiments on the WMT 14 English-German, English-French and WMT 15 Czech-English news translation tasks ($+0.42$, $+0.37$ and $+0.47$ BLEU respectively).
<<</Introduction>>>
<<<Word Translation Accuracy Analysis>>>
To analyze word translation accuracy of the Transformer, we first freeze a trained Transformer model so its behavior is consistent in how it performs in translation during our analysis, then we compute the forward pass and extract output representations of the layer analyzed. Finally, we apply a linear projection layer to extract and enhance features related to translation and feed projected representations to the frozen decoder classifier of the converged Transformer. The linear projection layer is the only module trained and updated on the training set with the original Transformer being frozen, thus it will only transform between vector spaces without generating new features for the word translation. An illustration of our analysis approach for encoder / decoder layers is shown in Figure FIGREF2.
<<<Analysis of Encoder Layers>>>
Analyzing the word translation accuracy of encoder layers requires us to align source tokens with their corresponding target tokens. We use the alignment matrices computed by cross-attention sub-layers in decoder layers to align source tokens with target tokens. As there are multiple matrices produced by each sub-layer (due to the multi-head attention mechanism) and multiple decoder layers, we have to combine them into one matrix of high alignment accuracy using weights. Assume there are $d$ decoder layers with $h$ attention heads in each multi-head attention sub-layer, which results in $d * h$ alignment matrices $A_1, ... A_{d * h}$. We use a $d * h$-dimensional weight vector $w$ to combine all these attention matrices. The weight vector is first normalized by softmax into a probability distribution $p$:
where $i$ indicates the $i$th element in $w$.
Then we use $p$ as the weights of corresponding attention matrices and merge them into 1 alignment matrix $A$.
$w$ can be trained during backpropagation together with the linear projection layer.
After we obtain the alignment matrix $A$, instead of selecting the target token with the highest alignment weight as the translation of a source token, we perform matrix multiplication between the encoded source representations $E$ (size: source sentence length $*$ input dimension) and the alignment matrix $A$ (size: source sentence length $*$ target sentence length) to transform / re-order source representations to the target side $T_E$:
where $A^T$ and $\times $ indicate the transpose of $A$ and matrix multiplication.
Thus $T_E$ has the same length as the gold translation sequence, and the target sequence can be used directly as translations representing by $T_E$.
Though source representations are transformed to the target side, we suggest this does not involve any target side information as the pre-trained Transformer is frozen and the transformation does not introduce any representation from the decoder side. We do not retrieve target tokens with highest alignment score as word translations of corresponding source tokens because translation may involve one/none/multiple source token(s) to one/none/multiple target token(s) alignment, and we suggest that using a soft alignment (attention weights) may lead to more reliable gradients than the hard alignment.
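A NumPy sketch of the alignment merge and the transfer of source encodings to the target side; shapes follow the notation above, with each $A_i$ of size (source length $\times $ target length) and $E$ of size (source length $\times $ dimension):

import numpy as np

def merge_and_transform(attn_matrices, w, E):
    # Softmax-normalize the d*h weights, merge the alignment matrices,
    # then softly re-order the source encodings to the target side.
    p = np.exp(w - w.max())
    p = p / p.sum()
    A = sum(p_i * A_i for p_i, A_i in zip(p, attn_matrices))
    T_E = A.T @ E          # (target length, dimension)
    return T_E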
<<</Analysis of Encoder Layers>>>
<<<Analysis of Decoder Layers>>>
The analysis of predicting accuracy of the decoder is simpler than the encoder, as we can directly use the shifted target sequence without the requirement to bridge the different sequence length of the source sentence and the target while analyzing the encoder. We can simply use the output representations of the analyzed layer, and evaluate its prediction accuracy after projection.
However, as studied by li2019word, the decoder involves two kinds of “translation”: one (performed by the self-attention sub-layer) translates the history token sequence into the next token, and the other (performed by the cross-attention sub-layer) translates by attending to source tokens. We additionally analyze the effects of these two kinds of translation on prediction accuracy by dropping the corresponding sub-layer of the analyzed decoder layer (i.e., we compute only the other sub-layer and the feed-forward layer, keeping only the residual connection in place of the skipped sub-layer's computation).
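Schematically, bypassing a sub-layer means keeping only its residual connection, i.e. passing the input through unchanged while the remaining components are computed as usual. In the sketch below the sub-layers are abstracted into callables and layer normalization is omitted:

def decoder_layer_forward(x, enc_out, self_attn, cross_attn, ffn,
                          skip_self=False, skip_cross=False):
    if not skip_self:
        x = x + self_attn(x)            # translation from the decoding history
    if not skip_cross:
        x = x + cross_attn(x, enc_out)  # translation from the source "encoding"
    return x + ffn(x)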
<<</Analysis of Decoder Layers>>>
<<</Word Translation Accuracy Analysis>>>
<<<Analysis Experiments>>>
<<<Settings>>>
We conducted experiments based on the Neutron implementation of the Transformer BIBREF14. We first trained a Transformer base model for our analysis following all settings of vaswani2017attention on the WMT 14 English to German news translation task. The input dimension of the model and the hidden dimension of the feed-forward sub-layer were 512 and $2,048$ respectively. We employed a $512 * 512$ parameter matrix as the linear projection layer. The source embedding matrix, the target embedding matrix and the weight of the classifier were bound.
We applied joint Byte-Pair Encoding (BPE) BIBREF15 with $32k$ merge operations to address the unknown word issue. We only kept sentences with a maximum of 256 sub-word tokens for training. We removed repeated data in the training set, and the training set was randomly shuffled in every training epoch. The concatenation of newstest 2012 and newstest 2013 was used for validation and newstest 2014 as the test set.
The number of warm-up steps was set to $8k$ . Each training batch contained at least $25k$ target tokens, and the model was trained for $100k$ training steps. The large batch size is achieved by gradient accumulation. We used a dropout of $0.1$ and employed a label smoothing BIBREF16 value of $0.1$. We used the Adam optimizer BIBREF17 with $0.9$, $0.98$ and $10^{-9}$ as $\beta _{1}$, $\beta _{2}$ and $\epsilon $. Parameters were uniformly initialized under the Lipschitz constraint BIBREF18.
We averaged the last 5 checkpoints saved with an interval of $1,500$ training steps. For decoding, we used a beam size of 4, and evaluated tokenized case-sensitive BLEU. The averaged model achieved a BLEU score of $27.96$ on the test set.
The linear projection layer and the weight vector $w$ of 48 elements for alignment during the analysis of encoder layers were trained on the training set. We monitored the accuracy on the development set during their training, and reported results on the test set.
<<</Settings>>>
<<<Analysis>>>
The analysis results of the trained Transformer are shown in Table TABREF8. Layer 0 stands for the embedding layer. “Acc” indicates the prediction accuracy. “-Self attention” and “-Cross attention” in the decoder layer analysis mean bypassing the computation of the self-attention sub-layer and the cross-attention sub-layer respectively of the analyzed decoder layer. In layer analysis of the encoder and decoder, “$\Delta $” indicates improvements in word translation accuracy of the analyzed layer over the previous layer. While analyzing the self-attention and cross-attention sub-layers, “$\Delta $” is the accuracy loss when we remove the computation of the corresponding sub-layer.
The results for encoder layers in Table TABREF8 show that: 1) surprisingly but reasonably, the translation already starts at the embedding layer, and a remarkably sound word translation accuracy is obtained at the source embedding layer! This indicates that the translation already begins at the very beginning of “encoding” (specifically, the source embedding layer) rather than in the decoder. 2) With the stacking of encoder layers, the word translation accuracy improves (i.e., encoder layers gradually fix the word translations of the source embedding layer), and the improvements brought by different layers are relatively similar.
While analyzing decoder layers, Table TABREF8 shows that: 1) shallow decoder layers (0, 1, 2 and 3) perform significantly worse than the corresponding encoder layers (until the 4th decoder layer, which achieves a word translation accuracy surpassing that of the encoder's embedding layer); 2) the improvements brought by different decoder layers are quite different. Specifically, layers 4 and 5 bring more improvements than the others.
While analyzing the effects of the source context (the self-attention sub-layer is responsible for the target language re-ordering, and “-Self attention” prevents using the decoding history in the analyzed decoder layer) and the decoding history (“-Cross attention” prevents copying translation from the source “encoding”), Table TABREF8 shows that in shallow decoder layers (layer 1-3), the decoding history plays a similarly important role like the source “encoding”, while in deep layers, the source “encoding” plays a more vital role than the decoding history. Thus, we suggest our comparison sheds light on the importance of translation performed by the encoder.
<<</Analysis>>>
<<<Translation from Encoder Layers>>>
Since our approach extracts features for translation from output representations of encoder layers while analyzing them, is it possible to perform word translation with only these features from encoder layers without using the decoder?
To achieve this goal, we feed output representations from an encoder layer to the corresponding linear projection layer, feed the output of the linear projection layer directly to the decoder classifier, and retrieve the tokens with the highest probabilities as “translations”. Even though such “translations” from encoder layers have the same length and the same word order as the source sentences, individual source tokens are translated to the target language to some extent. We evaluated BPEized case-insensitive BLEU and BLEU 1 (1-gram BLEU, which indicates the word translation quality), and results are shown in Table TABREF13. “FULL” is the performance of the whole Transformer model (decoding with a beam size of 4). “$\Delta $” means the improvements obtained by the introduced layer (or the decoder for “FULL”) over the previous layer.
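A minimal sketch of how such “translations” can be read off an encoder layer, assuming a trained per-layer linear projection and a classifier whose weights are tied to the target embedding matrix; the tensor names and shapes are illustrative.

import torch

def translate_from_encoder_layer(enc_out, proj, tgt_embedding):
    # enc_out:       (src_len, batch, d_model) hidden states of the analyzed encoder layer
    # proj:          trained linear projection for this layer (torch.nn.Linear, d_model -> d_model)
    # tgt_embedding: (vocab, d_model) target embedding matrix, tied with the classifier
    logits = proj(enc_out) @ tgt_embedding.t()   # (src_len, batch, vocab)
    return logits.argmax(dim=-1)                 # one greedy target token per source position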
Table TABREF13 shows that though there is a significant gap in BLEU scores between encoder layers and the full Transformer, the gap in BLEU 1 is relatively smaller than in BLEU. It is reasonable that encoder layers achieve a comparably high BLEU 1 score but a low BLEU score, as they perform word translation in the same order as the source sentence without any word re-ordering into the target language. We suggest that the BLEU 1 score achieved by only the source embedding layer (i.e. translating with only embeddings) is surprising and worth noting.
<<</Translation from Encoder Layers>>>
<<</Analysis Experiments>>>
<<<Findings Based on Observations>>>
<<<Trade Decoder Layers for Encoder Layers>>>
From our analysis of the 6-layer Transformer base model (Table TABREF8), we find that in contrast to the improvements of the word translation accuracy with increasing depth on the encoder side, some decoder layers contribute significantly fewer improvements than the others (i.e. Layer 4 and 5 bring more word translation accuracy improvements than that from layer 1, 2, 3 and 6 in Table TABREF8). We suggest there might be more “lazy” layers in the decoder than in the encoder, which means that it might be easier to compress the decoder than the encoder, and further conjecture that simply removing some decoder layers while adding the same number of encoder layers may improve the performance of the Transformer. The other motivations for doing so are:
Each decoder layer has one more cross-attention sub-layer than an encoder layer, and increasing encoder layers while decreasing the same number of decoder layers will reduce the number of parameters and computational cost;
The decoder has to compute the forward pass for every decoding step (the decoding of each target token), and the acceleration of reducing decoder layers will be more significant in decoding, which is of productive value.
<<</Trade Decoder Layers for Encoder Layers>>>
<<<Linear Projection Layer before Classifier>>>
We compare the word translation accuracy achieved by the last decoder layer (with the linear projection layer) during analysis and that of the pre-trained standard Transformer (without the projection layer before the decoder classifier), and results are shown in Table TABREF20.
Table TABREF20 shows that feeding the representations from the last decoder layer after the linear projection to the decoder classifier leads to slightly higher word prediction accuracy than feeding them directly to the classifier. We conjecture potential reasons might be:
We follow vaswani2017attention in binding the weight matrix of the classifier with the embedding matrix. Applying the inserted linear projection layer followed by the classifier is equivalent to using only a classifier but with a new weight matrix (the matrix product of the linear projection layer's weight matrix and the embedding matrix), which indirectly detaches the classifier weight matrix from the embedding matrix;
As described in our analysis approach, the linear projection layer is expected to enhance the part of its input representations which relates to the classification while fading the other parts irrelevant to the word prediction, which may benefit the performance.
Thus, we suggest that inserting a linear projection layer which simply performs matrix multiplication between input representations and a weight matrix before the decoder classifier may help improve the word translation accuracy and further lead to improved translation quality.
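The equivalence mentioned in the first point above can be checked numerically with a small sketch; the dimensions and random tensors below are arbitrary illustrations, not the model's actual weights.

import torch

d_model, vocab = 512, 32000
E = torch.randn(vocab, d_model, dtype=torch.float64)    # embedding matrix, tied with the classifier
W = torch.randn(d_model, d_model, dtype=torch.float64)  # weight of the inserted projection layer
h = torch.randn(d_model, dtype=torch.float64)           # a decoder output vector

logits_a = (h @ W.t()) @ E.t()   # projection followed by the tied classifier
logits_b = h @ (E @ W).t()       # a single classifier with the combined weight matrix E @ W
assert torch.allclose(logits_a, logits_b)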
<<</Linear Projection Layer before Classifier>>>
<<<Results and Analysis>>>
<<<Effects of Encoder/Decoder Depth>>>
We examine the effects of reducing decoder depth while adding corresponding numbers of encoder layers, and results are shown in Table TABREF24. The decoding speed is measured on the test set which contains $3,003$ sentences with a beam size of 4. “Speed up” stands for the decoding acceleration compared to the 6-layer Transformer.
Table TABREF24 shows that while the acceleration from trading decoder layers for encoder layers is small in training, it is significant in decoding. Specifically, the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer while achieving a slightly higher BLEU.
Though the Transformer with 11 encoder layers and only 1 decoder layer fails to achieve a performance comparable to the 6-layer Transformer, our results still suggest that using more encoder layers with fewer but sufficient decoder layers can significantly boost the decoding speed, which is simple but effective and valuable for production applications.
We demonstrate the word accuracy analysis results of the 10 encoder layer 2 decoder layer Transformer in Table TABREF27.
Comparing Table TABREF27 with Table TABREF8, we find that: 1) The differences in improvements ($1.17$ vs. $0.11$) brought by individual layers of the 10-layer encoder are larger than those of the 6-layer encoder ($1.90$ vs. $0.87$), indicating that there might be some “lazy” layers in the 10-layer encoder; 2) Decreasing the depth of the decoder removes those “lazy” decoder layers in the 6-layer decoder and makes decoder layers rely more on the source “encoding” (by comparing the effects of skipping the self-attention sub-layer and cross-attention sub-layer on performance).
<<</Effects of Encoder/Decoder Depth>>>
<<<Effects of the Projection Layer>>>
To study the effects of the linear projection layer on performance, we conducted experiments on the WMT 14 English-French and WMT 15 Czech-English news translation tasks in addition to the WMT 14 English-German task. We also conducted significance tests BIBREF19. Results are tested on newstest 2014 and 2015 respectively and shown in Table TABREF28.
Table TABREF28 shows that the linear projection layer is able to provide small but consistent and significant improvements in all 3 tasks.
<<</Effects of the Projection Layer>>>
<<</Results and Analysis>>>
<<</Findings Based on Observations>>>
<<<Related Work>>>
<<<Analysis of NMT Models.>>>
li2019word analyze the word alignment quality in NMT with prediction difference, and further analyze the effect of alignment errors on translation errors, which demonstrates that NMT captures good word alignment for those words mostly contributed from source, while their word alignment is much worse for those words mostly contributed from target. voita2019analyzing evaluate the contribution of individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder. yang2019assessing propose a word reordering detection task to quantify how well the word order information is learned by Self-Attention Networks (SAN) and RNN, and reveal that although recurrence structure makes the model more universally-effective on learning word order, learning objectives matter more in the downstream tasks such as machine translation. tsai2019transformer regard attention as applying a kernel smoother over the inputs with the kernel scores being the similarities between inputs, and analyze individual components of the Transformer’s attention with the new formulation via the lens of the kernel. tang2019encoders find that encoder hidden states outperform word embeddings significantly in word sense disambiguation. he2019towards measure the word importance by attributing the NMT output to every input word and reveal that words of certain syntactic categories have higher importance while the categories vary across language pairs. voita2019bottom use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (MT, LM and MLM). An early work BIBREF3 performs a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder. While they are unable to find any correlation between the accuracy of source morphology encoding and translation quality, they discover that morphological features are only captured in context and only to the extent that they are directly transferable to the target words; thus they suggest encoder layers are “lazy”. Our analysis offers an explanation for their results: the translation already starts at the source embedding layer, and source embeddings possibly already represent linguistic features of their translations more than features of the source words themselves.
<<</Analysis of NMT Models.>>>
<<<Analysis of BERT.>>>
BERT BIBREF20 uses the Transformer encoder, and analysis of BERT may provide valuable references for analyzing the Transformer. jawahar2019bert provide novel support that BERT networks capture structural information, and perform a series of experiments to unpack the elements of English language structure learned by BERT. tenney2019bert employ the edge probing task suite to explore how the different layers of the BERT network can resolve syntactic and semantic structure within a sentence, and find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. pires2019multilingual present a large number of probing experiments, and show that Multilingual-BERT’s robust ability to generalize cross-lingually is underpinned by a multilingual representation.
<<</Analysis of BERT.>>>
<<<Accelerating Decoding.>>>
zhang2018accelerating propose average attention as an alternative to the self-attention network in the Transformer decoder to accelerate its decoding. wu2018pay introduce lightweight convolution and dynamic convolutions which are simpler and more efficient than self-attention. The number of operations required by their approach scales linearly in the input length, whereas self-attention is quadratic. zhang2018speeding apply cube pruning to neural machine translation to speed up the translation. zhang2018exploring propose to adapt an n-gram suffix based equivalence function into beam search decoding, which obtains similar translation quality with a smaller beam size, making NMT decoding more efficient. Non-Autoregressive Translation (NAT) BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27 enables parallelized decoding; while there is still a significant quality drop compared to traditional autoregressive beam search, our findings on using more encoder layers might also be adapted to NAT.
<<</Accelerating Decoding.>>>
<<</Related Work>>>
<<<Conclusion>>>
We propose approaches for the analysis of word translation accuracy of Transformer layers to investigate how translation is performed. To measure word translation accuracy, our approaches train a linear projection layer which bridges representations from the analyzing layer and the pre-trained classifier. While analyzing encoder layers, our approach additionally learns a weight vector to merge multiple attention matrices into one, and transforms the source “encoding” to the target shape by multiplying the merged alignment matrix. For the analysis of decoder layers, we additionally analyze the effects of the source context and the decoding history in word prediction through bypassing the corresponding sub-layers.
Two main findings of our analysis are: 1) the translation starts at the very beginning of “encoding” (specifically at the source word embedding layer), and evolves further with the forward computation of layers; 2) translation performed by the encoder is very important for the evolution of word translation of decoder layers, especially for Transformers with few decoder layers.
Based on our analysis, we propose to increase encoder depth while removing the same number of decoder layers to boost the decoding speed. We further show that simply inserting a linear projection layer before the decoder classifier which shares the weight matrix with the embedding layer can effectively provide small but consistent and significant improvements.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nWord Translation Accuracy Analysis\nAnalysis of Encoder Layers\nAnalysis of Decoder Layers\nAnalysis Experiments\nSettings\nAnalysis\nTranslation from Encoder Layers\nFindings Based on Observations\nTrade Decoder Layers for Encoder Layers\nLinear Projection Layer before Classifier\nResults and Analysis\nEffects of Encoder/Decoder Depth\nEffects of the Projection Layer\nRelated Work\nAnalysis of NMT Models.\nAnalysis of BERT.\nAccelerating Decoding.\nConclusion"
],
"type": "outline"
}
|
1911.03270
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Char-RNN and Active Learning for Hashtag Segmentation
<<<Abstract>>>
We explore the abilities of character recurrent neural network (char-RNN) for hashtag segmentation. Our approach to the task is the following: we generate synthetic training dataset according to frequent n-grams that satisfy predefined morpho-syntactic patterns to avoid any manual annotation. The active learning strategy limits the training dataset and selects informative training subset. The approach does not require any language-specific settings and is compared for two languages, which differ in inflection degree.
<<</Abstract>>>
<<<Introduction>>>
A hashtag is a form of metadata labeling used in various social networks to help users navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source of features for downstream opinion mining and social network analysis. Hashtags essentially serve as keyphrases for a post in social media. By segmenting the hashtags into separate words we may use regular techniques to process them. The problem of hashtag segmentation resembles another problem, namely word segmentation.
The problem of word segmentation is widely studied in languages like Chinese, which lacks whitespace to separate words, or in German, to split compound words. In languages like English or Russian, where compounds are not as frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags.
Formally the problem is stated as follows: given a string of $n$ characters $s = s_1 \ldots s_n$, we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e. is a regular word, named entity, abbreviation, number, etc). The main challenge of this problem is that the segmentation might be ambiguous. For example, the string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity, more processing is required, such as POS-tagging, estimation of the frequencies of all hashtag constituents, or their co-occurrence frequency. The frequencies can be estimated on a large corpus, such as the BNC, COCA, or Wikipedia. However, when working with noisy user-generated data, such as texts or hashtags from social networks, the problem of unknown words (or out-of-vocabulary words) arises. In language modeling this problem is solved by using smoothing, such as Laplacian smoothing or Kneser-Ney smoothing. Alternatively, additional heuristics can be used to extend the dictionary with word-like sequences of characters. Unlike in language modelling, in hashtag segmentation frequency estimation is not the only source for defining word boundaries; candidate substrings can also be evaluated according to length BIBREF0.
Several research groups have shown that introducing character level into models help to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5 or machine translation BIBREF6, BIBREF7. The character level model is a model which either treats the text as a sequence of characters without any tokenization or incorporates character level information into word level information. Character level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model is able to define the POS tag or NE class of an unknown word.
Following this intuition, we use a character level model for hashtag segmentation. Our main motivation is the following: if the character level model is able to capture word ending patterns, it should also be able to capture word boundary patterns. We apply a character level model, specifically a recurrent neural network, referred to further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained on synthetic data, which was generated from texts collected from social networks in English and Russian independently, and evaluated on real hashtags. We generate synthetic data for training by extracting frequent $N$-grams and removing whitespaces. The test data is annotated manually. Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights learned by the char-RNN. We address the following research questions and claim our respective contributions:
We show that our char-RNN model outperforms the traditional unigram or bigram language models with extensive use of external sources BIBREF8, BIBREF0.
What is the impact of high inflection in languages such as Russian on the performance of character-level modelling as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns.
As getting a sufficient amount of annotated training data is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags.
The potentially unlimited volume of our synthetic training dataset raises yet another question of whether an informative training subset could be selected. To this end, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset that is necessary to obtain high performance.
<<</Introduction>>>
<<<Neural Model for Hashtag Segmentation>>>
<<<Sequence Labeling Approach>>>
We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $, (1) for the end of a word, and (0) otherwise (Table TABREF9 and TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*, \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$
The neural model for hashtag segmentation consists of three layers.
The embedding layer is used to compute the distributed representation of input characters. Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the look up table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e. the number of unique characters.
The feature layer is used to process the input. We use a bi-directional recurrent layer with LSTM units to process the input in forward and backward directions. The LSTM units we use are the default Keras LSTM units, following the formulation introduced by Hochreiter and Schmidhuber.
The inference layer is used to predict the labels of each character. We use a single dense layer for inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $.
Each character is assigned with the most probable label.
The parameters of the char-RNN are the following:
Embedding layer = 50 input dimensions;
Feature layer = 64 bidirectional LSTM units;
Inference layer = 2 output neurons with softmax activation function mapped to each of 64 outputs.
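Under the assumption that the model is built in Keras (the text mentions default Keras LSTM units), the three layers above might look roughly as follows; the vocabulary size, the sequence length, and the reading of “64 bidirectional LSTM units” as 64 units per direction are assumptions.

from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 120   # number of unique characters (placeholder)
max_len = 50       # maximum hashtag length in characters (placeholder)

inputs = layers.Input(shape=(max_len,))
x = layers.Embedding(vocab_size, 50)(inputs)                                 # embedding layer, 50 dims
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)          # feature layer
outputs = layers.TimeDistributed(layers.Dense(2, activation="softmax"))(x)   # inference layer
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")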
<<</Sequence Labeling Approach>>>
<<</Neural Model for Hashtag Segmentation>>>
<<<Dataset>>>
In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN.
<<<Russian dataset>>>
To our knowledge there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns. The test dataset consists of real hashtags collected from vk.com (a Russian social network) and were segmented manually.
We followed the same strategy to create an English language dataset.
<<<Training Dataset Generation>>>
We scraped texts from several pages about civil services from vk.com. Next we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example:
remove all whitespace: wordwordworddigits
Examples: ЁлкаВЗазеркалье, нескольколетназад
replace all whitespace with an underscore: word_word_digits
Examples: увд_юга_столицы
remove some whitespace and replace other spaces with an underscore: word_worddigits.
Examples: ищусвоегогероя_уфпс
A word here might be a word in lower case, upper case or capitalized or an abbreviation. There might be up to four digits.
In general, we introduced 11 types of hashtags, which contain simply constructed hashtags as well as the complex ones. Here are a couple of examples:
The hashtag consists of two parts: the word/abbreviation in the first part and the number or word in the second. The underscore is a delimiter.
Examples: word_2017, NASA_2017, word_word
Two or three words, which are separated by an underscore.
Examples: Word_Word, word_word_word
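A rough sketch of this kind of generation, covering only the two simplest rules above; the exact rule set, the labeling of delimiter characters and the helper names are simplifications and assumptions rather than the paper's implementation.

def join_plain(ngram):
    # Rule: remove all whitespace; label 1 marks the last character of each word, 0 otherwise.
    hashtag, labels = "", []
    for token in ngram:
        hashtag += token
        labels += [0] * (len(token) - 1) + [1]
    return hashtag, labels

def join_underscore(ngram):
    # Rule: replace whitespace with underscores; labeling the underscore with 0 is an assumption.
    hashtag, labels = "", []
    for i, token in enumerate(ngram):
        hashtag += token
        labels += [0] * (len(token) - 1) + [1]
        if i < len(ngram) - 1:
            hashtag += "_"
            labels.append(0)
    return hashtag, labels

# join_plain(["something", "sun", "clear"]) -> ("somethingsunclear", [0, ..., 1, 0, 0, 1, 0, ..., 1])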
<<</Training Dataset Generation>>>
<<<Test Dataset Annotation>>>
We manually segmented the 2K most frequent hashtags, extracted from the same collection of scraped texts.
The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing.
<<</Test Dataset Annotation>>>
<<</Russian dataset>>>
<<<English dataset>>>
We used the dataset, released by BIBREF0. This dataset consists of:
a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian;
a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags, annotated with several possible segmentations.
The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing.
<<</English dataset>>>
<<</Dataset>>>
<<<Active Learning>>>
We followed the strategy for active learning, as in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags, which were randomly selected from the training dataset. Next we test the model on the remainder of the training dataset and select 1k hashtags according to the current model’s uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtags is already known. In BIBREF9 three uncertainty measures are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note that here we do not check if the predictions of the model are correct. We are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details.
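One round of this selection can be sketched as follows; `model.predict` is assumed to return per-character label probabilities, and the length normalization is one natural reading of MNLP, not necessarily the paper's exact formula.

import numpy as np

def mnlp_score(char_probs):
    # char_probs: (n_chars, 2) per-character label probabilities; score the greedy tag sequence,
    # normalized by the number of characters.
    best = char_probs.max(axis=1)
    return float(np.log(best).sum() / len(best))

def select_uncertain(model, pool, batch_size=1000):
    # One active-learning round: keep the `batch_size` hashtags with the lowest MNLP score.
    scored = sorted((mnlp_score(model.predict(h)), h) for h in pool)
    return [h for _, h in scored[:batch_size]]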
<<</Active Learning>>>
<<<Experiments>>>
<<<Baseline>>>
As for the baseline algorithm, we consider the BIBREF0 system architecture to be a state-of-the-art algorithm. Unfortunately, their approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires a twofold input: a hashtag and a corresponding tweet or a text from some other social medium, which is absent in our task setting due to the synthetic nature of the training dataset.
For this reason, as a baseline algorithm for the English dataset we refer to the results from BIBREF0, and for the Russian dataset we used the probabilistic language model described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word given the word’s context, i.e. the preceding word, as in the following equation:
where
In case there is no such pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of word $w_i$ is obtained as if it were a unigram model:
where $V$ is the vocabulary, $f(w_{i})$ is the frequency of word $w_{i}$, and $\alpha = 1$.
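Since the equations themselves are not reproduced here, the following sketch gives one plausible reading of this baseline: bigram probabilities estimated from counts, with an add-$\alpha$ smoothed unigram fallback for unseen bigrams. The precise smoothing details are an assumption.

import math

def word_log_prob(prev, word, bigrams, unigrams, vocab_size, alpha=1.0):
    # P(word | prev) from bigram counts; fall back to an (assumed) add-alpha unigram estimate
    # when the bigram (prev, word) was never observed.
    if (prev, word) in bigrams:
        return math.log(bigrams[(prev, word)] / unigrams[prev])
    total = sum(unigrams.values())
    return math.log((unigrams.get(word, 0) + alpha) / (total + alpha * vocab_size))

def segmentation_log_prob(words, bigrams, unigrams, vocab_size):
    # Score a candidate segmentation as the sum of log conditional word probabilities.
    logp, prev = 0.0, "<s>"
    for w in words:
        logp += word_log_prob(prev, w, bigrams, unigrams, vocab_size)
        prev = w
    return logp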
In Table TABREF30 we present three baseline results: LM BIBREF8 for Russian and English datasets; context-based LM BIBREF0 for English dataset only. We treat a segmentation as correct if prediction and target sequences are the same.
<<</Baseline>>>
<<<Neural Model>>>
In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we observed three datasets with different numbers of hashtags. In the case of Russian, the more data we use during training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32). Due to its lower morphological diversity and complexity, the model starts to overfit on larger training sets. Training showed that the model mostly makes wrong segmentation predictions on hashtags of complex types, such as “wordword_worddigits”.
Our results outperform all chosen baselines for both the Russian and English datasets. Note that we have two baselines for the English dataset: one is purely frequency-based, the other is cited from BIBREF0, where external resources are heavily used. We show that, using a significantly smaller amount of training data, we achieve a boost in quality by switching from statistical word language models to the char-RNN. As expected, the results on the Russian dataset are higher than for the English dataset due to the higher inflection degree in Russian as opposed to English.
<<</Neural Model>>>
<<<Visualization>>>
In order to see whether embeddings of characters that are similar in terms of string segmentation appear near each other in the resulting 50-dimensional embedding space, we applied a dimensionality reduction technique, SVD, to the character embeddings to plot them in 2D space. For both languages meaningful and interpretable clusters can be extracted: capital letters, letters in lower case, digits and the underscore, as shown below.
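The projection step can be sketched as follows, assuming the 50-dimensional character embedding matrix has been extracted from the trained model:

import numpy as np
import matplotlib.pyplot as plt

def plot_char_embeddings(embeddings, chars):
    # embeddings: (|V|, 50) character embedding matrix; project to 2D via SVD (PCA after centering).
    centered = embeddings - embeddings.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ Vt[:2].T
    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), c in zip(coords, chars):
        plt.annotate(c, (x, y))
    plt.show()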
<<</Visualization>>>
<<</Experiments>>>
<<<Related Work>>>
The problem of word segmentation has received much attention in Chinese and German NLP, for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. The recent trend of deep learning as a major approach for any NLP task in general, and sequence labeling in particular, has resulted in various RNN-based and CNN-based models for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14.
Since BIBREF10, Chinese word segmentation has been addressed as a character labeling task: each character of the input sequence is labeled with one of the four labels $\mathcal {L} = \lbrace B, M, E, S\rbrace $, which stand for a character at the Beginning, in the Middle, or at the End of a word, or a Single character word. BIBREF10 uses a maximum entropy tagger to tag each character independently. This approach was extended in BIBREF15 to a sequence modeling task, for which linear conditional random fields were used, achieving state-of-the-art results. Neural approaches to Chinese segmentation mainly use various architectures of character level recurrent neural networks BIBREF16, BIBREF17, BIBREF18 and very deep convolutional networks BIBREF19. The same architectures are used for dialectal Arabic segmentation BIBREF20.
The evolution of German compound splitters is more or less similar to that of Chinese word segmentation systems. Studies of German compound splitting started with corpus- and frequency-based approaches BIBREF13, BIBREF14 and are now dominated by neural distributional semantic models. However, German compound splitting is rarely seen as a sequence modeling task.
The problem of hashtag segmentation, analysis and usage in English has been approached by several research groups. As shown by BIBREF12, hashtag segmentation for the TREC microblog track 2011 BIBREF21 improves the quality of information retrieval, while BIBREF0 shows that hashtag segmentation improves the linking of entities extracted from tweets to a knowledge base. Both BIBREF12 and BIBREF0 use a Viterbi-like algorithm for hashtag segmentation: all possible segmentations of a hashtag are scored using a scoring function:
where $P_{Unigram}$ are probabilities, computed according to the unigram model based on a large enough corpus or any N-gram service.
Following the idea of scoring segmentation candidates, BIBREF11 introduces other scoring functions, which include a bigram model (2GM) and a Maximum Unknown Matching (MUM), which is adjustable to unseen words.
BIBREF22 attempt to split camel-cased hashtags using rule-based approach and POS-tagging for further semantic classification. WordSegment has been used for sentiment analysis BIBREF23, BIBREF24 and other applications.
To our knowledge there has been little work done for word or hashtag segmentation in Russian.
<<<Active Learning in NLP>>>
Active learning is a machine learning technique which allows efficient use of the available training data. It presumes that an initial model is first trained on a very small amount of data and then tested on a large unlabeled set. The model then chooses a few of the most difficult examples and asks an external knowledge source for the desired labels. Upon receiving these labels, the model is updated and retrained on the new training set. There might be a few rounds of label querying and model updating. To use an active learning strategy, we need a definition of what a difficult example is and how to score its difficulty. One of the most common scoring approaches is entropy-based uncertainty sampling, which selects the examples with the lowest prediction probability.
Active learning is widely used in NLP applications when there is little annotated data while the amount of unlabeled data is abundant. While commonly used for text classification with traditional machine learning classifiers BIBREF25, BIBREF26, active learning is less often used with deep learning sequence classifiers. Recent works report on scoring word embeddings that are likely to be updated with the greatest magnitude BIBREF27 and on using the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags BIBREF9:
<<</Active Learning in NLP>>>
<<<Training on synthetic data>>>
The lack of training data is an issue for many NLP applications. There have been attempts to generate and use synthetic data for training question answering systems BIBREF28 and SQL2text systems BIBREF29. In BIBREF0 synthetic hashtags are generated by removing whitespace characters from frequent n-grams, while in BIBREF30 German compounds are synthesized for further machine translation.
<<</Training on synthetic data>>>
<<</Related Work>>>
<<<Conclusions>>>
In this paper we approach the problem of hashtag segmentation by using char-RNNs. We treat the problem of hashtag segmentation as a sequence labeling task, so that each symbol of a given string is labeled with 1 (there should be a whitespace after this symbol) or 0 (otherwise). We use two datasets to test this approach in English and in Russian without any language-specific settings. We compare char-RNN to traditional probabilistic algorithms. To interpret the results we use a few visualization techniques and the strategy of active learning to evaluate the complexity of training data, since we use synthetically generated hashtags for training.
The results show that:
When approached on character level, hashtag segmentation problem can be solved using relatively small and simple recurrent neural network model without usage of any external corpora and vocabularies. Such char-RNN not only outperforms significantly traditional frequency-based language models, but also can be trained on synthetic data generated according to morpho-syntactic patterns, without any manual annotation and preprocessing.
In languages with high inflection (such as Russian) the char-RNN achieves higher results than in languages with little inflections (such as English) due to the ability of the char-RNN to capture and memorize word boundary patterns, especially word ending patterns (i.e. adjective endings “ый”,“ая”,“ое” or verbal endings “ать”,“еть” in Russian).
The amount of generated synthetic training data can be limited by using techniques for active learning which allows to select sufficient training subset without any loss of quality.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nNeural Model for Hashtag Segmentation\nSequence Labeling Approach\nDataset\nRussian dataset\nTraining Dataset Generation\nTest Dataset Annotation\nEnglish dataset\nActive Learning\nExperiments\nBaseline\nNeural Model\nVisualization\nRelated Work\nActive Learning in NLP\nTraining on synthetic data\nConclusions"
],
"type": "outline"
}
|
2004.03762
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Generating Narrative Text in a Switching Dynamical System
<<<Abstract>>>
Early work on narrative modeling used explicit plans and goals to generate stories, but the language generation itself was restricted and inflexible. Modern methods use language models for more robust generation, but often lack an explicit representation of the scaffolding and dynamics that guide a coherent narrative. This paper introduces a new model that integrates explicit narrative structure with neural language models, formalizing narrative modeling as a Switching Linear Dynamical System (SLDS). A SLDS is a dynamical system in which the latent dynamics of the system (i.e. how the state vector transforms over time) is controlled by top-level discrete switching variables. The switching variables represent narrative structure (e.g., sentiment or discourse states), while the latent state vector encodes information on the current state of the narrative. This probabilistic formulation allows us to control generation, and can be learned in a semi-supervised fashion using both labeled and unlabeled data. Additionally, we derive a Gibbs sampler for our model that can fill in arbitrary parts of the narrative, guided by the switching variables. Our filled-in (English language) narratives outperform several baselines on both automatic and human evaluations.
<<</Abstract>>>
<<<A Switching Dynamical System for Narrative Generation>>>
In this section, we give a brief overview of Switching Dynamical systems and how they can be used to capture both a scaffold of the narrative as well as the narrative dynamics. We then describe in detail the components of our model and its relation to existing models.
<<<Narrative Dynamics in a Dynamical System>>>
The specifics of the narrative (characters, setting, etc.) will differ between stories, but as BIBREF0 notes, the way they transition to the next point in the narrative (what we refer to as “narrative dynamics") is often shared. Let us say that, as is often done, we represent the `narrative specifics' at time step $i$ with a latent vector $Z_i$. A natural way to explicitly model how this state evolves over time, one that fits with the above observation, is as a Linear Dynamical System:
Where $A$ is a matrix, shared across all narratives, and $\Sigma $ is a noise term that takes into consideration idiosyncrasies different narratives will have. The fact that the shared transition matrix $A$ is linear means that narratives will have linearly analogous trajectories through time, despite having different details (comparable to stories with different settings but matching structures such as Ran/King Lear, Ulysses/Odyssey, etc). Of course, the fatal flaw of the model is that it assumes there exists only one transition matrix, and thus only one possible way to transition through a narrative!
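The transition equation referred to above is not reproduced in this text; under the natural reading of the prose, it has the form below, where the Gaussian noise is what the text collectively calls $\Sigma$ (a reconstruction, not necessarily the paper's exact notation):

Z_{i+1} = A\,Z_i + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, \Sigma)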
<<</Narrative Dynamics in a Dynamical System>>>
<<<Narrative Scaffolds as Switching Variables>>>
A more fitting model would thus be a Switching Linear Dynamical System BIBREF1, BIBREF2, BIBREF3. In an SLDS, we assume there exists a set of $K$ different sets of dynamics, $\lbrace (A_1, \Sigma _1),...(A_K,\Sigma _K)\rbrace $. At time step $i+1$, one of these sets of dynamics is used. The one used depends on the value of a discrete variable at time step $i+1$ called the switching variable, $S_{i+1} \in \lbrace 1,...K\rbrace $:
There is a switching variable $S_i$ associated with each time step. The switching variable value itself evolves over time by a prior Markov process, $P(S_{i+1} | S_{i})$. This top level chain of switching variables thus forms our narrative scaffold, indicating what transitions we must go through in the narrative, with the dynamics matrices indicating how they transition.
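Correspondingly, a plausible form of the switching transition described above (again a reconstruction rather than the paper's exact notation) is:

S_{i+1} \sim P(S_{i+1} \mid S_i), \qquad Z_{i+1} = A_{S_{i+1}}\,Z_i + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, \Sigma_{S_{i+1}})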
<<</Narrative Scaffolds as Switching Variables>>>
<<<Narrative Scaffold - Emotional Trajectory>>>
What the switching variables actually represent can be chosen by the user. Straightforward narrative scaffolds include event sequences BIBREF6, keywords BIBREF7, or latent template ids BIBREF8. More complex but potentially more informative scaffolds may be created using concepts such as story grammar non-terminals BIBREF9, BIBREF10, or character action taken throughout a story BIBREF11.
In our work, we use the sentiment trajectory of the narrative as the scaffold. That is, each $S_i$ for a sentence indicates the overall coarse sentiment of the sentence (Positive, Negative, or Neutral). Though simple, the overall sentiment trajectory of a narrative is important in defining the high level `shape' of a narrative often shared among different narratives BIBREF12, BIBREF13. Furthermore, sentiment trajectory has been shown to be fairly useful in story understanding tasks BIBREF14, BIBREF15. We discuss in the conclusion future directions for using different types of scaffolds.
<<</Narrative Scaffold - Emotional Trajectory>>>
<<<The Full Model>>>
The final component of the model is a conditional language model that generates sentence $i$ conditioned on the current $Z_i$, and all previous sentences, $X_{:i}$. Generation continues until an <eos> is reached. This conditional language model may be parameterized as desired, but in this work, we parameterize it as an RNN neural network language model.
The graphical model for our SLDS is pictured in Figure FIGREF8. The model consists of three sets of variables: (1) Switching variables $S_1,...,S_N$, (2) Latent state variables $Z_1,...,Z_N$ capturing the details of the narrative at sentence $i$, (3) The sentences themselves $X_1,...X_N$, where each sentence $X_i$ has $n_i$ words, $x^i_1,...x^i_{n_i}$. The joint over all variables factorizes as below into the following components ($X_{:i}$ stands for all sentence before $X_i$):
❶ Narrative Scaffold Planner: The factor $P(S_i | S_{i-1})$ is a transition matrix, which we calculate via count based statistics from training. It is fed in as prior knowledge and fixed.
❷ Narrative Dynamics Network: The factor $P(Z_i | Z_{i-1}, S_i)$ is determined like a switching linear dynamical system:
which is equivalent to drawing $Z_i$ from a Normal distribution with mean $A_{S_i}Z_{i-1}$ and variance $B_{S_i}B_{S_i}^T$.
❸ Conditional Language model: The factor $P(X_i | Z_i, X_{:i})$ is parameterized by an RNN language model conditioned on the latent $Z_i$.
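Putting the three components together, ancestral sampling from the generative model can be sketched as follows; the transition matrix `T`, the dynamics parameters and the conditional language model are placeholders for the learned quantities, not actual code from the paper.

import numpy as np

def sample_narrative(T, A, B, cond_lm, z0, s0, n_sents=5):
    # T: (K, K) scaffold transition matrix P(S_i | S_{i-1}), count-based and fixed.
    # A, B: per-state dynamics matrices; cond_lm(z, history) is an assumed RNN LM returning a sentence.
    z, s, history = z0, s0, []
    for _ in range(n_sents):
        s = np.random.choice(len(T), p=T[s])        # 1. narrative scaffold planner
        noise = B[s] @ np.random.randn(len(z))      # 2. narrative dynamics network:
        z = A[s] @ z + noise                        #    Z_i ~ N(A_s Z_{i-1}, B_s B_s^T)
        history.append(cond_lm(z, history))         # 3. conditional language model
    return history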
<<</The Full Model>>>
<<</A Switching Dynamical System for Narrative Generation>>>
<<<Learning and Posterior Inference>>>
Due to the conditionals parameterized by neural networks we use amortized variational inference in a manner similar to Variational AutoEncoders BIBREF16, both to learn an approximate posterior $q(S, Z | X)$ and to learn the generative model parameters by maximizing a lower bound on the data likelihood (ELBO). We assume that the approximate posterior factorizes as follows:
Like in VAEs, computing these individual factors is done through a parameterized function called the inference or recognition network whose parameters are trained jointly with the generative model. In our case there are two forms for the factors in our posterior: (1) The first form, $q(S_i | \textbf {X}) = q_{S_i}$ is parameterized by a classifier that takes in the set of sentences $\mathbf {X}$ and outputs a categorical distribution over the switching variables. (2) The second form, $q(Z_i| Z_{i-1}, S_i, X_{:i}, X_{i}) = q_{Z_i}$ is realized by functions $f_{\mu }(Z_{i-1}, S_i, X_{:i}, X_{i})$ and $f_\sigma (Z_{i-1}, S_i, X_{:i}, X_{i})$ that output the mean and variance, respectively, of a Gaussian over $Z_i$.
Borrowing terminology from VAEs, the approximate posterior (the factors given above) act as an `encoder', while the generative model from the previous section can be seen as the `decoder'. This type of training has been previously used in BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21.
<<<Lower bound formula & exact training algorithm>>>
As mentioned previously, we optimize all parameters (including the variational factor functions) by optimizing a lower bound on the data likelihood. The model may be trained either with supervision labels for the switching states (in our case, sentiment labels) or without supervised labels.
If one is training without the sentiment labels, then the lower bound on the marginal likelihood (and thus our optimization objective) may be written as follows:
The derivation for this objective is identical to that found in BIBREF18, BIBREF19, and simply relies on using properties of iterated expectations. All expectations are estimated with Monte Carlo samples.
If training with the sentiment labels $S_1,...,S_N$, then the objective is similar (but without the sampling of the switching states), and is augmented with an additional supervision objective as done in BIBREF22:
Final training procedure for a single narrative is:
For each sentence (starting from the first), sample the switching state $S_i$ from $q(S_i | \textbf {X})$.
For each sentence (starting from the first), sample the latent $Z_i$ from $q(Z_i | S_i, Z_{i-1}, X)$.
Evaluate the data likelihood and KL term(s) with these samples.
Take the gradients of the objective function w.r.t. all parameters, using the reparameterization trick for $q_{Z_i}$ BIBREF16 or the Gumbel-Softmax trick for $q_{S_i}$ BIBREF23, and optimize.
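A schematic PyTorch-style sketch of one such training step for the unsupervised objective; the inference networks `q_S` and `q_Z`, the decoder, and the KL computation are placeholders for the parameterized components described above, and their interfaces are assumptions.

import torch
import torch.nn.functional as F

def train_step(sentences, q_S, q_Z, decoder, kl_terms, optimizer, tau=1.0):
    loss, z_prev = 0.0, None
    for i, x in enumerate(sentences):
        s_logits = q_S(sentences, i)                                 # q(S_i | X)
        s = F.gumbel_softmax(s_logits, tau=tau)                      # Gumbel-Softmax sample of S_i
        mu, log_sigma = q_Z(z_prev, s, sentences[:i + 1])            # q(Z_i | Z_{i-1}, S_i, X_{:i}, X_i)
        z = mu + torch.exp(log_sigma) * torch.randn_like(mu)         # reparameterization trick
        loss = loss - decoder.log_prob(x, z, sentences[:i])          # data likelihood term
        loss = loss + kl_terms(s_logits, mu, log_sigma, z_prev, s)   # KL term(s) of the ELBO
        z_prev = z
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()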
<<</Lower bound formula & exact training algorithm>>>
<<</Learning and Posterior Inference>>>
<<<Interpolations via Gibbs Sampling>>>
One of the benefits of probabilistic formulation is the possibility (if an inference procedure can be found) of generating narratives with specific constraints, where the constraints may be specified as clamped variables in the model. In this section, we show how narratives may be generated conditioned on arbitrary bits and pieces of the narrative already filled in, using approximate Gibbs sampling. This allows one to, for example, interpolate a narrative given the first and the last sentence (similar to how earlier story generation systems were able to generate with a given end goal in mind). Some examples of these interpolations generated by our system can be found in Table TABREF37. We give the equations and summarize the algorithm in the next sections.
<<<Conditionals for Gibbs Sampling>>>
For our Gibbs sampling algorithm we give the narrative scaffold (switching variables), $S_1,...,S_T \in \mathbf {S}$ and a set of observed sentences, $\mathbf {X^+}$. This may be any set of sentences (the first and last, just the second sentence, etc) as inputs to the system. We wish to find values for the unobserved sentences in set $\mathbf {X^-}$ by sampling from the distribution $P(\mathbf {X^-}, Z_1,...,Z_T | \mathbf {S},\mathbf {X^+})$. We perform this sampling via Gibbs sampling. Two different forms of conditionals need to be derived to do Gibbs sampling. One over some $Z_i$ conditioned on everything else, and one over some $X_i$ conditioned on everything else.
By using the d-separation properties of the graph, and substituting the true posterior over $Z_{i}$ with our approximate posterior $q$, we can show the first distribution is approximately proportional to
The last line is the product between a Gaussian density over $Z_{i+1}$ and $Z_{i}$, respectively. With some algebraic manipulations, one can show the last line is proportional to a single Gaussian PDF over $Z_i$:
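The completing-the-square step alluded to above is the standard Gaussian product identity; in generic notation (not necessarily matching the paper's symbols), for a Gaussian $\mathcal{N}(z;\mu_q,\Sigma_q)$ over $Z_i$ and a Gaussian $\mathcal{N}(Z_{i+1}; Az, \Sigma_d)$ viewed as a function of $z$:

\mathcal{N}(z;\mu_q,\Sigma_q)\,\mathcal{N}(Z_{i+1};Az,\Sigma_d) \propto \mathcal{N}(z;\mu^*,\Sigma^*), \quad (\Sigma^*)^{-1} = \Sigma_q^{-1} + A^{\top}\Sigma_d^{-1}A, \quad \mu^* = \Sigma^*\left(\Sigma_q^{-1}\mu_q + A^{\top}\Sigma_d^{-1}Z_{i+1}\right)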
To find the second conditional, one can use the d-separation properties of the graph to find that it is proportional to:
These two distributions are simply factors of our conditional language model, and both terms can thus be evaluated easily. In theory, one could use this fact to sample the original conditional via Metropolis-Hastings. Unfortunately, we found this approach to be much too slow for practical purposes. We observed that the simple heuristic of deterministically assigning $X_i$ to be the greedy decoded output of the conditional language model $P(X_{i} | X_{:i}, Z_{i})$ works well, as evidenced by the empirical results. We leave it for future work to research different conditional language model parameterizations that allow easy sampling from this conditional.
<<</Conditionals for Gibbs Sampling>>>
<<<Gibbs Sampling Interpolation Overview>>>
The variables in the Gibbs sampler are first initialized using some heuristics (see Supplemental Materials for details). After initialization, performing the interpolations with Gibbs sampling follows the below two step process:
For each $Z_i$, sample a value $Z^\prime $ from equation $(1)$ and set $Z_i$ to $Z^\prime $.
For each $X_i$ in $\mathbf {X}^-$, find a new value for $X_i$ by running greedy decoding using the conditional language model.
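The overall loop might be sketched as follows; `sample_z` stands in for sampling from equation (1) and `greedy_decode` for greedy decoding with the conditional language model, and both are caller-supplied hypothetical helpers rather than names from the paper.

def gibbs_interpolate(sentences, observed_idx, Z, S, sample_z, greedy_decode, n_iters=50):
    # sentences: list with the clamped sentences filled in and placeholders elsewhere.
    # observed_idx: indices of the given sentences; Z, S: initialized latents and scaffold.
    for _ in range(n_iters):
        for i in range(len(sentences)):                          # step 1: resample every Z_i
            Z[i] = sample_z(i, Z, S, sentences)
        for i in range(len(sentences)):                          # step 2: re-decode the missing sentences
            if i not in observed_idx:
                sentences[i] = greedy_decode(Z[i], sentences[:i])
    return sentences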
<<</Gibbs Sampling Interpolation Overview>>>
<<</Interpolations via Gibbs Sampling>>>
<<<Training Details>>>
<<<Dataset and Preprocessing>>>
We use the ROCStories corpora introduced in BIBREF27. It contains 98,159 short commonsense stories in English for training, and 1,570 stories each for validation and test. Each story in the dataset has five sentences and captures causal and temporal commonsense relations. We limit our vocabulary size to 16,983 based on a per-word frequency cutoff set to 5. For sentiment tags, we automatically tag the entirety of the corpus with the rule-based sentiment tagger Vader BIBREF28, and bucket the polarity scores of Vader into three tags: neutral, negative, and positive. These tags form the label set of the $S$ variables in our SLDS model. We tokenize the stories with the spaCy tokenizer. Each sentence in the input narrative has an <eos> tag, except for the S2S model discussed below.
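The bucketing step can be sketched with the vaderSentiment package; the cut-offs below are commonly used defaults and are an assumption, since the exact thresholds are not given in the text.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_tag(sentence, pos_thresh=0.05, neg_thresh=-0.05):
    # Map Vader's compound polarity score to one of the three scaffold tags.
    score = analyzer.polarity_scores(sentence)["compound"]
    if score >= pos_thresh:
        return "positive"
    if score <= neg_thresh:
        return "negative"
    return "neutral"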
<<</Dataset and Preprocessing>>>
<<<Switching Linear Dynamical System (SLDS)>>>
The SLDS has RNN encoder and decoder networks with single-layer GRU cells of hidden size 1024. The model uses an embedding size of 300. We train the model using the Adam optimizer with the PyTorch defaults. We stop training the models when the validation loss does not decrease for 3 consecutive epochs. Training details remain the same as above unless otherwise mentioned.
<<</Switching Linear Dynamical System (SLDS)>>>
<<<Baselines>>>
Language Model (LM): We train a two layer recurrent neural language model with GRU cells of hidden size 512.
Sequence-to-Sequence Attention Model (S2S): We train a two-layer neural sequence-to-sequence model equipped with a bilinear attention function, with GRU cells of hidden size 512. Sentiment tags for a narrative (1 for each sentence) are given as input to the model and the corresponding sentences are concatenated together as the output with only one <eos> tag at the end. This model is trained with a 0.1 dropout. This model is comparable to the static model of BIBREF7, and other recent works employing a notion of scaffolding in neural generation (albeit adapted for our setting).
Linear Dynamical System (LDS): We also train a linear dynamical system as discussed in Section SECREF1 as one of our baselines for fair comparisons. Apart from having just a single transition matrix this model has the same architectural details as SLDS.
Semi-Supervised SLDS (SLDS-X%): To gauge the usability of semi-supervision, we also train semi-supervised SLDS models with varying amounts of labelled sentiment tags, unlike the original model which uses 100% tagged data. We refer to these as SLDS-X%, where X is the % of labelled data used for training: 1%, 10%, 25%, and 50%.
<<</Baselines>>>
<<</Training Details>>>
<<<Evaluations>>>
As described above, our model is able to perform narrative interpolations via an approximate Gibbs sampling procedure. At the core of our evaluations is thus a fill-in-the-sentences task. We provide 1 or 2 sentences, and require the model to generate the rest of the narrative. We evaluate this via automatic evaluations as well as with crowd-sourced human evaluations. We also report perplexity to evaluate the models' ability to fit the data. Lastly, we look at whether the transitions learned by the SLDS models capture what they are intended to capture: does using the transition matrix associated with a sentiment tag (positive/negative/neutral) lead to a generated sentence with that sentiment?
<<<Generating the Interpolations>>>
For the SLDS models, the interpolations are generated via the Gibbs sampling algorithm described earlier. In all experiments for the SLDS models we draw 50 samples (including burn in samples) and output the interpolation that maximizes the probability of the given sentence(s). Since the baselines do not have the means for doing interpolations, we simulate `interpolations' for the baselines; we draw 1000 samples using top k (with k=15) truncated sampling (conditioned on the given initial sentences, if available). We then output the sample that maximizes the probability of the clamped sentences around which we are interpolating the others. We allow the S2S access to the gold sentiment tags. To give a lower bound on the performance of the SLDS model, we do not provide it with gold tags. We instead provide the SLDS model with the semi-noisy tags that are output from $q(S_i | X)$.
<<</Generating the Interpolations>>>
<<<Automatic Evaluation of Interpolations>>>
We automatically evaluate on four different types of interpolations (where different combinations of sentences are removed and the model is forced to regenerate them). We evaluate the generations with the ROUGE BIBREF29 and METEOR BIBREF30 metrics using the true sentences as targets. Table TABREF33 shows the automatic evaluation results from interpolations using our proposed models and baselines. The #Sent(s) column indicates which sentence(s) were removed and then regenerated by the model. We gave the baselines a slight edge over SLDS because they pick the best out of 1000 samples while SLDS picks from only 50. The SLDS models see their largest gain over the baseline models when at least the first sentence is given as an input. The baseline models do better when the first and second sentence need to be imputed. This is likely due to the fact that having access to the earlier sentences allows a better initialization for the Gibbs sampler. Surprisingly, the semi-supervised variants of the SLDS models achieve higher scores. The reasons for this are discussed below in the Perplexity section.
<<</Automatic Evaluation of Interpolations>>>
<<<Human Evaluation of Interpolations>>>
<<<Annotation Scheme>>>
As automatic evaluation metrics are not sufficient to assess the quality of any creative task such as narrative generation, we measure the quality of the generations through human evaluation of 200 stories on the Amazon Mechanical Turk platform. We provided Turkers with two generated narratives from two different models, each with five sentences. The first and last sentences were fed to each model as input, and the middle three sentences were generated. Each pair of narratives is graded by 3 users, each with two tasks: (1) to rank on a scale of 0-3 each of the sentences except the first one on the basis of its coherency with the previous sentence(s), and (2) to compare and rank the two narratives based on their overall coherency, i.e. how well the story connects the starting/ending sentences.
<<</Annotation Scheme>>>
<<<Human Evaluation Results>>>
Table TABREF41 reports the results of human evaluations of SLDS and baseline generations. We can observe that people preferred narratives generated by SLDS over the ones generated by the baseline models (LM and S2S), as they found the former model more coherent, which is an important criterion for narrative generation. 51.3% of the time SLDS generates better narratives than the LM model, while the LM in turn does so only 35.0% of the time. 13.7% of the generations end up in a tie. The mean sentence level coherence score for SLDS is around 12.5% larger than that of the LM, with a slightly lower standard deviation. We see similar results when compared against the S2S model.
<<</Human Evaluation Results>>>
<<</Human Evaluation of Interpolations>>>
<<<Language Modeling Perplexity Score>>>
As our models are essentially language models, we evaluated their per-sentence negative log-likelihood and per-word perplexity scores, which can be viewed as an indirect measure of how well a system works as a generative model of narrative text. For the SLDS and LDS models these scores are approximations, an upper bound (the negative of the ELBO) to the actual values. For the other two models the scores are exact. A good model should assign low perplexity scores to its test set. In Table TABREF44 SLDS achieves the lowest scores, implying that it is able to model the data distribution well. In Table TABREF45 we also calculate the perplexity scores for the semi-supervised SLDS models to assess the effectiveness of semi-supervised training. Surprisingly, the models with less supervision scored better in terms of perplexity. One possibility for this might be the use of the soft Gumbel-Softmax in the semi-supervised models. The soft Gumbel-Softmax variant does not commit to using a single transition matrix at each time step (instead linearly combining them, weighted by the Softmax weights). This fact may permit the model greater flexibility in fitting the training data. While this leads to better scores in metrics such as perplexity or BLEU, it does leads to transitions that are worse in capturing the properties they should be capturing, as we shall see in the next section.
<<</Language Modeling Perplexity Score>>>
<<<Evaluation of Transition Dynamics>>>
One matter of interest is whether or not the transitions are capturing what they are supposed to capture, appropriate sentiment. Since we used the sentiment tagger Vader for training tags, we again utilize it to evaluate whether using transitions of a certain sentiment actually leads the model to produce outputs with the given sentiment. To perform this evaluation, we give as input to our models (and the S2S baseline) the sentiment tags for a sentence and allow it to generate a sentence conditioned on these sentiment tags. We then tag the generated sentences with Vader and see if the sentiment tags match the originals. We calculate the F1 score across all sentiment tags and report the macro average. In Table TABREF47 we see that having labels is incredibly important for meaningful transitions. There is a large drop in F1 as the amount of labels given to the model is decreased. The SLDS model that is trained with 100% of the labels performs a little better than even S2S, despite not having direct access to the sentiment labels (SLDS only uses the sentiment labels to decide which transition to use while the S2S model uses attention directly on the sentiment labels).
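The evaluation loop just described can be approximated with the sketch below. It assumes NLTK's VADER implementation and the conventional compound-score thresholds for assigning a sentiment tag; the generated sentences and requested tags are placeholders, not outputs from the paper's models.

# Sketch: re-tag generated sentences with VADER and compare to the requested tags.
# Assumes nltk.download('vader_lexicon') has been run; thresholds follow the
# common VADER convention (compound >= 0.05 positive, <= -0.05 negative).
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.metrics import f1_score

def vader_tag(sentence, analyzer):
    c = analyzer.polarity_scores(sentence)["compound"]
    return "positive" if c >= 0.05 else "negative" if c <= -0.05 else "neutral"

requested_tags = ["positive", "negative", "neutral"]   # tags given to the model
generated = ["What a wonderful day!",                  # placeholder generations
             "The trip was a complete disaster.",
             "He walked to the store."]

analyzer = SentimentIntensityAnalyzer()
predicted_tags = [vader_tag(s, analyzer) for s in generated]
print(f1_score(requested_tags, predicted_tags, average="macro"))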
<<</Evaluation of Transition Dynamics>>>
<<</Evaluations>>>
<<<Related Work>>>
Story/narrative generation has a rich history in the field of AI. Many early systems were based on structured formalisms for describing common narrative structures BIBREF9, BIBREF10, BIBREF31, many being inspired by the initial work of BIBREF0. There has been a swath of recent work that has looked to add some semblance of a `narrative scaffold' back into generation methods BIBREF32, BIBREF6, BIBREF7, BIBREF33. Many of these methods work as conditional LMs (conditioned directly on the scaffold). This line of work may be combined with our formalization by additionally conditioning the generation on the switching state, as done in the model of BIBREF4. Recent work by BIBREF34 has similar goals to ours in permitting more controllability in generation systems, developing an RL-based system that allows users to specify an end goal for a story (by specifying the event class that is desired to appear at the end). Their work differs from ours in that it does not deal with text directly, modeling only the sequences of events in the narrative. It may be possible to utilize this model as the scaffolding component in our model (utilizing their RL policy for the scaffold planner, rather than the simple Markovian distribution used here).
<<</Related Work>>>
<<<Conclusion and Future Work>>>
In this paper, we formulated the problem of narrative generation as a switching dynamical system. We showed how this formulation captures notions important in narrative generation, such as narrative dynamics and scaffolds. We developed an approximate Gibbs sampling algorithm for the model that permits the system to generate interpolations conditioned on arbitrary parts of the narrative, and evaluated these interpolations using both human and automatic evaluations. Though in this work we used sentiment tags for our scaffolds/switching variables, future work may look at utilizing different kinds of information to guide the generation of narratives. Utilizing the main predicate of a sentence as a scaffold would be a logical next step, and may prove more informative than the sentiment trajectory. A scaffold such as this can take on many more possible values than a sentiment tag, and as such, it may prove difficult to assign a set of dynamics to each value. Another avenue for future work would deal with this possible problem. One potential solution could be to associate each switching variable value with a (learned) vector in a probability simplex, and use this vector to combine a small set of “primitive” dynamics matrices in order to get that value's associated set of dynamics.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nA Switching Dynamical System for Narrative Generation\nNarrative Dynamics in a Dynamical System\nNarrative Scaffolds as Switching Variables\nNarrative Scaffold - Emotional Trajectory\nThe Full Model\nLearning and Posterior Inference\nLower bound formula & exact training algorithm\nInterpolations via Gibbs Sampling\nConditionals for Gibbs Sampling\nGibbs Sampling Interpolation Overview\nTraining Details\nDataset and Preprocessing\nSwitching Linear Dynamical System (SLDS)\nBaselines\nEvaluations\nGenerating the Interpolations\nAutomatic Evaluation of Interpolations\nHuman Evaluation of Interpolations\nAnnotation Scheme\nHuman Evaluation Results\nLanguage Modeling Perplexity Score\nEvaluation of Transition Dynamics\nRelated Work\nConclusion and Future Work"
],
"type": "outline"
}
|
1909.07593
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Learning Explicit and Implicit Structures for Targeted Sentiment Analysis
<<<Abstract>>>
Targeted sentiment analysis is the task of jointly predicting target entities and their associated sentiment information. Existing research efforts mostly regard this joint task as a sequence labeling problem, building models that can capture explicit structures in the output space. However, the importance of capturing implicit global structural information that resides in the input space is largely unexplored. In this work, we argue that both types of information (implicit and explicit structural information) are crucial for building a successful targeted sentiment analysis model. Our experimental results show that properly capturing both information is able to lead to better performance than competitive existing approaches. We also conduct extensive experiments to investigate our model's effectiveness and robustness.
<<</Abstract>>>
<<<Introduction>>>
Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing).
Targeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative.
Existing research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures.
However, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space.
On the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost.
However, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging.
Motivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work:
We propose a model that is able to properly integrate both explicit and implicit structural information, called EI. The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence.
We conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model.
<<</Introduction>>>
<<<Approach>>>
Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets.
<<<Explicit Structure>>>
Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space. To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\mathbf {B}_p$, $\mathbf {A}_p$, and $\mathbf {E}_{\epsilon ,p}$, where $p \in \lbrace +, -, 0\rbrace $ indicates the sentiment polarity and $\epsilon \in \lbrace \textit {B,M,E,S}\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows.
$\mathbf {B}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears before the target word or exactly as the first word of the target.
$\mathbf {A}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears after the target word or exactly as the last word of the target.
$\mathbf {E}_{\epsilon ,p}$ is used to denote the current word is part of a sentiment span with polarity $p$, and is also a part of the target. The BMES sub-tag $\epsilon $ denotes the position information within the target phrase. For example, $\mathbf {E}_{B,+}$ represents that the current word appears as the first word of a target with the positive polarity.
We illustrate how to construct the label sequence for a specific combination of sentiment spans of the given example sentence in Figure FIGREF5, where three non-overlapping sentiment spans in yellow are presented. Each such sentiment span encodes the sentiment polarity in blue for a target in bold in pink square. At each position, we allow multiple tags in a sequence to appear such that the edge $\mathbf {A}_p\mathbf {B}_{p^{\prime }}$ in red consistently indicates the boundary between two adjacent sentiment spans.
The first sentiment span with positive ($+$) polarity contains only one word which is also the target. Such a single word target is also the beginning and the end of the target. We use three tags $\mathbf {B}_+$, $\mathbf {E}_{S,+}$ and $\mathbf {A}_+$ to encode such information above.
The second sentiment span with positive ($+$) polarity contains a two-word target “Shin Lim”. The word “and” appearing before such target takes a tag $\mathbf {B}_+$. The words “perform amazing magic” appearing after such target take a tag $\mathbf {A}_+$ at each position. As for the target, the word “Shin” at the beginning of the target takes tags $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, while the word “Lim” at the end of the target takes tags $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$. The third sentiment span with neutral (0) polarity contains a single-word target “AGT”. Similarly, we use three tags $\mathbf {B}_0$, $\mathbf {E}_{S,0}$ and $\mathbf {A}_0$ to represent such single word target. The word “on” appearing before such target takes a tag $\mathbf {B}_0$. The word “2018” appearing afterwards takes a tag $\mathbf {A}_0$.
Note that if there exists a target with length larger than 2, the tag $\mathbf {E}_{M,p}$ will be used. For example in Figure FIGREF5, if the target phrase “Shin Lim” is replaced by “Shin Bob Lim”, we will keep the tags at “Shin” and “Lim” unchanged. We assign a tag $\mathbf {E}_{M,+}$ at the word “Bob” to indicate that “Bob” appears in the middle of the target by following the BMES tagging scheme.
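To make the tagging scheme concrete, the fragment below spells out, in plain Python data structures, the tags assigned to the second sentiment span described above. It is purely illustrative, mirroring the prose description rather than any released code, and the ASCII tag names stand in for the formal symbols.

# Illustrative only: tags for the second sentiment span
# "... and Shin Lim perform amazing magic ..." with positive polarity,
# using B_p / A_p / E_{eps,p} tags and BMES sub-tags as described above.
span_tags = [
    ("and",     ["B+"]),           # before the target, positive span
    ("Shin",    ["B+", "E_B+"]),   # first word of the target
    ("Lim",     ["E_E+", "A+"]),   # last word of the target
    ("perform", ["A+"]),           # after the target
    ("amazing", ["A+"]),
    ("magic",   ["A+"]),
]
for word, tags in span_tags:
    print(word, tags)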
Finally, we represent the label sequence by connecting adjacent tags sequentially with edges. Notice that for a given input sentence and the output targets as well as the associated targeted sentiment, there exist exponentially many possible label sequences, each specifying a different possible combinations of sentiment spans. Figure FIGREF11 shows a label sequence for an alternative combination of the sentiment spans. Those label sequences representing the same input and output construct a latent variable in our model, capturing the flexible explicit structures in the output space.
We use a log-linear formulation to parameterize our model. Specifically, the probability of predicting a possible output $\mathbf {y}$, which is a list of targets and their associated sentiment information, given an input sentence $\mathbf {x}$, is defined as:
where $s(\mathbf {x},\mathbf {y},\mathbf {h})$ is a score function defined over the sentence $\mathbf {x}$ and the output structure $\mathbf {y}$, together with the latent variable $\mathbf {h}$ that provides all the possible combinations of sentiment spans for the $(\mathbf {x,y})$ tuple. We define $E(\mathbf {x},\mathbf {y},\mathbf {h})$ as a set of all the edges appearing in all the label sequences for such combinations of sentiment spans. To compute $s(\mathbf {x},\mathbf {y},\mathbf {h})$, we sum up the scores of each edge in $E(\mathbf {x},\mathbf {y},\mathbf {h})$:
where $\phi _{\mathbf {x}}(e)$ is a score function defined over an edge $e$ for the input $\mathbf {x}$.
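The display equations referenced by “is defined as:” and “we sum up the scores of each edge” do not survive in this extracted text; assuming the standard latent-variable log-linear parameterization that the surrounding prose describes, they would most plausibly read $P(\mathbf {y}|\mathbf {x}) = \frac{\sum _{\mathbf {h}} \exp (s(\mathbf {x},\mathbf {y},\mathbf {h}))}{\sum _{\mathbf {y}^{\prime }} \sum _{\mathbf {h}^{\prime }} \exp (s(\mathbf {x},\mathbf {y}^{\prime },\mathbf {h}^{\prime }))}$ and $s(\mathbf {x},\mathbf {y},\mathbf {h}) = \sum _{e \in E(\mathbf {x},\mathbf {y},\mathbf {h})} \phi _{\mathbf {x}}(e)$, with the denominator normalizing over all candidate outputs and their latent sentiment-span combinations.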
The overall model is analogous to that of a neural CRF BIBREF19, BIBREF20; hence the inference and decoding follow standard marginal and MAP inference procedures. For example, the prediction of $\mathbf {y}$ follows the Viterbi-like MAP inference procedure.
<<</Explicit Structure>>>
<<<Implicit Structure>>>
We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention.
Given an input token sequence $\mathbf {x}=\lbrace x_1,x_2,\cdots ,x_{n}\rbrace $ of length $n$, we first compute the concatenated embedding $\mathbf {e}_k=[\mathbf {w}_k;\mathbf {c}_k]$ based on word embedding $\mathbf {w}_k$ and character embedding $\mathbf {c}_k$ at position $k$.
As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\mathbf {h}_k=\mathrm {BiLSTM}(\mathbf {e_1},\mathbf {e_2}, \cdots , \mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows:
Note that the subscript $p$ and $\epsilon $ at the right hand side of above equations denote the corresponding index of the vector that $f_t$ or $f_s$ returns. We apply $f_{t}$ on edges $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {E}^{k+1}_{\epsilon ^{\prime },p}$ and $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {A}^{k}_{p}$, since words at these edges are parts of the target phrase in a sentiment span. Similarly, we apply $f_{s}$ on edges $\mathbf {B}^{k}_{p}\mathbf {B}^{k+1}_{p}$,$\mathbf {A}^{k}_{p}\mathbf {A}^{k+1}_{p}$ and $\mathbf {A}^{k}_{p}\mathbf {B}^{k+1}_{p^{\prime }}$, since words at these edges contribute the sentiment information for the target in the sentiment span.
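A minimal PyTorch sketch of the scoring components just described is given below. Only the BiLSTM encoder with the two linear heads $f_t$ (4 BMES scores) and $f_s$ (3 polarity scores) is shown, not the full latent CRF; the embedding dimension and other hyperparameters are placeholders rather than the paper's settings.

import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Sketch of the BiLSTM encoder with the f_t (BMES) and f_s (polarity) heads."""
    def __init__(self, emb_dim=150, hidden=500):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.f_t = nn.Linear(2 * hidden, 4)  # B, M, E, S scores for target tags
        self.f_s = nn.Linear(2 * hidden, 3)  # +, 0, - scores for sentiment edges

    def forward(self, embeddings):           # embeddings: (batch, n, emb_dim)
        h, _ = self.bilstm(embeddings)       # hidden state h_k for every position k
        return self.f_t(h), self.f_s(h)      # per-position tag and polarity scores

# Usage with random placeholder embeddings for a 10-word sentence
scores_t, scores_s = EdgeScorer()(torch.randn(1, 10, 150))
print(scores_t.shape, scores_s.shape)        # (1, 10, 4) and (1, 10, 3)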
As illustrated in Figure FIGREF14, we calculate $\mathbf {a}_k$, the output of self-attention at position $k$:
where $\alpha _{k,j}$ is the normalized weight score for $\mathbf {\beta }_{k,j}$, and $\mathbf {\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. In addition, $W$ and $b$ as well as the attention matrix $U$ are the weights to be learned. Such a vector $\mathbf {a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence.
Motivated by the character embeddings BIBREF21 which are generated based on hidden states at two ends of a subsequence, we encode such implicit structures for a target similarly. For any target starting at the position $k_1$ and ending at the position $k_2$, we could use $\mathbf {a}_{k_1}$ and $\mathbf {a}_{k_2}$ at two ends to represent the implicit structures of such a target. We encode such information on the edges $\mathbf {B}^{k_1}_{p}\mathbf {E}^{k_1}_{\epsilon ,p}$ and $\mathbf {E}^{k_2}_{\epsilon ,p}\mathbf {A}^{k_2}_{p}$ which appear at the beginning and the end of a target phrase respectively with sentiment polarity $p$. To do so, we assign the scores calculated from the self-attention to such two edges:
where $g_{s}$ returns a vector of length 3 with scores of three polarities. Note that $\mathbf {h}_k$ and $\mathbf {a}_k$ could be pre-computed at every position $k$ and assigned to the corresponding edges. Such an approach allows us to maintain the inference time complexity $O(Tn)$, where $T$ is the maximum number of tags at each position which is 9 in this work and $n$ is the number of words in the input sentence. This approach enables EI to efficiently learn rich implicit structures from LSTM and self-attention for exponentially many combinations of targets.
<<</Implicit Structure>>>
<<</Approach>>>
<<<Experimental Setup>>>
<<<Data>>>
We mainly conduct our experiments on the datasets released by BIBREF9. They contain 2,350 English tweets and 7,105 Spanish tweets, with target and targeted sentiment annotated. See Table TABREF15 for corpus statistics.
<<</Data>>>
<<<Evaluation Metrics>>>
Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both target boundary and sentiment polarity to be correct.
<<</Evaluation Metrics>>>
<<<Hyperparameters>>>
We adopt pretrained embeddings from BIBREF22 and BIBREF23 for English data and Spanish data respectively. We use a 2-layer LSTM (for both directions) with a hidden dimension of 500 and 600 for English data and Spanish data respectively. The dimension of the attention weight $U$ is 300. As for optimization, we use the Adam BIBREF24 optimizer to optimize the model with batch size 1 and dropout rate $0.5$. All the neural weights are initialized by Xavier BIBREF25.
<<</Hyperparameters>>>
<<<Training and Implementation>>>
We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26.
<<</Training and Implementation>>>
<<<Baselines>>>
We consider the following baselines:
Pipeline BIBREF10 and Collapse BIBREF10 both are linear-chain CRF models using discrete features and embeddings. The former predicts targets first and calculate targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and sentiment tag together.
Joint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags.
Bi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU.
HBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embedding. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU.
SS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings.
SA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from LSTM and a vector constructed by self-attention at each position, and feeds them into CRF as features. The model attempts to capture rich implicit structures in the input space, but it does not put effort on explicit structures in the output space.
E-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space.
EI- is a weaker version of EI. Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space.
<<</Baselines>>>
<<</Experimental Setup>>>
<<<Results and Discussion>>>
<<<Main Results>>>
The main results are presented in Table TABREF16, where explicit structures as well as implicit structures are indicated for each model for clear comparisons.
In general, our model EI outperforms all the baselines. Specifically, it outperforms the strongest baseline EI- significantly with $p < 0.01$ on the English and Spanish datasets in terms of $F_1$ scores. Note that EI- which models flexible explicit structures and less implicit structural information, achieves better performance than most of the baselines, indicating flexible explicit structures contribute a lot to the performance boost.
Now let us take a closer look at the differences based on detailed comparisons. First of all, we compare our model EI with the work proposed by BIBREF10. The Pipeline model (based on CRF) as well as the Joint and Collapse models (based on SSVM) in their work capture fixed explicit structures. These models rely on a multi-layer perceptron (MLP) to obtain the local context features for implicit structures, and do not put much effort into capturing better explicit or implicit structures. Our model EI (and even EI-) outperforms these models significantly. We also compare our work with models in BIBREF12, which also capture fixed explicit structures. Such models leverage different GRUs (single-layer or multi-layer) and different input features (word embeddings and character representations) to learn better contextual features. Their best result by HMBi-GRU is obtained with a multi-layer GRU with word embeddings and character embeddings. As we can see, our model EI outperforms HMBi-GRU under all evaluation metrics. On the English data, EI obtains a $6.50$ higher $F_1$ score and a $2.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. On Spanish, EI obtains a $5.16$ higher $F_1$ score and a $0.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. Notably, compared with HMBi-GRU, even EI-, which captures the flexible explicit structures, achieves better performance on most metrics and obtains comparable results in terms of precision and $F_1$ score on Spanish. Since both the EI and EI- models attempt to capture the flexible explicit structures, the comparisons above imply the importance of modeling such flexible explicit structures in the output space.
We also compare EI with E-I. The difference between these two models is that E-I removes the BMES sub-tags. Such a model captures less explicit structural information in the output space. We can see that EI outperforms E-I. Such results show that adopting BMES sub-tags in the output space to capture explicit structural information is beneficial.
Now we compare EI with SA-CRF which is a linear-chain CRF model with self-attention. Such a model attempts to capture rich implicit structures, and fixed explicit structures. The difference between EI and SA-CRF is that our model EI captures flexible explicit structures in the output space which model output representations as latent variables. We can see that EI outperforms SA-CRF on all the metrics. Such a comparison also implies the importance of capturing flexible explicit structures in the output space.
Next, we focus on the comparisons with SS BIBREF11 and SS + emb BIBREF11. Such two models as well as our models all capture the flexible explicit structures. As for the difference, both two SS models rely on hand-crafted discrete features to capture implicit structures, while our model EI and EI- learn better implicit structures by LSTM and self-attention. Furthermore, our models only require word embeddings and character embeddings as the input to our neural architecture to model rich implicit structures, leading to a comparatively simpler and more straightforward design. The comparison here suggests that LSTM and self-attention neural networks are able to capture better implicit structures than hand-crafted features.
Finally, we compare EI with EI-. We can see that the $F_1$ scores of targeted sentiment for both English and Spanish produced by EI are $0.95$ and $0.97$ points higher than EI-. The main difference here is that EI makes use of self-attention to capture richer implicit structures between each target phrase and all words in the complete sentence. The comparisons here indicate the importance of capturing rich implicit structures using self-attention on this task.
<<<Robustness>>>
Overall, all these comparisons above based on empirical results show the importance of capturing both flexible explicit structures in the output space and rich implicit structures by LSTM and self-attention in the input space.
We analyze the model robustness by assessing the performance on the targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories respectively, namely length of 1, 2, 3 and $\ge 4$. Figure FIGREF32 reports the $F_1$ scores of targeted sentiment for such 4 groups on Spanish. See the English results in the supplementary material. As we can see EI outperforms all the baselines on all groups.
Furthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. The subjectivity measures whether a target phrase expresses an opinion or not according to BIBREF1. Comparing with the best-performing system's results reported in BIBREF10 and BIBREF11, our model EI can achieve higher $F_1$ scores on subjectivity and non-neutral polarities.
<<</Robustness>>>
<<<Error Analysis>>>
We conducted error analysis for our main model EI. We calculate $F_1$ scores based on the partial match instead of exact match. The $F_1$ scores for target partial match is $76.04$ and $83.82$ for English and Spanish respectively. We compare these two numbers against $63.48$ and $71.17$ which are the $F_1$ scores based on exact match. This comparison indicates that boundaries of many predicted targets do not match exactly with those of the correct targets. Furthermore, we investigate the errors caused by incorrect sentiment polarities. We found that the major type of errors is to incorrectly predict positive targets as neutral targets. Such errors contribute $64\%$ and $36\%$ of total errors for English and Spanish respectively. We believe they are mainly caused by challenging expressions in the tweet input text. Such challenging expressions such as “below expectations” are very sparse in the data, which makes effective learning for such phrases difficult.
<<</Error Analysis>>>
<<</Main Results>>>
<<<Effect of Implicit Structures>>>
In order to understand whether the implicit structures are truly making contributions in terms of the overall performance, we compare the performance among four models: EI and EI- as well as two variants EI (i:MLP) and EI (i:Identity) (where i indicates the implicit structure). Such two variants replace the implicit structure by other components:
EI (i:MLP) replaces self-attention by multi-layer perceptron (MLP) for implicit structures. Such a variant attempts to capture implicit structures for a target phrase towards words restricted by a window of size 3 centered at the two ends of the target phrase.
EI (i:Identity) replaces self-attention by an identity layer as implicit structure. Such a variant attempts to capture implicit structures for a target phrase towards words at the two ends of the target phrase exactly.
Overall, those variants perform worse than EI on all the metrics. When the self-attention is replaced by MLP or the identity layer for implicit structures, the performance drops a lot on both target and targeted sentiment. Such two variants EI (i:MLP) and EI (i:Identity) consider the words within a small window centered at the two ends of the target phrase, which might not be capable of capturing the desired implicit structures. The EI- model capturing less implicit structural information achieves worse results than EI, but obtains better results than the two variants discussed above. This comparison implies that properly capturing implicit structures as the complement of explicit structural information is essential.
<<</Effect of Implicit Structures>>>
<<<Qualitative Analysis>>>
We present an example sentence in the test data in Figure FIGREF38, where the gold targets are in bold, the predicted targets are in the pink boxes, the gold sentiment is in blue and the predicted sentiment is in red. EI makes all correct predictions for three targets. EI- predicts correct boundaries for three targets and the targeted sentiment predictions are highlighted in Figure FIGREF38. As we can see, EI- incorrectly predicts the targeted sentiment on the first target as neutral (0). The first target here is far from the sentiment expression “sound good” which is not in the first sentiment span, making EI- not capable of capturing such a sentiment expression. This qualitative analysis helps us to better understand the importance of capturing implicit structures using both LSTM and self-attention.
<<</Qualitative Analysis>>>
<<<Additional Experiments>>>
We also conducted experiments on multi-lingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign such a target with the corresponding aspect sentiment polarity in the data. Note that we remove all the instances which contain no targets in the training data. Following the main experiment, we split $10\%$ of training data as development set for the selection of the best model during training.
We report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian respectively in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian due to the inability of discrete features in SS to capture the complex morphology in Russian.
<<</Additional Experiments>>>
<<</Results and Discussion>>>
<<<Related Work>>>
We briefly survey the research efforts on two types of TSA tasks mentioned in the introduction. Note that TSA is related to aspect sentiment analysis which is to determine the sentiment polarity given a target and an aspect describing a property of related topics.
<<<Predicting sentiment for a given target>>>
Such a task is typically solved by leveraging sentence structural information, such as syntactic trees BIBREF5, dependency trees BIBREF6 as well as surrounding context based on LSTM BIBREF29, GRU BIBREF7 or CNN BIBREF8. Another line of works leverage self-attention BIBREF30 or memory networks BIBREF31 to encode rich global context information. BIBREF16 adopted the segmental attention BIBREF32 to model the important text segments to compute the targeted sentiment. BIBREF33 studied the issue that the different combinations of target and aspect may result in different sentiment polarity. They proposed a model to distinguish such different combinations based on memory networks to produce the representation for aspect sentiment classification.
<<</Predicting sentiment for a given target>>>
<<<Jointly predicting targets and their associated sentiment>>>
Such a joint task is usually regarded as sequence labeling problem. BIBREF9 introduced the task of open domain targeted sentiment analysis. They proposed several models based on CRF such as the pipeline model, the collapsed model as well as the joint model to predict both targets and targeted sentiment information. Their experiments showed that the collapsed model and the joint model could achieve better results, implying the benefit of the joint learning on this task. BIBREF10 proposed an approach based on structured SVM BIBREF14, BIBREF15 integrating both discrete features and neural features for this joint task. BIBREF11 proposed the sentiment scope model motivated from a linguistic phenomenon to represent the structure information for both the targets and their associated sentiment polarities. They modelled the latent sentiment scope based on CRF with latent variables, and achieved the best performance among all the existing works. However, they did not explore much on the implicit structural information and their work mostly relied on hand-crafted discrete features. BIBREF12 adopted a multi-layer GRU to learn targets and sentiments jointly by producing the target tag and the sentiment tag at each position. They introduced a constraint forcing the sentiment tag at each position to be consistent with the target tag. However, they did not explore the explicit structural information in the output space as we do in this work.
<<</Jointly predicting targets and their associated sentiment>>>
<<</Related Work>>>
<<<Conclusion and Future Work>>>
In this work, we argue that properly modeling both explicit structures in the output space and the implicit structures in the input space are crucial for building a successful targeted sentiment analysis system. Specifically, we propose a new model that captures explicit structures with latent CRF, and uses LSTM and self-attention to capture rich implicit structures in the input space efficiently. Through extensive experiments, we show that our model is able to outperform competitive baseline models significantly, thanks to its ability to properly capture both explicit and implicit structural information.
Future work includes exploring approaches to capture explicit and implicit structural information to other sentiment analysis tasks and other structured prediction problems.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nApproach\nExplicit Structure\nImplicit Structure\nExperimental Setup\nData\nEvaluation Metrics\nHyperparameters\nTraining and Implementation\nBaselines\nResults and Discussion\nMain Results\nRobustness\nError Analysis\nEffect of Implicit Structures\nQualitative Analysis\nAdditional Experiments\nRelated Work\nPredicting sentiment for a given target\nJointly predicting targets and their associated sentiment\nConclusion and Future Work"
],
"type": "outline"
}
|
1908.05763
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
On the Robustness of Projection Neural Networks For Efficient Text Representation: An Empirical Study
<<<Abstract>>>
Recently, there has been strong interest in developing natural language applications that live on personal devices such as mobile phones, watches and IoT with the objective to preserve user privacy and have low memory. Advances in Locality-Sensitive Hashing (LSH)-based projection networks have demonstrated state-of-the-art performance without any embedding lookup tables and instead computing on-the-fly text representations. However, previous works have not investigated "What makes projection neural networks effective at capturing compact representations for text classification?" and "Are these projection models resistant to perturbations and misspellings in input text?". ::: In this paper, we analyze and answer these questions through perturbation analyses and by running experiments on multiple dialog act prediction tasks. Our results show that the projections are resistant to perturbations and misspellings compared to widely-used recurrent architectures that use word embeddings. On ATIS intent prediction task, when evaluated with perturbed input data, we observe that the performance of recurrent models that use word embeddings drops significantly by more than 30% compared to just 5% with projection networks, showing that LSH-based projection representations are robust and consistently lead to high quality performance.
<<</Abstract>>>
<<<Introduction>>>
At the core of Natural Language Processing (NLP) neural models are pre-trained word embeddings like Word2Vec BIBREF0, GloVe BIBREF1 and ELMo BIBREF2. They help initialize the neural models, lead to faster convergence and have improved performance for numerous applications such as Question Answering BIBREF3, Summarization BIBREF4 and Sentiment Analysis BIBREF5. While word embeddings are powerful when computation power and memory are not constrained, it becomes challenging to deploy them on-device due to their huge size.
This led to interesting research by BIBREF6, BIBREF7, BIBREF8, who showed that word embeddings can actually be replaced with lightweight binary LSH projections learned on-the-fly. The projection approach BIBREF9, BIBREF10 surmounts the need to store any embedding matrices, since the projections are dynamically computed. This further enables user privacy by performing inference directly on device without sending user data (e.g., personal information) to the server. The computation of the representation is linear in the number of inputs in the sentence, surmounting the need to maintain and look up a global vocabulary, and reducing the memory size to $O(|T \cdot d|)$. The projection representations can operate on the word and character level, and can be used to represent a sentence or a word depending on the NLP application. BIBREF6 have shown that on-device LSH projections lead to state-of-the-art results in dialog act classification and reach significant improvements upon prior LSTM and CNN neural models.
Despite this success, there are no studies examining the properties and power of LSH projections. In this paper, we address that by studying What makes projection models effective? and Are these projection models resistant to perturbations and misspellings in input text? To answer these questions, we conduct a series of experimental studies and analyses. For instance, by studying the collision of the learned projection representations, we verify the effectiveness of the produced representations. Our study shows that LSH projections have low collision, meaning that the representations are good, allowing the model to capture the meaning of words instead of colliding everything into one meaning. Next, by analyzing different character perturbations, we show the robustness of LSH projections when modeling word or sentence level representations. The intuition is that the projection should capture word misspellings as similar, and yet remain robust to semantically dissimilar terms. We show that Self-Governing Neural Network (SGNN) models BIBREF6 evaluated with perturbed LSH projections are resistant to misspellings and transformation attacks, while LSTMs drop in performance as perturbations increase. Overall, these studies showcase the robustness of LSH projection representations, their resistance to misspellings and transformations, and also explain why they lead to better performance.
<<</Introduction>>>
<<<Background: LSH projections for text representations>>>
The Projection function, $\mathbb {P}$ (Figure FIGREF1), BIBREF9 used in SGNN models BIBREF6 extracts token (or character) n-gram & skip-gram features from a raw input text, $\textbf {x}$ and dynamically generates a binary projection representation, $\mathbb {P}(\mathbf {x}) \in [0,1]^{T.d}$ after a Locality-Sensitive Hashing (LSH) based transformation, $\mathbb {L}$ as in
where $\mathbb {F}$ extracts n-grams (or skip-grams), $[f_1, \cdots , f_n]$, from the input text. Here, $[f_1, \cdots , f_n]$ could refer to either character level or token level n-gram (or skip-gram) features.
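As a rough illustration of how such an on-the-fly projection can be computed, the sketch below hashes token n-gram features into a fixed number of bits. It uses plain Python hashing with per-bit seeds and is only a simplified stand-in for the actual LSH transformation $\mathbb {L}$; the dimensions $T=80$, $d=14$ follow the configuration mentioned later in the paper.

import hashlib

def ngram_features(text, n_values=(1, 2)):
    """F(x): extract token n-gram features from raw text."""
    tokens = text.lower().split()
    feats = []
    for n in n_values:
        feats += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return feats

def project(text, T=80, d=14):
    """Simplified stand-in for P(x) = L(F(x)): returns T*d bits."""
    feats = ngram_features(text)
    bits = []
    for seed in range(T * d):                       # one hash function per output bit
        acc = 0
        for f in feats:
            h = hashlib.md5(f"{seed}:{f}".encode()).digest()
            acc += 1 if h[0] & 1 else -1            # signed contribution per feature
        bits.append(1 if acc > 0 else 0)
    return bits

print(sum(project("book a flight to boston")))      # number of set bits out of 1120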
<<</Background: LSH projections for text representations>>>
<<<Collision Study>>>
Before diving into the actual collision studies, it is important to understand what the properties of good projections are. For instance, good projections should be as separate as possible, while still capturing the inherent n-gram features. Words with similar character n-gram feature vectors should be closer to each other i.e. cat and cats, but yet separate from each other so that the network can learn that cat and cats are related, but yet different. Such observations are not evident from the projections. One way to understand them is by looking at the collision rates. For instance, if there are too many projection collisions, this means that the network is fundamentally incapable of learning and it will not be able to generalize.
For this purpose, we test how spread out the projections are for word and sentence representations. We take a large corpus, enwik9, and analyze the average Hamming distance of the words and sentences in the corpus. Intuitively, good projections should have fewer collisions. Our study shows that there is almost no collision. On average the Hamming distances between words are 557 bits, which is around 50% of the projection dimension. Standard deviations are one order of magnitude lower compared to the average Hamming distances between words, which means that on average the projections are more or less spread out. A high deviation would mean that too many words are either too close to each other or too far away from each other. To understand the properties of word and sentence projections, we conduct two experiments, one in which we compute the word projections and another in which we compute the sentence projections. For our experiments, we fix the projection dimension, $dim(\mathbb {P}(w)) = 1120$ ($T=80, \, d=14$), following BIBREF6. Results are shown in Table TABREF3 and Table TABREF4 respectively.
Table TABREF3 shows the collision results of the word level projections. On the left we list different projection configurations by varying the number of projection functions $T$, the dimensionality $d$, turning on or off character level projections, including varying size of n-gram and skip-gram features. For each projection configuration, we show the average Hamming distance and the standard deviation. As it can be seen, by increasing the number of n-gram and skip-gram features, the words become more spread out with lesser standard deviation. We recommend using higher number of n-gram and skip-gram features for better model performance.
Table TABREF4 shows the collision results of the sentence level projections. Similarly to Table TABREF3, the left side shows the different projection configurations. For each configuration, we show the average Hamming distance and standard deviation. In the sentence level projection study, we observe that when we consider only word level features, the projections are insensitive to sentence length. But with the character projections on, they are sensitive to the sentence length. This happens because the character projection space is smaller than the word space, as we see fewer variations for the sentence projections with n-grams and skip-grams compared to the word level.
In sentence level projection with word level features, the dimensionality of the sparse feature vector is high, hence applying projections on it leads to discriminative representations. More concretely, this means that projections with large feature spaces are able to capture the distinctions between any two observed pairs, and adding more words to the sentence is not going to change that. On the other hand, for short sentences with character level features, the number of possible unique character n-grams observed can differ from that observed in longer sentences.
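The collision statistics above reduce to average pairwise Hamming distances between binary projections. A small sketch of that computation is shown below; it runs on random placeholder projections rather than the enwik9 data, purely to make the measured quantities concrete.

import random

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

# Placeholder binary projections standing in for word projections of dim T*d = 1120.
dim, num_words = 1120, 100
projections = [[random.randint(0, 1) for _ in range(dim)] for _ in range(num_words)]

distances = [hamming(projections[i], projections[j])
             for i in range(num_words) for j in range(i + 1, num_words)]
mean = sum(distances) / len(distances)
std = (sum((x - mean) ** 2 for x in distances) / len(distances)) ** 0.5
print(round(mean, 1), round(std, 1))   # roughly 560 with a small std for random bits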
<<</Collision Study>>>
<<<Perturbation Study>>>
To further test the robustness of the projections, we conduct a perturbation study. A good projection should keep a perturbed word like baank close to its original form while still separating it from unrelated words like cats. That is, the average Hamming distance from the collision study should be greater than the Hamming distance between the projections of a word with and without perturbations.
<<<Character & Word Perturbations>>>
In this section, we analyze the Hamming distance between the projections of the sentences from the enwik9 dataset and the corresponding projections of the same sentences after applying character level perturbations. We experiment with three types of character level perturbation operations BIBREF11 and three types of word level perturbation operations.
<<</Character & Word Perturbations>>>
<<<Character Level Perturbation Operations>>>
insert(word, n) : We randomly choose n characters from the character vocabulary and insert them at random locations into the input word. We however retain the first and last characters of the word as is. Ex. transformation: $sample \rightarrow samnple$.
swap(word, n): We randomly swap the location of two characters in the word n times. As with the insert operation, we retain the first and last characters of the word as is and only apply the swap operation to the remaining characters. Ex. transformation: $sample \rightarrow sapmle$.
duplicate(word, n): We randomly duplicate a character in the word by n times. Ex. transformation: $sample \rightarrow saample$.
<<<Word Level Perturbation Operations>>>
drop(sentence, n): We randomly drop n words from the sentence. Ex. transformation: This is a big cat. $\rightarrow $ This is a cat.
duplicate(sentence, n): Similar to duplicate(word, n) above, we randomly duplicate a word in the sentence n times. Ex. transformation: This is a big cat. $\rightarrow $ This is a big big cat.
swap(sentence, n): Similar to swap(word, n), we randomly swap the location of two words in the sentence n times. Ex. transformation: This is a big cat. $\rightarrow $ This cat is big.
For both character and word level perturbations, we decide whether or not to perturb each word in a sentence with a fixed probability. For the character level perturbations, once a word is chosen for perturbation, we randomly pick one of the perturbation operations from {insert, swap, duplicate} and randomly pick the number of characters to transform $n \in \lbrace 1,\;3\rbrace $. For the word level perturbations, we randomly apply one of the operations from {drop, duplicate, swap}. We consider perturbation probabilities of $0.05$ and $0.1$ for our experiments.
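The character-level operations above translate directly into a few lines of Python. The sketch below implements insert, swap and duplicate for a single word (keeping the first and last characters fixed for insert and swap, as described); the random seed and example word are arbitrary.

import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def insert(word, n=1):
    chars = list(word)
    for _ in range(n):
        pos = random.randint(1, len(chars) - 1)            # keep first/last char fixed
        chars.insert(pos, random.choice(ALPHABET))
    return "".join(chars)

def swap(word, n=1):
    chars = list(word)
    for _ in range(n):
        i, j = random.sample(range(1, len(chars) - 1), 2)   # only interior characters
        chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def duplicate(word, n=1):
    chars = list(word)
    for _ in range(n):
        pos = random.randint(0, len(chars) - 1)
        chars.insert(pos, chars[pos])                       # repeat a random character
    return "".join(chars)

random.seed(0)
print(insert("sample"), swap("sample"), duplicate("sample"))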
<<</Word Level Perturbation Operations>>>
<<</Character Level Perturbation Operations>>>
<<<Discussion>>>
We show results on multiple perturbation studies. For instance, sentence has word and character level perturbations, while word has character only perturbation. We evaluate the impact of the word and character projections for sentence and word level projections on the enwik9 dataset. Table TABREF13 shows the character and word perturbation with sentence level projections. Table TABREF14 shows the character perturbation for word level projections.
We observe that the Hamming distances between the projections of the perturbed versions of the same words are significantly smaller than the average distance of the word projections measured in the collision study in Section SECREF3. This shows that the words are well separated in the projection space and could potentially be less susceptible to misspellings and omissions.
Based on the results in all Tables 1 to 4, we found a nice linear relationship between the Hamming distance, the projection dimension and the amount of perturbation. As can be seen in the results, the Hamming distance between the projections before and after perturbation is directly proportional to the product of the projection dimension and percentage of perturbation as follows: $\Delta _{\mathbb {P}_{m}} = K_{m} \cdot T \cdot d \cdot P_{perturb}, \; m \in \lbrace word, character\rbrace , \; K_{m} > 0$ where $\Delta _{\mathbb {P}_{m}}$ refers to the Hamming distance between the projections before and after perturbations and $m$ refers to the mode of projection - {word, character}. $T \cdot d$ refers to the projection space dimension and $P_{perturb}$ refers to the probability of perturbation. $K_{m} > 0$ is a proportionality constant which depends on the projection mode. We observe that $K_{word} > K_{char}$ from our experiments. Character mode projections are relatively more robust to perturbations; however, we would also want to include word level n-gram and skip-gram features to generate a holistic representation. This establishes a tradeoff between choosing word and character level features. Ideally, one would like to reserve some bits for word and some bits for character level features. We leave the design of the right bit division to future work.
<<</Discussion>>>
<<</Perturbation Study>>>
<<<Effect of Perturbation on Classification>>>
We evaluate LSH projections with text transformations to test whether the projections are robust to input perturbations by nature. We use the character level operations from Section SECREF4.
<<<Evaluation Setup>>>
For evaluation, we used the widely popular dialog act and intent prediction datasets. MRDA BIBREF12 is a dialog corpus of multi-party meetings with 6 classes, 78K training and 15K test data; ATIS BIBREF13 is intent prediction dataset for flight reservations with 21 classes, 4.4K training and 893 test examples; and SWDA BIBREF14, BIBREF15 is an open domain dialog corpus between two speakers with 42 classes, 193K training and 5K test examples. For fair comparison, we train LSTM baseline with sub-words and 240 vocabulary size on MRDA, ATIS and SWDA. We uniformly randomly initialized the input word embeddings. We also trained the on-device SGNN model BIBREF6. Then, we created test sets with varying levels of perturbation operations - $\lbrace 20\%,40\%,60\%\rbrace $.
<<</Evaluation Setup>>>
<<<Results>>>
Table TABREF15 shows the accuracy results of LSTM and on-device SGNN models. Overall, SGNN models are consistently more robust to perturbations across all three datasets and tasks. One of the reasons is that SGNN relies on word and character level n-gram features, while for LSTMs, the character perturbations result in sub-words being mapped to unknown embedding. This leads LSTM to learn to map inputs with many unknown words to the majority class. We observed the same when we perturbed $100\%$ of the words in the input.
As shown in Table TABREF18, the standard deviations of the accuracy with LSTMs are much higher compared to SGNN.
This further reinforces the fact that SGNNs are fundamentally more robust to both word misspellings and black-box attacks. In the future, we plan to benchmark SGNN with more aggressive and exploitative black-box attacks.
<<</Results>>>
<<</Effect of Perturbation on Classification>>>
<<<Conclusion>>>
In this work, we perform a detailed study analyzing why recent LSH-based projection neural networks are effective for language classification tasks. Through extensive analyses including perturbation studies and experiments on multiple tasks, we show that projection-based neural models are resistant to text transformations compared to widely-used approaches like LSTMs with embeddings.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground: LSH projections for text representations\nCollision Study\nPerturbation Study\nCharacter & Word Perturbations\nCharacter Level Perturbation Operations\nWord Level Perturbation Operations\nDiscussion\nEffect of Perturbation on Classification\nEvaluation Setup\nResults\nConclusion"
],
"type": "outline"
}
|
1909.06937
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding
<<<Abstract>>>
Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works. However, most existing models fail to fully utilize co-occurrence relations between slots and intents, which restricts their potential performance. To address this issue, in this paper we propose a novel Collaborative Memory Network (CM-Net) based on the well-designed block, named CM-block. The CM-block firstly captures slot-specific and intent-specific features from memories in a collaborative manner, and then uses these enriched features to enhance local context representations, based on which the sequential information flow leads to more specific (slot and intent) global utterance representations. Through stacking multiple CM-blocks, our CM-Net is able to alternately perform information exchange among specific memories, local contexts and the global utterance, and thus incrementally enriches each other. We evaluate the CM-Net on two standard benchmarks (ATIS and SNIPS) and a self-collected corpus (CAIS). Experimental results show that the CM-Net achieves the state-of-the-art results on the ATIS and SNIPS in most of criteria, and significantly outperforms the baseline models on the CAIS. Additionally, we make the CAIS dataset publicly available for the research community.
<<</Abstract>>>
<<<Introduction>>>
Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents for a given utterance, which are referred to as intent detection and slot filling, respectively. Past years have witnessed rapid developments in diverse deep learning models BIBREF0, BIBREF1 for SLU. To take full advantage of supervised signals of slots and intents, and share knowledge between them, most existing works apply joint models that are mainly based on CNNs BIBREF2, BIBREF3, RNNs BIBREF4, BIBREF5, and asynchronous bi-model BIBREF6. Generally, these joint models encode words convolutionally or sequentially, and then aggregate hidden states into an utterance-level representation for the intent prediction, without interactions between representations of slots and intents.
Intuitively, slots and intents from similar fields tend to occur simultaneously, which can be observed from Figure FIGREF2 and Table TABREF3. Therefore, it is beneficial to generate the representations of slots and intents with guidance from each other. Some works explore enhancing the slot filling task unidirectionally with guidance from intent representations via gating mechanisms BIBREF7, BIBREF8, while the predictions of intents lack guidance from slots. Moreover, the capsule network with dynamic routing algorithms BIBREF9 is proposed to perform interactions in both directions. However, there are still two limitations in this model. One is that the information flows from words to slots, slots to intents and intents to words in a pipeline manner, which is to some extent limited in capturing complicated correlations among words, slots and intents. The other is that the local context information, which has been shown to be highly useful for slot filling BIBREF10, is not explicitly modeled.
In this paper, we try to address these issues, and thus propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork, named CM-Net. The main idea is to directly capture semantic relationships among words, slots and intents, which is conducted simultaneously at each word position in a collaborative manner. Specifically, we alternately perform information exchange among the task-specific features referred from memories, local context representations and global sequential information via the well-designed block, named CM-block, which consists of three computational components:
Deliberate Attention: Obtaining slot-specific and intent-specific representations from memories in a collaborative manner.
Local Calculation: Updating local context representations with the guidances of the referred slot and intent representations in the previous Deliberate Attention.
Global Recurrence: Generating specific (slot and intent) global sequential representations based on local context representations from the previous Local Calculation.
The above components in each CM-block are conducted consecutively, and are responsible for encoding information from different perspectives. Finally, multiple CM-blocks are stacked together to construct our CM-Net. We firstly conduct experiments on two popular benchmarks, SNIPS BIBREF11 and ATIS BIBREF12, BIBREF13. Experimental results show that the CM-Net achieves state-of-the-art results in 3 of 4 criteria (e.g., intent detection accuracy on ATIS) on both benchmarks. Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of our CM-Net.
Our main contributions are as follows:
We propose a novel CM-Net for SLU, which explicitly captures semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the specific features, local context representations and global sequential representations through stacked CM-blocks.
Our CM-Net achieves state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most criteria.
We contribute a new corpus CAIS with manual annotations of slot tags and intent labels to the research community.
<<</Introduction>>>
<<<Background>>>
In principle, the slot filling is treated as a sequence labeling task, and the intent detection is a classification problem. Formally, given an utterance $X = \lbrace x_1, x_2, \cdots , x_N \rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \lbrace y_1, y_2, \cdots , y_N \rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\theta } : X \rightarrow Y $ from input words to slot tags. For the intent detection, it is designed to predict the intent label $\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$.
Typically, the input utterance is firstly encoded into a sequence of distributed representations $\mathbf {X} = \lbrace \mathbf {x}_1, \mathbf {x}_2, \cdots , \mathbf {x}_N\rbrace $ by character-aware and pre-trained word embeddings. Afterwards, the following bidirectional RNNs are applied to encode the embeddings $\mathbf {X}$ into context-sensitive representations $\mathbf {H} = \lbrace \mathbf {h}_1, \mathbf {h}_2, \cdots , \mathbf {h}_N\rbrace $. An external CRF BIBREF14 layer is widely utilized to calculate conditional probabilities of slot tags:
Here $\mathbf {Y}_x$ is the set of all possible sequences of tags, and $F(\cdot )$ is the score function calculated by:
where $\mathbf {A}$ is the transition matrix that $\mathbf {A}_{i,j}$ indicates the score of a transition from $i$ to $j$, and $\mathbf {P}$ is the score matrix output by RNNs. $P_{i,j}$ indicates the score of the $j^{th}$ tag of the $i^{th}$ word in a sentence BIBREF15.
When testing, the Viterbi algorithm BIBREF16 is used to search the sequence of slot tags with maximum score:
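The score and decoding equations referenced above are not reproduced in this excerpt. As a rough, hedged illustration of the decoding step only, Viterbi search over the emission scores $\mathbf{P}$ and transition scores $\mathbf{A}$ can be sketched as follows (array shapes and variable names are assumptions, not the authors' code):

```python
import numpy as np

def viterbi_decode(P, A):
    """Minimal sketch of Viterbi decoding for a linear-chain CRF.
    P: (N, K) emission scores for N words and K tags (the RNN output).
    A: (K, K) transition scores, A[i, j] = score of moving from tag i to tag j.
    Returns the highest-scoring sequence of tag indices."""
    N, K = P.shape
    score = P[0].copy()                  # best score ending in each tag at position 0
    back = np.zeros((N, K), dtype=int)   # back-pointers
    for t in range(1, N):
        trans = score[:, None] + A + P[t][None, :]  # score[i] + A[i, j] + P[t, j]
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(N - 1, 0, -1):
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]
```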
As to the prediction of intent, the word-level hidden states $\mathbf {H}$ are firstly summarized into an utterance-level representation $\mathbf {v}^{int}$ via mean pooling (or max pooling or self-attention, etc.):
The most probable intent label $\hat{y}^{int}$ is predicted by softmax normalization over the intent label set:
Generally, both tasks are trained jointly to minimize the sum of cross entropy from each individual task. Formally, the loss function of the join model is computed as follows:
where $y^{int}_i$ and $y^{slot}_{i,j}$ are golden labels, and $\lambda $ is hyperparameter, and $|S^{int}|$ is the size of intent label set, and similarly for $|S^{slot}|$ .
<<</Background>>>
<<<CM-Net>>>
<<<Overview>>>
In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure FIGREF16, the input utterance is firstly encoded with the Embedding Layer, and then is transformed by multiple CM-blocks with the assistance of slot and intent memories, and finally make predictions in the Inference Layer.
<<</Overview>>>
<<<Embedding Layers>>>
<<<Pre-trained Word Embedding>>>
Pre-trained word embeddings have become a de-facto standard of neural network architectures for various NLP tasks. We adopt the cased, 300d Glove BIBREF17 to initialize word embeddings, and keep them frozen.
<<</Pre-trained Word Embedding>>>
<<<Character-aware Word Embedding>>>
It has been demonstrated that character level information (e.g. capitalization and prefix) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings.
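As a hedged illustration of this component (the hidden sizes, character vocabulary and padding are assumptions, not the authors' settings), a one-layer character CNN with max pooling can be sketched in PyTorch as:

```python
import torch
import torch.nn as nn

class CharCNNEmbedding(nn.Module):
    """Sketch of a character-aware word embedding: one Conv1d layer followed by
    max pooling over the characters of each word."""
    def __init__(self, char_vocab=128, char_dim=30, out_dim=100, kernel=3):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, out_dim, kernel_size=kernel, padding=1)

    def forward(self, char_ids):
        # char_ids: (batch, num_words, max_chars) integer character ids
        b, w, c = char_ids.size()
        x = self.char_emb(char_ids.view(b * w, c))   # (b*w, max_chars, char_dim)
        x = self.conv(x.transpose(1, 2))             # (b*w, out_dim, max_chars)
        x = torch.max(x, dim=-1).values              # max pool over characters
        return x.view(b, w, -1)                      # (batch, num_words, out_dim)
```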
<<</Character-aware Word Embedding>>>
<<</Embedding Layers>>>
<<<CM-block>>>
The CM-block is the core module of our CM-Net, which is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence respectively.
<<<Deliberate Attention>>>
To fully model semantic relations between slots and intents, we build the slot memory $\mathbf {M^{slot}} $ and intent memory $\mathbf {M^{int}}$, and further devise a collaborative retrieval approach. For the slot memory, it keeps $|S^{slot}|$ slot cells which are randomly initialized and updated as model parameters. Similarly for the intent memory. At each word position, we take the hidden state $\mathbf {h}_t$ as query, and obtain slot feature $\mathbf {h}_t^{slot}$ and intent feature $\mathbf {h}_t^{int}$ from both memories by the deliberate attention mechanism, which will be illustrated in the following.
Specifically for the slot feature $\mathbf {h}_t^{slot}$, we firstly get a rough intent representation $\widetilde{\mathbf {h}}_t^{int}$ by the word-aware attention with hidden state $\mathbf {h}_t$ over the intent memory $\mathbf {M^{int}}$, and then obtain the final slot feature $\mathbf {h}_t^{slot}$ by the intent-aware attention over the slot memory $\mathbf {M^{slot}}$ with the intent-enhanced representation $[\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{int}]$. Formally, the above-mentioned procedures are computed as follows:
where $ATT(\cdot )$ is the query function calculated by the weighted sum of all cells $\mathbf {m}_i^{x}$ in memory $\mathbf {M}^{x}$ ($x \in \lbrace slot, int\rbrace $) :
Here $\mathbf {u}$ and $\mathbf {W}$ are model parameters. We name the above calculations of two-round attentions (Equation DISPLAY_FORM23) as “deliberate attention".
The intent representation $\mathbf {h}_t^{int}$ is computed by the deliberate attention as well:
These two deliberate attentions are conducted simultaneously at each word position in such a collaborative manner, which guarantees adequate knowledge diffusion between slots and intents. The retrieved slot features $\mathbf {H}_t^{slot}$ and intent features $\mathbf {H}_t^{int}$ are utilized to provide guidance for the next local calculation layer.
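Since the exact attention equations are not reproduced in this excerpt, the following is only a hedged sketch of the two-round retrieval for the slot feature; the additive scoring form inside `attend` and all parameter shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def attend(query, memory, W, u):
    """ATT(.): weighted sum over memory cells (assumed additive scoring with
    parameters W and u). query: (d_q,), memory: (num_cells, d_m)."""
    expanded = query.expand(memory.size(0), -1)
    scores = torch.tanh(torch.cat([memory, expanded], dim=-1) @ W) @ u  # (num_cells,)
    alpha = F.softmax(scores, dim=0)
    return alpha @ memory  # (d_m,)

def deliberate_attention_slot(h_t, M_slot, M_int, params):
    """Two rounds: word-aware attention over the intent memory, then
    intent-aware attention over the slot memory with [h_t; rough intent]."""
    h_int_rough = attend(h_t, M_int, params["W1"], params["u1"])
    query = torch.cat([h_t, h_int_rough], dim=-1)
    return attend(query, M_slot, params["W2"], params["u2"])
```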
<<</Deliberate Attention>>>
<<<Local Calculation>>>
Local context information is highly useful for sequence modeling BIBREF19, BIBREF20. BIBREF21 SLSTM2018 propose the S-LSTM to encode both local and sentence-level information simultaneously, and it has been shown to be more powerful for text representation when compared with conventional BiLSTMs. We extend the S-LSTM with slot-specific features $\mathbf {H}_t^{slot}$ and intent-specific features $\mathbf {H}_t^{int}$ retrieved from memories.
Specifically, at each input position $t$, we take the local window context $\mathbf {\xi }_t$, word embedding $\mathbf {x}_t$, slot feature $\mathbf {h}_t^{slot}$ and intent feature $\mathbf {h}_t^{int}$ as inputs to conduct combinatorial calculation simultaneously. Formally, in the $l^{th}$ layer, the hidden state $\mathbf {h_t}$ is updated as follows:
where $\mathbf { \xi } _ { t } ^ { l }$ is the concatenation of hidden states in a local window, and $\mathbf {i}_t^l$, $\mathbf {f}_t^l$, $\mathbf {o}_t^l$, $\mathbf {l}_t^l$ and $\mathbf {r}_t^l$ are gates to control information flows, and $\mathbf {W}_n^x$ $(x \in \lbrace i, o, f, l, r, u\rbrace , n \in \lbrace 1, 2, 3, 4\rbrace )$ are model parameters. More details about the state transition can be found in BIBREF21. In the first CM-block, the hidden state $\mathbf {h}_t$ is initialized with the corresponding word embedding. In other CM-blocks, the $\mathbf {h}_t$ is inherited from the output of the adjacent lower CM-block.
At each word position of above procedures, the hidden state is updated with abundant information from different perspectives, namely word embeddings, local contexts, slots and intents representations. The local calculation layer in each CM-block has been shown highly useful for both tasks, and especially for the slot filling task, which will be validated in our experiments in Section SECREF46.
<<</Local Calculation>>>
<<<Global Recurrence>>>
Bi-directional RNNs, especially BiLSTMs BIBREF22, are able to encode both past and future information of a sentence, and have become a dominant method in various sequence modeling tasks BIBREF23, BIBREF24. The inherent nature of BiLSTMs makes them able to supplement global sequential information, which is insufficiently modeled in the previous local calculation layer. Thus we apply an additional BiLSTM layer upon the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we can obtain more specific global sequential representations. Formally, it takes the hidden state $\mathbf {h}_t^{l-1}$ inherited from the local calculation layer as input, and conducts recurrent steps as follows:
The output “states" of the BiLSTMs are taken as “states" input of the local calculation in next CM-block. The global sequential information encoded by the BiLSTMs is shown necessary and effective for both tasks in our experiments in Section SECREF46.
<<</Global Recurrence>>>
<<</CM-block>>>
<<<Inference Layer>>>
After multiple rounds of interactions among local context representations, global sequential information, slot and intent features, we conduct predictions upon the final CM-block. For the predictions of slots, we take the hidden states $\mathbf {H}$ along with the retrieved slot $\mathbf {H}^{slot}$ representations (both are from the final CM-block) as input features, and then conduct predictions of slots similarly with the Equation (DISPLAY_FORM12) in Section SECREF2:
For the prediction of intent label, we firstly aggregate the hidden state $\mathbf {h}_t$ and the retrieved intent representation $\mathbf {h}_t^{int}$ at each word position (from the final CM-block as well) via mean pooling:
and then take the summarized vector $\mathbf {v}^{int}$ as input feature to conduct prediction of intent consistently with the Equation (DISPLAY_FORM14) in Section SECREF2.
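A hedged sketch of the intent-side inference (the linear projection `W_int` and the tensor shapes are assumptions, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def predict_intent(H, H_int, W_int):
    """H, H_int: (seq_len, d) hidden states and retrieved intent features from
    the final CM-block; W_int: (2 * d, num_intents) projection to the label set."""
    v = torch.cat([H, H_int], dim=-1).mean(dim=0)   # utterance-level vector v^int
    return F.softmax(v @ W_int, dim=-1)             # distribution over intent labels
```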
<<</Inference Layer>>>
<<</CM-Net>>>
<<<Experiments>>>
<<<Datasets and Metrics>>>
We evaluate our proposed CM-Net on three real-word datasets, and statistics are listed in Table TABREF32.
<<<ATIS>>>
The Airline Travel Information Systems (ATIS) corpus BIBREF12 is the most widely used benchmark for SLU research. Note that there are extra named entity features in the ATIS, which almost determine the slot tags. These hand-crafted features are not generally available in open domains BIBREF25, BIBREF29; therefore we train our model purely on the training set without additional hand-crafted features.
<<</ATIS>>>
<<<SNIPS>>>
The SNIPS Natural Language Understanding benchmark BIBREF11 is collected in a crowdsourced fashion by Snips. The intents of this dataset are more balanced when compared with the ATIS. We split off another 700 utterances as the validation set following previous works BIBREF7, BIBREF9.
<<</SNIPS>>>
<<<CAIS>>>
We collect utterances from the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are skewed toward the PlayMusic intent. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field.
<<</CAIS>>>
<<<Metrics>>>
Slot filling is typically treated as a sequence labeling problem, and thus we use the conlleval script to compute the token-level $F_1$ metric. Intent detection is evaluated with classification accuracy. Notably, several utterances in the ATIS are tagged with more than one label. Following previous works BIBREF13, BIBREF25, we count an utterance as correctly classified if any ground truth label is predicted.
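The multi-label accuracy convention can be made concrete with a small sketch (the function and variable names are ours, not the authors'):

```python
def intent_accuracy(predictions, gold_label_sets):
    """An utterance counts as correct if the predicted intent appears in its
    (possibly multi-label) gold set, following the ATIS convention above."""
    correct = sum(1 for pred, gold in zip(predictions, gold_label_sets) if pred in gold)
    return correct / len(predictions)
```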
<<</Metrics>>>
<<</Datasets and Metrics>>>
<<<Implementation Details>>>
All trainable parameters in our model are initialized by the Xavier method described in BIBREF31. We apply dropout BIBREF32 to the embedding layer and hidden states with a rate of 0.5. All models are optimized by the Adam optimizer BIBREF33 with gradient clipping of 3 BIBREF34. The initial learning rate $\alpha $ is set to 0.001, and decreases as training progresses. We monitor the training process on the validation set and report the final result on the test set. A one-layer CNN with a filter of size 3 and max pooling is utilized to generate 100d word embeddings. The cased 300d Glove is adopted to initialize word embeddings, and kept fixed during training. In auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed as well. We share parameters of both memories with the parameter matrices in the corresponding softmax layers, which can be taken as introducing supervised signals into the memories to some extent. We conduct hyper-parameter tuning for the layer size (finally set to 3) and the loss weight $\lambda $ (finally set to 0.5), and empirically set other parameters to the values listed in the supplementary material.
<<</Implementation Details>>>
<<<Main Results>>>
Main results of our CM-Net on the SNIPS and ATIS are shown in Table TABREF33. Our CM-Net achieves the state-of-the-art results on both datasets in terms of slot filling $F_1$ score and intent detection accuracy, except for the $F_1$ score on the ATIS. We conjecture that the named entity feature in the ATIS has a great impact on the slot filling result as illustrated in Section SECREF34. Since the SNIPS is collected from multiple domains with more balanced labels when compared with the ATIS, the slot filling $F_1$ score on the SNIPS is able to demonstrate the superiority of our CM-Net.
It is noteworthy that the CM-Net achieves comparable results when compared with models that exploit additional language models BIBREF27, BIBREF28. We conduct auxiliary experiments by leveraging the well-known BERT BIBREF35 as an external resource for a relatively fair comparison with those models, and report details in Section SECREF48.
<<</Main Results>>>
<<</Experiments>>>
<<<Analysis>>>
Since the SNIPS corpus is collected from multiple domains and its label distributions are more balanced when compared with the ATIS, we choose the SNIPS to elucidate properties of our CM-Net and conduct several additional experiments.
<<<Whether Memories Promote Each Other?>>>
In the CM-Net, the deliberate attention mechanism is proposed in a collaborative manner to perform information exchange between slots and intents. We conduct experiments to verify whether such kind of knowledge diffusion in both memories can promote each other. More specifically, we remove one unidirectional diffusion (e.g. from slot to intent) or both in each experimental setup. The results are illustrated in Figure FIGREF43.
We can observe obvious drops on both tasks when both directional knowledge diffusions are removed (CM-Net vs. neither). For the slot filling task (left part in Figure FIGREF43), the $F_1$ scores decrease slightly when the knowledge from slot to intent is blocked (CM-Net vs. “no slot2int"), and a more evident drop occurs when the knowledge from intent to slot is blocked (CM-Net vs. “no int2slot"). Similar observations can be found for the intent detection task (right part in Figure FIGREF43).
In conclusion, the bidirectional knowledge diffusion between slots and intents is necessary and effective for promoting each other.
<<</Whether Memories Promote Each Other?>>>
<<<Ablation Experiments>>>
We conduct ablation experiments to investigate the impacts of various components in our CM-Net. In particular, we remove one component among slot memory, intent memory, local calculation and global recurrence. Results of different combinations are presented in Table TABREF44.
Once the slot memory and its corresponding interactions with other components are removed, scores on both tasks decrease to some extent, and a more obvious decline occurs for the slot filling (row 1 vs. row 0), which is consistent with the conclusion of Section SECREF45. Similar observations can be found for the intent memory (row 2). The local calculation layer is designed to capture better local context representations, which has an evident impact on the slot filling and a slighter effect on the intent detection (row 3 vs. row 0). Opposite observations occur in terms of global recurrence, which is supposed to model global sequential information and thus has a larger effect on the intent detection (row 4 vs. row 0).
<<</Ablation Experiments>>>
<<<Effects of Pre-trained Language Models>>>
Recently, there has been a growing body of works exploring neural language models that trained on massive corpora to learn contextual representations (e.g. BERT BERT and EMLo EMLo). Inspired by the effectiveness of language model embeddings, we conduct experiments by leveraging the BERT as an additional feature. The results emerged in Table TABREF47 show that we establish new state-of-the-art results on both tasks of the SNIPS.
<<</Effects of Pre-trained Language Models>>>
<<<Evaluation on the CAIS>>>
We conduct experiments on our self-collected CAIS to evaluate the generalizability in a different language. We apply two baseline models for comparison: one is the popular BiLSTMs + CRF architecture BIBREF36 for the sequence labeling task, and the other is the more powerful sentence-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages.
<<</Evaluation on the CAIS>>>
<<</Analysis>>>
<<<Related Work>>>
<<<Memory Network>>>
The memory network is a general machine learning framework introduced by BIBREF37 memory2014, which has been shown effective in question answering BIBREF37, BIBREF38, machine translation BIBREF39, BIBREF40, aspect level sentiment classification BIBREF41, etc. For spoken language understanding, BIBREF42 memoryslu2016 introduce memory mechanisms to encode historical utterances. In this paper, we propose two memories to explicitly capture the semantic correlations between slots and the intent in a given utterance, and devise a novel collaborative retrieval approach.
<<</Memory Network>>>
<<<Interactions between slots and intents>>>
Considering the semantic proximity between slots and intents, some works propose to enhance the slot filling task unidirectionally with the guidance of intent representations via gating mechanisms BIBREF7, BIBREF8. Intuitively, the slot representations are also instructive to the intent detection task and thus bidirectional interactions between slots and intents are beneficial for each other. BIBREF9 capsule2018 propose a hierarchical capsule network to perform interactions from words to slots, slots to intents and intents to words in a pipeline manner, which is relatively limited in capturing the complicated correlations among them. In our CM-Net, information exchanges are performed simultaneously with knowledge diffusions in both directions. The experiments demonstrate the superiority of our CM-Net in capturing the semantic correlations between slots and intents.
<<</Interactions between slots and intents>>>
<<<Sentence-State LSTM>>>
BIBREF21 BIBREF21 propose a novel graph RNN named S-LSTM, which models the sentence-level state and word-level states simultaneously. Inspired by the new perspective of state transition in the S-LSTM, we further extend it with task-specific (i.e., slot and intent) representations via our collaborative memories. In addition, the global information in the S-LSTM is modeled by aggregating local features with gating mechanisms, which may lose sight of the sequential information of the whole sentence. Therefore, we apply external BiLSTMs to supply global sequential features, which are shown to be highly necessary for both tasks in our experiments.
<<</Sentence-State LSTM>>>
<<</Related Work>>>
<<<Conclusion>>>
We propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork (CM-Net) for jointly modeling slot filling and intent detection. The CM-Net is able to explicitly capture the semantic correlations among words, slots and intents in a collaborative manner, and incrementally enrich the information flows with local context and global sequential information. Experiments on two standard benchmarks and our CAIS corpus demonstrate the effectiveness and generalizability of our proposed CM-Net. In addition, we contribute the new corpus (CAIS) to the research community.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground\nCM-Net\nOverview\nEmbedding Layers\nPre-trained Word Embedding\nCharacter-aware Word Embedding\nCM-block\nDeliberate Attention\nLocal Calculation\nGlobal Recurrence\nInference Layer\nExperiments\nDatasets and Metrics\nATIS\nSNIPS\nCAIS\nMetrics\nImplementation Details\nMain Results\nAnalysis\nWhether Memories Promote Each Other?\nAblation Experiments\nEffects of Pre-trained Language Models\nEvaluation on the CAIS\nRelated Work\nMemory Network\nInteractions between slots and intents\nSentence-State LSTM\nConclusion"
],
"type": "outline"
}
|
1912.10806
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News
<<<Abstract>>>
Stock price prediction is important for value investments in the stock market. In particular, short-term prediction that exploits financial news articles has shown promise in recent years. In this paper, we propose a novel deep neural network DP-LSTM for stock price prediction, which incorporates the news articles as hidden information and integrates different news sources through the differential privacy mechanism. First, based on the autoregressive moving average model (ARMA), a sentiment-ARMA is formulated by taking into consideration the information of financial news articles in the model. Then, an LSTM-based deep neural network is designed, which consists of three components: LSTM, VADER model and differential privacy (DP) mechanism. The proposed DP-LSTM scheme can reduce prediction errors and increase the robustness. Extensive experiments on S&P 500 stocks show that (i) the proposed DP-LSTM achieves 0.32% improvement in mean MPA of prediction result, and (ii) for the prediction of the market index S&P 500, we achieve up to 65.79% improvement in MSE.
<<</Abstract>>>
<<<Introduction>>>
Stock prediction is crucial for quantitative analysts and investment companies. Stock trends, however, are affected by many factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use this variable information. In particular, in the banking industry and financial services, armies of analysts are dedicated to poring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information is extracted from the large volume of text and quantitative information that is involved in the analysis.
Investors may judge on the basis of technical analysis, such as charts of a company and market indices, and on textual information such as news blogs or newspapers. It is, however, difficult for investors to analyze and predict market trends according to all of this information [22]. Many artificial intelligence approaches have been investigated to automatically predict those trends [3], for instance, investment simulation analysis with artificial markets, or stock trend analysis with a lexical-cohesion-based metric of financial news' sentiment polarity. Quantitative analysis today is heavily dependent on data. However, the majority of such data is unstructured text that comes from sources like financial news articles. The challenge is not only the amount of data involved, but also the kind of language used in it to express sentiments, including emoticons. Sifting through huge volumes of this text data is difficult as well as time-consuming, and it requires a great deal of resources and expertise to analyze all of it [4].
To solve the above problem, in this paper we use sentiment analysis to extract information from textual information. Sentiment analysis is the automated process of understanding an opinion about a given subject from news articles [5]. The analyzed data quantifies reactions or sentiments of the general public toward people, ideas or certain products and reveal the information's contextual polarity. Sentiment analysis allows us to understand if newspapers are talking positively or negatively about the financial market, get key insights about the stock's future trend market.
We use valence aware dictionary and sentiment reasoner (VADER) to extract sentiment scores. VADER is a lexicon and rule-based sentiment analysis tool attuned to sentiments that are expressed in social media specifically [6]. VADER has been found to be quite successful when dealing with NY Times editorials and social media texts. This is because VADER not only tells about the negativity score and positively but also tells us about how positive or negative a sentiment is.
However, news reports are not all objective. We may increase bias because of some non-objective reports if we rely fully on the information extracted from the news for prediction. Therefore, in order to enhance the prediction model's robustness, we adopt the differential privacy (DP) method. DP is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. DP can be achieved if we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or Gaussian distribution, producing a result that is not quite exact but masks the contents of any given row.
In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is "roughly as likely" to occur independent of whether any individual opts in to, or opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and many data-mining tasks can be carried out in a differentially private manner, frequently with very accurate results [21]. We propose a DP-LSTM neural network, which increases prediction accuracy and model robustness at the same time.
The remainder of the paper is organized as follows. In Section 2, we introduce stock price model, the sentiment analysis and differential privacy method. In Section 3, we develop the different privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper.
<<</Introduction>>>
<<<Problem Statement>>>
In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. At last, we introduce the differential privacy framework and the loss function.
<<<ARMA Model>>>
The ARMA model is one of the most widely used linear models in time series prediction [17], where the future value is assumed to be a linear combination of past errors and past values. ARMA is used to set up the stock mid-term prediction problem. Let ${X}_t^\text{A}$ be the variable based on ARMA at time $t$; then we have
where $X_{t-i}$ denotes the past value at time $t-i$; $\epsilon _{t}$ denotes the random error at time $t$; $\phi _i$ and $\psi _j$ are the coefficients; $\mu $ is a constant; $p$ and $q$ are integers that are often referred to as autoregressive and moving average polynomials, respectively.
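The equation itself is omitted in this excerpt; a standard ARMA($p$, $q$) form consistent with the symbols defined above (a reconstruction for reference, not a verbatim quote of the paper) is:

```latex
X_t^{\text{A}} = \mu + \sum_{i=1}^{p} \phi_i X_{t-i} + \sum_{j=1}^{q} \psi_j \epsilon_{t-j} + \epsilon_t
```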
<<</ARMA Model>>>
<<<Sentiment Analysis>>>
Another variable highly related to stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on the computational treatment of subjectivity in text [19]-[20].
Figure FIGREF3 shows an example of the sentiment analysis results obtained from financial news titles using VADER. VADER relies on a sentiment lexicon whose entries are generally labelled according to their semantic orientation as either negative or positive. VADER has been found to be quite successful when dealing with news reviews. It is fully open-sourced under the MIT License. The results of VADER are represented as sentiment scores: the positive, negative and neutral scores represent the proportion of text that falls in these categories, so these three scores add up to 1. In addition, the Compound score is a metric that calculates the sum of all the lexicon ratings, normalized between -1 (most extreme negative) and +1 (most extreme positive). Figure FIGREF5 shows the positive and negative word clouds, which give an intuitive view of the word counts in the news titles.
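A minimal sketch of obtaining these scores with NLTK's VADER implementation follows; the example headline is hypothetical, and `nltk.download('vader_lexicon')` may be required on first use:

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("Stocks rally as tech earnings beat expectations")
# 'neg', 'neu' and 'pos' sum to 1; 'compound' is normalized to [-1, 1].
print(scores)
```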
<<</Sentiment Analysis>>>
<<<Sentiment-ARMA Model and Loss Function>>>
To take the sentiment analysis results of the financial news into account, we introduce the sentiment-ARMA model as follows
where $\alpha $ and $\lambda $ are weighting factors; $c$ is a constant; and $f_2(\cdot )$ is similar to $f_1(\cdot )$ in the ARMA model (DISPLAY_FORM2) and is used to describe the prediction problem.
In this paper, the LSTM neural network is used to predict the stock price, the input data is the previous stock price and the sentiment analysis results. Hence, the sentiment based LSTM neural network (named sentiment-LSTM) is aimed to minimize the following loss function:
where $T$ denotes the number of prediction time slots, i.e., $t = 1,...,p$ are the observations (training input data), $t = p+1,...,p+T$ are the predicts (training output data); and $\hat{X}_t$ is given in (DISPLAY_FORM7).
<<</Sentiment-ARMA Model and Loss Function>>>
<<<Overview of LSTM>>>
Denote $\mathcal {X}_t^{\text{train}} = \lbrace X_{t-i},S_{t-i}\rbrace _{i=1}^p$ as the training input data. Figure FIGREF10 shows the structure of the LSTM network, which comprises one or more hidden layers, an output layer and an input layer [16]. The main advantage of LSTM networks is that the hidden layer comprises memory cells. Each memory cell has a core self-connected recurrent linear unit called the “Constant Error Carousel (CEC)” [13], which provides short-term memory storage and has three gates:
Input gate, which controls the information from a new input to the memory cell, is given by
where $h_{t-1}$ is the hidden state at the time step $t-1$; $i_t$ is the output of the input gate layer at the time step $t$; $\hat{c}_t$ is the candidate value to be added to the output at the time step $t$; $b_i$ and $b_c$ are biases of the input gate layer and the candidate value computation, respectively; $W_i$ and $W_c$ are weights of the input gate and the candidate value computation, respectively; and $\sigma (x) = 1/(1+e^{-x})$ is the pointwise nonlinear activation function.
Forget gate, which controls the limit up to which a value is saved in the memory, is given by
where $f_t$ is the forget state at the time step $t$, $W_f$ is the weight of the forget gate; and $b_f$ is the bias of the forget gate.
Output gate, which controls the information output from the memory cell, is given by
where new cell states $c_t$ are calculated based on the results of the previous two steps; $o_t$ is the output at the time step $t$; $W_o$ is the weight of the output gate; and $b_o$ is the bias of the output gate [14].
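Since the gate equations themselves are not reproduced in this excerpt, the following is a hedged sketch of one step of a standard LSTM cell consistent with the gates described above (the packing of weights and biases into dictionaries is our own assumption):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell. W and b hold the per-gate weights
    (W['i'], W['f'], W['o'], W['c']) and biases; shapes are assumptions."""
    z = np.concatenate([h_prev, x_t])
    i_t = sigmoid(W["i"] @ z + b["i"])      # input gate
    f_t = sigmoid(W["f"] @ z + b["f"])      # forget gate
    o_t = sigmoid(W["o"] @ z + b["o"])      # output gate
    c_hat = np.tanh(W["c"] @ z + b["c"])    # candidate cell value
    c_t = f_t * c_prev + i_t * c_hat        # new cell state
    h_t = o_t * np.tanh(c_t)                # new hidden state
    return h_t, c_t
```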
<<</Overview of LSTM>>>
<<<Definition of Differential Privacy>>>
Differential privacy is one of the most popular definitions of privacy today: a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to any change of a single sample, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$. For example, if $\mathcal {N}$ is a dataset of news articles, then the function that outputs the compound scores of articles in $\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7].
Although differential privacy was originally developed to facilitate secure analysis over sensitive data, it can also enhance the robustness of the data. Note that financial data, especially news data and stock data, is unstable and noisy; with more robust data, prediction accuracy will be improved. Since we predict stock prices by fusing news coming from different sources, which might include fake news, involving differential privacy in training to improve robustness to unreliable financial news is meaningful.
<<</Definition of Differential Privacy>>>
<<</Problem Statement>>>
<<<Training DP-LSTM Neural Network>>>
It is known to be risky to predict stocks by considering news factors, because news cannot be guaranteed to be fully objective, and extreme news can often have a big impact on prediction models. To solve this problem, we incorporate the idea of differential privacy into training. In this section, our DP-LSTM deep neural network training strategy is presented. The input data consists of three components: stock price, sentiment analysis compound score and noise.
<<<Data Preprocessing and Normalization>>>
<<<Data Preprocessing>>>
The data for this project comes in two parts. The first part is the historical S&P 500 component stocks, which are downloaded from Yahoo Finance; we use the data over the period from 12/07/2017 to 06/01/2018. The second part is news articles from the financial domain, collected over the same time period as the stock data. Since our paper studies the relationship between the sentiment of news articles and stock prices, only news articles from the financial domain are collected. The data is mainly taken from Webhose archived data, which consists of 306242 news articles in JSON format, dating from December 2017 up to the end of June 2018. The first 85% of the dataset is used as the training data and the remaining 15% is used as the testing data. The news publishers for this data are CNBC.com, Reuters.com, WSJ.com and Fortune.com. The Wall Street Journal is one of the largest newspapers in the United States, whose coverage of breaking news and current headlines from the US and around the world includes top stories, photos, videos, detailed analysis and in-depth thoughts; CNBC primarily carries business day coverage of U.S. and international financial markets, along with coverage following the end of the business day and on non-trading days; Fortune is an American multinational business magazine; Reuters is an international news organization. We preprocess the raw article body and use the NLTK sentiment package Valence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores.
The stocks with missing data are deleted, and the dataset we use eventually contains 451 stocks and 4 news sources (CNBC.com, Reuters.com, WSJ.com, Fortune.com). Each stock records the adjusted close price and news compound scores of 121 trading days.
A rolling window of size 10 is used to separate the data; that is, we predict the stock price of the next trading day based on historical data from the previous 10 days, hence resulting in a point-by-point prediction [15]. In particular, the training window is initialized with all real training data. Then we shift the window and append the next real point to the end of the training window to predict the next point, and so forth. According to the length of the window, the training data is divided into 92 sets of training input data (each of length 10) and training output data (each of length 1). The testing data is divided into input and output data of 9 windows (see Figure FIGREF20).
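A minimal sketch of this rolling-window split (the function and variable names are ours):

```python
import numpy as np

def rolling_windows(series, window=10):
    """Each input is the previous `window` days and the target is the next day."""
    X, y = [], []
    for start in range(len(series) - window):
        X.append(series[start:start + window])
        y.append(series[start + window])
    return np.array(X), np.array(y)
```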
<<</Data Preprocessing>>>
<<<Normalization>>>
To detect stock price patterns, it is necessary to normalize the stock price data. Since the LSTM neural network requires the stock patterns during training, we use the “min-max” normalization method to transform the dataset, which preserves the pattern of the data [11], as follows:
where $X_{t}^{n}$ denotes the data after normalization. Accordingly, de-normalization is required at the end of the prediction process to get the original price, which is given by
where $\hat{X}_{t}^{n}$ denotes the predicted data and $\hat{X}_{t}$ denotes the predicted data after de-normalization.
Note that the compound score is not normalized, since it ranges from -1 to 1, which means all compound score data already share the same scale and do not require normalization.
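A hedged sketch of min-max normalization and its inverse (a standard form; the paper's exact equations are not reproduced in this excerpt):

```python
import numpy as np

def minmax_normalize(x):
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)

def minmax_denormalize(x_norm, stats):
    lo, hi = stats
    return np.asarray(x_norm) * (hi - lo) + lo
```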
<<</Normalization>>>
<<</Data Preprocessing and Normalization>>>
<<<Adding Noise>>>
We consider differential privacy as a method to improve the robustness of the LSTM predictions [8]. We explore the interplay between machine learning and differential privacy, and find that differential privacy has several properties that make it particularly useful in applications such as robustness to extracted textual information [9]. Robustness to textual information means that accuracy is guaranteed to be unaffected by certain false information [10].
The input data of the model has 5 dimensions, which are the stock price and four compound scores as $(X^t, S_1^t, S_2^t, S_3^t, S_4^t), t=1,...,T$, where $X^t$ represents the stock price and $S_i^t,~i=1,...,4$ respectively denote the mean compound score calculated from WSJ, CNBC, Fortune and Reuters. According to the process of differential privacy, we add Gaussian noise with different variances to the news according to the variance of the news, i.e., the news compound score after adding noise is given by
where $\text{var}(\cdot )$ is the variance operator, $\lambda $ is a weighting factor and $\mathcal {N}(\cdot )$ denotes the random Gaussian process with zero mean and variance $\lambda \text{var}(S_i)$.
We use Python to crawl the news from the four sources for each trading day, perform sentiment analysis on the news titles, and obtain the compound scores. After splitting the data into training and test sets, we separately add noise to each of the four news sources of the training set; then, for the $n$-th stock, four sets of noise-added data $(X^n_t, {\widetilde{S}^t_1}, S^t_2, S^t_3, S^t_4)$, $(X^n_t, {S^t_1}, \widetilde{S}^t_2, S^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, \widetilde{S}^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, S^t_3, \widetilde{S}^t_4)$ are combined into a new training dataset through a rolling window. The stock price is then combined with the new compound score training data as input data for our DP-LSTM neural network.
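A hedged sketch of the noise-injection step for one news source (the weighting factor `lam` and the random seed are assumptions):

```python
import numpy as np

def add_dp_noise(compound_scores, lam=0.1, seed=None):
    """Add zero-mean Gaussian noise with variance lam * var(S_i) to the
    compound scores of a single news source."""
    rng = np.random.default_rng(seed)
    s = np.asarray(compound_scores, dtype=float)
    noise = rng.normal(0.0, np.sqrt(lam * s.var()), size=s.shape)
    return s + noise
```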
<<</Adding Noise>>>
<<<Training Setting>>>
The LSTM model in Figure FIGREF10 has six layers: an LSTM layer, a dropout layer, an LSTM layer, an LSTM layer, a dropout layer and a dense layer, respectively. The dropout layers (with dropout rate 0.2) prevent the network from overfitting. The dense layer is used to reshape the output. Since a network is difficult to train if it contains a large number of LSTM layers [16], we use three LSTM layers here.
In each LSTM layer, the loss function is the mean square error (MSE), which is the sum of the squared distances between our target variable and the predicted value. In addition, the ADAM [17] is used as optimizer, since it is straightforward to implement, computationally efficient and well suited for problems with large data set and parameters.
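A hedged Keras sketch of the six-layer architecture and training objective described above (the hidden sizes and the exact input shape are assumptions, not the authors' settings):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(50, return_sequences=True, input_shape=(10, 5)),  # 10-day window, 5 features
    layers.Dropout(0.2),
    layers.LSTM(50, return_sequences=True),
    layers.LSTM(50),
    layers.Dropout(0.2),
    layers.Dense(1),  # next-day price
])
model.compile(loss="mse", optimizer="adam")
```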
There are many methods and algorithms to implement sentiment analysis systems. In this paper, we use rule-based systems that perform sentiment analysis based on a set of manually crafted rules. Usually, rule-based approaches define a set of rules in some kind of scripting language that identify subjectivity, polarity, or the subject of an opinion. We use VADER, a simple rule-based model for general sentiment analysis.
<<</Training Setting>>>
<<</Training DP-LSTM Neural Network>>>
<<<Performance Evaluation>>>
In this section, we validate our DP-LSTM based on the S&P 500 stocks. We calculate the mean prediction accuracy (MPA) to evaluate the proposed methods, which is defined as
where $X_{t,\ell }$ is the real stock price of the $\ell $-th stock on the $t$-th day, $L$ is the number of stocks and $\hat{X}_{t,\ell }$ is the corresponding prediction result.
Figure FIGREF27 plots the average score for all news on the same day over the period. The compound score is fluctuating between -0.3 and 0.15, indicating an overall neutral to slightly negative sentiment. The Positive, Negative and Neutral scores represent the proportion of text that falls in these categories. The Compound score is a metric that calculates the sum of all the lexicon ratings which have been normalized between -1 (most extreme negative) and +1 (most extreme positive).
Figure FIGREF29 shows the $\text{MPAs}$ of the proposed DP-LSTM and vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results for the predicted prices, which show that the accuracy of DP-LSTM is 0.32% higher than the LSTM with news. The result means the DP framework can make the prediction result more accurate and robust.
Note that the results are obtained by running many trials, since we train stocks separately and predict each price individually due to the different patterns and scales of stock prices. This in total adds up to 451 runs. The results shown in Table TABREF30 are the average of these 451 runs. Furthermore, we provide results for 9 durations over the period in Figure FIGREF29. The performance of our DP-LSTM is always better than the LSTM with news. Based on the sentiment-ARMA model and adding noise for training, the proposed DP-LSTM is more robust. The investment risk based on these prediction results is reduced.
In Figure FIGREF31, we can see that the prediction result of DP-LSTM is closer to the real S&P 500 index price line than those of the other methods. The two lines (prediction results of LSTM with news and LSTM without news) almost coincide in Figure FIGREF31. We can tell the subtle differences from Table TABREF32: DP-LSTM is far ahead, and LSTM with news is slightly better than LSTM without news.
<<</Performance Evaluation>>>
<<<Conclusion>>>
In this paper, we integrated a deep neural network with a well-known NLP model (VADER) to identify and extract opinions within a given text, combining the stock adjusted close price and compound score to reduce the investment risk. We first proposed a sentiment-ARMA model to represent the stock price, which incorporates influential variables (price and news) based on the ARMA model. Then, a DP-LSTM deep neural network was proposed to predict stock price according to the sentiment-ARMA model, which combines the LSTM, the compound scores of news articles and the differential privacy method. News is not all objective. If we rely fully on the information extracted from the news for prediction, we may increase bias because of some non-objective reports. Therefore, the DP-LSTM enhances the robustness of the prediction model. Experimental results based on the S&P 500 stocks show that the proposed DP-LSTM network can predict the stock price accurately with robust performance, especially for the S&P 500 index that reflects the general trend of the market. The S&P 500 prediction results show that the differential privacy method can significantly improve the robustness and accuracy.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nProblem Statement\nARMA Model\nSentiment Analysis\nSentiment-ARMA Model and Loss Function\nOverview of LSTM\nDefinition of Differential Privacy\nTraining DP-LSTM Neural Network\nData Preprocessing and Normalization\nData Preprocessing\nNormalization\nAdding Noise\nTraining Setting\nPerformance Evaluation\nConclusion"
],
"type": "outline"
}
|
1911.03912
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Effectiveness of self-supervised pre-training for speech recognition
<<<Abstract>>>
We present pre-training approaches for self-supervised representation learning of speech data. A BERT, masked language model, loss on discrete features is compared with an InfoNCE-based contrastive loss on continuous speech features. The pre-trained models are then fine-tuned with a Connectionist Temporal Classification (CTC) loss to predict target character sequences. To study the impact of stacking multiple feature learning modules trained using different self-supervised loss functions, we test the discrete and continuous BERT pre-training approaches on spectral features and on learned acoustic representations, showing synergistic behaviour between acoustically motivated and masked language model loss functions. In low-resource conditions using only 10 hours of labeled data, we achieve Word Error Rates (WER) of 10.2\% and 23.5\% on the standard test "clean" and "other" benchmarks of the Librispeech dataset, which is almost on par with previously published work that uses 10 times more labeled data. Moreover, compared to previous work that uses two models in tandem, by using one model for both BERT pre-training and fine-tuning, our model provides an average relative WER reduction of 9%.
<<</Abstract>>>
<<<Introduction>>>
Representation learning has been an active research area for more than 30 years BIBREF1, with the goal of learning high level representations which separate the different explanatory factors of the phenomena represented by the input data BIBREF2, BIBREF3. Disentangled representations provide models with an exponentially higher ability to generalize, using a small amount of labels, to new conditions by combining multiple sources of variation.
Building Automatic Speech Recognition (ASR) systems, for example, requires a large volume of training data to represent different factors contributing to the creation of speech signals, e.g. background noise, recording channel, speaker identity, accent, emotional state, topic under discussion, and the language used in communication. The practical need for building ASR systems for new conditions with limited resources spurred a lot of work focused on unsupervised speech recognition and representation learning BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, in addition to semi- and weakly-supervised learning techniques aiming at reducing the supervised data needed in real-world scenarios BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
Recently impressive results have been reported for representation learning, that generalizes to different downstream tasks, through self-supervised learning for text and speech BIBREF18, BIBREF19, BIBREF10, BIBREF11, BIBREF0. Self-supervised representation learning is done through tasks to predict masked parts of the input, reconstruct inputs through low bit-rate channels, or contrast similar data points against different ones. Different from BIBREF0 where a BERT-like model is trained with the masked language model loss, frozen, and then used as a feature extractor in tandem with a final fully supervised convolutional ASR model BIBREF20, in this work, our “Discrete BERT” approach achieves an average relative Word Error Rate (WER) reduction of 9% by pre-training and fine-tuning the same BERT model using a Connectionist Temporal Classification BIBREF21 loss.
In addition, we present a new approach for pre-training bi-directional transformer models on continuous speech data using the InfoNCE loss BIBREF10 – dubbed “continuous BERT”.
To understand the nature of their learned representations, we train models using the continuous and the discrete BERT approaches on spectral features, e.g. Mel-frequency cepstral coefficients (MFCC), as well as on pre-trained Wav2vec features BIBREF22. These comparisons provide insights on how complementary the acoustically motivated contrastive loss function is to the other masked language model one.
Unsupervised and semi-supervised ASR approaches are in need of test suites like the unified downstream tasks available for language representation models BIBREF18. BIBREF23, BIBREF24, BIBREF25 evaluated semi-supervised self-labeling WER performance on the standard test “clean” and test “other” benchmarks of the Librispeech dataset BIBREF26 when using only a 100 hour subset as labeled data. BIBREF22, BIBREF0, BIBREF10 use the same 960h Librispeech data as unlabeled pre-training data; however, they use Phone Error Rates (PER) on the 3h TIMIT dataset BIBREF27 as their performance metric. The zero-resource ASR literature BIBREF7, BIBREF28 uses the ABX task to evaluate the quality of learned features.
To combine the best of these evaluation approaches, we pre-train our models on the unlabeled 960h Librispeech data, with a close-to-zero supervised set of only 1 hour and 10 hours, sampled equally from the “clean” and “other” conditions of Librispeech. Then, we report final WER performance on its standard dev and test sets. Using our proposed approaches we achieve a best WER of 10.2% and 23.5% on the clean and other subsets, respectively, which is competitive with previous work that uses 100h of labeled data.
<<</Introduction>>>
<<<Preliminaries>>>
<<<BERT>>>
Using self-supervision, BERT BIBREF18, a deep bidirectional transformer model, builds its internal language representation that generalizes to other downstream NLP tasks. Self-attention over the whole input word sequence enables BERT to jointly condition on both the left and right context of data. For training, it uses both a masked language model loss, by randomly removing some input words for the model to predict, and a contrastive loss to distinguish the next sentence in the document from a randomly selected one.
<<</BERT>>>
<<<Wav2Vec>>>
Wav2vec BIBREF22 learns representations of audio data by solving a self-supervised context-prediction task with the same loss function as word2vec BIBREF29, BIBREF10. The model is based on two convolutional neural networks where the encoder $f: \mathcal {X} \mapsto \mathcal {Z}$ produces a representation $\mathbf {z}_{i}$ for each time step $i$ at a rate of 100 Hz and the aggregator $g: \mathcal {Z} \mapsto \mathcal {C}$ combines multiple encoder time steps into a new representation $\mathbf {c}_i$ for each time step $i$. Given $\mathbf {c}_i$, the model is trained to distinguish a sample $\mathbf {z}_{i+k}$ that is $k$ steps in the future from distractor samples $\tilde{\mathbf {z}}$ drawn from a distribution $p_n$, by minimizing the contrastive loss for steps $k=1,\dots ,K$:
where $T$ is the sequence length, $\sigma (x) = 1/(1+\exp (-x))$, and where $\sigma (\mathbf {z}_{i+k}^\top h_k(\mathbf {c}_i))$ is the probability of $\mathbf {z}_{i+k}$ being the true sample. A step-specific affine transformation $h_k(\mathbf {c}_i) = W_k \mathbf {c}_i + \mathbf {b}_k$ is applied to $\mathbf {c}_i$ BIBREF10. The loss $\mathcal {L} = \sum _{k=1}^K \mathcal {L}_k$ is optimized by summing (DISPLAY_FORM4) over different step sizes. The learned high level features produced by the context network $\mathbf {c}_i$ are shown to be better acoustic representations for speech recognition compared to standard spectral features.
<<</Wav2Vec>>>
<<<vq-wav2vec>>>
vq-wav2vec BIBREF0 learns vector quantized (VQ) representations of audio data using a future time-step prediction task. Similar to wav2vec, there is a convolutional encoder and decoder networks $f: \mathcal {X} \mapsto \mathcal {Z}$ and $g: \hat{\mathcal {Z}} \mapsto \mathcal {C}$ for feature extraction and aggregation. However, in between them there is a quantization module $q: \mathcal {Z} \mapsto \hat{\mathcal {Z}}$ to build discrete representations which are input to the aggregator.
First, 30ms segments of raw speech are mapped to a dense feature representation $\mathbf {z}$ at a stride of 10ms using the encoder $f$. Next, the quantizer ($q$) turns these dense representations into discrete indices which are mapped to a reconstruction $\hat{\mathbf {z}}$ of the original representation $\mathbf {z}$. The $\hat{\mathbf {z}}$ is fed into the aggregator $g$ and the model is optimized via the same context prediction task as wav2vec (cf. §SECREF3). The quantization module replaces the original representation $\mathbf {z}$ by $\hat{\mathbf {z}} = \mathbf {e}_i$ from a fixed size codebook $\mathbf {e} \in \mathbb {R}^{V \times d}$ which contains $V$ representations of size $d$.
<<</vq-wav2vec>>>
<<</Preliminaries>>>
<<<Approach>>>
<<<Discrete BERT>>>
Our work builds on the recently proposed work in BIBREF0 where audio is quantized using a contrastive loss, then features learned on top by a BERT model BIBREF18. For the vq-wav2vec quantization, we use the gumbel-softmax vq-wav2vec model with the same setup as described in BIBREF0. This model quantizes the Librispeech dataset into 13.5k unique codes.
To understand the impact of acoustic representations baked into the wav2vec features, we explore, as alternatives, quantizing the standard mel-frequency cepstral coefficients (MFCC) and log-mel filterbank coefficients (FBANK), choosing a subset small enough to fit into GPU memory and running k-means with 13.5k centroids (to match the vq-wav2vec setup) to convergence. We then assign the index of the closest centroid to represent each time-step.
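To make this baseline concrete, the sketch below shows one way to quantize pre-extracted spectral features with k-means. It is our own illustration rather than the paper's code: it assumes the MFCC or FBANK frames are already stacked into a NumPy array, and it uses scikit-learn's MiniBatchKMeans as a CPU-friendly stand-in for the full k-means run described in the experiments section; the function name and arguments are ours.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

def quantize_features(features, n_codes=13500, subsample=0.5, seed=0):
    """Cluster frame-level features and map every frame to its nearest centroid.

    features: array of shape (n_frames, feat_dim), e.g. stacked MFCC frames.
    Returns (codebook, token_ids), with one discrete id per input frame.
    """
    rng = np.random.default_rng(seed)
    n_frames = features.shape[0]
    idx = rng.choice(n_frames, size=int(n_frames * subsample), replace=False)  # subsample to fit memory
    km = MiniBatchKMeans(n_clusters=n_codes, batch_size=10000, random_state=seed)
    km.fit(features[idx])
    token_ids = km.predict(features)  # index of the closest centroid for each frame
    return km.cluster_centers_, token_ids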
We train a standard BERT model BIBREF18, BIBREF30 with only the masked language modeling task on each set of inputs, in the same way as described in BIBREF0, namely by choosing tokens for masking with a probability of 0.05, expanding each chosen token to a span of 10 masked tokens (spans may overlap) and then computing a cross-entropy loss which attempts to maximize the likelihood of predicting the true token for each one that was masked.
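The span-masking procedure can be pictured with the following small sketch (our own paraphrase, not the authors' implementation; the function name and the NumPy-based formulation are assumptions):

import numpy as np

def sample_mask(seq_len, p=0.05, span=10, rng=None):
    """Sample start indices with probability p and expand each into a span of masked tokens."""
    rng = rng or np.random.default_rng()
    n_starts = max(1, int(round(seq_len * p)))
    starts = rng.choice(seq_len, size=n_starts, replace=False)
    mask = np.zeros(seq_len, dtype=bool)
    for s in starts:
        mask[s:s + span] = True  # spans may overlap and are clipped at the sequence end
    return mask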
<<</Discrete BERT>>>
<<<Continuous BERT>>>
A masked language modeling task cannot be performed with continuous inputs and outputs, as there are no targets to predict in place of the masked tokens. Instead of reconstructing the input as in BIBREF31, we classify the masked positive example among a set of negatives. The inputs to the model are dense wav2vec features BIBREF22, MFCC or FBANK features representing 10ms of audio data. Some of these inputs are replaced with a mask embedding and are then fed into a transformer encoder. We then compute the dot product between the outputs corresponding to each masked input, the true input that was masked, and a set of negatives sampled from other masked inputs within the same batch. The model is optimized with the InfoNCE loss BIBREF10 where, given one positive sample $s_i$ and $N$ negative samples $\tilde{s}_j$, we minimize:

$$\mathcal {L} = - \log \frac{\exp (s_i)}{\exp (s_i) + \sum _{j=1}^{N} \exp (\tilde{s}_j)}$$

where each sample $s_i$ is computed as a dot product of the output of the model at timestep $i$ and the true unmasked value of the positive example at timestep $i$ or a randomly sampled negative example. To stabilize training, we add the squared sum of logits produced by the dot-product to the loss, and then apply a soft clamp $\hat{s_i}=\lambda \tanh (s_i/\lambda )$ for each logit $s_i$ to prevent the model's tendency to continually increase the magnitude of logits during training BIBREF32.
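A minimal PyTorch sketch of this objective for a single masked position follows. It is our illustration, not the released implementation; in particular, the order in which the squared-logit penalty and the soft clamp are applied is an assumption, and the function name is ours (the clamp constant of 20 and penalty weight of 0.04 follow the values reported later in the experiments).

import torch
import torch.nn.functional as F

def infonce_loss(output, positive, negatives, lam=20.0, penalty_weight=0.04):
    """output: (d,) transformer output at a masked step; positive: (d,) true feature;
    negatives: (N, d) distractors sampled from other masked steps."""
    candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)  # (N+1, d), positive at index 0
    logits = candidates @ output                                       # dot products, shape (N+1,)
    penalty = penalty_weight * (logits ** 2).sum()                     # discourage ever-growing logits
    logits = lam * torch.tanh(logits / lam)                            # soft clamp
    target = torch.zeros(1, dtype=torch.long)                          # the positive is class 0
    return F.cross_entropy(logits.unsqueeze(0), target) + penalty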
<<</Continuous BERT>>>
<<<Supervised fine-tuning>>>
The pre-trained models are fine-tuned to perform the ASR task by adding a randomly initialized linear projection on top of the features computed by the transformer models into $V$ classes representing the vocabulary of the task. The vocabulary is 29 tokens for character targets plus a word boundary token. The models are optimized by minimizing the CTC loss. Fine-tuning requires only a few epochs on a single GPU.
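As a rough sketch (not the released code), the fine-tuning head can be pictured as a single linear layer over the pre-trained encoder outputs trained with CTC; the class name, the (T, B, H) feature layout and the exact output size including the CTC blank are our assumptions.

import torch.nn as nn
import torch.nn.functional as F

class CTCFineTuner(nn.Module):
    def __init__(self, encoder, hidden_dim=768, vocab_size=31):  # 29 characters + word boundary + CTC blank (assumed)
        super().__init__()
        self.encoder = encoder                          # pre-trained discrete or continuous BERT model
        self.proj = nn.Linear(hidden_dim, vocab_size)   # randomly initialized projection

    def forward(self, inputs, targets, input_lengths, target_lengths):
        x = self.encoder(inputs)                        # assumed to return features of shape (T, B, H)
        log_probs = F.log_softmax(self.proj(x), dim=-1)
        return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0)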
<<</Supervised fine-tuning>>>
<<</Approach>>>
<<<Experiments>>>
All of our experiments are implemented by extending the fairseq BIBREF33 toolkit.
<<<Data>>>
All of our experiments are performed by pre-training on 960 hours of Librispeech BIBREF26 training set, fine-tuning on labeled 10 hours and 1 hour sets sampled equally from the two conditions of the training set, and evaluating on the standard dev and test splits.
<<</Data>>>
<<<Models>>>
<<<Quantized Inputs Training>>>
We first train the vq-wav2vec quantization model following the gumbel-softmax recipe described in BIBREF0. After training this model on 960h of Librispeech and quantizing the training dataset, we are left with 13.5k unique codewords combinations.
For quantizing MFCC and log-mel filterbanks we first compute dense features using the scripts from the Kaldi BIBREF34 toolkit. We then compute 13.5k K-Means centroids, to match the number of unique tokens produced by the vq-wav2vec model, using 8 32GB Volta GPUs. To fit into GPU memory, we subsample 50% of MFCC features and 25% of FBANK features from the training set before running the K-Means algorithm.
The model we use for the masked language modeling task is a standard BERT model with 12 layers, model dimension 768, inner dimension (FFN) 3072 and 12 attention heads BIBREF18. The learning rate is warmed up over the first 10,000 updates to a peak value of 1e-5, and then linearly decayed over a total of 250k updates. We train on 128 GPUs with a batch size of 3072 tokens per GPU giving a total batch size of 393k tokens BIBREF35. Each token represents 10ms of audio data.
To mask the input sequence, we follow BIBREF0 and randomly sample $p=0.05$ of all tokens to be a starting index, without replacement, and mask $M=10$ consecutive tokens from every sampled index; spans may overlap.
<<</Quantized Inputs Training>>>
<<<Continuous Inputs Training>>>
For training on dense features, we use a model similar to a standard BERT model with the same parameterization as the one used for quantized input training, but we use the wav2vec, MFCC or FBANK inputs directly. We add 128 relative positional embeddings at every multi-head attention block as formulated in BIBREF36 instead of fixed positional embeddings to ease handling longer examples. We train this model on only 8 GPUs, with a batch size of 9600 inputs per GPU resulting in a total batch size of 76,800. We find that increasing the number of GPUs (which increases the effective batch size) does not lead to better results with this particular setup.
Wav2vec features are 512-dimensional, while MFCC features have 39 dimensions and log-mel (FBANK) features have 80. We introduce a simple linear projection from the feature dimension to the BERT dimension (768) for all models.
Similarly to the approach in SECREF12, we choose time-steps to mask by randomly sampling, without replacement, $p=0.05$ of all time-steps to be a starting index, and mask $M=10$ consecutive time-steps from every sampled index; spans may overlap. We sample 10 negative examples from other masked time-steps from the same example, and an additional 10 negative examples from masked time-steps occurring anywhere in the batch. We compute a dot product between the original features and the output corresponding to the same time-step after they are processed by the BERT model. We add the squared sum of logits from these computations multiplied by $\lambda =0.04$ to the loss, and then apply a smooth clamp by recomputing each logit $\hat{s_i}=20\tanh (s_i/20)$.
The learning rate is warmed up over the first 10,000 updates to a peak value of 1e-5, and then linearly decayed over a total of 250k updates.
<<</Continuous Inputs Training>>>
<<</Models>>>
<<<Methodology>>>
For quantized inputs, we compute token indices using the gumbel-softmax based vq-wav2vec model. For MFCC and FBANK features we take the index of the closest centroid (as measured by finding the minimum Euclidean distance) to each corresponding feature in the Librispeech dataset. We then train a BERT model as described in §SECREF12.
For wav2vec continuous inputs, we use features extracted by the publicly available wav2vec BIBREF22 model which contains 6 convolutional blocks in the feature extractor and 11 convolutional blocks in the aggregator module. We use the outputs of the aggregator as features. For MFCC and FBANK, we use those features directly after applying a single linear projection to upsample them to the model dimensionality.
We fine-tune our pre-trained models on 1 or 10 hours of labelled data sampled from the Librispeech training set. We use the standard CTC loss and train for up to 20k updates. We find that the pre-trained models converge after only around 4k updates, while the models trained from scratch tend to converge much later, around 18k updates. We fine-tune all models with learning rate of $0.0001$ that is linearly warmed up over the first 2k updates and then annealed following a cosine learning rate schedule over the last 18k updates. We set the dropout of the pre-trained BERT models to 0.1 and sweep on dropout of the BERT model outputs before the final projection layer over values between 0.0 and 0.4 in increments of 0.1. For each model, we choose a single best checkpoint that has the best loss on the validation set, which is a combination of dev-clean and dev-other standard Librispeech splits.
We use the publicly available wav2letter++ BIBREF37 decoder integrated into the Fairseq framework with the official Librispeech 4-gram language model. We run a sweep on weights for language model score, word score and silence token weights for each model, where parameters are chosen randomly and evaluated on the dev-other Librispeech set. We use the weights found by these sweeps to evaluate and report results for all other splits. The sweeps are run with beam size of 250, while the final decoding is done with beam size of 1500.
The quantized BERT models have a limit of 2048 source tokens due to their use of fixed positional embeddings. During training we discard longer examples and during evaluation we discard randomly chosen tokens from each example until they are at most 2048 tokens long. We expect that increasing the size of the fixed positional embeddings, or switching to relative positional embeddings will improve performance on longer examples, but in this work we wanted to stay consistent with the setup in BIBREF0.
The tandem model which uses the features extracted from the pre-trained BERT models is a character-based Wav2Letter setup of BIBREF38 which uses seven consecutive blocks of convolutions (kernel size 5 with 1000 channels), followed by a PReLU nonlinearity and a dropout rate of 0.1. The final representation is projected to a 28-dimensional probability over the vocabulary and decoded using the standard 4-gram language model following the same protocol as for the fine-tuned models.
<<</Methodology>>>
<<<Results>>>
Table TABREF15 presents WERs of different input features and pre-training methods on the standard Librispeech clean and other subsets using 10 hours and 1 hour of labeled data for fine-tuning. Compared to the two-model tandem system proposed in BIBREF0, which uses the discrete BERT features to train another ASR system from scratch, our discrete BERT model provides an average of 13% and 6% WER reduction on the clean and other subsets respectively, by pre-training and fine-tuning the same BERT model on the 10h labeled set.
The wav2vec inputs represent one level of unsupervised feature discovery, which provides a better space for quantization compared to raw spectral features. (Our tandem baseline is a reproduction of the system in BIBREF0, which trains a convolutional model from scratch on features extracted from the discrete BERT model with wav2vec input features, evaluated on the Librispeech standard “clean” and “other” subsets.) The discrete BERT training augments the wav2vec features with a higher level of representation that captures the sequential structure of the full utterance through the masked language modeling loss. On the other hand, the continuous BERT training, given its contrastive InfoNCE loss, can be viewed as another level of acoustic representations that captures longer-range regularities.
Using the MFCC and FBANK features as inputs to the continuous and discrete BERT models provides insights into the synergies of different levels of acoustic and language model representations. Similar to the observations in BIBREF40, the FBANK features are more friendly to unsupervised local acoustic representation learning methods like continuous BERT, leading to consistent gains compared to MFCC features for both the 10h and 1h sets.
When using the MFCC and FBANK features for the discrete BERT training, the naive k-means clustering provides poor input acoustic centroids, so the FBANK features offer no benefit over the MFCC features. This shifts the entire representation learning load to the language-modelling discrete BERT component, which is identical for both FBANK and MFCC, leading to almost identical performance for both input features in both the 10h and 1h fine-tuning conditions. Using the quantized wav2vec features instead provides a boost of about 40% relative improvement on average compared to the quantized FBANK features in the 10h fine-tuning case.
In line with our hypotheses that the discrete BERT model plays the role of a language model and the input wav2vec features learn high-level acoustic representations, in the very low-resource condition of 1h fine-tuning, the average relative improvement between quantized FBANK and wav2vec inputs is larger on the “clean” subsets – 55%, which require better local acoustic representations – compared to a 45% WER reduction for the noisy “other” subsets, which rely more on the global language modeling capabilities.
With wav2vec features providing good acoustic representations, the discrete BERT model provides an average of about 28% relative improvement over the continuous BERT model in the 10h fine-tuning condition. We believe the reason is the complementary nature of the discrete BERT language modelling loss and the acoustically motivated wav2vec pre-training, as opposed to the relatively redundant acoustic pre-training losses of the continuous BERT and wav2vec. In the 1h fine-tuning case, however, better local acoustic features provide more gains in the “clean” subsets compared to the “other” ones, following the same trend as the quantized FBANK and wav2vec features under the same conditions.
Table TABREF16 shows the competitive performance of the discrete BERT approach compared to previously published work which is fine-tuned on more than 10 times the labeled data.
<<</Results>>>
<<<Ablations>>>
To understand the value of self-supervision in our setup, Table TABREF18 shows WERs for both continuous and discrete input features fine-tuned from random weights, without BERT pre-training, using 10 hours of labeled data. The performance of the discrete features completely collapses, since randomly initialized input embedding tables do not have enough training data to learn meaningful representations. This is not a problem for continuous input features where, understandably, Wav2vec input features show much better WERs compared to the MFCC and FBANK features.
The impact of adding a second layer of acoustic representation is shown by comparing the continuous BERT model trained on top of wav2vec features versus the wav2vec model fine-tuned directly using the CTC loss – only one level of learned representations. Continuous BERT training on top of wav2vec features provides substantial gains (Table TABREF19). Adding a second layer of representation more than halved the WER, with more gains observed in the “clean” subset as also observed in SECREF17.
<<</Ablations>>>
<<</Experiments>>>
<<<Discussion and Related Work>>>
The success of BERT BIBREF18 and Word2Vec BIBREF29 for NLP tasks motivated more research on self-supervised approaches for acoustic word embedding and unsupervised acoustic feature representation BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF9, BIBREF45, BIBREF22, BIBREF10, BIBREF46, BIBREF0, either by predicting masked discrete or continuous input, or by contrastive prediction of neighboring or similarly sounding segments using distant supervision or proximity in the audio signal as an indication of similarity. In BIBREF47 a dynamic time warping alignment is used to discover similar segment pairs. Our work is inspired by the research efforts in reducing the dependence on labeled data for building ASR systems through unsupervised unit discovery and acoustic representation learning BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, through multi- and cross-lingual transfer learning in low-resource conditions BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, and through semi-supervised learning BIBREF12, BIBREF13, BIBREF14, BIBREF15.
<<</Discussion and Related Work>>>
<<<Conclusion and Future work>>>
We presented two variations, continuous and discrete, of BERT models that are pre-trained on the Librispeech 960h data and fine-tuned for speech recognition rather than being used as feature extractors in tandem with another ASR system. Along with the discrete-input BERT model, we used a contrastive loss for training a continuous variant of BERT. The acoustic and language modeling roles in the system are played by the vq-wav2vec and the BERT components, respectively. Our ablation experiments showed the contribution and importance of each component for final ASR performance. Our system is able to reach a final WER of 10.2% and 23.5% on the standard Librispeech test clean and other sets, respectively, using only 10h of labeled data, almost matching the 100h supervised baselines. Our future directions include testing our model on a 1000x larger volume of unlabeled data that is more acoustically challenging, along with multi- and cross-lingual transfer learning extensions.
<<</Conclusion and Future work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nPreliminaries\nBERT\nWav2Vec\nvq-wav2vec\nApproach\nDiscrete BERT\nContinuous BERT\nSupervised fine-tuning\nExperiments\nData\nModels\nQuantized Inputs Training\nContinuous Inputs Training\nMethodology\nResults\nAblations\nDiscussion and Related Work\nConclusion and Future work"
],
"type": "outline"
}
|
1910.08772
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity
<<<Abstract>>>
We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-known (surface-level) monotonicity facts about quantifiers, lexical items and tokenlevel polarity information. Despite its simplicity, we find our approach to be competitive with other logic-based NLI models on the SICK benchmark. We also use MonaLog in combination with the current state-of-the-art model BERT in a variety of settings, including for compositional data augmentation. We show that MonaLog is capable of generating large amounts of high-quality training data for BERT, improving its accuracy on SICK.
<<</Abstract>>>
<<<Introduction>>>
There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) BIBREF5, BIBREF6, as well as finding systematic biases in benchmark datasets BIBREF7, BIBREF8. In parallel to these efforts, there have also been recent logic-based approaches to NLI BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, which take inspiration from linguistics. In contrast to early attempts at using logic BIBREF14, these approaches have proven to be more robust. However they tend to use many rules and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques.
In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)?; and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic BIBREF15, BIBREF16, our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity BIBREF17; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for having expensive alignment and search sub-procedures BIBREF18, BIBREF19, and relies on a much smaller set of background knowledge and primitive relations than MacCartneyManning.
To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark.
Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. To our knowledge, our approach is the first attempt to use monotonicity for data augmentation, and we show that such augmentation can generate high-quality training data with which models like BERT can improve performance.
<<</Introduction>>>
<<<Our System: MonaLog>>>
The goal of NLI is to determine, given a premise set $P$ and a hypothesis sentence $H$, whether $H$ follows from the meaning of $P$ BIBREF21. In this paper, we look at single-premise problems that involve making a standard 3-way classification decision (i.e., Entailment (H), Contradict (C) and Neutral (N)). Our general monotonicity reasoning system works according to the pipeline in Figure FIGREF1. Given a premise text, we first do Arrow Tagging by assigning polarity annotations (i.e., the arrows $\uparrow ,\downarrow $, which are the basic primitives of our logic) to tokens in text. These surface-level annotations, in turn, are associated with a set of natural logic inference rules that provide instructions for how to generate entailments and contradictions by span replacements over these arrows (which relies on a library of span replacement rules). For example, in the sentence All schoolgirls are on the train, the token schoolgirls is associated with a polarity annotation $\downarrow $, which indicates that in this sentential context, the span schoolgirls can be replaced with a semantically more specific concept (e.g., happy schoolgirls) in order to generate an entailment. A generation and search procedure is then applied to see if the hypothesis text can be generated from the premise using these inference rules. A proof in this model is finally a particular sequence of edits (e.g., see Figure FIGREF13) that derives the hypothesis text from the premise text using these rules and yields an entailment or contradiction.
In the following sections, we provide the details of our particular implementation of these different components in MonaLog.
<<<Polarization (Arrow Tagging)>>>
Given an input premise $P$, MonaLog first polarizes each of its tokens and constituents, calling the system described by BIBREF17, which performs polarization on a CCG parse tree. For example, a polarized $P$ could be every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$. Note that since we ignore morphology in the system, tokens are represented by lemmas.
<<</Polarization (Arrow Tagging)>>>
<<<Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@>>>
MonaLog utilizes two auxiliary sets. First, a knowledge base ${K}$ that stores the world knowledge needed for inference, e.g., semanticist $\le $ linguist and swim $\le $ move, which captures the facts that $[\![\mbox{\em semanticist}]\!]$ denotes a subset of $[\![\mbox{\em linguist}]\!]$, and that $[\![\mbox{\em swim}]\!]$ denotes a subset of $[\![\mbox{\em move}]\!]$, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet BIBREF22. Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the “bank” is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x $\le $ y or x $\perp $ y relations only if both x and y are words in the premise-hypothesis pair. Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every $=$ all $=$ each $\le $ most $\le $ many $\le $ a few $=$ several $\le $ some $=$ a; the $\le $ some $=$ a; on $\perp $ off; up $\perp $ down; etc.
We also need to keep track of relations that can potentially be derived from the $P$-$H$ sentence pair. For instance, for all adjectives and nouns that appear in the sentence pair, it is easy to obtain: adj + n $\le $ n (black cat $\le $ cat). Similarly, we have n + PP/relative clause $\le $ n (friend in need $\le $ friend, dog that bites $\le $ dog), VP + advP/PP $\le $ VP (dance happily/in the morning $\le $ dance), and so on. We also have rules that extract pieces of knowledge from $P$ directly, e.g.: n$_1$ $\le $ n$_2$ from sentences of the pattern every n$_1$ is a n$_2$. One can also connect MonaLog to bigger knowledge graphs or ontologies such as DBpedia.
A sentence base ${S}$, on the other hand, stores the generated entailments and contradictions.
<<</Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@>>>
<<<Generation>>>
Once we have a polarized CCG tree, and some $\le $ relations in ${K}$, generating entailments and contradictions is fairly straightforward. A concrete example is given in Figure FIGREF13. Note that the generated $\le $ instances are capable of producing mostly monotonicity inferences, but MonaLog can be extended to include other more complex inferences in natural logic, hence the name MonaLog. This extension is addressed in more detail in HuChenMoss.
<<<Entailments/inferences>>>
The key operation for generating entailments is replacement, or substitution. It can be summarized as follows: 1) For upward-entailing (UE) words/constituents, replace them with words/constituents that denote bigger sets. 2) For downward-entailing (DE) words/constituents, either replace them with those denoting smaller sets, or add modifiers (adjectives, adverbs and/or relative clauses) to create a smaller set. Thus for every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$, MonaLog can produce the following three entailments by replacing each word with the appropriate word from ${K}$: most$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$, every$^{\leavevmode {\color {red}\uparrow }}$ semanticist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$ and every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ move$^{\leavevmode {\color {red}\uparrow }}$. These are results of one replacement.
Performing replacement for multiple rounds/depths can easily produce many more entailments.
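The following toy sketch illustrates one such round of replacement on the polarized example above; the data structures (a list of (word, arrow) pairs and two dictionaries derived from ${K}$) are our own simplification, not MonaLog's internals.

def one_step_entailments(polarized, bigger, smaller):
    """polarized: list of (word, arrow) pairs with arrow in {"up", "down"}.
    bigger/smaller: map a word to words denoting larger/smaller sets according to K."""
    results = []
    for i, (word, arrow) in enumerate(polarized):
        candidates = bigger.get(word, []) if arrow == "up" else smaller.get(word, [])
        for replacement in candidates:
            words = [w for w, _ in polarized]
            words[i] = replacement
            results.append(" ".join(words))
    return results

# Toy knowledge base mirroring the example above.
bigger = {"swim": ["move"], "every": ["most"]}
smaller = {"linguist": ["semanticist"]}
sentence = [("every", "up"), ("linguist", "down"), ("swim", "up")]
print(one_step_entailments(sentence, bigger, smaller))
# ['most linguist swim', 'every semanticist swim', 'every linguist move']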
<<</Entailments/inferences>>>
<<<Contradictory sentences>>>
To generate sentences contradictory to the input sentence, we do the following: 1) if the sentence starts with “no (some)”, replace the first word with “some (no)”. 2) If the object is quantified by “a/some/the/every”, change the quantifier to “no”, and vice versa. 3) Negate the main verb or remove the negation. See examples in Figure FIGREF13.
<<</Contradictory sentences>>>
<<<Neutral sentences>>>
MonaLog returns Neutral if it cannot find the hypothesis $H$ in ${S}.entailments$ or ${S}.contradictions$. Thus, there is no need to generate neutral sentences.
<<</Neutral sentences>>>
<<</Generation>>>
<<<Search>>>
Now that we have a set of inferences and contradictions stored in ${S}$, we can simply see if the hypothesis is in either one of the sets by comparing the strings. If yes, then return Entailment or Contradiction; if not, return Neutral, as schematically shown in Figure FIGREF13. However, the exact-string-match method is too brittle. Therefore, we apply a heuristic. If the only difference between sentences $S_1$ and $S_2$ is in the set {“a”, “be”, “ing”}, then $S_1$ and $S_2$ are considered semantically equivalent.
The search is implemented using depth first search, with a default depth of 2, i.e. at most 2 replacements for each input sentence. At each node, MonaLog “expands” the sentence (i.e., an entailment of its parent) by obtaining its entailments and contradictions, and checks whether $H$ is in either set. If so, the search is terminated; otherwise the systems keeps searching until all the possible entailments and contradictions up to depth 2 have been visited.
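A simplified reconstruction of this search loop is sketched below; generate and equivalent stand in for MonaLog's generation module and the function-word-insensitive matching heuristic, and their exact interfaces are our assumptions.

def equivalent(s1, s2, ignore=frozenset({"a", "be", "ing"})):
    """Exact match after dropping the small token set described above."""
    strip = lambda s: [w for w in s.split() if w not in ignore]
    return strip(s1) == strip(s2)

def search(premise, hypothesis, generate, max_depth=2):
    """generate(sentence) -> (entailments, contradictions) for a polarized sentence."""
    stack, visited = [(premise, 0)], {premise}
    while stack:
        sentence, depth = stack.pop()                      # depth-first expansion
        entailments, contradictions = generate(sentence)
        if any(equivalent(hypothesis, e) for e in entailments):
            return "Entailment"
        if any(equivalent(hypothesis, c) for c in contradictions):
            return "Contradiction"
        if depth + 1 < max_depth:                          # expand at most max_depth replacements
            for e in entailments:
                if e not in visited:
                    visited.add(e)
                    stack.append((e, depth + 1))
    return "Neutral"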
<<</Search>>>
<<</Our System: MonaLog>>>
<<<MonaLog and SICK>>>
We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and perform fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset. In all experiments, we use the Base, Uncased model of BERT.
<<<The SICK Dataset>>>
The SICK BIBREF1 dataset includes around 10,000 English sentence pairs that are annotated to have either “Entailment”, “Neutral” or “Contradictory” relations. We choose SICK as our testing ground for several reasons. First, we want to test on a large-scale dataset, since we have shown that a similar model BIBREF24 reaches good results on parts of the smaller FraCaS dataset BIBREF25. Second, we want to make our results comparable to those of previous logic-based models such as the ones described in BIBREF26, BIBREF27, BIBREF11, BIBREF13, which were also tested on SICK. We use the data split provided in the dataset: 4,439 training problems, 4,906 test problems and 495 trial problems, see Table TABREF16 for examples.
<<</The SICK Dataset>>>
<<<Hand-corrected SICK>>>
There are numerous issues with the original SICK dataset, as illustrated by BIBREF28, BIBREF29.
They first manually checked 1,513 pairs tagged as “A entails B but B is neutral to A” (AeBBnA) in the original SICK, correcting 178 pairs that they considered to be wrong BIBREF28. Later, BIBREF29 extracted pairs from SICK whose premise and hypothesis differ in only one word, and created a simple rule-based system that used WordNet information to solve the problem. Their WordNet-based method was able to solve 1,651 problems, whose original labels in SICK were then manually checked and corrected against their system's output. They concluded that 336 problems are wrongly labeled in the original SICK. Combining the above two corrected subsets of SICK, minus the overlap, results in their corrected SICK dataset, which has 3,016 problems (3/10 of the full SICK), with 409 labels different from the original SICK (see breakdown in Table TABREF19). 16 of the corrections are in the trial set, 197 of them in the training set and 196 in the test set. This suggests that more than one out of ten problems in SICK are potentially problematic. For this reason, two authors of the current paper checked the 409 changes. We found that only 246 problems are labeled the same by our team and by BIBREF29. For cases where there is disagreement, we adjudicated the differences after a discussion.
We are aware that the partially checked SICK (by two teams) is far from ideal. We therefore present results for two versions of SICK for experiment 1 (section SECREF4): the original SICK and the version corrected by our team. For the data augmentation experiment in section SECREF5, we only performed fine-tuning on the corrected SICK. As shown in a recent SICK annotation experiment by kalouli2019explaining, annotation is a complicated issue influenced by linguistic and non-linguistic factors. We leave checking the full SICK dataset to future work.
<<</Hand-corrected SICK>>>
<<</MonaLog and SICK>>>
<<<Experiment 1: Using MonaLog Directly>>>
<<<Setup and Preprocessing>>>
The goal of experiment 1 is to test how accurately MonaLog solves problems in a large-scale dataset. We first used the system to solve the 495 problems in the trial set and then manually identified the cases in which the system failed. Then we determined which syntactic transformations are needed for MonaLog. After improving the results on the trial data by introducing a preprocessing step to handle limited syntactic variation (see below), we applied MonaLog on the test set. This means that the rule base of the system was optimized on the trial data, and we can test its generalization capability on the test data.
The main obstacle for MonaLog is the syntactic variations in the dataset, illustrated in some examples in Table TABREF16. There exist multiple ways of dealing with these variations: One approach is to `normalize' unknown syntactic structures to a known structure. For example, we can transform passive sentences into active ones and convert existential sentences into the base form (see ex. 8399 and 219 in Table TABREF16). Another approach is to use some more abstract syntactic/semantic representation so that the linear word order can largely be ignored, e.g., represent a sentence by its dependency parse, or use Abstract Meaning Representation. Here, we explore the first option and leave the second approach to future work. We believe that dealing with a wide range of syntactic variations requires tools designed specifically for that purpose. The goal of MonaLog is to generate entailments and contradictions based on a polarized sentence instead.
Below, we list the most important syntactic transformations we perform in preprocessing.
Convert all passive sentences to active using pass2act. If the passive does not contain a by phrase, we add by a person.
Convert existential clauses into their base form (see ex. 219 in Table TABREF16).
Other transformations: someone/anyone/no one $\rightarrow ~$some/any/no person; there is no man doing sth. $\rightarrow ~$no man is doing sth.; etc.
<<</Setup and Preprocessing>>>
<<<Results>>>
The results of our system on uncorrected and corrected SICK are presented in Table TABREF27, along with comparisons with other systems.
Our accuracy on the uncorrected SICK (77.19%) is much higher than the majority baseline (56.36%) or the hypothesis-only baseline (56.87%) reported by BIBREF8, and only several points lower than current logic-based systems. Since our system is based on natural logic, there is no need for translation into logical forms, which makes the reasoning steps transparent and much easier to interpret. I.e., with entailments and contradictions, we can generate a natural language trace of the system, see Fig. FIGREF13.
Our results on the corrected SICK are even higher (see lower part of Table TABREF27), demonstrating the effect of data quality on the final results. Note that with some simple syntactic transformations we can gain 1-2 points in accuracy.
Table TABREF28 shows MonaLog's performance on the individual relations. The system is clearly very good at identifying entailments and contradictions, as demonstrated by the high precision values, especially on the corrected SICK set (98.50 precision for E and 95.02 precision for C). The lower recall values are due to MonaLog's current inability to handle syntactic variation.
Based on these results, we tested a hybrid model of MonaLog and BERT (see Table TABREF27) where we exploit MonaLog's strength: Since MonaLog has a very high precision on Entailment and Contradiction, we can always trust MonaLog if it predicts E or C; when it returns N, we then fall back to BERT. This hybrid model improves the accuracy of BERT by 1% absolute to 85.95% on the corrected SICK. On the uncorrected SICK dataset, the hybrid system performs worse than BERT.
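The decision rule of this hybrid model amounts to the following few lines (a sketch with assumed predictor interfaces, not the actual experiment code):

def hybrid_predict(premise, hypothesis, monalog_predict, bert_predict):
    label = monalog_predict(premise, hypothesis)   # "Entailment" | "Contradiction" | "Neutral"
    if label in ("Entailment", "Contradiction"):
        return label                               # trust MonaLog's high-precision E/C calls
    return bert_predict(premise, hypothesis)       # otherwise fall back to BERT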
Since MonaLog is optimized for the corrected SICK, it may mislabel many E and C judgments in the uncorrected dataset. The stand-alone BERT system performs better on the uncorrected data (86.74%) than the corrected set (85.00%). The corrected set may be too inconsistent since only a part has been checked.
Overall, these hybrid results show that it is possible to combine our high-precision system with deep learning architectures. However, more work is necessary to optimize this combined system.
<<</Results>>>
<<<Error Analysis>>>
Upon closer inspection, some of MonaLog's errors consist of difficult cases, as shown in Table TABREF29. For example, in ex. 359, if our knowledge base ${K}$ contains the background fact $\mbox{\em chasing} \le \mbox{\em running}$, then MonaLog's judgment of C would be correct. In ex. 1402, if crying means screaming, then the label should be E; however, if crying here means shedding tears, then the label should probably be N. Here we also see potentially problematic labels (ex. 1760, 3403) in the original SICK dataset.
Another point of interest is that 19 of MonaLog's mistakes are related to the antonym pair man vs. woman (e.g., ex. 5793 in Table TABREF29). This points to inconsistency of the SICK dataset: Whereas there are at least 19 cases tagged as Neutral (e.g., ex. 5793), there are at least 17 such pairs that are annotated as Contradictions in the test set (e.g., ex. 3521), P: A man is dancing, H: A woman is dancing (ex. 9214), P: A shirtless man is jumping over a log, H: A shirtless woman is jumping over a log. If man and woman refer to the same entity, then clearly that entity cannot be man and woman at the same time, which makes the sentence pair a contradiction. If, however, they do not refer to the same entity, then they should be Neutral.
<<</Error Analysis>>>
<<</Experiment 1: Using MonaLog Directly>>>
<<<Experiment 2: Data Generation Using MonaLog>>>
Our second experiment focuses on using MonaLog to generate additional training data for machine learning models such as BERT. To our knowledge, this is the first time that a rule-based NLI system has been successfully used to generate training data for a deep learning application.
<<<Setup>>>
As described above, MonaLog generates entailments and contradictions when solving problems. These can be used as additional training data for a machine learning model.
I.e., we pair the newly generated sentences
with their input sentence, creating new pairs for training. For example, we take all the sentences in the nodes in Figure FIGREF13 as inferences and all the sentences in rectangles as contradictions, and then form sentence pairs with the input sentence. The additional data can be used directly, almost without human intervention.
Thus for experiment 2, the goal is to examine the quality of these generated sentence pairs. For this, we re-train a BERT model on these pairs. If BERT trained on the manually annotated SICK training data is improved by adding data generated by MonaLog, then we can conclude that the generated data is of high quality, even comparable to human annotated data, which is what we found.
More specifically, we compare the performance of BERT models trained on a) SICK training data alone, and b) SICK training data plus the entailing and contradictory pairs generated by MonaLog.
All experiments are carried out using our corrected version of the SICK data set.
However, note that MonaLog is designed to only generate entailments and contradictions. Thus, we only have access to newly generated examples for those two cases, we do not acquire any additional neutral cases. Consequently, adding these examples to the training data will introduce a skewing that does not reflect the class distribution in the test set. Since this will bias the machine learner against neutral cases, we use the following strategy to counteract that tendency: We relabel all cases where BERT is not confident enough for either E or C into N. We set this threshold to 0.95 but leave further optimization of the threshold to future work.
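The relabeling step can be sketched as follows, assuming access to the model's predicted label and class probabilities (the function signature is ours):

def relabel_with_threshold(predicted, probs, threshold=0.95):
    """predicted: BERT's argmax label; probs: dict mapping label -> probability."""
    if predicted in ("Entailment", "Contradiction") and probs[predicted] < threshold:
        return "Neutral"   # not confident enough for E or C
    return predicted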
<<</Setup>>>
<<<Data Filtering and Quality Control>>>
MonaLog is prone to over-generation. For example, it may wrongly add the same adjective before a noun (phrase) twice to create a more specific noun, e.g., young young man $\le $ young man $\le $ man. Since it is possible that such examples influence the machine learning model negatively, we look into filtering such examples to improve the quality of the additional training data.
We manually inspected 100 sentence pairs generated by MonaLog to check the quality and naturalness of the new sentences (see Table TABREF32 for examples). All of the generated sentences are correct in the sense that the relation between the premise and the hypothesis is correctly labeled as entailment or contradiction (see Table TABREF34).
While we did not find any sentence pairs with wrong labels, some generated sentences are unnatural, as shown in Table TABREF32. Both unnatural examples contain two successive copies of the same PP.
Note that our data generation hinges on correct polarities on the words and constituents. For instance, in the last example of Table TABREF32, the polarization system needs to know that "few" is downward entailing on both of its arguments, and that "without" flips the arrow of its argument, in order to produce the correct polarities, on which the replacement of MonaLog depends.
In order to filter unnatural sentences, such as the examples in Table TABREF32, we use a rule-based filter and remove sentences that contain bigrams of repeated words. We experiment with using one quarter or one half randomly selected sentences in addition to a setting where we use the complete set of generated sentences.
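A possible implementation of this filter (ours, not the authors') is:

def has_repeated_bigram(sentence):
    """True if any word is immediately repeated, e.g. "young young man"."""
    tokens = sentence.lower().split()
    return any(a == b for a, b in zip(tokens, tokens[1:]))

def filter_generated(pairs):
    """pairs: iterable of (premise, generated_sentence, label) triples."""
    return [p for p in pairs if not has_repeated_bigram(p[1])]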
<<</Data Filtering and Quality Control>>>
<<</Experiment 2: Data Generation Using MonaLog>>>
<<<Conclusions and Future Work>>>
We have presented a working natural-logic-based system, MonaLog, which attains high accuracy on the SICK dataset and can be used to generated natural logic proofs. Considering how simple and straightforward our method is, we believe it can serve as a strong baseline or basis for other (much) more complicated systems, either logic-based or ML/DL-based. In addition, we have shown that MonaLog can generate high-quality training data, which improves the accuracy of a deep learning model when trained on the expanded dataset. As a minor point, we manually checked the corrected SICK dataset by BIBREF28, BIBREF29.
There are several directions for future work. The first direction concerns the question how to handle syntactic variation from natural language input. That is, the computational process(es) for inference will usually be specified in terms of strict syntactic conditions, and naturally occurring sentences will typically not conform to those conditions. Among the strategies which allow their systems to better cope with premises and hypotheses with various syntactic structures are sophisticated versions of alignment used by e.g. MacCartney,YanakaMMB18. We will need to extend MonaLog to be able to handle such variation. In the future, we plan to use dependency relations as representations of natural language input and train a classifier that can determine which relations are crucial for inference.
Second, as mentioned earlier, we are in need of a fully (rather than partially) checked SICK dataset to examine the impact of data quality on the results since the partially checked dataset may be inherently inconsistent between the checked and non-checked parts.
Finally, with regard to the machine learning experiments, we plan to investigate other methods of addressing the imbalance in the training set created by additional entailments and contradictions. We will look into options for artificially creating neutral examples, e.g. by finding reverse entailments, as illustrated by richardson2019probing.
<<</Conclusions and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nOur System: MonaLog\nPolarization (Arrow Tagging)\nKnowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@\nGeneration\nEntailments/inferences\nContradictory sentences\nNeutral sentences\nSearch\nMonaLog and SICK\nThe SICK Dataset\nHand-corrected SICK\nExperiment 1: Using MonaLog Directly\nSetup and Preprocessing\nResults\nError Analysis\nExperiment 2: Data Generation Using MonaLog\nSetup\nData Filtering and Quality Control\nConclusions and Future Work"
],
"type": "outline"
}
|
2003.10816
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Cross-Lingual Adaptation Using Universal Dependencies
<<<Abstract>>>
We describe a cross-lingual adaptation method based on syntactic parse trees obtained from the Universal Dependencies (UD), which are consistent across languages, to develop classifiers in low-resource languages. The idea of UD parsing is to capture similarities as well as idiosyncrasies among typologically different languages. In this paper, we show that models trained using UD parse trees for complex NLP tasks can characterize very different languages. We study two tasks of paraphrase identification and semantic relation extraction as case studies. Based on UD parse trees, we develop several models using tree kernels and show that these models trained on the English dataset can correctly classify data of other languages e.g. French, Farsi, and Arabic. The proposed approach opens up avenues for exploiting UD parsing in solving similar cross-lingual tasks, which is very useful for languages that no labeled data is available for them.
<<</Abstract>>>
<<<Introduction>>>
Universal Dependencies (UD) BIBREF0, BIBREF1, BIBREF2 is an ongoing project aiming to develop cross-lingually consistent treebanks for different languages. UD provided a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. The annotation schema relies on Universal Stanford Dependencies BIBREF3 and Google Universal POS tags BIBREF4. The general principle is to provide universal annotation; meanwhile, each language can add language-specific relations to the universal pool when necessary.
The main goal of the UD project is to facilitate multi-lingual parser production and cross-lingual learning. Cross-lingual learning is the task of gaining advantages from high-resource languages in terms of annotated data to build a model for low-resource languages. This paradigm of learning is now an invaluable tool for improving the performance of natural language processing in low-resource languages.
Based on the universal annotations of the UD project, there are several works on cross-lingual tasks. Most of them focus on grammar-related tasks such as POS tagging BIBREF5 and dependency parsing BIBREF6, BIBREF7, BIBREF8. In this paper, we study the effectiveness of UD in building cross-lingual models for more complex tasks such as semantic relation extraction and paraphrase identification. To the best of our knowledge, no work has been done on the application of UD annotations to these tasks.
The Universal Dependencies approach to cross-lingual learning is based on the fact that UD captures similarities as well as idiosyncrasies among typologically different languages. The important characteristic of UD annotations is that although the UD parse trees of parallel sentences in different languages may not be completely equivalent, they have many similar sub-trees, in the sense that at least the core parts of the trees are equal BIBREF9.
In this paper, we study two cross-lingual tasks: semantic relation extraction and paraphrase identification. The former is the task of identifying semantic connections between entities in a sentence, while the training and test data are in different languages. The latter is the task of determining whether two sentences are paraphrases of each other, while the training pairs of sentences are in a different language from the test data.
To employ the similarities of UD trees of different languages to train cross-lingual models, we propose to use syntax-based methods which ideally can deal with the parsing information of the data. We found that tree kernels allow estimating the similarities among texts directly from their parse trees. They are known to operate on dependency parse trees and automatically generate robust prediction models based on the similarities of those trees. We have made a parallel dataset for each task and present the cross-lingual variant of the kernel functions for them. Evaluation on the parallel test data reveals that the accuracy of models trained on one language and tested on other languages gets close to the mono-lingual accuracy when the syntactic parsers are trained with UD corpora. This suggests that syntactic patterns trained on UD trees can be invariant with respect to very different languages.
To compare the proposed approach with the cross-lingual variant of neural models, we employed several state-of-the-art deep networks and equipped them with pre-trained bi-lingual word embeddings. The English training data are fed into the networks, which create a mapping between the input and output values. Then the test set is given to the trained network. Results show that the tree-based models outperform the end-to-end neural models in cross-lingual experiments.
Moreover, we employed the Tree-LSTM network BIBREF10 with UD parse trees, which is capable of producing semantic representations from tree-ordered input data. Tree-LSTM does not directly deal with the syntactic features of the input sentence; rather, it processes the input tokens in the order they appear in a tree, e.g. from bottom to top or vice versa. Experiments show the superiority of Tree-LSTM trained on UD trees over sequential models like LSTM in cross-lingual evaluations.
This paper is organized as follows: Section SECREF2 describes how the UD approach captures similarities and differences across diverse languages. Section SECREF3 presents tree-based models for cross-lingual learning of the PI and RE tasks. Section SECREF4 presents an empirical study on cross-lingual learning using UD. Finally, Section SECREF5 gives the analysis and concluding remarks.
<<</Introduction>>>
<<<Transfer Learning via Universal Dependencies>>>
The Universal Dependencies project aims to produce consistent dependency treebanks and parsers for many languages BIBREF0, BIBREF1, BIBREF2. The most important achievements of the project are the cross-lingual annotation guidelines and the sets of universal POS and grammatical relation tags. Consequently, many treebanks have been developed for different languages. The general rule of the UD project is to provide a universal tag set; however, each language can add language-specific relations to the universal pool or omit some tags.
To capture similarities and differences across languages, UD uses a representation consisting of three components: (i) dependency relations between lexical words; (ii) function words modifying lexical words; and (iii) morphological features associated with words BIBREF9.
The underlying principle of the syntactic annotation schema of the UD project is that dependencies hold between content words, while function words attach to the content word that they further specify BIBREF3. There is an important difference between the UD schema and Stanford Typed Dependencies (STD) BIBREF11, as the STD schema chooses function words as heads: prepositions in prepositional phrases, and copula verbs that have a prepositional phrase as their complement.
Although the UD parse graphs of a sentence in different languages may not be completely equal, they have similar core parts. Figure FIGREF5 shows the UD graph of the English sentence “The memo presents details about the lineup management" and its translations into French and Farsi. Both the similarities and differences of the UD graphs are demonstrated in that figure. Most of the nodes and edges are similar. Farsi has the language-specific relation “compound:lvc", which relates the noun part of a compound verb to the verbal part, as depicted in Figure FIGREF5. So far, UD treebanks have been developed for over 70 languages, and all of them are freely available for download. The UD project released a pipeline, called UDPipe, which is used to train models for UD parsing using the UD treebanks BIBREF12.
UD parsing and the similarity of UD structures in different languages provide facilities to train multi-lingual models. In what follows, we focus on two tasks, paraphrase identification and semantic relation extraction, and present cross-lingual learning models for them.
<<</Transfer Learning via Universal Dependencies>>>
<<<Cross-Lingual Tree-based Models>>>
To employ UD parsing in cross-lingual learning, there should be a training algorithm that is capable of utilizing similarities of UD parse trees in different languages. Kernel methods such as SVM use a similarity function, called a kernel function, to assign a similarity score to pairs of data samples. A kernel function $K$ over an object space $X$ is a symmetric, positive semi-definite function $K: X \times X \rightarrow [0,\infty )$ that assigns a similarity score to two instances of $X$, where $K(x,y)=\phi (x)\cdot \phi (y)=\sum {\phi _{i}(x)\phi _{i}(y)}$. Here, $\phi (x)$ is a mapping function from the data object in $X$ to the high-dimensional feature space. Using the kernel function, it is not necessary to extract all features one by one and then multiply the feature vectors. Instead, kernel functions compute the final value directly based on the similarity of data examples.
Tree kernels are the most popular kernels for many natural language processing tasks BIBREF13, BIBREF14. Tree kernels compute the number of common substructures between two trees $T_1$ and $T_2$ without explicitly considering the whole fragment space BIBREF15. Suppose $\mathcal {F}=\lbrace f_1,f_2, \dots , f_{|\mathcal {F}|} \rbrace $ is the tree fragment space and $\mathcal {X}_i(n)$ is an indicator function that is 1 if $f_i$ is rooted at node $n$ and 0 otherwise. The tree kernel over $T_1$ and $T_2$ is then defined as BIBREF15:

$$K(T_1, T_2) = \sum _{n_1 \in N_{T_1}} \sum _{n_2 \in N_{T_2}} \Delta (n_1, n_2)$$

where $N_{T_1}$ and $N_{T_2}$ are the sets of nodes of $T_1$ and $T_2$, respectively, and

$$\Delta (n_1, n_2) = \sum _{i=1}^{|\mathcal {F}|} \mathcal {X}_i(n_1)\, \mathcal {X}_i(n_2),$$

which gives the number of common fragments rooted at nodes $n_1$ and $n_2$. Different tree kernels vary in their definition of the $\Delta $ function and the fragment type.
There are three important characterizations of fragment type BIBREF16: SubTree, SubSet Tree and Partial Tree. A SubTree is defined by taking a node of a tree along with all its descendants. A SubSet Tree is more general and does not necessarily contain all of the descendants. Instead, it must be generated by utilizing the same grammatical rule set as the original trees. A Partial Tree is more general and relaxes the SubSet Tree's constraints. Some popular tree kernels are the SubSet Tree Kernel (SST), the Partial Tree Kernel (PTK) BIBREF17 and the Smoothing Partial Tree Kernel (SPTK) BIBREF15. In the next section, we employ tree kernels along with UD parse trees for solving cross-lingual tasks.
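To make the fragment-counting idea concrete, the toy sketch below implements the simplest case, a SubTree kernel over trees encoded as nested tuples; it ignores lexical smoothing and the SST/PTK generalizations, and all names and data structures are our own.

from collections import Counter

def subtree_signatures(tree):
    """tree: nested tuples (label, child1, child2, ...). Returns a Counter with one
    canonical string per node describing the full subtree rooted there."""
    sigs = Counter()

    def walk(node):
        label, children = node[0], node[1:]
        sig = "(" + " ".join([label] + [walk(c) for c in children]) + ")"
        sigs[sig] += 1
        return sig

    walk(tree)
    return sigs

def subtree_kernel(t1, t2):
    s1, s2 = subtree_signatures(t1), subtree_signatures(t2)
    return sum(s1[sig] * s2[sig] for sig in s1)  # common fragments, counted with multiplicity

# Two toy dependency trees sharing the subtrees rooted at "linguist" and "every".
t_a = ("swim", ("linguist", ("every",)))
t_b = ("move", ("linguist", ("every",)))
print(subtree_kernel(t_a, t_b))  # prints 2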
<<<Cross-Lingual Paraphrase Identification>>>
Paraphrase Identification (PI) is the task of determining whether two sentences are paraphrases of each other. It is considered a binary classification task. The best mono-lingual methods often achieve about 85% accuracy over this corpus BIBREF14, BIBREF18. Filice et al. BIBREF14 extended the tree kernels described in the previous section to operate on text pairs. The underlying idea is that this task is characterized by several syntactic/semantic patterns that a kernel machine can automatically capture from the training material. We can assess a text pair as a paraphrase if it shows a valid transformation rule that we observed in the training data. The following example can clarify this concept. A simple paraphrase rewriting rule is the active-passive transformation, such as in “Federer beat Nadal” and “Nadal was defeated by Federer”. The same transformation can be observed in other paraphrases, such as in “Mark studied biology” and “Biology was learned by Mark”. Although these two pairs of paraphrases have completely different topics, they have a very similar syntactic structure.
Tree kernel combinations can capture this inter-pair similarity and allow a learning algorithm such as SVM to learn the syntactic-semantic patterns characterizing valid paraphrases. Given a tree kernel $TK$ and text pairs $p_i = (i_1, i_2)$, the best tree kernel combination for the paraphrase identification task described in BIBREF14 is the following:
$SM_{TK}(p_a, p_b) = \mathrm{softmax}\big ( TK(a_1,b_1) \cdot TK(a_2,b_2),\ TK(a_1,b_2) \cdot TK(a_2,b_1) \big )$
where softmax$(x_1,x_2)= \frac{1}{m} \log \left(e^{m x_1} + e^{m x_2}\right)$ is a simple function approximating the max operator, which cannot be used directly in kernel formulations, as it can create invalid kernel functions. In this kernel combination the two possible alignments between the trees of the two pairs are tried and the best alignment is chosen. This makes it possible to exploit the inherent symmetry of the Paraphrase Identification task (i.e., if $a$ is a paraphrase of $b$, then $b$ is also a paraphrase of $a$).
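A minimal sketch of this pair combination is given below, assuming the multiplicative combination of the two alignment scores as written in the formula above (the original BIBREF14 formulation may combine them slightly differently). `tk` can be any tree kernel function, for instance the toy `tree_kernel` sketched earlier, and the softmax is written in a numerically stable form.

import math

def softmax_pair(x1: float, x2: float, m: float = 100.0) -> float:
    # Stable evaluation of (1/m) * log(exp(m*x1) + exp(m*x2)),
    # a smooth approximation of max(x1, x2).
    hi = max(x1, x2)
    return hi + math.log(math.exp(m * (x1 - hi)) + math.exp(m * (x2 - hi))) / m

def sm_tk(pair_a, pair_b, tk) -> float:
    # pair_a = (a1, a2) and pair_b = (b1, b2) are the parse trees of two text pairs.
    # Both possible alignments between the pairs are scored; the better one is
    # kept via the softmax, exploiting the symmetry of the PI task.
    a1, a2 = pair_a
    b1, b2 = pair_b
    return softmax_pair(tk(a1, b1) * tk(a2, b2),
                        tk(a1, b2) * tk(a2, b1))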
When we adopt Universal Dependencies, different languages share a common formalism to represent text syntax, and tree kernels, which mostly operate at a syntactic level, can still provide reliable similarity estimations, i.e., $SM_{TK}(p_a, p_b)$ can work even if $p_a$ and $p_b$ are in different languages. This allows operating in a cross-lingual setting. For instance, we can use a model trained on a high-resource language for classifying textual data of a low-resource language. In addition to the syntactic similarity evaluation, the PTK and SPTK used in the $SM_{TK}$ formulation also perform a lexical matching among the words of the trees to be compared.
<<</Cross-Lingual Paraphrase Identification>>>
<<<Cross-Lingual Semantic Relation Extraction>>>
Relation Extraction (RE) is defined as the task of identifying semantic relations between entities in a text. The goal is to determine whether there is a semantic relation between two given entities in a text, and also to specify the type of relationship if present. RE is an important part of Information Extraction BIBREF19. Relation extraction methods often focus on the Shortest Dependency Path (SDP) between entities BIBREF20. However, there are some crucial differences between UD annotation principles and other parse formalisms that cause us to reconsider the SDP in UD trees.
Considering the sentence: “The most common $[$audits$]_{e1}$ were about $[$waste$]_{e2}$ and recycling", there is a Message-Topic relation between $e1$ and $e2$. The most informative words of the sentence for this relation are “were" and “about", while the other words of the sentence can be ignored and the same relation is still realized. It is a crucial challenge for relation extraction methods that important information may appear in any part of the sentence. Most previous works assume that the words lying in the window surrounding the entities are enough to extract the relation governing the entities BIBREF21, BIBREF22. However, the words of a sentence are often reordered when the sentence is translated into other languages. Therefore, using words in the window surrounding the entities may result in an accurate model for mono-lingual experiments, but not necessarily for cross-lingual ones.
Regarding UD parsing, there are several significant differences between the universal annotation schema and other schemas for dependency parsing. Two main differences are related to prepositions and copula verbs. According to the UD annotation guidelines, prepositions are attached to the head of a nominal, and copula verbs are attached to the head of a clause. However, in other schemas, prepositions are often the root of the nominal, and the clause is attached to the copula.
Figure FIGREF12 shows the parse tree of the example: “The most common $[$audits$]_{e1}$ were about $[$waste$]_{e2}$ and recycling". The tree is produced by the ARK parser, which does not follow the universal schema. As mentioned before, “were" and “about" lie on the SDP between $e1$ and $e2$. However, considering the UD parse tree depicted in Figure FIGREF12, there is no word on the SDP, while both “were" and “about" are attached to $e2$. As a result, we propose that the words which are dependent on the entities be considered informative words in addition to the SDP's words. We use these words for building a cross-lingual model.
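The following is a minimal sketch of this word-selection step, assuming a CoNLL-style representation of a parsed sentence (a list of tokens and, for each token, the index of its head); the function name and input format are illustrative, not taken from the paper's implementation.

import networkx as nx

def informative_words(tokens, heads, e1, e2):
    """Words on the shortest dependency path between the two entities,
    plus the direct dependents of each entity.

    tokens: list of word forms; heads: 0-based head index per token (-1 for root);
    e1, e2: token indices of the two entity heads.
    """
    graph = nx.Graph()
    graph.add_nodes_from(range(len(tokens)))
    for i, h in enumerate(heads):
        if h >= 0:
            graph.add_edge(i, h)
    sdp = nx.shortest_path(graph, e1, e2)[1:-1]        # words strictly between the entities
    deps_e1 = [i for i, h in enumerate(heads) if h == e1]
    deps_e2 = [i for i, h in enumerate(heads) if h == e2]
    keep = sorted((set(sdp) | set(deps_e1) | set(deps_e2)) - {e1, e2})
    return [tokens[i] for i in keep]

# Toy UD-style analysis of "audits were about waste": "waste" is the root,
# with "audits", "were" and "about" all attached to it.
tokens = ["audits", "were", "about", "waste"]
heads = [3, 3, 3, -1]
print(informative_words(tokens, heads, e1=0, e2=3))    # ['were', 'about']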
Kernel functions have several interesting characteristics. The combination of kernel functions in a linear or polynomial way results in a valid kernel function BIBREF23. Composite kernel functions are built on individual kernels; each of them captures part of the features of a data object. Tree kernels capture the data's syntactic structure, while a word sequence kernel considers the words of a sequence in a particular order. To define a cross-lingual kernel, we have adopted the composite kernel used by Nguyen et al. BIBREF16:
where $K_{P-e}$ is a polynomial kernel. Its base kernel is an entity kernel ($K_E$), which is applied to an entity-related feature vector consisting of (named) entity type, mention type, headword, and POS tag. $K_{SST}$ is the Sub-Set Tree (SST) kernel, which is applied to the Path-Enclosed Tree (PET) of the constituency tree structure. PET is the smallest common subtree including the two entities BIBREF24, BIBREF25. $K_{PT}$ is the Partial Tree kernel BIBREF17, which is applied to the dependency-based tree structures. Parameter $\alpha $ weighs the kernels.
To incorporate the most informative words of the sentence into the model, the feature vector $V_o$ is defined similarly to the work of Hashimoto et al. BIBREF21. They proposed concatenating the following vectors to make $V_o$: the vector representing $e1$, the vector representing $e2$, the average of vectors representing the words between the two entities, the average of vectors representing the words in a window before $e1$, and the average of vectors representing the words in a window after $e2$.
Since $V_o$ is defined based on the position of words in the sentence and thus is not necessarily a cross-lingually consistent feature vector, we propose to define the feature vector $V_{ud}$ by concatenating these vectors: the vector representing $e1$, the vector representing $e2$, the average of vectors representing the words on the shortest path between the two entities (instead of the words between $e1$ and $e2$), the average of vectors representing the words dependent on $e1$ (instead of the words before $e1$), and the average of vectors representing the words dependent on $e2$ (instead of the words after $e2$). $V_{ud}$ is cross-lingually consistent provided that the words are picked up from UD parse trees and represented by multi-lingual embeddings.
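A possible construction of $V_{ud}$ is sketched below; the embedding lookup, the dimensionality and the index lists are assumptions for illustration (any fixed-dimension multi-lingual word embedding would fit), and the index lists can be obtained with an `informative_words`-style selection as sketched earlier.

import numpy as np

def avg_embedding(words, embed, dim):
    vecs = [v for v in (embed(w) for w in words) if v is not None]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def build_v_ud(tokens, e1, e2, sdp_idx, deps_e1_idx, deps_e2_idx, embed, dim=300):
    """Concatenate the five blocks described above into V_ud.

    embed: callable mapping a (possibly translated) word to a vector or None.
    sdp_idx, deps_e1_idx, deps_e2_idx: token indices of the shortest-path words
    and of the dependents of e1 and e2, respectively.
    """
    def vec(i):
        v = embed(tokens[i])
        return v if v is not None else np.zeros(dim)

    blocks = [
        vec(e1),
        vec(e2),
        avg_embedding([tokens[i] for i in sdp_idx], embed, dim),
        avg_embedding([tokens[i] for i in deps_e1_idx], embed, dim),
        avg_embedding([tokens[i] for i in deps_e2_idx], embed, dim),
    ]
    return np.concatenate(blocks)      # a 5 * dim feature vector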
Based on the $CK$ defined in formula DISPLAY_FORM13 and the feature vectors $V_o$ and $V_{ud}$, the following composite kernels are proposed:
where $K_{P-o}$ is a polynomial kernel applied to the feature vector $V_o$.
where $K_{P-ud}$ is a polynomial kernel applied to the feature vector $V_{ud}$.
Constituency parsing of a sentence depends on the syntactic rules governing the position of words in that language. In general, the constituency parse trees of a sentence in different languages are different, so the constituency tree should not be involved in the cross-lingual model. Here, $CK_2$ is our proposed kernel, which is used for CL-RE. However, $CK_1$ and $CK_3$ can also be used for cross-lingual experiments, subject to the similarity of the syntactic parses of the source and target languages.
The SST kernel works only on constituency trees and not on dependency trees BIBREF17. Therefore, for evaluating the similarity of dependency trees, the PT kernel is used. The PT kernel cannot process labels on the edges, so dependency trees are converted to the Lexical Centered Tree (LCT) format BIBREF15 and then the PT kernel is applied to the transformed trees. In the LCT format, the lexical item is kept at the center, and the other information related to that lexical item, such as the POS tag and the grammatical relation, is added as its children.
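One possible encoding of this conversion is sketched below (the exact node layout used in KeLP may differ in detail): each lexical item becomes a local root, with its POS tag, its grammatical relation, and its recursively converted dependents as children.

def to_lct(form, pos, deprel, dependents):
    """Convert one dependency node into an LCT fragment, represented as
    (label, [children]). `dependents` is a list of (form, pos, deprel,
    dependents) tuples for the node's dependents."""
    children = [to_lct(*d) for d in dependents]
    children += [(pos, []), (deprel, [])]
    return (form, children)

# Toy example: "Nadal was defeated", rooted at "defeated".
lct = to_lct("defeated", "VERB", "root", [
    ("Nadal", "PROPN", "nsubj:pass", []),
    ("was", "AUX", "aux:pass", []),
])
print(lct)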
A MultiWord Expression (MWE) is a lexeme made up of a sequence of two or more lexemes, where each lexeme has its own meaning but the meaning of the whole expression cannot (or at least can only partially) be computed from the meaning of its parts. MWEs display lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasies BIBREF26. The nature of MWEs leads us to treat the whole expression as a single word. Fortunately, MWEs can be identified from the parse tree. There are three types of dependency relations for MWEs in UD parsing: flat, fixed, and compound. According to the UD guidelines, the flat relation is used for exocentric (headless) semi-fixed MWEs like names (Walter Burley Griffin) and dates (20 November). The fixed relation applies to completely fixed grammaticized (function word-like) MWEs (like instead of, such as), whereas compound applies to endocentric (headed) MWEs (like apple pie).
To produce the feature vector $V_{ud}$, it is better to treat MWEs as single words, especially MWEs whose parts are linked by fixed relations, because considering each part of the MWE separately and averaging their embedding vectors may result in a meaningless vector. This point matters when the words of low-resource languages are first translated into other languages and then represented by an embedding of that language. Therefore, the procedure of producing the feature vector $V_{ud}$ should be modified with a simple heuristic: every node of the UD tree within the shortest path between the two entities, or dependent on $e1$ or $e2$, which has a child node with the fixed dependency type is considered together with its child as one word. If the child also has a child with a fixed dependency, all of them are considered as one word. For example, Figure FIGREF17 shows the UD tree of a Farsi sentence which is the translation of the English sentence in Figure FIGREF12. Entities are distinguished from other nodes by putting a circle around them. The 5th and 6th nodes from the left make a multiword expression that means “about". Applying the above heuristic results in both of them being considered as a single word, so the correct translation into another language is found. Some other examples of Farsi MWEs are “قبل از آن که/before", “در حالی که/while", “به درون/into", “به جز/except", and “بر روی/on". In French there are also MWEs, such as “bien que/although", “en tant que/as", “tant de/so many", “afin de/in order to", “prés de/near".
Apart from fixed, flat, and compound, there are grammatical relations that are language-specific and mark MWE structures BIBREF27. If the target language has such language-specific relations, the above heuristic should be applied to them as well (see the sketch below). For example, the compound:lvc relation, which is defined for several languages including Farsi, represents the dependency of the noun part on the light verb part of compound verbs. An example of this relation was shown in Figure FIGREF5. The words “ارائه/presentation" and “میدهد/give" together mean “present".
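The heuristic can be sketched as follows; the input format (parallel lists of forms, heads and dependency relations) and the choice of which relations to collapse are illustrative assumptions, with fixed collapsed by default and language-specific relations such as compound:lvc added when needed.

def collapse_mwes(tokens, heads, deprels, relations=("fixed",)):
    """Merge MWE parts into single surface words (sketch).

    Any token whose children are attached with one of the given relations is
    joined (transitively) with those children into one word, so that the
    merged expression can be looked up in a dictionary as a whole.
    Returns one merged word form per remaining head token.
    """
    n = len(tokens)
    mwe_children = {i: [] for i in range(n)}
    for i, (h, rel) in enumerate(zip(heads, deprels)):
        if rel in relations and h >= 0:
            mwe_children[h].append(i)

    def span(i):
        idx = [i]
        for c in mwe_children[i]:
            idx.extend(span(c))      # a child may itself head further MWE parts
        return sorted(idx)

    absorbed = {c for kids in mwe_children.values() for c in kids}
    return [" ".join(tokens[j] for j in span(i))
            for i in range(n) if i not in absorbed]

# Toy English example: "such as" analysed with a fixed relation.
tokens = ["fruits", "such", "as", "apples"]
heads = [-1, 3, 1, 0]
deprels = ["root", "case", "fixed", "nmod"]
print(collapse_mwes(tokens, heads, deprels))   # ['fruits', 'such as', 'apples']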
<<</Cross-Lingual Semantic Relation Extraction>>>
<<</Cross-Lingual Tree-based Models>>>
<<<Experiments>>>
In this section, the experimental analysis of the proposed models is presented. We have implemented the cross-lingual variant of kernel functions for PI and RE tasks as described in section SECREF3 and measured the accuracy of models by testing them on the parallel data set.
The main advantage of the proposed method is that it needs no data of the test language, in the sense that the model trained using the training data of a language, e.g. English, is directly used in the other languages, e.g. Farsi, Arabic, etc. From this point of view, the proposed method can only be compared with those methods that use no data (neither labeled nor un-labeled) of the test language or parallel corpus or machine translators between the training and test languages.
One solution for cross-lingual tasks is to equip the high accurate neural networks proposed for each task with pre-trained multi-lingual word embeddings, without any change in the architecture of the network. Therefore, we re-implemented some deep methods and compared the proposed approach with them for both PI and RE tasks.
<<<Paraphrase Identification>>>
For this task, we built a parallel test dataset, implemented the PT and SPT kernels, and compared the results with the two-channel CNN of Wang et al. BIBREF18.
<<<Construction of Parallel Dataset>>>
To prepare a multi-language corpus for PI, we employed an existing English corpus with its Arabic translation and created a corresponding Farsi version. The Microsoft Research Paraphrase Corpus (MSRC) BIBREF28 is the corpus most often used by researchers for the English PI task. It contains 4,076 and 1,725 pairs of sentences for training and test, respectively. The data have been extracted from news sources on the web and annotated by humans as to whether each pair captures a paraphrase equivalence relationship.
PI relates to the task of Semantic Textual Similarity (STS), in which the goal is to capture the degree of equivalence of meaning rather than making a binary decision. SemEval-2017 Task 1 put the emphasis on multi-lingual STS BIBREF29. The organizers selected 510 pairs from the test part of the MSRC corpus and had them translated into Arabic by native Arabic speakers. All data have been manually tagged with a number from 0 to 5 to indicate the degree of similarity.
The Arabic part of the SemEval-2017 STS dataset is thus parallel to part of the MSRC test corpus, which gives us a parallel English-Arabic dataset. Because of the similarity between the PI and STS tasks, the STS dataset can also be used for the PI task, simply by converting the scores to 0 or 1; the original binary scores of these pairs were therefore retrieved from the MSRC corpus. As a result, a corpus with 510 pairs of English sentences and their Arabic translations is available for the PI task. In addition to the Arabic translation, we produced corresponding Farsi data by having the parallel English-Arabic dataset translated into Farsi by a native Farsi speaker.
In the experiments, the MSRC corpus was divided as follows: 1) the training part of the MSRC corpus for training; 2) those data from the test part of MSRC for which we do not have an Arabic or Farsi counterpart, used as a development set for tuning hyper-parameters; and 3) the 510 parallel English-Arabic-Farsi pairs from the test part of MSRC for testing. Therefore, our training and test data have 4,076 and 510 samples, respectively. Table TABREF21 shows the statistics of our data.
<<</Construction of Parallel Dataset>>>
<<<Tools and Setup>>>
The classifiers were trained with the C-SVM learning algorithm within KeLP BIBREF30, a kernel-based machine learning framework that implements tree kernels. We employed the PT and SPT kernel functions. For evaluating node similarity in the SPTK function, we used the same method described in BIBREF14: if $n_1$ and $n_2$ are two identical syntactic nodes, their similarity $\sigma (n_1,n_2)$ is equal to 1. If $n_1$ and $n_2$ are two lexical nodes with the same POS tag, their similarity is computed as the cosine similarity of the corresponding vectors in a wordspace. In all other cases $\sigma = 0$.
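A minimal sketch of this node-similarity function is given below; the node representation (a small dict with a label, a lexical flag and a POS tag) and the wordspace lookup are illustrative assumptions, not the KeLP data structures.

import numpy as np

def node_similarity(n1, n2, wordspace):
    """sigma(n1, n2) as described above.

    n1, n2: dicts with keys 'label', 'is_lexical' and, for lexical nodes, 'pos'.
    wordspace: dict mapping a (possibly translated) word to its vector.
    """
    if not n1["is_lexical"] and not n2["is_lexical"]:
        # identical syntactic nodes (same POS tag or grammatical relation) -> 1
        return 1.0 if n1["label"] == n2["label"] else 0.0
    if n1["is_lexical"] and n2["is_lexical"] and n1["pos"] == n2["pos"]:
        v1, v2 = wordspace.get(n1["label"]), wordspace.get(n2["label"])
        if v1 is not None and v2 is not None:
            denom = np.linalg.norm(v1) * np.linalg.norm(v2)
            return float(np.dot(v1, v2) / denom) if denom else 0.0
    return 0.0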
The English wordspace was generated using the word2vec tool. In the cross-lingual setup, we need a vocabulary to find the translation of lexical nodes and then compute their similarity in a wordspace. For the English-Arabic experiments, we used the Almaany dictionary to translate Arabic words into English. For the English-Farsi experiments, we used the Aryanpour dictionary to extract the English equivalents of Farsi words. To evaluate the performance of the classifiers, we used Accuracy and F$_1$, as in previous works BIBREF31, BIBREF32, BIBREF18.
For dependency parsing, UDPipe was used, which is a trainable pipeline for tokenization, tagging, lemmatization, and dependency parsing. We used version 2.4 of the UD pre-trained models of English, Arabic, and Farsi.
To implement the CNN network of Wang et al. BIBREF18, we used the same word embeddings they used. They set the word vector dimension to d = 300 and pre-trained the vectors with the word2vec toolkit on English Gigaword (LDC2011T07). The hyper-parameters of the network are the same as in their work.
<<</Tools and Setup>>>
<<<Results>>>
We first examine the tree kernels in mono-lingual and then in cross-lingual learning.
<<<Evaluation of tree-based models in mono-lingual learning>>>
In the first experiment, we benchmark the UD-based models on the monolingual dataset. So, we employed the original split of the MSRC corpus and trained models using the PT and SPT kernels. These models essentially work based on the lexico-syntactic patterns observed in the training sentences. Filice et al. BIBREF14 proposed several kernels, including linear, graph and SPT kernels, and showed that the best accuracy is obtained using a combination of them. However, we use only tree kernels in the cross-lingual experiments, to measure how much we can rely on the similarities of UD parse trees in different languages.
As Table TABREF29 shows, the tree kernels PTK and SPTK yield comparable results according to the accuracy and F$_1$ measures. This means that the PT and SPT kernels, trained on UD parse trees, produce accurate models that can be used to solve the PI task. In the next experiment, we use these models to evaluate the Arabic and Farsi test data.
<<</Evaluation of tree-based models in mono-lingual learning>>>
<<<Evaluation of tree-based models with UD in cross-lingual learning>>>
Now, we employ the parallel dataset for the cross-lingual evaluation of the UD-based model trained on English data. A baseline for this task is majority voting, that is, what we get if we always predict the most frequent label of the training data. A better baseline for cross-lingual PI is to take a neural model and couple it with pre-induced multilingual embeddings. So, we re-ran the two-channel CNN model of Wang et al. BIBREF18 on our test data.
The upper bound for the cross-lingual experiment is the accuracy of the model when it is evaluated on data in the same language as the training data, e.g. English. Table TABREF30 shows that, using PTK, an accuracy of 61.6% is obtained on the English test data. It is 57.7% and 57.3% for Arabic and Farsi, respectively, while the accuracy of the majority baseline is 50.6%. The CNN model obtained similar accuracy but much lower F$_1$ scores.
Comparing the results of Tables TABREF29 and TABREF30 reveals that the accuracy of both kernels drops significantly when they are tested on our small test data. The reason is that the distribution of the MSRC training data over the positive and negative classes is significantly different from that of our test data: 67.5% of MSRC's training data are positive, while 50.5% of our test data are positive.
<<</Evaluation of tree-based models with UD in cross-lingual learning>>>
<<<Evaluation of tree-based models with parse formalisms rather than UD>>>
In this experiment, we produced dependency parse trees of the Farsi data with the Hazm parser, which is trained on a non-UD treebank. Table TABREF30 shows that in this case the accuracy of the models drops significantly. Taking a deeper look at the tree kernels, PTK does not use the similarity of words and relies on exact matching. So, in cross-lingual experiments, it considers only the similarity of the trees. In this case, the accuracy on the Farsi test data is 50.6%, which is the same as the majority baseline. This experiment reveals that the trees of parallel sentences produced by UD parsers are significantly more similar than the trees generated by other formalisms.
<<</Evaluation of tree-based models with parse formalisms rather than UD>>>
<<</Results>>>
<<</Paraphrase Identification>>>
<<<Relation Extraction>>>
In this section, we explain the cross-lingual RE experiments and present the results. Specifically, we compared tree-based methods, including a combination of tree kernels and TreeLSTM, with the deep methods CNN BIBREF33, Bi-LSTM BIBREF34 and RCNN BIBREF35.
<<<Result>>>
We first examine the tree kernels in mono-lingual and then in cross-lingual learning.
<<<Effect of Multi-Word Expressions>>>
The last two rows of Table TABREF40 show the F$_1$ score of the model trained on the English training data using $CK_2$ and $CK_3$, in which MWEs were treated as a single node within the dependency tree, as described at the end of Section SECREF10. The accuracy of $CK_2$ increased mainly for the Farsi data, because Farsi has many multi-word expressions such as compound verbs: Farsi has only about 250 simple verbs, and all other verbs are compound BIBREF43. Treating an MWE as a single node causes all the tokens that compose a verb to be handled as a single word, so the true translation is found when looking that word up in dictionaries. Figure FIGREF46 shows the F$_1$ scores of the best models for the different semantic classes.
<<</Effect of Multi-Word Expressions>>>
<<</Result>>>
<<</Relation Extraction>>>
<<</Experiments>>>
<<<Discussion and Conclusion>>>
Taking a deeper look at the proposed method, most of the misclassifications of the cross-lingual tree models are related to the following issues:
Structural difference: The main reason for classifier errors is structural differences. Although UD tries to produce trees that are as similar as possible for parallel sentences, there are many language-specific dependency patterns that cannot be neglected.
Lexical gap: Words mainly convey the meaning of the sentence. A lexical gap between the source and target languages usually ruins the accuracy of cross-lingual models.
Confusion of different senses of a surface form: Words in different languages usually have multiple senses. Confusing the different senses of a word causes incorrect translations, because dictionaries translate word to word, not word-sense to word-sense. On the other hand, Word Sense Disambiguation (WSD) is a difficult task and needs additional resources such as high-quality multi-lingual wordnets BIBREF44.
Incorrect translation of prepositions: Prepositions are very informative for the RE task. Hashimoto et al. presented the five most informative unigrams and trigrams for three types of relations of the SemEval-2010 Task 8 dataset BIBREF21, which are shown in Table TABREF47. Wang et al. BIBREF42 also presented the most representative trigrams for different relations on the same dataset. Also, Lahbib et al. BIBREF45 presented the most common Arabic prepositions and showed that each one reflects specific kinds of semantic relations. Confusion of senses for prepositions is a very common issue in word-to-word translation.
Phrasal verbs: Phrasal verbs, which have a metaphorical meaning, often cannot be translated word for word. For example, the Farsi verb “از دست دادن / to give from hand” means “lose". When the most informative chunk of the sentence is a phrasal verb, the proposed method does not capture the true meaning.
In general, greater lexical and structural similarity between the source and target languages increases the accuracy of UD-based transfer learning. As future work, we propose studying the UD-based approach for other cross-lingual learning tasks and other languages, along with different learning algorithms that are capable of dealing with parse trees.
<<</Discussion and Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTransfer Learning via Universal Dependencies\nCross-Lingual Tree-based Models\nCross-Lingual Paraphrase Identification\nCross-Lingual Semantic Relation Extraction\nExperiments\nParaphrase Identification\nConstruction of Parallel Dataset\nTools and Setup\nResults\nEvaluation of tree-based models in mono-lingual learning\nEvaluation of tree-based models with UD in cross-lingual learning\nEvaluation of tree-based models with parse formalisms rather than UD\nRelation Extraction\nResult\nEffect of Multi-Word Expressions\nDiscussion and Conclusion"
],
"type": "outline"
}
|
1909.00338
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Monitoring stance towards vaccination in twitter messages
<<<Abstract>>>
BACKGROUND: We developed a system to automatically classify stance towards vaccination in Twitter messages, with a focus on messages with a negative stance. Such a system makes it possible to monitor the ongoing stream of messages on social media, offering actionable insights into public hesitance with respect to vaccination. At the moment, such monitoring is done by means of regular sentiment analysis with a poor performance on detecting negative stance towards vaccination. For Dutch Twitter messages that mention vaccination-related key terms, we annotated their stance and feeling in relation to vaccination (provided that they referred to this topic). Subsequently, we used these coded data to train and test different machine learning set-ups. With the aim to best identify messages with a negative stance towards vaccination, we compared set-ups at an increasing dataset size and decreasing reliability, at an increasing number of categories to distinguish, and with different classification algorithms.
RESULTS: We found that Support Vector Machines trained on a combination of strictly and laxly labeled data with a more fine-grained labeling yielded the best result, at an F1-score of 0.36 and an Area under the ROC curve of 0.66, considerably outperforming the currently used sentiment analysis that yielded an F1-score of 0.25 and an Area under the ROC curve of 0.57. We also show that the recall of our system could be optimized to 0.60 at little loss of precision.
CONCLUSION: The outcomes of our study indicate that stance prediction by a computerized system only is a challenging task. Nonetheless, the model showed sufficient recall on identifying negative tweets so as to reduce the manual effort of reviewing messages. Our analysis of the data and behavior of our system suggests that an approach is needed in which the use of a larger training dataset is combined with a setting in which a human-in-the-loop provides the system with feedback on its predictions.
<<</Abstract>>>
<<<Background>>>
In the light of increased vaccine hesitance in various countries, consistent monitoring of public beliefs and opinions about the national immunization program is important. Besides performing qualitative research and surveys, real-time monitoring of social media data about vaccination is a valuable tool to this end. The advantage is that one is able to detect and respond to possible vaccine concerns in a timely manner, that it generates continuous data and that it consists of unsolicited, voluntary user-generated content.
Several studies that analyse tweets have already been conducted, providing insight into the content that was tweeted most during the 2009 H1N1 outbreak BIBREF0, the information flow between users with a certain sentiment during this outbreak BIBREF1, or trends in tweets that convey, for example, the worries about the efficacy of HPV vaccines BIBREF2, BIBREF3. While human coders are best at deploying world knowledge and interpreting the intention behind a text, manual coding of tweets is laborious. The above-mentioned studies therefore aimed at developing and evaluating a system to code tweets automatically. There are several systems in place that make use of this automatic coding. The Vaccine Confidence Project BIBREF4 is a real-time worldwide internet monitor for vaccine concerns. The Europe Media Monitor (EMM) BIBREF5 was installed to support EU institutions and Member State organizations with, for example, the analysis of real-time news for medical and health-related topics and with early warning alerts per category and country. MEDISYS, derived from the EMM and developed by the Joint Research Center of the European Commission BIBREF6, is a media monitoring system providing event-based surveillance to rapidly identify potential public health threats based on information from media reports.
These systems cannot be used directly for the Netherlands because they do not contain search words in Dutch, are missing an opinion-detection functionality, or do not include categories of the proper specificity. Furthermore, opinions towards vaccination are contextualized by national debates rather than a multinational debate BIBREF7, which implies that a system for monitoring vaccination stance on Twitter should ideally be trained and applied to tweets with a similar language and nationality. Finally, by creating an automatic system for mining public opinions on vaccination concerns, one can continue training and adapting the system. We therefore believe it will be valuable to build our own system. Besides analysing the content of tweets, several other applications that use social media with regard to vaccination have been proposed. They, for example, use data about internet search activity and numbers of tweets as a proxy for (changes in) vaccination coverage or for estimating epidemiological patterns. Huang et al. BIBREF8 found a high positive correlation between reported influenza attitude and behavior on Twitter and influenza vaccination coverage in the US. In contrast, Aquino et al. BIBREF9 found an inverse correlation between Mumps, Measles, Rubella (MMR) vaccination coverage and tweets, Facebook posts and internet search activity about autism and MMR vaccine in Italy. This outcome was possibly due to a decision of the Court of Justice in one of the regions to award vaccine-injury compensation for a case of autism. Wagner, Lampos, Cox and Pebody BIBREF10 assessed the usefulness of geolocated Twitter posts and Google search as source data to model influenza rates, by measuring their fit to the traditional surveillance outcomes and analyzing the data quality. They find that Google search could be a useful alternative to the regular means of surveillance, while Twitter posts do not correlate well due to a lower volume and bias in demographics. Lampos, de Bie and Cristianini BIBREF11 also make use of geolocated Twitter posts to track epidemics, and present a monitoring tool with a daily flu-score based on weighted keywords.
Various studies BIBREF12, BIBREF13, BIBREF14 show that estimates of influenza-like illness symptoms mentioned on Twitter can be exploited to track reported disease levels relatively accurately. However, other studies BIBREF15, BIBREF16 showed that this was only the case when looking at severe cases (e.g. hospitalizations, deaths) or only for the start of the epidemic when interest from journalists was still high.
Other research focuses on detecting discussion communities on vaccination in Twitter BIBREF17 or analysing semantic networks BIBREF18 to identify the most relevant and influential users as well as to better understand complex drivers of vaccine hesitancy for public health communication. Tangherlini et al. BIBREF19 explore what can be learned about the vaccination discussion from the realm of “mommy blogs”: parents posting messages about children’s health care on forum websites. They aim to obtain insights in the underlying narrative frameworks, and analyse the topics of the messages using Latent Dirichlet Allocation (LDA) BIBREF20. They find that the most prominent frame is a focus on the exemption of one’s child from receiving a vaccination in school. The motivation against vaccination is most prominently based on personal belief about health, but could also be grounded in religion. Surian et al. BIBREF21 also apply topic modeling to distinguish dominant opinions in the discussion about vaccination, and focus on HPV vaccination as discussed on Twitter. They find a common distinction between tweets reporting on personal experience and tweets that they characterize as `evidence’ (statements of having had a vaccination) and `advocacy’ (statements that support vaccination).
Most similar to our work is the study by Du, Xu, Song, Liu and Tao BIBREF2. With the ultimate aim to improve the vaccine uptake, they applied supervised machine learning to analyse the stance towards vaccination as conveyed on social media. Messages were labeled as either related to vaccination or unrelated, and, when related, as ‘positive’, ‘negative’ or ‘neutral’. The ‘negative’ category was further broken down into several considerations, such as ‘safety’ and ‘cost’. After having annotated 6,000 tweets, they trained a classifier on different combinations of features, obtaining the highest macro F1-score (the average of the separate F1-scores for each prediction category) of $0.50$ and micro F1-score (the F1-score over all predictions) of $0.73$. Tweets with a negative stance that point to safety risks could best be predicted, at an optimal F1 score of $0.75$, while the other five sub-categories with a negative stance were predicted at an F1 score below $0.5$ or even $0.0$.
Like Du et al. BIBREF2, we focus on analysing sentiment about vaccination using Twitter as a data source and applying supervised machine learning approaches to extract public opinion from tweets automatically. In contrast, in our evaluation we focus on detecting messages with a negative stance in particular. Accurately monitoring such messages helps to recognize discord in an early stage and take appropriate action. We do train machine learning classifiers on modeling other categories than the negative stance, evaluating whether this is beneficial to detecting tweets with a negative stance. For example, we study whether it is beneficial to this task to model tweets with a positive and neutral stance as well. We also inquire whether a more fine-grained categorization of sentiment (e.g.: worry, relief, frustration and informing) offers an advantage. Apart from comparing performance in the context of different categorizations, we compare different machine learning algorithms and compare data with different levels of annotation reliability. Finally, the performance of the resulting systems is compared to regular sentiment analysis common to social media monitoring dashboards. At the public health institute in the Netherlands, we make use of social media monitoring tools offered by Coosto. For defining whether a message is positive, negative or neutral with regard to vaccination, this system makes use of the presence or absence of positive or negative words in the messages. We believe that we could increase the sensitivity and specificity of the sentiment analysis by using supervised machine learning approaches trained on a manually coded dataset. The performance of our machine learning approaches is therefore compared to the sentiment analysis that is currently applied in the Coosto tool.
<<</Background>>>
<<<Implementation>>>
We set out to curate a corpus of tweets annotated for their stance towards vaccination, and to employ this corpus to train a machine learning classifier to distinguish tweets with a negative stance towards vaccination from other tweets. In the following, we will describe the stages of data acquisition, from collection to labeling.
<<<Data collection>>>
We queried Twitter messages that refer to a vaccination-related key term from TwiNL, a database with IDs of Dutch Twitter messages from January 2012 onwards BIBREF22. In contrast to the open Twitter Search API, which only allows one to query tweets posted within the last seven days, TwiNL makes it possible to collect a much larger sample of Twitter posts, spanning several years.
We queried TwiNL for different key terms that relate to the topic of vaccination in a five-year period, ranging from January 1, 2012 until February 8, 2017. Query terms that we used were the word `vaccinatie’ (Dutch for `vaccination’) and six other terms closely related to vaccination, with and without a hashtag (`#’). Among the six words is `rijksvaccinatieprogramma’, which refers to the vaccination programme in The Netherlands. An overview of all query terms along with the number of tweets that could be collected based on them is displayed in Table TABREF5.
We collected a total of 96,566 tweets from TwiNL, which we filtered in a number of ways. First, retweets were removed, as we wanted to focus on unique messages. This led to a removal of 31% of the messages. Second, we filtered out messages that contain a URL. Such messages often share a news headline and include a URL to refer to the complete news message. As a news headline does not reflect the stance of the person who posted the tweet, we decided to apply this filtering step. It is likely that part of the messages with a URL do include a message composed by the sender itself, but this step helps to clean many unwanted messages. Third, we removed messages that include a word related to animals and traveling (`dier’, animal; `landbouw’, agriculture; and `teek’, tick), as we strictly focus on messages that refer to vaccination that is part of the governmental vaccination program. 27,534 messages were left after filtering. This is the data set that is used for experimentation.
<<</Data collection>>>
<<<Data annotation>>>
The stance towards vaccination was categorized into `Negative’, `Neutral’, `Positive’ and `Not clear’. The latter category was essential, as some posts do not convey enough information about the stance of the writer. In addition to the four-valued stance classes we included separate classes grouped under relevance, subject and sentiment as annotation categories. With these additional categorizations we aimed to obtain a precise grasp of all possibly relevant tweet characteristics in relation to vaccination, which could help in a machine learning setting.
The relevance categories were divided into `Relevant’, `Relevant abroad’ and `Irrelevant’. Despite our selection of vaccination-related keywords, tweets that mention these words might not refer to vaccination at all. A word like `vaccine’ might be used in a metaphorical sense, or the tweet might refer to vaccination of animals.
The subject categorization was included to describe what the tweet is about primarily: `Vaccine’, `Disease’ or `Both’. We expected that a significant part of the tweets would focus on the severeness of a disease when discussing vaccination. Distinguishing these tweets could help the detection of the stance as well.
Finally, the sentiment of tweets was categorized into `Informative’, `Angry/Frustration’, `Worried/Fear/Doubts’, `Relieved’ and `Other’, where the latter category lumps together occasional cases of humor, sarcasm, personal experience, and question raised. These categories were based on the article by BIBREF0, and emerged from analysing their H1N1-related tweets. The `Informative’ category refers to a typical type of message in which information is shared, potentially in support of a negative or positive stance towards vaccination. If the message contained more than one sentiment, the first sentiment identified was chosen. Table TABREF6 shows examples of tweets for the above-mentioned categories.
We aimed at a sufficient number of annotated tweets to feed a machine learning classifier with. The majority of tweets were annotated twice. We built an annotation interface catered to the task. Upon being presented with the text of a Twitter post, the annotator was first asked whether the tweet was relevant. In case it was deemed relevant, the tweet could be annotated for the other categorizations. Otherwise, the user could click `OK’ after which he or she was directly presented with a new Twitter post. The annotator was presented with sampled messages that were either not annotated yet or annotated once. We ensured a fairly equal distribution of these two types, so that most tweets would be annotated twice.
As annotators, we hired four student assistants and additionally made use of the Radboud Research Participation System. We asked participants to annotate for the duration of an hour, in exchange for a voucher valued ten Euros, or one course credit. Before starting the annotation, the participants were asked to read the annotation manual, with examples and an extensive description of the categories, and were presented with a short training round in which feedback on their annotations was given. The annotation period lasted for six weeks. We stopped when the number of applicants dropped.
A total of 8,259 tweets were annotated, of which 6,472 were annotated twice (78%). 65 annotators joined in the study, with an average of $229.5$ annotated tweets per person. The number of annotations per person varied considerably, with $2,388$ tweets coded by the most active annotator. This variation is due to the different ways in which annotators were recruited: student-assistants were recruited for several days, while participants recruited through the Radboud Research Participation System could only join for the duration of an hour.
We calculated inter-annotator agreement by Krippendorff's Alpha BIBREF23, which accounts for different annotator pairs and empty values. To also zoom in on the particular agreement by category, we calculated mutual F-scores for each of the categories. This metric is typically used to evaluate system performance by category on gold standard data, but could also be applied to annotation pairs by alternating the roles of the two annotators between classifier and ground truth. A summary of the agreement by categorization is given in Table TABREF10. While both the Relevance and Subject categorizations are annotated at a percent agreement of $0.71$ and $0.70$, their agreement scores are only fair, at $\alpha =0.27$ and $\alpha =0.29$. The percent agreement on Stance and Sentiment, which carry more categories than the former two, is $0.54$ for both. Their agreement scores are also fair, at $\alpha =0.35$ and $\alpha =0.34$. The mutual F-scores show marked differences in agreement by category, where the categories that were annotated most often typically yield a higher score. This holds for the Relevant category ($0.81$), the Vaccine category ($0.79$) and the Positive category ($0.64$). The Negative category yields a mutual F-score of $0.42$, which is higher than the more frequently annotated categories Neutral ($0.23$) and Not clear ($0.31$). We found that these categories are often confused. After combining the annotations of the two, the stance agreement would be increased to $\alpha =0.43$.
The rather low agreement over the annotation categories indicates the difficulty of interpreting stance and sentiment in tweets that discuss the topic of vaccination. We therefore proceed with caution to categorize the data for training and testing our models. The agreed upon tweets will form the basis of our experimental data, as was proposed by Jakubiçek, Kovar and Rychly BIBREF24, while the other data is added as additional training material to see if the added quantity is beneficial to performance. We will also annotate a sample of the agreed upon tweets, to make sure that these data are reliable in spite of the low agreement rate.
<<</Data annotation>>>
<<<Data categorization>>>
The labeled data that we composed based on the annotated tweets are displayed in Table TABREF11. We combined the Relevant and Relevant abroad categories into one category (`Relevant’), as only a small part of the tweets was annotated as Relevant abroad. We did not make use of the subject annotations, as a small minority of the tweets that were relevant referred to a disease only. For the most important categorization, stance, we included all annotated labels. Finally, we combined part of the more frequent sentiment categories with Positive.
We distinguish three types of labeled tweets: `strict’, `lax’ and `one’. The strictly labeled tweets were labeled by both annotators with the same label. The lax labels describe tweets that were only annotated with a certain category by one of the coders. The categories were ordered by importance to decide on the lax labels. For instance, in case of the third categorization, Negative was preferred over Positive, followed by Neutral, Not clear and Irrelevant. If one of the annotators labeled a tweet as Positive and the other as Neutral, the lax label for this tweet is Positive. In table TABREF11, the categories are ordered by preference as imposed on the lax labeling. The `one' labeling applies to all tweets that were annotated by only one annotator. Note that the total counts can differ between label categorizations due to the lax labeling: the counts for Positive labels in the Polarity + sentiment labeling (Positive + Frustration, Positive + Information and Positive + other) do not add up to the count of the Positive label in the Polarity labeling.
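The lax-label resolution can be sketched as a small priority-based function; the category names below follow the Polarity labeling, and the function name itself is ours.

# Priority order for the Polarity labeling; the other labelings use their own
# orderings, as listed in the table.
POLARITY_PRIORITY = ["Negative", "Positive", "Neutral", "Not clear", "Irrelevant"]

def lax_label(label_a: str, label_b: str, priority=POLARITY_PRIORITY) -> str:
    """Resolve the annotations of two coders into a single lax label."""
    if label_a == label_b:
        return label_a                    # agreed labels are the strict labels
    # Otherwise, prefer the category that comes first in the priority order.
    return min(label_a, label_b, key=priority.index)

print(lax_label("Positive", "Neutral"))   # -> Positive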
With the `strict’, `lax’ and `one’ labeling, we end up with four variants of data to experiment with: only strict, strict + lax, strict + one and strict + lax + one. The strict data, which are most reliable, are used in all variants. By comparing different combinations of training data, we test whether the addition of less reliably labeled data (lax and/or one) boosts performance.
The four labelings have an increasing granularity, where the numbers of examples for the Negative category are stable across each labeling. In the first labeling, these examples are contrasted with any other tweet. It hence comprises a binary classification task. In the second labeling, irrelevant tweets are indicated in a separate category. The Other class here represents all relevant tweets that do not convey a negative stance towards vaccination. In the third labeling, this class is specified as the stance categories Positive, Neutral and Not clear. In the fourth labeling, the Positive category, which is the most frequent polarity class, is further split into `Positive + frustration’, `Positive + Information’ and `Positive + Other’. Positivity about vaccination combined with a frustration sentiment reflects tweets that convey frustration about the arguments of people who are negative about vaccination (e.g.: "I just read that a 17 year old girl died of the measles. Because she did not want an inoculation due to strict religious beliefs. -.- #ridiculous"). The Positive + Information category reflects tweets that provide information in favor of vaccination, or combined with a positive stance towards vaccination (e.g.: "#shingles is especially common with the elderly and chronically diseased. #vaccination can prevent much suffering. #prevention").
In line with Kovár, Rychlý and Jakubíček BIBREF25, we evaluate system performance only on the reliable part of the annotations - the instances labeled with the same label by two annotators. As the overall agreement is not sufficient, with Krippendorff's Alpha ranging between $0.27$ and $0.35$, the first author annotated 300 tweets sampled from the strict data (without knowledge of the annotations) to rule out the possibility that these agreed upon annotations are due to chance agreement. Comparing these new annotations to the original ones, the Negative category and the Positive category are agreed upon at mutual F-scores of $0.70$ and $0.81$. The percent agreement on the binary classification scheme (e.g.: Negative versus Other) is $0.92$, with $\alpha =0.67$, which decreases to $\alpha =0.55$ for the Relevance categorization, $\alpha =0.54$ for the Polarity categorization and $\alpha =0.43$ for the Polarity + Sentiment categorization. We find that instances of a negative and positive stance can be clearly identified by humans, while the labels Neutral and Not Clear are less clear cut. Since it is our focus to model tweets with a negative stance, the agreement on the binary decision between Negative and Other is just sufficient to use for experimentation based on Krippendorff’s BIBREF26 remark that “$\alpha \ge .667$ is the lowest conceivable limit” (p.241). In our experimental set-up we will therefore only evaluate our system performance on distinguishing the Negative category from any other category in the strict data.
<<</Data categorization>>>
<<<Experimental Set-up>>>
For each combination of labeling (four types of labeling) and training data (four combinations of training data) we train a machine learning classifier to best distinguish the given labels. Two different classifiers are compared: Multinomial Naive Bayes and Support Vector Machines (SVM). In total, this makes for 32 variants (4 labelings $\times $ 4 combinations of training data $\times $ 2 classifiers). All settings are tested through ten-fold cross-validation on the strict data and are compared against two rule-based sentiment analysis baselines and two random baselines. All components of the experimental set-up are described in more detail below.
<<<Preprocessing>>>
To properly distinguish word tokens and punctuation we tokenized the tweets by means of Ucto, a rule-based tokenizer with good performance on the Dutch language, and with a configuration specific for Twitter. Tokens were lowercased in order to focus on the content. Punctuation was maintained, as well as emoji and emoticons. Such markers could be predictive in the context of a discussion such as vaccination. To account for sequences of words and characters that might carry useful information, we extracted word unigrams, bigrams, and trigrams as features. Features were coded binary, i.e. set to 1 if a feature is seen in a message and set to 0 otherwise. During training, all features apart from the top 15,000 most frequent ones were removed.
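A sketch of this feature extraction with scikit-learn is given below; it assumes the tweets have already been tokenized by Ucto and joined with spaces, so whitespace splitting preserves punctuation, emoticons and emoji as separate tokens. The toy tweets are only placeholders.

from sklearn.feature_extraction.text import CountVectorizer

train_tweets = [
    "vaccinatie is belangrijk !",                          # placeholder Ucto-style text
    "ik twijfel over het rijksvaccinatieprogramma ...",
]

# Word uni-, bi- and trigrams, binary-coded, capped at the 15,000 most
# frequent features across the corpus.
vectorizer = CountVectorizer(
    tokenizer=str.split,      # whitespace tokenization of the Ucto output
    lowercase=True,
    ngram_range=(1, 3),
    binary=True,
    max_features=15000,
)
X_train = vectorizer.fit_transform(train_tweets)
print(X_train.shape)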
<<</Preprocessing>>>
<<<Machine Learning>>>
We applied two machine learning algorithms with a different perspective on the data: Multinomial Naive Bayes and SVM. The former algorithm is often used on textual data. It models the Bayesian probability of features to belong to a class and makes predictions based on a linear calculation. Features are naively seen as independent of one another BIBREF27. In their simplest form, SVMs are binary linear classifiers that make use of kernels. They search for the optimal hyperplane in the feature space that maximizes the geometric margin between any two classes. The advantage of SVMs is that they provide a solution to a global optimization problem, thereby reducing the generalization error of the classifier BIBREF28.
We applied both algorithms by means of the scikit-learn toolkit, a python library that offers implementations of many machine learning algorithms BIBREF29. To cope with imbalance in the number of instances per label, for Multinomial Naive Bayes we set the Alpha parameter to $0.0$ and muted the fit prior. For SVM, we used a linear kernel with the $C$ parameter set to $1.0$ and a balanced class weight.
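The two classifier configurations can be sketched as follows; the toy feature matrix and labels are placeholders. The probability=True argument is our addition so that per-class probability estimates are available for the threshold analysis discussed later; scikit-learn also warns that alpha=0.0 disables smoothing but accepts it.

from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(60, 100))              # toy binary n-gram features
y_train = rng.choice(["Negative", "Other"], size=60)      # toy stance labels

# Multinomial Naive Bayes with the settings described above.
nb = MultinomialNB(alpha=0.0, fit_prior=False).fit(X_train, y_train)

# Linear SVM with a balanced class weight.
svm = SVC(kernel="linear", C=1.0, class_weight="balanced",
          probability=True).fit(X_train, y_train)

print(nb.predict(X_train[:3]))
print(svm.predict(X_train[:3]))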
<<</Machine Learning>>>
<<<Baselines>>>
As baselines, we applied two rule-based sentiment analysis systems for Dutch as well as two random baselines. The first rule-based sentiment analysis system is Pattern, an off-the-shelf sentiment analysis system that makes use of a list of adjectives with a positive or negative weight, based on human annotations BIBREF30. Sentences are assigned a score between $-1.0$ and $1.0$ by multiplying the scores of their adjectives. Bigrams like `horribly good’ are seen as one adjective, where the adjective `horribly’ increases the positivity score of `good’. We translated the polarity score into the discrete labels `Negative’, `Positive’ and `Neutral’ by using the training data to infer which threshold leads to the best performance on the `Negative’ category.
The second baseline is the sentiment analysis offered by the social media monitoring dashboard Coosto. As Coosto is a commercial product, there is no public documentation on their sentiment analysis tool.
In addition to these two baselines, we applied two random baselines: predicting the negative class randomly for 50% of the messages and predicting the negative class randomly for 15% of the messages. The latter proportion relates to the proportion of vaccination-hesitant tweets in the strictly labeled data on which we test the systems.
<<</Baselines>>>
<<</Experimental Set-up>>>
<<<Evaluation>>>
We evaluate performance by means of ten-fold cross-validation on the strictly labeled data. In each of the folds, 90% of the strictly labeled data is used as training data, which are complemented with the laxly labeled data and/or the data labeled by one annotator, in three of the four training data variants. Performance is always tested on the strict data. As evaluation metrics we calculate the F1-score and the Area Under the ROC Curve (AUC) on predicting the negative stance towards vaccination in the test tweets.
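The evaluation loop can be sketched as follows, assuming dense numpy feature matrices and string labels; X_extra/y_extra stand for the laxly labeled and/or singly annotated data, which are only ever added to the training side of a fold.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.svm import SVC

def cross_validate(X_strict, y_strict, X_extra=None, y_extra=None, n_splits=10):
    """Ten-fold CV on the strictly labeled data; performance is always measured
    on strict data only (sketch for the binary Negative-vs-Other setting)."""
    f1s, aucs = [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=1)
    for train_idx, test_idx in skf.split(X_strict, y_strict):
        X_tr, y_tr = X_strict[train_idx], y_strict[train_idx]
        if X_extra is not None:
            X_tr = np.vstack([X_tr, X_extra])
            y_tr = np.concatenate([y_tr, y_extra])
        clf = SVC(kernel="linear", C=1.0, class_weight="balanced",
                  probability=True).fit(X_tr, y_tr)
        neg_col = list(clf.classes_).index("Negative")
        proba_neg = clf.predict_proba(X_strict[test_idx])[:, neg_col]
        pred = clf.predict(X_strict[test_idx])
        f1s.append(f1_score(y_strict[test_idx], pred, pos_label="Negative"))
        aucs.append(roc_auc_score(y_strict[test_idx] == "Negative", proba_neg))
    return float(np.mean(f1s)), float(np.mean(aucs))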
<<</Evaluation>>>
<<</Implementation>>>
<<<Results>>>
We trained machine learning (ML) classifiers to distinguish Twitter messages with a negative stance towards vaccination, alternating three aspects of the system: the labels to train on, the composition of the training data and the ML algorithm. The results are presented in Table TABREF15, as the F1-score and AUC of any setting on correctly predicting tweets with a negative stance. Systems with specific combinations of the ML classifier and size of the training data are given in the rows of the table. The four types of labelings are listed in the columns.
The results show a tendency for each of the three manipulations. Regarding the ML algorithm, SVM consistently outperforms Naive Bayes for this task. Furthermore, adding additional training data, albeit less reliable, generally improves performance. Training a model on all available data (strict + lax + one) leads to an improvement over using only the strict data, while adding only the laxly labeled data is generally better than using all data. Adding only the data labeled by one annotator often leads to a worse performance. With respect to the labeling, the Polarity-sentiment labeling generally leads to the best outcomes, although the overall best outcome is yielded by training an SVM on Polarity labeling with strict data appended by lax data, at an area under the curve score of $0.66$.
The best reported performance is an F1-score of $0.36$ and an AUC of $0.66$. In comparison to the baselines (Table TABREF16), these scores are considerably higher. Nevertheless, there is room for improvement. The performance of the random baselines, with F1-scores of $0.18$ (50%) and $0.13$ (15%), indicates that the minimal performance on this task is rather low. The rule-based sentiment analyses yield better performances, at an F1-score of $0.20$ for Pattern and $0.25$ for Coosto. To analyse the behavior of the best ML system, we present a confusion table of its classifications in Table TABREF17. The Irrelevant category is most often classified with one of the other categories, while the Positive and Negative categories are the biggest confusables. The classifier is possibly identifying features that denote a stance, but struggles to distinguish positive from negative.
To gain insight into the potential of increasing the amount of training data, we applied the best ML system (SVM trained on strict and lax data on the polarity labels) on 10% of the strictly labeled data, starting with a small sample of the data and increasing it to all available data (excluding the test data). The learning curve is presented in Figure FIGREF18. It shows an improved performance until the last training data is added, indicating that more training data would likely yield better performance.
<<<Comparison machine learning and rule-based sentiment analysis>>>
A confusion table of the predictions of the best of the two rule-based baselines, Pattern, and the best ML system is displayed in Table TABREF19. Only 192 tweets are labeled by both systems as Negative, while the best ML system accounts for almost double this amount and Pattern for three times as much. Comparing the predictions to the gold standard labeling, 99 of the tweets predicted only by the best ML system as Negative are correct (27%), opposed to 51 that are exclusive to Pattern (8%). Of the tweets that were classified by both as negative, 63 are correct (33%). This shows that the approaches have a rather complementary view on tweets with a negative stance.
To gain more insight into the behavior of both approaches, we applied them to 15,577 unlabeled tweets. Table TABREF20 presents a confusion table with the numbers of tweets that were classified as Negative or another category by both approaches. Again, Pattern accounts for the majority of negatively labeled messages, and the overlap is small. Two of the authors validated for a sample of 600 messages whether they actually manifested a negative attitude towards vaccination: 200 messages that were uniquely classified by the best ML system as Negative, 200 messages that were solely labeled as Negative by Pattern and 200 messages that were classified by both systems as Negative. This validation showed the same tendency as for the labeled data, with a higher precision of the best ML system in comparison to Pattern (33.5% versus 21% of the messages correctly predicted) and the highest precision when both systems predicted the negative class (36%).
The complementary view on tweets with a negative stance between the best ML system and rule-based sentiment analysis becomes clear from their differing predictions. To make this difference concrete, we present a selection of the messages predicted as Negative by both systems in Table TABREF21. The first three are only predicted by the best ML system as Negative, and not by Pattern, while the fourth until the sixth examples are only seen as Negative by Pattern. Where the former give arguments (`can not be compared...’, `kids are dying from it’) or take stance (`I’m opposed to...’), the latter examples display more intensified words and exclamations (`that’s the message!!’, `Arrogant’, `horrific’) and aggression towards a person or organization. The last three tweets are seen by both systems as Negative. They are characterized by intensified words that linked strongly to a negative stance towards vaccination (`dangerous’, `suffering’, `get lost with your compulsory vaccination’).
Table TABREF21 also features tweets that were predicted as Negative by neither the best ML-system nor Pattern, representing the most difficult instances of the task. The first two tweets include markers that explicitly point to a negative stance, such as `not been proven' and `vaccinating is nonsense'. The third tweet manifests a negative stance by means of the sarcastic phrase `way to go' (English translation). The use of sarcasm, where typically positive words are used to convey a negative valence, complicates this task of stance prediction. The last tweet advocates an alternative to vaccination, which implicitly can be explained as a negative stance towards vaccination. Such implicitly packaged viewpoints also hamper the prediction of negative stance. Both sarcasm and implicit stance could be addressed by specific modules.
<<</Comparison machine learning and rule-based sentiment analysis>>>
<<<Improving recall>>>
For monitoring the number of Twitter messages over time that are negative towards vaccination, it is arguably more important to detect them at a high recall than at a high precision. False positives (messages incorrectly flagged as Negative) could be filtered manually by a human end user, while False Negatives (messages with a negative stance that are not detected) will be missed. We set out to improve recall, making use of classifier confidence scores and the complementary classifications of Pattern and the best ML system.
A first recall-improving approach is to reset the prediction threshold for the Negative category. For any given instance, the SVM classifier estimates the probability of all categories it was trained on. It will predict the Negative category for an instance if its probability exceeds the probabilities of the other categories. This prediction can be altered by changing the threshold; setting the threshold higher will generally mean that fewer instances will be predicted as a Negative category (corresponding to a higher precision), whereas setting it lower will mean more instances will be predicted as such (corresponding to a higher recall). Thus, the balance between precision and recall can be set as desired, to favor one or another. However, in many cases, changing the threshold will not lead to a (strong) increase in overall performance.
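A minimal sketch of such a threshold-based decision rule is given below, assuming a scikit-learn SVM trained with probability estimates enabled; the label name 'Negative' and the threshold value are placeholders.

    def predict_with_threshold(clf, X, negative_label="Negative", threshold=0.3):
        """Predict the negative class whenever its estimated probability exceeds
        the chosen threshold; otherwise fall back to the most likely other class.
        `clf` is assumed to be an SVC fitted with probability=True."""
        proba = clf.predict_proba(X)
        classes = list(clf.classes_)
        neg_idx = classes.index(negative_label)
        preds = []
        for row in proba:
            if row[neg_idx] >= threshold:
                preds.append(negative_label)
            else:
                other = [(p, c) for p, c in zip(row, classes) if c != negative_label]
                preds.append(max(other)[1])   # argmax over the remaining classes
        return preds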
Figure FIGREF22 presents the balance between recall and precision as a result of predicting the Negative category with the best ML system, when the threshold for this category is altered from lowest to highest. Compared to the standard recall of $0.43$ at a precision of $0.29$, increasing the recall to $0.60$ would lead to a drop of precision to $0.21$. The F1-score would then decrease to $0.31$.
A second means by which recall might be improved is to employ ensemble classification. The comparison in the previous section between the best ML method and rule-based sentiment analysis revealed that both systems have a rather disjoint perspective on negative stance: many more tweets are labeled as `Negative' by only one of the two systems than by both. We therefore built an ensemble system that follows both systems in their perspective on tweets with a negative stance: for each tweet, if either of the systems predicts the Negative category, the ensemble system makes this prediction.
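A sketch of this ensemble rule is shown below, assuming the per-tweet predictions of the ML system and of the rule-based system are already available as two label lists; the label name is a placeholder.

    def ensemble_negative(ml_labels, rule_labels, negative_label="Negative"):
        """Predict Negative if either the ML classifier or the rule-based
        sentiment system labels the tweet as such; otherwise keep the ML label."""
        combined = []
        for ml, rule in zip(ml_labels, rule_labels):
            if ml == negative_label or rule == negative_label:
                combined.append(negative_label)
            else:
                combined.append(ml)
        return combined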
The performance of the ensemble system is presented in Table TABREF23. Of the 343 tweets in the test set that are labeled as Negative, 210 are retrieved by the ensemble system. The result is a recall of $0.61$. The system does overshoot in its categorization of tweets as Negative: this category is predicted for 1,168 tweets (about 40% of total test set of 2,886 tweets). The result is a precision of $0.18$. In comparison to lowering the prediction threshold of the ML system, the ensemble system thus yields a slightly worse trade-off between precision and recall.
<<</Improving recall>>>
<<</Results>>>
<<<Discussion>>>
With an F1-score of $0.36$, our system lags behind the $0.75$ F1-score reported by Du et al.BIBREF2. Several factors might have influenced this difference. A first factor is the low proportion of tweets with the label `Negative' in our dataset. In the strict labeling condition, only 343 cases are labeled as negative by two annotators, against 2,543 labeled as positive – the negative cases only comprise 13% of all instances. In the study of Du et al., the anti-vaccination category comprises 24% of all instances (1,445 tweets). More (reliable) examples might have helped in our study to train a better model of negative tweets. Secondly, Du et al. BIBREF2 focused on the English language domain, while we worked with Dutch Twitter messages. The Dutch Twitter realm harbors less data to study than the English one, and might bring forward different discussions when it comes to the topic of vaccination. It could be that the senders' stance towards vaccination is more difficult to pinpoint within these discussions. In line with this language difference, a third prominent factor that might have led to a higher performance in the study of Du et al.BIBREF2 is that they focus on a particular case of vaccination (e.g.: HPV vaccination) and split the anti-vaccination category into several more specific categories that describe the motivation of this stance. The diverse motivations for being against vaccination are indeed reflected in several other studies that focus on identifying discussion communities and viewpoints BIBREF17, BIBREF21, BIBREF19. While splitting the data into more specific categories will lead to less examples per category, it could boost performance on predicting certain categories due to a larger homogeneity. Indeed, the most dominant negative category in the study by Du et al.BIBREF2, dubbed `NegSafety' and occurring in 912 tweets (63% of all negative tweets), yielded the highest F1-score of $0.75$. While two less frequent categories were predicted at an F1-score of $0.0$, this outcome shows the benefit of breaking down the motivations behind a negative stance towards vaccination.
A major limitation of our study is that the agreement rates for all categorizations are low. This is also the case in other studies, like BIBREF8, who report an agreement of $K = 0.40$ on polarity categorization. Foremost, this reflects the difficulty of the task. The way in which the stance towards vaccination is manifested in a tweet depends on the author, his or her specific viewpoint, the moment in time at which a tweet was posted, and the possible conversation thread that precedes it. Making a judgment solely based on the text could be difficult without this context. Agreement could possibly be improved by presenting the annotator with the preceding conversation as context to the text. Furthermore, tweets could be coded by more than two annotators. This would give insight into the subtleties of the data, with a graded scale of tweets that clearly manifest a negative stance towards vaccination to tweets that merely hint at such a stance. Such a procedure could likewise help to generate more reliable examples to train a machine learning classifier.
The low agreement rates also indicate that measuring stance towards vaccination in tweets is a too difficult task to assign only to a machine. We believe that the human-in-the-loop could be an important asset in any monitoring dashboard that focuses on stance in particular discussions. The system will have an important role in filtering the bigger stream of messages, leaving the human ideally with a controllable set of messages to sift through to end up with reliable statistics on the stance that is seen in the discussion at any point in time. In the analysis section, we explored two approaches to increase recall of messages with a negative stance, which would be most useful in this scenario. Lowering the prediction threshold showed to be most effective to this end.
Our primary aim in future work is to improve performance. We did not experiment with different types of features in our current study. Word embeddings might help to include more semantics in our classifier’s model. In addition, domain knowledge could be added by including word lists, and different components might be combined to address different features of the data (e.g.: sarcasm and implicit stance). We also aim to divide the negative category into the specific motivations behind a negative stance towards vaccination, like in the study of Du et al.BIBREF2, so as to obtain more homogeneous categories. Parallel to this new categorization of the data, adding more labeled data appears to be the most effective way to improve our model. The learning curve that we present in Figure FIGREF18 shows that there is no performance plateau reached with the current size of the data. An active learning setting BIBREF31, starting with the current system, could be applied to select additional tweets to annotate. Such a setting could be incorporated in the practical scenario where a human-in-the-loop judges the messages that were flagged as displaying a negative stance by the system. The messages that are judged as correctly and incorrectly predicted could be added as additional reliable training data to improve upon the model. We have installed a dashboard that is catered for such a procedure, starting with the machine learning system that yielded the best performance in our current study.
<<</Discussion>>>
<<<Conclusions>>>
We set out to train a classifier to distinguish Twitter messages that display a negative stance towards vaccination from other messages that discuss the topic of vaccination. Based on a set of 8,259 tweets that mention a vaccination-related keyword, annotated for their relevance, stance and sentiment, we tested a range of machine learning classifiers, varying the algorithm, the reliability of the training data and the labels to train on. The best performance, with a precision of $0.29$, a recall of $0.43$, an F1-score of $0.36$ and an AUC of $0.66$, was yielded by training an SVM classifier on strictly and laxly labeled data to distinguish irrelevant tweets and polarity categories. The baselines, with an optimal F1-score of $0.25$ (rule-based sentiment analysis), were considerably outperformed. The latter shows the benefit of machine-learned classifiers on domain-specific sentiment: despite being trained on a reasonably small amount of data, the machine-learning approach outperforms general-purpose sentiment analysis tools.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nBackground\nImplementation\nData collection\nData annotation\nData categorization\nExperimental Set-up\nPreprocessing\nMachine Learning\nBaselines\nEvaluation\nResults\nComparison machine learning and rule-based sentiment analysis\nImproving recall\nDiscussion\nConclusions"
],
"type": "outline"
}
|
2004.01894
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Evaluating Multimodal Representations on Visual Semantic Textual Similarity
<<<Abstract>>>
The combination of visual and textual representations has produced excellent results in tasks such as image captioning and visual question answering, but the inference capabilities of multimodal representations are largely untested. In the case of textual representations, inference tasks such as Textual Entailment and Semantic Textual Similarity have been often used to benchmark the quality of textual representations. The long term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We thus present a novel task, Visual Semantic Textual Similarity (vSTS), where such inference ability can be tested directly. Given two items comprised each by an image and its accompanying caption, vSTS systems need to assess the degree to which the captions in context are semantically equivalent to each other. Our experiments using simple multimodal representations show that the addition of image representations produces better inference, compared to text-only representations. The improvement is observed both when directly computing the similarity between the representations of the two items, and when learning a siamese network based on vSTS training data. Our work shows, for the first time, the successful contribution of visual information to textual inference, with ample room for benchmarking more complex multimodal representation options.
<<</Abstract>>>
<<<Introduction>>>
Language understanding is a task that is proving difficult to automatize, because, among other factors, much of the information that is needed for the correct interpretation of an utterance is not explicit in text BIBREF0. This contrasts with how natural language understanding is for humans, who can cope easily with information absent from the text, using common sense and background knowledge like, for instance, typical spatial relations between objects. From another perspective, it is well known that the visual modality provides information complementary to that in the text. In fact, recent advances in deep learning research have led the fields of computer vision and natural language processing to significant progress in tasks that involve visual and textual understanding. Such tasks include Image Captioning BIBREF1, Visual Question Answering BIBREF2, and Visual Machine Translation BIBREF3, among others.
On the other hand, progress in language understanding has been driven by datasets which measure the quality of sentence representations, specially those where inference tasks are performed on top of sentence representations, including textual entailment BIBREF4, BIBREF5 and semantic textual similarity (STS). In STS BIBREF6, for instance, pairs of sentences have been annotated with similarity scores, with top scores for semantically equivalent sentences and bottom scores for completely unrelated sentences. STS provides a unified framework for extrinsic evaluation of multiple semantic aspects such as compositionality and phrase similarity. Contrary to related tasks, such as textual entailment and paraphrase detection, STS incorporates the notion of graded semantic similarity between the pair of textual sentences and is symmetric.
In this paper we extend STS to the visual modality, and present Visual Semantic Textual Similarity (vSTS), a task and dataset which allows to study whether better sentence representations can be built when having access to the corresponding images, in contrast with having access to the text alone. Similar to STS, annotators were asked to score the similarity between two items, but in this case each item comprises an image and a textual caption. Systems need to predict the human score. Figure FIGREF1 shows an instance in the dataset, with similarity scores in the captions. The example illustrates the need to re-score the similarity values, as the text-only similarity is not applicable to the multimodal version of the dataset: the annotators return a low similarity when using only text, while, when having access to the corresponding image, they return a high similarity. Although a dataset for multimodal inference exists (visual textual entailment BIBREF7) that dataset reused the text-only inference labels.
The vSTS dataset aims to become a standard benchmark to test the contribution of visual information when evaluating the similarity of sentences and the quality of multimodal representations, allowing to test the complementarity of visual and textual information for improved language understanding. Although multimodal tasks such as image captioning, visual question answering and visual machine translation already show that the combination of both modalities can be effectively used, those tasks do not separately benchmark the inference capabilities of multimodal visual and textual representations.
We evaluate a variety of well-known textual, visual and multimodal representations in supervised and unsupervised scenarios, and systematically explore if visual content is useful for sentence similarity. For text, we studied pre-trained word embeddings such as GloVe BIBREF8, pre-trained language models like GPT-2 and BERT BIBREF9, BIBREF10, sentence representations fine-tuned on an entailment task like USE BIBREF11, and textual representations pre-trained on a multimodal caption retrieval task like VSE++ BIBREF12. For image representation we use a model pre-trained on Imagenet (ResNet BIBREF13). In order to combine visual and textual representations we used concatenation and learn simple projections. Our experiments show that the text-only models are outperformed by their multimodal counterparts when adding visual representations, with up to 24% error reduction.
Our contributions are the following: (1) We present a dataset which allows to evaluate visual/textual representations on an inference task. The dataset is publicly available under a free license. (2) Our results show, for the first time, that the addition of image representations allows better inference. (3) The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE. (4) The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks. At the same time the improvement holds for all textual representations, even those fine-tuned on a similarity task.
<<</Introduction>>>
<<<Related Work>>>
The task of Visual Semantic Textual Similarity stems from previous work on textual inference tasks. In textual entailment, given a textual premise and a textual hypothesis, systems need to decide whether the first entails the second, they are in contradiction, or none of the previous BIBREF4. Popular datasets include the Stanford Natural Language Inference dataset BIBREF5. As an alternative to entailment, STS datasets comprise pairs of sentences which have been annotated with similarity scores. STS systems are usually evaluated on the STS benchmark dataset BIBREF6. In this paper we present an extension of STS, so we present the task in more detail in the next section.
Textual entailment has recently been extended with visual information. A dataset for visual textual entailment was presented in BIBREF7. Even if the task is different from the text-only counterpart, they reused the text-only inference ground-truth labels without re-annotating them. In fact, they annotate a small sample to show that the labels change. In addition, their dataset tested pairs of text snippets referring to a single image, and it was only useful for testing grounding techniques, but not for measuring the complementarity of visual and textual representations. The reported results did not show that grounding improves results, while our study shows that the inference capabilities of multimodal visual and textual representations improve over text-only representations. In related work, BIBREF14 propose visual entailment, where the premise is an image and the hypothesis is textual. The chosen setting does not allow testing the contribution of multimodal representations with respect to unimodal ones.
The complementarity of visual and text representations for improved language understanding was first proven on word representations, where word embeddings were combined with visual or perceptual input to produce multimodal representations BIBREF15. The task of Visual Semantic Textual Similarity is also related to other multimodal tasks such as Image Captioning BIBREF16, BIBREF17, Text-Image Retrieval BIBREF18, BIBREF19 and Visual Question Answering BIBREF2.
Image Captioning is a task that aims to generate a description of a given image. The task is related to ours in that it requires an understanding of the scene depicted in the image, so the system can generate an accurate description of it. Unlike vSTS, image captioning is a generation task in which evaluation is challenging and unclear, as the defined automatic metrics are somewhat problematic BIBREF20. The Text-Image Retrieval task, on the other hand, requires finding similarities and differences between items in the two modalities, so that relevant and irrelevant texts and images can be distinguished for a given query. Apart from not checking inference explicitly, the other main difference with regard to vSTS is that, in retrieval, items are ranked from most to least similar, whereas the vSTS task consists of scoring an accurate real-valued similarity. A comprehensive overview is out of the scope of this paper, and thus we focus on the most related vision and language tasks. We refer the reader to BIBREF21 for a survey on vision and language research.
Many of these tasks can be considered extensions of previously existing NLP tasks. For instance, Image Captioning can be seen as an extension of conditional language modeling BIBREF22 or natural language generation BIBREF23, whereas Visual Question Answering is a natural counterpart of the traditional Question Answering in NLP.
Regarding multimodal and unimodal representation learning, convolutional neural networks (CNNs) have become the standard architecture for generating image representations BIBREF24. Most of these models learn transferable general image features on tasks such as image classification and detection, semantic segmentation, and action recognition. The most widely used transferable global image representations are learned with deep CNN architectures such as AlexNet BIBREF25, VGG BIBREF26, Inception-v3 BIBREF27, and ResNet BIBREF13 using large datasets such as ImageNet BIBREF1, MSCOCO BIBREF28 and Visual Genome BIBREF29. Recently, Graph Convolutional Networks (GCNs) have also shown to be a promising way to distill multimodal representations from multiple input types BIBREF30.
Language representation is mostly done with pretrained word embeddings like Glove BIBREF8 and sequence learning techniques such as Recurrent Neural Networks (RNN) BIBREF31. Recently, self-attention approaches like Transformers BIBREF32 provided transferable models (BERT, GPT-2, among others BIBREF9, BIBREF10) that significantly improve many state-of-the-art tasks in NLP. Alternatively, sentence representations have been fine-tuned on an entailment task BIBREF11. We will present those used in our work in more detail below.
<<</Related Work>>>
<<<The Visual STS Dataset>>>
STS assesses the degree to which two sentences are semantically equivalent to each other. The annotators measure the similarity among sentences, with higher scores for more similar sentences. The annotations of similarity were guided by the scale in Table TABREF4, ranging from 0 for no meaning overlap to 5 for meaning equivalence. Intermediate values reflect interpretable levels of partial overlap in meaning.
In this work, we extend the STS task with images, providing visual information that models use, and assess how much visual content can contribute in a language understanding task. The input of the task now consists of two items, each comprising an image and its corresponding caption. In the same way as in STS, systems need to score the similarity of the sentences with the help of the images. Figure FIGREF1 shows an example of an instance in the dataset.
In previous work reported in a non-archival workshop paper BIBREF33, we presented a preliminary dataset which used the text-only ground-truth similarity scores. The 819 pairs were extracted from a subset of the STS benchmark, more specifically, the so-called STS-images subset, which contains pairs of captions with access to images from PASCAL VOC-2008 BIBREF34 and Flickr-8K BIBREF35. Our manual analysis, including examples like Figure FIGREF1, showed that in many cases the text-only ground truth was not valid, so we decided to re-annotate the dataset, this time showing the images in addition to the captions (the methodology is identical to the AMT annotation method mentioned below). The correlation of the new annotations with regard to the old ones was high (0.9$\rho $), showing that the change in scores was not drastic, but that annotations did differ. The annotators tended to return higher similarity scores, as the mean similarity score across the dataset increased from 1.7 to 2.1. The inter-tagger correlation was comparable to the text-only task, showing that the new annotation task was well-defined.
From another perspective, the fact that we could only extract 819 pairs from existing STS datasets showed the need to sample new pairs from other image-caption datasets. In order to be effective in measuring the quality of multimodal representations, we defined the following desiderata for the new dataset: (1) Following STS datasets, the similarity values need to be balanced, showing a uniform distribution; (2) Paired images have to be different to avoid making the task trivial, as hand analysis of image-caption datasets showed that two captions of the same image tended to be paraphrases of each other; (3) The images should not be present in more than one instance, to avoid biases in the visual side; (4) It has to contain a wide variety of images so we can draw stronger conclusions. The preliminary dataset fulfilled 2 and 3, but the dataset was skewed towards low similarity values and the variety was limited.
<<<Data Collection>>>
The data collection of sentence-image pairs comprised several steps, including the selection of pairs to be annotated, the annotation methodology, and a final filtering stage.
<<<1. Sampling data for manual annotation.>>>
We make use of two well-known image-caption datasets. On one hand, Flickr30K dataset BIBREF36 that has about 30K images with 5 manually generated captions per image. On the other hand, we use the Microsoft COCO dataset BIBREF28, which contains more than 120K images and 5 captions per image. Using both sources we hope to cover a wide variety of images.
In order to select pairs of instances, we did two sampling rounds. The goal of the first round was to gather a large number of varied image pairs, together with their captions, among which interesting pairs can be found. We started by sampling images. We then combined two ways of sampling pairs of images. In the first, we generated pairs by sampling the images randomly. This way, we ensure a higher variety of paired scenes, but presumably two captions paired at random will tend to have very low similarity. In the second, we paired images taking into account their visual similarity, ensuring the selection of related scenes with a higher similarity rate. We used the cosine distance between the top-layer activations of a pretrained ResNet-50 BIBREF13 to compute the similarity of images. We collected an equal number of pairs for the random and visual similarity strategies, gathering, in total, $155,068$ pairs. As each image has 5 captions, we had to select one caption for each image, and we decided to select, for each pair of images, the two captions with the highest word overlap. This way, we get more balanced samples in terms of caption similarity.
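A sketch of the visual-similarity pairing step is given below, assuming the top-layer ResNet-50 features have already been extracted into a matrix with one row per image; the greedy "match each image to its most similar other image" strategy is one plausible reading of the procedure, not the exact implementation.

    import numpy as np

    def visually_similar_pairs(feats):
        """Pair each image with its most cosine-similar other image
        (a simplified, illustrative pairing strategy)."""
        unit = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        sims = unit @ unit.T                  # cosine similarity matrix
        np.fill_diagonal(sims, -np.inf)       # exclude self-pairs
        partners = sims.argmax(axis=1)
        # deduplicate reciprocal matches by storing sorted index tuples
        return sorted({tuple(sorted((i, int(j)))) for i, j in enumerate(partners)})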
The initial sampling created thousands of pairs that were skewed towards very low similarity values. Given that manual annotation is a costly process, and with the goal of having a balanced dataset, we used an automatic similarity system to score all the pairs. This text-only similarity system is an ensemble of feature-based machine learning systems that uses a large variety of distance and machine-translation based features. The model was evaluated on a subset of STS benchmark dataset BIBREF6 and compared favorably to other baseline models. As this model is very different from current deep learning techniques, it should not bias the dataset sampling in a way which influences current similarity systems.
The automatic scores were used to sample the final set of pairs as follows. We defined five similarity ranges ($(0, 1], \ldots ,(4, 5]$) and randomly selected the same amount of pairs from the initial paired sample. We set a sampling of maximum 3000 instances (i.e 600 instances per range). Given the fact that the high similarity range had less than 600 instances, we collected a total of 2639 potential text-image candidate pairs for manual annotation. Figure FIGREF8 shows the proposed methodology can sample approximately a uniform distribution with the exception of the higher similarity values (left and middle plots). In addition, we show that the lower predicted similarities are mainly coming from random sampling, whereas, as expected, the higher ones come from similar images.
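The range-based selection could look roughly as follows, assuming `pairs` is the list of candidate pairs and `scores` their automatically predicted similarities; the cap of 600 per range follows the text, everything else is illustrative.

    import math
    import random

    def sample_by_range(pairs, scores, per_range=600, seed=0):
        """Sample up to `per_range` candidate pairs from each of the five
        similarity intervals (0,1], (1,2], ..., (4,5]."""
        random.seed(seed)
        buckets = {b: [] for b in range(5)}
        for pair, s in zip(pairs, scores):
            if 0 < s <= 5:
                buckets[min(math.ceil(s) - 1, 4)].append(pair)
        selected = []
        for b in range(5):
            random.shuffle(buckets[b])
            selected.extend(buckets[b][:per_range])
        return selected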
<<</1. Sampling data for manual annotation.>>>
<<<2. Manual annotations.>>>
In order to annotate the sample of 2639 pairs, we used Amazon Mechanical Turk (AMT). Crowdworkers followed the same instructions of previous STS annotation campaigns BIBREF6, very similar to those in Table TABREF4. Annotators needed to focus on textual similarity with the aid of aligned images. We got up to 5 scores per item, and we discarded annotators that showed low correlation with the rest of the annotators ($\rho < 0.75$). In total 56 annotators took part. On average each crowdworker annotated 220 pairs, where the amounts ranged from 19 to 940 annotations. Regardless the annotation amounts, most of the annotators showed high correlations with the rest of the participants. We computed the annotation correlation by aggregating the individual Pearson correlation with averaged similarity of the other annotators. The annotation shows high correlation among the crowdworkers ($\rho = 0.89$ $\pm 0.01$) comparable to that of text-only STS datasets.
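A sketch of the annotator-filtering step is shown below, assuming the raw judgments are stored as a dictionary mapping each annotator to an {item: score} dictionary; each annotator is correlated against the mean score of the remaining annotators and dropped below the 0.75 threshold. The data layout is an assumption.

    import numpy as np
    from scipy.stats import pearsonr

    def filter_annotators(judgments, min_rho=0.75):
        """judgments: {annotator: {item_id: score}}. Returns the annotators kept."""
        kept = []
        for ann, scores in judgments.items():
            xs, ys = [], []
            for item, s in scores.items():
                others = [judgments[o][item] for o in judgments
                          if o != ann and item in judgments[o]]
                if others:
                    xs.append(s)
                    ys.append(np.mean(others))
            if len(xs) > 1 and pearsonr(xs, ys)[0] >= min_rho:
                kept.append(ann)
        return kept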
Table TABREF10 shows the average item similarity and item disagreement in the annotation. We defined item disagreement as the standard deviation of the annotated similarity value. The low average similarity can be explained by the high number of zero-similarity pairs. Item disagreement is moderately low (about 0.6 points out of 5) which is in accordance with the high correlation between the annotators.
<<</2. Manual annotations.>>>
<<<3. Selection of difficult examples.>>>
In preliminary experiments, the evaluation of two baseline models, word overlap and the ensemble system mentioned before, showed that the sampling strategy introduced a large number of trivial examples. For example, the word overlap system attained $0.83$ $\rho $. This high correlation could be the result of using word-overlap in the first sampling round. In order to create a more challenging dataset where to measure the effectiveness of multimodal representations, we defined the easiness metric to filter out some of the easy examples from the annotated dataset.
We defined easiness as an amount of discrepancy provided by an example regarding the whole dataset. Taking the inner product of the Pearson correlation formula as basis, we measure the easiness of an annotated example $i$ as follows:
where $o_{i}$ is the word-overlap similarity of the $i$-th pair, $\overline{o}$ is the mean overlap similarity in the dataset, and $s_{o}$ is the standard deviation. Similarly, variable $gs_{i}$ is the gold-standard value of the $i$-th pair, and $\overline{gs}$ and $s_{gs}$ are the mean and standard deviation of gold values in the dataset, respectively. We removed 30% of the easiest examples and create a more challenging dataset of 1858 pairs, reducing $\rho $ to $0.57$ for the word-overlap model, and to $0.66$ $\rho $ (from $0.85$) for the ML based approach.
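The description above suggests that easiness is the per-item product of the standardized word-overlap and gold similarity scores, i.e. the inner-product term of the Pearson correlation; that reading is an assumption, and the sketch below applies it to drop the 30% easiest examples.

    import numpy as np

    def filter_easy(overlap, gold, keep_fraction=0.7):
        """overlap, gold: arrays with the word-overlap and gold similarity per pair.
        Assumed easiness: product of the two standardized (z-scored) values."""
        o = np.asarray(overlap, dtype=float)
        g = np.asarray(gold, dtype=float)
        easiness = ((o - o.mean()) / o.std()) * ((g - g.mean()) / g.std())
        order = np.argsort(easiness)                   # hardest (lowest easiness) first
        keep = order[: int(keep_fraction * len(o))]    # drop the top 30% easiest
        return np.sort(keep)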
<<</3. Selection of difficult examples.>>>
<<</Data Collection>>>
<<<Dataset Description>>>
The full dataset comprises both the sample mentioned above and the 819 pairs from our preliminary work, totalling 2677 pairs. Figure FIGREF14 shows the final item similarity distribution. Although the distribution is skewed towards lower similarity values, we consider that all the similarity ranges are sufficiently well covered.
Average similarity of the dataset is $1.9$ with a standard deviation of $1.36$ points. The dataset contains 335 zero-valued pairs out of the 2677 instances, which somehow explains the lower average similarity.
<<</Dataset Description>>>
<<</The Visual STS Dataset>>>
<<<Evaluation of Representation Models>>>
The goal of the evaluation is to explore whether representation models can have access to images, instead of text alone, have better inference abilities. We consider the following models.
ResNet BIBREF13 is a deep network of 152 layers in which the residual representation functions are learned instead of learning the signal representation directly. The model is trained over 1.2 million images of ImageNet, the ILSRVC subset of 1000 image categories. We use the top layer of a pretrained ResNet-152 model to represent the images associated to text. Each image is represented with a vector of 2048 dimensions.
GloVe. The Global Vector model BIBREF8 is a log-linear model trained to encode semantic relationships between words as vector offsets in the learned vector space, combining global matrix factorization and local context window methods. Since GloVe is a word-level vector model, we build sentence representations with the mean of the vectors of the words composing the sentence. The pre-trained model from GloVe considered in this paper is the 6B-300d, with a vocabulary of 400k words, 300 dimension vectors and trained on a dataset of 6 billion tokens.
BERT. The Bidirectional Encoder Representations from Transformer BIBREF9 implements a novel methodology based on the so-called masked language model, which randomly masks some of the tokens from the input, and predicts the original vocabulary id of the masked word based only on its context. The BERT model used in our experiments is the BERT-Large Uncased (24-layer, 1024-hidden, 16-heads, 340M parameters). In order to obtain the sentence-level representation we extract the token embeddings of the last layer and compute the mean vector, yielding a vector of 1024 dimensions.
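A sketch of how such a mean-pooled sentence vector can be obtained with the HuggingFace transformers library is shown below; the checkpoint name and the masking of padding tokens are choices made for the illustration, not necessarily those of the original experiments.

    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("bert-large-uncased")
    model = AutoModel.from_pretrained("bert-large-uncased")
    model.eval()

    def sentence_embedding(sentences):
        """Mean of the last-layer token embeddings, ignoring padding positions."""
        batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state          # (B, T, 1024)
        mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # (B, 1024)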
GPT-2. The Generative Pre-Training-2 model BIBREF10 is a language model based on the transformer architecture, which is trained on the task of predicting the next word, given all the previous words occurring in some text. In the same manner as BERT and GloVe, we extract the token embeddings of the last layer and compute the mean vector to obtain the sentence-level representation of 768 dimensions. The GPT-2 model used in our experiments was trained on a very large corpus of about 40 GB of text data with 1.5 billion parameters.
USE. The Universal Sentence Encoder BIBREF11 is a model for encoding sentences into embedding vectors, specifically designed for transfer learning in NLP. Based on a deep averaging network encoder, the model is trained for varying text lengths, such as sentences, phrases or short paragraphs, and in a variety of semantic tasks including STS. The encoder returns the vector of the sentence with 512 dimensions.
VSE++. The Visual-Semantic Embedding BIBREF12 is a model trained for image-caption retrieval. The model learns a joint space of aligned images and captions. The model is an improvement of the original introduced by BIBREF37, and combines a ResNet-152 over images with a bidirectional Recurrent Neural Network (GRU) over the sentences. Texts and images are projected onto the joint space, obtaining representations of 1024 dimension both for images and texts. We used projections of images and texts in our experiments. The VSE++ model used in our experiments was pre-trained on the Microsoft COCO dataset BIBREF28 and the Flickr30K dataset BIBREF36. Table TABREF15 summarizes the sentence and image representations used in the evaluation.
<<<Experiments>>>
<<<Experimental Setting.>>>
We split the vSTS dataset into training, validation and test partitions sampling at random and preserving the overall score distributions. In total, we use 1338 pairs for training, 669 for validation, and the rest of the 670 pairs were used for the final testing. Similar to the STS task, we use the Pearson correlation coefficient ($\rho $) as the evaluation metric of the task.
<<</Experimental Setting.>>>
<<<STS models.>>>
Our goal is to keep similarity models as simple as possible in order to directly evaluate textual and visual representations and avoid as much as possible the influence of the parameters that intertwine when learning a particular task. We defined two scenarios: the supervised and the unsupervised scenarios.
In the supervised scenario we train a Siamese Regression model in a similar way presented in BIBREF38. Given a sentence/image pair, we wish to predict a real-valued similarity in some range $[1,K]$, being $K=5$ in our experiments. We first produce sentence/image representations $h_{L}$ and $h_{R}$ for each sentence in the pair using any of the unimodal models described above, or using a multimodal representations as explained below. Given these representations, we predict the similarity score $o$ using a regression model that takes both the distance and angle between the pair ($h_{L}$, $h_{R}$):
Note that the distance and angle concatenation ($[h_{x}, h_{+}]$) yields a $2 * d$-dimensional vector. The resulting vector is used as input for the non-linear hidden layer ($h_{s}$) of the model. Contrary to BIBREF38, we empirically found that the estimation of a continuous value worked better than learning a softmax distribution over $[1,K]$ integer values. The loss function of our model is the Mean Square Error (MSE), which is the most commonly used regression loss function.
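A minimal PyTorch sketch of this siamese regression head is given below; it assumes the distance and angle features are the element-wise $|h_L - h_R|$ and $h_L \odot h_R$ constructions commonly used in BIBREF38, and the hidden size and output scaling are illustrative choices.

    import torch
    import torch.nn as nn

    class SiameseRegressor(nn.Module):
        def __init__(self, dim, hidden=200, max_sim=5.0):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU())
            self.out = nn.Linear(hidden, 1)
            self.max_sim = max_sim

        def forward(self, h_left, h_right):
            dist = torch.abs(h_left - h_right)    # "distance" feature (assumed form)
            angle = h_left * h_right              # "angle" feature (assumed form)
            h = self.hidden(torch.cat([angle, dist], dim=-1))
            # a single continuous output, squashed into the similarity range
            return self.max_sim * torch.sigmoid(self.out(h)).squeeze(-1)

    loss_fn = nn.MSELoss()   # trained with the MSE between predicted and gold scores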
In the unsupervised scenario similarity is computed as the cosine of the produced $h_{L}$ and $h_{R}$ sentence/image representations.
<<</STS models.>>>
<<<Multimodal representation.>>>
We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representations (concat). Before concatenation we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project).
The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation is produced ($h_{m}$) for the left and right pairs, vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function.
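A sketch of the two combination strategies is shown below, assuming the text and image vectors are already computed; `concat` L2-normalizes each modality before concatenation, while `project` learns one linear map per modality end-to-end with the regression layers. Dimensions are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def concat_multimodal(text_vec, img_vec):
        """L2-normalize each modality, then concatenate."""
        return torch.cat([F.normalize(text_vec, dim=-1),
                          F.normalize(img_vec, dim=-1)], dim=-1)

    class ProjectMultimodal(nn.Module):
        """Project both modalities to a shared d-dimensional space, then concatenate."""
        def __init__(self, text_dim, img_dim, d=300):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, d)
            self.img_proj = nn.Linear(img_dim, d)

        def forward(self, text_vec, img_vec):
            return torch.cat([self.text_proj(text_vec),
                              self.img_proj(img_vec)], dim=-1)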
<<</Multimodal representation.>>>
<<<Hyperparameters and training details.>>>
We use the validation set to learn the parameters of the supervised models and to carry out an exploration of the hyperparameters. We train each model for a maximum of 300 epochs and apply an early-stopping strategy with a patience of 25 epochs. For early stopping we monitor the MSE loss value on validation. For the remaining hyperparameters, we run a grid search to select their values. We explore learning rate values (0.0001, 0.001, 0.01, 0.05), L2 regularization weights (0.0, 0.0001, 0.001, 0.01), and different hidden layer ($h_{s}$) dimensions (50, 100, 200, 300). In addition, we activate and deactivate batch normalization in each layer for each hyperparameter configuration.
<<</Hyperparameters and training details.>>>
<<</Experiments>>>
<<<Results>>>
<<<The unsupervised scenario.>>>
Table TABREF26 reports the results using the item representations directly. We report results over train and dev partitions for completeness, but note that none of them was used to tune the models. As it can be seen, multimodal representations consistently outperform their text-only counterparts. This confirms that, overall, visual information is helpful in the semantic textual similarity task and that image and sentence representation are complementary. For example, the bert model improves more than 13 points when visual information provided by the resnet is concatenated. glove shows a similar or even larger improvement, with similar trends for use and vse++(text).
Although vse++(img) shows better performance than resnet when applying them alone, further experimentation showed lower complementarity when combining with textual representation (e.g. $0.807\rho $ in test combining textual and visual modalities of vse++). This is something expected as vse++(img) is pre-trained along with the textual part of the vse++ model on the same task. We do not show the combinations with vse++(img) due to the lack of space.
Interestingly, results show that images alone are valid to predict caption similarity ($0.627 \rho $ in test). Actually, in this experimental setting resnet is on par with bert, which is the best purely unsupervised text-only model. Surprisingly, gpt-2 representations are not useful for text similarity tasks. This might be because language models tend to forget past context as they focus on predicting the next token BIBREF39. Due to the low results of gpt-2 we decided not to combine it with resnet.
<<</The unsupervised scenario.>>>
<<<The supervised scenario.>>>
Table TABREF29 shows a similar pattern to that in the unsupervised setting. Overall, models that use a conjunction of multimodal features significantly outperform unimodal models, and this confirms, in a more competitive scenario, that adding visual information makes the STS task easier to learn. The gain of multimodal models is considerable compared to the text-only models. The most significant gain is obtained when glove features are combined with resnet: the model improves by more than $15.0$ points. In this case, the improvement over bert is lower, but still considerable at more than $4.0$ points.
In the same vein as in the unsupervised scenario, features obtained with a resnet can be as competitive as some text based models (e.g. BERT). gpt-2, as in the unsupervised scenario, does not produce useful representations for semantic similarity tasks. Surprisingly, the regression model with gpt-2 features is not able to learn anything in the training set. As we did in the previous scenario, we do not keep combining gpt-2 with visual features.
The multimodal versions of vse++ and use are the best models among the supervised approaches. The textual versions of use and vse++ alone obtain very competitive results and outperform some of the multimodal models (the concatenated versions of glove and bert with resnet). The results might indicate that text-only models with sufficient training data can be on par with multimodal models but, still, when there is data scarcity, multimodal models can perform better as they have more information about the same data point.
A comparison between projected and concatenated models shows that projected models attain slightly better results in two cases, but the best overall results are obtained when concatenating vse++(text) with resnet. Although concatenation proves to be a hard baseline, we expect that more sophisticated combination methods like grounding BIBREF40 will obtain larger gains in the future.
<<</The supervised scenario.>>>
<<</Results>>>
<<</Evaluation of Representation Models>>>
<<<Discussion>>>
<<<Contribution of the Visual Content>>>
Table TABREF31 summarizes the contribution of the images on text representations in test partition. The contribution is consistent through all text-based representations. We measure the absolute difference (Diff) and the error reduction (E.R) of each textual representation with the multimodal counterpart. For the comparison we chose the best text model for each representation. As expected we obtain the largest improvement ($22-26\%$ E.R) when text-based unsupervised models are combined with image representations. Note that unsupervised models are not learning anything about the specific task, so the more information in the representation, the better. In the case of use and vse++ the improvement is significant but not as large as the purely unsupervised models. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE.
Improvement is consistent for the supervised models. Contrary to the unsupervised setting, these models are designed to learn about the task, so there is usually less room for the improvement. Still, glove+resnet shows an error reduction of $12.9$ in the test set. Finally, use and vse++ show smaller improvements when we add visual information into the model.
Figure FIGREF32 displays some examples where visual information positively contributes predicting accurately similarity values. Examples show the case where related descriptions are lexicalized in a different way so a text-only model (glove) predicts low similarity between captions (top two examples). Instead, the multimodal representation glove+resnet does have access to the image and can predict more accurately the similarity value of the two captions. The examples in the bottom show the opposite case, where similar set of words are used to describe very different situations. The text based model overestimates the similarity of captions, while the multimodal model corrects the output by looking at the differences of the images.
On the contrary, Figure FIGREF33 shows that images can also be misleading, and that the task is not as trivial as combining global representations of the image. In this case, related but different captions are supported by very similar images, and as a consequence, the multimodal model overestimates their similarity, while the text-only model focuses on the most discriminating piece of information in the text.
<<</Contribution of the Visual Content>>>
<<<The effect of hyperparameters>>>
Neural models are sensitive to hyperparameters, and we might think that results on the supervised scenario are due to hyperparameter optimization. Figure FIGREF35 displays the variability of $\rho $ in development across all hyperparameters. Due to space constraints we show text-only and multimodal concatenated models. Models are ordered by mean performance. As we can see, combined models show better mean performance, and all models except Glove exhibit tight variability.
<<</The effect of hyperparameters>>>
<<</Discussion>>>
<<<Conclusions and Future Work>>>
The long term goal of our research is to devise multimodal representation techniques that improve current inference capabilities. We have presented a novel task, Visual Semantic Textual Similarity (vSTS), where the inference capabilities of visual, textual, and multimodal representations can be tested directly. The dataset has been manually annotated by crowdworkers with high inter-annotator correlation ($\rho = 0.89$). We tested several well-known textual and visual representations, which we combined using concatenation and projection. Our results show, for the first time, that the addition of image representations allows better inference. The best text-only representation is the one fine-tuned on a multimodal task, VSE++, which is noteworthy, as it is better than a textual representation fine-tuned in a text-only inference task like USE. The improvement when using image representations is observed both when computing the similarity directly from multimodal representations, and also when training siamese networks.
In the future, we would like to ground the text representations to image regions BIBREF40, which could avoid misleading predictions due to the global representation of the image. Finally, we would like to extend the dataset with more examples, as we acknowledge that the current training set is too limited to train larger models.
This research was partially funded by the Basque Government excellence research group (IT1343-19), the NVIDIA GPU grant program, the Spanish MINECO (DeepReading RTI2018-096846-B-C21 (MCIU/AEI/FEDER, UE)) and project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018). Ander enjoys a PhD grant from the Basque Government.
<<</Conclusions and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nThe Visual STS Dataset\nData Collection\n1. Sampling data for manual annotation.\n2. Manual annotations.\n3. Selection of difficult examples.\nDataset Description\nEvaluation of Representation Models\nExperiments\nExperimental Setting.\nSTS models.\nMultimodal representation.\nHyperparameters and training details.\nResults\nThe unsupervised scenario.\nThe supervised scenario.\nDiscussion\nContribution of the Visual Content\nThe effect of hyperparameters\nConclusions and Future Work"
],
"type": "outline"
}
|
1910.11768
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Exploring Multilingual Syntactic Sentence Representations
<<<Abstract>>>
We study methods for learning sentence embeddings with syntactic structure. We focus on methods of learning syntactic sentence-embeddings by using a multilingual parallel-corpus augmented by Universal Parts-of-Speech tags. We evaluate the quality of the learned embeddings by examining sentence-level nearest neighbours and functional dissimilarity in the embedding space. We also evaluate the ability of the method to learn syntactic sentence-embeddings for low-resource languages and demonstrate strong evidence for transfer learning. Our results show that syntactic sentence-embeddings can be learned while using less training data, fewer model parameters, and resulting in better evaluation metrics than state-of-the-art language models.
<<</Abstract>>>
<<<Introduction>>>
Recent success in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models.
Multilingual embeddings have been shown to achieve top performance in many downstream tasks BIBREF1, BIBREF2. By training over large corpora, these models have been shown to generalize to similar but unseen contexts. However, words contain multiple types of information: semantic, syntactic, and morphological. Therefore, it is possible that syntactically different passages have similar embeddings due to their semantic properties. On tasks like the ones mentioned above, discriminating using patterns that include semantic information may result in poor generalization, especially when datasets are not sufficiently representative.
In this work, we study methods that learn sentence-level embeddings that explicitly capture syntactic information. We focus on variations of sequence-to-sequence models BIBREF3, trained using a multilingual corpus with universal part-of-speech (UPOS) tags for the target languages only. By using target-language UPOS tags in the training process, we are able to learn sentence-level embeddings for source languages that lack UPOS tagging data. This property can be leveraged to learn syntactic embeddings for low-resource languages.
Our main contributions are: to study whether sentence-level syntactic embeddings can be learned efficiently, to evaluate the structure of the learned embedding space, and to explore the potential of learning syntactic embeddings for low-resource languages.
We evaluate the syntactic structure of sentence-level embeddings by performing nearest-neighbour (NN) search in the embedding space. We show that these embeddings exhibit properties that correlate with similarities between UPOS sequences of the original sentences. We also evaluate the embeddings produced by language models such as BERT BIBREF0 and show that they contain some syntactic information.
We further explore our method in the few-shot setting for low-resource source languages without large, high quality treebank datasets. We show its transfer-learning capabilities on artificial and real low-resource languages.
Lastly, we show that training on multilingual parallel corpora significantly improves the learned syntactic embeddings. This is similar to existing results for models trained (or pre-trained) on multiple languages BIBREF4, BIBREF2 for downstream tasks BIBREF5.
<<</Introduction>>>
<<<Related Work>>>
Training semantic embeddings based on multilingual data was studied by MUSE BIBREF1 and LASER BIBREF2 at the word and sentence levels respectively. Multi-task training for disentangling semantic and syntactic information was studied in BIBREF6. This work also used a nearest neighbour method to evaluate the syntactic properties of models, though their focus was on disentanglement rather than embedding quality.
The syntactic content of language models was studied by examining syntax trees BIBREF7, subject-object agreement BIBREF8, and evaluation on syntactically altered datasets BIBREF9, BIBREF10. These works did not examine multilingual models.
Distant supervision BIBREF11, BIBREF12 has been used to learn POS taggers for low-resource languages using cross-lingual corpora. The goal of these works is to learn word-level POS tags, rather than sentence-level syntactic embeddings. Furthermore, our method does not require explicit POS sequences for the low-resource language, which results in a simpler training process than distant supervision.
<<</Related Work>>>
<<<Method>>>
<<<Architecture>>>
We iterated upon the model architecture proposed in LASER BIBREF2. The model consists of a two-layer Bi-directional LSTM (BiLSTM) encoder and a single-layer LSTM decoder. The encoder is language agnostic as no language context is provided as input. In contrast to LASER, we use the concatenation of last hidden and cell states of the encoder to initialize the decoder through a linear projection.
At each time-step, the decoder takes an embedding of the previous POS target concatenated with an embedding representing the language context, as well as a max-pooling over encoder outputs. Figure FIGREF2 shows the architecture of the proposed model.
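A rough PyTorch sketch of this wiring is given below: a two-layer BiLSTM encoder whose final hidden and cell states, concatenated and linearly projected, initialize a one-layer LSTM decoder, with a max-pooling over the encoder outputs available to the decoder. Sizes and several details (e.g. the exact projection shape) are assumptions for illustration.

    import torch
    import torch.nn as nn

    class SyntacticEncoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=320, hid=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, hid, num_layers=2,
                                  bidirectional=True, batch_first=True)

        def forward(self, bpe_ids):
            out, (h, c) = self.bilstm(self.embed(bpe_ids))   # out: (B, T, 2*hid)
            pooled, _ = out.max(dim=1)                        # max-pooling over time
            # concatenate the last layer's hidden and cell states (both directions)
            state = torch.cat([h[-2], h[-1], c[-2], c[-1]], dim=-1)
            return out, pooled, state

    class DecoderInit(nn.Module):
        """Linear projection of the encoder state to the decoder's initial (h0, c0)."""
        def __init__(self, enc_state_dim, dec_hid=512):
            super().__init__()
            self.proj = nn.Linear(enc_state_dim, 2 * dec_hid)

        def forward(self, state):
            h0, c0 = self.proj(state).chunk(2, dim=-1)
            return h0.unsqueeze(0), c0.unsqueeze(0)           # single decoder layer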
The input embeddings for the encoder were created using a jointly learned Byte-Pair-Encoding (BPE) vocabulary BIBREF13 for all languages by using sentencepiece.
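A sketch of how such a joint BPE vocabulary could be built with sentencepiece over a file concatenating text from all source languages; the file name, vocabulary size and example sentence are placeholders.

    import sentencepiece as spm

    # Train a joint BPE model over the combined multilingual corpus (one sentence per line).
    spm.SentencePieceTrainer.Train(
        "--input=all_languages.txt --model_prefix=joint_bpe "
        "--vocab_size=40000 --model_type=bpe")

    sp = spm.SentencePieceProcessor()
    sp.Load("joint_bpe.model")
    pieces = sp.EncodeAsPieces("Dit is een voorbeeldzin.")   # BPE segmentation
    ids = sp.EncodeAsIds("Dit is een voorbeeldzin.")         # integer ids for the embedding layer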
<<</Architecture>>>
<<<Training>>>
Training was performed using an aligned parallel corpus. Given a source-target aligned sentence pair (as in machine translation), we:
Convert the sentence in the source language into BPE
Look up embeddings for BPE as the input to the encoder
Convert the sentence in a target language into UPOS tags, in the tagset of the target language.
Use the UPOS tags in step 3 as the targets for a cross-entropy loss.
Hence, the task is to predict the UPOS sequence computed from the translated input sentence.
The UPOS targets were obtained using StanfordNLP BIBREF14. Dropout with a drop probability of 0.2 was applied to the encoder. The Adam optimizer BIBREF15 was used with a constant learning rate of $0.0001$. Table TABREF4 shows a full list of the hyperparameters used in the training procedure.
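The UPOS targets could be produced with the StanfordNLP toolkit roughly as follows; the language code is a placeholder and the per-language models need to be downloaded beforehand.

    import stanfordnlp

    # stanfordnlp.download('de')   # one-off model download per target language
    nlp = stanfordnlp.Pipeline(lang="de", processors="tokenize,pos")

    def upos_sequence(sentence):
        """Return the UPOS tag sequence of a target-language sentence."""
        doc = nlp(sentence)
        return [word.upos for sent in doc.sentences for word in sent.words]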
<<</Training>>>
<<<Dataset>>>
To create our training dataset, we followed an approach similar to LASER. The dataset contains 6 languages: English, Spanish, German, Dutch, Korean and Chinese Mandarin. These languages use 3 different scripts, 2 different language orderings, and belong to 4 language families.
English, Spanish, German, and Dutch use a Latin-based script. However, Spanish is a Romance language while the others are Germanic languages. Chinese Mandarin and Korean are included because they use non-Latin based scripts and originate from language families distinct from the other languages. Although the grammatical rules vary between the selected languages, they share a number of key characteristics such as Subject-Verb-Object ordering, with the exception of Korean (which mainly follows the Subject-Object-Verb order). We hope to extend our work to other languages with different scripts and sentence structures, such as Arabic, Japanese, Hindi, etc. in the future.
The dataset was created by using translations provided by Tatoeba and OpenSubtitles BIBREF16. They were chosen for their high availability in multiple languages.
Statistics of the final training dataset are shown in Table TABREF14. Rows and columns correspond to source and target languages respectively.
<<<Tatoeba>>>
Tatoeba is a freely available crowd-annotated dataset for language learning. We selected all sentences in English, Spanish, German, Dutch, and Korean. We pruned the dataset to contain only sentences with at least one translation to any of the other languages. The final training set contains 1.36M translation sentence pairs from this source.
<<</Tatoeba>>>
<<<OpenSubtitles>>>
We augmented our training data by using the 2018 OpenSubtitles dataset. OpenSubtitles is a publicly available dataset based on movie subtitles BIBREF16. We created our training dataset from selected aligned subtitles by taking the unique translations among the first million sentences of each aligned parallel corpus. We further pruned the data to remove samples with fewer than 3 words, multiple sentences, or incomplete sentences. The resulting dataset contains 1.9M translation sentence pairs from this source.
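The exact pruning rules are not spelled out beyond the criteria above; the sketch below illustrates one plausible heuristic filter. The thresholds and end-of-sentence checks are assumptions.

def keep_pair(src, tgt, min_words=3):
    # Heuristic filter; the precise rules used in practice are assumptions.
    for sent in (src, tgt):
        words = sent.split()
        if len(words) < min_words:                          # too short
            return False
        if sum(sent.count(p) for p in ".!?") > 1:           # likely multiple sentences
            return False
        if not sent.rstrip().endswith((".", "!", "?")):     # likely incomplete
            return False
    return True

def filter_corpus(aligned_pairs):
    # aligned_pairs: iterable of (source, target) sentence strings.
    return {(s, t) for s, t in aligned_pairs if keep_pair(s, t)}   # de-duplicated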
<<</OpenSubtitles>>>
<<</Dataset>>>
<<</Method>>>
<<<Experiments>>>
We aim to address the following questions:
1. Can syntactic structures be embedded? For multiple languages?
2. Can parallel corpora be used to learn syntactic structure for low-resource languages?
3. Does multilingual pre-training improve syntactic embeddings?
We address question 1 in Secs. SECREF20 and SECREF28 by evaluating the quality of syntactic and semantic embeddings in several ways. Questions 2 and 3 are addressed in Sec. SECREF30 by studying the transfer-learning performance of syntactic embeddings.
<<<Quality of Syntactic Embeddings>>>
We studied the quality of the learned syntactic embeddings by using a nearest-neighbour (NN) method.
First, we calculated the UPOS sequence of all sentences in the Tatoeba dataset by using a tagger. Sentences were then assigned to distinct groups according to their UPOS sequence, i.e., all sentences belonging to the same group had the same UPOS sequence.
For all languages except Korean, a held-out test set was created by randomly sampling groups that contained at least 6 sentences. For Korean, all groups containing at least 6 sentences were kept as the test set since the dataset is small.
During evaluation, we applied max-pooling to the outputs of the encoder to obtain the syntactic embeddings of the held-out sentences.
For each syntactic embedding, we find its top nearest neighbour (1-NN) and top-5 nearest neighbours (5-NN) in the embedding space for the held-out sentences, based on their UPOS group.
Given $n$ sentences $S = \lbrace s_0, \dots , s_{n-1}\rbrace $ and their embeddings $E = \lbrace e_0, \dots , e_{n-1}\rbrace $, for each $s_i$ there is a set of $k$ gold nearest neighbours $G(i, k) = \lbrace g_0, \dots , g_{k-1}\rbrace $, $G(i, k) \subseteq S$ such that $d(s_i, g) \le d(s_i, s) \textrm { for all } g \in G(i, k) \textrm { and } s \in S \setminus G(i, k)$, where $d(\cdot , \cdot )$ is the cosine distance.
Given embedding $e_i$, we calculate cosine distances $\lbrace d(e_i, e_j) \textrm { for } e_j \in E, e_j \ne e_i\rbrace $ and sort them into non-decreasing order $d_{j_0} \le d_{j_1} \le \dots \le d_{j_{n-2}}$. We consider the ordering to be unique as the probability of embedding cosine distances being equal is very small.
The set of embedded $k$-nearest neighbours of $s_i$ is then defined as $N(i, k) = \lbrace s_{j_0}, \dots , s_{j_{k-1}}\rbrace $.
Finally, the $k$-nearest neighbours accuracy for $s_i$ is given by the overlap with the gold set, $A(i, k) = |N(i, k) \cap G(i, k)| \, / \, k$.
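A minimal NumPy sketch of this nearest-neighbour evaluation is given below; it assumes the gold neighbours of a sentence are the other members of its UPOS group, which mirrors the group-based test-set construction described above but is not taken verbatim from the original implementation.

import numpy as np

def knn_accuracy(embeddings, group_ids, k=5):
    # embeddings: (n, d) array; group_ids: length-n list of UPOS-group labels.
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos_dist = 1.0 - E @ E.T                       # pairwise cosine distances
    np.fill_diagonal(cos_dist, np.inf)             # exclude each sentence itself
    accs = []
    for i in range(len(group_ids)):
        nn = np.argsort(cos_dist[i])[:k]           # embedded k-nearest neighbours
        gold = {j for j, g in enumerate(group_ids)
                if g == group_ids[i] and j != i}   # assumed gold set: same UPOS group
        accs.append(len(set(nn) & gold) / k)
    return float(np.mean(accs))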
A good embedding model should cluster the embeddings for similar inputs in the embedding space. Hence, the 5-NN test can be seen as an indicator of how cohesive the embedding space is.
The results are shown in Table TABREF22. The differences in the number of groups in each language are due to different availabilities of sentences and sentence-types in the Tatoeba dataset.
The high nearest neighbours accuracy indicates that syntax information was successfully captured by the embeddings. Table TABREF22 also shows that the syntactic information of multiple languages was captured by a single embedding model.
<<<Language Model>>>
A number of recent works BIBREF7, BIBREF8 have probed language models to determine if they contain syntactic information. We applied the same nearest neighbours experiment (with the same test sets) to a number of existing language models: Universal Sentence Encoder (USE) BIBREF17, LASER, and BERT. For USE we used the models available from TensorHub. For LASER we used the models from the official repository and created embeddings with them.
For BERT, we report the results using max (BERT$_{max}$) and average-pooling (BERT$_{avg}$), obtained from the BERT embedding toolkit with the multilingual cased model (104 languages, 12-layers, 768-hidden units, 12-heads), and `pooled-output' (BERT$_{output}$) from the TensorHub version of the model with the same parameters.
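The pooling step itself is straightforward; the sketch below shows how per-token vectors could be reduced to a single sentence vector for the BERT$_{max}$ and BERT$_{avg}$ variants. It is illustrative only and not tied to any particular toolkit.

import numpy as np

def pool_tokens(token_vecs, mode="max"):
    # token_vecs: (seq_len, hidden) array of per-token representations.
    if mode == "max":                     # corresponds to BERT_max above
        return token_vecs.max(axis=0)
    return token_vecs.mean(axis=0)        # corresponds to BERT_avg above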
We computed the nearest neighbours experiment for all languages in the training data for the above models. The results are shown in Table TABREF27. The results show that general purpose language models do capture syntax information, which varies greatly across languages and models.
The nearest neighbours accuracy of our syntactic embeddings in Table TABREF22 significantly outperforms the general purpose language models. Admittedly, these language models were trained on different training data; however, the comparison is still reasonable because many real-world applications rely on released pre-trained language models for syntactically related information. Hence, we want to show that much smaller models trained with direct supervision can yield syntactic embeddings of similar or better quality. Nonetheless, the training method used in this work can certainly be extended to architectures similar to BERT or USE.
<<</Language Model>>>
<<</Quality of Syntactic Embeddings>>>
<<<Functional Dissimilarity>>>
The experiments in the previous section showed that the proposed syntactic embeddings formed cohesive clusters in the embedding space, based on UPOS sequence similarities. We further studied the spatial relationships within the embeddings.
Word2Vec BIBREF18 examined spatial relationships between embeddings and compared them to the semantic relationships between words. Operations on vectors in the embedding space such as $King - Man + Woman = Queen$ created vectors that also correlated with similar operations in semantics. Such semantic comparisons do not directly translate to syntactic embeddings. However, syntax information shifts with edits on POS sequences. Hence, we examined the spatial relationships between syntactic embeddings by comparing their cosine similarities with the edit distances between UPOS sequence pairs.
Given $n$ UPOS sequences $U = \lbrace u_0,...,u_{n-1}\rbrace $, we compute the matrix $L \in \mathbb {R}^{n \times n}$, where $l_{ij} = l(u_i, u_j)$, the complement of the normalized Levenshtein distance between $u_i$ and $u_j$.
Given the set of embedding vectors $\lbrace e_0,...,e_{n-1}\rbrace $ where $e_i$ is the embedding for sentence $s_i$, we also compute $D \in \mathbb {R}^{n \times n}$, where $d_{ij} = d(e_i, e_j)$. We further normalize $d_{ij}$ to be within $[0, 1]$ by min-max normalization to obtain $\hat{D} = \operatorname{minMax}(D)$.
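A sketch of constructing the two matrices compared by this score is given below. Normalizing the Levenshtein distance by the longer sequence length is an assumption, as is the use of plain cosine distance on L2-normalized embeddings; the edit distance is implemented directly to avoid relying on any particular library.

import numpy as np

def levenshtein(a, b):
    # Edit distance between two UPOS tag sequences (lists of tags).
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    dp[:, 0] = np.arange(len(a) + 1)
    dp[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1,
                           dp[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a), len(b)]

def similarity_and_distance_matrices(upos_seqs, embeddings):
    n = len(upos_seqs)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            norm = max(len(upos_seqs[i]), len(upos_seqs[j]))   # assumed normalization
            L[i, j] = 1.0 - levenshtein(upos_seqs[i], upos_seqs[j]) / norm
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    D = 1.0 - E @ E.T                                # pairwise cosine distances
    D_hat = (D - D.min()) / (D.max() - D.min())      # min-max normalization
    return L, D_hat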
Following BIBREF19, we define the functional dissimilarity score by comparing $L$ and $\hat{D}$.
Intuitively, UPOS sequences that are similar (smaller edit distance) should be embedded close to each other in the embedding space, and embeddings that are further away should have dissimilar UPOS sequences. Hence, the functional dissimilarity score is low if the relative changes in UPOS sequences are reflected in the embedding space. The score is high if such changes are not reflected.
The functional dissimilarity score was computed using sentences from the test set of the CoNLL 2017 Universal Dependencies task BIBREF20 for the relevant languages, with the provided UPOS sequences. Furthermore, none of the evaluated models, including the proposed method, were trained with CoNLL 2017 data.
We compared the functional dissimilarity scores of our syntactic representations against embeddings obtained from BERT and LASER, to further demonstrate that simple network structures with explicit supervision may be sufficient to capture syntactic structure. All the results are shown in Table TABREF29. We only show the best (lowest) results from BERT.
<<</Functional Dissimilarity>>>
<<<Transfer Performance of Syntactic Embeddings>>>
Many NLP tasks utilize POS as features, but human-annotated POS sequences are difficult and expensive to obtain. Thus, it is important to know whether we can learn sentence-level syntactic embeddings for low-resource languages without treebanks.
We performed zero-shot transfer of the syntactic embeddings for French, Portuguese and Indonesian. French and Portuguese are simulated low-resource languages, while Indonesian is a true low-resource language. We reported the 1-NN and 5-NN accuracies for all languages using the same evaluation setting as described in the previous section. The results are shown in Table TABREF31 (top).
We also fine-tuned the learned syntactic embeddings on the low-resource language for a varying number of training data and languages. The results are shown in Table TABREF31 (bottom). In this table, the low-resource language is denoted as the `source', while the high-resource language(s) is denoted as the `target'. With this training method, no UPOS tag information was provided to the model for the `source' languages, where supervising information comes solely from parallel sentences and UPOS tags in high-resource languages.
The results show that for a new language (French and Portuguese) that is similar to the family of pre-training languages, there are two ways to achieve higher 1-NN accuracy. If the number of unique sentences in the new language is small, accuracy can be improved by increasing the size of the parallel corpora used to fine-tune. If only one parallel corpus is available, accuracy can be improved by increasing the number of unique sentence-pairs used to fine-tune.
For a new language that is dissimilar to the family of pre-training languages, e.g. Indonesian in Table TABREF31, the above methods only improved nearest neighbours accuracy slightly. This may be caused by differing data distribution or by tagger inaccuracies. The results for Indonesian do indicate that some syntactic structure can be learned by using our method, even for a dissimilar language.
A future direction is to conduct a rigorous analysis of transfer learning between languages from the same versus different language families.
<<</Transfer Performance of Syntactic Embeddings>>>
<<</Experiments>>>
<<<Conclusion>>>
We examined the possibility of creating syntactic embeddings by using a multilingual method based on sequence-to-sequence models. In contrast to prior work, our method only requires parallel corpora and UPOS tags in the target language.
We studied the quality of learned embeddings by examining nearest neighbours in the embedding space and investigating their functional dissimilarity. These results were compared against recent state-of-the-art language models. We also showed that pre-training with a parallel corpus allowed the syntactic embeddings to be transferred to low-resource languages via few-shot fine-tuning.
Our evaluations indicated that syntactic structure can be learnt by using simple network architectures and explicit supervision. Future directions include improving the transfer performance for low-resource languages, disentangling semantic and syntactic embeddings, and analyzing the effect of transfer learning between languages belonging to the same versus different language families.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nMethod\nArchitecture\nTraining\nDataset\nTatoeba\nOpenSubtitles\nExperiments\nQuality of Syntactic Embeddings\nLanguage Model\nFunctional Dissimilarity\nTransfer Performance of Syntactic Embeddings\nConclusion"
],
"type": "outline"
}
|
1912.00864
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Conclusion-Supplement Answer Generation for Non-Factoid Questions
<<<Abstract>>>
This paper tackles the goal of conclusion-supplement answer generation for non-factoid questions, which is a critical issue in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI), as users often require supplementary information before accepting a conclusion. The current encoder-decoder framework, however, has difficulty generating such answers, since it may become confused when it tries to learn several different long answers to the same non-factoid question. Our solution, called an ensemble network, goes beyond single short sentences and fuses logically connected conclusion statements and supplementary statements. It extracts the context from the conclusion decoder's output sequence and uses it to create supplementary decoder states on the basis of an attention mechanism. It also assesses the closeness of the question encoder's output sequence and the separate outputs of the conclusion and supplement decoders as well as their combination. As a result, it generates answers that match the questions and have natural-sounding supplementary sequences in line with the context expressed by the conclusion sequence. Evaluations conducted on datasets including "Love Advice" and "Arts & Humanities" categories indicate that our model outputs much more accurate results than the tested baseline models do.
<<</Abstract>>>
<<<Introduction>>>
Question Answering (QA) modules play particularly important roles in recent dialog-based Natural Language Understanding (NLU) systems, such as Apple's Siri and Amazon's Echo. Users chat with AI systems in natural language to get the answers they are seeking. QA systems can deal with two types of question: factoid and non-factoid ones. The former sort asks, for instance, for the name of a thing or person such as “What/Who is $X$?”. The latter sort includes more diverse questions that cannot be answered by a short fact. For instance, users may ask for advice on how to make a long-distance relationship work well or for opinions on public issues. Significant progress has been made in answering factoid questions BIBREF0, BIBREF1; however, answering non-factoid questions remains a challenge for QA modules.
Long short term memory (LSTM) sequence-to-sequence models BIBREF2, BIBREF3, BIBREF4 try to generate short replies to the short utterances often seen in chat systems. Evaluations have indicated that these models have the possibility of supporting simple forms of general knowledge QA, e.g. “Is the sky blue or black?”, since they learn commonly occurring sentences in the training corpus. Recent machine reading comprehension (MRC) methods BIBREF5, BIBREF6 try to return a single short answer to a question by extracting answer spans from the provided passages. Unfortunately, they may generate unsatisfying answers to regular non-factoid questions because they can easily become confused when learning several different long answers to the same non-factoid question, as pointed out by BIBREF7, BIBREF8.
This paper tackles a new problem: conclusion-supplement answer generation for non-factoid questions. Here, the conclusion consists of sentences that directly answer the question, while the supplement consists of information supporting the conclusion, e.g., reasons or examples. Such conclusion-supplement answers are important for helping questioners decide their actions, especially in NLU. As described in BIBREF9, users prefer a supporting supplement before accepting an instruction (i.e., a conclusion). Good debates also include claims (i.e., conclusions) about a topic and supplements to support them that will allow users to reach decisions BIBREF10. The following example helps to explain how conclusion-supplement answers are useful to users: “Does separation by a long distance ruin love?” Current methods tend to answer this question with short and generic replies, such as, “Distance cannot ruin true love”. The questioner, however, is not likely to be satisfied with such a trite answer and will want to know how the conclusion was reached. If a supplemental statement like “separations certainly test your love” is presented with the conclusion, the questioner is more likely to accept the answer and use it to reach a decision. Furthermore, there may be multiple answers to a non-factoid question. For example, the following answer is also a potential answer to the question: “distance ruins most relationships. You should keep in contact with him”. The current methods, however, have difficulty generating such conclusion-supplement answers because they can become easily confused when they try to learn several different and long answers to a non-factoid question.
To address the above problem, we propose a novel architecture, called the ensemble network. It is an extension of existing encoder-decoder models, and it generates two types of decoder output sequence, conclusion and supplement. It uses two viewpoints for selecting the conclusion statements and supplementary statements. (Viewpoint 1) The context present in the conclusion decoder's output is linked to supplementary-decoder output states on the basis of an attention mechanism. Thus, the context of the conclusion sequence directly impacts the decoder states of the supplement sequences. This, as a result, generates natural-sounding supplementary sequences. (Viewpoint 2) The closeness of the question sequence and conclusion (or supplement) sequence as well as the closeness of the question sequence with the combination of conclusion and supplement sequences is considered. By assessing the closeness at the sentence level and sentence-combination level in addition to at the word level, it can generate answers that include good supplementary sentences following the context of the conclusion. This avoids having to learn several different conclusion-supplement answers assigned to a single non-factoid question and generating answers whose conclusions and supplements are logically inconsistent with each other.
Community-based QA (CQA) websites tend to provide answers composed of conclusion and supplementary statements; from our investigation, 77% of non-factoid answers (love advice) in the Oshiete-goo (https://oshiete.goo.ne.jp) dataset consist of these two statement types. The same is true for 82% of the answers in the Yahoo non-factoid dataset related to the fields of social science, society & culture and arts & humanities. We used the above-mentioned CQA datasets in our evaluations, since they provide diverse answers given by many responders. The results showed that our method outperforms existing ones at generating correct and natural answers. We also ran a love advice service on Oshiete-goo to evaluate the usefulness of our ensemble network.
<<</Introduction>>>
<<<Related work>>>
The encoder-decoder framework learns how to transform one representation into another. Contextual LSTM (CLSTM) incorporates contextual features (e.g., topics) into the encoder-decoder framework BIBREF11, BIBREF12. It can be used to make the context of the question a part of the answer generation process. HieRarchical Encoder Decoder (HRED) BIBREF12 extends the hierarchical recurrent encoder-decoder neural network into the dialogue domain; each question can be encoded into a dense context vector, which is used to recurrently decode the tokens in the answer sentences. Such sequential generation of next statement tokens, however, weakens the original meaning of the first statement (question). Recently, several models based on the Transformer BIBREF13, such as for passage ranking BIBREF14, BIBREF15 and answer selection BIBREF16, have been proposed to evaluate question-answering systems. There are, however, few Transformer-based methods that generate non-factoid answers.
Recent neural answer selection methods for non-factoid questions BIBREF17, BIBREF18, BIBREF19 learn question and answer representations and then match them using certain similarity metrics. They use open datasets stored at CQA sites like Yahoo! Answers since they include many diverse answers given by many responders and thus are good sources of non-factoid QA training data. The above methods, however, can only select and extract answer sentences, they do not generate them.
Recent machine reading comprehension methods try to answer a question with exact text spans taken from provided passages BIBREF20, BIBREF6, BIBREF21, BIBREF22. Several studies on the MS-MARCO dataset BIBREF23, BIBREF5, BIBREF8 define the task as using multiple passages to answer a question where the words in the answer are not necessarily present in the passages. Their models, however, require passages other than QA pairs for both training and testing. Thus, they cannot be applied to CQA datasets that do not have such passages. Furthermore, most of the questions in their datasets only have a single answer. Thus, we think their purpose is different from ours; generating answers for non-factoid questions that tend to demand diverse answers.
There are several complex QA tasks such as those present in the TREC complex interactive QA tasks or DUC complex QA tasks. Our method can be applied to those non-factoid datasets if an access fee is paid.
<<</Related work>>>
<<<Model>>>
This section describes our conclusion-supplement answer generation model in detail. An overview of its architecture is shown in Figure FIGREF3.
Given an input question sequence ${\bf {Q}} = \lbrace {\bf {q}}_1, \cdots , {\bf {q}}_i, \cdots , {\bf {q}}_{N_q}\rbrace $, the proposal outputs a conclusion sequence ${\bf {C}} = \lbrace {\bf {c}}_1, \cdots , {\bf {c}}_t, \cdots , {\bf {c}}_{N_c}\rbrace $, and supplement sequence ${\bf {S}} = \lbrace {\bf {s}}_1, \cdots , {\bf {s}}_t, \cdots , {\bf {s}}_{N_s}\rbrace $. The goal is to learn a function mapping from ${\bf {Q}}$ to ${\bf {C}}$ and ${\bf {S}}$. Here, ${\bf {q}}_i$ denotes a one-of-$K$ embedding of the $i$-th word in an input sequence of length $N_q$. ${\bf {c}}_t$ (${\bf {s}}_t$) denotes a one-of-$K$ embedding of the $t$-th word in an input sequence of length $N_c$ ($N_s$).
<<<Encoder>>>
The encoder converts the input $\bf {Q}$ into a question embedding, ${\bf {O}}_q$, and hidden states, ${\bf {H}}={\lbrace {\bf {h}}_i\rbrace _i}$.
Since the question includes several pieces of background information on the question, e.g. on the users' situation, as well as the question itself, it can be very long and composed of many sentences. For this reason, we use the BiLSTM encoder, which encodes the question in both directions, to better capture the overall meaning of the question. It processes both directions of the input, $\lbrace {\bf {q}}_1, \cdots , {\bf {q}}_{N_q}\rbrace $ and $\lbrace {\bf {q}}_{N_q}, \cdots , {\bf {q}}_{1}\rbrace $, sequentially. At time step $t$, the encoder updates the hidden state by:
where $f()$ is an LSTM unit, and ${\bf {h}}^f_i$ and ${\bf {h}}^b_i$ are hidden states output by the forward-direction LSTM and backward-direction LSTM, respectively.
We also want to reflect sentence-type information such as conclusion type or supplement type in sequence-to-sequence learning to better understand the conclusion or supplement sequences. We achieve this by adding a sentence type vector for conclusion $\bf {C}$ or for supplement $\bf {S}$ to the input gate, forget gate, output gate, and cell memory state in the LSTM model. This is equivalent to processing a composite input [${\bf {q}}_i$, $\bf {C}$] or [${\bf {q}}_i$, $\bf {S}$] in the LSTM cell that concatenates the word embedding and sentence-type embedding vectors. We use this modified LSTM in the above BiLSTM model as:
When encoding the question to decode the supplement sequence, ${\bf {S}}$ is input instead of ${\bf {C}}$ in the above equation.
The BiLSTM encoder then applies a max-pooling layer to all hidden vectors to extract the most salient signal for each word. As a result, it generates a fixed-sized distributed vector representation for the conclusion, ${\bf {O}}^c_q$, and another for the supplement, ${\bf {O}}^s_q$. ${\bf {O}}^c_q$ and ${\bf {O}}^s_q$ are different since the encoder is biased by the corresponding sentence-type vector, $\bf {C}$ or $\bf {S}$.
As depicted in Figure FIGREF3, the BiLSTM encoder processes each word with a sentence-type vector (i.e. $\bf {C}$ or $\bf {S}$) and the max-pooling layer to produce the question embedding ${\bf {O}}^c_q$ or ${\bf {O}}^s_q$. These embeddings are used as context vectors in the decoder network for the conclusion and supplement.
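The sketch below illustrates the question encoder with a sentence-type bias; it uses the equivalent concatenation formulation [${\bf {q}}_i$, $\bf {C}$] / [${\bf {q}}_i$, $\bf {S}$] mentioned above rather than modifying the LSTM gates directly, and all dimensions and names are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class TypedQuestionEncoder(nn.Module):
    # Sketch of the question encoder biased by a sentence type (C or S).
    def __init__(self, vocab_size=12000, emb_dim=500, hid_dim=500, n_types=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.type_emb = nn.Embedding(n_types, emb_dim)   # 0: conclusion, 1: supplement
        # Composite input [q_i, C] or [q_i, S]: word embedding concatenated
        # with the sentence-type embedding before the BiLSTM.
        self.bilstm = nn.LSTM(2 * emb_dim, hid_dim, bidirectional=True,
                              batch_first=True)

    def forward(self, token_ids, sent_type):
        T = token_ids.size(1)
        typ = self.type_emb(sent_type).unsqueeze(1).expand(-1, T, -1)
        x = torch.cat([self.word_emb(token_ids), typ], dim=-1)
        h, _ = self.bilstm(x)                  # (B, T, 2*hid) hidden states
        o_q = h.max(dim=1).values              # max-pooled question embedding O_q^c or O_q^s
        return o_q, h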
<<</Encoder>>>
<<<Decoder>>>
The decoder is composed of a conclusion decoder and supplement decoder. Here, let ${\bf {h}}^{\prime }_t$ be the hidden state of the $t$-th LSTM unit in the conclusion decoder. Similar to the encoder, the decoder also decodes a composite input [${\bf {c}}_t$, $\bf {C}$] in an LSTM cell that concatenates the conclusion word embedding and sentence-type embedding vectors. It is formulated as follows:
where $f^{\prime }()$ denotes the conclusion decoder LSTM, $\operatornamewithlimits{softmax}_c$ the probability of word $c$ given by a softmax layer, $c_t$ the $t$-th conclusion decoded token, and ${\bf {c}}_t$ the word embedding of $c_t$. The supplement decoder's hidden state ${\bf {h}}^{\prime \prime }_t$ is computed in the same way as ${\bf {h}}^{\prime }_t$; however, it is updated in the ensemble network described in the next subsection.
As depicted in Figure FIGREF3, the LSTM decoder processes tokens according to question embedding ${\bf {O}}^c_q$ or ${\bf {O}}^s_q$, which yields a bias corresponding to the sentence-type vector, $\bf {C}$ or $\bf {S}$. The output states are then input to the ensemble network.
<<</Decoder>>>
<<<Ensemble network>>>
The conventional encoder-decoder framework often generates short and simple sentences that fail to adequately answer non-factoid questions. Even if we force it to generate longer answers, the decoder output sequences become incoherent when read from the beginning to the end.
The ensemble network solves the above problem by (1) passing the context from the conclusion decoder's output sequence to the supplementary decoder hidden states via an attention mechanism, and (2) considering the closeness of the encoder's input sequence to the decoders' output sequences as well as the closeness of the encoder's input sequence to the combination of decoded output sequences.
(1) To control the context, we assess all the information output by the conclusion decoder and compute the conclusion vector, ${\bf {O}}_c$. ${\bf {O}}_c$ is a sentence-level representation that is more compact, abstractive, and global than the original decoder output sequence. To get it, we apply BiLSTM to the conclusion decoder's output states $\lbrace {{{\tilde{\bf {y}}}}_t^c} \rbrace _t$; i.e., $\lbrace {{{\tilde{\bf {y}}}}_t^c} \rbrace _t = \lbrace {\bf {U}}\cdot \operatornamewithlimits{softmax}({\bf {h}}^{\prime }_t)\rbrace _t$, where word representation matrix $\bf {U}$ holds the word representations in its columns. At time step $t$, the BiLSTM encoder updates the hidden state by:
where ${\bf {h}}^{c,f}_t$ and ${\bf {h}}^{c,b}_t$ are the hidden states output by the forward LSTM and backward LSTM in the conclusion encoder, respectively. It applies a max-pooling layer to all hidden vectors to extract the most salient signal for each word to compute the embedding for conclusion ${\bf {O}}_c$. Next, it computes the context vector ${\bf {cx}}_t$ at the $t$-th step by using the $(t\!\!-\!\!1)$-th output hidden state of the supplement decoder, ${\bf {h}}^{\prime \prime }_{t\!-\!1}$, weight matrices, ${\bf {V}}_a$ and ${\bf {W}}_a$, and a sigmoid function, $\sigma $:
This computation lets our ensemble network extract a conclusion-sentence level context. The resulting supplement sequences follow the context of the conclusion sequence. Finally, ${{\bf {h}}}^{\prime \prime }_t$ is computed as:
$z$ can be $i$, $f$, or $o$, which represent three gates (e.g., input ${\bf {i}}_t$, forget ${\bf {f}}_t$, and output ${\bf {o}}_t$). ${\bf {l}}_t$ denotes a cell memory vector. ${{\bf {W}}}^a_z$ and ${{\bf {W}}}^a_l$ denote attention parameters.
(2) To control the closeness at the sentence level and sentence-combination level, it assesses all the information output by the supplement decoder and computes the supplement vector, ${\bf {O}}_s$, in the same way as it computes ${\bf {O}}_c$. That is, it applies BiLSTM to the supplement decoder's output states $\lbrace {{{\tilde{\bf {y}}}}_t^s} \rbrace _t$; i.e., $\lbrace {{{\tilde{\bf {y}}}}_t^s} \rbrace _t = \lbrace {\bf {U}}\!\cdot \! \operatornamewithlimits{softmax}({{\bf {h}}_t^{\prime \prime }})\rbrace _t$, where the word representations are found in the columns of $\bf {U}$. Next, it applies a max-pooling layer to all hidden vectors in order to compute the embeddings for supplement ${\bf {O}}_s$. Finally, to generate the conclusion-supplement answers, it assesses the closeness of the embeddings for the question ${\bf {O}}_q$ to those for the answer sentences (${\bf {O}}_c$ or ${\bf {O}}_s$) and their combination ${\bf {O}}_c$ and ${\bf {O}}_s$. The loss function for the above metrics is described in the next subsection.
As depicted in Figure FIGREF3, the ensemble network computes the conclusion embedding ${\bf {O}}_c$, the attention parameter weights from ${\bf {O}}_c$ to the decoder output supplement states (dotted lines represent attention operations), and the supplement embedding ${\bf {O}}_s$. Then, ${\bf {O}}_c$ and ${\bf {O}}_s$ are input to the loss function together with the question embedding ${\bf {O}}_q = [{\bf {O}}^c_q,{\bf {O}}^s_q]$.
<<</Ensemble network>>>
<<<Loss function of ensemble network>>>
Our model uses a new loss function rather than generative supervision, which aims to maximize the conditional probability of generating the sequential output $p({\bf {y}}|{\bf {q}})$. This is because we think that assessing the closeness of the question and an answer sequence as well as the closeness of the question to two answer sequences is useful for generating natural-sounding answers.
The loss function is for optimizing the closeness of the question and conclusion and that of the question and supplement as well as for optimizing the closeness of the question with the combination of the conclusion and supplement. The training loss ${\cal {L}}_s$ is expressed as the following hinge loss, where ${\bf {O}}^{+}$ is the output decoder vector for the ground-truth answer, ${\bf {O}}^{-}$ is that for an incorrect answer randomly chosen from the entire answer space, $M$ is a constant margin, and $\mathbb {A}$ is set equal to $\lbrace [{\bf {O}}^{+}_c, {\bf {O}}^{-}_s], [{\bf {O}}^{-}_c, {\bf {O}}^{+}_s], [{\bf {O}}^{-}_c, {\bf {O}}^{-}_s]\rbrace $:
The key idea is that ${\cal {L}}_s$ checks whether or not the conclusion, supplement, and their combination have been well predicted. In so doing, ${\cal {L}}_s$ can optimize not only the prediction of the conclusion or supplement but also the prediction of the combination of conclusion and supplement.
The model is illustrated in the upper part of Figure FIGREF3; $({\bf {O}}_q, {\bf {O}}_c, {\bf {O}}_s)$ is input to compute the closeness and sequence combination losses.
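One plausible instantiation of this hinge loss, covering the combination-level terms over the set $\mathbb {A}$, is sketched below. The use of cosine similarity as the closeness measure and the omission of the per-part (question-conclusion and question-supplement) terms are assumptions; the precise scoring function is not reproduced here.

import torch
import torch.nn.functional as F

def ensemble_hinge_loss(o_q, o_c_pos, o_s_pos, o_c_neg, o_s_neg, margin=0.2):
    # o_q: question embedding [O_q^c, O_q^s]; the others are conclusion/supplement
    # embeddings for the ground-truth (+) and a randomly chosen incorrect (-) answer.
    def sim(q, a):
        return F.cosine_similarity(q, a, dim=-1)   # closeness measure (an assumption)
    pos = sim(o_q, torch.cat([o_c_pos, o_s_pos], dim=-1))
    negatives = [torch.cat([o_c_pos, o_s_neg], dim=-1),   # the set A from the text
                 torch.cat([o_c_neg, o_s_pos], dim=-1),
                 torch.cat([o_c_neg, o_s_neg], dim=-1)]
    loss = 0.0
    for neg in negatives:
        loss = loss + torch.clamp(margin - pos + sim(o_q, neg), min=0).mean()
    return loss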
<<</Loss function of ensemble network>>>
<<<Training>>>
The training loss ${\cal {L}}_w$ is used to check ${\cal {L}}_s$ and the cross-entropy loss in the encoder-decoder model. In the following equation, the conclusion and supplement sequences are merged into one sequence $\bf {Y}$ of length $T$, where $T\!=\!N_c\!+\!N_s$.
$\alpha $ is a parameter to control the weighting of the two losses. We use adaptive stochastic gradient descent (AdaGrad) to train the model in an end-to-end manner. The loss of a training batch is averaged over all instances in the batch.
Figure FIGREF3 illustrates the loss for the ensemble network and the cross-entropy loss.
<<</Training>>>
<<</Model>>>
<<<Evaluation>>>
<<<Compared methods>>>
We compared the performance of our method with those of (1) Seq2seq, a seq2seq attention model proposed by BIBREF4; (2) CLSTM, i.e., the CLSTM model BIBREF11; (3) Trans, the Transformer BIBREF13, which has proven effective for common NLP tasks. In these three methods, conclusion sequences and supplement sequences are decoded separately and then joined to generate answers. They give more accurate results than methods in which the conclusion sequences and supplement sequences are decoded sequentially. We also compared (4) HRED, a hierarchical recurrent encoder-decoder model BIBREF12 in which conclusion sequences and supplement sequences are decoded sequentially to learn the context from conclusion to supplement; (5) NAGMWA, i.e., our neural answer generation model without an attention mechanism. This means that NAGMWA does not pass ${\bf {cx}}_t$ in Eq. (DISPLAY_FORM10) to the decoder, and conclusion decoder and supplement decoder are connected only via the loss function ${\cal {L}}_s$. In the tables and figures that follow, NAGM means our full model.
<<</Compared methods>>>
<<<Dataset>>>
Our evaluations used the following two CQA datasets:
<<<Oshiete-goo>>>
The Oshiete-goo dataset includes questions stored in the “love advice” category of the Japanese QA site, Oshiete-goo. It has 771,956 answers to 189,511 questions. We fine-tuned the model using a corpus containing about 10,032 question-conclusion-supplement (q-c-s) triples. We used 2,824 questions from the Oshiete-goo dataset. On average, the answers to these questions consisted of about 3.5 conclusions and supplements selected by human experts. The questions, conclusions, and supplements had average lengths of 482, 41, and 46 characters, respectively. There were 9,779 word tokens in the questions and 6,317 tokens in answers; the overlap was 4,096.
<<</Oshiete-goo>>>
<<<nfL6>>>
We also used the Yahoo nfL6 dataset, the largest publicly available English non-factoid CQA dataset. It has 499,078 answers to 87,361 questions. We fine-tuned the model by using questions in the “social science”, “society & culture”, and “arts & humanities” categories, since they require diverse answers. This yielded 114,955 answers to 13,579 questions. We removed answers that included some stop words, e.g. slang words, or those that only refer to some URLs or descriptions in literature, since such answers often become noise when an answer is generated. Human experts annotated 10,299 conclusion-supplement sentences pairs in the answers.
In addition, we used a neural answer-sentence classifier to classify the sentences into conclusion or supplement classes. It first classified the sentences into supplements if they started with phrases such as “this is because” or “therefore”. Then, it applied a BiLSTM with max-pooling to the remaining unclassified sentences, ${\bf {A}} = \lbrace {\bf {a}}_1, {\bf {a}}_2, \cdots , {\bf {a}}_{N_a}\rbrace $, and generated embeddings for the un-annotated sentences, ${\bf {O}}^a$. After that, it used a logistic sigmoid function to return the probabilities of mappings to two discrete classes: conclusion and supplement. This mapping was learned by minimizing the classification errors using the above 10,299 labeled sentences. As a result, we automatically acquired 70,000 question-conclusion-supplement triples from the entire answers. There were 11,768 questions and 70,000 answers. Thus, about 6 conclusions and supplements on average were assigned to a single question. The questions, conclusions, and supplements had average lengths of 46, 87, and 71 characters, respectively. We checked the performance of the classifier; human experts checked whether the annotation results were correct or not. They judged that it was about 81% accurate (it classified 56,762 of 70,000 sentences into correct classes). There were 15,690 word tokens in questions and 124,099 tokens in answers; the overlap was 11,353.
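A sketch of the two-stage sentence-type classifier is given below; the cue-phrase list, dimensions, and decision threshold are illustrative assumptions, not details taken from the original implementation.

import torch
import torch.nn as nn

SUPPLEMENT_CUES = ("this is because", "therefore")   # cue phrases mentioned above

class SentenceTypeClassifier(nn.Module):
    # BiLSTM with max-pooling followed by a logistic sigmoid, as described above.
    def __init__(self, vocab_size=20000, emb_dim=300, hid_dim=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True,
                              batch_first=True)
        self.score = nn.Linear(2 * hid_dim, 1)

    def forward(self, token_ids):                    # token_ids: (1, T)
        h, _ = self.bilstm(self.emb(token_ids))
        o_a = h.max(dim=1).values                    # max-pooled sentence embedding O^a
        return torch.sigmoid(self.score(o_a))        # probability of "supplement"

def classify(sentence_text, token_ids, model, threshold=0.5):
    if sentence_text.lower().startswith(SUPPLEMENT_CUES):   # rule-based first stage
        return "supplement"
    prob = model(token_ids).item()
    return "supplement" if prob > threshold else "conclusion"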
<<</nfL6>>>
<<</Dataset>>>
<<<Methodology>>>
We conducted three evaluations using the Oshiete-goo dataset; we selected three different sets of 500 human-annotated test pairs from the full dataset. In each set, we trained the model by using training pairs and input questions in test pairs to the model. We repeated the experiments three times by randomly shuffling the train/test sets.
For the evaluations using the nfL6 dataset, we prepared three different sets of 500 human-annotated test q-c-s triples from the full dataset. We used 10,299 human-annotated triples to train the neural sentence-type classifier. Then, we applied the classifier to the unlabeled answer sentences. Finally, we evaluated the answer generation performance by using three sets of machine-annotated 69,500 triples and 500 human-annotated test triples.
After training, we input the questions in the test triples to the model to generate answers for both datasets. We compared the generated answers with the correct answers. The results described below are average values of the results of three evaluations.
The softmax computation was slow since there were so many word tokens in both datasets. Many studies BIBREF24, BIBREF25, BIBREF3 restricted the word vocabulary to one based on frequency. This, however, narrows the diversity of the generated answers. Since diverse answers are necessary to properly reply to non-factoid questions, we used bigram tokens instead of word tokens to speed up the computation without restricting the vocabulary. Accordingly, we put 4,087 bigram tokens in the Oshiete-goo dataset and 11,629 tokens in the nfL6 dataset.
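Whether the bigrams are character- or word-level is not spelled out here; the sketch below assumes character-level bigrams, which would keep the vocabulary in the reported range, but this is an assumption about the tokenization.

def char_bigrams(text):
    # Character-bigram tokens (an assumed tokenization); spaces kept as '_'.
    chars = text.replace(" ", "_")
    return [chars[i:i + 2] for i in range(len(chars) - 1)]

char_bigrams("true love")   # ['tr', 'ru', 'ue', 'e_', '_l', 'lo', 'ov', 've']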
To measure performance, we used human judgment as well as two popular metrics BIBREF2, BIBREF25, BIBREF4 for measuring the fluency of computer-generated text: ROUGE-L BIBREF26 and BLEU-4 BIBREF27. ROUGE-L is commonly used for evaluating non-factoid QA BIBREF28; however, we also consider human judgement important for this task.
<<</Methodology>>>
<<<Parameter setup>>>
For both datasets, we tried different parameter values and set the size of the bigram token embedding to 500, the size of LSTM output vectors for the BiLSTMs to $500 \times 2$, and number of topics in the CLSTM model to 15. We tried different margins, $M$, in the hinge loss function and settled on $0.2$. The iteration count $N$ was set to 100.
We varied $\alpha $ in Eq. (DISPLAY_FORM13) from 0 to 2.0 and checked the impact of $L_s$ by changing $\alpha $. Table TABREF18 shows the results. When $\alpha $ is zero, the results are almost as poor as those of the seq2seq model. On the other hand, while raising the value of $\alpha $ places greater emphasis on our ensemble network, it also degrades the grammaticality of the generated results. We set $\alpha $ to 1.0 after determining that it yielded the best performance. This result clearly indicates that our ensemble network contributes to the accuracy of the generated answers.
A comparison of our full method NAGM with the one without the sentence-type embedding (we call this method w/o ste) that trains separate decoders for the two types of sentences is shown in Table TABREF19. The result indicates that the sentence-type vector, $\bf {C}$ or $\bf {S}$, contributes to the accuracy of the results since it distinguishes between sentence types.
<<</Parameter setup>>>
<<<Results>>>
<<<Performance>>>
The results for Oshiete-goo are shown in Table TABREF20 and those for nfL6 are shown in Table TABREF21. They show that CLSTM is better than Seq2seq. This is because it incorporates contextual features, i.e. topics, and thus can generate answers that track the question's context. Trans is also better than Seq2seq, since it uses attention from the question to the conclusion or supplement more effectively than Seq2seq. HRED failed to attain a reasonable level of performance. These results indicate that sequential generation has difficulty generating subsequent statements that follow the original meaning of the first statement (question).
NAGMWA is much better than the other methods except NAGM, since it generates answers whose conclusions and supplements, as well as their combinations, closely match the questions. Thus, the conclusions and supplements in the answers are consistent with each other, which avoids the confusion caused by several different conclusion-supplement answers being assigned to a single non-factoid question. Finally, NAGM is consistently superior to the conventional attentive encoder-decoders regardless of the metric. Its ROUGE-L and BLEU-4 scores are much higher than those of CLSTM. Thus, NAGM generates more fluent sentences by assessing the context from conclusion to supplement sentences in addition to the closeness of the question and sentences as well as that of the question and sentence combinations.
<<</Performance>>>
<<<Human evaluation>>>
Following evaluations made by crowdsourced evaluators BIBREF29, we conducted human evaluations to judge the outputs of CLSTM and those of NAGM. Different from BIBREF29, we hired human experts who had experience in Oshiete-goo QA community service. Thus, they were familiar with the sorts of answers provided by and to the QA community.
The experts asked questions, which were not included in our training datasets, to the AI system and rated the answers; one answer per question. The experts rated the answers as follows: (1) the content of the answer matched the question, and the grammar was okay; (2) the content was suitable, but the grammar was poor; (3) the content was not suitable, but the grammar was okay; (4) both the content and grammar were poor. Note that our evaluation followed the DUC-style strategy. Here, we mean “grammar” to cover grammaticality, non-redundancy, and referential clarity in the DUC strategy, whereas we mean the “content matched the questions” to refer to “focus” and “structure and coherence” in the DUC strategy. The evaluators were given more than a week to carefully evaluate the generated answers, so we consider that their judgments are reliable. Each expert evaluated 50 questions. We combined the scores of the experts by summing them. They did not know the identity of the system in the evaluation and reached their decisions independently.
Table TABREF22 and Table TABREF22 present the results. The numbers are percentages. Table 7 presents examples of questions and answers. For Oshiete-goo results, the original Japanese and translated English are presented. The questions are very long and include long background descriptions before the questions themselves.
These results indicate that the experts were much more satisfied with the outputs of NAGM than those of CLSTM. This is because, as can be seen in Table 7, NAGM generated longer and better question-related sentences than CLSTM did. NAGM generated grammatically good answers whose conclusion and supplement statements are well matched with the question and the supplement statement naturally follows the conclusion statement.
<<</Human evaluation>>>
<<<Generating answers missing from the corpus>>>
The encoder-decoder network tends to re-generate answers in the training corpus. On the other hand, NAGM can generate answers not present in the corpus by virtue of its ensemble network that considers contexts and sentence combinations.
Table 7 lists some examples. For example, answer #1 generated by NAGM is not in the training corpus. We think it was generated from the parts in italics in the following three sentences that are in the corpus: (1) “I think that it is better not to do anything from your side. If there is no reaction from him, it is better not to do anything even if there is opportunity to meet him next.” (2) “I think it may be good for you to approach your lover. Why don't you think positively about it without thinking too pessimistically?” (3) “Why don't you tell your lover that you usually do not say what you are thinking. $\cdots $ I think that it is important to communicate the feelings to your lover; how you like or care about him/her especially when you are quarreling with each other.”
The generation of new answers is important for non-factoid answer systems, since they must cope with slight differences in question contexts from those in the corpus.
<<</Generating answers missing from the corpus>>>
<<<Online evaluation in “Love Advice” service>>>
Our ensemble network is currently being used in the love advice service of Oshiete goo BIBREF30. The service uses only the ensemble network to ensure that the service offers high-quality output free from grammar errors. We input the sequences in our evaluation corpus instead of the decoder output sequences into the ensemble network. Our ensemble network then learned the optimum combination of answer sequences as well as the closeness of the question and those sequences. As a result, it can construct an answer that corresponds to the situation underlying the question. In particular, the AI, named Oshi-el (meaning teaching angel), used our ensemble network to reply to 33,062 questions entered from September 6th, 2016 to November 17th, 2019, and 5,702 of its answers were judged by users of the service to be good answers. Oshi-el output good answers at about twice the rate of the average human responder in Oshiete-goo who answered more than 100 questions in the love advice category. Thus, we think this is a good result.
Furthermore, to evaluate the effectiveness of the supplemental information, we prepared 100 answers that only contained conclusion sentences during the same period of time. As a result, users rated the answers that contained both conclusion and supplement sentences as good 1.6 times more often than those that contained only conclusion sentences. This shows that our method successfully incorporated supplemental information in answering non-factoid questions.
<<</Online evaluation in “Love Advice” service>>>
<<</Results>>>
<<</Evaluation>>>
<<<Conclusion>>>
We tackled the problem of conclusion-supplement answer generation for non-factoid questions, an important task in NLP. We presented an architecture, ensemble network, that uses an attention mechanism to reflect the context of the conclusion decoder's output sequence on the supplement decoder's output sequence. The ensemble network also assesses the closeness of the encoder input sequence to the output of each decoder and the combined output sequences of both decoders. Evaluations showed that our architecture was consistently superior to conventional encoder-decoders in this task. The ensemble network is now being used in the “Love Advice,” service as mentioned in the Evaluation section.
Furthermore, our method, NAGM, can be generalized to generate much longer descriptions other than conclusion-supplement answers. For example, it is being used to generate Tanka, which is a genre of classical Japanese poetry that consists of five lines of words, in the following way. The first line is input by a human user to NAGM as a question, and NAGM generates the second line (like a conclusion) and the third line (like a supplement). The third line is again input to NAGM as a question, and NAGM generates the fourth line (like a conclusion) and the fifth line (like a supplement).
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nModel\nEncoder\nDecoder\nEnsemble network\nLoss function of ensemble network\nTraining\nEvaluation\nCompared methods\nDataset\nOshiete-goo\nnfL6\nMethodology\nParameter setup\nResults\nPerformance\nHuman evaluation\nGenerating answers missing from the corpus\nOnline evaluation in “Love Advice” service\nConclusion"
],
"type": "outline"
}
|
1910.11204
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Syntax-Enhanced Self-Attention-Based Semantic Role Labeling
<<<Abstract>>>
As a fundamental NLP task, semantic role labeling (SRL) aims to discover the semantic roles for each predicate within one sentence. This paper investigates how to incorporate syntactic knowledge into the SRL task effectively. We present different approaches of encoding the syntactic information derived from dependency trees of different quality and representations; we propose a syntax-enhanced self-attention model and compare it with other two strong baseline methods; and we conduct experiments with newly published deep contextualized word representations as well. The experiment results demonstrate that with proper incorporation of the high quality syntactic information, our model achieves a new state-of-the-art performance for the Chinese SRL task on the CoNLL-2009 dataset.
<<</Abstract>>>
<<<Introduction>>>
The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than the syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on.
Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of the end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones.
In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) the Relationship with other external resources: when we append other external resources into the SRL model, whether their contributions are orthogonal to the syntactic dependencies.
For the main architecture of the SRL model, many neural-network-based models use BiLSTM as the encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently the self-attention-based encoder has become popular due to both its effectiveness and efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which makes it convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Inspired by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information.
Various experiments for the Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) The quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) Deeper integration of the syntactic information achieves better results than the simple concatenation to the inputs; 3) External pre-trained contextualized word representations help to boost the SRL performance further, which is not entirely overlapping with the syntactic information.
In summary, the contributions of our work are:
We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, in what quality, in which representation and how to integrate.
We introduce the relation-aware approach to employ syntactic dependencies into the self-attention-based SRL model.
We compare our approach with previous studies, and achieve state-of-the-art results with and without external resources, i.e., in the so-called closed and open settings.
<<</Introduction>>>
<<<Related work>>>
Traditional semantic role labeling task BIBREF17 presumes that the syntactic structure of the sentence is given, either being a constituent tree or a dependency tree, like in the CoNLL shared tasks BIBREF18, BIBREF19, BIBREF20. Recent neural-network-based approaches can be roughly categorized into two classes: 1) making use of the syntactic information BIBREF21, BIBREF22, BIBREF23, BIBREF24, and 2) pure end-to-end learning from tokens to semantic labels, e.g., BIBREF25, BIBREF26.
BIBREF22 utilize an LSTM model to obtain embeddings from the syntactic dependency paths; while BIBREF24 construct Graph Convolutional Networks to encode the dependency structure. Although BIBREF8's approach is a pure end-to-end learning, they have included an analysis of adding syntactic dependency information into English SRL in the discussion section. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and BIBREF9 have compared different ways to represent and encode the syntactic knowledge.
In another line of research, BIBREF14 utilize the Transformer network for the encoder instead of the BiLSTM. BIBREF15 present a novel and effective multi-head self-attention model to incorporate syntax, which is called LISA (Linguistically-Informed Self-Attention). We follow their approach of replacing one attention head with the dependency head information, but use a softer way to capture the pairwise relationship between input elements BIBREF16.
For the datasets and annotations of the SRL task, most of the previous research focuses on 1) PropBank BIBREF27 and NomBank BIBREF28 annotations, i.e., the CoNLL 2005 BIBREF18 and CoNLL 2009 BIBREF20 shared tasks; 2) OntoNotes annotations BIBREF29, i.e., the CoNLL 2005 and CoNLL 2012 datasets and more; 3) and FrameNet BIBREF30 annotations. For the non-English languages, not all of them are widely available. Apart from these, in the broad range of semantic processing, other formalisms non-exhaustively include abstract meaning representation BIBREF31, universal decompositional semantics BIBREF32, and semantic dependency parsing BIBREF33. BIBREF34 give a better overview of various semantic representations. In this paper, we primarily work on the Chinese and English datasets from the CoNLL-2009 shared task and focus on the effectiveness of incorporating syntax into the Chinese SRL task.
<<</Related work>>>
<<<Approaches>>>
In this section, we first introduce the basic architecture of our self-attention-based SRL model, and then present two different ways to encode the syntactic dependency information. Afterwards, we compare three approaches to incorporate the syntax into the base model, concatenation to the input embedding, LISA, and our proposed relation-aware method.
<<<The Basic Architecture>>>
Our basic model is a multi-head self-attention-based model, which previous work has shown to be effective for the SRL task BIBREF35. The model consists of three layers: the input layer, the encoder layer, and the prediction layer, as shown in Figure FIGREF5.
<<<Input Layer>>>
The input layer contains three types of embeddings: token embedding, predicate embedding, and positional embedding.
Token Embedding includes word embedding and part-of-speech (POS) tag embedding.
Predicate Embedding was proposed by BIBREF8; a binary embedding is used to indicate the predicate indices in each sentence.
Positional Embedding encodes the order of the input word sequence. We follow BIBREF13 and use the sinusoidal time positional embedding, which is formulated as follows:
$PE_{(t, 2i)} = \sin \big ( t / 10000^{2i/d} \big ), \qquad PE_{(t, 2i+1)} = \cos \big ( t / 10000^{2i/d} \big ),$
where $t$ is the position, $i$ is the dimension index, and $d$ is the dimension of the model input embedding.
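A small NumPy sketch of this sinusoidal positional embedding (assuming an even embedding dimension $d$) is:

import numpy as np

def positional_embedding(max_len, d_model):
    # Sinusoidal time positional embedding from the formula above (even d_model assumed).
    pe = np.zeros((max_len, d_model))
    positions = np.arange(max_len)[:, None]                 # t
    dims = np.arange(0, d_model, 2)[None, :]                # 2i
    angle = positions / np.power(10000.0, dims / d_model)
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe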
<<</Input Layer>>>
<<<Encoder Layer>>>
The self-attention block is almost the same as the Transformer encoder proposed by BIBREF13. Specifically, the Transformer encoder contains a multi-head attention network followed by a feed-forward network (FFN). In this work, we exchange their order, so that the multi-head attention module is moved behind the FFN module, as Figure FIGREF5 shows.
FFN The FFN module consists of two affine layers with a ReLU activation in the middle. Formally, we have the following equation: $\mathrm {FFN}(x) = \max (0, xW_1 + b_1)W_2 + b_2$.
Multi-Head Attention The basic attention mechanism used in the multi-head attention function is called “Scaled Dot-Product Attention”, which is formulated as follows: $\mathrm {Attention}(Q, K, V) = \mathrm {softmax}\left(\frac{QK^{\top }}{\sqrt{d_k}}\right)V$,
where $Q$ is queries, $K$ is keys, and $V$ is values.
In the multi-head attention setting, it first maps the input matrix $X$ into queries, keys and values matrices by using $h$ different learned linear projections. Taking queries $Q$ as an example: $Q_i = XW^{Q}_i$,
where $0 \le i < h$. Keys and values use similar projections.
On each of these projections, we perform the scaled dot-product attention in parallel. These parallel output values are concatenated and once again projected into the final values, $\mathrm {MultiHead}(Q, K, V) = \mathrm {Concat}(\mathrm {head}_1, \dots , \mathrm {head}_h)W^{O}$ (Equation DISPLAY_FORM14),
where $\mathrm {head}_i = \mathrm {Attention}(Q_i, K_i, V_i)$.
More details about multi-head attention can be found in BIBREF13.
Add & Norm We employ a residual connection to each module, followed by a layer normalization BIBREF36 operation. The output of each module is formulated as $\mathrm {LayerNorm}(x + f(x))$,
where $f(x)$ is implemented by each above module.
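A minimal PyTorch-style sketch of one such reordered block (FFN first, then multi-head self-attention, each wrapped in a residual connection and layer normalization) is shown below; module and parameter names are ours, not from the paper's code:

```python
import torch.nn as nn

class ReorderedEncoderLayer(nn.Module):
    """One encoder layer with the FFN sub-module placed before multi-head attention."""

    def __init__(self, d_model: int, d_ff: int, n_heads: int, dropout: float = 0.2):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (seq_len, batch, d_model)
        x = self.norm1(x + self.ffn(x))        # FFN sub-layer comes first
        attn_out, _ = self.attn(x, x, x)       # then multi-head self-attention
        return self.norm2(x + attn_out)        # residual + layer normalization
```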
<<</Encoder Layer>>>
<<</The Basic Architecture>>>
<<<Representation of the Syntactic Dependencies>>>
<<<Dependency Head & Relation>>>
The most intuitive way to represent syntactic information is to use individual dependency relations directly, like dependency head and dependency relation label, denoted as Dep and Rel for short.
Except for LISA, where Dep is a one-hot matrix of dependency head word indices as described in SECREF25, in all other cases we use the corresponding head word itself. Rel is the dependency relation between the word and its syntactic head. We take both Dep and Rel as plain strings and map them into dense vectors in a similar way to word embeddings.
<<</Dependency Head & Relation>>>
<<<Dependency Path & Relation Path>>>
In order to preserve the structural information of dependency trees as much as possible, we take the syntactic path between candidate arguments and predicates in dependency trees as linguistic knowledge. Referring to BIBREF9, we use the Tree-based Position Feature (TPF) as Dependency Path (DepPath) and use the Shortest Dependency Path (SDP) as Relation Path (RelPath).
To generate DepPath & RelPath between a candidate argument and a predicate, we first find their lowest common ancestor. Then we get two sub-paths, one from the ancestor to the predicate and the other from the ancestor to the argument. For DepPath, we compute the distance from the ancestor to the predicate and to the argument respectively, and then concatenate the two distances with the separator `,'. For RelPath, we concatenate the labels appearing in each sub-path with the separator “_" to get two label paths, and then concatenate the two label paths with the separator `,'.
As shown in Figure FIGREF21, the lowest common ancestor of the predicate “鼓励 (encourage)" and the candidate argument “农业 (agriculture)" is “鼓励 (encourage)", so their DepPath is “2,0" and their RelPath is “COMP_COMP,".
We take both DepPath and RelPath as plain strings and map them into dense vectors in a similar way to Dep and Rel.
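The construction described above can be sketched as follows for a sentence given as head and relation maps; this is our own illustrative implementation, and the ordering of the two sub-paths follows the worked example above (argument side first):

```python
def path_to_root(idx, heads):
    """Return the token indices from idx up to the root (a token whose head is 0)."""
    path = [idx]
    while heads[idx] != 0:
        idx = heads[idx]
        path.append(idx)
    return path

def dep_and_rel_path(arg, prd, heads, rels):
    """heads/rels: dicts keyed by 1-indexed token id -> syntactic head id / relation label."""
    arg_up, prd_up = path_to_root(arg, heads), path_to_root(prd, heads)
    ancestor = next(n for n in arg_up if n in prd_up)        # lowest common ancestor
    arg_sub = arg_up[:arg_up.index(ancestor)]                 # argument-side sub-path
    prd_sub = prd_up[:prd_up.index(ancestor)]                 # predicate-side sub-path
    dep_path = f"{len(arg_sub)},{len(prd_sub)}"               # tree-based position feature
    rel_path = "_".join(rels[n] for n in arg_sub) + "," + \
               "_".join(rels[n] for n in prd_sub)             # shortest dependency path labels
    return dep_path, rel_path
```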
<<</Dependency Path & Relation Path>>>
<<</Representation of the Syntactic Dependencies>>>
<<<Incorporation Methods>>>
<<<Input Embedding Concatenation>>>
To incorporate syntactic knowledge, one simple method is to take it as part of the neural network input, denoted as Input. We represent the syntactic information with dense vectors, and concatenate it with other information like word embedding: $E = E_W \oplus E_S$,
where $\oplus $ means concatenation; $E_W$ means the original inputs of the neural model and $E_S$ means the embedding of syntax information, such as Dep/Rel or DepPath/RelPath.
<<</Input Embedding Concatenation>>>
<<<LISA>>>
BIBREF15 propose the linguistically-informed self-attention model (LISA for short) to combine SRL and dependency parsing as multi-task learning in a subtle way. Based on the multi-head self-attention model, LISA uses one attention head to predict the dependency results and it can also directly use pre-trained dependency head results to replace the attention matrix during testing.
Different from their multi-task learning, we make the replacement of one attention head during both training and testing. Instead of the original $softmax$ attention matrix, we use a one-hot matrix, generated by mapping the dependency head index of each word into a 0-1 vector of the sentence length, as Figure FIGREF27 shows.
We add the dependency relation information with $V$ in the replaced head so that we can make full use of the syntactic knowledge. The replaced attention head is formulated as follows:
where $M_D$ is the one-hot dependency head matrix and $E_R$ means the embedding of dependency relation information, such as Rel or RelPath.
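A small sketch of such a replaced head is given below. The one-hot dependency matrix takes the place of the softmax attention probabilities and the relation embedding is added to the values; the exact way the two are combined is our assumption based on the description above, not code from the paper:

```python
import torch
import torch.nn.functional as F

def lisa_replaced_head(dep_heads, values, rel_emb):
    """dep_heads: (batch, seq_len) long tensor of each word's in-sentence head index.
    values, rel_emb: (batch, seq_len, d_head) value vectors and Rel/RelPath embeddings."""
    seq_len = dep_heads.size(1)
    # One-hot matrix M_D replacing the softmax attention probabilities.
    m_d = F.one_hot(dep_heads, num_classes=seq_len).float()   # (batch, seq, seq)
    # Assumption: relation embeddings are simply added to the values V.
    return torch.bmm(m_d, values + rel_emb)                    # (batch, seq, d_head)
```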
<<</LISA>>>
<<<Relation-Aware Self-Attention>>>
The relation-aware self-attention model (RelAwe for brevity) incorporates external information into the attention. In this way, the model considers the pairwise relationships between input elements, which agrees well with the task of SRL, i.e., finding the semantic relations between the candidate argument and the predicate in one sentence.
Compared to the standard attention, in this paper, we add the dependency information into $Q$ and $V$ in each attention head, like equation (DISPLAY_FORM15) shows:
where $E_D$ and $E_R$ mean the syntactic dependency head and relation information respectively. For our multi-layer multi-head self-attention model, we make this change to each head of the first $N$ self-attention layers.
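A simplified sketch of one such head is shown below, assuming the dependency-head embedding is added to the queries and the relation embedding to the values; this reading of "add the dependency information into $Q$ and $V$" is our assumption, not necessarily the paper's exact equation (DISPLAY_FORM15):

```python
import math
import torch

def relation_aware_head(q, k, v, dep_emb, rel_emb):
    """q, k, v: (batch, seq, d_head); dep_emb, rel_emb: (batch, seq, d_head)
    embeddings of the Dep/DepPath and Rel/RelPath information respectively."""
    d_head = q.size(-1)
    # Assumption: dependency embeddings enter the score computation through Q.
    scores = torch.bmm(q + dep_emb, k.transpose(1, 2)) / math.sqrt(d_head)
    probs = torch.softmax(scores, dim=-1)
    # Assumption: relation embeddings are added to the values V.
    return torch.bmm(probs, v + rel_emb)
```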
<<</Relation-Aware Self-Attention>>>
<<</Incorporation Methods>>>
<<</Approaches>>>
<<<Experiment>>>
<<<Settings>>>
Datasets & Evaluation Metrics Our experiments are conducted on the CoNLL-2009 shared task dataset BIBREF20. We use the official evaluation script to compare the output of different system configurations, and report the labeled precision (P), labeled recall (R) and labeled f-score (F1) for the semantic dependencies.
Word Representations Most of our experiments are conducted in the closed setting without any external word embeddings or data resources other than those provided by the CoNLL-2009 datasets. In the closed setting, the word embedding is initialized from a Gaussian distribution with mean 0 and variance $\frac{1}{\sqrt{d}}$, where $d$ is the embedding size of each layer.
For the experiments with external resources in the open setting, we utilize 1) word embeddings pre-trained with GloVe BIBREF37 on the Gigaword corpus for Chinese and the published embeddings with 100 dimensions pre-trained on Wikipedia and Gigaword for English; and 2) ELMo BIBREF38 and BERT BIBREF39, two recently proposed effective deep contextualized word representations.
Other embeddings, i.e., the POS embedding, the linguistic knowledge embeddings, and so on, are initialized in the same way as the random word embedding, in both the closed and the open setting.
Syntactic Parsers In Table TABREF30, both Auto and Gold syntactic dependencies are provided by the dataset. Since the performance of the Auto is far behind the state-of-the-art BiaffineParser BIBREF40, we generate more dependency results by training BiaffineParser with different external knowledge, including pre-trained word embedding and BERT. Performance for different parsers is listed in Table TABREF30.
Parameters In this work, we set the word embedding size to $d_w=100$ and the POS embedding size to $d_t=50$. The predicate embedding size is set to $d_p=100$. The syntax-related embedding size varies with different configurations, as does the feature embedding size $d_f$.
To facilitate residual connections, all sub-layers in the model produce outputs of dimension $d_{model}=d_f+d_p$. The hidden dimension $d_{ff}=800$ is applied for all the experiments. We set the number of shared self-attention blocks $N=10$. The number of heads varies with $d_{model}$, but the dimension of each head is 25. Besides, LISA incorporates syntax knowledge in the 5th self-attention layer, while RelAwe incorporates it in the first 5 layers.
We apply a dropout strategy similar to BIBREF13, i.e., the attention and residual dropout values are $0.2$ and $0.3$ respectively. Dropout is also applied in the middle layer of the FFN with value $0.2$. We also employ label smoothing BIBREF41 of value $0.1$ during training.
We use softmax-cross-entropy as our loss function, and use the Adadelta optimizer BIBREF42 with $\epsilon =10^{-6}$ and $\rho =0.95$. For all experiments, we train the model $200,000$ steps with learning rate $lr=1.0$, and each batch has 4096 words.
All the hyper-parameters are tuned on the development set.
Configurations We use different abbreviations to represent the parsing results, syntactic dependency representations, and incorporation methods. All the system configurations in our experiments are listed in Table TABREF36.
<<</Settings>>>
<<<Quality of the Syntactic Dependencies>>>
We use the above-mentioned dependency trees of different quality for comparison, with Dep&Rel representation on our RelAwe model. In addition, we generate one more data AutoDel by deleting all the erroneous dependency heads and relations from the provided Auto data according to the gold heads and relations, and we do not replace them with any alternative heads and relations. We take this setting as another reference (along with GOLD) to indicate that erroneous syntax information may hurt the performance of the SRL model. We take the Gold as the upperbound reference of our task setting. Experiment results in Table TABREF37 demonstrate that, incorporating syntactic knowledge into the SRL model can achieve better performance and overall, the better the quality is, the better the SRL model performs. This is consistent with the previous study by BIBREF8 on the English dataset.
Closer observation reveals two additional interesting phenomena. Firstly, the SRL performance improvement is not proportional to the improvement in dependency quality. When switching syntactic dependency trees from Auto to Biaffine, SRL performance improves by 0.5%, although the syntactic dependency quality improves by about 8%. In contrast, the difference between Biaffine and BiaffineBert shows a more significant improvement of 1.5%. A possible reason is that BiaffineBert provides key dependency information which is missing in the other configurations. Secondly, the SRL performance gap between AutoDel and Auto is large even though they provide the same correct syntactic information. This may indicate that incorporating erroneous syntactic knowledge hurts the SRL model, and even providing more correct dependencies cannot make up for the harm (cf. BiaffineBert).
<<</Quality of the Syntactic Dependencies>>>
<<<External Resources>>>
Apart from the experiments with syntactic knowledge itself, we also compare different external resources to discover their relationship with the syntax, including pre-trained word embeddings, ELMo, and BERT. We conduct experiments with our best setting, the RelAwe model with DepPath & RelPath and the results are listed in Table TABREF45.
The plain word embeddings improve the results only slightly in such settings with syntactic information, while the more recently proposed ELMo and BERT can both boost the models further.
<<</External Resources>>>
<<<Final Results on the Chinese Test Data>>>
Based on the above experiments and analyses, we present the overall results of our model in this subsection. We train the three models (Input, LISA, and RelAwe) with their best settings without any external knowledge as Closed, and we take the same models with Bert as Open. The DepPath&RelPath from Gold without external knowledge serves as the Gold for reference. Since we have been focusing on the task of argument identification and labeling, for both Closed and Open, we follow BIBREF22 to use existing systems' predicate senses BIBREF43 to exclude them from comparison.
Table TABREF46 shows that our Open model achieves an f1-score more than 3 points higher than the state-of-the-art result, and RelAwe with DepPath&RelPath achieves the best results in both the Closed and Open settings. Notice that our best Closed model performs almost as well as the state-of-the-art model, even though the latter utilizes pre-trained word embeddings. Besides, the performance gap between the three models under the Open setting is very small, which indicates that the representation ability of BERT is powerful and may already contain rich syntactic information. Finally, the Gold result is much higher than those of the other models, indicating that there is still large room for improvement on this task.
<<</Final Results on the Chinese Test Data>>>
<<<Results on the English Data>>>
We also conduct several experiments on the English dataset to validate the effectiveness of our approaches on languages other than Chinese, and the results are in Table TABREF49. Although the configurations are not exactly the same as in the original papers, we tried our best to reproduce their methods on the CoNLL-2009 dataset for our comparison. Overall, the results are consistent with the Chinese experiments, although the improvement is not as large as for the Chinese counterparts. The RelAwe model with DepPath&RelPath still achieves the best performance. Applying our syntax-enhanced model to more languages will be an interesting research direction to work on in the future. [10]We reimplement LISA of BIBREF15 as LISA(Dep), and BIBREF9's best DepPath approach as Input(DepPath), so that we can compare with their work as fairly as possible. Other settings are the best configurations for the corresponding methods.
<<</Results on the English Data>>>
<<</Experiment>>>
<<<Conclusion and Future Work>>>
This paper investigates how to incorporate syntactic dependency information into semantic role labeling in depth. Firstly, we confirm that dependency trees of better quality are more helpful for the SRL task. Secondly, we present different ways to encode the trees and the experiments show that keeping more (correct) structural information during encoding improves the SRL performance. Thirdly, we compare three incorporation methods and discover that our proposed relation-aware self-attention-based model is the most effective one.
Although our experiments are primarily on the Chinese dataset, the approach is largely language independent. Apart from our tentative experiments on the English dataset, applying the approach to other languages will be an interesting research direction to work on in the future.
<<</Conclusion and Future Work>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nApproaches\nThe Basic Architecture\nInput Layer\nEncoder Layer\nRepresentation of the Syntactic Dependencies\nDependency Head & Relation\nDependency Path & Relation Path\nIncorporation Methods\nInput Embedding Concatenation\nLISA\nRelation-Aware Self-Attention\nExperiment\nSettings\nQuality of the Syntactic Dependencies\nExternal Resources\nFinal Results on the Chinese Test Data\nResults on the English Data\nConclusion and Future Work"
],
"type": "outline"
}
|
2003.07758
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Multi-modal Dense Video Captioning
<<<Abstract>>>
Dense video captioning is a task of localizing interesting events from an untrimmed video and producing textual description (captions) for each localized event. Most of the previous works in dense video captioning are solely based on visual information and completely ignore the audio track. However, audio, and speech, in particular, are vital cues for a human observer in understanding an environment. In this paper, we present a new dense video captioning approach that is able to utilize any number of modalities for event description. Specifically, we show how audio and speech modalities may improve a dense video captioning model. We apply automatic speech recognition (ASR) system to obtain a temporally aligned textual description of the speech (similar to subtitles) and treat it as a separate input alongside video frames and the corresponding audio track. We formulate the captioning task as a machine translation problem and utilize recently proposed Transformer architecture to convert multi-modal input data into textual descriptions. We demonstrate the performance of our model on ActivityNet Captions dataset. The ablation studies indicate a considerable contribution from audio and speech components suggesting that these modalities contain substantial complementary information to video frames. Furthermore, we provide an in-depth analysis of the ActivityNet Caption results by leveraging the category tags obtained from original YouTube videos. The program code of our method and evaluations will be made publicly available.
<<</Abstract>>>
<<<Introduction>>>
The substantial amount of freely available video material has brought up the need for automatic methods to summarize and compactly represent the essential content. One approach would be to produce a short video skim containing the most important video segments as proposed in the video summarization task BIBREF0. Alternatively, the video content could be described using natural language sentences. Such an approach can lead to a very compact and intuitive representation and is typically referred to as video captioning in the literature BIBREF1. However, producing a single description for an entire video might be impractical for long unconstrained footage. Instead, dense video captioning BIBREF2 aims, first, at temporally localizing events and, then, at producing natural language description for each of them. Fig. FIGREF1 illustrates dense video captions for an example video sequence.
Most recent works in dense video captioning formulate the captioning problem as a machine translation task, where the input is a set of features extracted from the video stream and the output is a natural language sentence. Thus, the captioning methods can be leveraged by recent developments in machine translation field, such as Transformer model BIBREF3. The main idea in the transformer is to utilise self-attention mechanism to model long-term dependencies in a sequence. We follow the recent work BIBREF4 and adopt the transformer architecture in our dense video captioning model.
The vast majority of previous works generate captions purely based on visual information BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, almost all videos include an audio track, which could provide vital cues for video understanding. In particular, what is being said by people in the video might make a crucial difference to the content description. For instance, in a scene where someone knocks on a door from the opposite side, we only see the door, but the audio helps us understand that somebody is behind it and wants to enter; a visual-only model therefore cannot produce a useful caption for such a scene. Likewise, other types of videos, such as instructional videos, sports videos, or video lectures, can be challenging for a visual-only captioning model.
In contrast, we build our model to utilize video frames, raw audio signal, and the speech content in the caption generation process. To this end, we deploy automatic speech recognition (ASR) system BIBREF11 to extract time-aligned captions of what is being said (similar to subtitles) and employ it alongside with video and audio representations in the transformer model.
The proposed model is assessed using the challenging ActivityNet Captions BIBREF2 benchmark dataset, where we obtain competitive results to the current state-of-the-art. The subsequent ablation studies indicate a substantial contribution from audio and speech signals. Moreover, we retrieve and perform breakdown analysis by utilizing previously unused video category tags provided with the original YouTube videos BIBREF12. The program code of our model and the evaluation approach will be made publicly available.
<<</Introduction>>>
<<<Related Work>>>
<<<Video Captioning>>>
Early works in video captioning applied rule-based models BIBREF13, BIBREF14, BIBREF15, where the idea was to identify a set of video objects and use them to fill predefined templates to generate a sentence. Later, the need for sentence templates was omitted by casting the captioning problem as a machine translation task BIBREF16. Following the success of neural models in translation systems BIBREF17, similar methods became widely popular in video captioning BIBREF18, BIBREF19, BIBREF20, BIBREF1, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. The rationale behind this approach is to train two Recurrent Neural Networks (RNNs) in an encoder-decoder fashion. Specifically, an encoder inputs a set of video features, accumulates its hidden state, which is passed to a decoder for producing a caption.
To further improve the performance of the captioning model, several methods have been proposed, including shared memory between visual and textual domains BIBREF26, BIBREF27, spatial and temporal attention BIBREF28, reinforcement learning BIBREF29, semantic tags BIBREF30, BIBREF31, other modalities BIBREF32, BIBREF33, BIBREF34, BIBREF35, and by producing a paragraph instead of one sentence BIBREF36, BIBREF1.
<<</Video Captioning>>>
<<<Dense Video Captioning>>>
Inspired by the idea of the dense image captioning task BIBREF37, Krishna BIBREF2 introduced the problem of dense video captioning and released a new dataset called ActivityNet Captions which spurred research in the field BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF38, BIBREF10. In particular, BIBREF5 adopted the idea of context-awareness BIBREF2 and generalized the temporal event proposal module to utilize both past and future contexts, as well as an attentive fusion to differentiate captions of highly overlapping events. Meanwhile, BIBREF6 used the concept of the Single Shot Detector (SSD) BIBREF39 to generate event proposals and reward maximization for better captioning.
In order to mitigate the intrinsic difficulties of RNNs to model long-term dependencies in a sequence, Zhou BIBREF4 tailored the recent idea of Transformer BIBREF3 for dense video captioning. In BIBREF7 the authors noticed that the captioning may benefit from interactions between objects in a video and developed recurrent higher-order interaction module to model these interactions. Xiong BIBREF8 noticed that many previous models produced redundant captions, and proposed to generate captions in a progressive manner, conditioned on the previous caption while applying paragraph- and sentence-level rewards. Similarly, a “bird-view” correction and two-level reward maximization for a more coherent story-telling have been employed in BIBREF9.
Since the human annotation of a video with temporal boundaries and captions for each of them can be laborious, several attempts have been made to address this issue BIBREF40, BIBREF41. Specifically, BIBREF40 employed the idea of cycle-consistency to translate a set of captions to a set of temporal events without any paired annotation, while BIBREF41 automatically collected a dataset of unparalleled scale by exploiting the structure of instructional videos.
The most similar work to our captioning model is BIBREF4 that also utilizes a version of the Transformer BIBREF3 architecture. However, their model is designed solely for visual features. Instead, we believe that dense video captioning may benefit from information from other modalities.
<<</Dense Video Captioning>>>
<<<Multi-modal Dense Video Captioning>>>
A few attempts have been made to include additional cues like audio and speech BIBREF38, BIBREF42, BIBREF43 for the dense video captioning task. Rahman BIBREF38 utilized the idea of cycle-consistency BIBREF40 to build a model with visual and audio inputs. However, due to weak supervision, the system did not reach high performance. Hessel BIBREF42 and Shi BIBREF43 employ a transformer architecture BIBREF3 to encode both video frames and speech segments to generate captions for instructional (cooking) videos. Yet, strong results on a dataset restricted to instructional videos are not conclusive, as the speech and the captions are already very close to each other in such videos BIBREF41.
In contrast to the mentioned multi-modal dense video captioning methods: (1) we present the importance of the speech and audio modalities on a domain-free dataset, (2) propose a multi-modal dense video captioning module (MDVC) which can be scaled to any number of modalities.
<<</Multi-modal Dense Video Captioning>>>
<<</Related Work>>>
<<<Proposed Framework>>>
In this section, we briefly outline the workflow of our method referred to as Multi-modal Dense Video Captioning (MDVC) which is shown in Fig. FIGREF5. The goal of our method is to temporally localize events on a video and to produce a textual description for each of them. To this end, we apply a two-stage approach.
Firstly, we obtain the temporal event locations. For this task, we employ the Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5. Bi-SST applies 3D Convolution network (C3D) BIBREF44 to video frames and extracts features that are passed to subsequent bi-directional LSTM BIBREF45 network. The LSTM accumulates visual cues over time and predicts confidence scores for each location to be start/end point of an event. Finally, a set of event proposals (start/end times) is obtained and passed to the second stage for caption generation.
Secondly, we generate the captions given a proposal. To produce inputs from the audio, visual, and speech modalities, we use Inflated 3D convolutions (I3D) BIBREF46 for the visual modality and the VGGish network BIBREF47 for the audio modality. To represent speech as text, we employ an external ASR system BIBREF11. To map this text into numerical form, we use a text embedding similar to the one used for caption encoding. The features are then fed to individual transformer models along with the words of the caption from the previous time steps. The output of the transformer is passed into a generator which fuses the outputs from all modalities and estimates a probability distribution over the word vocabulary. After sampling the next word, the process is repeated until a special end token is obtained. Fig. FIGREF1 illustrates an example modality and the corresponding event captions.
<<<Temporal Event Localization Module>>>
An event localization module is dedicated to generating a set of temporal regions which might contain an event. To achieve this, we employ the pre-trained Bidirectional Single-Stream Temporal action proposals network (Bi-SST) proposed in BIBREF5, as it has been shown to reach good performance in the proposal generation task.
Bi-SST inputs a sequence of $F$ RGB frames from a video $V = (x_1, x_2, \dots , x_F)$ and extracts a set of 4096-d features $V^{\prime } = (f_1, f_2, \dots , f_T)$ by applying a 3D Convolution network (C3D) on non-overlapping segments of size 16 with a stride of 64 frames. To reduce the feature dimension, only 500 principal components are selected using PCA.
To account for the video context, events are proposed during forward and backward passes on a video sequence $V^{\prime }$, and, then, the resulting scores are fused together to obtain the final proposal set. Specifically, during the forward pass, LSTM is used to accumulate the visual clues from the “past” context at each position $t$ which is treated as an ending point and produce confidence scores for each proposal.
Afterwards, a similar procedure is performed during the backward pass where the features $V^{\prime }$ are used in a reversed order. This empowers the model to have a sense of the “future” context in a video. In contrast to the forward pass, each position is treated as a starting point of the proposal. Finally, the confidence scores from both passes are fused by multiplication of corresponding scores for each proposal at each time step, and, then, filtered according to a predefined threshold.
Finally, we obtain a set of $N_V$ event proposals for caption generation $P_V=\lbrace p_j = (\text{start}_j, \text{end}_j, \text{score}_j)\rbrace _{j=1}^{N_V}$.
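The fusion and filtering step described above can be sketched as follows (a toy illustration of multiplying forward and backward confidences and thresholding; variable names and the threshold value are ours):

```python
import numpy as np

def fuse_proposals(fwd_scores, bwd_scores, threshold=0.5):
    """fwd_scores, bwd_scores: (num_proposals,) confidences from the two passes.
    Returns indices of proposals whose fused confidence passes the threshold, and the scores."""
    fused = fwd_scores * bwd_scores                 # element-wise multiplication of both passes
    keep = np.where(fused >= threshold)[0]          # filter by a predefined threshold
    return keep, fused[keep]
```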
<<</Temporal Event Localization Module>>>
<<<Captioning Module>>>
In this section, we explain the captioning module for an example modality, namely the visual one. Given a video $V$ and a set of proposals $P_V$ from the event localization module, the task of the captioning module is to provide a caption for each proposal in $P_V$. In order to extract features from a video $V$, we employ the I3D network BIBREF46 pre-trained on the Kinetics dataset, which produces 1024-d features. The gap between the extracted features and the generated captions is filled with the Transformer BIBREF3 architecture, which has been proven to effectively encode and decode information in a sequence-to-sequence setting.
<<<Feature Transformer>>>
As shown in Fig. FIGREF6, Feature Transformer architecture mainly consists of three blocks: an encoder, decoder, and generator. The encoder inputs a set of extracted features $ \mathbf {v}^j = (v_1, v_2, \dots , v_{T_j}) $ temporally corresponding to a proposal $p_j$ from $P_V$ and maps it to a sequence of internal representations $ \mathbf {z}^j = (z_1, z_2, \dots , z_{T_j}) $. The decoder is conditioned on the output of the encoder $\mathbf {z}^j$ and the embedding $ \mathbf {e}^j_{\leqslant t} = (e_1, e_2, \dots , e_t)$ of the words in a caption $ \mathbf {w}^j_{\leqslant t} = (w_1, w_2, \dots , w_t) $. It produces the representation $ \mathbf {g}^j_{\leqslant t} = (g_1, g_2, \dots , g_t) $ which, in turn, is used by the generator to model a distribution over a vocabulary for the next word $ p(w_{t+1}|\mathbf {g}^j_{\leqslant t}) $. The next word is selected greedily by obtaining the word with the highest probability until a special ending token is sampled. The captioning is initialized with a starting token. Both are added to the vocabulary.
Before providing an overview of the encoder, decoder, and generator, we present the notion of multi-headed attention, which acts as an essential part of the decoder and encoder blocks. The concept of multi-head attention, in turn, heavily relies on dot-product attention, which we describe next.
<<<Dot-product Attention>>>
The idea of the multi-headed attention rests on the scaled dot-product attention, which calculates a weighted sum of values. The weights are obtained by applying the softmax function to the dot-product of each pair of rows of queries and keys, scaled by $\frac{1}{\sqrt{D_k}}$. The scaling is done to prevent the softmax function from entering small-gradient regions BIBREF3. Formally, the scaled dot-product attention can be represented as $\mathrm {Attention}(Q, K, V) = \mathrm {softmax}\left(\frac{QK^{\top }}{\sqrt{D_k}}\right)V$,
where $Q, K, V $ are queries, keys, and values, respectively.
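In code, the scaled dot-product attention described above can be written as in the following minimal NumPy sketch (ours), ignoring masking and batching:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (T_q, D_k), K: (T_k, D_k), V: (T_k, D_v) -> (T_q, D_v)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (T_q, T_k) scaled dot-products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V                                      # weighted sum of values
```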
<<</Dot-product Attention>>>
<<<Multi-headed Attention>>>
The multi-headed attention block is used once in each encoder layer and twice in each decoder layer. The block consists of $H$ heads, which allows it to cooperatively account for information from several representation sub-spaces at every position while preserving the same computational complexity BIBREF3. In a transformer with dimension $D_T$, each head is defined in the following way: $\mathrm {head}_h(q, k, v) = \mathrm {Attention}(qW^{q}_h, kW^{k}_h, vW^{v}_h)$,
where $q, k, v$ are matrices which have $D_T$ columns and a number of rows depending on the position of the multi-headed block, yet with the same number of rows for $k$ and $v$ to make the calculation in (DISPLAY_FORM11) feasible. The $W^{q}_h, W^{k}_h, W^{v}_h \in \mathbb {R}^{D_T \times D_k}$ are trainable projection matrices that map $q, k, v$ from $D_T$ into $D_k= \frac{D_T}{H}$, asserting $D_T$ is a multiple of $H$. The multi-head attention, in turn, is the concatenation of all attention heads mapped back into $D_T$ by the trainable parameter matrix $W^o \in \mathbb {R}^{D_k \cdot H \times D_T}$: $\mathrm {MultiHead}(q, k, v) = \mathrm {Concat}(\mathrm {head}_1, \dots , \mathrm {head}_H)\,W^o$.
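Building on the single-head attention, a minimal multi-head wrapper could look like the sketch below; it is our own self-contained illustration, with the projection matrices passed in as plain lists of arrays:

```python
import numpy as np

def _attention(Q, K, V):
    """Scaled dot-product attention over full matrices (no masking)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def multi_head_attention(q, k, v, W_q, W_k, W_v, W_o):
    """q, k, v: (T, D_T); W_q/W_k/W_v: lists of H projection matrices of shape (D_T, D_k);
    W_o: (H * D_k, D_T) output projection mapping the concatenated heads back to D_T."""
    heads = [_attention(q @ Wq, k @ Wk, v @ Wv)
             for Wq, Wk, Wv in zip(W_q, W_k, W_v)]
    return np.concatenate(heads, axis=-1) @ W_o
```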
<<</Multi-headed Attention>>>
<<<Encoder>>>
The encoder consists of $ L $ layers. The first layer inputs a set of features $ \mathbf {v}^j $ and outputs an internal representation $ \mathbf {z}_1^j \in \mathbb {R}^{T_j \times D_T} $, while each of the following layers treats the output of the previous layer as its input. Each encoder layer $l$ consists of two sub-layers: multi-headed attention and a position-wise fully connected network, which are explained later in this section. The inputs to both sub-layers are normalized using layer normalization BIBREF48, and each sub-layer is surrounded by a residual connection BIBREF49 (see Fig. FIGREF6). Formally, the $l$-th encoder layer has the following definition
where $\text{FCN}$ is the position-wise fully connected network. Note, the multi-headed attention has identical queries, keys, and values ($ \overline{\mathbf {z}}_l^j $). Such multi-headed attention block is also referred to as self-multi-headed attention. It enables an encoder layer $l$ to account for the information from all states from the previous layer $ \mathbf {z}_{l-1}^j$. This property contrasts with the idea of RNN which accumulates only the information from the past positions.
<<</Encoder>>>
<<<Decoder>>>
Similarly to the encoder, the decoder has $ L $ layers. At a position $t$, the decoder inputs a set of embedded words $\mathbf {e}^j_{\leqslant t}$ together with the output of the encoder $\mathbf {z}^j$ and sends the output to the next layer, which is conditioned on this output and, again, the encoder output $\mathbf {z}^j$. Eventually, the decoder produces its internal representation $\mathbf {g}_{\leqslant t}^j \in \mathbb {R}^{t \times D_T}$. The decoder block is similar to the encoder but has an additional sub-layer that applies multi-headed attention on the encoder output and the output of its previous sub-layer. The decoder employs layer normalization and residual connections at all three sub-layers in the same fashion as the encoder. Specifically, the $l$-th decoder layer has the following form:
where $ \mathbf {z}^j $ is the encoder output. Note that, similarly to the encoder, (DISPLAY_FORM18) is a self-multi-headed attention function, while the second multi-headed attention block attends on both the encoder and decoder and is also referred to as encoder-decoder attention. This block enables each layer of the decoder to attend to all states of the encoder's output $ \mathbf {z}^j$.
<<</Decoder>>>
<<<Position-wise Fully-Connected Network>>>
The fully connected network is used in each layer of the encoder and the decoder. It is a simple two-layer neural network that takes as input $x$, the output of the multi-head attention block, and then projects each row (or position) of the input $x$ from $D_T$ onto $D_P$, $(D_P > D_T)$, and back. Formally, $\mathrm {FCN}(x) = \mathrm {ReLU}(x W_1 + b_1)W_2 + b_2$,
where $W_1 \in \mathbb {R}^{D_T \times D_P}$, $W_2 \in \mathbb {R}^{D_P \times D_T}$, and biases $b_1, b_2$ are trainable parameters, $\text{ReLU}$ is a rectified linear unit.
<<</Position-wise Fully-Connected Network>>>
<<<Generator>>>
At position $t$, the generator consumes the output of the decoder $\mathbf {g}^j_{\leqslant t}$ and produces a distribution over the vocabulary of words $p(w_{t+1}| \mathbf {g}^j_{\leqslant t})$. To obtain the distribution, the generator applies the softmax function to the output of a fully connected layer with a weight matrix $W_G \in \mathbb {R}^{D_T \times D_V}$, where $D_V$ is the vocabulary size. The word with the highest probability is selected as the next one.
<<</Generator>>>
<<<Input Embedding and Positional Encoding>>>
Since the representation of textual data is usually sparse due to a large vocabulary, the dimension of the input of a neural language model is reduced with an embedding into a dimension of a different size, namely $D_T$. Also, following BIBREF3, we multiply the embedding weights by $\sqrt{D_T}$. The position encoding is required to allow the transformer to have a sense of the order in an input sequence. We adopt the approach proposed for a transformer architecture, i. e. we add the output of the combination of sine and cosine functions to the embedded input sequence BIBREF3.
<<</Input Embedding and Positional Encoding>>>
<<</Feature Transformer>>>
<<</Captioning Module>>>
<<<Model Training>>>
As the training is conducted using mini-batches of size 28, the features in one modality must be of the same length so the features could be stacked into a tensor. In this regard, we pad the features and the embedded captions to match the size of the longest sample.
The model is trained by optimizing the Kullback–Leibler divergence loss which measures the “distance” between the ground truth and predicted distributions and averages the values for all words in a batch ignoring the masked tokens.
Since many words in the English language have several synonyms and human annotation may contain mistakes, we encourage the model to be less certain about its predictions and apply Label Smoothing BIBREF50 with smoothing parameter $\gamma $ to the ground truth labels. In particular, in the ground truth distribution over the vocabulary of size $D_V$, which is usually represented as a one-hot encoded vector, the value 1 at the target word is replaced with $1-\gamma $ while the rest of the values are filled with $\frac{\gamma }{D_V-1}$.
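The smoothed target distribution described above can be constructed as in the following small sketch (function name and the example value of gamma are ours):

```python
import numpy as np

def smoothed_targets(target_ids, vocab_size, gamma=0.1):
    """target_ids: (N,) ground-truth word indices. Returns (N, vocab_size) distributions
    with 1 - gamma on the target word and gamma / (vocab_size - 1) everywhere else."""
    target_ids = np.asarray(target_ids)
    dist = np.full((len(target_ids), vocab_size), gamma / (vocab_size - 1))
    dist[np.arange(len(target_ids)), target_ids] = 1.0 - gamma
    return dist
```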
During training, we exploit the teacher forcing technique, which uses the ground truth sequence up to position $t$ as the input to predict the next word instead of using the sequence of predictions. As we input the whole ground truth sequence at once and predict the next word at each position, we need to prevent the transformer from peeking at information from future positions, as it attends to all positions of the input. To this end, we apply masking inside the self-multi-headed attention block in the decoder for each position higher than $t-1$, following BIBREF3.
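Such masking is commonly implemented with a lower-triangular ("subsequent") mask; a minimal sketch of this standard construction is shown below (ours, not the paper's code):

```python
import torch

def subsequent_mask(size: int) -> torch.Tensor:
    """Boolean (size, size) mask: position t may attend only to positions <= t."""
    return torch.tril(torch.ones(size, size)).bool()

# Example usage inside self-attention: masked positions receive -inf before the softmax.
# scores = scores.masked_fill(~subsequent_mask(scores.size(-1)), float('-inf'))
```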
The details on the feature extraction and other implementation details are available in the supplementary materials.
<<</Model Training>>>
<<</Proposed Framework>>>
<<<Experiments>>>
<<<Dataset>>>
We perform our experiments using the ActivityNet Captions dataset BIBREF2, which is considered the standard benchmark for the dense video captioning task. The dataset contains approximately 20k videos from YouTube, split into 50/25/25 % parts for training, validation, and testing, respectively. Each video, on average, contains 3.65 temporally localized captions of around 13.65 words each and is about two minutes long. In addition, each video in the validation set is annotated twice by different annotators. We report all results using the validation set (no ground truth is provided for the test set).
The dataset itself is distributed as a collection of links to YouTube videos, some of which are no longer available. The authors provide pre-computed C3D features and frames at 5 fps, but these are not suitable for our experiments. At the time of writing, we found 9,167 (out of 10,009) training and 4,483 (out of 4,917) validation videos, which is roughly 91 % of the dataset. Out of these, 2,798 training and 1,374 validation videos (approx. 28 %) contain at least one speech segment. The speech content was obtained from the closed captions (CC) provided by the YouTube ASR system, which can be thought of as subtitles.
<<</Dataset>>>
<<<Metrics>>>
We are evaluating the performance of our model using BLEU@N BIBREF51 and METEOR BIBREF52. We regard the METEOR as our primary metric as it has been shown to be highly correlated with human judgement in a situation with a limited number of references (only one, in our case).
We employ the official evaluation script provided in BIBREF53. Thus, the metrics are calculated only if a proposed event and the ground truth location of a caption overlap by more than a specified temporal Intersection over Union (tIoU) threshold, and are zero otherwise. All metric values are averaged for every video and, then, for every tIoU threshold in $[0.3, 0.5, 0.7, 0.9]$. On validation, we average the resulting scores over both validation sets. For the learned proposal setting, we report our results on at most 100 proposals per video.
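The temporal IoU used for matching a proposal to a ground-truth segment can be computed as follows (our own helper, mirroring the description of the official script rather than reproducing it):

```python
def temporal_iou(pred, gt):
    """pred, gt: (start, end) tuples in seconds. Returns the temporal IoU in [0, 1]."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A caption's metric contributes only if temporal_iou(proposal, gt) exceeds the tIoU threshold.
```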
Notably, up to early 2017, the evaluation code had an issue which previously overestimated the performance of the algorithms in the learned proposal setting BIBREF9. Therefore, we report the results using the new evaluation code.
<<</Metrics>>>
<<<Comparison with Baseline Methods>>>
We compare our method with five related approaches, namely Krishna BIBREF2, Wang BIBREF5, Zhou BIBREF4, Li BIBREF6, and Rahman BIBREF38. We take the performance values from the original papers, except for BIBREF6, and BIBREF4, which are taken from BIBREF9 due to the evaluation issue (see Sec. SECREF27).
The lack of access to the full ActivityNet Captions dataset makes a strictly fair comparison difficult, as we have fewer training and validation videos. Nevertheless, we present our results in two set-ups: 1) the full validation set with random input features for missing entries, and 2) videos with all three modalities present (video, audio, and speech). The first one is chosen to indicate the lower bound of our performance with the full dataset, whereas the second one (referred to as “no missings”) concentrates on the multi-modal setup, which is the main contribution of our work.
The obtained results are presented in Tab. TABREF25. Our method (MDVC) achieves comparable or better performance, even though we have access to a smaller training set and 9 % of the validation videos are missing (replaced with random input features). Furthermore, if all three modalities are present, our method outperforms all baseline approaches in the case of both GT and learned proposals. Notably, we outperform BIBREF4, which is also based on the transformer architecture and accounts for optical flow. This shows the superior performance of our captioning module, even though it is trained on a smaller amount of data.
<<</Comparison with Baseline Methods>>>
<<<Ablation Studies>>>
In this section, we perform an ablation analysis highlighting the effect of different design choices of our method. For all experiments, we use the full unfiltered ActivityNet Captions validation set with ground truth event proposals.
Firstly, we assess the selection of the model architecture. To this end, we implemented a version of our method where the transformer was replaced by a Bidirectional Recurrent Neural Network with Gated Recurrent Units with attention (Bi-GRU), proposed in BIBREF54. To distil the effect of the change in architecture, the results are shown for visual-only models. Both Bi-GRU and the transformer input I3D features extracted from 64 RGB and optical flow frames (the final model inputs 24 frames). Finally, we set a lower bound for the feature performance by training a transformer model with random video features. Tab. TABREF32 shows the comparison. To conclude, we observe that the feature transformer-based model is not only lighter but also achieves better performance in the dense video captioning task. Moreover, both methods clearly surpass the random baseline.
Secondly, we evaluate the contribution of different modalities in our framework. Tab. TABREF33 contains the results for different modality configurations as well as for two feature fusion approaches: averaging the output probabilities, and concatenating the outputs of all modalities and applying two fully connected (FC) layers on top. We observe that the audio-only model has the worst performance, followed by the visual-only model and the combination of the two. Moreover, concatenation with FC layers results in better performance than averaging. To further assess whether the performance gain is due to the additional modalities or to the extra capacity of the FC layers, we trained a visual-only model with two additional FC layers. The results indicate that such a configuration performs worse than any bi-modal setup. Overall, we conclude that the final model with all three modalities performs best among all tested set-ups, which highlights the importance of the multi-modal setting in the dense video captioning task.
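The two fusion strategies compared above can be sketched as follows (probability averaging versus concatenation followed by two FC layers); dimensions, class names, and the ReLU placement are illustrative assumptions on our part:

```python
import torch
import torch.nn as nn

def average_fusion(probs_per_modality):
    """probs_per_modality: list of (batch, vocab) word distributions, one per modality."""
    return torch.stack(probs_per_modality).mean(dim=0)

class ConcatFCFusion(nn.Module):
    """Concatenate per-modality generator inputs and apply two FC layers."""
    def __init__(self, d_modality: int, n_modalities: int, vocab_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_modality * n_modalities, d_modality), nn.ReLU(),
            nn.Linear(d_modality, vocab_size))

    def forward(self, feats_per_modality):
        # feats_per_modality: list of (batch, d_modality) tensors, one per modality.
        return torch.softmax(self.net(torch.cat(feats_per_modality, dim=-1)), dim=-1)
```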
Fig. FIGREF29 shows a qualitative comparison between different models in our ablation study. Moreover, we provide the corresponding captions from the best performing baseline method (Zhou BIBREF4). We noticed the following pattern: the audio modality produces coherent sentences and captures the concept of speaking in the video. However, there are clear mistakes in the caption content. In contrast, the model with all three modalities manages to capture the man who speaks to the camera, which is also present in the ground truth. Both the visual-only MDVC and Zhou struggle to describe the audio details.
Finally, to test whether our model improves the performance in general rather than in a specific video category, we report the comparison of the different versions of MDVC per category. To this end, we retrieve the category labels from the YouTubeAPI BIBREF12 (US region) for every available ActivityNet Captions validation video. These labels are given by the user when uploading the video and roughly represent the video content type. The comparison is shown in Fig. FIGREF31. The results imply a consistent gain in performance within each category except for categories: “Film & Animation” and “Travel & Events” which might be explained by the lack of correspondence between visual and audio tracks. Specifically, the video might be accompanied by music, e. g. promotion of a resort. Also, “Film & Animation” contains cartoon-like movies which might have a realistic soundtrack while the visual track is goofy.
<<</Ablation Studies>>>
<<</Experiments>>>
<<<Conclusion>>>
The use of different modalities in computer vision is still an underrepresented topic and, we believe, deserves more attention. In this work, we introduced a multi-modal dense video captioning module (MDVC) and showed the importance of the audio and speech modalities for the dense video captioning task. Specifically, MDVC is based on the transformer architecture, which encodes the feature representation of each modality for a specific event proposal and produces a caption using the information from these modalities. The experiments, conducted on the ActivityNet Captions dataset, show the superior performance of our captioning module compared to the visual-only models in the existing literature. An extensive ablation study verifies this conclusion. We believe that our results firmly indicate that future work in video captioning should utilize multi-modal input.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nVideo Captioning\nDense Video Captioning\nMulti-modal Dense Video Captioning\nProposed Framework\nTemporal Event Localization Module\nCaptioning Module\nFeature Transformer\nDot-product Attention\nMulti-headed Attention\nEncoder\nDecoder\nPosition-wise Fully-Connected Network\nGenerator\nInput Embedding and Positional Encoding\nModel Training\nExperiments\nDataset\nMetrics\nComparison with Baseline Methods\nAblation Studies\nConclusion"
],
"type": "outline"
}
|
1911.03584
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
On the Relationship between Self-Attention and Convolutional Layers
<<<Abstract>>>
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as powerful as any convolutional layer. Our numerical experiments then show that the phenomenon also occurs in practice, corroborating our analysis. Our code is publicly available.
<<</Abstract>>>
<<<Introduction>>>
Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer BIBREF1. Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 BIBREF2, BERT BIBREF3 and Transformer-XL BIBREF4, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks BIBREF5 and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies BIBREF6. With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest.
Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention BIBREF7 or non-local relationships across the image BIBREF8. More recently, BIBREF9 augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, BIBREF0 noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy.
These findings raise the question: do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transformers have the capacity to simulate any function—including a CNN. Indeed, BIBREF10 showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open.
<<<Contributions.>>>
In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers:
From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers.
Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. Our insights lead to a relative positional encoding, that we refer to as quadratic encoding, that is very efficient in terms of size.
Our experiments show that the first few layers of attention-only architectures BIBREF0 do learn to attend to grid-like patterns around each query pixel, similar to our theoretical construction.
Strikingly, this behavior is confirmed not only for our quadratic encoding, but also for the relative encoding that is learned during training. Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classification network. For deeper layers, on the other hand, long-range as well as horizontally-symmetric inter-dependencies become more relevant.
For reproducibility purposes, our code is publicly available on GitHub.
<<</Contributions.>>>
<<</Introduction>>>
<<<Background on Attention Mechanisms for Vision>>>
We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings.
<<<The Multi-Head Self-Attention Layer>>>
Let $X \in \mathbb {R}^{T\times D_{\textit {in}}}$ be an input matrix consisting of $T$ tokens of ${D_{\textit {in}}}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit {in}}$ to $D_{\textit {out}}$ dimensions as follows: $\text{Self-Attention}(X)_{t,:} := \mathrm {softmax}(A_{t,:})\, X W_{\!\textit {val}}$, where we refer to the elements of the $T \times T$ matrix $A := X W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } X^{\top }$ as attention scores and the softmax output as attention probabilities. The layer is parametrized by a query matrix $W_{\!\textit {qry}}\in \mathbb {R}^{D_{\textit {in}} \times D_{k}}$, a key matrix $W_{\!\textit {key}}\in \mathbb {R}^{D_{\textit {in}} \times D_{k}}$ and a value matrix $W_{\!\textit {val}}\in \mathbb {R}^{D_{\textit {in}} \times D_{\textit {out}}}$. For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases where we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention: $A := (X + P)\, W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } (X + P)^{\top }$, where $P \in \mathbb {R}^{T \times D_{\textit {in}}}$ contains the embedding vectors for each position. More generally, $P$ may be substituted by any function that returns a vector representation of the position.
It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the outputs of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit {out}}$ as follows: $\text{MHSA}(X) := \operatorname {concat}_{h \in [N_h]}\big [\text{Self-Attention}_h(X)\big ]\, W_{\!\textit {out}} + b_{\textit {out}}$, and two new parameters are introduced: the projection matrix $W_{\!\textit {out}} \in \mathbb {R}^{N_h D_h \times D_{\textit {out}}}$ and a bias term $b_{\textit {out}}\in \mathbb {R}^{D_{\textit {out}}}$.
<<</The Multi-Head Self-Attention Layer>>>
<<<Attention for Images>>>
Convolutional layers are the de facto choice for building neural networks that operate on images. We recall that, given an image tensor $X~\in ~\mathbb {R}^{W\times H \times D_{\textit {in}}}$ of width $W$, height $H$ and $D_{\textit {in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by $\text{Conv}(X)_{i,j,:} := \sum _{(\delta _1, \delta _2) \in \Delta \!\!\!\!\Delta _K} W_{\delta _1, \delta _2,:,:}\, X_{i+\delta _1, j+\delta _2, :} + b$, where $W$ is the $K \times K \times D_{\textit {out}} \times D_{\textit {in}}$ weight tensor, $b \in \mathbb {R}^{D_{\textit {out}}}$ is the bias vector and the set $\Delta \!\!\!\!\Delta _K := \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$
contains all possible shifts appearing when convolving the image with a $K\times K$ kernel.
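The formula above corresponds to the following deliberately naive shift-and-sum implementation, which we include only to make the indexing explicit (our own sketch; no padding or stride handling, shifts falling outside the image are skipped):

```python
import numpy as np

def conv2d_as_shifts(X, W, b):
    """X: (H, W_img, D_in); W: (K, K, D_out, D_in); b: (D_out,). Returns (H, W_img, D_out)."""
    K = W.shape[0]
    half = K // 2
    H, W_img, _ = X.shape
    out = np.tile(b.astype(float), (H, W_img, 1))          # start from the bias at every pixel
    for d1 in range(-half, half + 1):                      # loop over all shifts in the K x K kernel
        for d2 in range(-half, half + 1):
            for i in range(H):
                for j in range(W_img):
                    if 0 <= i + d1 < H and 0 <= j + d2 < W_img:
                        # W[d1+half, d2+half]: (D_out, D_in), X[i+d1, j+d2]: (D_in,)
                        out[i, j] += W[d1 + half, d2 + half] @ X[i + d1, j + d2]
    return out
```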
In the following, we review how self-attention can be adapted from 1D sequences to images.
With images, rather than tokens, we have query and key pixels $q, k \in [W] \times [H]$. Accordingly, the input is a tensor $X$ of dimension $W \times H \times D_{\textit {in}}$ and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $p = (i,j)$, we write $X_{p,:}$ and $A_{p,:}$ to mean $X_{i, j,:}$ and $A_{i, j,:,:}$, respectively. With this notation in place, the multi-head self-attention layer output at pixel $q$ can be expressed as follows: $\text{Self-Attention}(X)_{q,:} = \sum _{k} \mathrm {softmax}(A_{q,:})_{k}\, X_{k,:}\, W_{\!\textit {val}}$, and accordingly for the multi-head case.
<<</Attention for Images>>>
<<<Positional Encoding for Images>>>
There are two types of positional encoding that have been used in transformer-based architectures: the absolute and the relative encoding (see also tab:relworkattention in the Appendix).
With absolute encodings, a (fixed or learned) vector $P_{p,:}$ is assigned to each pixel $p$. The computation of the attention scores we saw in eq:attcoeff can then be decomposed as follows: $A_{q,k}^{\mathrm {abs}} = (X_{q,:} + P_{q,:})\, W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } (X_{k,:} + P_{k,:})^{\top }$
$= X_{q,:} W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } X_{k,:}^{\top } + X_{q,:} W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } P_{k,:}^{\top } + P_{q,:} W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } X_{k,:}^{\top } + P_{q,:} W_{\!\textit {qry}} W_{\!\textit {key}}^{\top } P_{k,:}^{\top }$, where $q$ and $k$ correspond to the query and key pixels, respectively.
The relative positional encoding was introduced by BIBREF4. The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: $\mathbf{A}_{\mathbf{q},\mathbf{k}}^{\textit{rel}} := \mathsf{X}_{\mathbf{q},:}^{\top } \mathbf{W}_{\!\textit{qry}}^{\top } \mathbf{W}_{\!\textit{key}} \mathsf{X}_{\mathbf{k},:} + \mathsf{X}_{\mathbf{q},:}^{\top } \mathbf{W}_{\!\textit{qry}}^{\top } \widehat{\mathbf{W}}_{\!\textit{key}} \mathbf{r}_{{\delta }} + \mathbf{u}^{\top } \mathbf{W}_{\!\textit{key}} \mathsf{X}_{\mathbf{k},:} + \mathbf{v}^{\top } \widehat{\mathbf{W}}_{\!\textit{key}} \mathbf{r}_{{\delta }}$. In this manner, the attention scores only depend on the shift ${{\delta }}:= \mathbf{k} - \mathbf{q}$. Above, the learnable vectors $\mathbf{u}$ and $\mathbf{v}$ are unique for each head, whereas for every shift ${{\delta }}$ the relative positional encoding $\mathbf{r}_{{\delta }} \in \mathbb{R}^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $\mathbf{W}_{\!\textit{key}}$ pertain to the input and $\widehat{\mathbf{W}}_{\!\textit{key}}$ to the relative position of pixels.
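A literal, single-head sketch of the four-term relative score above is given next (our illustration; here W_qry and W_key are taken as D_in x D_k projections and the hatted key matrix as a D_p x D_k projection, a rearranged but equivalent convention).

import torch

W_, H_, D_in, D_k, D_p = 4, 4, 6, 5, 3
X = torch.randn(W_ * H_, D_in)                     # flattened pixel representations
W_qry, W_key = torch.randn(D_in, D_k), torch.randn(D_in, D_k)
W_key_hat = torch.randn(D_p, D_k)                  # applied to the relative encodings r_delta
u, v = torch.randn(D_k), torch.randn(D_k)
# one learned vector r_delta for every possible 2D shift between a key and a query pixel
r = {(d1, d2): torch.randn(D_p)
     for d1 in range(-W_ + 1, W_) for d2 in range(-H_ + 1, H_)}

def rel_score(q, k):
    # the four terms: content-content, content-position, u-content, v-position
    delta = (k[0] - q[0], k[1] - q[1])
    xq, xk, rd = X[q[0] * H_ + q[1]], X[k[0] * H_ + k[1]], r[delta]
    qv = xq @ W_qry
    return ((qv * (xk @ W_key)).sum() + (qv * (rd @ W_key_hat)).sum()
            + (u * (xk @ W_key)).sum() + (v * (rd @ W_key_hat)).sum())

print(rel_score((1, 2), (3, 0)))                   # one relative attention score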
<<</Positional Encoding for Images>>>
<<</Background on Attention Mechanisms for Vision>>>
<<<Self-Attention as a Convolutional Layer>>>
This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following:
Theorem 1 A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension $D_{\textit {out}}$ and a relative positional encoding of dimension $D_p \ge 3$ can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$ and $\min (D_h, D_{\textit {out}})$ output channels.
The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta \!\!\!\!\Delta _K = \lbrace -\lfloor K/2 \rfloor , \dots , \lfloor K/2 \rfloor \rbrace ^2$ of all pixel shifts in a $K\times K$ kernel. The exact condition can be found in the statement of Lemma UNKREF15.
Then, Lemma UNKREF17 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: $\mathbf{v}^{(h)} := -\alpha ^{(h)}\, (1, -2{{\Delta }}^{(h)}_1, -2{{\Delta }}^{(h)}_2)$, $\mathbf{r}_{{\delta }} := (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2)$, $\mathbf{W}_{\!\textit{qry}} = \mathbf{W}_{\!\textit{key}} := {0}$, $\widehat{\mathbf{W}}_{\!\textit{key}} := \mathbf{I}$. The learned parameters ${{\Delta }}^{(h)} = ({{\Delta }}^{(h)}_1, {{\Delta }}^{(h)}_2)$ and $\alpha ^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, ${{\delta }}= ({{\delta }}_1, {{\delta }}_2)$ is fixed and expresses the relative shift between query and key pixels.
It is important to stress that the above encoding is not the only one for which the conditions of Lemma UNKREF15 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). Though we lack a formal proof, we conjecture that every encoding that satisfies Lemma UNKREF15 should have at least three dimensions.
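As a toy sanity check of the quadratic encoding (our own illustration, not the released code), one can build the scores from the definitions above and observe that, as alpha grows, the attention probabilities concentrate on the key pixel at relative shift Delta from the query, exactly as required by Lemma UNKREF15.

import numpy as np

W_, H_ = 7, 7
q = np.array([3, 3])                       # query pixel
Delta = np.array([1, -1])                  # desired center of attention (relative shift)

def attention_probs(alpha):
    v = -alpha * np.array([1.0, -2 * Delta[0], -2 * Delta[1]])
    scores = np.empty((W_, H_))
    for i in range(W_):
        for j in range(H_):
            d = np.array([i, j]) - q                   # delta = k - q
            r = np.array([d @ d, d[0], d[1]])          # r_delta = (||delta||^2, delta_1, delta_2)
            scores[i, j] = v @ r                       # = -alpha * (||delta - Delta||^2 - ||Delta||^2)
    p = np.exp(scores - scores.max())
    return p / p.sum()

for alpha in [0.5, 2.0, 10.0]:
    p = attention_probs(alpha)
    print(alpha, p.max().round(3), np.unravel_index(p.argmax(), p.shape))
# the argmax is always q + Delta = (4, 2); its probability tends to 1 as alpha grows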
<<<Remark for the 1D case.>>>
Convolutional layers acting on sequences are commonly used in the literature for text BIBREF11, as well as audio BIBREF12 and time series BIBREF13. Theorem UNKREF11 can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min (D_h, D_{\textit {out}})$ output channels using a positional encoding of dimension $D_p \ge 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so.
<<</Remark for the 1D case.>>>
<<<Proof of Main Theorem>>>
The proof follows directly from Lemmas UNKREF15 and UNKREF17 stated below:
Lemma 1 Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \ge D_{\textit{out}}$ and let $f:[N_h]\rightarrow {\Delta \!\!\!\!\Delta }_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: $\operatorname{softmax}(\mathbf{A}^{(h)}_{\mathbf{q},:})_{\mathbf{k}} = \left\lbrace \begin{array}{ll} 1 & \text{if } f(h) = \mathbf{q} - \mathbf{k} \\ 0 & \text{otherwise.} \end{array}\right.$ Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit{out}}$ output channels, there exists $\lbrace \mathbf{W}_{\!\textit{val}}^{(h)}\rbrace _{h \in [N_h]}$ such that $ \operatorname{MHSA}(\mathsf{X}) = \operatorname{Conv}(\mathsf{X}) $ for every $\mathsf{X} \in \mathbb{R}^{W \times H \times D_{\textit{in}}}$.
Our first step will be to rework the expression of the Multi-Head Self-Attention operator from (SECREF6) and (SECREF6) such that the effect of the multiple heads becomes more transparent: $\operatorname{MHSA}(\mathbf{X}) = \mathbf{b}_{\textit{out}} + \sum _{h \in [N_h]} \operatorname{softmax}(\mathbf{A}^{(h)})\, \mathbf{X} \underbrace{\mathbf{W}_{\!\textit{val}}^{(h)}\, \mathbf{W}_{\textit{out}}[(h-1)D_h + 1:h D_h +1]}_{\mathbf{W}^{(h)}}$. Note that each head's value matrix $\mathbf{W}_{\!\textit{val}}^{(h)} \in \mathbb{R}^{D_{\textit{in}} \times D_{h}}$ and each block of the projection matrix $\mathbf{W}_{\textit{out}}$ of dimension $D_h \times D_{\textit{out}}$ are learned. Assuming that $D_h \ge D_{\textit{out}}$, we can replace each pair of matrices by a learned matrix $\mathbf{W}^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention: $\operatorname{MHSA}(\mathsf{X})_{\mathbf{q},:} = \sum _{h \in [N_h]} \Big ( \sum _{\mathbf{k}} \operatorname{softmax}(\mathbf{A}^{(h)}_{\mathbf{q},:})_{\mathbf{k}}\, \mathsf{X}_{\mathbf{k},:} \Big ) \mathbf{W}^{(h)} + \mathbf{b}_{\textit{out}}$. Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when $\mathbf{k} = \mathbf{q} - f(h)$ and zero otherwise. The layer's output at pixel $\mathbf{q}$ is thus equal to $\operatorname{MHSA}(\mathsf{X})_{\mathbf{q},:} = \sum _{h \in [N_h]} \mathsf{X}_{\mathbf{q} - f(h),:}\, \mathbf{W}^{(h)} + \mathbf{b}_{\textit{out}}$. For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq. SECREF8: there is a one to one mapping (implied by map $f$) between the matrices $\mathbf{W}^{(h)}$ for $h = [N_h]$ and the matrices $\mathsf{W}_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$
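The construction in the proof can also be checked numerically. The sketch below (illustrative only) assigns one head to each shift of a 3x3 kernel, gives each head the one-hot attention required by Lemma UNKREF15, sets W^(h) to the matching kernel slice, and verifies that the resulting output equals the convolution on interior pixels (where no zero padding is involved).

import numpy as np

rng = np.random.default_rng(0)
Wd, Hd, D_in, D_out, K = 6, 6, 2, 3, 3
X = rng.normal(size=(Wd, Hd, D_in))
Kernel = rng.normal(size=(K, K, D_in, D_out))     # W_{k1, k2, :, :} of the target convolution

shifts = [(d1, d2) for d1 in (-1, 0, 1) for d2 in (-1, 0, 1)]   # f(h) enumerates the K*K shifts

def mhsa_as_conv(q):
    # head h attends only to the pixel k = q - f(h); its kernel slice sits at index K//2 - f(h)
    out = np.zeros(D_out)
    for d1, d2 in shifts:
        k = (q[0] - d1, q[1] - d2)
        W_h = Kernel[K // 2 - d1, K // 2 - d2]
        out += X[k] @ W_h
    return out

def conv(q):
    out = np.zeros(D_out)
    for a in range(K):
        for b in range(K):
            out += X[q[0] + a - K // 2, q[1] + b - K // 2] @ Kernel[a, b]
    return out

ok = all(np.allclose(mhsa_as_conv((i, j)), conv((i, j)))
         for i in range(1, Wd - 1) for j in range(1, Hd - 1))
print(ok)   # True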
<<<Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@.>>>
It is frequent in transformer-based architectures to set $D_h~=~D_{\textit{out}}/N_h$, hence $D_h < D_{\textit{out}}$. In that case, $\mathbf{W}^{(h)}$ can be seen to be of rank $D_{\textit{out}} - D_h$, which does not suffice to express every convolutional layer with $D_{\textit{out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit{out}}$ outputs of $\operatorname{MHSA}(\mathsf{X})$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min (D_h, D_{\textit{out}})$. In practice, we advise to concatenate heads of dimension $D_h = D_{\textit{out}}$ instead of splitting the $D_{\textit{out}}$ dimensions among heads to have exact re-parametrization and no “unused” channels.
Lemma 2 There exists a relative encoding scheme $\lbrace \mathbf{r}_{{\delta }}\in \mathbb{R}^{D_p}\rbrace _{{{\delta }}\in \mathbb{Z}^2}$ with $D_p \ge 3$ and parameters $\mathbf{W}_{\!\textit{qry}}, \mathbf{W}_{\!\textit{key}}, \widehat{\mathbf{W}}_{\!\textit{key}}, \mathbf{u}$ with $D_p \le D_k$ such that, for every ${{\Delta }}\in \Delta \!\!\!\!\Delta _K$ there exists some vector $\mathbf{v}$ (conditioned on ${{\Delta }}$) yielding $\operatorname{softmax}(\mathbf{A}_{\mathbf{q},:})_{\mathbf{k}} = 1$ if $\mathbf{k} - \mathbf{q} = {{\Delta }}$ and zero, otherwise.
We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities.
As the attention probabilities are independent of the input tensor $\mathsf{X}$, we set $\mathbf{W}_{\!\textit{key}}=\mathbf{W}_{\!\textit{qry}}={0}$ which leaves only the last term of eq:attrel. Setting $\widehat{\mathbf{W}}_{\!\textit{key}}\in \mathbb{R}^{D_k \times D_p}$ to the identity matrix (with appropriate row padding) yields $\mathbf{A}_{\mathbf{q},\mathbf{k}} = \mathbf{v}^{\top } \mathbf{r}_{{\delta }}$ where ${{\delta }}:= \mathbf{k} - \mathbf{q}$. Above, we have assumed that $D_p \le D_k$ such that no information from $\mathbf{r}_{{\delta }}$ is lost.
Now, suppose that we could write $\mathbf{A}_{\mathbf{q},\mathbf{k}} = -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c)$ for some constant $c$. In the above expression, the maximum attention score over $\mathbf{A}_{\mathbf{q},:}$ is $-\alpha c$ and it is reached for $\mathbf{A}_{\mathbf{q},\mathbf{k}}$ with ${{\delta }}= {{\Delta }}$. On the other hand, the $\alpha $ coefficient can be used to scale arbitrarily the difference between $\mathbf{A}_{\mathbf{q},{{\Delta }}}$ and the other attention scores.
In this way, for ${{\delta }}= {{\Delta }}$, we have $\lim _{\alpha \rightarrow \infty } \operatorname{softmax}(\mathbf{A}_{\mathbf{q},:})_{\mathbf{k}} = \lim _{\alpha \rightarrow \infty } \frac{e^{-\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c)}}{\sum _{\mathbf{k}^{\prime }} e^{-\alpha (\Vert (\mathbf{k}^{\prime } - \mathbf{q}) - {{\Delta }}\Vert ^2 + c)}} = \lim _{\alpha \rightarrow \infty } \frac{e^{-\alpha \Vert {{\delta }}- {{\Delta }}\Vert ^2}\, e^{-\alpha c}}{\sum _{\mathbf{k}^{\prime }} e^{-\alpha \Vert (\mathbf{k}^{\prime } - \mathbf{q}) - {{\Delta }}\Vert ^2}\, e^{-\alpha c}} = \lim _{\alpha \rightarrow \infty } \frac{1}{1 + \sum _{\mathbf{k}^{\prime } \ne \mathbf{k}} e^{-\alpha \Vert (\mathbf{k}^{\prime } - \mathbf{q}) - {{\Delta }}\Vert ^2}} = 1$, and for ${{\delta }}\ne {{\Delta }}$, the equation becomes $ \lim _{\alpha \rightarrow \infty } \operatorname{softmax}(\mathbf{A}_{\mathbf{q},:})_{\mathbf{k}} = 0, $ exactly as needed to satisfy the lemma statement.
What remains is to prove that there exist $\mathbf{v}$ and $\lbrace \mathbf{r}_{{\delta }}\rbrace _{{{\delta }}\in \mathbb{Z}^2}$ for which eq:isoattdecomposed holds. Expanding the rhs of the equation, we have $ -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 + c) = -\alpha ( \Vert {{\delta }}\Vert ^2 + \Vert {{\Delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle + c )\,. $ Now if we set $ \mathbf{v} = -\alpha \, (1, -2{{\Delta }}_1, -2{{\Delta }}_2) $ and $ \mathbf{r}_{{\delta }}= (\Vert {{\delta }}\Vert ^2 , {{\delta }}_1, {{\delta }}_2), $ then $\mathbf{v}^{\top } \mathbf{r}_{{\delta }} = -\alpha (\Vert {{\delta }}\Vert ^2 - 2{{\delta }}_1 {{\Delta }}_1 - 2{{\delta }}_2 {{\Delta }}_2) = -\alpha (\Vert {{\delta }}\Vert ^2 - 2\langle {{\delta }}, {{\Delta }}\rangle ) = -\alpha (\Vert {{\delta }}- {{\Delta }}\Vert ^2 - \Vert {{\Delta }}\Vert ^2),$
which matches eq:isoattdecomposed with $c = -\Vert {{\Delta }}\Vert ^2$ and the proof is concluded.
<<</Remark about @!START@$D_h$@!END@ and @!START@$D_{\textit {out}}$@!END@.>>>
<<</Proof of Main Theorem>>>
<<</Self-Attention as a Convolutional Layer>>>
<<<Experiments>>>
The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers, when being trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that for both cases, the attention probabilities learned tend to respect the conditions of Lemma UNKREF15, corroborating our hypothesis.
<<<Implementation Details>>>
We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by BIBREF9 that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier we compare it to the standard ResNet18 BIBREF14 on the CIFAR-10 dataset BIBREF15. In all experiments, we use a $2\times 2$ invertible down-sampling BIBREF16 on the input to reduce the size of the image as storing the attention coefficient tensor requires a large amount of GPU memory. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier.
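For concreteness, a simplified skeleton of this pipeline (2x2 downsampling, a stack of self-attention layers over flattened pixels, average pooling, linear classifier) is sketched below. It is not the released implementation: it uses PyTorch's built-in nn.MultiheadAttention without the positional encodings studied in this paper, and it approximates the invertible downsampling with pixel unshuffle (space-to-depth).

import torch
import torch.nn as nn

class FullyAttentionalClassifier(nn.Module):
    def __init__(self, in_channels=3, dim=72, n_layers=6, n_heads=9, n_classes=10):
        super().__init__()
        self.down = nn.PixelUnshuffle(2)                 # stands in for the invertible downsampling
        self.embed = nn.Linear(in_channels * 4, dim)
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(dim, n_heads, batch_first=True) for _ in range(n_layers)])
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, img):                              # img: (B, C, H, W)
        x = self.down(img)                               # (B, 4C, H/2, W/2)
        x = x.flatten(2).transpose(1, 2)                 # (B, T, 4C) with T = H*W/4 pixels
        x = self.embed(x)
        for attn in self.layers:
            x, _ = attn(x, x, x)                         # content-only self-attention over pixels
        return self.classifier(x.mean(dim=1))            # average pooling over the last layer

model = FullyAttentionalClassifier()
print(model(torch.randn(2, 3, 32, 32)).shape)            # torch.Size([2, 10])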
We used the PyTorch library BIBREF17 and based our implementation on PyTorch Transformers. We release our code on Github and all hyper-parameters are in tab:hyper-parameter in the Appendix.
<<</Implementation Details>>>
<<<Quadratic Encoding>>>
As a first step, we aim to verify that, with the relative position encoding introduced in (SECREF3), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to ${{\Delta }}^{(h)} \sim \mathcal {N}({0}, 2\mathbf{I}_2)$.
fig:isoduringtraining shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend to specific pixels of the image, forming a grid around the query pixel. Our intuition that self-attention applied to images learns convolutional filters around the queried pixel is thus confirmed.
fig:isoattentionfinal displays all attention heads at each layer of the model at the end of training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$): fig:isomanyheads displays both local patterns similar to CNN and long range dependencies. Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space.
To verify that our self-attention model performs equally well as a small ResNet (tab:parametersize), in fig:learnedattentionmap we display the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training. The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS.
<<</Quadratic Encoding>>>
<<<Learned Relative Positional Encoding>>>
We move on to study the positional encoding used in practice by fully-attentional models on images.
We implemented the 2D relative positional encoding scheme used by BIBREF0, BIBREF9: we learn a $\lfloor D_p / 2 \rfloor $ position encoding vector for each row and each column pixel shift. Hence the relative positional encoding of a key pixel at position $\mathbf{k}$ with a query pixel at position $\mathbf{q}$ is the concatenation of the row shift embedding ${{\delta }}_1$ and the column shift embedding ${{\delta }}_2$ (where ${{\delta }}= \mathbf{k} - \mathbf{q}$). We chose $D_p = D_{\textit{out}} = 400$ in the experiment. We differ from the (unpublished) implementation described by BIBREF0 in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer BIBREF16 at input, (ii) we use $D_h = D_{\textit{out}}$ instead of $D_h = D_{\textit{out}} / N_h$, backed by our theory that the effective number of learned filters is $\min (D_h, D_{\textit{out}})$, (iii) the attention scores are computed using only the relative positions of the pixels and not the data. As seen in tab:parametersize, our implementation achieves accuracy close to that of ResNet18.
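The 2D relative encoding described here amounts to two embedding tables, one over row shifts and one over column shifts, whose entries are concatenated for every shift pair; a short sketch follows (ours, with an arbitrary index-offset convention).

import torch
import torch.nn as nn

W_, H_, D_p = 16, 16, 400
row_emb = nn.Embedding(2 * W_ - 1, D_p // 2)     # one vector per possible row shift delta_1
col_emb = nn.Embedding(2 * H_ - 1, D_p // 2)     # one vector per possible column shift delta_2

def rel_encoding(delta1, delta2):
    # concatenation of the row-shift and column-shift embeddings; shifts are offset to be >= 0
    i = torch.tensor(delta1 + W_ - 1)
    j = torch.tensor(delta2 + H_ - 1)
    return torch.cat([row_emb(i), col_emb(j)])    # r_delta, dimension D_p

print(rel_encoding(-3, 5).shape)                  # torch.Size([400])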
The attention probabilities of each head at each layer are displayed in fig:learnedattentionmap. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the encoding from the data, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma UNKREF15 and thus Theorem UNKREF11. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. The phenomenon is particularly prominent for layers four to six, where the behavior of self-attention can be seen to deviate from that of convolution. We also notice that vertical symmetry is much rarer in the learned attention probabilities of higher layers. This matches the intuition that, for image classification, distinguishing between what is above or below something is more crucial than what is left or right. Finally, some of the heads in the last two layers seem to be redundant, likely indicating that the computational and space complexity of the model could be amenable to further reduction, for example by pruning.
<<</Learned Relative Positional Encoding>>>
<<</Experiments>>>
<<<Conclusion>>>
We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similarly to CNNs in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions BIBREF18, BIBREF19. Interesting directions for future work include translating existing insights from the rich CNN literature back to transformers on various data modalities, including images, text and time series. Also, though we currently lack the computational resources to do so, we would be interested to test whether our findings are replicated for larger-scale datasets, such as ImageNet and COCO.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nContributions.\nBackground on Attention Mechanisms for Vision\nThe Multi-Head Self-Attention Layer\nAttention for Images\nPositional Encoding for Images\nSelf-Attention as a Convolutional Layer\nRemark for the 1D case.\nProof of Main Theorem\nRemark about @!START@$D_h$@!END@ and @!START@$D_{\\textit {out}}$@!END@.\nExperiments\nImplementation Details\nQuadratic Encoding\nLearned Relative Positional Encoding\nConclusion"
],
"type": "outline"
}
|
1909.08306
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Text Length Adaptation in Sentiment Classification
<<<Abstract>>>
Can a text classifier generalize well for datasets where the text length is different? For example, when short reviews are sentiment-labeled, can these transfer to predict the sentiment of long reviews (i.e., short to long transfer), or vice versa? While unsupervised transfer learning has been well-studied for cross domain/lingual transfer tasks, Cross Length Transfer (CLT) has not yet been explored. One reason is the assumption that length difference is trivially transferable in classification. We show that it is not, because short/long texts differ in context richness and word intensity. We devise new benchmark datasets from diverse domains and languages, and show that existing models from similar tasks cannot deal with the unique challenge of transferring across text lengths. We introduce a strong baseline model called BaggedCNN that treats long texts as bags containing short texts. We propose a state-of-the-art CLT model called Length Transfer Networks (LeTraNets) that introduces a two-way encoding scheme for short and long texts using multiple training mechanisms. We test our models and find that existing models perform worse than the BaggedCNN baseline, while LeTraNets outperforms all models.
<<</Abstract>>>
<<<Introduction>>>
Text classification can be categorized according to the text length of the data, from sentence-level classification BIBREF0 to document-level classification BIBREF1, BIBREF2. One kind of such task is sentiment classification BIBREF3, a subtask of sentiment analysis BIBREF4, BIBREF5 where we are to predict the sentiment/rating given a review written by a user. In some domains, the length of these reviews varies widely. For example, well-known review websites in East Asia such as Naver Movies and Douban Movies provide two channels for users to write reviews, depending on their preferred length. Figure FIGREF4 shows the review channels provided in Naver Movies.
The first channel is a short review channel, which contains large amounts of reviews, and enforces users to write short reviews accompanied by rating labels. Although labeled, these reviews lack expressiveness to extract useful information. In contrast, the second channel is a long review channel, which contains few long and detailed reviews with descriptions of different aspects about the product/service. Despite being more expressive, long reviews are often not accompanied by sentiment labels, which most supervised sentiment classification models would require.
We study the “transferability” from one review channel to the other. That is, we try to answer whether a text classifier trained on a dataset with length $\alpha $ can predict a text with length $\beta $, where $\alpha $ and $\beta $ differ by a large margin (e.g., sentences versus paragraphs). This is an important question because there are scenarios where we may have better and more plentiful labeled text data, but we want to classify unlabeled texts of a different length. For long to short transfer, more expressive long reviews can be leveraged for training a sentiment classifier for short and context-sparse reviews. For short to long transfer, large amounts of short reviews can be used as supervision to train a classifier for long reviews.
To motivate the non-triviality of such transfer, we train an out-channel (OC) classifier that uses short texts to predict long texts, and an in-channel (IC) classifier that uses long texts on both training and prediction. We also experiment conversely. We use three kinds of classifiers: bag-of-words (BoW), convolutional neural networks BIBREF6, and BERT multilingual BIBREF7. We calculate the transfer loss BIBREF8, which is the difference between the out-channel and in-channel classifier errors (i.e., $\text{TL}=0$ means trivially transferable). Table TABREF7 shows that, though using a better inductive bias such as CNN and BERT seems to slightly lower TL, it remains significantly high, consistently suggesting that length transfer is non-trivial.
Our first contribution is thus to define a new task called Cross Length Transfer (CLT). CLT is a task similar to Cross Domain BIBREF9 and Cross Lingual Transfer BIBREF10 where the difference between the source and target texts is the text length, whose non-trivial influence is shown in Table TABREF7. Our second contribution is to show that models from similar tasks (e.g. Cross Domain Transfer and Multiple Instance Learning) are not effective for CLT and even yield negative transfer, as we elaborate in Section SECREF10 and empirically show in Section SECREF4. Finally, we present two new models specifically for CLT: a strong baseline called BaggedCNN that treats long texts as bags containing short texts, and a state-of-the-art CLT model called Length Transfer Networks (LeTraNets). LeTraNets enables a two-way encoding scheme using multiple training mechanisms, and accepts both short and long text inputs, where one such input is created artificially through concatenation or segmentation. Table TABREF7 shows that LeTraNets has the best transfer loss, and sometimes performs better than the in-channel classifier (when TL is less than zero). We test our models using the multiple benchmark datasets we gathered and show that models from other tasks perform worse than our proposed strong baseline and that LeTraNets performs the best among all models. To the best of our knowledge, we are the first to study CLT.
<<</Introduction>>>
<<<Cross Length Transfer>>>
Cross Length Transfer (CLT) is an unsupervised transfer learning task in which the setting is that the sampling distributions of the training and test data are different because the texts lengths are different (e.g., sentences and paragraphs). Formally, we suppose two sets of texts: a source set $\mathcal {S}$ in which we have labels, and a target set $\mathcal {T}$ in which we want to predict the labels. Moreover, we know that the text length distributions of $\mathcal {S}$ and $\mathcal {T}$ are different, such that an equality case exists as $|\mathcal {S}| = r |\mathcal {T}|$, where $|\mathcal {X}|$ is the mean length of the set $\mathcal {X}$, and $r \ne 1$ is a non-negative rate of difference between two mean lengths. There are two subtasks: long to short transfer where $r>1$ and thus $\mathcal {S}$ contains longer texts, and short to long transfer where $r<1$ and thus $\mathcal {S}$ contains shorter texts. A CLT model should effectively learn to predict the labels of $\mathcal {T}$, on both scenarios. A concrete and simple example is when $\mathcal {S}$ contains labeled sentence reviews and $\mathcal {T}$ contains unlabeled paragraph reviews. A CLT model uses $\mathcal {S}$ for training to effectively predict labels of reviews in $\mathcal {T}$. Also, the same CLT model should be able to do effective prediction vice versa, i.e., when $\mathcal {S}$ are paragraph reviews and $\mathcal {T}$ are sentence reviews. Previous unsupervised transfer learning tasks, i.e. Cross Domain Transfer BIBREF9 and Cross Lingual Transfer BIBREF11, are similar to CLT but have concrete differences. Generally, the goal of these tasks is to map semantic domains, contextually or linguistically, of both $\mathcal {S}$ and $\mathcal {T}$ into a shared space, by aligning the vocabulary BIBREF12, BIBREF13, expanding domain-specific lexicons BIBREF14, BIBREF15, generating labeled samples BIBREF16, and learning to indiscriminate between domains BIBREF17, BIBREF18. These methods are generally symmetric; i.e., even when $\mathcal {S}$ and $\mathcal {T}$ interchange, the same method can be applied easily. However, in CLT, both $\mathcal {S}$ and $\mathcal {T}$ are already in the same contextual and linguistic domains, thus previous methods would not work. Also, CLT brings two new challenges against devising a symmetric model. First, texts with different context richness may have different properties they focus on: hierarchical structures BIBREF2 may be more important for document-level reviews while finding lexical/phrasal cues BIBREF0 may be more important for sentence-level reviews. Second, words on texts with different lengths may have different semantic intensity. For example, “good” may have a very high positive sentiment intensity on short texts, and a relatively low positive sentiment intensity on long ones.
<<<Benchmark Datasets>>>
We provide three pairs of short/long datasets from different domains (movies and restaurants) and from different languages (English and Korean) suitable for the task: Mov_en, Res_en, and Mov_ko. Most of the datasets are from previous literature and are gathered differently. The Mov_en datasets are gathered from different websites; the short dataset consists of hand-picked sentences by BIBREF19 from document-level reviews from the Rotten Tomatoes website, while the long dataset consists of reviews from the IMDB website obtained by BIBREF20. The Res_en dataset consists of reviews from Yelp, where the short dataset consists of reviews with character lengths less than 140 from BIBREF21, while reviews in the long dataset are gathered from BIBREF20. We also share new short/long datasets Mov_ko, which are gathered from two different channels, as shown in Figure FIGREF4, available in Naver Movies. Unlike previous datasets BIBREF9, BIBREF22 where they used polarity/binary (e.g., positive or negative) labels as classes, we also provide fine-grained classes, with five classes of different sentiment intensities (e.g., 1 is strong negative, 5 is strong positive), for Res_en and Mov_ko. Following the Cross Domain Transfer setting BIBREF9, BIBREF23, BIBREF24, we limit the size of the dataset to be small-scale to focus on the main task at hand. This ensures that models focus on the transfer task, and decreases the influence of other factors that can be found when using larger datasets. Finally, following BIBREF22, we provide additional unlabeled data for those models that need them BIBREF9, BIBREF23, except for the long dataset of Mov_ko, where the labeled reviews are very limited. We show the dataset statistics in Table TABREF9, and share the datasets here: https://github.com/rktamplayo/LeTraNets.
<<</Benchmark Datasets>>>
<<<Possible Existing Solutions>>>
<<<Cross Domain Transfer (CDT)>>>
CDT offers models that effectively transfer domain-independent features from two different domains. The most popular non-neural CDT model is Structural Correspondence Learning (SCL) BIBREF25, a method that identifies feature correspondence from different domains using pivot features. A recent neuralized extension is Neural SCL BIBREF26, in which an autoencoder module is integrated into SCL. The CDT literature is vast, and we refer the readers to BIBREF27 and BIBREF28 for overviews. Although these models may see improvements due to a possible difference in vocabulary (especially when the review channels are different), these improvements may be marginal since the domain of the datasets is the same.
<<</Cross Domain Transfer (CDT)>>>
<<<Multiple Instance Learning (MIL)>>>
MIL is a task where, given the labels of a bag of multiple instances, we are to label the individual instances BIBREF29. In the text classification domain, MIL is often devised as segment-level classification BIBREF30, BIBREF31, where documents are bags and sentences in the documents are segments. The most recent MIL model is the Multiple Instance Learning Network BIBREF32, where they used attention-based polarity scoring to identify segment labels. MIL models can be used in long to short transfer, where we assume that segment labels in long texts can be used to label short reviews. However, they (a) assume that segments from long data, which rely on inter-sentence semantics, are comparable to self-contained short texts, and (b) are ineffective on short to long transfer because they need multiple sentences to train the components of the model for document-level classification.
<<</Multiple Instance Learning (MIL)>>>
<<<Weak Supervision>>>
A simple yet possible solution for short to long transfer is a three-step approach where we (1) cluster the short texts into several long texts, (2) infer the class labels of the clusters, and (3) use the labeled clusters as weak supervision to create a classifier. The Micro Aspect Sentiment Model BIBREF33 does (1) and (2) automatically. For (3), we can train a classifier such as CNNs BIBREF0 to predict labels of long texts. One critical issue of this solution is that since both clustering and class labels are inferred, there is a high chance that at least one of them is incorrect. This creates compounding errors that decrease the performance of the model.
<<</Weak Supervision>>>
<<</Possible Existing Solutions>>>
<<</Cross Length Transfer>>>
<<<Our Models>>>
<<<BaggedCNN: A Strong Baseline>>>
We present BaggedCNN, a simple yet strong baseline for the CLT task. BaggedCNN is a model derived from MILNet BIBREF31. MILNet uses a CNN to encode segments, a BiGRU BIBREF34 to calculate attention weights, and gated polarity to calculate document-level probabilities. We refer the readers to the original paper for more details. We improve on it using two key modifications: (a) removing the sequential connections (i.e., BiGRU) between segments, and (b) using a single classifier for both the segments and the full document. For each document divided into segments $D=\lbrace S_i\rbrace $, BaggedCNN starts by encoding the segments using a CNN classifier called $\text{CNN}_{bag}$. Then, we pool the segment encodings into one vector using an attention mechanism. Finally, we use a logistic regression classifier that can be used to classify either the segments or the document. This is possible since the vectors of both segments and document lie in the same vector space.
The model is trained differently depending on the transfer task. For long to short transfer, we minimize the cross-entropy loss between the actual and predicted class of the document, $\mathcal {L}_d$. For short to long transfer, we minimize the mean cross-entropy loss between the actual and predicted classes of the segments, $\sum \mathcal {L}_{s_i}/n, 1\le i \le n$. Note that BaggedCNN then reduces to a model where average pooling is done instead of the attention mechanism. At test time, we use $y_d$ for classification. While it has been shown that removing the sequential structure in the document level (i.e., BiGRU in the case of MILNet) decreases the performance of the document classifier BIBREF35, BIBREF2, we argue that this removal is effective on the CLT task because of inter-segment independence. That is, sentences in the document are treated similarly to short texts. We also show in our experiments that BaggedCNN performs better than MILNet. However, BaggedCNN still fails to consider two things. First, while the model relaxes the strong assumption on similarity between segments and short texts by removing the sequential connections, most segments cannot be treated as stand-alone short texts. For example, the segment “Yet it is salty.” is not a stand-alone short review. Second, when doing short to long transfer, the input short text is just one segment, thus the model is reduced to a weaker hierarchical CNN classifier.
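The following is a compact sketch of BaggedCNN as we describe it above (our reading of the description, not the released implementation): a shared Kim-style CNN encoder for segments, attention pooling over segment vectors, and a single linear classifier applied to either a segment vector or the pooled document vector.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNEncoder(nn.Module):
    # convolutions over word embeddings + max-over-time pooling
    def __init__(self, vocab, emb=300, maps=100, widths=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList([nn.Conv1d(emb, maps, w, padding=w - 1) for w in widths])

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)          # (batch, emb, seq_len)
        return torch.cat([F.relu(c(x)).max(dim=2).values for c in self.convs], dim=1)

class BaggedCNN(nn.Module):
    def __init__(self, vocab, n_classes, dim=300):
        super().__init__()
        self.cnn_bag = CNNEncoder(vocab)              # shared segment encoder
        self.att = nn.Linear(dim, 1)                  # attention scorer over segments
        self.clf = nn.Linear(dim, n_classes)          # single classifier for segments and documents

    def forward(self, segments):                      # segments: (n_segments, seq_len) of one document
        s = self.cnn_bag(segments)                    # (n_segments, dim) segment vectors
        a = torch.softmax(self.att(s), dim=0)         # attention weights over segments
        d = (a * s).sum(dim=0)                        # pooled document vector
        return self.clf(d), self.clf(s)               # document logits, per-segment logits

model = BaggedCNN(vocab=5000, n_classes=2)
doc_logits, seg_logits = model(torch.randint(0, 5000, (4, 20)))   # a document with 4 segments
print(doc_logits.shape, seg_logits.shape)             # torch.Size([2]) torch.Size([4, 2])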
<<</BaggedCNN: A Strong Baseline>>>
<<<LeTraNets: Length Transfer Networks>>>
We improve BaggedCNN by proposing a model called Length Transfer Networks (LeTraNets), as shown in Figure FIGREF16. LeTraNets is composed of two classifiers: a stand-alone CNN classifier with text encoder $\text{CNN}_{lone}$, and BaggedCNN, which includes a segment-level text encoder $\text{CNN}_{bag}$. The $\text{CNN}_{lone}$ encoder is used to capture holistic textual features, while the $\text{CNN}_{bag}$ encoder is used to capture segment-level textual features, assuming there is a bigger text that owns the segments.
For each data instance, LeTraNets accepts two kinds of inputs: a long text $D={w_d}$ and a set of short texts $S={{w_{s_0}},...,{w_{s_n}}}$. However, the task setting only provides one of the two kinds of texts as input. We thus create pseudo-texts from the available text data through the following methods. In the long to short transfer task, we use segments in long texts as pseudo-short texts, as used in BaggedCNN. In the short to long transfer task, we concatenate a random number of short texts to create pseudo-long texts. The latter amounts to a possibly infinite number of long texts we can use for training. The short texts are encoded by both $\text{CNN}_{lone}$ and $\text{CNN}_{bag}$ as $s^{\lbrace l\rbrace }_i$ and $s^{\lbrace b\rbrace }_i$. The long texts are encoded using both $\text{CNN}_{lone}$ and BaggedCNN as $d^{\lbrace l\rbrace }$ and $d^{\lbrace b\rbrace }$.
The encoded long text vectors $d^{\lbrace l\rbrace }$ and $d^{\lbrace b\rbrace }$ and short text vectors $s^{\lbrace b\rbrace }_i$ and $s^{\lbrace l\rbrace }_i$ are used to classify their labels using softmax classifiers specific to each CNN encoder.
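A structural sketch of this two-way encoding scheme is given below; to keep it short, the two CNN encoders are replaced by simple bag-of-embeddings encoders (an assumption made purely for illustration), and only the long-text path plus a joint classifier over the concatenated views (used by the Joint Training mechanism introduced in the next subsection) are shown.

import torch
import torch.nn as nn

class LeTraNetsSketch(nn.Module):
    def __init__(self, vocab=5000, dim=128, n_classes=2):
        super().__init__()
        self.enc_lone = nn.EmbeddingBag(vocab, dim)     # stands in for CNN_lone (holistic encoder)
        self.enc_bag = nn.EmbeddingBag(vocab, dim)      # stands in for CNN_bag (segment-level encoder)
        self.att = nn.Linear(dim, 1)
        self.clf_lone = nn.Linear(dim, n_classes)
        self.clf_bag = nn.Linear(dim, n_classes)
        self.clf_joint = nn.Linear(2 * dim, n_classes)  # classifier over both views (JT mechanism)

    def forward(self, long_text, segments):
        # long_text: (1, L) token ids; segments: (n_segments, S) token ids (pseudo or real short texts)
        d_lone = self.enc_lone(long_text).squeeze(0)    # holistic view of the long text
        s_bag = self.enc_bag(segments)                  # segment vectors
        a = torch.softmax(self.att(s_bag), dim=0)
        d_bag = (a * s_bag).sum(dim=0)                  # BaggedCNN-style view of the long text
        joint = self.clf_joint(torch.cat([d_lone, d_bag]))
        return self.clf_lone(d_lone), self.clf_bag(d_bag), joint

model = LeTraNetsSketch()
out = model(torch.randint(0, 5000, (1, 80)), torch.randint(0, 5000, (5, 16)))
print([o.shape for o in out])                           # three logit vectors of size n_classes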
<<<Training Mechanisms>>>
There are two main issues when training the model in the CLT setting. First, the stand-alone CNN classifier and BaggedCNN are disconnected, acting as two individual classifiers. Second, the model needs labels for both short and long text data, but we are only given labels for one kind of data during training for each transfer setting. Solving the second issue is crucial for short to long transfer, as we cannot train the full model if we do not have labels for long data. To this end, we use the three training mechanisms below that help mitigate these issues. We connect the classifiers on different levels. At the word level, we use the same word embedding space for both classifiers. Beyond the word level, we use a training mechanism called Joint Training (JT). This concatenates the encoded text vectors, and creates another logistic regression classifier for the concatenated vector. This creates a connection between classifiers at the classification level.
Beyond word-level, we introduce Prediction Regularization (PR) mechanism to train encoders with no labels. This regularizes the predictions of a weaker classifier based on the predictions of a stronger classifier. We consider BaggedCNN as the stronger classifier for long to short transfer, and $\text{CNN}_{lone}$ as the stronger classifier for short to long transfer. We use Kullback-Leibler divergence as the regularization function.
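The Prediction Regularization term can be written as a KL divergence between the two classifiers' predictive distributions; a minimal sketch follows (the direction of the KL and the detaching of the stronger classifier's predictions are our assumptions about the intended behavior).

import torch
import torch.nn.functional as F

def prediction_regularization(weak_logits, strong_logits):
    # KL(strong || weak): push the weaker classifier's distribution toward the stronger one's.
    # The stronger classifier's predictions are detached so they act as a fixed target.
    weak_log_probs = F.log_softmax(weak_logits, dim=-1)
    strong_probs = F.softmax(strong_logits, dim=-1).detach()
    return F.kl_div(weak_log_probs, strong_probs, reduction="batchmean")

weak = torch.randn(8, 2)     # e.g. CNN_lone predictions in long to short transfer
strong = torch.randn(8, 2)   # e.g. BaggedCNN predictions, treated as the stronger classifier
print(prediction_regularization(weak, strong).item())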
Finally, using the PR mechanism directly might not work because predictions from the stronger classifier may not be optimized yet. Hence, we use Stepwise Pretraining (SP) mechanism to pretrain specific parts of the model in a step-by-step fashion. First, we pretrain the stronger classifier, then the weaker classifier with PR mechanism, and finally the classifier of the JT mechanism. After pretraining, we train the full model. The training configurations are different depending on the transfer task, which is also shown in Figure FIGREF16. For long to short transfer, we use $p(y^{\lbrace j\rbrace }_{d})$ for the JT mechanism and $R_d$ for the PR mechanism. For short to long transfer, we use $p(y^{\lbrace j\rbrace }_{s_i})$ for the JT mechanism and $R_s$ for the PR mechanism. The final training objective is to minimize the loss function, depending on the text length:
where $\mathcal {L}^{\lbrace a\rbrace }_x$ is the cross-entropy loss between the actual and predicted values of the classifier $p(y^{\lbrace a\rbrace }_x)$, and $\lambda $ is tuned using a development set. At test time, we use $p(y^{\lbrace j\rbrace }_{d})$ and $p(y^{\lbrace j\rbrace }_{s_i})$ to classify the sentiment for long to short and short to long transfer, respectively.
<<</Training Mechanisms>>>
<<</LeTraNets: Length Transfer Networks>>>
<<</Our Models>>>
<<<Experiments>>>
<<<Experimental Settings>>>
The dimensions of word vectors are set to 300. We use pre-trained GloVe embeddings BIBREF36 to initialize our English word vectors, and pre-trained FastText embeddings BIBREF37 to initialize our Korean word vectors. For all CNNs, we set $h=3,4,5$, each with 100 feature maps, following BIBREF0. We use dropout BIBREF38 on all non-linear connections with a dropout rate of 0.5. We set the batch size to 32. We use stochastic gradient descent over shuffled mini-batches with the Adadelta update rule BIBREF39 with $l_2$ constraint of 3. We experiment with a 5-fold cross-validation on the given source training set and report the average results.
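One way to realize the optimizer and regularization part of this configuration in PyTorch is sketched below with a placeholder model; applying the $l_2$ constraint of 3 as a max-norm renormalization of the weights after each update is our interpretation of the setting.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(300, 300), nn.ReLU(), nn.Dropout(0.5), nn.Linear(300, 2))
optimizer = torch.optim.Adadelta(model.parameters())

def train_step(x, y, max_norm=3.0):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    # l2 max-norm constraint: rescale each weight row whose norm exceeds max_norm
    with torch.no_grad():
        for module in model:
            if isinstance(module, nn.Linear):
                module.weight.data = torch.renorm(module.weight.data, p=2, dim=0, maxnorm=max_norm)
    return loss.item()

print(train_step(torch.randn(32, 300), torch.randint(0, 2, (32,))))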
<<</Experimental Settings>>>
<<<Comparison Models>>>
We compare our models with the models from similar tasks as discussed in Section SECREF10. Specifically, we compare with (a) Cross Domain Transfer (CDT) models SCL BIBREF9 and NeuSCL BIBREF23, (b) CDT models with a CNN classifier integration BIBREF16 (SCL+CNN and NeuSCL+CNN), (c) a multiple-instance learning (MIL) model MILNet BIBREF31, (d) a weakly supervised model MASM+CNN BIBREF21. We remind that MILNet is only applicable to long to short transfer, and MASM+CNN is only applicable to short to long transfer. We use the available code provided by previous authors. Finally, we also compare with CNN BIBREF0, and a combination of two CNNs (CNNx2) as no-transfer baselines.
<<</Comparison Models>>>
<<<Dataset and Evaluation>>>
We use the datasets described in Table TABREF9 for all our experiments. We use the following evaluation metrics. For all datasets, we use accuracy (Acc) to measure the overall sentiment classification performance. Additionally, for fine-grained datasets, we use root mean squared error (RMSE) to measure the divergence between the predicted and ground truth sentiment scores. Finally, in order to compare models in an integrated manner, we report the average transfer ratio BIBREF40, a version of the transfer loss which is more adaptive to averaging, calculated as the average quotient between the transfer error and the in-domain baseline error, i.e. $\text{TR} = \sum _x e(\mathcal {S}_x,\mathcal {T}_x) / e_b(\mathcal {T}_x,\mathcal {T}_x)$, where $\mathcal {S}_x$ and $\mathcal {T}_x$ are the source and domain of dataset $x$, respectively, $e$ and $e_b$ are accuracy errors from the competing model and the baseline CNN model.
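Reading the “average quotient” description as a division by the number of datasets, the average transfer ratio can be computed as follows (the error values are hypothetical).

def transfer_ratio(transfer_errors, baseline_errors):
    # average over datasets of e(S_x, T_x) / e_b(T_x, T_x); a value close to 1 means
    # the cross-length classifier is almost as good as the in-channel baseline
    ratios = [e / eb for e, eb in zip(transfer_errors, baseline_errors)]
    return sum(ratios) / len(ratios)

# hypothetical accuracy errors (1 - accuracy) on three target datasets
print(transfer_ratio([0.22, 0.31, 0.27], [0.20, 0.28, 0.25]))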
<<</Dataset and Evaluation>>>
<<<Long to Short Transfer>>>
We show the results for long to short transfer in the first part of Table TABREF20. Results show that Cross Domain Transfer models do not perform well, which confirms our hypothesis that they are not well suited for this task. MILNet performs well on polarity tasks, but performs poorly on fine-grained tasks, having worse performance than the no-transfer CNN baseline. This shows that although Multiple Instance Learning models are effective in classifying positive or negative sentiments, they are not flexible to fine-grained sentiment intensities, which differ when text lengths are different. On the other hand, BaggedCNN performs better than MILNet, proving that simplifying MIL models works well on CLT. Overall, LeTraNets performs the best among all models, having the best accuracies and RMSEs on all datasets and settings.
<<</Long to Short Transfer>>>
<<<Short to Long Transfer>>>
We report the results for short to long transfer in the second part of Table TABREF20. Results show that Cross Domain Transfer models perform much worse compared to their performance in the long to short transfer task. The weak supervised model MASM+CNN performs the worst, having worse results than the no-transfer CNN baseline on all datasets. BaggedCNN also performs well in this task, even though it does not use its attention mechanism. This shows that BaggedCNN is a very tough-to-beat baseline for the CLT task. Finally, LeTraNets also outperforms all the models on this subtask.
<<</Short to Long Transfer>>>
<<<Transfer Ratio (TR)>>>
Figure FIGREF30 shows the average transfer ratio (TR) of all competing models, where $\text{TR}=1$ means trivially transferable. The figure shows that the CDT models SCL and NeuSCL both obtain a larger transfer ratio compared to the no-transfer CNN baseline. The transfer ratios improve when CNN is integrated into both models, but does not improve much from the baseline. MILNet and BaggedCNN perform comparably on the long to short transfer task, where BaggedCNN performs slightly better. LeTraNets performs the best among the models, having transfer ratios less than 1.1.
<<</Transfer Ratio (TR)>>>
<<</Experiments>>>
<<<Analyses>>>
<<<Ablation on Training Mechanisms>>>
We investigate the performance of LeTraNets when the training mechanisms are not used. Specifically, we perform ablation tests on the Joint Training (JT), Prediction Regularization (PR), and Stepwise Pretraining (SP) mechanisms. The results in Table TABREF31 show that LeTraNets performs the best when all training mechanisms are used. Also, when used individually, all the training mechanisms boost up the performance of the model. Hence, we confirm that the training mechanisms help LeTraNets achieve good performance on the task.
<<</Ablation on Training Mechanisms>>>
<<<Performance per Text Length>>>
We check the capability of LeTraNets to transfer across text lengths by looking at its performance as the text length increases. Specifically, we compare the performance per text length of LeTraNets and CNN models, trained on either short texts (LeTraNetsshort and CNNshort) or long texts (LeTraNetslong and CNNlong), on the Res_en short/long datasets. Figure FIGREF34 shows the results. CNN performs well when the text length is similar to that of the training dataset and performs poorly otherwise. LeTraNets, however, performs similarly on all kinds of text lengths although it is trained purely on a dataset of a specific length. More interestingly, LeTraNetsshort performs better than LeTraNetslong on longer texts, and unexpectedly performs worse on shorter texts. This suggests that LeTraNets weakens its ability to classify texts of the same length and improves its ability to classify texts of a different length. This property is acceptable in our problem setup since we care about effectively classifying short (or long) texts more, assuming we only have access to long (or short) texts as training data. However, future work should explore CLT models that perform well on both text lengths.
<<</Performance per Text Length>>>
<<<On Topic Diversity>>>
Longer texts can discuss diverse topics, while shorter texts are limited to few (or one) topics. In the sentiment classification domain, longer reviews may mention positive sentiments towards an aspect of a product, and then talk about negative sentiments towards another aspect. With this hypothesis, we examine whether LeTraNets can handle longer texts with diverse topics when trained on short texts. Specifically, we compare the performance per topic diversity of LeTraNets and CNN models, trained on short texts of Res_en dataset. We measure topic diversity as the Shannon index BIBREF41 of the topic distribution inferred by an LDA topic model BIBREF42 fit using the unlabeled data. Figure FIGREF36 shows the results. Results indicate that the performance increase of LeTraNets over CNN increases as the diversity of topics increases. This shows that for short to long transfer, LeTraNets is able to handle texts with topics that are more diverse, even when trained on short texts, which tend to have less diverse topics.
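For reference, the Shannon index of a document's topic distribution is simply its entropy; a small sketch with made-up distributions is shown below.

import numpy as np

def shannon_index(topic_distribution, eps=1e-12):
    # Shannon entropy H = -sum_k p_k log p_k of a document's LDA topic distribution;
    # higher values mean the document spreads its mass over more topics
    p = np.asarray(topic_distribution, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p + eps)).sum())

print(shannon_index([0.9, 0.05, 0.05]))           # low diversity, one dominant topic
print(shannon_index([0.25, 0.25, 0.25, 0.25]))    # maximal diversity for four topics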
<<</On Topic Diversity>>>
<<<Cross Domain and Length Transfer>>>
Which between domain and text length should we consider to achieve a better performance? To answer this question, we combine Cross Domain Transfer (CDT) and Cross Length Transfer (CLT) into one task: Cross Domain and Length Transfer (CDLT) and compare the performance of CDT and CLT models on the task. We use the Mov_en and Res_en datasets to create four CDLT datasets, and check which between the CDT model NeuSCL+CNN and the CLT model LeTraNets achieves a higher increase in performance. The results are shown in Table TABREF38. We find that NeuSCL+CNN performs worse, obtaining accuracies worse than that of the no-transfer CNN baseline. LeTraNets performs better, obtaining significant increase in performance from the baseline. This shows that solving the non-transferability of length is more important to achieve a more effective sentiment classifier.
<<</Cross Domain and Length Transfer>>>
<<</Analyses>>>
<<<Conclusions>>>
We defined a new task called Cross Length Transfer (CLT) to check the transferability across lengths of classification models. We set the grounds by defining the task, providing three benchmark datasets from different domains and languages, and introducing models from related tasks. We proposed two models: a strong baseline model called BaggedCNN, and LeTraNets, a model that improves over the weakness of BaggedCNN. Our multiple experiments show that LeTraNets demonstrates superior performance over all competing models. We aim to apply the CLT to other classification tasks, such as natural language inference BIBREF43, where text length is influential towards overall model performance BIBREF44.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nCross Length Transfer\nBenchmark Datasets\nPossible Existing Solutions\nCross Domain Transfer (CDT)\nMultiple Instance Learning (MIL)\nWeak Supervision\nOur Models\nBaggedCNN: A Strong Baseline\nLeTraNets: Length Transfer Networks\nTraining Mechanisms\nExperiments\nExperimental Settings\nComparison Models\nDataset and Evaluation\nLong to Short Transfer\nShort to Long Transfer\nTransfer Ratio (TR)\nAnalyses\nAblation on Training Mechanisms\nPerformance per Text Length\nOn Topic Diversity\nCross Domain and Length Transfer\nConclusions"
],
"type": "outline"
}
|
2002.10361
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition
<<<Abstract>>>
Existing research on fairness evaluation of document classification models mainly uses synthetic monolingual data without ground truth for author demographic attributes. In this work, we assemble and publish a multilingual Twitter corpus for the task of hate speech detection with inferred four author demographic factors: age, country, gender and race/ethnicity. The corpus covers five languages: English, Italian, Polish, Portuguese and Spanish. We evaluate the inferred demographic labels with a crowdsourcing platform, Figure Eight. To examine factors that can cause biases, we take an empirical analysis of demographic predictability on the English corpus. We measure the performance of four popular document classifiers and evaluate the fairness and bias of the baseline classifiers on the author-level demographic attributes.
<<</Abstract>>>
<<<Introduction>>>
While document classification models should be objective and independent from human biases in documents, research has shown that the models can learn human biases and therefore be discriminatory towards particular demographic groups BIBREF0, BIBREF1, BIBREF2. The goal of fairness-aware document classifiers is to train and build non-discriminatory models towards people no matter what their demographic attributes are, such as gender and ethnicity. Existing research BIBREF0, BIBREF3, BIBREF4, BIBREF5, BIBREF1 in evaluating fairness of document classifiers focuses on group fairness BIBREF6, which refers to every demographic group having equal probability of being assigned to the positive predicted document category.
However, the lack of original author demographic attributes and multilingual corpora brings challenges to the fairness evaluation of document classifiers. First, the datasets commonly used to build and evaluate the fairness of document classifiers rely on derived, synthetic author demographic attributes instead of the original author information. The common data sources either derive from Wikipedia toxic comments BIBREF0, BIBREF4, BIBREF5 or synthetic document templates BIBREF3, BIBREF4. The Wikipedia Talk corpus BIBREF7 provides demographic information of annotators instead of the authors, and the Equity Evaluation Corpus BIBREF3 is created from sentence templates and combinations of racial names and gender coreferences. While existing work BIBREF8, BIBREF9 infers user demographic information (white/black, young/old) from the text, such inference is still likely to cause confounding errors that impact and break the independence between demographic factors and the fairness evaluation of text classifiers. Second, existing research in fairness evaluation mainly focuses on English resources only, such as age biases in blog posts BIBREF9, gender biases in Wikipedia comments BIBREF0 and racial biases in hate speech detection BIBREF8. Different languages have shown different patterns of linguistic variation across the demographic attributes BIBREF10, BIBREF11, so methods BIBREF12, BIBREF4 to reduce and evaluate demographic bias in English corpora may not apply to other languages. For example, Spanish has gender-dependent nouns, but this does not exist in English BIBREF2; and Portuguese varies across Brazil and Portugal in both word usage and grammar BIBREF13. These rich variations have not been explored under fairness evaluation due to the lack of multilingual corpora. Additionally, while we have hate speech detection datasets in multiple languages BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, there is still no integrated multilingual corpus that contains author demographic attributes which can be used to measure group fairness. The lack of author demographic attributes and multilingual datasets limits research for evaluating classifier fairness and developing unbiased classifiers.
In this study, we combine previously published corpora labeled for Twitter hate speech recognition in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17, and publish this multilingual data augmented with author-level demographic information for four attributes: race, gender, age and country. The demographic factors are inferred from user profiles, which are independent from the text documents, the tweets. To the best of our knowledge, this is the first multilingual hate speech corpus annotated with author attributes aiming for fairness evaluation. We start by presenting the collection and inference steps of the datasets. Next, we take an exploratory study of the language variations across demographic groups on the English dataset. We then experiment with four document classification models to establish baseline levels on this corpus. Finally, we evaluate the fairness performance of those document classifiers.
<<</Introduction>>>
<<<Data>>>
We assemble the annotated datasets for hate speech classification. To narrow down the data sources, we limit our dataset sources to the unique online social media site, Twitter. We have requested 16 published Twitter hate speech datasets, and finally obtained 7 of them in five languages. By using the Twitter streaming API, we collected the tweets annotated by hate speech labels and their corresponding user profiles in English BIBREF14, BIBREF19, BIBREF20, Italian BIBREF15, Polish BIBREF16, Portuguese BIBREF18, and Spanish BIBREF17. We binarize all tweets' labels (indicating whether a tweet has indications of hate speech), allowing to merge the different label sets and reduce the data sparsity.
Whether a tweet is considered hate speech heavily depends on who the speaker is; for example, whether a racial slur is intended as hate speech depends in part on the speaker's race BIBREF14. Therefore, hate speech classifiers may not generalize well across all groups of people, and disparities in the detection of offensive speech could lead to bias in content moderation BIBREF21. Our contribution is to further annotate the data with user demographic attributes inferred from their public profiles, thus creating a corpus suitable for evaluating author-level fairness for this hate speech recognition task across multiple languages.
<<<User Attribute Inference>>>
We consider four user factors: age, race, gender and geographic location. For location, we infer two granularities, country and US region, but only experiment with the country attribute. While the demographic attributes can be inferred through tweets BIBREF22, BIBREF8, we intentionally exclude the tweet contents from the inference of these user attributes, in order to make the evaluation of fairness more reliable and independent. If users were grouped based on attributes inferred from their text, then any differences in text classification across those groups could be related to that same text. Instead, we infer attributes from public user profile information (i.e., description, name and photo).
<<<Age, Race, Gender.>>>
We infer these attributes from each user's profile image by using Face++ (https://www.faceplusplus.com/), a computer vision API that provides estimates of demographic characteristics. Empirical comparisons of facial recognition APIs have found that Face++ is the most accurate tool on Twitter data BIBREF23 and works comparatively better for darker skins BIBREF24. For the gender, we choose the binary categories (male/female) by the predicted probabilities. We map the racial outputs into four categories: Asian, Black, Latino and White. We only keep users that appear to be at least 13 years old, and we save the first result from the API if multiple faces are identified. We experiment and evaluate with binarization of race and age with roughly balanced distributions (white and nonwhite, $\le $ median vs. elder age) to consider a simplified setting across different languages, since race is harder to infer accurately.
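The filtering and binarization steps described above can be sketched as follows; the field names of the parsed profile dictionary are placeholders chosen for illustration and do not reproduce the raw Face++ response schema, and the median age is assumed to be computed over the corpus.

def binarize_attributes(profile, median_age):
    """profile: dict with already-parsed fields, e.g. {'age': 24, 'gender': 'Female', 'race': 'Asian'}.
    The keys and values here are illustrative placeholders, not the raw API schema."""
    if profile["age"] < 13:                      # drop users who appear younger than 13
        return None
    return {
        "age": "young" if profile["age"] <= median_age else "elder",
        "gender": profile["gender"].lower(),     # kept binary: male / female
        "race": "white" if profile["race"] == "White" else "nonwhite",
    }

print(binarize_attributes({"age": 24, "gender": "Female", "race": "Asian"}, median_age=28))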
<<</Age, Race, Gender.>>>
<<<Country.>>>
The country-level language variations can bring challenges that are worth to explore. We extract geolocation information from users whose profiles contained either numerical location coordinates or a well-formatted (matching a regular expression) location name. We fed the extracted values to the Google Maps API (https://maps.googleapis.com) to obtain structured location information (city, state, country). We first count the main country source and then binarize the country to indicate if a user is in the main country or not. For example, the majority of users in the English are from the United States (US), therefore, we can binarize the country attributes to indicate if the users are in the US or not.
<<</Country.>>>
<<</User Attribute Inference>>>
<<<Corpus Summary>>>
We show the corpus statistics in Table TABREF8 and summarize the full demographic distributions in Table TABREF9. The binary demographic attributes (age, country, gender, race) bring several benefits. First, we can create comparatively balanced label distributions. We can observe that there are differences in race and gender among the Italian and Polish data, while the other attributes across the other languages show comparably balanced demographic distributions. Second, binarization can reduce errors from the Face++ inference by using coarse labels. Third, it is more convenient for us to analyze, conduct experiments and evaluate the group fairness of document classifiers.
Table TABREF8 presents different patterns across the corpora. The Polish data has the fewest users. This is because that data focuses on the people who own the most popular accounts BIBREF16, while the other datasets collected tweets randomly. The Polish dataset also shows a much sparser distribution of the hate speech label than the other languages.
Table TABREF9 presents different patterns of the user attributes. In the collected data, English, Portuguese and Spanish users are younger than the Italian and Polish users. Both the Italian and Polish data show more skewed demographic distributions in country, gender and race, while the other datasets show more balanced distributions.
<<</Corpus Summary>>>
<<<Demographic Inference Accuracy>>>
Image-based approaches will have inaccuracies, as a person's demographic attributes cannot be conclusively determined merely from their appearance. However, given the difficulty in obtaining ground truth values, we argue that automatically inferred attributes can still be informative for studying classifier fairness. If a classifier performs significantly differently across different groups of users, then this shows that the classifier is biased along certain groupings, even if those groupings are not perfectly aligned with the actual attributes they are named after. This subsection tries to quantify how reliably these groupings correspond to the demographic variables.
Prior research found that Face++ achieves 93.0% and 92.0% accuracy on gender and ethnicity evaluations BIBREF23. We further conduct a small evaluation on the hate speech corpus using a sample of annotated user profile photos, which provides a rough estimate of accuracy, while acknowledging that our annotations are not ground truth. We obtained the annotations from the crowdsourcing website Figure Eight (https://figure-eight.com/). We randomly sampled 50 users whose attributes came from Face++ in each language. We anonymize the user profiles and feed the information to the crowdsourcing website. Three annotators annotated each user photo with the binary demographic categories. To select qualified annotators and ensure the quality of the evaluations, we set up 5 gold-standard annotation questions for each language; annotators can join the evaluation task only after passing them. We decide demographic attributes by majority vote and present evaluation results in Table TABREF11. Our final evaluations show that, overall, Face++ achieves average accuracy scores of 82.8%, 88.4% and 94.4% for age, race and gender, respectively.
<<</Demographic Inference Accuracy>>>
<<<Privacy Considerations>>>
To facilitate the study of classification fairness, we will publicly distribute this anonymized corpus with the inferred demographic attributes, including both original and binarized versions. To preserve user privacy, we will not publicize the personal profile information, including user ids, photos, geocoordinates and other user profile information, which were used to infer the demographic attributes. We will, however, provide the inferred demographic attributes in their original formats from Face++ and Google Maps upon request, to allow the wider research community to replicate the methodology and probe fairness in document classification in more depth.
<<</Privacy Considerations>>>
<<</Data>>>
<<<Language Variations across Demographic Groups>>>
Demographic factors can improve the performance of document classifiers BIBREF25, and demographic variations are rooted in language, especially in social media data BIBREF26, BIBREF25. For example, language styles are highly correlated with authors' demographic attributes, such as age, race, gender and location BIBREF27, BIBREF28. Research BIBREF29, BIBREF12, BIBREF30 finds that biases and stereotypes exist in word embeddings, which are widely used in document classification tasks. For example, “receptionist” is closer to females while “programmer” is closer to males, and “professor” is closer to Asian Americans while “housekeeper” is closer to Hispanic Americans.
This motivates us to test whether such language variations hold in our particular dataset and how strong the effects are. We conduct an empirical analysis of demographic predictability on the English dataset.
<<<Are Demographic Factors Predictable in Documents?>>>
We examine how accurately the documents can predict author demographic attributes from three different levels:
Word-level. We extract TF-IDF-weighted 1- and 2-gram features.
POS-level. We use the Tweebo parser BIBREF31 to tag and extract POS features. We count the POS tags and then normalize the counts for each document.
Topic-level. We train a Latent Dirichlet Allocation BIBREF32 model with 20 topics using Gensim BIBREF33 with default parameters. Then a document can be represented as a probabilistic distribution over the 20 topics.
We shuffle and split data into training (70%) and test (30%) sets. Three logistic classifiers are trained by the three levels of features separately. We measure the prediction accuracy and show the absolute improvements in Figure FIGREF18.
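As a minimal sketch of the word-level variant of this probe (variable names and the classifier settings beyond those stated above are illustrative assumptions, not the authors' released code), the pipeline is roughly:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.dummy import DummyClassifier

def word_level_predictability(texts, attribute_labels, seed=42):
    """Train a logistic classifier on TF-IDF 1-/2-grams to predict one binary
    demographic attribute, and report its gain over a majority-class baseline."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, attribute_labels, test_size=0.3, random_state=seed, shuffle=True)

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)

    clf = LogisticRegression(max_iter=1000).fit(X_train_vec, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test_vec))

    majority = DummyClassifier(strategy="most_frequent").fit(X_train_vec, y_train)
    baseline = accuracy_score(y_test, majority.predict(X_test_vec))
    return acc, acc - baseline  # absolute improvement over the majority baseline
```

The POS-level and topic-level probes follow the same recipe, swapping the TF-IDF features for normalized POS-tag counts or LDA topic distributions.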
The improved prediction accuracy scores over the majority baselines suggest that language variations across demographic groups are encoded in the text documents. The results show that documents are most predictive of the age attribute. We also observe that word features are the most predictive of the demographic factors, while POS features are the least predictive of the country factor. This suggests a connection between language variations and demographic groups, and motivates us to further explore the language variations based on word features. We rank the word features by mutual information classification BIBREF34 and present the top 10 unigram features in Table TABREF14. These qualitative results show the word features that are most predictive of the demographic groups and suggest that such variations may impact the extracted feature representations and, in turn, the training of fair document classifiers.
Table TABREF14 shows that, when classifying hate speech tweets, the n-words and b-words are more significantly correlated with white users than with the other racial groups. However, this is the opposite of what existing work BIBREF8 reports, namely that these two types of words are more significantly correlated with black users. This highlights the value of our approach: to avoid confounding errors, we obtain author demographic information independently of the user-generated documents.
<<</Are Demographic Factors Predictable in Documents?>>>
<<</Language Variations across Demographic Groups>>>
<<<Experiments>>>
Demographic variations are rooted in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment with four classification models: logistic regression (LR), a recurrent neural network (RNN) BIBREF35, a convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present baseline results for both performance and fairness evaluations across the multilingual corpus.
<<<Data Preprocessing>>>
To anonymize user information, we hash user and tweet ids and then replace hyperlinks, usernames, and hashtags with generic symbols (URL, USER, HASHTAG). Documents are lowercased and tokenized using NLTK BIBREF38. The corpus is randomly split into training (70%), development (15%), and test (15%) sets. We train the models on the training set and find the optimal hyperparameters on the development set before final evaluations on the test set. We randomly shuffle the training data at the beginning of each training epoch.
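A minimal sketch of this preprocessing step might look as follows (the regular expressions, the hash function, and the split seed are illustrative assumptions; the actual released pipeline may differ):

```python
import re
import hashlib
import random
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()

def anonymize_and_tokenize(tweet_id, text):
    """Hash the tweet id, replace hyperlinks/usernames/hashtags with generic
    symbols, lowercase, and tokenize with NLTK."""
    hashed_id = hashlib.sha256(str(tweet_id).encode("utf-8")).hexdigest()
    text = re.sub(r"https?://\S+", "URL", text)
    text = re.sub(r"@\w+", "USER", text)
    text = re.sub(r"#\w+", "HASHTAG", text)
    return hashed_id, tokenizer.tokenize(text.lower())

def split_corpus(documents, seed=42):
    """Random 70/15/15 train/dev/test split."""
    docs = list(documents)
    random.Random(seed).shuffle(docs)
    n = len(docs)
    return (docs[: int(0.7 * n)],
            docs[int(0.7 * n): int(0.85 * n)],
            docs[int(0.85 * n):])
```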
<<</Data Preprocessing>>>
<<<Baseline Models>>>
We implement and experiment with four baseline classification models. For a fair comparison, we keep the feature size up to 15K for each classifier across all five languages. We calculate the weight of each document category by $\frac{N}{N_l}$ BIBREF39, where $N$ is the number of documents in each language and $N_l$ is the number of documents labeled with the category. In particular, for training the BERT model, we append two additional tokens, “[CLS]” and “[SEP]”, at the start and end of each document, respectively. For the neural models, we pad each document to 40 tokens or truncate longer documents to that length. We use “unknown” as a replacement for unknown tokens. We initialize the CNN and RNN classifiers with pre-trained word embeddings BIBREF40, BIBREF41, BIBREF42, BIBREF43 and train the networks for up to 10 epochs.
<<<LR.>>>
We first extract TF-IDF-weighted features of uni-, bi-, and tri-grams on the corpora, keeping the most frequent 15K features with a minimum feature frequency of 2. We then train a LogisticRegression from scikit-learn BIBREF34. We use “liblinear” as the solver and leave the other parameters at their defaults.
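A hedged sketch of this LR baseline (only the hyperparameters stated above come from the text; the pipeline wrapper and variable names are assumptions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# TF-IDF over uni-, bi-, and tri-grams, capped at the 15K most frequent features
# with a minimum document frequency of 2, followed by liblinear logistic regression.
lr_baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), max_features=15000, min_df=2),
    LogisticRegression(solver="liblinear"),
)

# lr_baseline.fit(train_texts, train_labels)
# predictions = lr_baseline.predict(test_texts)
```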
<<</LR.>>>
<<<CNN.>>>
We implement the Convolutional Neural Network (CNN) classifier described in BIBREF36, BIBREF44 using Keras BIBREF45. We first apply 100 filters with three different kernel sizes, 3, 4 and 5. After the convolution operations, we feed the concatenated features to a fully connected layer and output document representations with 100 dimensions. We apply the “softplus” activation with an l2 regularization of $.03$ and a dropout rate of $.3$ in the dense layer. The model feeds the document representation to the final prediction layer. We train the model with batch size 64, set the optimizer to Adam BIBREF46 and compute loss values with the cross-entropy function. We keep all other parameter settings as described in the paper BIBREF36.
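A sketch of this architecture in Keras is shown below. It assumes the standard Kim-style max-over-time pooling before concatenation (the pooling choice, vocabulary size, and helper names are assumptions, not details given in the text):

```python
from tensorflow.keras import layers, models, regularizers

def build_cnn(vocab_size, embed_dim=300, max_len=40, num_classes=2,
              embedding_matrix=None):
    """Text CNN: three kernel sizes, 100 filters each, softplus dense layer."""
    inputs = layers.Input(shape=(max_len,))
    emb = layers.Embedding(
        vocab_size, embed_dim,
        weights=[embedding_matrix] if embedding_matrix is not None else None)(inputs)

    pooled = []
    for k in (3, 4, 5):
        conv = layers.Conv1D(filters=100, kernel_size=k, activation="relu")(emb)
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    concat = layers.Concatenate()(pooled)

    dense = layers.Dense(100, activation="softplus",
                         kernel_regularizer=regularizers.l2(0.03))(concat)
    dense = layers.Dropout(0.3)(dense)
    outputs = layers.Dense(num_classes, activation="softmax")(dense)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn(vocab_size=15000)
# model.fit(X_train, y_train, batch_size=64, epochs=10, validation_data=(X_dev, y_dev))
```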
<<</CNN.>>>
<<<RNN.>>>
We build a recurrent neural network (RNN) classifier using a bi-directional Gated Recurrent Unit (bi-GRU) BIBREF35, BIBREF4. We set the output dimension of the GRU to 200 and apply dropout on the output with rate $.2$. We optimize the RNN with RMSprop BIBREF47 and use the same loss function and batch size as the CNN model. We leave the other parameters at their defaults in Keras BIBREF45.
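A corresponding Keras sketch of this bi-GRU classifier (200 units per direction is one reading of the "output dimension of 200"; that choice and the helper names are assumptions):

```python
from tensorflow.keras import layers, models

def build_bigru(vocab_size, embed_dim=300, max_len=40, num_classes=2,
                embedding_matrix=None):
    """Bi-directional GRU classifier with dropout, optimized with RMSprop."""
    inputs = layers.Input(shape=(max_len,))
    emb = layers.Embedding(
        vocab_size, embed_dim,
        weights=[embedding_matrix] if embedding_matrix is not None else None)(inputs)
    gru = layers.Bidirectional(layers.GRU(200))(emb)   # 200 units per direction
    gru = layers.Dropout(0.2)(gru)
    outputs = layers.Dense(num_classes, activation="softmax")(gru)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```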
<<</RNN.>>>
<<<BERT>>>
BERT is a transformer-based pre-trained language model that was trained on billions of publicly available sentences from the web BIBREF37 and can effectively capture precise text semantics and useful signals. We implement a BERT-based classification model with HuggingFace's Transformers BIBREF48. The model encodes each document into a fixed-size (768) representation and feeds it to a linear prediction layer. The model is optimized by AdamW with the warmup and learning rate set to $.1$ and $2e^{-5}$, respectively. We leave the other parameters at their defaults, conduct fine-tuning for 4 epochs and set the batch size to 32 BIBREF49. The classification model loads the “bert-base-uncased” pre-trained BERT model for English and the “bert-base-multilingual-uncased” multilingual BERT model BIBREF50 for the other languages. The multilingual BERT model follows the same method as BERT, using Wikipedia text from the top 104 languages. Due to the label imbalance shown in Table TABREF8, we balance the training instances by randomly oversampling the minority class during the training process.
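A hedged fine-tuning sketch with HuggingFace Transformers is given below. Exact call signatures vary across library versions, the warmup value of .1 is read here as a warmup proportion, and the `steps_per_epoch` placeholder is an assumption:

```python
import torch
from transformers import (BertTokenizer, BertForSequenceClassification,
                          get_linear_schedule_with_warmup)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")   # or the multilingual model
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def encode(texts, max_len=40):
    # The tokenizer inserts [CLS]/[SEP] and pads/truncates each tweet.
    return tokenizer(texts, padding="max_length", truncation=True,
                     max_length=max_len, return_tensors="pt")

steps_per_epoch = 1000                      # placeholder: len(train_loader) in practice
num_training_steps = 4 * steps_per_epoch    # 4 fine-tuning epochs
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # ".1" read as warmup proportion
    num_training_steps=num_training_steps)

# One step (batch size 32; minority class oversampled beforehand):
# out = model(**encode(batch_texts), labels=batch_labels)
# out.loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```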
<<</BERT>>>
<<</Baseline Models>>>
<<<Evaluation Metrics>>>
<<<Performance Evaluation.>>>
To measure overall performance, we evaluate models by four metrics: accuracy (Acc), weighted F1 score (F1-w), macro F1 score (F1-m) and area under the ROC curve (AUC). The F1 score coherently combines both precision and recall by $2*\frac{precision*recall}{precision+recall}$. We report F1-m considering that the datasets are imbalanced.
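With scikit-learn, these four metrics can be computed as follows (for the binary case, `y_score` is the predicted probability of the positive class; the function name is an assumption):

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def overall_metrics(y_true, y_pred, y_score):
    """Accuracy, weighted F1, macro F1 and AUC for one classifier."""
    return {
        "Acc": accuracy_score(y_true, y_pred),
        "F1-w": f1_score(y_true, y_pred, average="weighted"),
        "F1-m": f1_score(y_true, y_pred, average="macro"),
        "AUC": roc_auc_score(y_true, y_score),
    }
```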
<<</Performance Evaluation.>>>
<<<Fairness Evaluation.>>>
To evaluate group fairness, we measure the equality differences (ED) of true positive/negative and false positive/negative rates for each demographic factor. ED is a standard metric to evaluate fairness and bias of document classifiers BIBREF0, BIBREF4, BIBREF5.
This metric sums the differences between the rates within specific user groups and the overall rates. Taking the false positive rate (FPR) as an example, we calculate the equality difference by:

$ED_{FPR} = \sum _{d \in D} \left| FPR_{d} - FPR \right|$

where $D$ is a demographic factor (e.g., race) and $d$ is a demographic group (e.g., white or nonwhite).
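As a minimal sketch (assuming binary labels with 1 denoting hate speech and a per-document group assignment; helper names are assumptions), the equality difference can be computed as:

```python
import numpy as np

def rate(y_true, y_pred, kind="fpr"):
    """False positive rate or false negative rate of binary predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    if kind == "fpr":
        denom = (y_true == 0).sum()
        return ((y_pred == 1) & (y_true == 0)).sum() / max(denom, 1)
    denom = (y_true == 1).sum()
    return ((y_pred == 0) & (y_true == 1)).sum() / max(denom, 1)

def equality_difference(y_true, y_pred, groups, kind="fpr"):
    """Sum over demographic groups of |group rate - overall rate|."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    groups = np.asarray(groups)
    overall = rate(y_true, y_pred, kind)
    return sum(abs(rate(y_true[groups == g], y_pred[groups == g], kind) - overall)
               for g in np.unique(groups))
```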
<<</Fairness Evaluation.>>>
<<</Evaluation Metrics>>>
<<</Experiments>>>
<<<Results>>>
We present our evaluation results for performance and fairness in Table TABREF20 and Table TABREF29, respectively. Country and race have very skewed distributions in the Italian and Polish corpora; therefore, we omit the fairness evaluation of these two factors there.
<<<Overall performance evaluation.>>>
Table TABREF20 shows the performance of the baseline classifiers for hate speech classification on the proposed corpus. Results are reported for each of the five languages covered in our corpus. Among the four baseline classifiers, LR, CNN and RNN consistently perform well on all languages. Moreover, the neural models (CNN and RNN) substantially outperform LR on four out of five languages (all except Spanish). However, the results obtained by BERT are relatively lower than the other baselines, with a more pronounced gap on the English dataset. One possible explanation is that BERT was pre-trained on Wikipedia documents, which differ significantly from the Twitter corpus in document length, word usage and grammar. For example, each tweet is a short document of roughly 20 tokens, whereas BERT is trained on long documents of up to 512 tokens. Existing research suggests that fine-tuning on a multilingual corpus can further improve the performance of BERT models BIBREF49.
<<</Overall performance evaluation.>>>
<<<Group fairness evaluation.>>>
We report the group fairness results in Table TABREF29. Generally, the RNN classifier achieves better and more stable performance across the major fairness evaluation tasks. Comparing the baseline classifiers, we find that LR usually shows stronger biases than the neural classification models on the majority of tasks. While the BERT classifier achieves comparatively lower accuracy and F1 scores, it shows less bias on most of the datasets. However, biases increase significantly on the Portuguese dataset, where the BERT classifier achieves better performance. We examine this relationship by fitting a linear model between two differences: the performance differences between RNN and the other classifiers, and the SUM-ED differences between RNN and the other classifiers. We find that classification performance does not have a significant ($p$-value $> .05$) correlation with fairness and bias. The significant biases of classifiers vary across tasks and languages: the classifiers trained on Polish and Italian are biased the most by age and gender, the classifiers trained on Spanish and Portuguese are biased the most by country, and the classifiers trained on English tweets are the least biased across all attributes. Classifiers usually have very high bias scores on both gender and age in the Italian and Polish data, where both attributes have very skewed distributions. Overall, our baselines provide a promising starting point for evaluating future methods of reducing demographic biases in document classification under the multilingual setting.
<<</Group fairness evaluation.>>>
<<</Results>>>
<<<Conclusion>>>
In this paper, we propose a new multilingual dataset covering four author demographic annotations (age, gender, race and country) for the hate speech detection task. We show the experimental results of several popular classification models in both overall and fairness performance evaluations. Our empirical exploration indicates that language variations across demographic groups can lead to biased classifiers. This dataset can be used for measuring fairness of document classifiers along author-level attributes and exploring bias factors across multilingual settings and multiple user factors. The proposed framework for inferring the author demographic attributes can be used to generate more large-scale datasets or even applied to other social media sites (e.g., Amazon and Yelp). While we encode the demographic attributes into categories in this work, we will provide inferred probabilities of the demographic attributes from Face++ to allow for broader research exploration. Our code, anonymized data and data statement BIBREF51 will be publicly available at https://github.com/xiaoleihuang/Multilingual_Fairness_LREC.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nData\nUser Attribute Inference\nAge, Race, Gender.\nCountry.\nCorpus Summary\nDemographic Inference Accuracy\nPrivacy Considerations\nLanguage Variations across Demographic Groups\nAre Demographic Factors Predictable in Documents?\nExperiments\nData Preprocessing\nBaseline Models\nLR.\nCNN.\nRNN.\nBERT\nEvaluation Metrics\nPerformance Evaluation.\nFairness Evaluation.\nResults\nOverall performance evaluation.\nGroup fairness evaluation.\nConclusion"
],
"type": "outline"
}
|
2001.02214
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation
<<<Abstract>>>
To combat fake news, researchers mostly focused on detecting fake news and journalists built and maintained fact-checking sites (e.g., this http URL and this http URL). However, fake news dissemination has been greatly promoted via social media sites, and these fact-checking sites have not been fully utilized. To overcome these problems and complement existing methods against fake news, in this paper we propose a deep-learning based fact-checking URL recommender system to mitigate impact of fake news in social media sites such as Twitter and Facebook. In particular, our proposed framework consists of a multi-relational attentive module and a heterogeneous graph attention network to learn complex/semantic relationship between user-URL pairs, user-user pairs, and URL-URL pairs. Extensive experiments on a real-world dataset show that our proposed framework outperforms eight state-of-the-art recommendation models, achieving at least 3~5.3% improvement.
<<</Abstract>>>
<<<Introduction>>>
While social media sites provide users with a revolutionized communication medium by bringing communication efficiency to a new level, they can easily be misused to widely spread misinformation and fake news. Fake news and misinformation have been a long-standing issue for various purposes such as political propaganda BIBREF0 and financial propaganda BIBREF1. To fight against fake news, traditional publishers employed human editors to manually and carefully check the content of news articles in order to maintain their reputation. However, social media provided a new way to spread news, which led to broader information sources and an expanded audience (i.e., anyone can act as a media outlet and create news). In particular, users share news articles with their own opinions or read articles shared by their friends, regardless of the news source, mostly with blind trust BIBREF2 or through the lens of their own ideologies BIBREF3, BIBREF4. Although social media posts usually have a very short life cycle, the unprecedented amount of fake news may lead to a catastrophic impact on both individuals and society. Besides misleading users with false information BIBREF4, widely propagated fake news could even cause a trust crisis of the entire news ecosystem BIBREF5, further affecting both cyberspace and physical space.
In the literature, researchers have focused on four topics regarding fake news: characterization (i.e., types of fake news), motivation, circulation, and countermeasures BIBREF6, BIBREF7. A large body of work has been done on fake news identification BIBREF5, BIBREF8, BIBREF9, BIBREF10 by exploiting multiple content-related and social-related components. However, we notice that fake news is still widely spread even after early detection BIBREF11. Therefore, we propose to study a complementary approach to mitigate the spread and impact of fake news. Recently, the community and journalists started building and maintaining fact-checking websites (e.g., Snopes.com). Social media users called fact-checkers also started using these fact-checking pages as factual evidence to debunk fake news by replying to fake news posters. Figure FIGREF1 demonstrates a real-world example of a fact-checker's fact-checking behavior on Twitter: debunking another user's false claim with a Snopes page URL as evidence to support the factual correction.
In BIBREF12, researchers found that these fact-checkers actively debunked fake news mostly within one day, and their replies were exposed to hundreds of millions of users. To help these fact-checkers engage with fake news posters more quickly and consume the increasing volume of fact-checking articles more intelligently, in this paper we propose a novel personalized fact-checking URL recommender system. According to BIBREF13, a co-occurrence matrix within a given context provides information about the semantic similarity between two objects. Therefore, in our proposed deep-learning based recommender system, we employ two extended matrices, a user-user co-occurrence matrix and a URL-URL co-occurrence matrix, to facilitate the recommendation. In addition, users tend to form relationships with like-minded people BIBREF14. Therefore, we incorporate each user's social context to capture semantic relations and enhance recommendation performance.
Our main contributions are summarized as follows:
We propose a new framework for personalized fact-checking URL recommendation, which relies on multi-relational context neighbors.
We propose two attention mechanisms which allow for learning deep semantic representation of both a target user and a target URL at different granularity.
Experimental results show that our proposed model outperforms eight state-of-the-art baselines, covering various types of recommendation approaches. An ablation study confirms the effectiveness of each component in our proposed framework.
<<</Introduction>>>
<<<Related Works>>>
In this section, we briefly review related works and position our work within the following areas: (1) fake news and misinformation; (2) advancements in recommender systems; and (3) graph convolutional networks.
<<<Fake News and Misinformation>>>
Fake news has attracted considerable attention since it is related to our daily life and has become a serious problem in multiple areas such as politics BIBREF0 and finance BIBREF1. Social media sites have become one of the most popular mediums for propagating fake news and misinformation. The dominant line of work on this topic is fake news detection BIBREF15, which is mostly formulated as a binary classification problem. Researchers began to incorporate social context and other features to identify fake news at an early stage and prevent it from diffusing through the social network BIBREF5, BIBREF7. Other researchers focus on investigating the propagation patterns of fake news in social networks BIBREF16, BIBREF17. BIBREF18 also studied fake news intervention. Unlike most previous works, we follow the direction of BIBREF12 and propose to build a personalized recommender system for promoting fact-checking article circulation to debunk fake news.
<<</Fake News and Misinformation>>>
<<<Advancements in Recommender System>>>
Traditionally, recommendation algorithms can be divided into two categories: collaborative filtering BIBREF19 and content-based filtering. However, in the past few years, recommendation has become a more integrated task due to the success of deep neural networks. Neural networks (NNs) prove to be effective at capturing underlying nonlinear relations BIBREF20. Another advantage is that NNs enhance a model's capability of extracting knowledge from multimodal data BIBREF21, BIBREF22, BIBREF23, which serves as auxiliary information and provides a way to address the data sparsity problem. More recently, researchers introduced attention mechanisms into recommender systems, which have achieved great success in various fields BIBREF24, BIBREF25. Researchers developed multiple variants of the attention mechanism to improve both recommendation precision and model interpretability BIBREF26, BIBREF27, BIBREF28, BIBREF29. In this paper, we also propose two novel designs of attention mechanism. Following BIBREF30, BIBREF31, we further explore the multi-relational context of a given user-URL pair, aiming to discriminate the most important elements of URL-dependent user preference.
<<</Advancements in Recommender System>>>
<<<Graph Convolutional Networks>>>
With the surge of graph-based neural networks, GCN-based approaches have shown strong effectiveness on various tasks BIBREF32, BIBREF33, BIBREF34, including recommender systems. The core idea is to iteratively aggregate attributed node vectors around each node, with messages propagating by stacking multiple layers. However, the original design of GCN is not suitable for our scenario for the following reasons. First, existing GCN works BIBREF33, BIBREF34 do not distinguish different types of nodes, whereas in our case it does not make sense to aggregate user and URL nodes together. Second, the aggregation function proposed in most GCN works treats all adjacent nodes with the same importance, which is inappropriate in real-world applications and tends to neglect necessary information. BIBREF35 breaks this schema by using a multi-head attention mechanism to replace the convolution-like operator, yet it requires significant extra computation and memory.
Compared to the previous works, in this paper, we focus on a novel application and investigate both co-occurrence context and social context related influences for fact-checking URL recommendation. We also incorporate sets of auxiliary attributes, which enable more comprehensive learning of the compatibility between given pairs of user and URL. Moreover, we take advantage of advancements in graph neural networks and attention mechanisms, and solve the aforementioned research problems.
<<</Graph Convolutional Networks>>>
<<</Related Works>>>
<<<Problem Formulation>>>
We formally introduce definitions before describing our proposed framework. We define fact-checking behavior as a user (i.e., fact-checker) embedding a fact-checking URL in his or her reply in order to debunk fake news. We regard each fact-checking behavior as an implicit interaction between target user $i$ and target URL $j$.
<<<Definition 1 (Fact-checking URL Recommendation Task)>>>
Let $\mathcal {U} = \lbrace u_1,u_2,...,u_n\rbrace $ denote a set of fact-checkers on social media, and use $\mathcal {C} = \lbrace c_1,c_2,...,c_m\rbrace $ to index fact-checking URLs. We construct a user-URL interaction matrix $Y = \lbrace y_{ij} | u\in \mathcal {U}, v \in \mathcal {C} \rbrace $ according to users' fact-checking behavior, where
each value of 1 for $y_{ij}$ indicates the existence of implicit interaction between target user $i$ and target URL $j$. Each user $u_i$ and each URL $c_j$ associate with a set of attributes. The goal of the recommendation task is to recommend top-N URLs from the URL set $\mathcal {C}$ to each user.
We also construct the entire dataset as a heterogeneous graph, which is a special kind of information network that consists of either multiple types of objects or different types of links, or both.
<<</Definition 1 (Fact-checking URL Recommendation Task)>>>
<<<Definition 2 (Heterogeneous Network) @!START@BIBREF36@!END@>>>
Formally, consider a heterogeneous graph $\mathcal {G}=(\mathcal {V},\mathcal {E})$, where $\mathcal {V} (|V|= m + n)$ and $E$ denote the node set and edge set, respectively. The heterogeneity is represented by the node type mapping function $\phi : \mathcal {V} \rightarrow \mathcal {A}$ and the edge type projection function $\psi : \mathcal {E} \rightarrow \mathcal {R}$, where $\mathcal {A}$ and $\mathcal {R}$ denote the sets of predefined node types and edge types, and $|\mathcal {A}| + |\mathcal {R}| > 2$. Note that we do not consider self-loops in our graph construction.
<<</Definition 2 (Heterogeneous Network) @!START@BIBREF36@!END@>>>
<<<Definition 3 (Multi-relational Context)>>>
Given target user $i$, we define the fact-checkers he follows and his co-occurring fact-checkers as his social context user neighbors and co-occurring context user neighbors, respectively. Similarly, we name the other URLs posted by target user $i$ and the co-occurring URLs of target URL $j$ as historical context URL neighbors and co-occurring context URL neighbors, respectively. In general, we refer to all these context neighbors as the multi-relational context of a given target user-URL pair.
<<</Definition 3 (Multi-relational Context)>>>
<<<Example>>>
Figure FIGREF12 illustrates the multi-relational context. In Figure FIGREF12, $c_1$, $c_2$, $c_3$ represent fact-checking URLs and $u_1$, $u_2$, $u_3$ are users involved in sharing these URLs. For example, $(u_1 \rightarrow u_2)$ indicates the social relationship between $u_1$ and $u_2$; intuitively, we care more about the influence of $u_2$ on $u_1$. $(u_1 \rightarrow c_1 \leftarrow u_2)$ means $u_1$ and $u_2$ are co-occurring user neighbors. Similarly, we name $c_1$ and $c_2$ as co-occurring URL neighbors of $u_3$, and $c_2$ is a historical context URL neighbor given the target $u_3$-$c_3$ pair.
<<</Example>>>
<<</Problem Formulation>>>
<<<Proposed Framework>>>
We propose a novel framework, called Attributed Multi-Relational Attention Network (AMRAN), to understand the influence of the multi-relational context on the target user's fact-checking behavior. In this section, we elaborate on the proposed AMRAN using the notations described in Table TABREF15.
At the high level, AMRAN is composed of two modules as shown in Figure FIGREF16: (i) a convolutional spatial attention network (CSAN) and (ii) a heterogeneous graph attention network (HGAN). CSAN jointly models the influence of multi-relational context on target user-URL pair (Section 4.1). It enriches the neighborhood diversity, and expands the scope of information reception. HGAN leverages both global node connectivity and local node attributes, in order to incorporate the effect of information propagation and encode user's dynamic preference in depth (Section 4.2). At the final step, the model produces recommendations by combining wide context-aware target user embedding and URL embedding, multi-relational context user embedding and context URL embedding, and deep context-aware user embedding and URL embedding (Section 4.3).
<<<Convolutional Spatial Attention Network (CSAN)>>>
The left bounding box in Figure FIGREF16 illustrates the structure of the CSAN module. To provide a broad scope of knowledge for generating the wide context-aware target user embedding and URL embedding, we adopt a multi-branch setting in CSAN. The two parallel branches model the multi-relational context of the target user and the target URL, respectively. Each branch contains two identical streams. We select $b_h$ context neighbors for each stream (e.g., historical context URL neighbors and co-occurring context URL neighbors of the target URL; social context user neighbors and co-occurring context user neighbors of the target user). These streams are employed to learn the most discriminative features from the multi-relational neighbors of the target user and target URL. We then employ a gated fusion layer to capture the optimal global-level representation of the target user-URL pair.
Note that we enable the embedding sharing within each branch as users/URLs share the same feature set.
<<<Raw Attribute Input>>>
Users and URLs are associated with different feature sets. Therefore, CSAN starts by embedding the input attribute set of each context neighbor. We use $s$ and $t$ to denote the number of features related to a user and a URL, respectively. Note that the dimension of the initial embedding of each attribute could differ, since attributes may carry different information volumes. We use one-hot encoding for categorical feature inputs and apply a direct lookup on these features. However, the same solution performs poorly when it comes to continuous attributes such as the post frequency of a URL. Empirically, we found that a practical solution is to bucketize these features into small intervals. Specifically, we map these continuous attributes in the ranges $[0,1), [1,2),..., [2^k, 2^{k+1})$ into $0,1,..., k$ in this work.
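A tiny sketch of this bucketization (the exact bucket-index convention for the exponentially growing intervals is our reading of the text, not a stated detail):

```python
import math

def bucketize(value):
    """Map a non-negative continuous attribute into exponentially growing
    intervals [0,1), [1,2), [2,4), ..., [2^k, 2^{k+1}) and return the bucket index."""
    if value < 1:
        return 0
    return int(math.floor(math.log2(value))) + 1

# bucketize(0.4) -> 0, bucketize(1.5) -> 1, bucketize(3) -> 2, bucketize(10) -> 4
```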
<<</Raw Attribute Input>>>
<<<Attribute Embedding Layer>>>
We then use a set of attribute-specific transformation matrices $W_1, W_2, ..., W_{s+t}$ to project all the attributes into the same $w$-dimensional latent space. The attributes of each neighbor are then stacked as a matrix of shape $s \times w$ for users and $t \times w$ for URLs.
However, we treat the target user-URL pair differently. After projecting attributes by the same attribute-specific transformation matrix as their relational neighbors, instead of stacking them as a matrix, we concatenate the attribute embedding vectors together and feed it through a linear projection to generate $u^{\prime }_i \in \mathbb {R}^d$ and $c^{\prime }_j \in \mathbb {R}^d$ for future reference.
<<</Attribute Embedding Layer>>>
<<<Spatial Attention Block>>>
To prevent unknown misalignment and enable better comparison among the neighborhood features, we propose a schema for jointly learning layer-wise and channel-wise attention. In particular, for each stream, we pile the neighbors' representation matrices together to obtain a 3-dimensional tensor $M$. Intuitively, this design helps improve the alignment quality of the neighbors' features. Then, inspired by BIBREF37, BIBREF38, we employ a spatial attention block in each stream for jointly learning channel-level and layer-level soft attention. See Figure FIGREF21 for a high-level illustration of our spatial attention block. All the streams adopt identical spatial attention blocks, and each block attends over its input attribute representations independently.
In the figure, we use the historical context URL stream for illustration. The output of the spatial attention block is an attention weight map $S \in \mathbb {R}^{t \times w \times b}$, which has the same shape as the input tensor $M$. Intuitively, the layer-wise attention and the channel-wise attention are dedicated to selecting the most discriminative features and the most important neighbors, respectively. Thus, they are highly complementary to each other in functionality, and we adopt a factorized form for optimization and computational efficiency:
where $L \in \mathbb {R}^{t \times w \times 1}$ and $C \in \mathbb {R}^{1 \times 1 \times b}$ denote the layer-wise feature map and channel-wise feature map, respectively. $S$ is the result of tensor multiplication.
<<<Layer-wise Attention>>>
Conceptually, the layer-wise attention learns globally important elements in the features. We apply a cross-channel average pooling operation to the input tensor, followed by two convolution layers with $3 \times 3$ and $1 \times 1$ filters, respectively. Specifically, the cross-channel average pooling operation is defined as:
where $b$ is the number of selected neighbors.
<<</Layer-wise Attention>>>
<<<Channel-wise Attention>>>
The design of channel-wise attention is very similar to layer-wise attention, which aims to acquire a global view of discriminative users. Formally, the global average pooling is defined as:
where $t$ and $w$ are shared height and width of all channels. Similarly, we employ two convolution layers after the pooling operation.
Note that each convolution layer is followed by a batch normalization operation. Furthermore, as in other work on modern CNN structures BIBREF39, we append a ReLU activation function to ensure $L>0, C>0$.
We further introduce one more convolution layer with a $1 \times 1 \times b$ filter to enhance the fusion of the layer-wise attention and the channel-wise attention. The output tensor is then fed through a sigmoid function for normalization, generating the final attention weight tensor of the spatial attention block. Formally, the output of the spatial attention module is the element-wise product of the initial feature tensor $M$ and the generated attention weights $S$:
Intuitively, the attended feature map captures fine-grained important elements via well-aligned and complementary attentions.
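A PyTorch sketch of one spatial attention block is given below. The intermediate channel widths of the convolutions are not specified in the text, so they are assumed to be 1 (layer-wise branch) and $b$ (channel-wise branch); the input layout (batch, neighbors, attributes, dimensions) is likewise an assumption:

```python
import torch
import torch.nn as nn

class SpatialAttentionBlock(nn.Module):
    """Joint layer-wise and channel-wise attention over a stack of neighbor
    attribute matrices. Input shape: (batch, b neighbors, t attributes, w dims)."""

    def __init__(self, num_neighbors):
        super().__init__()
        # Layer-wise branch: 3x3 then 1x1 convolution on the channel-averaged map.
        self.layer_att = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.BatchNorm2d(1), nn.ReLU(),
            nn.Conv2d(1, 1, kernel_size=1), nn.BatchNorm2d(1), nn.ReLU())
        # Channel-wise branch: two 1x1 convolutions on the globally pooled map.
        self.channel_att = nn.Sequential(
            nn.Conv2d(num_neighbors, num_neighbors, kernel_size=1),
            nn.BatchNorm2d(num_neighbors), nn.ReLU(),
            nn.Conv2d(num_neighbors, num_neighbors, kernel_size=1),
            nn.BatchNorm2d(num_neighbors), nn.ReLU())
        # Extra 1x1 convolution fusing the two attention maps.
        self.fuse = nn.Conv2d(num_neighbors, num_neighbors, kernel_size=1)

    def forward(self, M):
        L = self.layer_att(M.mean(dim=1, keepdim=True))          # (batch, 1, t, w)
        C = self.channel_att(M.mean(dim=(2, 3), keepdim=True))   # (batch, b, 1, 1)
        S = torch.sigmoid(self.fuse(L * C))                      # (batch, b, t, w)
        return M * S                                             # attended feature map
```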
<<</Channel-wise Attention>>>
<<</Spatial Attention Block>>>
<<<Gated Branch Fusion Layer>>>
We apply another CNN layer with a $3 \times 3$ filter to the attended representation of each stream for feature extraction and dimension reduction:
which produces the multi-relational context representation vectors: $o_{i_h}, o_{i_c}, o_{u_f}$ and $o_{u_c}$ for each stream, respectively.
We employ a gated mechanism to assign different weights to the relation-specific neighborhood representations as:
where scalars $g_u$ and $g_v$ are learned automatically to control the importance of the two streams within each branch.
<<</Gated Branch Fusion Layer>>>
<<</Convolutional Spatial Attention Network (CSAN)>>>
<<<Heterogeneous Graph Attention Network (HGAN)>>>
Following the recent success of Graph Convolutional Networks (GCNs) BIBREF32, BIBREF33, BIBREF40, BIBREF34, BIBREF35, we propose a heterogeneous graph attention network (HGAN) tailored for the recommendation task. In particular, our proposed module adopts a parallel attention structure for the user neighbors and the URL neighbors of the central node, respectively. Consider a heterogeneous graph $\mathcal {G}=(\mathcal {V},\mathcal {E})$: the nodes represent objects in this network, which can be either users or URLs, and the edges denote the relations between connected nodes. Node attributes are passed along the edges during propagation. We try to leverage both the local node attributes and the global network structure. Our novelty lies in two aspects: (i) we differentiate the contributions of URL nodes and user nodes, respectively; and (ii) we consider both the similarities of nodes and the influence of different relation types.
While the CSAN obtains information from multi-relational immediate neighbors, which expand the scope of knowledge for target user and target URL representations, HGAN aims at learning deeper semantic representations of target user and target URL.
<<<Heterogeneous Graph Network>>>
We try to capture the different semantic relations behind the various types of nodes and edges. At each layer, if the central node is a user node, its neighborhood contains its co-occurring users and posted URLs. If the central node is a URL node, its neighborhood consists of the users who posted it and its co-occurring URLs.
We adopt a similar embedding approach to the one in CSAN for the initial representation of each node, but we concatenate all the features into a long vector $x_i$ for each node instead of stacking them as a matrix. Considering that different types of nodes are associated with varied feature sets, we use a set of node type-specific transformation matrices to project the different types of node representations into the same feature space before aggregation as follows:
Let $H^{(0)} \in \mathbb {R}^{(m+n) \times d}$ be the embedding matrix of all the attributed nodes, where $m+n$ is the total number of nodes and d is the dimension of latent embedding space; each row $h_i^{(0)}$ stands for the initial embedding vector of node $i$.
We define edges based on users' references to URLs (user-URL edges), user co-occurrence relations (user-user edges), and URL co-occurrence (URL-URL edges). We then introduce an adjacency matrix $A$ of $\mathcal {G}$ based on the importance of each edge. In particular, to compute the weights of user-user edges and URL-URL edges, we adopt a matrix called Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41, a popular measure for word associations, to utilize the co-occurrence context information. In the word embedding scenario, each cell within the matrix measures the relation of the corresponding word-context pair. The factorization of such a matrix has been proven to be equivalent to the skip-gram model with negative sampling (SGNS). The Point-wise Mutual Information (PMI) between node $i$ and node $j$ is computed as $PMI(i,j) = log \frac{P(i,j)}{P(i)P(j)}$ where $P(i,j) = \frac{\# (i,j)}{|D|}$ and $P(i) = \frac{\# (i)}{|D|}$. $|D|$ denotes the total number of observed word-context pairs within a predefined sliding window. $P(i,j)$ is the joint probability that word $i$ and word $j$ appear together within the window size. Furthermore, we introduce the SPPMI matrix as an extension based on the PMI value:
where $k$ is a hyperparameter representing the number of negative samples. Conceptually, a positive PMI value implies a semantically correlated word-context pair. Therefore, SPPMI, which only takes the positive part of the PMI shifted by a global constant, reflects a closer semantic relation between word-context pairs. Inspired by this idea, we use $|D|$ to denote the number of user (URL) co-occurrences and generate a user co-occurrence matrix of shape $n \times n$ and a URL co-occurrence matrix of shape $m \times m$. Note that we do not discriminate between the target node and context node.
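A small NumPy sketch of the SPPMI computation from a symmetric co-occurrence count matrix (the function and variable names are assumptions):

```python
import numpy as np

def sppmi(cooccurrence, k=1):
    """Shifted positive PMI: max(PMI(i,j) - log(k), 0), with k the shift constant."""
    total = cooccurrence.sum()                       # |D|
    row = cooccurrence.sum(axis=1, keepdims=True)    # #(i)
    col = cooccurrence.sum(axis=0, keepdims=True)    # #(j)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((cooccurrence * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0                     # zero out unobserved pairs
    return np.maximum(pmi - np.log(k), 0.0)

# counts = np.array([[0, 3, 1], [3, 0, 2], [1, 2, 0]], dtype=float)
# weights = sppmi(counts, k=2)
```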
Similarly, we borrow the TF-IDF concept and redefine it for the recommendation task with implicit feedback BIBREF42 as:
where $\# (i,j)$ represents the number of times URL $j$ is posted by user $i$. $TF_{ij}$ further normalizes it by the maximum number of times any URL is posted by user $i$. $IDF_i$ is associated with the user's previous behavior, where $m$ denotes the total number of URLs and $m_i$ is the number of URLs posted by user $i$.
Formally, the weight of the edge between node $i$ and node $j$ is defined as:
<<</Heterogeneous Graph Network>>>
<<<Heterogeneous Attention Layer (HGAL)>>>
Given the nodes' initial representations defined as above, we then pass messages to aggregate the neighborhood nodes' information and combine it with the target user's interests. A popular propagation strategy in existing GCN works is the normalized Laplacian matrix BIBREF32. Even though it proves to be effective, it is not trainable and it assigns the same weight to every adjacent node. Following previous work BIBREF35, we propose to incorporate a hierarchical attention mechanism to learn the weight of each adjacent node adaptively.
Since the number of neighbors per node varies greatly, sub-sampling becomes an essential procedure in our task to avoid an explosion of the computation cost after stacking multiple hops. We adopt Weighted Random Selection (WRS) BIBREF43 to select a fixed number of nodes for both node types in each graph attention layer. Figure FIGREF40 shows a graphical illustration of one HGAL.
Assume that the central node is a user node. We separately calculate the attention weights between the user node and its user node neighbors, and between the user node and its URL node neighbors. The similarities between the target user's node representation $h^{(l)}_u$ and all of its selected neighbors are defined as:
where $h^{(l)}_i$ is the representation of user $i$ at layer $l$, and $\mathcal {N}^{\phi _t}_i$ denotes the node type-based neighbor. We adopt $f(h^{(l)}_i,h^{(l)}_j)=cosine(h^{(l)}_i,h^{(l)}_j)$ as similarity function. Intuitively, $\alpha ^{\phi }_{ij}$ measures the importance of neighbor $j$ towards central node $i$. Meanwhile, we obtain the edge weight $A_{ij}$ as well.
After this, we aggregate the type-based neighborhood node representation and generate the embedding of neighborhood as the average of different types of nodes:
To model the information propagation and capture higher-order relations, we stack the HGAL multiple times. In addition, we introduce the residual connection BIBREF44 to help train a HGAN with many layers.
where $\sigma $ denotes the sigmoid function. $W_g^{(l)}$ and $b_g^{(l-1)}$ are the shared weight matrix and bias term at layer $l$, respectively. The node representation at $l$-th layer provides knowledge of $l$ degrees away.
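The display equations for the attention and update steps are not reproduced in the text above, so the PyTorch sketch below is only one plausible reading of a single HGAL: cosine-similarity attention (modulated by the edge weights) over each node type, averaging across types, and a residual sigmoid-gated transformation. All details beyond what the prose states are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeterogeneousGraphAttentionLayer(nn.Module):
    """One plausible HGAL: type-wise attention, type averaging, residual update."""

    def __init__(self, dim):
        super().__init__()
        self.W_g = nn.Linear(dim, dim)   # shared weight matrix and bias of the layer

    def aggregate(self, h_center, h_neighbors, edge_weights):
        # Attention from cosine similarity, modulated by the SPPMI/TF-IDF edge weights.
        sim = F.cosine_similarity(h_center.unsqueeze(0), h_neighbors, dim=-1)
        alpha = torch.softmax(sim * edge_weights, dim=0)
        return (alpha.unsqueeze(-1) * h_neighbors).sum(dim=0)

    def forward(self, h_center, user_neighbors, url_neighbors, w_user, w_url):
        n_user = self.aggregate(h_center, user_neighbors, w_user)
        n_url = self.aggregate(h_center, url_neighbors, w_url)
        neighborhood = (n_user + n_url) / 2            # average over node types
        # Residual connection around a sigmoid-gated transformation.
        return h_center + torch.sigmoid(self.W_g(neighborhood))
```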
<<</Heterogeneous Attention Layer (HGAL)>>>
<<</Heterogeneous Graph Attention Network (HGAN)>>>
<<<Interaction Layer>>>
The interaction layer is tailored for recommendation tasks. Recall that we obtained wide context-based user embedding $u^{\prime }_i$ and URL embedding $c^{\prime }_j$, context representations $p_i$, $p_j$ and deep context-based user embedding $h^{(l)}_i$ and URL embedding $h^{(l)}_j$ in the previous sections. Then we formulate the final URL-dependent user representation by using a fully connected layer as:
where $W_o$ and $b_o$ are a linear transformation weight matrix and a bias term, respectively. $\oplus $ denotes vector concatenation. Note that the fully-connected layer can be replaced by other techniques (e.g., a CNN). Finally, we feed the result through a softmax function to calculate the probability that the user is interested in the given URL.
<<</Interaction Layer>>>
<<<Training>>>
We adopt the cross-entropy loss function during the training process.
We follow a uniform sampling strategy to obtain negative samples $(i,j) \in Y^{-}$ from unobserved interactions. Since the entire architecture is differentiable, we use back propagation to achieve end-to-end training.
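A sketch of the uniform negative-sampling step and a single training update is shown below. The `model(user_ids, url_ids)` signature, the use of binary cross-entropy over the predicted interaction probability, and the helper names are assumptions, not the released training code:

```python
import random

def sample_training_pairs(observed, num_urls, num_neg=4):
    """Uniformly sample `num_neg` unobserved (user, url) pairs per positive pair.
    `observed` is a set of (user_id, url_id) tuples with implicit feedback."""
    pairs, labels = [], []
    for (u, c) in observed:
        pairs.append((u, c)); labels.append(1.0)
        for _ in range(num_neg):
            j = random.randrange(num_urls)
            while (u, j) in observed:
                j = random.randrange(num_urls)
            pairs.append((u, j)); labels.append(0.0)
    return pairs, labels

# One end-to-end training step (PyTorch), assuming `model` returns the
# predicted interaction probability for each (user, url) pair:
# criterion = torch.nn.BCELoss()
# probs = model(user_batch, url_batch)
# loss = criterion(probs, label_batch)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```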
<<</Training>>>
<<</Proposed Framework>>>
<<<Evaluation>>>
In this section, we describe a dataset, baselines, experimental setting, and experimental results. In the experiments, we seek to answer the following research questions:
RQ1: What is the performance of our model and baselines?
RQ2: How beneficial is each submodule of our model?
RQ3: How effective is our attention mechanisms?
RQ4: What is sensitivity of our model with regard to hyperparameters?
<<<Dataset>>>
We evaluate our proposed model on a Twitter dataset obtained from the authors of BIBREF12. The interaction behavior collected in the dataset is consistent with our definition in SECREF3. As they did in their study, we only keep users who have at least three interactions (i.e., posting at least three fact-checking messages containing fact-checking URLs). We conducted an additional preprocessing step by removing users whose posts are non-English or whose tweets were inaccessible, because some of our baselines require a fact-checker's tweets. Our final dataset consists of 11,576 users (i.e., fact-checkers), 4,732 fact-checking URLs and 63,429 interactions. The dataset also contains each user's social network information. Note that each user's social relationships are restricted to the available users in the dataset. We further take the available feature values of both users and URLs into consideration. For instance, the category of a referred fact-checking article and the name of the corresponding fact-checking website reveal linguistic characteristics such as the writing style and topical interest of each URL, while the numbers of followers and followees of each user indicate the credibility and influence of the fact-checker. Statistics of the final dataset are presented in Table TABREF65.
<<</Dataset>>>
<<<Baselines>>>
To measure relative effectiveness of our model, we compare our model against eight state-of-the-art baselines including the traditional collaborative filtering method, neural network-based models, and context-aware approaches.
MF BIBREF45 is a standard collaborative filtering technique. It factorizes an interaction matrix $X \in \mathbb {R}^{M \times N}$ into two matrices $U \in \mathbb {R}^{M \times d}$ and $X \in \mathbb {R}^{d \times N}$. $U$ contains each user's latent representation, and $X$ contains each URL's latent representation.
GAU BIBREF12 is a framework specifically designed for fact-checking URL recommendation utilizing rich side information such as a user' social network, tweets, and referred fact-checking pages. It is the most relevant and domain-specific baseline.
NeuMF BIBREF20 is a neural network based item recommendation algorithm. We adopted a composite version of MF jointly coupled with a MLP.
CMN BIBREF30 combines a global latent factor model with an augmented memory network to capture personalized neighbor-based structure in a non-linear fashion.
NAIS BIBREF31 is an item-based collaborative filtering architecture that integrates attention mechanism to distinguish the contribution of previously consumed items. The authors proposed two versions of NAIS: (1) $NAIS_{concat}$ which concatenates two vectors to learn the attention weight; and (2) $NAIS_{prod}$ which feeds the element-wise product of the two vectors to the attention network. Therefore, we also build two versions of NAIS, and compare them with our model.
DeepCoNN BIBREF46 was originally proposed for an item rating prediction task and jointly models users and items based on their textual reviews. Prior work shows that it significantly outperforms other topic-modeling based methods. We re-implemented the baseline and adapted it to our recommendation task with implicit feedback.
NARRE BIBREF47 is a deep neural network based framework for an item rating prediction task. It employs an attention mechanism to distinguish the importance of each review. We re-implemented the framework for our implicit feedback setting.
NGCF BIBREF48 is a new recommendation framework based on graph neural network, explicitly encoding the collaborative signal in the form of high-order connectivity in user-item bipartite graph by performing embedding propagation.
Table TABREF66 presents characteristics of baselines and our model, showing what information each model utilizes. Note that even though CMN and NAIS both utilize co-occurrence context, CMN only utilizes user co-occurrence context whereas NAIS looks into URL co-occurrence context.
<<</Baselines>>>
<<<Evaluation Protocol>>>
We adopt the leave-one-out evaluation protocol to evaluate the performance of our model and the baselines. The leave-one-out evaluation protocol has been widely used in top-K recommendation tasks. In particular, we hold out the latest interaction of each user as the test set and use the remaining interactions for training. Each testing instance is paired with 99 randomly sampled negative instances. Each recommendation model ranks the 100 instances according to its predictions. The ranked list is judged by Hit Ratio (HR) BIBREF49 and Normalized Discounted Cumulative Gain (NDCG) BIBREF50 at position 10. HR@10 is a recall-based metric, measuring the percentage of testing items correctly recommended in the top-10 positions. NDCG@10 is a ranking metric which considers the position of the correct hit in the ranked result. Since both modules in our framework introduce randomness, we repeat each experiment 5 times with different weight initializations and randomly selected neighbors. We report the average score of the best performance in each training process for both metrics to ensure the robustness of our framework.
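Both metrics have a simple closed form in the single-held-out-item setting; a minimal sketch (function names are assumptions):

```python
import math

def hit_ratio_at_k(ranked_items, ground_truth, k=10):
    """1 if the held-out item appears in the top-k of the ranked candidate list."""
    return int(ground_truth in ranked_items[:k])

def ndcg_at_k(ranked_items, ground_truth, k=10):
    """Discounted gain of the single held-out item; 0 if it is not in the top-k."""
    for rank, item in enumerate(ranked_items[:k]):
        if item == ground_truth:
            return 1.0 / math.log2(rank + 2)
    return 0.0

# Per user: rank the positive test item together with its 99 sampled negatives,
# then average HR@10 / NDCG@10 over all users.
```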
<<</Evaluation Protocol>>>
<<<Hyper-parameter Settings>>>
We implement our framework using the PyTorch framework, initialize weight parameters by Xavier initialization BIBREF51, and optimize the model with the Adam optimizer BIBREF52. The mini-batch size is set to 128. Empirically, in CSAN, we select 10 neighbors for each stream. In HGAN, we choose 8 user neighbors and 8 URL neighbors for each central node at a single layer, and the default number of graph attention layers is set to 2. If there are not enough objects (e.g., user neighbors or URL neighbors), we pad the sequence with zero vectors.
In the proposed AMRAN model, all hyperparameters are tuned by using the grid-search on the validation set, which is formed by holding out one interaction of each user from the training data like the prior work BIBREF20. We conduct the grid search over a latent dimension size from {8,16,32,64}, a regularization term from {0.1, 0.01, 0.001, 0.0001, 0.00001}, a learning rate from {0.0001, 0.0003, 0.001, 0.01, 0.05, 0.1}, and SPPMI shifted constant value $s$ from {1, 2, 5, 10}. The number of negative samples w.r.t each positive interaction is set to 4. We adopt the same latent dimension size for all sub-modules. For a fair comparison, we also thoroughly optimize the baselines' hyperparameters by using the validation set.
<<</Hyper-parameter Settings>>>
<<<RQ1: Performance of Our Model and Baselines>>>
Table TABREF70 presents the performance of our model and the baselines. Based on the results and the information described in Table TABREF66, we make the following observations. First, deep learning-based approaches usually obtain better performance than traditional models (e.g., MF and GAU). This observation makes sense because (1) traditional models fail to capture the important non-linear relationships between users and fact-checking URLs; (2) most deep-learning based baseline models employ attention mechanisms, which help better understand the semantic relation between user and URL; and (3) training tricks such as dropout and batch normalization also contribute to better training quality. In particular, $NAIS_{concat}$ achieves better performance than $NAIS_{prod}$, which supports reason (1).
The second observation is that models with text reviews achieve better results than collaborative filtering-based methods. This is not surprising, since textual content contains rich information that can serve as auxiliary information to implicit feedback data and thus improve recommendation accuracy. However, we observed that text-based recommendation approaches usually have high complexity. Third, social context and co-occurrence context play important roles in improving recommendation results. NAIS significantly outperforms CMN and becomes the strongest baseline model. This indicates that the URL-URL co-occurrence relationship is more important than the user-user co-occurrence relationship, since the semantic representation of each user is much more complex than that of a fact-checking URL.
Overall, our AMRAN outperforms all baselines, achieving 0.657 HR@10 and 0.410 NDCG@10. It improves HR@10 by 5.3% and NDCG@10 by 3% over the best baseline (i.e., $NAIS_{concat}$).
<<</RQ1: Performance of Our Model and Baselines>>>
<<<RQ2: Effectiveness of our submodules>>>
In this experiment, we are interested in measuring the effectiveness of the submodules of AMRAN: CSAN and HGAN. Table TABREF71 shows the experimental results. CSAN achieves 0.642 HR@10 and 0.387 NDCG@10, whereas HGAN achieves 0.653 HR@10 and 0.403 NDCG@10. Both submodules outperform all the baselines in HR@10. HGAN outperforms all the baselines, and CSAN is competitive with the baselines. This experimental result confirms that both CSAN and HGAN contribute positively to the performance of AMRAN.
<<</RQ2: Effectiveness of our submodules>>>
<<<RQ3: Effectiveness of our Attention Mechanisms>>>
We proposed two attention mechanisms: (1) the spatial attention block in CSAN; and (2) the graph attention mechanism in HGAN, described in Section SECREF4. In this experiment, we are interested in studying the impact of these attention mechanisms. In particular, we run each submodule of AMRAN (i.e., CSAN or HGAN) with and without the corresponding attention mechanism. Table TABREF74 shows the performance of these models. In both submodules, our proposed attention mechanisms improved performance, confirming their positive impact on correctly recommending fact-checking URLs.
<<</RQ3: Effectiveness of our Attention Mechanisms>>>
<<<RQ4: Hyperparameter Sensitivity>>>
Now, we turn to analyzing how sensitive our model is to hyperparameter values, and which hyperparameter values produce the best recommendation results. Recall that we utilize context information to generate comprehensive embeddings of the given user and URL. In CSAN, we employ four streams to capture fine-grained context characteristics and share the embedding weight matrices with the target user and target URL representations. In the first experiment, we vary the number of neighbors associated with each stream in CSAN to show how CSAN's performance changes. Figure FIGREF76 shows that both $HR@10$ and $NDCG@10$ have similar trends, and selecting 10 neighbors for each stream produced the best result.
Next, we measure how the performance of HGAN changes when varying the number of HGALs and the number of selected neighbor nodes at each layer. Figure FIGREF77 demonstrates the necessity of employing 2 HGALs, which consistently outperform a single HGAL. The best performance was achieved when the number of selected neighbor nodes was set to 8. In addition, we vary the number of negative samples and the size of the latent semantic space for the target user and target URL (i.e., the embedding vector size of the target user and target URL). Figure FIGREF78 shows that a higher-dimensional latent semantic space produces higher AMRAN performance; 64-dimensional embeddings produced the best results. We also observe that one negative sample is not enough to produce good results, especially when the embedding vector size is small. The top performance is achieved when one positive instance is paired with 3 or 4 negative instances.
<<</RQ4: Hyperparameter Sensitivity>>>
<<<Case Study: Visualization of Relevance Propagation>>>
The attention mechanisms not only improve the recommendation performance of our model but also provide explainability. As a case study, we chose an example to demonstrate relevance propagation. In particular, we randomly sampled user 7849, shown in Figure FIGREF80. User 7849 has 3 co-occurring users and 3 following users, and posted 4 URLs. Note that we omit less important 2nd-degree neighbors for simplicity. The most relevant neighbors and propagation paths are highlighted automatically via the attention mechanism. In general, based on the user's historical context URLs, we observe that the topic that user 7849 tends to participate in debunking is fauxtography. However, in this particular case, the most influential context neighbors of the user are user 25 (co-occurrence user) and user 4759 (social context), given URL 1623. Both context neighbors share a similar taste with user 7849 for the same favorite website (Politifact.com). Moreover, we found that URL 2525 appeared in the 2nd-degree neighborhood of user 7849 and originated from the same website (Snopes.com) as URL 1623.
<<</Case Study: Visualization of Relevance Propagation>>>
<<</Evaluation>>>
<<<Conclusion>>>
In this paper, we proposed a novel framework that effectively recommends relevant fact-checking URLs to fact-checkers. The proposed framework, inspired by recent advancements in graph neural networks and attention mechanisms, leverages user-URL specific context information to capture the deep semantics and complex structure between a target user and a target URL. We compared the performance of our model, AMRAN, with eight state-of-the-art baselines. Experimental results showed that our model achieved up to a 5.3% improvement over the best baseline. Both submodules of AMRAN contributed positively to the recommendation results.
This work was supported in part by NSF grant CNS-1755536, AWS Cloud Credits for Research, and Google Cloud. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the sponsors.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Works\nFake News and Misinformation\nAdvancements in Recommender System\nGraph Convolutional Networks\nProblem Formulation\nDefinition 1 (Fact-checking URL Recommendation Task)\nDefinition 2 (Heterogeneous Network) @!START@BIBREF36@!END@\nDefinition 3 (Multi-relational Context)\nExample\nProposed Framework\nConvolutional Spatial Attention Network (CSAN)\nRaw Attribute Input\nAttribute Embedding Layer\nSpatial Attention Block\nLayer-wise Attention\nChannel-wise Attention\nGated Branch Fusion Layer\nHeterogeneous Graph Attention Network (HGAN)\nHeterogeneous Graph Network\nHeterogeneous Attention Layer (HGAL)\nInteraction Layer\nTraining\nEvaluation\nDataset\nBaselines\nEvaluation Protocol\nHyper-parameter Settings\nRQ1: Performance of Our Model and Baselines\nRQ2: Effectiveness of our submodules\nRQ3: Effectiveness of our Attention Mechanisms\nRQ4: Hyperparameter Sensitivity\nCase Study: Visualization of Relevance Propagation\nConclusion"
],
"type": "outline"
}
|
2003.08897
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
<<<Abstract>>>
Self-attention (SA) network has shown profound value in image captioning. In this paper, we improve SA from two aspects to promote the performance of image captioning. First, we propose Normalized Self-Attention (NSA), a reparameterization of SA that brings the benefits of normalization inside SA. While normalization is previously only applied outside SA, we introduce a novel normalization method and demonstrate that it is both possible and beneficial to perform it on the hidden activations inside SA. Second, to compensate for the major limit of Transformer that it fails to model the geometry structure of the input objects, we propose a class of Geometry-aware Self-Attention (GSA) that extends SA to explicitly and efficiently consider the relative geometry relations between the objects in the image. To construct our image captioning model, we combine the two modules and apply it to the vanilla self-attention network. We extensively evaluate our proposals on MS-COCO image captioning dataset and superior results are achieved when comparing to state-of-the-art approaches. Further experiments on three challenging tasks, i.e. video captioning, machine translation, and visual question answering, show the generality of our methods.
<<</Abstract>>>
<<<Introduction>>>
Automatically generating captions for images, namely image captioning BIBREF0, BIBREF1, has emerged as a prominent research problem at the intersection of computer vision (CV) and natural language processing (NLP). This task is challenging as it requires to first recognize the objects in the image, the relationships between them, and finally properly organize and describe them in natural language.
Inspired by the sequence-to-sequence model for machine translation, most image captioning approaches adopt an encoder-decoder paradigm, which uses a deep convolutional neural network (CNN) to encode the input image as a vectorial representation, and a recurrent neural network (RNN) based caption decoder to generate the output caption. Recently, self-attention (SA) networks, denoted as SANs, have been introduced by BIBREF2, BIBREF3 to replace conventional RNNs in image captioning. Since its first introduction in Transformer BIBREF4, SA and its variants have shown promising empirical results in a wide range of CV BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 and NLP BIBREF11, BIBREF12, BIBREF13 tasks. Although SAN-based framework has achieved state-of-the-art performance in image captioning, it remains two problems to be solved.
Firstly, SA is susceptible to the internal covariate shift BIBREF14 problem. Typically, SA is regarded as a mapping of a set of query and key/value pairs. We observe, from another perspective, that the computation of the attention weights in SA could be considered as feeding the queries into a fully-connected layer whose parameters are dynamically computed according to the inputs. A problem could arise when the distribution of the queries shifts due to the change in network parameters during training. That is, the subsequent layers have to continuously adapt to the new input distribution, and consequently, SA may not be learned effectively. This problem is called “Internal Covariate Shift" in BIBREF14: the tendency that the distribution of activations drifts during training in a feed-forward network.
To eliminate the internal covariate shift problem inside SA, in this paper, we introduce an effective reparameterization of SA, named Normalized Self-Attention (NSA). NSA performs a novel normalization method on the hidden activations of SA to fix their distributions. By doing so, we can effectively decouple the fully-connected layer's parameters from those of other layers, leading to a better-conditioned optimization of SA. While Layer Normalization (LN) BIBREF15 is proven to be very critical for enabling the convergence of Transformer, however, LN is only applied outside SA blocks. To our knowledge, there has not been any deep exploration to find a suitable normalization method inside SA. We demonstrate that our NSA can collaborate with LN to bring improved generalization for SA-based networks.
Another critical issue in SA is its inability to model the geometric relationships among input elements. The vanilla self-attention treats its inputs as “bag-of-features", simply neglecting their structure and the relationships between them. However, the objects in the image, from which the region-based visual features are extracted for image captioning, inherently have geometric structure — 2D spatial layout and variations in scale/aspect ratio. Such inherent geometric relationships between objects play a very complex yet critical role in understanding the image content. One common solution to inject position information into SA is adding representations of absolute positions to each element of the inputs, as is often used in the case of 1D sentences. Nonetheless, this solution does not work well for image captioning because the 2D geometry relations between objects are harder to infer from their absolute positions.
We present a more efficient approach to the above problem: explicitly incorporating relative geometry relationships between objects into SA. The module is named Geometry-aware Self-Attention (GSA). GSA extends the original attention weight into two components: the original content-based weight, and a new geometric bias, which is efficiently calculated by the relative geometry relations and, importantly, the content of the associated elements, i.e. query or key.
By combining both NSA and GSA, we obtain an enhanced SA module. We then construct our Normalized and Geometry-aware Self-Attention Network, namely NG-SAN, by replacing the vanilla SA modules in the encoder of the self-attention network with the proposed one. Extensive experiments on MS-COCO validate the effectiveness of our proposals. In particular, our NG-SAN establishes a new state-of-the-art on the MS-COCO evaluation server, improving the best single-model result in terms of CIDEr from 125.5 to 128.6. To demonstrate the generality of NSA, we further present video captioning, machine translation, and visual question answering experiments on the VATEX, WMT 2014 English-to-German, and VQA-v2 datasets, respectively. On top of the strong Transformer-based baselines, our methods can consistently increase accuracies on all tasks at a negligible extra computational cost.
To summarize, the main contributions of this paper are three-fold:
We present Normalized Self-Attention, an effective reparameterization of self-attention, which brings the benefits of the normalization technique inside SA.
We introduce a class of Geometry-aware Self-Attention that explicitly makes use of the relative geometry relationships and the content of objects to aid image understanding.
By combining the two modules and applying them to the self-attention network, we establish a new state-of-the-art on the MS-COCO image captioning benchmark. Further experiments on video captioning, machine translation, and visual question answering tasks demonstrate the generality of our methods.
<<</Introduction>>>
<<<Related Work>>>
<<<Image Captioning>>>
Existing image captioning approaches typically follow the CNN-RNN architecture BIBREF16. Recently, a variety of improvements have been proposed. BIBREF1 introduces soft and hard attention mechanisms to automatically focus on salient objects when generating each word. BIBREF17 mimics the human polishing process with a ruminant decoder. BIBREF18 uses an object detector to propose salient image regions (objects) and extracts a feature vector for each object, which are then used as inputs to the attention mechanism. BIBREF19 introduces reinforcement learning with a self-critical reward for model training. Recently, BIBREF2 and BIBREF3 propose to replace the conventional RNN with the Transformer architecture, achieving state-of-the-art performance. However, a deeper exploration of the self-attention module in Transformer has not been conducted for image captioning, which motivates our work in this paper.
<<</Image Captioning>>>
<<<Normalization>>>
Normalization BIBREF14 has become a critical ingredient in constructing deep neural networks. Batch Normalization (BN) BIBREF14 was proposed to control the distributions of the internal activations of feed-forward neural networks, thereby reducing internal covariate shift. Several variants of the normalization method, such as Layer Normalization (LN) BIBREF15, Instance Normalization (IN) BIBREF20, and Group Normalization BIBREF21, have been developed mainly to reduce the mini-batch dependencies inherent in BN. LN operates along the channel dimension for each individual element in an example. IN performs BN-like computation but only for each sample. Though BN and LN have been adopted in networks that contain the SA module, e.g. Transformer, they are typically used outside the SA module. For the first time, our normalized self-attention brings the benefit of normalization inside the SA module.
<<</Normalization>>>
<<<Position encoding in self-attention networks>>>
To inject sequence ordering into SA module, in Transformer, absolute position encodings based on sinusoids are added to the input elements both in the encoder and decoder. Recently, BIBREF22 modulates SA by incorporating the relative distances between sequence elements. BIBREF6 proposes an SA-like module for object detection, which multiplies a new relation weight on the original self-attention weight, and is used by BIBREF23 in Transformer. Its relation weight is computed solely with the relative coordinates and sizes between bounding boxes. Different from these works, our GSA module explores a broader range of geometric biases that involve not only the geometry information but also the content of the associated objects.
<<</Position encoding in self-attention networks>>>
<<</Related Work>>>
<<<Preliminaries>>>
<<<Self-Attention (SA)>>>
We first review a basic form of self-attention, called “Scaled Dot-Product Attention", which is first proposed as a core component in Transformer.
The self-attention layer first transforms a set of $N$ $d_k$-dimensional vectors, packed into a matrix $X \in \mathbb {R}^{N \times d_k}$, into queries $Q \in \mathbb {R}^{N \times d}$, keys $K \in \mathbb {R}^{N \times d}$, and values $V\in \mathbb {R}^{N \times d}$ given by $Q=X W_Q,\ K=X W_K, \ V=X W_V$, where the projections $W_Q$, $W_K$, and $W_V$ are all $d_k\times d$ parameter matrices. The energy scores $E$ between any queries and keys are computed as $E = \frac{Q K^\top }{\sqrt{d}}$,
where $E$ is an $N \times N$ weight matrix, on which a softmax function is applied to obtain the weights of the values. The output is computed as a weighted sum of the values as $\operatorname{Attention}(Q, K, V) = \operatorname{softmax}(E)\, V$.
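As an illustration only, the scaled dot-product attention described above can be sketched in a few lines of PyTorch; the tensor names are ours and the snippet omits multi-head splitting and masking.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(X, W_Q, W_K, W_V):
    # X: (N, d_k) input vectors; W_Q, W_K, W_V: (d_k, d) projection matrices.
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d = Q.size(-1)
    E = Q @ K.transpose(-2, -1) / d ** 0.5   # (N, N) energy scores
    A = F.softmax(E, dim=-1)                 # attention weights over the keys
    return A @ V                             # weighted sum of the values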
<<</Self-Attention (SA)>>>
<<<Self-attention network for image captioning>>>
Figure FIGREF12 shows self-attention network (SAN), which is our baseline architecture for image captioning. Similar to Transformer, the model consists of an image encoder and a caption decoder, both of which are composed of a stack of $L$ layers. Each layer consists of one (for the encoder layer) or two (for the decoder layer) multi-head attention (MHA) sub-layers followed by a feed-forward network (FFN). The MHA sub-layer contains $h$ parallel “heads" with each head corresponding to an independent scaled dot-product attention function. Besides, a residual connection and layer normalization are used between all the sub-layers.
The inputs to the encoder are the region-based visual features extracted from the Faster-RCNN BIBREF24 object detector. Each input element corresponds to an object in the image. Before feeding the input vectors into the encoder, they are first passed through a dense layer followed by a ReLU layer to adapt their dimension to be consistent with the encoder. The decoder takes the attended visual features and the embeddings of the previous words to predict the next word recursively. Following Transformer, we add sinusoidal “positional encodings" to the inputs at the bottom of the decoder. Because the regions in the image do not have a natural order like sequences, no position information is added on the encoder side.
<<</Self-attention network for image captioning>>>
<<</Preliminaries>>>
<<<Approach>>>
<<<Normalized SA (NSA)>>>
This section introduces a reparameterization of self-attention that takes advantage of normalization method for improved training.
We first review the formulation of Batch Normalization (BN). Consider feeding an input mini-batch $x$ into a feed-forward layer $y=F(x, \Theta )$, where $F$ is an arbitrary transformation, and $\Theta $ is the parameter to be learned. The internal covariate shift happens when the distribution of $x$ shifts during training. To reduce internal covariate shift, BN normalizes each channel of $x$ using the mean and variance accumulated over the same channel in the whole mini-batch.
We then take a closer look at the attention weight in Eqn. DISPLAY_FORM10:
It can be considered as an input instance $X\in \mathbb {R}^{N \times d_k}$ first goes through a $d_k \times d$ linear layer parameterized by $W_Q $ to obtain $Q=XW_Q\in \mathbb {R}^{N \times d}$, which is then further fed into a $d\times N$ linear layer parameterized by $K^\top = W_K^\top X^\top $ followed by a Softmax activation to output $N$ probabilities over the keys. Thus, we can re-formulate Eqn. DISPLAY_FORM14 as a fully-connected layer $F$ followed by a Softmax activation:
Note that the parameter $\Theta $ is dynamically calculated based on $X$. From this perspective, SA can be susceptible to the internal covariate shift problem just as in a standard feed-forward network. That is, when the distribution of input $Q$ shifts due to the change in network parameters during training, the layer parameter $\Theta $ needs to continuously adapt to the new input distribution. Consequently, SA may not be learned effectively.
Therefore, to eliminate the internal covariate shift, it is advantageous for the distribution of $Q$ to remain fixed over time. Then $\Theta $ does not have to readjust to compensate for the change in the distribution of $Q$. This can be accomplished by performing normalization on $Q$ by
We now consider the implementation of $\operatorname{Norm}$. BN is not directly suitable for $\operatorname{Norm}$ because instead of using a shared layer parameters for all examples in the dataset, the layer parameter $\Theta =W_K^\top X^\top $ is dynamically computed with the instance-specific $X$. Therefore, it is more desirable to perform normalization, $\operatorname{Norm}$, for every single instance independently.
Let $x \in \mathbb {R}^{ B \times T \times C}$ and $x_{btc}$ denote the $btc-$th element of $x$, where $b$ is the sample index, $c$ is the channel index, and $t$ is the index of the additional spatial dimension. We implement $\operatorname{Norm}$ as normalizing each instance in the mini-batch independently using per-channel feature statistics:
The above normalization method is exactly the Instance Normalization (IN) in the 1D case. Subtracting the mean from the queries could be considered as highlighting the differences among the queries and encourage them to query information from distinctive aspects.
We represent the normalization operation in Eqn. DISPLAY_FORM17 as $\hat{x} = \operatorname{IN}(x)$. Finally, we derive our normalized self-attention that reparameterizes the self-attention as
Similar to BN and IN, it is optional to further apply the channel-wise affine transformation $\tilde{x}_{btc} = \hat{x}_{btc} \gamma _c+\beta _c$ in $\operatorname{Norm}$, where $\gamma , \beta \in \mathbb {R}^{C}$ are learnable scale and shift parameters. But we empirically found it not necessary in our experiments. It is also optional to normalize $K$ with $\hat{K} = \operatorname{IN}(K)$. This is equivalent to normalizing the dynamic parameters $\Theta $, which, however, may limit the capacity of SA.
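A minimal sketch of the normalization step inside NSA is given below, assuming the 1D Instance Normalization described above is applied to the queries only and without the affine transform (the paper's default setting); the variable names and shapes are ours.

import torch
import torch.nn.functional as F

def normalized_self_attention(Q, K, V, eps=1e-5):
    # Q, K, V: (B, T, C). Normalize each channel of Q over the T positions
    # of every instance independently (1D Instance Normalization, no gamma/beta).
    mu = Q.mean(dim=1, keepdim=True)
    var = Q.var(dim=1, unbiased=False, keepdim=True)
    Q_hat = (Q - mu) / torch.sqrt(var + eps)
    d = Q.size(-1)
    E = Q_hat @ K.transpose(-2, -1) / d ** 0.5
    return F.softmax(E, dim=-1) @ V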
<<<Relation to prior works.>>>
Our normalization method differs from Layer Normalization (LN) in that LN normalizes along all channels of each individual element, while our method normalizes along each channel of all input elements in an instance. As for IN, it is typically used in 2D CNNs, e.g. on style transfer task. To our knowledge, IN has not been successfully used for language generation tasks, in particular for SAN.
<<</Relation to prior works.>>>
<<</Normalized SA (NSA)>>>
<<<Geometry-Aware SA (GSA)>>>
The inherent geometric structure among the input objects is beneficial for reasoning about the visual information, which, however, is not modeled in the vanilla Transformer. Therefore, we propose GSA that improves the SA module by taking into account the pairwise geometry relationships and the content information of objects.
Denote the relative geometry features between two objects $i$ and $j$ as $\mathbf {f}^g_{ij}$, which is a 4-dimensional vector of the relative position and size of the bounding boxes:
where $(x_i, y_i), w_i, h_i$ are the center coordinate, width, and height of box $i$, respectively.
We project $\mathbf {f}^g_{ij}$ to a high-dimensional representation $G_{ij}$ with a fully-connected (FC) layer followed by a ReLU activation as
where $G \in \mathbb {R}^{N \times N \times d_g} $.
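Since the exact 4-dimensional definition of $\mathbf {f}^g_{ij}$ is not reproduced above, the sketch below uses one common choice (normalized center offsets and log size ratios) purely as an assumption, followed by the FC + ReLU projection to $G_{ij}$; all names, and the choice $d_g = 64$, are ours.

import torch
import torch.nn as nn

def relative_geometry_features(boxes):
    # boxes: (N, 4) with (cx, cy, w, h) per object. Returns (N, N, 4).
    cx, cy, w, h = boxes.unbind(-1)
    dx = (cx[:, None] - cx[None, :]) / w[:, None]   # assumed normalization
    dy = (cy[:, None] - cy[None, :]) / h[:, None]
    dw = torch.log(w[None, :] / w[:, None])
    dh = torch.log(h[None, :] / h[:, None])
    return torch.stack([dx, dy, dw, dh], dim=-1)

geo_proj = nn.Sequential(nn.Linear(4, 64), nn.ReLU())               # d_g = 64 is a guess
G = geo_proj(relative_geometry_features(torch.rand(10, 4) + 0.1))   # (N, N, d_g)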
We then modify the energy score in Eq. DISPLAY_FORM9 to include the effect of $G$ as
where $\phi $ is the geometric attention function, which outputs a score matrix of shape $N\times N$, and $Q^\prime , K^\prime \in \mathbb {R}^{N \times d_g}$ are geometric queries and keys that are computed in the same way as $Q, K$, i.e. by projecting the input $X$. In the above equation, the first term is related to the queries and keys, namely content-based weight. The second term represents the geometric bias, which involves the geometry relations and the contents of $Q^\prime $ and $K^\prime $.
We now discuss three choices of $\phi $, which can be either used individually or combined.
<<<Content-independent geometric bias.>>>
The geometry relation $G_{ij}$ conveys useful information for understanding the relationships between two objects, e.g. object $i$ and $j$ have “comparable sizes" and object $i$ is “next to" object $j$. Thus, we directly project $G_{ij}$ to a scalar score by
where $w_g$ is the parameter to be learned. The ReLU nonlinearity acts as a zero trimming operation so that only the relations between objects with certain geometric relationships are considered.
The relation network BIBREF6 presented recently for object detection is a special case of the content-independent geometric bias. Different from the above formulation, it fuses the content-independent geometric bias and the original attention weights by multiplication and use sinusoidal embedding of the geometry feature.
<<</Content-independent geometric bias.>>>
<<<Query-dependent geometric bias.>>>
The above “content-independent" variant assumes a static geometric bias, i.e. the same geometric bias is applied to all the query-key pairs in an SA layer. However, the geometric biases are more often different, depending on what the associated query object is. For example, for the queries, “sea" and “ball", their scale difference are often huge in the image, and thus their sensitivities to the same change of a key's distance/position vary widely. Therefore, the geometric biases of the two queries should be adapted to match their content. To this end, we decide to dynamically compute the geometric bias for different queries:
Here we use dot-product to match ${Q^\prime }_{i}$ with $G_{ij}$ since it is more computation and memory efficient than using the Concatenation-FC operation.
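The query-dependent bias can be implemented with a single einsum, as sketched below under the assumption that $Q^\prime $ has shape (N, d_g) and $G$ has shape (N, N, d_g); this is our illustration of the dot-product bias ${Q^\prime }_{i}^\top G_{ij}$, not the authors' code.

import torch
import torch.nn.functional as F

def query_dependent_attention_weights(Q, K, Q_geo, G):
    # Content term plus query-dependent geometric bias Q'_i . G_ij.
    d = Q.size(-1)
    content = Q @ K.transpose(-2, -1) / d ** 0.5        # (N, N)
    geo_bias = torch.einsum("id,ijd->ij", Q_geo, G)     # (N, N)
    return F.softmax(content + geo_bias, dim=-1)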
<<</Query-dependent geometric bias.>>>
<<<Key-dependent geometric bias.>>>
Similar to the query-dependent variant, geometric bias can also be associated with the content of the keys, computed as
<<</Key-dependent geometric bias.>>>
<<</Geometry-Aware SA (GSA)>>>
<<<Applying NSA and GSA modules to SAN>>>
We first combine both NSA and GSA by replacing $Q$ in Eqn. DISPLAY_FORM23 with the normalized one, $\hat{Q}$. We then use this module to replace the vanilla SA modules in the encoder of SAN, which results in our full model, namely Normalized and Geometry-aware Self-Attention Network (NG-SAN). NSA is not applied in the decoder of SAN because the decoder is autoregressive and has variable-length inputs. This is undesirable for IN because the mean and variance statistics are meaningless when the sequence length is 1.
<<</Applying NSA and GSA modules to SAN>>>
<<</Approach>>>
<<<Experiments on Image Captioning>>>
<<<Experimental setup>>>
<<<MS-COCO dataset @!START@BIBREF25@!END@.>>>
It is the most popular benchmark for image captioning. We use the `Karpathy' splits that have been used extensively for reporting results in prior works. This split contains 113,287 training images with 5 captions each, and 5k images for validation and test splits, respectively. We follow standard practice BIBREF26 to pre-process the text, resulting in a final vocabulary of 9,487 words. We use the region-based image features provided by Bottom-Up BIBREF18 for training.
<<</MS-COCO dataset @!START@BIBREF25@!END@.>>>
<<<Evaluation metrics.>>>
We use the standard automatic evaluation metrics to evaluate the quality of image captions, including BLEU-1/2/3/4 BIBREF27, METEOR BIBREF28, ROUGE-L BIBREF29, CIDEr BIBREF30, and SPICE BIBREF31, which are denoted as B@1/2/3/4, M, R, C and S, respectively.
<<</Evaluation metrics.>>>
<<<Implementation details.>>>
We follow the Transformer-Base model BIBREF4 and BIBREF3 to set the model hyper-parameters and train the model. Specifically, the dimensionality of the input image features is 2048. The latent dimension in the MHA module is 512, and the number of heads is 8. The inner dimension in the FFN module is 2,048. We apply dropout with a probability of 0.1. We use the same number of layers $L$ for the encoder and decoder. For training, we use the Adam optimizer BIBREF32. We use a step decay schedule with warm-up for varying the learning rate. The base learning rate is set to $min( t\times 10^{-4}; 3\times 10^{-4})$, where $t$ is the current epoch number that starts at 1. After 6 epochs, the learning rate is decayed by 1/2 every 3 epochs. All models are first trained for 15 epochs with the cross-entropy loss and then further optimized with the CIDEr reward BIBREF19 for additional 15 epochs. If not specifically mentioned, by default we set $L=4$, only normalize the query and do not apply $\gamma , \beta $ in NSA, and use the query-dependent variant ($\phi ^1$) of GSA. Beam search with a beam width of 3 is used during the testing stage.
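The warm-up-then-step-decay schedule described above can be written as a small helper; this is one plausible reading of the description (epoch-level updates, decay counted from epoch 6), not the authors' exact code.

def learning_rate(epoch):
    # Warm-up: min(t * 1e-4, 3e-4), where t is the current epoch starting at 1;
    # after epoch 6, halve the rate every 3 epochs (our assumed decay points).
    base = min(epoch * 1e-4, 3e-4)
    if epoch <= 6:
        return base
    return base * 0.5 ** ((epoch - 6) // 3)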
<<</Implementation details.>>>
<<</Experimental setup>>>
<<<Analysis on NSA>>>
In this section, we examine the effectiveness of NSA module. We replace the SA modules in the encoder of SAN with NSA, resulting in a model named Normalized Self-Attention Network (N-SAN).
<<<Number of attention layers.>>>
In Table TABREF38 we compare the performance of N-SAN and SAN under the same number of SA layers $L\in \lbrace 1,2,4,6\rbrace $. We can see that the model size grows linearly as $L$ increases. Regarding the performance, we have two observations as follows. 1) As $L$ increases, the performance of both SAN and N-SAN gradually improves and reaches the optimal value when $L=6$. However, the performance gain of increasing $L$ from 4 to 6 is not very significant. Therefore, we use $L=4$ for later experiments as a compromise between the model's performance and complexity. 2) N-SAN consistently outperforms SAN on all metrics under different $L$. In Figure FIGREF33, we further plot the CIDEr scores of the one-layer SAN and N-SAN models during training, evaluated on the validation split at each epoch. As we can see, the curve of N-SAN is above that of SAN for most of the time.
<<</Number of attention layers.>>>
<<<Different normalization methods.>>>
Since we introduced IN into the NSA module for normalization, an intuitive question to ask is whether we can replace IN with other normalization methods. In Table TABREF39 we show the results of using different normalization methods including BN, LN, IN and IN without using the affine transformations ($\gamma $ and $\beta $). We have the following observations. 1) Using LN slightly decreases the performance. We conjecture that this is because LN normalizes activations of all channels with the same normalization terms ($\mu $ and $\sigma $), thus limiting the expression capacity of each channel when calculating attention weights. 2) IN and IN w/o $\gamma , \beta $ significantly outperform SAN and all the other normalization methods. Meanwhile, the extra affine transformations ($\gamma $ and $\beta $) are not necessary. 3) Applying BN outperforms SAN but is inferior to adopting IN. BN has a similar effect to IN in reducing the internal covariate shift by fixing the distribution of the queries. However, as is described in Sec. SECREF13, since the layer parameter $\Theta $ in Eqn. DISPLAY_FORM15 depends on instance-specific input, it is more desirable to perform input normalization also on each instance instead of on the whole mini-batch.
<<</Different normalization methods.>>>
<<<What if we normalize the keys in addition to the queries?>>>
In Table TABREF42, we compare the variants of Eqn. DISPLAY_FORM18, including normalizing Q alone, K alone, and both Q and K. We have the following observations. 1) Normalizing either of Q and K could increase the performance. 2) The performances of normalizing both Q and K and normalizing Q alone are very similar, and are both significantly higher than that of SAN. 3) Normalizing K alone is inferior to normalizing Q alone. The reason is that normalizing $K$ is equivalent to normalizing $\Theta $ in Eqn. DISPLAY_FORM15, which may limit the model capacity of SA.
<<</What if we normalize the keys in addition to the queries?>>>
<<</Analysis on NSA>>>
<<<Analysis on GSA>>>
In this section, we examine the effectiveness of GSA module. Similar to N-SAN, we replace the SA modules in the encoder of SAN with GSA to obtain a model named Geometry-aware Self-Attention Network (G-SAN).
<<<Variants of GSA.>>>
In Table TABREF43 we compare various variants of the GSA module introduced in Sec. SECREF20. “+absolute" denotes adding the absolute geometry information of each individual object to their input representations at the bottom of the encoder. It is obtained by embedding the geometry features, i.e. the center coordinates and the width/height of the box, normalized by the width/height of the image, to a sinusoidal representation using the same method as the “positional encodings" in BIBREF4. We have the following findings. 1) Adding the absolute geometry information (“absolute") is not beneficial to the performance. That is probably because it is too complex for SA to infer the 2D layout of objects from their absolute geometry information. 2) All the proposed variants of GSA can improve the performance of SAN, showing the advantages of using relative geometry information. 3) “query-dependent" brings the best performance and outperforms the content-independent variant, proving that incorporating the content information of the associated query can help infer a better geometric bias. 4) “key-dependent" is inferior to “query-dependent". That is because when using key-dependent geometric bias, the scores $\phi ^3_{ij} = {K^\prime _{j}}^\top G_{ij}$ condition on different keys $K^\prime _{j}$, thus the differences in $G_{ij}$ may be overwhelmed by the differences in $K^\prime _{j}$ when performing softmax on the keys' dimension. In comparison, when using query-dependent geometric bias, the effect of $G_{ij}$ could be highlighted since the scores condition on a common query ${Q^\prime _{i}}$ when performing softmax. We did not observe further improvement when combining these variants into $\phi $ in Eq. DISPLAY_FORM23.
<<</Variants of GSA.>>>
<<</Analysis on GSA>>>
<<<Analysis on the full model (NG-SAN)>>>
We now validate the effectiveness of NG-SAN that takes advantage of both NSA and GSA.
<<<Comparisons with state-of-the-arts.>>>
We compare NG-SAN with the state-of-the-art methods, including Up-Down BIBREF18, CAVP BIBREF33, SGAE BIBREF34, VSUA BIBREF35, ORT BIBREF23, AoANet BIBREF36, and MT BIBREF3. All the methods except ORT, AoANet, and MT are based on single- or multi-layer Long Short-Term Memory (LSTM) networks. MT adopts a Transformer-Base architecture, using 6 SA layers for both the encoder and the decoder, and inserts an additional LSTM layer in the decoder. ORT also adopts the Transformer-Base architecture and follows BIBREF6 to model the spatial relationship between inputs. AoANet uses SAN as the encoder and LSTM as the decoder.
Table TABREF49 compares the results of each method. We can see that both G-SAN and N-SAN outperform the SAN baseline across all metrics. Moreover, NG-SAN further outperforms G-SAN and N-SAN, demonstrating that GSA and NSA are compatible with each other. NG-SAN significantly outperforms all the other methods, including both LSTM-based and SA-based ones, over all metrics. Particularly, we improve the best CIDEr score from 130.9 to 132.1. Table TABREF44 further reports the performance of the top-performing single-model solutions on the official test server. Compared with the published methods, our single model significantly outperforms all the other methods in terms of all evaluation metrics except BLEU-1. In particular, we establish a new state-of-the-art score of 128.6 on CIDEr (C40).
<<</Comparisons with state-of-the-arts.>>>
<<<Complexity.>>>
As can be seen in the “#params" column in Table TABREF49, NG-SAN requires very few (about 2k) additional parameters compared with SAN. NSA does not require any parameters, and the computation overhead of the additional normalization process is almost negligible. While GSA indeed requires some additional parameters, the amount is negligible. GSA can be efficiently implemented by matrix multiplication and the einstein summation (einsum) operations provided by mainstream deep learning frameworks.
<<</Complexity.>>>
<<</Analysis on the full model (NG-SAN)>>>
<<</Experiments on Image Captioning>>>
<<<Extension: Experiments on Other Tasks>>>
We further investigate the effectiveness and generality of our methods on Video Captioning (VC) BIBREF37, Machine Translation (MT) BIBREF38, and Visual Question Answering (VQA) BIBREF39 tasks. Since VC and MT are both sequence-to-sequence problems, we directly use Transformer as the baseline models, and we replace the SA modules in their encoder with the proposed NSA module to construct our methods. As for VQA, we use MCAN BIBREF40 as the baseline model, which uses a SAN-based network to simultaneously encode image and question information. To build our method for VQA, we replace all the SA modules in MCAN with our GSA modules.
<<<Video Captioning>>>
We use a recently released large-scale video captioning dataset, VATEX BIBREF41. It contains over 41,250 videos and 412,500 English captions. For a fair comparison with VATEX, we directly use the pre-extracted video features provided by the paper. Specifically, each video is sampled at 25fps and 1,000-dimensional features are extracted from these sampled frames using a pretrained I3D BIBREF42 model. Because the dataset is relatively small, we found using one layer in both the encoder and decoder is satisfactory. We use a training configuration the same as that of our image captioning model.
In Table TABREF52, we compare our method with the Transformer baseline and the VATEX model. We see that the performance of Transformer strongly exceeds that of VATEX, which adopts an LSTM-based architecture. Our Transformer+NSA method consistently improves over Transformer on all metrics. Particularly, our method improves the CIDEr score by 3.7 points when compared to Transformer, and significantly improves the CIDEr score by 11.4 points when compared to VATEX baseline.
<<</Video Captioning>>>
<<<Machine Translation>>>
We also evaluate NSA on the MT task, for which the Transformer was originally proposed. We trained on the widely-used WMT 2014 English-to-German (En–De) dataset, which consists of about 4.56 million sentence pairs. The models were validated on newstest-2013 and tested on newstest-2014 with BLEU. We use the well-known Transformer-Base BIBREF4 variant of Transformer as the baseline model, which has 6 layers in both the encoder and decoder. Specifically, we follow the implementation of the fairseq-py BIBREF43 toolkit.
As shown in Table TABREF53, compared to the Transformer-Base model, NSA increases the BLEU score by 0.36 points without adding any parameters.
<<</Machine Translation>>>
<<<Visual Question Answering>>>
We conduct experiments on the most commonly used VQA benchmark, VQA-v2 BIBREF39. It contains human-annotated question-answer pairs relating to the images from the MS-COCO dataset, with 3 questions per image and 10 answers per question. We strictly follow MCAN BIBREF40 to implement our models. Specifically, images are represented with region features extracted from Faster R-CNN object detector and the input questions are transformed with GloVe word embeddings and an LSTM network.
Table TABREF56 shows the overall accuracies of our methods and the current state-of-the-art models on the online test-dev and test-std splits. GSA boosts the test-std accuracy of MCAN from 70.83 to 71.28.
<<</Visual Question Answering>>>
<<</Extension: Experiments on Other Tasks>>>
<<<Conclusion>>>
We proposed two improvements to the self-attention (SA) mechanism, i.e. a Normalized Self-Attention (NSA) to reduce the internal covariate shift problem inside SA, and a class of Geometry-aware Self-Attention (GSA) that explicitly and dynamically computes the geometric bias between objects to benefit image understanding. We have conducted extensive experiments on MS-COCO image captioning dataset to validate the effectiveness of NSA, GSA, and their combination. We further show the significance and generality of our methods on video captioning, machine translation, and visual question answering tasks. On all tasks, simply replacing the vanilla SA module with our proposed methods provides solid improvements over strong baselines.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nImage Captioning\nNormalization\nPosition encoding in self-attention networks\nPreliminaries\nSelf-Attention (SA)\nSelf-attention network for image captioning\nApproach\nNormalized SA (NSA)\nRelation to prior works.\nGeometry-Aware SA (GSA)\nContent-independent geometric bias.\nQuery-dependent geometric bias.\nKey-dependent geometric bias.\nApplying NSA and GSA modules to SAN\nExperiments on Image Captioning\nExperimental setup\nMS-COCO dataset @!START@BIBREF25@!END@.\nEvaluation metrics.\nImplementation details.\nAnalysis on NSA\nNumber of attention layers.\nDifferent normalization methods.\nWhat if we normalize the keys in addition to the queries?\nAnalysis on GSA\nVariants of GSA.\nAnalysis on the full model (NG-SAN)\nComparisons with state-of-the-arts.\nComplexity.\nExtension: Experiments on Other Tasks\nVideo Captioning\nMachine Translation\nVisual Question Answering\nConclusion"
],
"type": "outline"
}
|
2004.02451
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
An analysis of the utility of explicit negative examples to improve the syntactic abilities of neural language models
<<<Abstract>>>
We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as "barks" in "*The dogs barks". Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, using English data, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model's robustness on them, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that even with our direct learning signals the models still suffer from resolving agreement across an object-relative clause. Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subject-relative clauses. Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions.
<<</Abstract>>>
<<<Introduction>>>
Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent. However, it is still under debate whether such induced knowledge about grammar is robust enough to deal with syntactically challenging constructions such as long-distance subject-verb agreement. So far, the results for RNN language models (RNN-LMs) trained only with raw text are overall negative; prior work has reported low performance on the challenging test cases BIBREF0 even with a massive size of data and model BIBREF1, or argued the necessity of an architectural change to track the syntactic structure explicitly BIBREF2, BIBREF3. Here the task is to evaluate whether a model assigns a higher likelihood to a grammatically correct sentence (UNKREF3) than to an incorrect sentence (UNKREF5) that is minimally different from the original one BIBREF4.
The author that the guards like laughs.
The author that the guards like laugh.
In this paper, to obtain a new insight into the syntactic abilities of neural LMs, in particular RNN-LMs, we perform a series of experiments under a different condition from the prior work. Specifically, we extensively analyze the performance of the models that are exposed to explicit negative examples. In this work, negative examples are the sentences or tokens that are grammatically incorrect, such as (UNKREF5) above.
Since these negative examples provide a direct learning signal on the task at test time, it may not be very surprising if the task performance goes up. We acknowledge this, and argue that our motivation for this setup is to deepen our understanding, in particular of the limitation or the capacity of the current architectures, which we expect can be reached with such strong supervision. Another motivation is engineering: we could exploit negative examples in different ways, and establishing a better way will be of practical importance toward building an LM or generator that can be robust on particular linguistic constructions.
The first research question we pursue is about this latter point: what is a better method to utilize negative examples that helps LMs to acquire robustness on the target syntactic constructions? Regarding this point, we find that adding an additional token-level loss that tries to guarantee a margin between the log-probabilities for the correct and incorrect words (e.g., $\log p(\textrm {laughs} | h)$ and $\log p(\textrm {laugh} | h)$ for (UNKREF3)) is superior to the alternatives. On the test set of BIBREF0, we show that LSTM language models (LSTM-LMs) trained with this loss reach a near perfect level on most syntactic constructions for which we create negative examples, with only a slight increase of perplexity of about 1.0 point.
Past work conceptually similar to ours is BIBREF5, which, while not directly exploiting negative examples, trains an LM with additional explicit supervision signals for the evaluation task. They hypothesize that LSTMs do have enough capacity to acquire robust syntactic abilities but the learning signals given by the raw text are weak, and show that multi-task learning with a binary classification task to predict the upcoming verb form (singular or plural) helps models become aware of the target syntax (subject-verb agreement). Our experiments basically confirm and strengthen this argument, with even stronger learning signals from negative examples, and we argue this allows us to evaluate the true capacity of the current architectures. In our experiments (Section exp), we show that our margin loss achieves higher syntactic performance.
Another relevant work on the capacity of LSTMs is BIBREF6, which shows that by distilling from syntactic LMs BIBREF7, LSTM-LMs can be robust on syntax. We show that our LMs with the margin loss outperforms theirs in most of the aspects, further strengthening the capacity of LSTMs, and also discuss the limitation.
The latter part of this paper is a detailed analysis of the trained models and the introduced losses. Our second question is about the true limitation of LSTM-LMs: are there still any syntactic constructions that the models cannot handle robustly even with our direct learning signals? This question can be seen as a fine-grained version of the one raised by BIBREF5, approached with a stronger tool and an improved evaluation metric. Among the tested constructions, we find that syntactic agreement across an object relative clause (RC) is challenging. To inspect whether this is due to an architectural limitation, we train another LM on a dataset in which we unnaturally augment sentences involving object RCs. Since it is known that object RCs are relatively rare compared to subject RCs BIBREF8, frequency may be the main reason for the lower performance. Interestingly, even when increasing the number of sentences with an object RC by eight times (more than twice the number of sentences with a subject RC), the accuracy does not reach the same level as agreement across a subject RC. This result suggests an inherent difficulty in tracking a syntactic state across an object RC for sequential neural architectures.
We finally provide an ablation study to understand the encoded linguistic knowledge in the models learned with the help of our method. We experiment under reduced supervision at two different levels: (1) at a lexical level, by not giving negative examples on verbs that appear in the test set; (2) at a construction level, by not giving negative examples about a particular construction, e.g., verbs after a subject RC. We observe no huge score drops by both. This suggests that our learning signals at a lexical level (negative words) strengthen the abstract syntactic knowledge about the target constructions, and also that the models can generalize the knowledge acquired by negative examples to similar constructions for which negative examples are not explicitly given. The result also implies that negative examples do not have to be complete and can be noisy, which will be appealing from an engineering perspective.
<<</Introduction>>>
<<<Target Task and Setup>>>
The most common evaluation metric of an LM is perplexity. Although neural LMs achieve impressive perplexity BIBREF9, it is an average score across all tokens and does not inform the models' behaviors on linguistically challenging structures, which are rare in the corpus. This is the main motivation to separately evaluate the models' syntactic robustness by a different task.
<<<Syntactic evaluation task>>>
As introduced in Section intro, the task for a model is to assign a higher probability to the grammatical sentence over the ungrammatical one, given a pair of minimally different sentences at a critical position affecting the grammaticality. For example, (UNKREF3) and (UNKREF5) only differ in the final verb form, and to assign a higher probability to (UNKREF3), models need to be aware of the agreement dependency between author and laughs over an RC.
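The evaluation itself amounts to comparing two sentence log-probabilities under the LM; a minimal sketch, with our own function names and assuming a PyTorch LM that returns per-token logits, is given below.

import torch
import torch.nn.functional as F

def sentence_logprob(model, token_ids):
    # token_ids: (T,) LongTensor including BOS; score each next token given its prefix.
    logits = model(token_ids[:-1].unsqueeze(0))          # (1, T-1, V)
    logp = F.log_softmax(logits, dim=-1)
    return logp[0, torch.arange(len(token_ids) - 1), token_ids[1:]].sum()

def prefers_grammatical(model, good_ids, bad_ids):
    # The model "passes" a test pair if it assigns the grammatical sentence
    # a higher log-probability than its minimally different counterpart.
    return sentence_logprob(model, good_ids) > sentence_logprob(model, bad_ids)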
<<<@!START@BIBREF0@!END@ test set>>>
While initial work BIBREF4, BIBREF10 has collected test examples from naturally occurring sentences, this approach suffers from the coverage issue, as syntactically challenging examples are relatively rare. We use the test set compiled by BIBREF0, which consists of synthetic examples (in English) created by a fixed vocabulary and a grammar. This approach allows us to collect varieties of sentences with complex structures.
The test set is divided according to the required syntactic ability. Many cases are about different patterns of subject-verb agreement, including local (UNKREF8) and non-local ones across a prepositional phrase or a subject/object RC, and coordinated verb phrases (UNKREF9). (UNKREF1) is an example of agreement across an object RC.
The senators smile/*smiles.
The senators like to watch television shows and are/*is twenty three years old.
Previous work has shown that non-local agreement is particularly challenging for sequential neural models BIBREF0.
The other patterns are reflexive anaphora dependencies between a noun and a reflexive pronoun (UNKREF10), and on negative polarity items (NPIs), such as ever, which requires a preceding negation word (e.g., no and none) at an appropriate scope (UNKREF11):
The authors hurt themselves/*himself.
No/*Most authors have ever been popular.
Note that NPI examples differ from the others in that the context determining the grammaticality of the target word (No/*Most) does not precede it. Rather, the grammaticality is determined by the following context. As we discuss in Section method, this property makes it difficult to apply training with negative examples for NPIs for most of the methods studied in this work.
All examples above (UNKREF1–UNKREF11) are actual test sentences, and we can see that, since they are synthetic, some may sound somewhat unnatural. The main argument behind using this dataset is that even if not very natural, they are still strictly grammatical, and an LM equipped with robust syntactic abilities should be able to handle them as a human would do.
<<</@!START@BIBREF0@!END@ test set>>>
<<</Syntactic evaluation task>>>
<<<Language models>>>
<<<Training data>>>
Following the practice, we train LMs on the dataset not directly relevant to the test set. Throughout the paper, we use an English Wikipedia corpus assembled by BIBREF10, which has been used as training data for the present task BIBREF0, BIBREF6, consisting of 80M/10M/10M tokens for training/dev/test sets. It is tokenized and rare words are replaced by a single unknown token, amounting to the vocabulary size of 50,000.
<<</Training data>>>
<<<Baseline LSTM-LM>>>
Since our focus in this paper is an additional loss exploiting negative examples (Section method), we fix the baseline LM throughout the experiments. Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss. Word embeddings are 400-dimensional, and input and output embeddings are tied BIBREF11. Deviating from some prior work BIBREF0, BIBREF1, we train LMs at sentence level as in sequence-to-sequence models BIBREF12. This setting has been employed in some previous work BIBREF3, BIBREF6.
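For concreteness, a minimal PyTorch definition matching the stated sizes (three LSTM layers, 1,150 hidden units, 400-dimensional tied embeddings, a 50,000-word vocabulary) might look as follows; it omits the dropout variants, and making the last LSTM layer output 400 units so that weight tying is possible is our assumption, not something stated in the text.

import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    # Three-layer LSTM-LM with tied input/output embeddings (sizes from the text).
    def __init__(self, vocab_size=50000, emb_dim=400, hidden_dim=1150):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The last layer outputs emb_dim so the output projection can share
        # weights with the input embedding (assumed detail).
        self.rnns = nn.ModuleList([
            nn.LSTM(emb_dim, hidden_dim, batch_first=True),
            nn.LSTM(hidden_dim, hidden_dim, batch_first=True),
            nn.LSTM(hidden_dim, emb_dim, batch_first=True),
        ])
        self.decoder = nn.Linear(emb_dim, vocab_size)
        self.decoder.weight = self.embed.weight   # weight tying

    def forward(self, tokens):
        x = self.embed(tokens)                    # (B, T, emb_dim)
        for rnn in self.rnns:
            x, _ = rnn(x)
        return self.decoder(x)                    # (B, T, vocab_size) logits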
Parameters are optimized by SGD. For regularization, we apply dropout on word embeddings and outputs of every layer of LSTMs, with weight decay of 1.2e-6, and anneal the learning rate by 0.5 if the validation perplexity does not improve successively, checking every 5,000 mini-batches. Mini-batch size, dropout weight, and initial learning rate are tuned by perplexity on the dev set of Wikipedia dataset.
The size of our three-layer LM is the same as the state-of-the-art LSTM-LM at document-level BIBREF9. BIBREF0's LSTM-LM is two-layer with 650 hidden units and word embeddings. Comparing two, since the word embeddings of our models are smaller (400 vs. 650) the total model sizes are comparable (40M for ours vs. 39M for theirs). Nonetheless, we will see in the first experiment that our carefully tuned three-layer model achieves much higher syntactic performance than their model (Section exp), being a stronger baseline to our extensions, which we introduce next.
<<</Baseline LSTM-LM>>>
<<</Language models>>>
<<</Target Task and Setup>>>
<<<Learning with Negative Examples>>>
Now we describe four additional losses for exploiting negative examples. The first two are existing ones, proposed for a similar purpose or under a different motivation. As far as we know, the latter two have not appeared in past work.
We note that we create negative examples by modifying the original Wikipedia training sentences. As a running example, let us consider the case where sentence (UNKREF19) exists in a mini-batch, from which we create a negative example (UNKREF21).
An industrial park with several companies is located in the close vicinity.
An industrial park with several companies are located in the close vicinity.
<<<Notations>>>
By a target word, we mean a word for which we create a negative example (e.g., is). We distinguish two types of negative examples: a negative token and a negative sentence; the former means a single incorrect word (e.g., are).
<<</Notations>>>
<<<Negative Example Losses>>>
<<<Binary-classification loss>>>
This is proposed by BIBREF5 to complement a weak inductive bias in LSTM-LMs for learning syntax. It is multi-task learning across the cross-entropy loss ($L_{lm}$) and an additional loss ($L_{add}$):
where $\beta $ is a relative weight for $L_{add}$. Given the outputs of the LSTM, a linear layer and a binary softmax layer predict whether the next token is singular or plural. $L_{add}$ is a loss for this classification, only defined for the contexts preceding a target token $x_{i}$:
where $x_{1:i} = x_1 \cdots x_{i}$ is a prefix sequence and $\mathbf {h^*}$ is a set of all prefixes ending with a target word (e.g., An industrial park with several companies is) in the training data. $\textrm {num}(x) \in \lbrace \textrm {singular, plural} \rbrace $ is a function returning the number of $x$. In practice, for each mini-batch for $L_{lm}$, we calculate $L_{add}$ for the same set of sentences and add these two to obtain a total loss for updating parameters.
As we mentioned in Section intro, this loss does not exploit negative examples explicitly; essentially a model is only informed of a key position (target word) that determines the grammaticality. This is rather an indirect learning signal, and we expect that it does not outperform the other approaches.
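A sketch of this auxiliary number-prediction loss is given below, assuming we have the LSTM hidden state at each prefix ending right before a target word and a gold singular/plural label; the two-class head and all names are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

number_head = nn.Linear(1150, 2)   # hidden size from the baseline LM; 2 = {singular, plural}

def binary_number_loss(hidden_states, number_labels):
    # hidden_states: (M, H) LSTM outputs at prefixes ending right before a target
    # word; number_labels: (M,) with 0 = singular, 1 = plural.
    logits = number_head(hidden_states)
    return F.cross_entropy(logits, number_labels)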
<<</Binary-classification loss>>>
<<<Unlikelihood loss>>>
This is recently proposed BIBREF15 for resolving the repetition issue, a known problem for neural text generators BIBREF16. Aiming at learning a model that can suppress repetition, they introduce an unlikelihood loss, which is an additional loss at the token level that explicitly penalizes choosing words that previously appeared in the current context.
We customize their loss for negative tokens $x_i^*$ (e.g., are in (UNKREF21)). Since this loss is added at the token level, the total loss is $L_{lm}$ instead of Eq. (), which we modify as:
where $\textrm {neg}_t(\cdot )$ returns negative tokens for a target $x_i$, and $\alpha $ controls the weight. $\mathbf {x}$ is a sentence in the training data $D$. The unlikelihood loss strengthens the signal to penalize undesirable words in a context by explicitly reducing the likelihood of negative tokens $x_i^*$. This is a more direct learning signal than the binary classification loss.
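Concretely, the per-position unlikelihood term for a negative token can be sketched as below, following the general form of BIBREF15 adapted to a single negative token per target; this is our paraphrase, not the exact formulation in the paper.

import torch

def unlikelihood_term(log_probs, negative_token_id, alpha=1000.0, eps=1e-8):
    # log_probs: (V,) log-probabilities over the vocabulary at a target position.
    p_neg = log_probs[negative_token_id].exp()
    # Penalize probability mass placed on the incorrect token.
    return -alpha * torch.log(1.0 - p_neg + eps)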
<<</Unlikelihood loss>>>
<<<Sentence-level margin loss>>>
We propose a different loss, in which the likelihoods for correct and incorrect sentences are more tightly coupled. As in the binary classification loss, the total loss is given by Eq. (). We consider the following loss for $L_{add}$:
where $\delta $ is a margin value between the log-likelihood of the original sentence $\mathbf {x}$ and the negative sentences $\lbrace \mathbf {x}_j^* \rbrace $. $\textrm {neg}_s(\cdot )$ returns a set of negative sentences by modifying the original one. Note that we change only one token for each $\mathbf {x}_j^*$, and thus may obtain multiple negative sentences from one $\mathbf {x}$ when it contains multiple target tokens (e.g., she leaves there but comes back ...).
Compared to the unlikelihood loss, this loss not only decreases the likelihood of a negative example but also tries to guarantee a minimal difference between the two likelihoods. The learning signal of this loss seems stronger in this sense; however, it lacks token-level supervision, which may provide a more direct signal for learning a clear contrast between correct and incorrect words. This is an empirical question we pursue in the experiments.
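A sketch of the sentence-level margin term, assuming sentence log-likelihoods computed under the LM and a margin $\delta $; summing the hinge terms over the negative sentences is our assumption about the reduction.

import torch

def sentence_margin_loss(logp_pos, logp_negs, delta=10.0):
    # logp_pos: scalar log-likelihood of the original sentence;
    # logp_negs: (J,) log-likelihoods of its negative sentences.
    return torch.clamp(delta - (logp_pos - logp_negs), min=0.0).sum()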
<<</Sentence-level margin loss>>>
<<<Token-level margin loss>>>
Our final loss is a combination of the previous two, by replacing $g(x_i)$ in the unlikelihood loss by a margin loss:
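The token-level variant applies the same hinge at the positions of target words, contrasting the log-probability of the correct token with that of its negative token; again this is a sketch with our own names, not the authors' code.

import torch

def token_margin_term(log_probs, correct_id, negative_id, delta=10.0):
    # log_probs: (V,) log-probabilities at one target position.
    return torch.clamp(delta - (log_probs[correct_id] - log_probs[negative_id]),
                       min=0.0)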
<<</Token-level margin loss>>>
<<</Negative Example Losses>>>
<<<Parameters>>>
Each method introduces a few additional hyperparameters. For the binary classification ($\beta $) and unlikelihood ($\alpha $) losses, we select the values from $\lbrace 1,10,100,1000\rbrace $ that achieve the best average syntactic performance (we find $\alpha =1000, \beta =1$). For the two margin losses, we fix $\beta =1.0$ and $\alpha =1.0$ and only examine the effects of the margin value.
<<</Parameters>>>
<<<Scope of Negative Examples>>>
Since our goal is to understand to what extent LMs can become sensitive to the target syntactic constructions when given explicit supervision via negative examples, we only prepare negative examples for the constructions that are directly tested at evaluation. Specifically, we mark the following words in the training data and create negative examples:
To create negative examples on subject-verb agreement, we mark all present verbs and change their numbers.
We also create negative examples on reflexive anaphora, by flipping between themselves $\leftrightarrow $ {himself, herself}.
Both constructions concern the syntactic number of a target word. For the binary classification loss we treat both as target words, unlike the original work, which only deals with subject-verb agreement BIBREF5. We use a single common linear layer for both constructions.
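As an illustration of this marking-and-flipping procedure, here is a minimal Python sketch (a toy example of my own, not the paper's code; the flip tables are hypothetical stand-ins for full morphological handling):

# Toy sketch of negative-example creation: flip the syntactic number of a
# marked target token (a present verb or a reflexive). Illustration only.
REFLEXIVE_FLIP = {"themselves": "himself", "himself": "themselves", "herself": "themselves"}
VERB_FLIP = {"is": "are", "are": "is", "likes": "like", "like": "likes"}  # toy table

def make_negative(tokens, target_index):
    """Return a copy of `tokens` with the target token's number flipped,
    or None if the toy tables do not cover the target word."""
    word = tokens[target_index]
    flipped = REFLEXIVE_FLIP.get(word) or VERB_FLIP.get(word)
    if flipped is None:
        return None
    negative = list(tokens)
    negative[target_index] = flipped
    return negative

tokens = "An industrial park with several companies is nearby .".split()
print(make_negative(tokens, 6))  # ..., 'companies', 'are', 'nearby', '.'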
In this work, we do not create negative examples for NPIs, mainly for technical reasons. Among the four losses, only the sentence-level margin loss can correctly handle negative examples for NPIs, essentially because the other losses are token-level: for NPIs, the left context does not contain enough information to decide the grammaticality of the target token (a quantifier; no, most, etc.) (Section task). Instead, we use the NPI test cases as a proxy to check for possible negative (or positive) side effects of specifically targeting the other constructions. We will see that, in particular for our margin losses, such negative effects are very small.
<<</Scope of Negative Examples>>>
<<</Learning with Negative Examples>>>
<<<Experiments on Additional Losses>>>
We first see the overall performance of baseline LMs as well as the effects of additional losses. Throughout the experiments, for each setting, we train five models from different random seeds and report the average score and standard deviation.
<<<Naive LSTM-LMs perform well>>>
The main accuracy comparison across target constructions for the different settings is presented in Table main. We first notice that our baseline LSTM-LMs (Section lm) perform much better than BIBREF0's LM. A similar observation was recently made by BIBREF6. This suggests that the original work underestimates the true syntactic ability induced by LSTM-LMs. The table also shows the results of their LSTMs distilled from RNNGs (Section intro).
<<</Naive LSTM-LMs perform well>>>
<<<Higher margin value is effective>>>
For the two types of margin loss, which margin value should we use? Figure margin reports average accuracies within the same types of constructions. For both the token- and sentence-level losses, task performance increases with $\delta $, but too large a value (15) has a negative effect, in particular on reflexive anaphora. Both methods increase perplexity, but this effect is much smaller for the token-level loss. In the following experiments, we fix the margin value to 10 for both, which achieves the best syntactic performance.
<<</Higher margin value is effective>>>
<<<Which additional loss works better?>>>
We see a clear tendency for our token-level margin loss to achieve the best overall performance. The unlikelihood loss does not work unless we choose a huge weight parameter ($\alpha =1000$), and even then it does not outperform ours at a similar perplexity. The improvements from the binary-classification loss are smaller, indicating that its signal is weaker than that of the other methods with explicit negative examples. The sentence-level margin loss is conceptually advantageous in that it can deal with any type of negative example defined over a sentence, including NPIs. It is often competitive with the token-level margin loss, but shows a relatively large increase in perplexity (4.9 points), and this increase is observed even with smaller margin values (Figure margin). Understanding the cause of this degradation, as well as alleviating it, is an important future direction.
<<</Which additional loss works better?>>>
<<</Experiments on Additional Losses>>>
<<<Limitations of LSTM-LMs>>>
In Table main, the accuracies on dependencies across an object RC are relatively low. The central question in this experiment is whether this low performance is due to a limitation of current architectures, or to other factors such as frequency. We base our discussion on the contrast between object and subject RCs, illustrated below:
The authors (that) the chef likes laugh.
The authors that like the chef laugh.
Importantly, the accuracies for a subject RC are more stable, reaching 99.8% with the token-level margin loss, although the content words used in the examples are common.
It is known that object RCs are less frequent than subject RCs BIBREF8, BIBREF18, and it could be the case that the use of negative examples still does not fully alleviate this factor. Here, to understand the true limitation of the current LSTM architecture, we try to eliminate such other factors as much as possible under a controlled experiment.
<<<Setup>>>
We first inspect the frequencies of object and subject RCs in the training data by parsing it with the state-of-the-art Berkeley neural parser BIBREF19. In total, subject RCs occur 373,186 times while object RCs occur only 106,558 times. We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Section lm). To this end, we randomly pick 30 million sentences from Wikipedia (not overlapping with any sentences in the original corpus), parse them with the same parser, and keep the sentences containing an object RC, amounting to 680,000 sentences. Among the test cases about object RCs, we compare accuracies on subject-verb agreement, to enable a comparison with subject RCs. We also evaluate on the “animate only” subset, which corresponds to the test cases for subject RCs with only differences in word order and inflection (as in the object and subject RC examples above; see footnote FOOTREF47). Of particular interest to us is the accuracy on these animate cases. Since the vocabularies are exactly the same, we hypothesize that with our augmentation the accuracy will reach the same level as that on subject RCs.
<<</Setup>>>
<<<Results>>>
However, for both the full and animate-only cases, accuracies remain below those for subject RCs (Figure orc). Although we see improvements over the original score (93.7), the highest average accuracy of the token-level margin loss on the “animate” subset is 97.1 (“with that”), not beyond 99%. This result points to some architectural limitation of LSTM-LMs in handling object RCs robustly at a near-perfect level. Answering why the accuracy does not reach (almost) 100%, perhaps in terms of other empirical properties or inductive biases BIBREF20, BIBREF21, is left for future work.
<<</Results>>>
<<</Limitations of LSTM-LMs>>>
<<<Do models generalize explicit supervision, or just memorize it?>>>
One distinguishing property of our margin losses, in particular the token-level loss, is that they are highly lexical, making an explicit contrast between correct and incorrect words. This direct signal may lead models to acquire very specialized knowledge about each target word, rather than knowledge that generalizes across similar words and contexts. In this section, to gain insight into the transferability of the syntactic knowledge induced by our margin losses, we provide an ablation study in which certain negative examples are removed during training.
<<</Do models generalize explicit supervision, or just memorize it?>>>
<<<Conclusion>>>
We have shown that by exploiting negative examples explicitly, the syntactic abilities of LSTM-LMs improve greatly, demonstrating a new capacity for handling syntax robustly. Given the success of our approach and our final analysis of transferability, which indicates that the negative examples do not have to be complete, one interesting future direction is to extend the approach to inducing negative examples automatically, possibly using orthographic and/or distributional indicators.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nTarget Task and Setup\nSyntactic evaluation task\n@!START@BIBREF0@!END@ test set\nLanguage models\nTraining data\nBaseline LSTM-LM\nLearning with Negative Examples\nNotations\nNegative Example Losses\nBinary-classification loss\nUnlikelihood loss\nSentence-level margin loss\nToken-level margin loss\nParameters\nScope of Negative Examples\nExperiments on Additional Losses\nNaive LSTM-LMs perform well\nHigher margin value is effective\nWhich additional loss works better?\nLimitations of LSTM-LMs\nSetup\nResults\nDo models generalize explicit supervision, or just memorize it?\nConclusion"
],
"type": "outline"
}
|
1912.00582
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
BLiMP: A Benchmark of Linguistic Minimal Pairs for English
<<<Abstract>>>
We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLiMP), a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars, and aggregate human agreement with the labels is 96.4%. We use it to evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs. We find that state-of-the-art models identify morphological contrasts reliably, but they struggle with semantic restrictions on the distribution of quantifiers and negative polarity items and subtle syntactic phenomena such as extraction islands.
<<</Abstract>>>
<<<Introduction>>>
Current neural networks for language understanding rely heavily on unsupervised pretraining tasks like language modeling. However, it is still an open question what degree of knowledge state-of-the-art language models (LMs) acquire about different linguistic phenomena. Many recent studies BIBREF0, BIBREF1, BIBREF2 have advanced our understanding in this area by evaluating LMs' preferences between minimal pairs of sentences, as in Example SECREF1. However, these studies have used different analysis metrics and focused on a small set of linguistic paradigms, which limits big-picture comparisons between them.
a. The cat annoys Tim. (grammatical)
b. The cat annoy Tim. (ungrammatical)
We introduce the Benchmark of Linguistic Minimal Pairs (shortened to BLiMP or just *X ) a linguistically-motivated benchmark for assessing LMs' knowledge across a wide variety of English phenomena, encapsulating both previously studied and novel contrasts. *X consists of 67 datasets automatically generated from expert-crafted grammars, each containing 1000 minimal pairs and organized by phenomenon into 12 categories. Validation with crowd workers shows that humans overwhelmingly agree with the contrasts in *X .
We use *X to study several pretrained LMs: Transformer-based LMs GPT-2 BIBREF3 and Transformer-XL BIBREF4, an LSTM LM trained by BIBREF5, and a $n$-gram LM. We evaluate whether the LM assigns a higher probability to the acceptable sentence in each minimal pair in *X . This experiment gives a sense of which grammatical distinctions LMs are sensitive to in general, and the extent to which unrelated models have similar strengths and weaknesses. We conclude that current neural LMs robustly learn agreement phenomena and even some subtle syntactic phenomena such as ellipsis and control/raising. They perform comparatively worse (and well below human level) on minimal pairs related to argument structure and the licensing of negative polarity items and quantifiers. All models perform at or near chance on extraction islands, which we conclude is the most challenging phenomenon covered by *X . Overall, we note that all models we evaluate fall short of human performance by a wide margin. GPT-2, which performs the best, does match (even just barely exceeds) human performance on some grammatical phenomena, but remains 8 percentage points below human performance overall.
We conduct additional experiments to investigate the effect of training size on LSTM model performance on *X . We show that learning trajectories differ, sometimes drastically, across different paradigms in the dataset, with phenomena such as anaphor agreement showing consistent improvement as training size increases, and other phenomena such as NPIs and extraction islands remaining near chance despite increases in training size. We also compare overall sentence probability to two other built-in metrics coded on *X and find that the chosen metric changes how we evaluate relative model performance.
<<</Introduction>>>
<<<Background & Related Work>>>
<<<Language Models>>>
The objective of a language model is to give a probability distribution over the possible strings of a language. Language models can be built with either neural or non-neural methods. Due to their unsupervised nature, they can be trained without external annotations. More recently, neural language modeling has been shown to be a strong pretraining task for natural language understanding tasks BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some recent models, such as BERT BIBREF9, use closely related tasks such as masked language modeling.
In the last decade, we have seen two major paradigm shifts in the state of the art for language modeling. The first major shift for language modeling was the movement from statistical methods based on $n$-grams BIBREF10 to neural methods such as LSTMs BIBREF11, which directly optimize on the task of predicting the next word. More recently, Transformer-based architectures employing self-attention BIBREF12 have outperformed LSTMs at language modeling BIBREF4. Although it is reasonably clear that these shifts have resulted in stronger language models, the primary metric of performance is perplexity, which cannot give detailed insight into these models' linguistic knowledge. Evaluation on downstream task benchmarks BIBREF13, BIBREF14 is more informative, but might not present a broad enough challenge or represent grammatical distinctions at a sufficiently fine-grained level.
<<</Language Models>>>
<<<Evaluating Linguistic Knowledge>>>
A large number of recent studies have used acceptability judgments to reveal what neural networks know about grammar. One branch of this literature has focused on using minimal pairs to infer whether LMs learn about specific linguistic phenomena. Table TABREF4 gives a summary of work that has studied linguistic phenomena in this way. For instance, linzen2016assessing look closely at minimal pairs contrasting subject-verb agreement. marvin2018targeted look at a larger set of phenomena, including negative polarity item licensing and reflexive licensing. However, a relatively small set of phenomena is covered by these studies, to the exclusion of well-studied phenomena in linguistics such as control and raising, ellipsis, distributional restrictions on quantifiers, and countless others. This is likely due to the labor-intensive nature of collecting examples that exhibit informative grammatical phenomena and their acceptability judgments.
A related line of work evaluates neural networks on acceptability judgments in a more general domain of grammatical phenomena. Corpora of sentences and their grammaticality are collected for this purpose in a number of computational studies on grammaticality judgment BIBREF26, BIBREF27, BIBREF16. The most recent and comprehensive corpus is CoLA BIBREF16, which contains around 10k sentences covering a wide variety of linguistic phenomena from 23 linguistic papers and textbooks. CoLA, which is included in the GLUE benchmark BIBREF13, has been used to track advances in the general grammatical knowledge of reusable sentence understanding models. Current models like BERT BIBREF9 and T5 BIBREF28 can be trained to give acceptability judgments that approach or even exceed individual human agreement with CoLA.
While CoLA can also be used to evaluate phenomenon-specific knowledge of models, this method is limited by the need to train a supervised classifier on CoLA data prior to evaluation. BIBREF29 compare the CoLA performance of pretrained sentence understanding models: an LSTM, GPT BIBREF8, and BERT. They find that these models have good performance on sentences involving marked argument structure, and struggle on sentences with long-distance dependencies like those found in questions, though the Transformers have a noticeable advantage. However, evaluating supervised classifiers prevents making strong conclusions about the models themselves, since biases in the training data may affect the results. For instance, relatively strong performance on a phenomenon might be due to a model's implicit knowledge or to frequent occurrence of similar examples in the training data. Evaluating LMs on minimal pairs evades this problem by eschewing supervised training on acceptability judgments. It is possible to use the LM probability of a sentence as a proxy for acceptability because other factors impacting a sentence's probability such as length and lexical content are controlled for.
<<</Evaluating Linguistic Knowledge>>>
<<</Background & Related Work>>>
<<<Data>>>
The *X dataset consists of 67 paradigms of 1000 sentence pairs. Each paradigm is annotated for the unique contrast it isolates and the broader category of phenomena it is part of. The data is automatically generated according to expert-crafted grammars, and our automatic labels are validated with crowd-sourced human judgments.
<<<Data generation procedure>>>
To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. The examples below show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the reflexive, which differs only in whether the anaphor agrees in number with its antecedent. Our generation codebase and scripts are freely available.
Acceptable template: DP1 V1 refl_match .  (e.g., The cats licked themselves .)
Unacceptable template: DP1 V1 refl_mismatch .  (e.g., The cats licked itself .)
This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast.
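To make the procedure concrete, here is a toy Python sketch of template-filling for the reflexive paradigm above (an illustration only; the slot names and mini-vocabulary are hypothetical, whereas the real BLiMP generation code draws from over 3000 richly annotated words):

# Toy sketch of template-based minimal-pair generation (illustration only).
import random

VOCAB = {
    "DP1_plural": ["The cats", "The dogs", "The teachers"],
    "V1_past": ["licked", "hurt", "described"],
    "refl_match": ["themselves"],             # agrees with the plural subject
    "refl_mismatch": ["itself", "himself"],   # number mismatch
}

def make_pair(rng):
    # Shared material is sampled once, so the pair differs only in the reflexive.
    subject = rng.choice(VOCAB["DP1_plural"])
    verb = rng.choice(VOCAB["V1_past"])
    good = f"{subject} {verb} {rng.choice(VOCAB['refl_match'])} ."
    bad = f"{subject} {verb} {rng.choice(VOCAB['refl_mismatch'])} ."
    return good, bad

rng = random.Random(0)
print(make_pair(rng))  # e.g. ('The dogs hurt themselves .', 'The dogs hurt itself .')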
<<</Data generation procedure>>>
<<<Coverage>>>
The paradigms that are covered by *X represent well-established contrasts in English morphology, syntax, and semantics. Each paradigm is grouped into one of 12 phenomena, shown in Table TABREF1. The paradigms are selected with the constraint that they can be illustrated with minimal pairs of equal sentence length and that it is of a form that could be written as a template, like in SECREF6 and SECREF6. While this dataset has broad coverage, it is not exhaustive – it is not possible to include every grammatical phenomenon of English, and there is no agreed-upon set of core phenomena. However, we consider frequent inclusion of a phenomenon in a syntax/semantics textbook as an informal proxy for what linguists consider to be core phenomena. We survey several syntax textbooks BIBREF31, BIBREF32, BIBREF33, and find that nearly all of the phenomena in *X are discussed in some source, and most of the topics that repeatedly appear in textbooks and can be represented with minimal pairs (e.g. agreement, argument selection, control/raising, wh-extraction/islands, binding) are present in *X . Because the generation code is reusable, it is possible to generate paradigms not included in *X in the future.
<<</Coverage>>>
<<<Comparison to Related Resources>>>
With over 3000 words, *X has by far the widest lexical variability of any related generated dataset. The vocabulary includes verbs with 11 different subcategorization frames, including verbs that select for PPs, infinitival VPs, and embedded clauses. By comparison, datasets by BIBREF30 and BIBREF1 each use a vocabulary of well under 200 items. Other datasets of minimal pairs that achieve greater lexical and syntactic variety use data-creation methods that are limited in terms of empirical scope or control. BIBREF0 construct a dataset of minimal pairs for subject-verb agreement by changing the number marking on present-tense verbs in a subset of English Wikipedia. However this approach does not generalize beyond simple agreement phenomena. BIBREF27 build a dataset of minimal pairs by taking sentences from the BNC through round-trip machine translation. The resulting sentences contain a wider variety of grammatical violations, but it is not possible to control the nature of the violation and a single sentence may contain several violations.
<<</Comparison to Related Resources>>>
<<<Data validation>>>
To verify that the generated sentences represent a real contrast in acceptability, we conduct human validation via Amazon Mechanical Turk. Twenty separate validators rated five pairs from each of the 67 paradigms, for a total of 6700 judgments. We restricted validators to individuals currently located in the US who self-reported as native speakers of English. To assure that our validators made a genuine effort on the task, each HIT included an attention check item and a hidden field question to catch bot-assisted humans. For each minimal pair, 20 different individuals completed a forced-choice task that mirrors the task done by the LMs; the human-determined “acceptable” sentence was calculated via majority vote of annotators. By this metric, we estimate aggregate human agreement with our annotations to be 96.4% overall. As a threshold of inclusion in *X , the majority of validators needed to agree with *X on at least 4/5 examples from each paradigm. Thus, all 67 paradigms in the public version of *X passed this validation, and only two additional paradigms had to be rejected on this criterion. We also estimate individual human agreement to be 88.6% overall using the approximately 100 annotations from each paradigm. Table TABREF14 reports these individual human results (alongside model results) as a conservative measure of human agreement.
<<</Data validation>>>
<<</Data>>>
<<<Models & Methods>>>
<<<Models>>>
<<<GPT-2>>>
GPT-2 BIBREF3 is a large-scale language model using the Transformer architecture BIBREF12. We use the large version of GPT-2, which contains 24 layers and 345M parameters. The model is pretrained on BIBREF3's custom-built WebText dataset, which contains 40GB of text extracted from web pages and filtered by humans. To the best of our knowledge, the WebText corpus is not publicly available. Assuming approximately 5–6 bytes/characters per word on average, we estimate WebText contains approximately 8B tokens. The testing code for GPT-2 has been integrated into jiant, a codebase for training and evaluating sentence understanding models BIBREF34.
<<</GPT-2>>>
<<<Transformer-XL>>>
Transformer-XL BIBREF4 is another multi-layer Transformer-based neural language model. We test a pretrained Transformer-XL model with 18 layers of Transformer decoders and 16 attention heads for each layer. The model is trained on WikiText-103 BIBREF35, a corpus of 103M tokens from high-quality Wikipedia articles. Code for testing Transformer-XL on *X is also implemented in jiant.
<<</Transformer-XL>>>
<<<LSTM>>>
We include a long short-term memory (LSTM; BIBREF36) language model in our experiments. Specifically, we test a pretrained LSTM language model from BIBREF5 on *X . The model is trained on a 90M token corpus extracted from English Wikipedia. To investigate the effect of training size on models' *X performance, we retrain a series of LSTM models with the same hyperparameters and the following training sizes: 64M, 32M, 16M, 8M, 4M, 2M, 1M, 1/2M, 1/4M, and 1/8M tokens. For each size, we train the model on five different random samples drawn from the original training data, which has a size of 83M tokens. We release our LSTM evaluation code.
<<</LSTM>>>
<<<5-gram>>>
We build a 5-gram LM on the English Gigaword corpus BIBREF37, which consists of 3.07B tokens. To efficiently query $n$-grams we use an implementation based on BIBREF38, which is shown to speed up estimation BIBREF39. We release our $n$-gram evaluation code.
<<</5-gram>>>
<<</Models>>>
<<<Evaluation>>>
We mainly evaluate the models by measuring whether the LM assigns a higher probability to the grammatical sentence within the minimal pair. This method, used by BIBREF1, is only meaningful for comparing sentences of similar length and lexical content, as overall sentence probability tends to decrease as sentence length increases or word frequencies decrease BIBREF27. However, as discussed in Section SECREF3, we design every paradigm in *X to be compatible with this method.
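As an illustration of this full-sentence evaluation, here is a minimal Python sketch using the publicly available small GPT-2 checkpoint through the HuggingFace transformers library (an illustration only; the paper's own evaluations run through jiant and the authors' released code, and use the larger models described above):

# Sketch: pick the minimal-pair member with the higher LM log-probability.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence):
    """Total log-probability of the sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # .loss is the mean cross-entropy over predicted positions, so
        # multiplying by the number of predictions gives the total NLL.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

good, bad = "The cats licked themselves.", "The cats licked itself."
print(sentence_log_prob(good) > sentence_log_prob(bad))  # True if the LM prefers the acceptable sentence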
<<</Evaluation>>>
<<</Models & Methods>>>
<<<Results>>>
We report the 12-category accuracy results for all models and human evaluation in Table TABREF14.
<<<Overall Results>>>
An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all paradigms. GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other. All models perform well below estimated human agreement (as described in Section SECREF11). The $n$-gram model's poor overall performance confirms *X is not solvable from co-occurrence information alone. Rather, success at *X is driven by the more abstract features learned by neural networks. There are no categories in which the $n$-gram approaches human performance.
Because we evaluate pretrained models that differ in architecture and training data quantity/domain, we can only speculate about what drives these differences (though see Section SECREF37 for a controlled ablation study on the LSTM LM). Nonetheless, the results seem to indicate that access to training data is the main driver of performance on *X for the neural models we evaluate. On purely architectural grounds, the similar performance of Transformer-XL and the LSTM is surprising since Transformer-XL is the state of the art on several LM training sets. However, they are both trained on 100$\pm 10$M tokens of Wikipedia text. Relatedly, GPT-2's advantage may come from the fact that it is trained on roughly two orders of magnitude more data. While it is unclear whether LSTMs trained on larger datasets could rival GPT-2, such experiments are impractical due to the difficulty of scaling LSTMs to this size.
<<</Overall Results>>>
<<<Phenomenon-Specific Results>>>
The results also reveal considerable variation in performance across grammatical phenomena. Models generally perform best and closest to human level on morphological phenomena. This includes anaphor agreement, determiner-noun agreement, and subject-verb agreement. In each of these domains, GPT-2's performance is within 2.1 percentage points of humans. The set of challenging phenomena is more diverse. Islands are the hardest phenomenon by a wide margin. Only GPT-2 performs noticeably above chance, but it remains 20 points below humans. Some semantic phenomena, specifically those involving NPIs and quantifiers, are also challenging overall. All models show relatively weak performance on argument structure.
From these results we conclude that current SotA LMs have robust knowledge of basic facts of English agreement. This does not mean that LMs will come close to human performance for all agreement phenomena. Section SECREF32 discusses evidence that increased dependency length and the presence of agreement attractors of the kind investigated by BIBREF0 and BIBREF5 reduce performance on agreement phenomena.
The exceptionally poor performance on islands is hard to reconcile with BIBREF2's conclusion that LSTMs have knowledge of some island constraints. In part, this difference may come down to differences in metrics. BIBREF2 compare a set of four related sentences with gaps in the same position or no gaps to obtain the wh-licensing interaction as a metric of how strongly the LM identifies a filler-gap dependency in a single syntactic position. They consider an island constraint to have been learned if this value is close to zero. We instead compare LM probabilities of sentences with similar lexical content but with gaps in different syntactic positions. These metrics target different forms of grammatical knowledge, though both are desirable properties to find in an LM. We also note that the LMs we test do not have poor knowledge of filler-gap dependencies in general, with all neural models performing well above chance. This suggests that, while these models are able to establish long-distance dependencies in general, they are comparatively worse at identifying the syntactic domains in which these dependencies are blocked.
The semantic phenomena that models struggle with are usually attributed in current theories to a presupposition failure or contradiction arising from semantic composition or pragmatic reasoning BIBREF40, BIBREF41, BIBREF42. These abstract semantic and pragmatic factors may be difficult for LMs to learn. BIBREF1 also find that LSTMs largely fail to recognize NPI licensing conditions. BIBREF20 find that BERT (which is similar in scale to GPT-2) recognizes these conditions inconsistently in an unsupervised setting.
The weak performance on argument structure is somewhat surprising, since arguments are usually (though by no means always) local to their heads. Argument structure is closely related to semantic event structure BIBREF43, which may be comparatively difficult for LMs to learn. This finding contradicts BIBREF29's (BIBREF29) conclusion that argument structure is one of the strongest domains for neural models. However, BIBREF29 study supervised models trained on CoLA, which includes a large proportion of sentences related to argument structure.
<<</Phenomenon-Specific Results>>>
<<<Correlation of Model & Human Performance>>>
We also examine to what extent the models' performances are similar to each other, and how they are similar to human evaluation in terms of which phenomena are comparatively difficult. Figure TABREF29 shows the Pearson correlation between the four LMs and human evaluation on their accuracies in 67 paradigms. Compared to humans, GPT-2 has the highest correlation, closely followed by Transformer-XL and LSTM, though the correlation is only moderate. The $n$-gram's performance correlates with humans relatively weakly. Transformer-XL and LSTM are very highly correlated at 0.9, possibly reflecting their similar training data. Also, neural models correlate with each other more strongly than with humans or the $n$-gram model, suggesting neural networks share some biases that are not entirely human-like.
<<</Correlation of Model & Human Performance>>>
<<<Shallow Predictors of Performance>>>
We also ask what factors aside from linguistic phenomena make a minimal pair harder or easier for an LM to distinguish. We test whether shallow features like sentence length or overall sentence likelihood are predictors of whether the LM will have the right preference. The results are shown in Figure FIGREF31. While sentence length, perplexity and the probability of the good sentence all seem to predict model performance to a certain extent, the predictive power is not strong, especially for GPT-2, which is much less influenced by greater perplexity of the good sentence than the other models.
<<</Shallow Predictors of Performance>>>
<<</Results>>>
<<<Additional Experiments>>>
<<<Long-Distance Dependencies>>>
The presence of intervening material that lengthens an agreement dependency lowers accuracy on that sentence in both humans and LMs. We study how the presence or absence of this intervening material affects the ability of LMs to detect mismatches in agreement in *X . First, we test for knowledge of determiner-noun agreement with and without an intervening adjective, as in Example SECREF32. The results are plotted in Figure FIGREF33. The $n$-gram model is the most heavily impacted, performing on average 35 points worse. This is unsurprising, since the bigram consisting of a determiner and noun is far more likely to be observed than the trigram of determiner, adjective, and noun. For the neural models, we find a weak but consistent effect, with all models performing on average between 3 and 5 points worse when there is an intervening adjective.
a. Ron saw that man/*men.
b. Ron saw that nice man/*men.
Second, we test for sensitivity to mismatches in subject-verb agreement when an “attractor” noun of the opposite number intervenes. We compare attractors in relative clauses and as part of a relational noun as in Example SECREF32, following experiments by BIBREF0 and others. Again, we find an extremely large effect for the $n$-gram model, which performs over 50 points worse and well below chance when there is an attractor present, showing that the $n$-gram model is consistently misled by the presence of the attractor. All of the neural models perform above chance with an attractor present, but GPT-2 and the LSTM perform 22 and 20 points worse when an attractor is present. Transformer-XL's performance is harmed by only 5 points. Note that GPT-2 still has the highest performance in both cases, and even outperforms humans in the relational noun case. Thus, we reproduce BIBREF0's finding that attractors significantly reduce LSTM LMs' sensitivity to mismatches in agreement and find evidence that this holds true of Transformer LMs as well.
a. The sisters bake/*bakes.
b. The sisters who met Cheryl bake/*bakes.
c. The sisters of Cheryl bake/*bakes.
<<</Long-Distance Dependencies>>>
<<<Regular vs. Irregular Agreement>>>
In the determiner-noun agreement and subject-verb agreement categories, we generate separate datasets for nouns with regular and irregular number marking, as in Example SECREF34. All else being equal, only models with access to sub-word-level information should make any distinction between regular and irregular morphology.
a. Ron saw that nice kid/*kids. (regular)
b. Ron saw that nice man/*men. (irregular)
Contrary to this prediction, the results in Figure FIGREF36 show that the sub-word-level models GPT-2 and Transformer-XL show little effect of irregular morphology: they perform less than $0.013$ worse on irregulars than regulars. Given their high performance overall, this suggests they robustly encode number features without relying on segmental cues.
<<</Regular vs. Irregular Agreement>>>
<<<Training size and *X performance>>>
We also use *X to track how a model's knowledge of particular phenomena varies with the quantity of training data. We test this with the LSTM model and find that different phenomena in *X have notably different learning curves across different training sizes, as shown in Figure FIGREF39. Crucially, phenomena with similar results from the LSTM model trained on the full 83M tokens of training data may have very different learning curves. For example, the LSTM model performs well on both irregular forms and anaphor agreement, but the different learning curves suggest that more training data is required in the anaphor agreement case to achieve this same performance level. This is supported by a regression analysis showing that the best-fit line for anaphor agreement has the steepest slope (0.0623), followed by Determiner-Noun agreement (0.0426), Subject-Verb agreement (0.041), Irregular (0.039) and Ellipsis (0.0389). By contrast, Binding (0.016), Argument Structure (0.015), and Filler-Gap Dependency (0.0095) have shallower learning curves, showing a less strong effect of increases in training data size. The phenomena that showed the lowest performance overall, NPIs and Islands, also show the lowest effects of increases to training size, with slopes of 0.0078 and 0.0036, respectively. This indicates that, even given a substantially larger amount of training data, the LSTM is unlikely to achieve human-like performance on these phenomena – it simply fails to learn the necessary dependencies. It should be noted that these differences in learning curves show how *X performance dissociates from perplexity, the standard measure of LM performance: while perplexity keeps decreasing as training size increases, the performance on different *X phenomena shows very different learning curves.
<<</Training size and *X performance>>>
<<<Alternate Evaluation Methods>>>
There are several other techniques one can use to measure an LM's “preference” between two minimally different sentences. So far, we have considered only the full-sentence method, advocated for by BIBREF1, which compares the LM likelihood of the full sentences. In a followup experiment, we use two “prefix methods”, each of which has appeared in prior work in this area, that evaluate the model's preferences by comparing its prediction at a key point of divergence between the two sentences. Subsets of *X data—from the binding, determiner-noun agreement, and subject-verb agreement categories—are designed to be compatible with multiple methods, allowing us to conduct the first direct comparison. We find that all methods give broadly similar results when aggregating over a large set of paradigms, but some results diverge sharply for specific paradigms.
<<<One-prefix method>>>
In the one-prefix method, used by BIBREF0, a pair of sentences share the same initial portion of a sentence, but differ in a critical word that make them differ in grammaticality (e.g., The cat eats mice vs. The cat eat mice). The model's prediction is correct if it assigns a higher probability to the grammatical token given the shared prefix.
<<</One-prefix method>>>
<<<Two-prefix method>>>
In the two-prefix method, used by BIBREF19, a pair of sentences have different initial portions that diverge in some critical way, but the grammaticality difference is only revealed when a shared critical word is included (e.g., The cat eats mice vs. The cats eats mice). For these paradigms, we evaluate whether the model assigns a higher probability to the critical word conditioned on the grammatical prefix compared to the ungrammatical prefix. Note that the same pair of sentences cannot be compatible with both prefix methods, and that a pair may be compatible with the full-sentence method but neither prefix method.
For both prefix methods, it is crucial that the grammaticality of the sentence is unambiguously predictable from the critical word, but not sooner. With simple LM probabilities, the probabilities of the rest of the word tokens in the sentence also affect the performance. For example, a model may predict that `The cat ate the mouse' is more likely than `The cat eaten the mouse' without correctly predicting that $P(\emph {ate}|\emph {the cat}) > P(\emph {eaten}|\emph {the cat})$ if it predicts that $P(\emph {the mouse}|\emph {the cat ate})$ is much greater than $P(\emph {the mouse}|\emph {the cat eaten})$. Furthermore, it is unclear how a model assigns probabilities conditioned on an ungrammatical prefix, since ungrammatical sentences are largely absent from the training data. Using prefix probabilities allows us to exclude models' use of this additional information and evaluate how the models perform when they have just enough information to judge grammaticality.
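A minimal Python sketch of the two-prefix comparison (an illustration only, again using the small public GPT-2 checkpoint; it assumes the critical word maps to a single token for the LM's tokenizer, whereas real evaluation code must handle multi-token words):

# Sketch of the two-prefix method: compare P(critical word | prefix) under the
# grammatical and ungrammatical prefixes.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def next_token_log_prob(prefix, word):
    token_ids = tokenizer(" " + word).input_ids   # GPT-2 BPE marks word-initial spaces
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(prefix_ids).logits[0, -1]  # distribution over the next token
    return torch.log_softmax(logits, dim=-1)[token_ids[0]].item()

# "The cat eats mice" (good) vs. "The cats eats mice" (bad): same critical word "eats".
print(next_token_log_prob("The cat", "eats") > next_token_log_prob("The cats", "eats"))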
<<</Two-prefix method>>>
<<</Alternate Evaluation Methods>>>
<<</Additional Experiments>>>
<<<Discussion & Future Work>>>
We have shown ways in which *X can be used as tool to gain both high-level and fine-grained insight into the grammatical knowledge of language models. Like the GLUE benchmark BIBREF13, *X assigns a single overall score to an LM which summarizes its general sensitivity to minimal pair contrasts. Thus, it can function as a linguistically motivated benchmark for the general evaluation of new language models. *X also provides a breakdown of LM performance by linguistic phenomenon, which can be used to draw concrete conclusions about the kinds of grammatical knowledge acquired by a given model. This kind of information is useful for detailed comparisons across models, as well as in ablation studies.
One question we leave unexplored is how well supervised acceptability classifiers built on top of pretrained models like BERT BIBREF9 perform on *X . It would be possible to evaluate how well such classifiers generalize to unseen phenomena by training on a subset of paradigms in *X and evaluating on the held-out sets, giving an idea of to what extent models are able to transfer knowledge in one domain to a similar one. BIBREF20 find that this method is potentially more revealing of implicit grammatical knowledge than purely unsupervised methods.
An important goal of linguistically-informed analysis of LMs is to better understand those empirical domains where current LMs appear to acquire some relevant knowledge, but still fall short of human performance. The results from *X suggest that—in addition to relatively well-studied phenomena like filler-gap dependencies, NPIs, and binding—argument structure remains one area where there is much to uncover about what LMs learn. More generally, as language modeling techniques continue to improve, it will be useful to have large-scale tools like *X to efficiently track changes in what these models do and do not know about grammar.
<<</Discussion & Future Work>>>
<<<Acknowledgments>>>
This material is based upon work supported by the National Science Foundation under Grant No. 1850208. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This project has also benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU).
<<</Acknowledgments>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nBackground & Related Work\nLanguage Models\nEvaluating Linguistic Knowledge\nData\nData generation procedure\nCoverage\nComparison to Related Resources\nData validation\nModels & Methods\nModels\nGPT-2\nTransformer-XL\nLSTM\n5-gram\nEvaluation\nResults\nOverall Results\nPhenomenon-Specific Results\nCorrelation of Model & Human Performance\nShallow Predictors of Performance\nAdditional Experiments\nLong-Distance Dependencies\nRegular vs. Irregular Agreement\nTraining size and *X performance\nAlternate Evaluation Methods\nOne-prefix method\nTwo-prefix method\nDiscussion & Future Work\nAcknowledgments"
],
"type": "outline"
}
|
1909.12673
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
A Constructive Prediction of the Generalization Error Across Scales
<<<Abstract>>>
The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.
<<</Abstract>>>
<<<Introduction>>>
With the success and heightened adoption of neural networks for real world tasks, some questions remain poorly answered. For a given task and model architecture, how much data would one require to reach a prescribed performance level? How big a model would be needed? Addressing such questions is made especially difficult by the mounting evidence that large, deep neural networks trained on large-scale data outperform their smaller counterparts, rendering the training of high performance models prohibitively costly. Indeed, in the absence of practical answers to the above questions, surrogate approaches have proven useful. One such common approach is model scaling, where one designs and compares small-scale models, and applies the obtained architectural principles at a larger scale BIBREF0, BIBREF1, BIBREF2. Despite these heuristics being widely used to various degrees of success, the relation between the performance of a model in the small- and large-scale settings is not well understood. Hence, exploring the limitations or improving the efficiency of such methods remains subject to trial and error.
In this work we circle back to the fundamental question: what is the (functional) relation between generalization error and model and dataset sizes? Critically, we capitalize on the concept of model scaling in its strictest form: we consider the case where there is some given scaling policy that completely defines how to scale up a model from small to large scales. We include in this context all model parameters, such that traversing from one scale (in which all parameters are known) to another requires no additional resources for specifying the model (e.g., architecture search/design).
We empirically explore the behavior of the generalization error over a wide range of datasets and models in vision and language tasks. While the error landscape seems fairly complex at first glance, we observe the emergence of several key characteristics shared across benchmarks and domains. Chief among these characteristics is the emergence of regions where power-law behavior approximates the error well both with respect to data size, when holding model size fixed, and vice versa.
Motivated by these observations, we establish criteria which a function approximating the error landscape should meet. We propose an intuitive candidate for such a function and evaluate its quality, both in explaining the observed error landscapes and in extrapolating from small scale (seen) to large scale (unseen) errors. Critically, our functional approximation of the error depends on both model and data sizes. We find that this function leads to a high quality fit and extrapolation. For instance, the mean and standard deviation of the relative errors are under 2% when fitting across all scales investigated and under 5% when extrapolating from a slimmed-down model (1/16 of the parameters) on a fraction of the training data (1/8 of the examples) on the ImageNet BIBREF3 and WikiText-103 BIBREF4 datasets, with similar results for other datasets.
To the best of our knowledge, this is the first work that provides simultaneously:
A joint functional form of the generalization error landscape—as dependent on both data and model size—with few, interpretable degrees of freedom (section SECREF5).
Direct and complete specification (via the scaling policy) of the model configuration attaining said generalization error across model and dataset sizes.
Highly accurate approximation of error measurements across model and data scales via the functional form, evaluated on different models, datasets, and tasks (section SECREF6 ).
Highly accurate error prediction from small to large model and data (section SECREF7).
We conclude with a discussion of some implications of our findings as a practical and principled tool for understanding network design at small scale and for efficient computation and trade-off design in general. We hope this work also provides a useful empirical leg to stand on and an invitation to search for a theory of generalization error which accounts for our findings.
<<</Introduction>>>
<<<Related work>>>
<<<Model scaling:>>>
A number of studies have explored the effect of model scaling on performance. For instance, image classification networks can be scaled by depth BIBREF5 or width BIBREF6, BIBREF7. More recently, BIBREF8 demonstrated how scaling width, depth, and input resolution has combined positive effects larger than scaling each factor in isolation. However, this relationship has yet to be quantified in a predictive form – by how much will error change with model scaling? In this work, we focus on finding a constructive functional form for determining the model given a specified performance.
<<</Model scaling:>>>
<<<Data scaling:>>>
It has long been recognized that more data improves performance, and various studies report such trends in both computer vision BIBREF9, BIBREF10 and language processing tasks BIBREF11, BIBREF12. A number of prior studies observed power-law relations between the generalization error and training data size BIBREF13, BIBREF14, BIBREF15. Most relevant to our work, BIBREF16 explored the effect of data size on the generalization error in vision, language, and speech tasks, and observed a strikingly consistent power-law behavior in a large set of experiments. However, while these studies point to the empirical existence of a power law in terms of data, they do not offer tools for predicting the performance given a specified model. Nor do they offer low-cost methods to specify the model configuration which would attain the power law with data dependency. Indeed, BIBREF16 had to search over models and their configurations at large scale to exhibit their findings, incurring prohibitive computational costs.
In contrast, we demonstrate a constructive recipe, where we directly predict the test performance at large scale and specify the full model configuration which attains it (with no need for large-scale search), given performance at small scale.
<<</Data scaling:>>>
<<<Predicting model performance:>>>
Since training models at full data/model scale may be computationally prohibitive, a line of work tries to predict the performance of a given model on a given dataset, without training the model, for example by using a bank of previously trained models, dataset, and their associated performances BIBREF17. Others have proposed to estimate performance on small data BIBREF18 or model sizes BIBREF2, BIBREF19 in the context of neural architecture search (NAS). In this case, the small-scale evaluation is used to compare models at small cost, to expedite the search process; see BIBREF20 for a recent survey. Our work complements previous approaches by demonstrating a functional form that can predict large-scale performance from small-scale measurements. Moreover, our method may be integrated in NAS, addressing some of its current limitations (as discussed in section SECREF8).
<<</Predicting model performance:>>>
<<<Theoretical error bounds:>>>
Much attention has been given to theoretical explanations of the generalization capabilities of deep neural networks BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. While fully engaging with this literature is beyond our scope, we note that recent studies have derived bounds involving power-law dependencies in both model BIBREF26 and data size BIBREF27. We leave it as an open question for future work to find theoretical explanations for the empirical behavior and the functional form we investigate in this work.
<<</Theoretical error bounds:>>>
<<</Related work>>>
<<<Experimental Setup>>>
<<<Notation:>>>
Let $D_n = \lbrace x_i, y_i \rbrace _{i=1}^{n}$ denote a labeled (training) dataset with $n$ samples or datapoints. Let $f_m$ denote a neural network whose size is the number of parameters $m$, such that $\hat{y} = f_m(x)$ is the predicted label. Let $\epsilon \left(n,m \right)$ be the generalization error as a function of $n$ and $m$, measured by a performance metric (e.g., top-1 accuracy or cross-entropy loss) on a held-out test set. We refer to this error function as the error landscape.
<<</Notation:>>>
<<<Scaling Policies>>>
<<<Dataset scaling:>>>
We wish to scale datasets while preserving the original distribution. For image classification, we uniformly subsample all classes by a constant ratio, thus preserving the relative sample size per class. We limit the maximal sub-sampling to avoid eradicating any class. For language modeling, where the number of classes (vocabulary items) has a very long tail distribution, we randomly sample sentences such that the total number of sampled words will be a certain fraction of the original dataset. Table TABREF9 reports the data scales we use. In all tasks the held-out test set remains untouched for evaluating the error.
<<</Dataset scaling:>>>
<<<Hyper-parameters:>>>
For similar reasons we wish to avoid hyper-parameter search at large scales, and thus avoid the temptation to tune hyper-parameters accordingly (learning rate, regularization, etc.). Therefore, we hold all hyper-parameters fixed. This enables us to construct a functional form that fits the error landscape and can be used to predict the error across scales while completely defining the model attaining it. We consider pros and cons of this approach in the discussion (section SECREF8).
<<</Hyper-parameters:>>>
<<</Scaling Policies>>>
<<<Tasks, Models, and Datasets>>>
We experiment with both vision and language tasks. We use 6 benchmark datasets for image classification and 3 for language modeling. For image classification, we train ResNet BIBREF5 and WRN models BIBREF6. For language modeling, we train AWD-LSTM BIBREF28 and Transformer-XL models BIBREF29. Summary statistics are shown in Table TABREF9, along with the range of explored scales. Appendix SECREF9 gives additional information.
<<</Tasks, Models, and Datasets>>>
<<</Experimental Setup>>>
<<<Observations on the Error Landscape>>>
Figures figsub:observe3dwiki103 and figsub:observe3dcifar10 respectively show example test error landscapes for width scaling of Transformer-XL on WikiText-103 and of WRN-44-16 on CIFAR10. Various additional such landscapes are found in appendix SECREF11, showing largely consistent patterns. Examining the error landscapes yields the following observations:
Model Scaling
For a given dataset size, scaling up the model results in an initial decrease in test error, which then saturates to a level determined by the dataset size. This behavior has been noted by BIBREF8 across varied model scaling methods, although they have not engaged with the dependency on dataset size.
The rate of error decrease with model size appears well approximated by a power-law.
These two observations together can be summarized as the following relation:
where $b, \beta , c_m$ may depend on the data size $n$, s.t. as $m$ grows, $\epsilon \rightarrow c_m$. Example fits to this form (allowing $b, \beta , c_m$ to be fit per $n$) are seen in figsub:observe2dwiki103 (right) and figsub:observe2dcifar10 (right).
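Written out (a reconstruction from the definitions just given, rather than the paper's numbered equation), the model-scaling relation is approximately:

$\epsilon(m, n) \approx b(n)\, m^{-\beta(n)} + c_m(n).$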
Data scaling
For a given model size, scaling up the dataset results in an initial increase in performance, which then saturates to a level determined by the model size.
The rate of error decrease with dataset size appears well approximated by a power-law. BIBREF16 also noted a similar relationship, but did not functionally tie the saturation level to the dataset size.
These two observations together can be summarized as the following relation:
$\epsilon (m,n) \approx a\, n^{-\alpha } + c_n ,$
where $a, \alpha , c_n$ may depend on the model size $m$, s.t. as $n$ grows, $\epsilon \rightarrow c_n$. Example fits to this form (allowing $a, \alpha , c_n$ to be fit per $m$) are seen in figsub:observe2dwiki103 (left) and figsub:observe2dcifar10 (left).
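Each of these saturating power laws can be fit per scale with ordinary curve fitting. The sketch below fits the data-scaling form for a single fixed model size with scipy; the measurements are synthetic placeholders generated from the same form, and the initial guess is an arbitrary assumption.
```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(n, a, alpha, c):
    # error ~ a * n^(-alpha) + c, saturating at c as n grows
    return a * np.power(n, -alpha) + c

# synthetic placeholder measurements for one fixed model size
n_values = np.array([1e4, 2e4, 4e4, 8e4, 1.6e5, 3.2e5])
errors = saturating_power_law(n_values, a=30.0, alpha=0.55, c=0.40)

popt, _ = curve_fit(saturating_power_law, n_values, errors,
                    p0=[10.0, 0.5, 0.3], maxfev=20000)
a_hat, alpha_hat, c_hat = popt
print(f"a={a_hat:.2f}, alpha={alpha_hat:.2f}, saturation level c_n={c_hat:.2f}")
```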
Joint properties
The behavior of the error when scaling model size while holding data size fixed, and vice versa, extends to the entire error landscape in a well-behaved manner, such that the manifold $\epsilon (m,n)$ is smooth everywhere as a function of both model and data scales.
<<</Observations on the Error Landscape>>>
<<<Functional Approximation of the Generalization Error>>>
<<<Criteria>>>
Motivated by the above observations, we now consider a functional approximation for the error landscape. In particular, let us consider function families meeting the following criteria which augment and restrict our observations:
As either model or dataset size goes to zero, the expected performance is equivalent to a random-guess error level $\epsilon _0$.
For a given dataset size, scaling up the model will result in an initial increase in performance, which will then saturate, taking the form in (DISPLAY_FORM26).
For a given model size, scaling up the dataset will result in an initial increase in performance, which will then saturate, taking the form in (DISPLAY_FORM30).
There exists an irreducible error $\epsilon _\infty $, intrinsic to the dataset.
The function must be smooth everywhere and monotonic non-increasing in terms of model and data size (observation UNKREF31).
While there are many possible function families meeting the above criteria, below we propose a simple function family for our evaluation. We do not claim that this is in fact the true underlying dependency, but rather that it serves as a good approximation of the error landscape—consistent with these criteria.
<<</Criteria>>>
<<<Proposed Function Family>>>
As a first insightful step, consider the implications of satisfying UNKREF35 and UNKREF36 simultaneously. By examining the limiting behavior as $m$ or $n$ grow, we have:
Thus, a consistent form satisfying UNKREF35 and UNKREF36 simultaneously is:
where $c_\infty $ is a constant not dependent on either $m$ or $n$. Let us now examine the simplified case where $a,b,\alpha ,\beta $ are constant:
$\tilde{\epsilon }(m,n) \triangleq a\, n^{-\alpha } + b\, m^{-\beta } + c_\infty ,$
where $\alpha \ge 0$ and $\beta \ge 0$ control the global rate at which error decreases with data and model size, respectively, $a>0$ and $b>0$ are a form of unit conversion between data and model sizes and error, and $c_\infty >0$ is the asymptotic lower value attainable. This function is a special case of (DISPLAY_FORM40) and meets criteria UNKREF35 and UNKREF36 by construction. Importantly UNKREF37 and UNKREF38 are also met.
However, by giving up the dependence of $a,b,\alpha ,\beta $ on $m,n$, this function does not meet criterion UNKREF33. We thus need to model the transition from the initial random-guess level to the power-law region. We propose to parameterize the transition using the following envelope (complex) function:
where $i = \sqrt{-1}$. Here the simple pole at $ \eta $ controls the transition point from the initial random-guess level $\epsilon _0$ as $(m,n)$ increase. As $(m,n)$ grow, $\tilde{\epsilon }\rightarrow c_\infty $ and the final irreducible error $\epsilon _\infty \triangleq \epsilon _0c_\infty \eta ^{-1}$ is approached. The random-guess error, $\epsilon _0$, is a known parameter determined by dataset statistics (e.g., $(N_{classes}-1) / N_{classes}$ for a balanced dataset). Note that due to our choice of rational envelope, we can divide the form in (DISPLAY_FORM41) by a constant. Without loss of generality, let us choose $a=1$.
Note that while the forms in equations DISPLAY_FORM40 and DISPLAY_FORM41 are well motivated, the approach taken for modeling the transition is solely a convenience one. In fact, the transition(s) as function of $m$ and $n$ may be captured in the functional forms of $a,b,\alpha ,\beta $ or another envelope mechanism. We leave a more refined investigation of the nature of the transitions to future work.
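As a concrete numerical reference, here is a small sketch of the candidate family. The power-law-plus-constant core follows the constant-coefficient form above with $a=1$; the envelope is written as $\epsilon _0 \, |\tilde{\epsilon } / (\tilde{\epsilon } - i\eta )|$, which is our reading of the description (a simple pole at $\eta $, saturation near the random-guess level for tiny scales), so that exact expression should be treated as an assumption rather than the precise original parameterization.
```python
import numpy as np

def eps_tilde(m, n, b, alpha, beta, c_inf):
    # power-law-plus-constant core: n^(-alpha) + b * m^(-beta) + c_inf  (a = 1)
    return np.power(n, -alpha) + b * np.power(m, -beta) + c_inf

def eps_hat(m, n, b, alpha, beta, c_inf, eta, eps0):
    # Assumed rational envelope with a pole at eta: eps0 * |e / (e - i*eta)|.
    # For tiny scales e is large and the value rises toward eps0; for large
    # scales e -> c_inf and the value approaches roughly eps0 * c_inf / eta.
    e = eps_tilde(m, n, b, alpha, beta, c_inf)
    return eps0 * np.abs(e / (e - 1j * eta))

theta = dict(b=1.0, alpha=0.5, beta=0.5, c_inf=0.05, eta=1.0, eps0=0.9)
print(eps_hat(m=1, n=1, **theta))      # rises toward the random-guess region
print(eps_hat(m=1e7, n=1e7, **theta))  # near the irreducible level ~ eps0*c_inf/eta
```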
<<</Proposed Function Family>>>
<<</Functional Approximation of the Generalization Error>>>
<<<error landscape estimation>>>
We wish to empirically estimate the quality of the proposed functional parameterization as a fit to the true error landscape. Let $\hat{\epsilon }(n,m ; \theta )$ be the parametric function family (DISPLAY_FORM42) approximating the error landscape $\epsilon \left(n,m \right)$, where $\theta = \lbrace \alpha ,\beta ,b,c_\infty ,\eta \rbrace $. Define the divergence $\delta (n,m;\theta )$ as the relative difference between the estimated error $\hat{\epsilon }(m,n;\theta )$ and the true error $\epsilon (m,n)$:
$\delta (n,m;\theta ) = \frac{\hat{\epsilon }(m,n;\theta ) - \epsilon (m,n)}{\epsilon (m,n)} .$
We fit a least squares regression model to find the best parameters minimizing the divergence. In this section, we fit the function given all model/data configurations $m , n$ (see Table TABREF9) and evaluate the fit quality. (In the next section, we perform extrapolation experiments, from seen to unseen points.) We perform the fit separately for each dataset and evaluate its quality by the mean $\mu $ and standard deviation $\sigma $ of the divergence $\delta $ over all points $(m,n)$. See Appendix SECREF68 for experimental details.
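A sketch of this estimation step is given below: the assumed $\hat{\epsilon }$ from the earlier sketch is refit by least squares over all measured $(m,n)$ configurations, and the mean and standard deviation of the relative divergence are computed. The grid layout, initial guess, and bounds are illustrative assumptions.
```python
import numpy as np
from scipy.optimize import least_squares

def eps_hat(m, n, theta, eps0):
    alpha, beta, b, c_inf, eta = theta
    e = np.power(n, -alpha) + b * np.power(m, -beta) + c_inf
    return eps0 * np.abs(e / (e - 1j * eta))   # same assumed envelope as above

def fit_landscape(m_grid, n_grid, errs, eps0):
    """errs[i, j] is the measured test error at model size m_grid[i] and data
    size n_grid[j]; returns the best-fit theta = (alpha, beta, b, c_inf, eta)."""
    M, N = np.meshgrid(m_grid, n_grid, indexing="ij")
    residuals = lambda theta: (eps_hat(M, N, theta, eps0) - errs).ravel()
    sol = least_squares(residuals, x0=[0.5, 0.5, 1.0, 0.05, 1.0],
                        bounds=([0, 0, 1e-6, 0, 1e-6], np.inf))
    return sol.x

def divergence_stats(m_grid, n_grid, errs, theta, eps0):
    M, N = np.meshgrid(m_grid, n_grid, indexing="ij")
    delta = (eps_hat(M, N, theta, eps0) - errs) / errs   # relative difference
    return float(delta.mean()), float(delta.std())
```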
As fig:fit shows, estimated test accuracy is highly correlated with actual test accuracy for various datasets, with worst-case values $\mu <1\%$ and $\sigma <5\%$. Note that the number of free parameters is small ($|\theta |\le 6$) compared to the number of points (42–49 model-data configurations), demonstrating the appropriateness of the proposed function for modeling the complex error landscape.
<<<A Probe into Depth Scaling>>>
Here we verify that our results extend to another canonical scaling policy, namely depth scaling. fig:cifar10-depth shows the error landscape with depth scaling on CIFAR10, exhibiting the same characteristics as width scaling. fig:fit-cifar10-width and fig:fit-cifar10-depth show error landscape estimation results for both cases of width and depth scaling, exhibiting small and comparable fit errors (error intervals $<1.2\%$). Since the difference in approximation quality is effectively indistinguishable when scaling depth or width orthogonally, we expect compound scaling to adhere to the same functional form. Indeed, we verified this on the publicly available (model scaling only) results for EfficientNet BIBREF8.
<<</A Probe into Depth Scaling>>>
<<</error landscape estimation>>>
<<<Extrapolation>>>
In this section, we evaluate the ability of our functional approximation to extrapolate beyond seen model/data configurations. The primary question we ask is: can we predict the error of a large model/data configuration from the errors of smaller-scale model/data configurations? To do this, we fit the least squares regression on a subset of the configurations and predict the error on larger, unseen configurations. More formally, let $(m_i, n_j)$ denote a given model/data configuration. We first estimate parameters $\theta _{ij}$ by fitting the function in (DISPLAY_FORM42) on all points of at most that size ($m \le m_i, n \le n_j$). Then we predict the error $\epsilon (m,n)$ at all points corresponding to larger configurations ($m > m_i, n > n_j$) using the estimated $\theta _{ij}$. Finally, we measure the divergence $\delta (m,n)$ between the estimated error and the actual error at all larger configurations. This process is illustrated in fig:extrapolation-array.
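The following sketch mirrors this protocol under the same assumed functional form: fit on the sub-grid of configurations no larger than $(m_i, n_j)$ and report the divergence on the strictly larger ones. It is illustrative only; the masking convention and optimizer settings are our assumptions.
```python
import numpy as np
from scipy.optimize import least_squares

def eps_hat(m, n, theta, eps0):
    alpha, beta, b, c_inf, eta = theta
    e = np.power(n, -alpha) + b * np.power(m, -beta) + c_inf
    return eps0 * np.abs(e / (e - 1j * eta))   # same assumed envelope as above

def extrapolate(m_grid, n_grid, errs, i, j, eps0):
    """Fit theta on configurations with m <= m_grid[i] and n <= n_grid[j],
    then report the mean/std relative divergence on strictly larger ones."""
    M, N = np.meshgrid(m_grid, n_grid, indexing="ij")
    seen = (M <= m_grid[i]) & (N <= n_grid[j])
    unseen = (M > m_grid[i]) & (N > n_grid[j])
    residuals = lambda theta: eps_hat(M[seen], N[seen], theta, eps0) - errs[seen]
    theta = least_squares(residuals, x0=[0.5, 0.5, 1.0, 0.05, 1.0],
                          bounds=([0, 0, 1e-6, 0, 1e-6], np.inf)).x
    delta = (eps_hat(M[unseen], N[unseen], theta, eps0) - errs[unseen]) / errs[unseen]
    return float(delta.mean()), float(delta.std())
```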
fig:extrapolation-single-vision shows the results of one such extrapolation experiment, on ImageNet. In this case, we have fit the functional form on all configurations of model size $m \le m_i = M/16 $ and data size $n \le n_j = N/8$, and predicted the error on all larger configurations. As the figure shows, the extrapolation is highly accurate, with a mean divergence of $\mu =4.5\%$ (std: $\sigma =4.7\%$). fig:extrapolation-single-language reports a similar experiment on WikiText-103. Here, again, we see very good extrapolation, with a mean divergence of $\mu =0.5\%$ (std: $\sigma =1.7\%$). Note that each extrapolation is run 10 times with different random initializations of $\theta _{ij}$ in the least squares with negligible effect on the prediction.
In practice, we may be interested in extrapolation quality with different subsets of configurations. Appendix SECREF12 provides detailed extrapolation results on multiple subsets of configurations, for both vision and language datasets. Generally, the extrapolation performs well as long as it is not ill-posed; ill-posedness may be caused by a lack of signal in the region of the initial “random-guess” level, or by degenerate cases such as having fewer measurements than the number of free parameters in $\theta $.
<<</Extrapolation>>>
<<<Discussion and Conclusion>>>
In this work, through insights gained by the joint examination of the dependencies of generalization error on both model and data size, we arrive at criteria for functions consistent with the form of the generalization error under a given scaling policy. We consider one such function and find it to be in very good agreement with the actual behavior of the error landscape. Indeed, the agreement is strong enough that extrapolation from small to large scale becomes feasible: the function predicts the behavior of the generalization error in practice for the practical case of scaling models and data. We discuss several example implications of knowing such a functional form.
<<</Discussion and Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated work\nModel scaling:\nData scaling:\nPredicting model performance:\nTheoretical error bounds:\nExperimental Setup\nNotation:\nScaling Policies\nDataset scaling:\nHyper-parameters:\nTasks, Models, and Datasets\nObservations on the Error Landscape\nFunctional Approximation of the Generalization Error\nCriteria\nProposed Function Family\nerror landscape estimation\nA Probe into Depth Scaling\nExtrapolation\nDiscussion and Conclusion"
],
"type": "outline"
}
|
1909.01958
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project
<<<Abstract>>>
AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on an 8th Grade science exam challenge. ::: This paper reports unprecedented success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90% on the exam's non-diagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern NLP methods can result in mastery on this task. While not a full solution to general question-answering (the questions are multiple choice, and the domain is restricted to 8th Grade science), it represents a significant milestone for the field.
<<</Abstract>>>
<<<Introduction>>>
This paper reports on the history, progress, and lessons from the Aristo project, a six-year quest to answer grade-school and high-school science exams. Aristo has recently surpassed 90% on multiple choice questions from the 8th Grade New York Regents Science Exam (see Figure FIGREF6). We begin by offering several perspectives on why this achievement is significant for NLP and for AI more broadly.
<<<The Turing Test versus Standardized Tests>>>
In 1950, Alan Turing proposed the now well-known Turing Test as a possible test of machine intelligence: If a system can exhibit conversational behavior that is indistinguishable from that of a human during a conversation, that system could be considered intelligent (BID1). As the field of AI has grown, the test has become less meaningful as a challenge task for several reasons. First, its setup is not well defined (e.g., who is the person giving the test?). A computer scientist would likely know good distinguishing questions to ask, while a random member of the general public may not. What constraints are there on the interaction? What guidelines are provided to the judges? Second, recent Turing Test competitions have shown that, in certain formulations, the test itself is gameable; that is, people can be fooled by systems that simply retrieve sentences and make no claim of being intelligent (BID2;BID3). John Markoff of The New York Times wrote that the Turing Test is more a test of human gullibility than machine intelligence. Finally, the test, as originally conceived, is pass/fail rather than scored, thus providing no measure of progress toward a goal, something essential for any challenge problem.
Instead of a binary pass/fail, machine intelligence is more appropriately viewed as a diverse collection of capabilities associated with intelligent behavior. Finding appropriate benchmarks to test such capabilities is challenging; ideally, a benchmark should test a variety of capabilities in a natural and unconstrained way, while additionally being clearly measurable, understandable, accessible, and motivating.
Standardized tests, in particular science exams, are a rare example of a challenge that meets these requirements. While not a full test of machine intelligence, they do explore several capabilities strongly associated with intelligence, including language understanding, reasoning, and use of common-sense knowledge. One of the most interesting and appealing aspects of science exams is their graduated and multifaceted nature; different questions explore different types of knowledge, varying substantially in difficulty. For this reason, they have been used as a compelling—and challenging—task for the field for many years (BID4;BID5).
<<</The Turing Test versus Standardized Tests>>>
<<<Natural Language Processing>>>
With the advent of contextualized word-embedding methods such as ELMo (BID6), BERT (BID7), and most recently RoBERTa (BID8), the NLP community's benchmarks are being felled at a remarkable rate. These are, however, internally-generated yardsticks, such as SQuAD (BID9), Glue (BID10), SWAG (BID11), TriviaQA (BID12), and many others.
In contrast, the 8th Grade science benchmark is an external, independently-generated benchmark where we can compare machine performance with human performance. Moreover, the breadth of the vocabulary and the depth of the questions are unprecedented. For example, in the ARC question corpus of science questions, the average question length is 22 words using a vocabulary of over 6300 distinct (stemmed) words (BID13). Finally, the questions often test scientific knowledge by applying it to everyday situations and thus require aspects of common sense. For example, consider the question: Which equipment will best separate a mixture of iron filings and black pepper? To answer this kind of question robustly, it is not sufficient to understand magnetism. Aristo also needs to have some model of “black pepper” and “mixture” because the answer would be different if the iron filings were submerged in a bottle of water. Aristo thus serves as a unique “poster child” for the remarkable and rapid advances achieved by leveraging contextual word-embedding models in NLP.
<<</Natural Language Processing>>>
<<<Machine Understanding of Textbooks>>>
Within NLP, machine understanding of textbooks is a grand AI challenge that dates back to the '70s, and was re-invigorated in Raj Reddy's 1988 AAAI Presidential Address and subsequent writing (BID14;BID15). However, progress on this challenge has a checkered history. Early attempts side-stepped the natural language understanding (NLU) task, in the belief that the main challenge lay in problem-solving. For example, Larkin et al. (1980) manually encoded a physics textbook chapter as a set of rules that could then be used for question answering. Subsequent attempts to automate the reading task were unsuccessful, and the language task itself has emerged as a major challenge for AI.
In recent years there has been substantial progress in systems that can find factual answers in text, starting with IBM's Watson system (BID16), and now with high-performing neural systems that can answer short questions provided they are given a text that contains the answer (BID17;BID18). The work presented here continues along this trajectory, but aims to also answer questions where the answer may not be written down explicitly. While not a full solution to the textbook grand challenge, this work is thus a further step along this path.
<<</Machine Understanding of Textbooks>>>
<<</Introduction>>>
<<<A Brief History of Aristo>>>
Project Aristo emerged from the late Paul Allen's long-standing dream of a Digital Aristotle, an “easy-to-use, all-encompassing knowledge storehouse...to advance the field of AI.” (BID19). Initially, a small pilot program in 2003 aimed to encode 70 pages of a chemistry textbook and answer the questions at the end of the chapter. The pilot was considered successful (BID20), with the significant caveat that both text and questions were manually encoded, side-stepping the natural language task, similar to earlier efforts. A subsequent larger program, called Project Halo, developed tools allowing domain experts to rapidly enter knowledge into the system. However, despite substantial progress (BID21;BID22), the project was ultimately unable to scale to reliably acquire textbook knowledge, and was unable to handle questions expressed in full natural language.
In 2013, with the creation of the Allen Institute for Artificial Intelligence (AI2), the project was rethought and relaunched as Project Aristo (connoting Aristotle as a child), designed to avoid earlier mistakes. In particular: handling natural language became a central focus; most knowledge was to be acquired automatically (not manually); machine learning was to play a central role; questions were to be answered exactly as written; and the project restarted at elementary-level science (rather than college-level) (BID23).
The metric progress of the Aristo system on the Regents 8th Grade exams (non-diagram, multiple choice part, for a hidden, held-out test set) is shown in Figure FIGREF6. The figure shows the variety of techniques attempted, and mirrors the rapidly changing trajectory of the Natural Language Processing (NLP) field in general. Early work was dominated by information retrieval, statistical, and automated rule extraction and reasoning methods (BID24;BID25;BID26;BID27;BID28). Later work has harnessed state-of-the-art tools for large-scale language modeling and deep learning (BID29;BID30), which have come to dominate the performance of the overall system and reflects the stunning progress of the field of NLP as a whole.
<<</A Brief History of Aristo>>>
<<<The Aristo System>>>
We now describe the architecture of Aristo, and provide a brief summary of the solvers it uses.
<<<Overview>>>
The current configuration of Aristo comprises eight solvers, described shortly, each of which attempts to answer a multiple choice question. To study particular phenomena and develop solvers, the project has created larger datasets to amplify and study different problems, resulting in 10 new datasets and 5 large knowledge resources for the community.
The solvers can be loosely grouped into:
Statistical and information retrieval methods
Reasoning methods
Large-scale language model methods
Over the life of the project, the relative importance of the methods has shifted towards large-scale language methods.
Several methods make use of the Aristo Corpus, comprising a large Web-crawled corpus ($5 \times 10^{10}$ tokens (280GB)) originally from the University of Waterloo, combined with targeted science content from Wikipedia, SimpleWikipedia, and several smaller online science texts (BID25).
<<</Overview>>>
<<<Information Retrieval and Statistics>>>
Three solvers use information retrieval (IR) and statistical measures to select answers. These methods are particularly effective for “lookup” questions where an answer is explicitly stated in the Aristo corpus.
The IR solver searches to see if the question along with an answer option is explicitly stated in the corpus, and returns the confidence that such a statement was found. To do this, for each answer option $a_i$, it sends $q$ + $a_i$ as a query to a search engine (we use ElasticSearch), and returns the search engine’s score for the top retrieved sentence $s$, where $s$ also has at least one non-stopword overlap with $q$, and at least one with $a_i$. This ensures $s$ has some relevance to both $q$ and $a_i$. This is repeated for all options $a_i$ to score them all, and the option with the highest score selected. Further details are available in (BID25).
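A schematic version of this scoring loop is shown below. The `search_top_sentences` callable stands in for the search-engine query over the Aristo Corpus, and the stopword list and whitespace tokenization are simplifications; none of this is the exact production code.
```python
STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "in", "on", "and",
             "or", "which", "will", "may", "be"}

def content_words(text):
    return {w.strip("?.,").lower() for w in text.split()} - STOPWORDS

def ir_solver(question, options, search_top_sentences):
    """Score each option by the best retrieved sentence for `question + option`.
    `search_top_sentences(query, k)` is a hypothetical wrapper around the
    corpus search engine, returning (sentence, retrieval_score) pairs."""
    scores = []
    for option in options:
        best = 0.0
        for sentence, score in search_top_sentences(question + " " + option, k=20):
            s_words = content_words(sentence)
            # require non-stopword overlap with both the question and the option
            if s_words & content_words(question) and s_words & content_words(option):
                best = max(best, score)
        scores.append(best)
    return scores.index(max(scores)), scores
```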
The PMI solver uses pointwise mutual information (BID31) to measure the strength of the associations between parts of $q$ and parts of $a_i$. Given a large corpus $C$, PMI for two n-grams $x$ and $y$ is defined as $\mathrm {PMI}(x,y) = \log \frac{p(x,y)}{p(x) p(y)}$. Here $p(x,y)$ is the joint probability that $x$ and $y$ occur together in $C$, within a certain window of text (we use a 10 word window). The term $p(x) p(y)$, on the other hand, represents the probability with which $x$ and $y$ would occur together if they were statistically independent. The ratio of $p(x,y)$ to $p(x) p(y)$ is thus the ratio of the observed co-occurrence to the expected co-occurrence. The larger this ratio, the stronger the association between $x$ and $y$. The solver extracts unigrams, bigrams, trigrams, and skip-bigrams from the question $q$ and each answer option $a_i$. It outputs the answer with the largest average PMI, calculated over all pairs of question n-grams and answer option n-grams. Further details are available in (BID25).
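The PMI computation itself is simple once window co-occurrence counts are available; the sketch below shows the scoring for unigrams only, with `counts` standing in for a hypothetical pre-computed count store over 10-word windows.
```python
import math

def pmi(x, y, count_x, count_y, count_xy, total_windows):
    """PMI(x, y) = log p(x, y) / (p(x) p(y)), with probabilities estimated
    from occurrence counts over fixed-size (e.g., 10-word) text windows."""
    p_x = count_x / total_windows
    p_y = count_y / total_windows
    p_xy = count_xy / total_windows
    if p_x == 0 or p_y == 0 or p_xy == 0:
        return 0.0   # back off to 0 when counts are missing (a simplification)
    return math.log(p_xy / (p_x * p_y))

def pmi_solver(question_ngrams, options_ngrams, counts):
    """`counts` is a hypothetical object exposing unary/joint window counts.
    Each option is scored by the average PMI over all (question n-gram,
    option n-gram) pairs; the index of the best option is returned."""
    avg_scores = []
    for option_ngrams in options_ngrams:
        pairs = [(q, a) for q in question_ngrams for a in option_ngrams]
        total = sum(pmi(q, a, counts.unary(q), counts.unary(a),
                        counts.joint(q, a), counts.total()) for q, a in pairs)
        avg_scores.append(total / max(1, len(pairs)))
    return avg_scores.index(max(avg_scores))
```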
Finally, ACME (Abstract-Concrete Mapping Engine) searches for a cohesive link between a question $q$ and candidate answer $a_{i}$ using a large knowledge base of vector spaces that relate words in language to a set of 5000 scientific terms enumerated in a term bank. ACME uses three types of vector spaces: terminology space, word space, and sentence space. Terminology space is designed for finding a term in the term bank that links a question to a candidate answer with strong lexical cohesion. Word space is designed to characterize a word by the context in which the word appears. Sentence space is designed to characterize a sentence by the words that it contains. The key insight in ACME is that we can better assess lexical cohesion of a question and answer by pivoting through scientific terminology, rather than by simple co-occurence frequencies of question and answer words. Further details are provided in (BID32).
These solvers together are particularly good at “lookup” questions where an answer is explicitly written down in the Aristo Corpus. For example, they correctly answer:
Infections may be caused by (1) mutations (2) microorganisms [correct] (3) toxic substances (4) climate changes
as the corpus contains the sentence “Products contaminated with microorganisms may cause infection.” (for the IR solver), as well as many other sentences mentioning both “infection” and “microorganisms” together (hence they are highly correlated, for the PMI solver), and both words are strongly correlated with the term “microorganism” (ACME).
<<</Information Retrieval and Statistics>>>
<<<Reasoning Methods>>>
The TupleInference solver uses semi-structured knowledge in the form of tuples, extracted via Open Information Extraction (Open IE) (BID33). Two sources of tuples are used:
A knowledge base of 263k tuples ($T$), extracted from the Aristo Corpus plus several domain-targeted sources, using training questions to retrieve science-relevant information.
On-the-fly tuples ($T^{\prime }$), extracted at question-answering time from the same corpus, to handle questions from new domains not covered by the training set.
TupleInference treats the reasoning task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure FIGREF15 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) (BID34), however, we must score alignments between the tuples retrieved from the two sources above, $T_{\mathit {qa}} \cup T^{\prime }_{\mathit {qa}}$, and a (potentially multi-sentence) multiple choice question $qa$.
The qterms, answer choices, and tuples fields (i.e. subject, predicate, objects) form the set of possible vertices, $\mathcal {V}$, of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$. The support graph, $G(V, E)$, is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, respectively. We define an ILP optimization model to search for the best support graph (i.e., the active nodes and edges), where a set of constraints define the structure of a valid support graph (e.g., an edge must connect an answer choice to a tuple) and the objective defines the preferred properties (e.g. active edges should have high word-overlap). Details of the constraints are given in (BID27). We then use the SCIP ILP optimization engine (BID35) to solve the ILP model. To obtain the score for each answer choice $a_i$, we force the node for that choice $x_{a_i}$ to be active and use the objective function value of the ILP model as the score. The answer choice with the highest score is selected. Further details are available in (BID27).
Multee (BID29) is a solver that repurposes existing textual entailment tools for question answering. Textual entailment (TE) is the task of assessing if one text implies another, and there are several high-performing TE systems now available. However, question answering often requires reasoning over multiple texts, and so Multee learns to reason with multiple individual entailment decisions. Specifically, Multee contains two components: (i) a sentence relevance model, which learns to focus on the relevant sentences, and (ii) a multi-layer aggregator, which uses an entailment model to obtain multiple layers of question-relevant representations for the premises and then composes them using the sentence-level scores from the relevance model. Finding relevant sentences is a form of local entailment between each premise and the answer hypothesis, whereas aggregating question-relevant representations is a form of global entailment between all premises and the answer hypothesis. This means we can effectively repurpose the same pre-trained entailment function $f_e$ for both components. Details of how this is done are given in (BID29). An example of a typical question and scored, retrieved evidence is shown in Figure FIGREF18. Further details are available in (BID29).
The QR (qualitative reasoning) solver is designed to answer questions about qualitative influence, i.e., how more/less of one quantity affects another (see Figure FIGREF19). Unlike the other solvers in Aristo, it is a specialist solver that only fires for a small subset of questions that ask about qualitative change, identified using (regex) language patterns.
The solver uses a knowledge base $K$ of 50,000 (textual) statements about qualitative influence, e.g., “A sunscreen with a higher SPF protects the skin longer.”, extracted automatically from a large corpus. It has then been trained to apply such statements to qualitative questions, e.g.,
John was looking at sunscreen at the retail store. He noticed that sunscreens that had lower SPF would offer protection that is (A) Longer (B) Shorter [correct]
In particular, the system learns through training to track the polarity of influences: For example, if we were to change “lower” to “higher” in the above example, the system will change its answer choice. Another example is shown in Figure FIGREF19. Again, if “melted” were changed to “cooled”, the system would change its choice to “(B) less energy”.
The QR solver learns to reason using the BERT language model (BID7), using the approach described in Section SECREF21 below. It is fine-tuned on 3800 crowdsourced qualitative questions illustrating the kinds of manipulation required, along with the associated qualitative knowledge sentence. The resulting system is able to answer questions that include significant linguistic and knowledge gaps between the question and retrieved knowledge (Table TABREF20).
Because the number of qualitative questions is small in our dataset, the solver does not significantly change Aristo's performance, although it does provide an explanation for its answers. For this reason we omit it in the results later. Further details and a detailed separate evaluation is available in (BID36).
<<</Reasoning Methods>>>
<<<Large-Scale Language models>>>
The field of NLP has advanced substantially with the advent of large-scale language models such as ELMo (BID6), ULMFit (BID37), GPT (BID38), BERT (BID7), and RoBERTa (BID8). These models are trained to perform various language prediction tasks such as predicting a missing word or the next sentence, using large amounts of text (e.g., BERT was trained on Wikipedia + the Google Book Corpus of 10,000 books). They can also be fine-tuned to new language prediction tasks, such as question-answering, and have been remarkably successful in the few months that they have been available.
We apply BERT to multiple choice questions by treating the task as classification: Given a question $q$ with answer options $a_{i}$ and optional background knowledge $K_{i}$, we provide it to BERT as:
[CLS] $K_i$ [SEP] $q$ [SEP] $a_{i}$ [SEP]
for each option (only the answer option is assigned as the second BERT "segment"). The [CLS] output token for each answer option is projected to a single logit and fed through a softmax layer, trained using cross-entropy loss against the correct answer.
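A minimal PyTorch sketch of this classification setup is shown below, using the HuggingFace transformers BertModel. It packs the retrieved knowledge and question as the first segment and the answer option as the second segment (the system additionally inserts a [SEP] between $K_i$ and $q$, which is omitted here), projects each option's [CLS] vector to a logit, and applies cross-entropy over the options. The model name, sequence length, and toy inputs are placeholders.
```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MultipleChoiceBert(nn.Module):
    """Projects the [CLS] vector of each (context, option) encoding to a single
    logit; a softmax over the options is applied via the cross-entropy loss."""
    def __init__(self, model_name="bert-large-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.scorer = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # inputs are shaped (num_options, seq_len), one row per answer option
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls_vectors = out.last_hidden_state[:, 0]       # [CLS] vector per option
        return self.scorer(cls_vectors).squeeze(-1)     # (num_options,) logits

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = MultipleChoiceBert()

def encode(knowledge, question, options, max_len=256):
    # first segment: retrieved knowledge + question; second segment: the option
    first = [knowledge + " " + question] * len(options)
    return tokenizer(first, options, padding="max_length", truncation=True,
                     max_length=max_len, return_tensors="pt")

# toy inputs; real knowledge K_i comes from the IR solver's top sentences
enc = encode("A magnet attracts objects made of iron.",
             "Which equipment will best separate a mixture of iron filings "
             "and black pepper?",
             ["a filter", "a magnet", "a ruler", "a scale"])
logits = model(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"])
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))
```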
The AristoBERT solver uses three methods to apply BERT more effectively. First, we retrieve and supply background knowledge along with the question when using BERT. This provides the potential for BERT to “read” that background knowledge and apply it to the question, although the exact nature of how it uses background knowledge is more complex and less interpretable. Second, we fine-tune BERT using a curriculum of several datasets, including some that are not science related. Finally, we ensemble different variants of BERT together.
For background knowledge $K_i$ we use up to 10 of the top sentences found by the IR solver, truncated to fit into the BERT max tokens setting (we use 256).
Following earlier work on multi-step fine-tuning (BID39), we first fine-tune on the large (87866 qs) RACE training set (BID40), a challenging set of English comprehension multiple choice exams given in Chinese middle and high schools.
We then further fine-tune on a collection of science multiple choice questions sets:
OpenBookQA train (4957 qs) (BID41)
ARC-Easy train (2251 qs) (BID13)
ARC-Challenge train (1119 qs) (BID13)
22 Regents Living Environment exams (665 qs).
We optimize the final fine-tuning using scores on the development set, performing a small hyperparameter search as suggested in the original BERT paper (BID7).
We repeat the above using three variants of BERT, the original BERT-large-cased and BERT-large-uncased, as well as the later released BERT-large-cased-whole-word-masking. We also add a model trained without background knowledge and ensemble them using the combination solver described below.
The AristoRoBERTa solver takes advantage of the recent release of Roberta (BID8), a high-performing and optimized derivative of BERT trained on significantly more text. In AristoRoBERTa, we simply replace the BERT model in AristoBERT with RoBERTa, repeating similar fine-tuning steps. We ensemble two versions together, namely with and without the first fine-tuning step using RACE.
<<</Large-Scale Language models>>>
<<<Ensembling>>>
Each solver outputs a non-negative confidence score for each of the answer options along with other optional features. The Combiner then produces a combined confidence score (between 0 and 1) using the following two-step approach.
In the first step, each solver is “calibrated” on the training set by learning a logistic regression classifier from each answer option to a correct/incorrect label. The features for an answer option $i$ include the raw confidence score $s_i$ as well as the score normalized across the answer options for a given question. We include two types of normalizations:
Each solver can also provide other features capturing aspects of the question or the reasoning path. The output of this first step classifier is then a calibrated confidence for each solver $s$ and answer option $i$: $ \mathit {calib}^s_i = 1/(1+\exp (- \beta ^s \cdot f^s)) $ where $f^s$ is the solver specific feature vector and $\beta ^s$ the associated feature weights.
The second step uses these calibrated confidences as (the only) features to a second logistic regression classifier from answer option to correct/incorrect, resulting in a final confidence in $[0,1]$, which is used to rank the answers:
$ \mathit {conf}_i = 1/(1+\exp (- \sum _s \beta ^s \, \mathit {calib}^s_i)) $
Here, feature weights $\beta ^s$ indicate the contribution of each solver to the final confidence. Empirically, this two-step approach yields more robust predictions given limited training data compared to a one-step approach where all solver features are fed directly into a single classification step.
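The two-step combination can be sketched with scikit-learn as follows; the feature layout and shapes are illustrative, not the actual Aristo implementation.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_solver(option_features, option_labels):
    """Step 1: per-solver calibration. `option_features` holds the raw and
    normalized scores (plus any extra features) for each answer option in the
    training set; `option_labels` marks whether that option was correct."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(option_features, option_labels)
    return clf   # clf.predict_proba(f)[:, 1] gives the calibrated confidence

def train_combiner(calibrated_confidences, option_labels):
    """Step 2: a second logistic regression over the per-solver calibrated
    confidences (shape: num_options x num_solvers) gives the final confidence."""
    comb = LogisticRegression(max_iter=1000)
    comb.fit(calibrated_confidences, option_labels)
    return comb

def rank_options(combiner, calibrated_confidences_for_question):
    final = combiner.predict_proba(calibrated_confidences_for_question)[:, 1]
    return int(np.argmax(final)), final
```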
<<</Ensembling>>>
<<</The Aristo System>>>
<<<Experiments and Results>>>
This section describes our precise experimental methodology followed by our results.
<<<Experimental Methodology>>>
<<<Omitted Question Classes>>>
In the experimental results reported below, we omitted questions that utilized diagrams. While these questions are frequent in the test, they are outside of our focus on language and reasoning. Moreover, the diagrams are highly varied (see Figure FIGREF22) and, despite work that tackled narrow diagram types, e.g., food chains (BID42), overall progress has been quite limited (BID43).
We also omitted questions that require a direct answer (rather than selecting from multiple choices), for two reasons. First, after removing questions with diagrams, they are rare in the remainder. Of the 482 direct answer questions over 13 years of Regents 8th Grade Science exams, only 38 ($<$8%) do not involve a diagram. Second, they are complex, often requiring explanation and synthesis. Both diagram and direct-answer questions are natural topics for future work.
<<</Omitted Question Classes>>>
<<<Dataset Formulation>>>
We evaluate Aristo using several datasets of independently-authored science questions taken from standardized tests. Each dataset is divided into train, development, and test partitions, the test partitions being “blind”, i.e., hidden to both the researchers and the Aristo system during training. All questions are taken verbatim from the original sources, with no rewording or modification. As mentioned earlier, we use only the non-diagram, multiple choice (NDMC) questions. We exclude questions with an associated diagram that is required to interpret the question. In the occasional case where two questions share the same preamble, the preamble is repeated for each question so they are independent. The Aristo solvers are trained using questions in the training partition (each solver is trained independently, as described earlier), and then the combination is fine-tuned using the development set.
The Regents exam questions are taken verbatim from the New York Regents Examination board, using the 4th Grade Science, 8th Grade Science, and 12th Grade Living Environment examinations. The questions are partitioned into train/dev/test by exam, i.e., each exam is either in train, dev, or test but not split up between them. The ARC dataset is a larger corpus of science questions drawn from public resources across the country, spanning grades 3 to 9, and also includes the Regents 4th and 8th questions (using the same train/dev/test split). Further details of the datasets are described in (BID13). The datasets are publicly available. Dataset sizes are shown in Table TABREF34. All but 39 of the 9366 questions are 4-way multiple choice, the remaining 39 ($<$0.5%) being 3- or 5-way. A random score over the entire dataset is 25.02%.
For each question, the answer option with the highest overall confidence from Aristo's combination module is selected, scoring 1 point if the answer is correct, 0 otherwise. In the (very rare) case of N options having the same confidence (an N-way tie) that includes the correct option, the system receives 1/N points (equivalent to the asymptote of random guessing between the N).
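The scoring rule, including the tie handling, amounts to the following small helper (a sketch).
```python
def score_question(confidences, correct_index):
    """1 point for a unique correct top choice; 1/N points for an N-way tie
    at the top that includes the correct option; 0 otherwise."""
    top = max(confidences)
    tied = [i for i, c in enumerate(confidences) if c == top]
    return 1.0 / len(tied) if correct_index in tied else 0.0
```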
<<</Dataset Formulation>>>
<<</Experimental Methodology>>>
<<<Main Results>>>
The results are summarized in Table TABREF33, showing the performance of the solvers individually, and their combination in the full Aristo system. Note that Aristo is a single system run on the five datasets (not retuned for each dataset in turn).
Most notably, Aristo's scores on the Regents Exams far exceed earlier performances (e.g., BID0;BID25), and represents a new high-point on science questions.
In addition, the results show the dramatic impact of new language modeling technology, embodied in AristoBERT and AristoRoBERTa, the scores for these two solvers dominating the performance of the overall system. Even on the ARC-Challenge questions, containing a wide variety of difficult questions, the language modeling based solvers dominate. The general increasing trend of solver scores from left to right in the table loosely reflects the progression of the NLP field over the six years of the project.
To check that we have not overfit to our data, we also ran Aristo on the most recent years of the Regents Grade Exams (4th and 8th Grade), years 2017-19, which were unavailable at the start of the project and were not part of our datasets. The results are shown in Table TABREF42, showing scores similar to those on our larger datasets, suggesting the system is not overfit.
On the entire exam, the NY State Education Department considers a score of 65% as “Meeting the Standards”, and over 85% as “Meeting the Standards with Distinction”. If this rubric applies equally to the NDMC subset we have studied, this would mean Aristo has met the standard with distinction in 8th Grade Science.
<<</Main Results>>>
<<<Answer Only Performance>>>
Several authors have observed that for some multiple choice datasets, systems can still perform well even when ignoring the question body and looking only at the answer options (BID44;BID45). This surprising result is particularly true for crowdsourced datasets, where workers may use stock words or phrases (e.g., “not”) in incorrect answer options that gives them away. A dataset with this characteristic is clearly problematic, as systems can spot such cues and do well without even reading the question.
To measure this phenomenon on our datasets, we trained and tested a new AristoRoBERTa model giving it only the answer options (no question body nor retrieved knowledge). The results on the test partition are shown in Table TABREF44. We find scores significantly above random (25%), in particular for the 12th Grade set which has longer answers. But the scores are sufficiently low to indicate the datasets are relatively free of annotation artifacts that would allow the system to often guess the answer independent of the question. This desirable feature is likely due to the fact these are natural science questions, carefully crafted by experts for inclusion in exams, rather than mass-produced through crowdsourcing.
<<</Answer Only Performance>>>
<<<Adversarial Answer Options>>>
One way of testing robustness in multiple choice is to change or add incorrect answer options, and see if the system's performance degrades (BID26). If a system has mastery of the material, we would expect its score to be relatively unaffected by such modifications. To explore this, we investigated adversarially adding extra incorrect options, i.e., searching for answer options that might confuse the system, using AristoRoBERTa, and adding them as extra choices to the existing questions.
To do this, for each question, we collect a large ($\approx $ 100) number of candidate additional answer choices using the correct answers to other questions in the same dataset (and train/test split), where the top 100 are chosen by a superficial alignment score (features such as answer length and punctuation usage). We then re-rank these additional choices using AristoRoBERTa, take the top N, and add them to the original K (typically 4) choices for the question.
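A sketch of this construction is shown below; `superficial_score` and `model_score` are hypothetical stand-ins for the shallow alignment heuristic and the AristoRoBERTa confidence used for re-ranking.
```python
def add_adversarial_options(question, original_options, answer_pool,
                            superficial_score, model_score, n_extra=4):
    """Add distractors drawn from correct answers to other questions.
    `superficial_score(question, candidate)` stands in for the shallow
    alignment heuristic (answer length, punctuation, ...), and
    `model_score(question, candidate)` for the solver confidence used to
    re-rank; both are hypothetical helpers."""
    candidates = [c for c in answer_pool if c not in original_options]
    # keep the ~100 most superficially plausible candidates
    candidates = sorted(candidates, key=lambda c: superficial_score(question, c),
                        reverse=True)[:100]
    # re-rank by how strongly they attract the solver; append the top n_extra
    candidates = sorted(candidates, key=lambda c: model_score(question, c),
                        reverse=True)[:n_extra]
    return list(original_options) + candidates
```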
If we add N=4 extra choices to the normal 4-way questions, they become 8-way multiple choice, and performance drops dramatically (over 40 percentage points), albeit unfairly as we have by definition added choices that confuse the system. We then train the model further on this 8-way adversarial dataset, a process known as inoculation (BID46). After further training, we still find a drop, but significantly less (around 10 percentage points absolute, 13.8% relative, Table TABREF45), even though many of the new distractor choices would be easy for a human to rule out.
For example, while the solver gets the right answer to the following question:
The condition of the air outdoors at a certain time of day is known as (A) friction (B) light (C) force (D) weather [selected, correct]
it fails for the 8-way variant:
The condition of the air outdoors at a certain time of day is known as (A) friction (B) light (C) force (D) weather [correct] (Q) joule (R) gradient [selected] (S) trench (T) add heat
These results show that while Aristo performs well, it still has some blind spots that can be artificially uncovered through adversarial methods such as this.
<<</Adversarial Answer Options>>>
<<</Experiments and Results>>>
<<<Related Work>>>
This section describes related work on answering standardized-test questions, and on math word problems in particular. It provides an overview rather than exhaustive citations.
<<<Standardized Tests>>>
Standardized tests have long been proposed as challenge problems for AI (e.g., BID47;BID4;BID5;BID48), as they appear to require significant advances in AI technology while also being accessible, measurable, understandable, and motivating. Earlier work on standardized tests focused on specialized tasks, for example, SAT word analogies (BID49), GRE word antonyms (BID50), and TOEFL synonyms (BID51). More recently, there have been attempts at building systems to pass university entrance exams. Under NII's Todai project, several systems were developed for parts of the University of Tokyo Entrance Exam, including maths, physics, English, and history (BID52;BID53;BID54), although in some cases questions were modified or annotated before being given to the systems (e.g., BID55). Similarly, a smaller project worked on passing the Gaokao (China's college entrance exam) (e.g., BID56;BID57). The Todai project was reported as ended in 2016, in part because of the challenges of building a machine that could “grasp meaning in a broad spectrum” (BID58).
<<</Standardized Tests>>>
<<<Math Word Problems>>>
Substantial progress has been achieved on math word problems. On plane geometry questions, (BID59) demonstrated an approach that achieved a 61% accuracy on SAT practice questions. The Euclid system (BID60) achieved a 43% recall and 91% precision on SAT "closed-vocabulary" algebra questions, a limited subset of questions that nonetheless constitutes approximately 45% of a typical math SAT exam. Closed-vocabulary questions are those that do not reference real-world situations (e.g., "what is the largest prime smaller than 100?" or "Twice the product of x and y is 8. What is the square of x times y?").
Work on open-world math questions has continued, but results on standardized tests have not been reported and thus it is difficult to benchmark the progress relative to human performance. See Amini et al. (2019) for a recent snapshot of the state of the art, and references to the literature on this problem.
<<</Math Word Problems>>>
<<</Related Work>>>
<<<Summary and Conclusion>>>
Answering science questions is a long-standing AI grand challenge (BID14;BID20). This paper reports on Aristo—the first system to achieve a score of over 90% on the non-diagram, multiple choice part of the New York Regents 8th Grade Science Exam, demonstrating that modern NLP methods can result in mastery of this task. Although Aristo only answers multiple choice questions without diagrams, and operates only in the domain of science, it nevertheless represents an important milestone towards systems that can read and understand. The momentum on this task has been remarkable, with accuracy moving from roughly 60% to over 90% in just three years. Finally, the use of independently authored questions from a standardized test allows us to benchmark AI performance relative to human students.
Beyond the use of a broad vocabulary and scientific concepts, many of the benchmark questions intuitively appear to require reasoning to answer (e.g., Figure FIGREF19). To what extent is Aristo reasoning to answer questions? For many years in AI, reasoning was thought of as the discrete, symbolic manipulation of sentences expressed in a formally designed language (BID61;BID62). With the advent of deep learning, this notion of reasoning has shifted, with machines performing challenging tasks using neural architectures rather than explicit representation languages. Today, we do not have a sufficiently fine-grained notion of reasoning to answer this question precisely, but we can observe surprising performance on answering science questions. This suggests that the machine has indeed learned something about language and the world, and how to manipulate that knowledge, albeit neither symbolically nor discretely.
Although an important milestone, this work is only a step on the long road toward a machine that has a deep understanding of science and achieves Paul Allen's original dream of a Digital Aristotle. A machine that has fully understood a textbook should not only be able to answer the multiple choice questions at the end of the chapter—it should also be able to generate both short and long answers to direct questions; it should be able to perform constructive tasks, e.g., designing an experiment for a particular hypothesis; it should be able to explain its answers in natural language and discuss them with a user; and it should be able to learn directly from an expert who can identify and correct the machine's misunderstandings. These are all ambitious tasks still largely beyond the current technology, but with the rapid progress happening in NLP and AI, solutions may arrive sooner than we expect.
<<</Summary and Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nThe Turing Test versus Standardized Tests\nNatural Language Processing\nMachine Understanding of Textbooks\nA Brief History of Aristo\nThe Aristo System\nOverview\nInformation Retrieval and Statistics\nReasoning Methods\nLarge-Scale Language models\nEnsembling\nExperiments and Results\nExperimental Methodology\nOmitted Question Classes\nDataset Formulation\nMain Results\nAnswer Only Performance\nAdversarial Answer Options\nRelated Work\nStandardized Tests\nMath Word Problems\nSummary and Conclusion"
],
"type": "outline"
}
|
1909.09986
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Improving Quality and Efficiency in Plan-based Neural Data-to-Text Generation
<<<Abstract>>>
We follow the step-by-step approach to neural data-to-text generation we proposed in Moryossef et al. (2019), in which the generation process is divided into a text-planning stage followed by a plan-realization stage. We suggest four extensions to that framework: (1) we introduce a trainable neural planning component that can generate effective plans several orders of magnitude faster than the original planner; (2) we incorporate typing hints that improve the model's ability to deal with unseen relations and entities; (3) we introduce a verification-by-reranking stage that substantially improves the faithfulness of the resulting texts; (4) we incorporate a simple but effective referring expression generation module. These extensions result in a generation process that is faster, more fluent, and more accurate.
<<</Abstract>>>
<<<Introduction>>>
In the data-to-text generation task (D2T), the input is data encoding facts (e.g., a table, a set of tuples, or a small knowledge graph), and the output is a natural language text representing those facts. In neural D2T, the common approaches train a neural end-to-end encoder-decoder system that encodes the input data and decodes an output text. In recent work BIBREF0 we proposed to adopt ideas from “traditional” language generation approaches (i.e. BIBREF1, BIBREF2, BIBREF3) that separate the generation into a planning stage that determines the order and structure of the expressed facts, and a realization stage that maps the plan to natural language text. We show that by breaking the task this way, one can achieve the same fluency of neural generation systems while being able to better control the form of the generated text and to improve its correctness by reducing missing facts and “hallucinations”, common in neural systems.
In this work we adopt the step-by-step framework of BIBREF0 and propose four independent extensions that improve aspects of our original system: we suggest a new plan generation mechanism, based on a trainable-yet-verifiable neural decoder, that is orders of magnitude faster than the original one (§SECREF3); we use knowledge of the plan structure to add typing information to plan elements. This improves the system's performance on unseen relations and entities (§SECREF4); the separation of planning from realizations allows the incorporation of a simple output verification heuristic that drastically improves the correctness of the output (§SECREF5); and finally we incorporate a post-processing referring expression generation (REG) component, as proposed but not implemented in our previous work, to improve the naturalness of the resulting output (§SECREF6).
<<</Introduction>>>
<<<Step-by-step Generation>>>
We provide a brief overview of the step-by-step system. See BIBREF0 for further details. The system works in two stages. The first stage (planning) maps the input facts (encoded as a directed, labeled graph, where nodes represent entities and edges represent relations) to text plans, while the second stage (realization) maps the text plans to natural language text.
The text plans are a sequence of sentence plans—each of which is a tree— representing the ordering of facts and entities within the sentence. In other words, the plans determine the separation of facts into sentences, the ordering of sentences, and the ordering of facts and entities within each sentence. This stage is completely verifiable: the text plans are guaranteed to faithfully encode all and only the facts from the input. The realization stage then translates the plans into natural language sentences, using a neural sequence-to-sequence system, resulting in fluent output.
<<</Step-by-step Generation>>>
<<<Fast and Verifiable Planner>>>
The data-to-plan component in BIBREF0 exhaustively generates all possible plans, scores them using a heuristic, and chooses the highest scoring one for realization. While this is feasible with the small input graphs in the WebNLG challenge BIBREF4, it is also very computationally intensive, growing exponentially with the input size. We propose an alternative planner which works in linear time in the size of the graph and remains verifiable: generated plans are guaranteed to represent the input faithfully.
The original planner works by first enumerating over all possible splits into sentences (sub-graphs), and for each sub-graph enumerating over all possible undirected, unordered, Depth First Search (DFS) traversals, where each traversal corresponds to a sentence plan. Our planner combines these into a single process. It works by performing a series of what we call random truncated DFS traversals. In a DFS traversal, a node is visited, then its children are visited recursively in order. Once all children are visited, the node “pops” back to the parent. In a random truncated traversal, the choice of which children to visit next, as well as whether to go to the next children or to “pop”, is non-deterministic (in practice, our planner decides by using a neural-network controller). Popping at a node before visiting all its children truncates the DFS: further descendants of that node will not be visited in this traversal. It behaves as a DFS on a graph where edges to these descendants do not exist. Popping the starting node terminates the traversal.
Our planner works by choosing a node with a non-zero degree and performing a truncated DFS traversal from that node. Then, all edges visited in the traversal are removed from the input graph, and the process repeats (performing another truncated DFS) until no more edges remain. Each truncated DFS traversal corresponds to a sentence plan, following the DFS-to-plan procedure of BIBREF0: the linearized plan is generated incrementally at each step of the traversal. This process is linear in the number of edges in the graph.
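A schematic implementation of this planning loop is given below. The `controller` and `choose_start` callables stand in for the learned components described next; the edge representation and plan linearization symbols are simplified.
```python
def truncated_dfs_plan(node, remaining, controller, plan):
    """One truncated DFS traversal from `node`. `remaining` is the set of
    unexpressed (head, relation, tail) edges; visited edges are removed from
    it, and plan symbols are appended to `plan` as the traversal proceeds."""
    plan.extend(["(", node])
    while True:
        # incident edges that have not been expressed yet
        actions = [e for e in remaining if e[0] == node or e[2] == node]
        if not actions:
            break
        choice = controller(plan, actions + ["pop"])   # learned policy stand-in
        if choice == "pop":                            # truncate this subtree
            break
        head, rel, tail = choice
        remaining.discard(choice)
        child = tail if head == node else head
        plan.append(rel if head == node else rel + "_inv")   # mark direction
        truncated_dfs_plan(child, remaining, controller, plan)
    plan.append(")")

def plan_document(edges, controller, choose_start):
    """Repeat truncated traversals until every input edge is expressed; each
    traversal yields one linearized sentence plan. Assumes the controller makes
    progress (it does not immediately pop at a freshly chosen start node)."""
    remaining, sentence_plans = set(edges), []
    while remaining:
        start = choose_start(remaining)                # stand-in for choose-i
        plan = []
        truncated_dfs_plan(start, remaining, controller, plan)
        sentence_plans.append(plan)
    return sentence_plans
```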
At training time, we use the plan-to-DFS mapping to perform the correct sequence of traversals, and train a neural classifier to act as a controller, choosing which action to perform at each step. At test time, we use the controller to guide the truncated DFS process. This mechanism is inspired by transition based parsing BIBREF5. The action set at each stage is dynamic. During traversal, it includes the available children at each stage and pop. Before traversals, it includes a choose-i action for each available node $n_i$. We assign a score to each action, normalize with softmax, and train to choose the desired one using cross-entropy loss. At test time, we either greedily choose the best action, or we can sample plans by sampling actions according to their assigned probabilities.
Feature Representation and action scoring. Each graph node $n_i$ corresponds to an entity $x_{n_i}$, and has an associated embedding vector $\mathbf {x_{n_i}}$. Each relation $r_i$ is associated with an embedding vector $\mathbf {r_i}$. Each labeled input graph edge $e_k = (n_i, r_\ell , n_j)$ is represented as a projected concatenated vector $\mathbf {e_k}=\mathbf {E}(\mathbf {x_{n_i}};\mathbf {r_\ell };\mathbf {x_{n_j}})$, where $\mathbf {E}$ is a projection matrix. Finally, each node $n_i$ is then represented as a vector $\mathbf {n_i} = \mathbf {V}[\mathbf {x_{n_i}};\sum _{e_j\in \pi (i)}\mathbf {e_j};\sum _{e_j\in \pi ^{-1}(i)}\mathbf {e_j}]$, where $\pi (i)$ and $\pi ^{-1}(i)$ are the incoming and outgoing edges from node $n_i$. The traverse-to-child-via-edge-$e_j$ action is represented as $\mathbf {e_j}$, choose-node-i is represented as $\mathbf {n_i}$ and pop-to-node-i is represented as $\mathbf {n_i}+\mathbf {p}$ where $\mathbf {p}$ is a learned vector. The score for an action $a$ at time $t$ is calculated as a dot-product between the action representation and the LSTM state over the symbols generated in the plan so far. Thus, each decision takes into account the immediate surrounding of the node in the graph, and the plan structure generated so far.
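The controller's scoring step can be sketched in PyTorch as follows; the dimensions, initialization, and handling of the pop action are our simplifications of the description above.
```python
import torch
import torch.nn as nn

class PlannerController(nn.Module):
    """Scores candidate actions against an LSTM encoding of the plan so far
    (a sketch; dimensions and wiring are illustrative)."""
    def __init__(self, n_entities, n_relations, n_plan_symbols, dim=128):
        super().__init__()
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.relation_emb = nn.Embedding(n_relations, dim)
        self.symbol_emb = nn.Embedding(n_plan_symbols, dim)
        self.edge_proj = nn.Linear(3 * dim, dim)      # E [x_i ; r ; x_j]
        self.node_proj = nn.Linear(3 * dim, dim)      # V [x_i ; in-sum ; out-sum]
        self.pop_vec = nn.Parameter(torch.randn(dim) * 0.01)   # learned p
        self.plan_lstm = nn.LSTM(dim, dim, batch_first=True)

    def edge_repr(self, head_id, rel_id, tail_id):
        parts = [self.entity_emb(head_id), self.relation_emb(rel_id),
                 self.entity_emb(tail_id)]
        return self.edge_proj(torch.cat(parts, dim=-1))

    def node_repr(self, node_id, in_edge_reprs, out_edge_reprs):
        parts = [self.entity_emb(node_id), in_edge_reprs.sum(0),
                 out_edge_reprs.sum(0)]
        return self.node_proj(torch.cat(parts, dim=-1))

    def forward(self, plan_symbol_ids, action_reprs):
        # action_reprs: (num_actions, dim); traverse-via-edge = edge repr,
        # choose-node = node repr, pop-to-node = node repr + self.pop_vec
        _, (h, _) = self.plan_lstm(self.symbol_emb(plan_symbol_ids).unsqueeze(0))
        state = h[-1, 0]                      # LSTM state over the plan so far
        scores = action_reprs @ state         # dot-product action scoring
        return torch.log_softmax(scores, dim=-1)   # train with cross-entropy
```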
Speed. On a 7-edge graph, the planner of BIBREF0 takes an average of 250 seconds to generate a plan, while our planner takes 0.0025 seconds, 5 orders of magnitude faster.
<<</Fast and Verifiable Planner>>>
<<<Incorporating typing information for unseen entities and relations>>>
In BIBREF0, the sentence plan trees were linearized into strings that were then fed to a neural machine translation decoder (OpenNMT) BIBREF6 with a copy mechanism. This linearization process is lossy, in the sense that the linearized strings do not explicitly distinguish between symbols that represent entities (e.g., BARACK_OBAMA) and symbols that represent relations (e.g., works-for). While this information can be deduced from the position of the symbol within the structure, there is a benefit in making it more explicit. In particular, the decoder needs to act differently when decoding relations and entities: entities are copied, while relations need to be verbalized. By making the typing information explicit to the decoder, we make it easier for it to generalize this behavior distinction and apply it also for unseen entities and relations. We thus expect the typing information to be especially useful for the unseen part of the evaluation set.
We incorporate typing information by concatenating to the embedding vector of each input symbol one of three embedding vectors, S, E or R, where S is concatenated to structural elements (opening and closing brackets), E to entity symbols and R to relation symbols.
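As a sketch (parameter names and dimensions are illustrative, not the system's actual configuration), the concatenation can be implemented as:

```python
import torch
import torch.nn as nn

class TypedSymbolEmbedding(nn.Module):
    """Concatenate a structural (S), entity (E) or relation (R) type embedding
    to each linearized-plan symbol embedding before the seq2seq encoder."""
    TYPE_IDS = {"S": 0, "E": 1, "R": 2}

    def __init__(self, vocab_size, sym_dim, type_dim):
        super().__init__()
        self.sym_emb = nn.Embedding(vocab_size, sym_dim)
        self.type_emb = nn.Embedding(len(self.TYPE_IDS), type_dim)

    def forward(self, symbol_ids, symbol_types):
        # symbol_types[i] is "S" for brackets, "E" for entity symbols, "R" for relations
        type_ids = torch.tensor([self.TYPE_IDS[t] for t in symbol_types],
                                device=symbol_ids.device)
        return torch.cat([self.sym_emb(symbol_ids), self.type_emb(type_ids)], dim=-1)
```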
<<</Incorporating typing information for unseen entities and relations>>>
<<<Output verification>>>
While the plan generation stage is guaranteed to be faithful to the input, the translation process from plans to text is based on a neural seq2seq model and may suffer from known issues with such models: hallucinating facts that do not exist in the input, repeating facts, or dropping facts. While the clear mapping between plans and text helps to reduce these issues greatly, the system in BIBREF0 still has 2% errors of these kinds.
<<<Existing approaches: soft encouragement via neural modules.>>>
Recent work in neural text generation and summarization attempts to address these issues by mapping the textual outputs back to structured predicates and comparing these predicates to the input data. BIBREF7 uses a neural checklist model to avoid the repetition of facts and improve coverage. BIBREF8 generates $k$-best output candidates with beam search and then tries to map each candidate output back to the input structure using a reverse seq2seq model trained on the same data, selecting the highest scoring output candidate that best translates back to the input. BIBREF9 reconstructs the input at training time, by jointly learning a back-translation model and enforcing the back-translation to reconstruct the input. These approaches are “soft” in the sense that they crucially rely on the internal dynamics or on the output of a neural network module that may or may not be correct.
<<</Existing approaches: soft encouragement via neural modules.>>>
<<<Our proposal: explicit verification.>>>
The separation between planning and realization provided by the step-by-step framework allows incorporating a robust and straightforward verification step, that does not rely on brittle information extraction procedures or trust neural network models.
The plan-to-text generation handles each sentence individually and translates entities as copy operations. We thus have complete knowledge of the generated entities and their locations. We can then assess the correctness of an output sentence by comparing its sequence of entities to the entity sequence in the corresponding sentence plan, which is guaranteed to be complete. We then decode $k$-best outputs and rerank them based on their correctness scores, breaking ties using model scores. We find empirically that, with a beam of size 5, at least one candidate matches the plan's entity sequence exactly in 99.82% of the cases for seen entities and relations (compared to 98.48% at 1-best), and in 72.3% of the cases for unseen entities and relations (compared to 58.06% at 1-best). In the remaining cases, we set the system to continue searching by trying other plans, either by going down the list of plans (when using the exhaustive planner of BIBREF0) or by sampling a new plan (when using the linear-time planner suggested in this paper).
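The verification itself reduces to an exact comparison of entity sequences, as in the following sketch (the candidate representation and the fallback signalling are illustrative assumptions):

```python
def select_faithful_output(plan_entity_seq, candidates):
    """candidates: k-best list of (text, copied_entity_seq) pairs, ordered by model score.
    Return the first (highest-scoring) candidate whose copied entity sequence exactly
    matches the sentence plan; return None to signal that the caller should try the
    next plan in the list or sample a fresh one."""
    for text, entity_seq in candidates:
        if entity_seq == plan_entity_seq:
            return text
    return None
```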
<<</Our proposal: explicit verification.>>>
<<</Output verification>>>
<<<Referring Expressions>>>
The step-by-step system generates entities by first generating indexed entity symbols, and then lexicalizing each symbol to the string associated with this entity in the input structure (i.e., all occurrences of the entity 11TH MISSISSIPPI INFANTRY MONUMENT will be lexicalized with the full name rather than “it” or “the monument”). This results in correct but somewhat unnatural text. In contrast, end-to-end neural generation systems are trained on text that includes referring expressions, and generate them naturally as part of the decoding process, resulting in natural-looking text. However, the generated referring expressions are sometimes incorrect. BIBREF0 suggests the possibility of handling this with a post-processing referring-expression generation (REG) step. Here, we propose a concrete REG module and demonstrate its effectiveness. One option is to use a supervised REG module BIBREF11 that is trained to lexicalize in-context mentions. Such an approach is sub-optimal for our setup, as it is restricted to the entities and contexts it has seen in training and is prone to error on unseen entities and contexts.
Our REG solution lexicalizes the first mention of each entity as its associated string and attempts to generate referring expressions for subsequent mentions. The generated referring expressions can take the form “Pron”, “X” or “the X”, where Pron is a pronoun and X is a word appearing in the entity's string (allowing, e.g., John, or the monument). We also allow referring to an entity with its entire associated string. We restrict the set of allowed pronouns for each entity according to its type (male, female, plural-animate, unknown-animate, inanimate). We then take, for each entity mention individually, the referring expression that receives the best language model score in context, using a strong unsupervised neural LM (BERT BIBREF12). The system is guaranteed to be correct in the sense that it will not generate wrong pronouns. It also has failure modes: it is possible for the system to generate ambiguous referring expressions (e.g., John is Bob's father. He works as a nurse.), and it may lexicalize Boston University as Boston. We find that the second kind of mistake is rare, as it is handled well by the language model. It can also be controlled by manually restricting the set of possible referring expressions for each entity. Similarly, it is easy to extend the system to support other lexicalizations of entities by extending the sets of allowed lexicalizations (for example, supporting abbreviations, initials or nicknames), either as user-supplied inputs or using heuristics.
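A sketch of the candidate generation and in-context selection is shown below. The pronoun inventory, the shortening heuristic, and the lm_score callback (e.g., a BERT-based pseudo-log-likelihood) are simplifying assumptions for illustration; they do not reproduce the exact rules used in the system.

```python
PRONOUNS = {
    "male": ["he", "him", "his"],
    "female": ["she", "her"],
    "plural-animate": ["they", "them", "their"],
    "unknown-animate": ["they", "them"],
    "inanimate": ["it", "its"],
}

def referring_candidates(entity_string, entity_type):
    """Allowed referring expressions for a non-first mention of an entity."""
    cands = [entity_string]                          # the full string is always allowed
    cands += PRONOUNS.get(entity_type, [])
    for tok in entity_string.split():                # "X" and "the X" forms
        cands += [tok, "the " + tok.lower()]
    return cands

def lexicalize_mention(sentence_with_slot, entity_string, entity_type, lm_score):
    """Pick the candidate with the best in-context language model score.
    `lm_score(text)` is an assumed callback, e.g. a BERT pseudo-log-likelihood."""
    candidates = referring_candidates(entity_string, entity_type)
    return max(candidates,
               key=lambda c: lm_score(sentence_with_slot.replace("[SLOT]", c)))
```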
<<</Referring Expressions>>>
<<<Evaluation and Results>>>
We evaluate each of the introduced components separately. Tables listing their interactions are available in the appendix. The appendix also lists some qualitative outputs. The main trends that we observe are:
The new planner causes a small drop in BLEU, but is orders of magnitude faster (§SECREF12).
Typing information causes a negligible drop in BLEU overall, but improves results substantially for the unseen portion of the dataset (§SECREF13).
The verification step is effective at improving the faithfulness of the output, practically eliminating omitted and overgenerated facts, reducing the number of wrong facts, and increasing the number of correctly expressed facts. This is based on both manual and automatic evaluations. (§SECREF14).
The referring expression module is effective, with an intrinsic correctness of 92.2%. It substantially improves BLEU scores. (§SECREF16).
<<<Setup>>>
We evaluate on the WebNLG dataset BIBREF4, comparing to the step-by-step systems described in BIBREF0, which are state of the art. Due to randomness inherent in neural training, our reported automatic evaluation measures are based on an average of 5 training runs of each system (neural planner and neural realizer), each run with a different random seed.
<<</Setup>>>
<<<Neural Planner vs Exhaustive Planner>>>
We compare the exhaustive planner from BIBREF0 to our neural planner, by replacing the planner component in the BIBREF0 system. Moving to the neural planner exhibits a small drop in BLEU (46.882 dropped to 46.506). However, the figure indicates a speedup of 5 orders of magnitude (100,000x) for graphs with 7 edges, and linear growth in runtime with the number of edges, compared to exponential time for the exhaustive planner.
<<</Neural Planner vs Exhaustive Planner>>>
<<<Effect of Type Information>>>
We repeat the coverage experiment in BIBREF0, counting the number of output texts that contain all the entities in the input graph, and, of these texts, counting the ones in which the entities appear in the exact same order as the plan. Incorporating typing information reduced the number of texts not containing all entities by 18% for the seen part of the test set, and 16% for the unseen part. Moreover, for the texts containing all entities, the number of texts that did not follow the plan's entity order is reduced by 46% for the seen part of the test set, and by 35% for the unseen part. We also observe a small drop in BLEU scores, which we attribute to some relations being verbalized more freely (though correctly).
<<</Effect of Type Information>>>
<<<Effect of Output Verification>>>
The addition of output verification resulted in negligible changes in BLEU, reinforcing that automatic metrics are not sensitive enough to output accuracy. We thus performed manual analysis, following the procedure in BIBREF0. We manually inspect 148 samples from the seen part of the test set, containing 440 relations, counting expressed, omitted, wrong and over-generated (hallucinated) facts. We compare to the StrongNeural and BestPlan systems from BIBREF0. Results in Table indicate the effectiveness of the verification process in ensuring correct output, reducing the already small number of omitted and overgenerated facts to 0 (with the exhaustive planner) and keeping it small (with the fast neural planner).
<<</Effect of Output Verification>>>
<<<Referring Expression Module>>>
<<<Intrinsic evaluation of the REG module.>>>
We manually reviewed 1,177 pairs of entities and referring expressions generated by the system. We find that 92.2% of the generated referring expressions refer to the correct entity.
From the generated expressions, 325 (27.6%) were pronouns, 192 (16.3%) repeated a one-token entity as is, and 505 (42.9%) were correct shortenings of a long entity. In 63 (5.6%) of the cases the system did not find a good substitute and kept the entire entity intact. Finally, 92 (7.82%) were wrong referrals. Overall, 73.3% of the non-first mentions of entities were replaced with suitable shorter and more fluent expressions.
<<</Intrinsic evaluation of the REG module.>>>
<<<Effect on BLEU scores.>>>
As can be seen in Table , using the REG module increases BLEU scores for both the exhaustive and the neural planner.
<<</Effect on BLEU scores.>>>
<<</Referring Expression Module>>>
<<</Evaluation and Results>>>
<<<Conclusions>>>
We adopt the planning-based neural generation framework of BIBREF0 and extend it to be orders of magnitude faster and produce more correct and more fluent text. We conclude that these extensions not only improve the system of BIBREF0 but also highlight the flexibility and advantages of the step-by-step framework for text generation.
<<</Conclusions>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nStep-by-step Generation\nFast and Verifiable Planner\nIncorporating typing information for unseen entities and relations\nOutput verification\nExisting approaches: soft encouragement via neural modules.\nOur proposal: explicit verification.\nReferring Expressions\nEvaluation and Results\nSetup\nNeural Planner vs Exhaustive Planner\nEffect of Type Information\nEffect of Output Verification\nReferring Expression Module\nIntrinsic evaluation of the REG module.\nEffect on BLEU scores.\nConclusions"
],
"type": "outline"
}
|
1912.03457
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities
<<<Abstract>>>
In this paper, we examine and analyze the challenges associated with developing and introducing language technologies to low-resource language communities. While doing so, we bring to light the successes and failures of past work in this area, challenges being faced in doing so, and what they have achieved. Throughout this paper, we take a problem-facing approach and describe essential factors which the success of such technologies hinges upon. We present the various aspects in a manner which clarify and lay out the different tasks involved, which can aid organizations looking to make an impact in this area. We take the example of Gondi, an extremely-low resource Indian language, to reinforce and complement our discussion.
<<</Abstract>>>
<<<Introduction>>>
Technology pervades all aspects of society and continues to change the way people access and share information, learn and educate, as well as provide and access services. Language is the main medium through which such transformational technology can be integrated into the socioeconomic processes of a community. Natural Language Processing (NLP) and Speech systems, therefore, break down barriers and provide users and whole communities with easy access to information and services. However, the current trend is to build language technology designed to work on languages with very high resources in terms of data and infrastructure.
Also, as Machine Learning (ML) and NLP practitioners, we get caught up in an information-theoretic view of the problem, e.g., focusing on incremental improvements of performance on benchmarks or capturing accurate distributions over data, and tend to forget that the raison d'être of NLP is to build systems that add value to its users BIBREF0. We want to build models that enable people to read the news that was not written in their language, ask questions about their health when they do not have access to a doctor, etc. And while these technology applications are more and more ubiquitous for languages with a lot of data, a larger majority of languages remain resource-poor and bereft of such systems. As discussed in the United Nations e-government survey BIBREF1, “one of the most important obstacles to e-inclusion, particularly among vulnerable groups with little education, is language”. Thus, by excluding these languages from reaping the benefits of the advancements in language technology, we marginalize the already vulnerable groups even further.
India is a highly multilingual society and home to some of the largest language communities in the world. 6 out of 20 most-spoken (native) languages in the world are Indic. Ethnologue BIBREF2 records 461 tongues in India out of 6912 worldwide (6%), the 4th largest belonging to any single country in the world. 122 of these languages are spoken by more than 10,000 people. 29 languages have more than 1 million speakers, which include indigenous tribal languages like Gondi and Mundari, some without a supported writing system or script. Despite the large numbers of users, most of these languages have very little data available. Figure FIGREF1 shows that as compared to some of the much lesser spoken languages like German, Indic languages are severely low resourced. In a vast country like India, access to information thus becomes a huge concern. This lack of information means that not only do these communities not have information in domains like agriculture, health, weather etc., which could improve their quality of lives, but they may also not be aware of their basic rights as citizens of the country.
In this paper, we take the position that the current direction of advanced language technology towards extremely high data requirements can have severe socio-economic implications for a majority of language communities in the world. We focus on specific aspects of designing and building systems and applications for low resource languages and their speech communities to exemplify viable social impact through language technology. We begin by discussing the aspect of information exchange, which is the core motivation behind enabling low-resource language communities. We then steer our analysis towards the design and creation of an interface for people in these communities to simplify and enrich the process of information exchange. Finally, we gather insights about how to deploy these technologies to ensure extensive impact by studying and taking inspiration from existing technological deployments.
We use Gondi, a South-Central Dravidian language in the vulnerable category on UNESCO's Atlas of the World's Languages in Danger BIBREF3, as an example wherever possible. Spoken by nearly 3 million people BIBREF4 in the Indian states of Chhattisgarh, Andhra, Odisha, Maharashtra and Karnataka, it is heavily influenced by the dominant state language. However, it is also one of the least resourced languages in India, with very little available data and technology.
We believe that the components discussed in the sections below encapsulate the spectrum of issues surrounding this field and that all future discussions in this area will also fall under the umbrella of these categories. We believe that by focusing on Gondi, we will not only empower the Gondi community but more importantly, understand and create a pipeline or framework which can serve as a clear guide for potential ventures which plan on introducing disruptive language technologies in under-served communities.
<<</Introduction>>>
<<<Information Exchange>>>
The primary element in communication is information exchange. People living in less connected areas are often unable to get the kind of information they need, due to various socio-economic and technological barriers. As a result, they miss out on crucial knowledge required to improve their well-being. There are three co-dependent aspects woven into the fabric of information exchange: access to information, quality and coverage of the information, and methods to create and digitize available knowledge (generation).
<<<Access>>>
This section refers to past work and current ventures of making digital resources adequately available and accessible to people.
<<<Making information accessible to people>>>
Less-connected and technologically underdeveloped areas often suffer from the limited accessibility of up-to-date information. Providing more individuals access to the online repositories of information can often help them improve their well-being.
There are some situations, particularly during natural calamities, where the absence of notifications about potentially disaster-prone areas can mean the difference between life and death. People in regions with sparse connectivity often fall victim to these incidents due to the lack of timely updates. Using technical platforms to support the spread of information to these regions is an important goal to keep in mind. LORELEI BIBREF5 is a DARPA-funded initiative with the goal of building technologies for dealing with and responding to disasters in low-resource language communities. Similar initiatives in India would be capable of saving lives.
The daily function and health of individuals in a community can be influenced positively by the dissemination of relevant information. For example, healthcare and agricultural knowledge can affect the prosperity of a rural household, making them aware of potential solutions and remedies which can be acquired. There has been a considerable body of work focused on technology for healthcare access, which includes telemedicine BIBREF6 and remote diagnosis BIBREF7. While telecenters have been used to spread information on agricultural practices, persuading users to regularly use the telecenters BIBREF8 is a challenge, which could be addressed by the use of language technologies to simplify access. VideoKheti BIBREF9 is an example of a voice-based application which provides educational videos to farmers about effective agricultural practices. Similar studies have been carried out to assess the effectiveness of voice-activated applications for farming BIBREF10. There are considerable challenges, however, to ensure that these solutions are inclusive and accessible to low-literate and less-connected users.
Similarly, there are situations where there are certain rights and duties which an individual as a citizen of India is entitled to. Some communities have long been exploited and ill-treated BIBREF11, and providing them information regarding their rights as well as accurate news could foster a sense of solidarity within the community and encourage them to make their voice heard. An extensive study on the impact of CGNet Swara BIBREF12 showed that this citizen journalism platform inspired people in rural communities, gave them a feeling of being heard, and provided a venue to voice their grievances. There are also other promising ventures such as Awaaz De BIBREF10 and Gram Vaani BIBREF13 which aim to boost social activism in a similar manner.
<<</Making information accessible to people>>>
<<<Making more digital content available>>>
The process of enabling more low-resource language communities with tools to access online information alone is not sufficient. There need to be steps taken to make more of the content which exists online interpretable to people in these communities. For example, The Indian Constitution and other similar official communications from the government are written in 22 scheduled languages of India. Lack of access to other related documents deprives them of basic information. This is where building robust machine translation tools for low resource languages can help. Cross-language information retrieval makes extensive use of these translation mechanisms BIBREF14 where information is retrieved in a language different from the language of the user's query. BIBREF15 describes a system making use of minimal resources to perform the same.
There is huge potential for language technologies to be involved in content creation and information access. Further, more accurate retrieval methods can help the user get relevant information specific to their needs and context in their own language.
<<</Making more digital content available>>>
<<<Making NLP models more accessible to low resource languages>>>
Often, many state-of-the-art tools cannot be applied to low-resource languages due to the lack of data. Table TABREF6 describes the various technologies and their presence concerning languages with different levels of resource availability and the ease of data collection. We can observe that for low resource languages, there is considerable difficulty in adopting these tools. Machine Translation can potentially be used as a fix to bridge the gap. Translation engines can help in translating documents from minority languages to majority languages. This allows the pool of data to be used in a number of NLP tasks like sentiment analysis and summarization. Doing so allows us to leverage the existing body of work in NLP done on resource-rich languages and subsequently apply it to the resource-poor languages, thereby foregoing any attempt to reinvent the wheel for these languages. This ensures a quicker and wider impact. BIBREF16 performs sentiment analysis on Chinese customer reviews by translating them to English. They observe that the quality of machine translation systems is sufficient for sentiment analysis to be performed on the automatically translated texts without a substantial trade-off in accuracy.
<<</Making NLP models more accessible to low resource languages>>>
<<</Access>>>
<<<Generation>>>
This section refers to the generation of digital content which enriches online repositories with more diverse sets of information.
<<<Digitization of Documents>>>
There is a need to generate digital information and content for low-resource languages. It not only benefits the community by creating digital content for their needs, but it also provides data which can be used to train data-driven language technologies, such as ASRs, translation systems, and optical character recognition systems. Efforts to digitize content in India have been conducted in the past few years. The Government of India launched the Digital India initiative in 2015, which aims to digitize government documents in one of India's 120+ local languages. Such initiatives have evidently been useful before. For instance, the IMPACT project by the European Union was a large scale digitization project which helped push a lot of innovative work towards OCR and language technology for historical text retrieval and processing. IMPRINT is a similar initiative created by the Ministry of Human Resource Development (MHRD) to drive further research towards addressing such challenges.
The recent advancements in OCR technologies can propel efforts to digitize more handwritten documents. Such initiatives are already being undertaken to digitize and revive historical languages in Japan BIBREF17. Digital India library is a project that aims towards digitizing books and making them available online. Apart from printed books, a lot of ancient literature is written on palm leaves. The Regional Mega Scan Centre (RMSC) at IIIT Hyderabad has digitized over 100,000 books, one-third of which are in Indian Languages and additionally, they have also digitized text from scans of palm leaves. More initiatives such as these will help preserve and revive a number of languages that are part of the Indian heritage.
<<</Digitization of Documents>>>
<<<Crowdsourcing>>>
Data collection via crowdsourcing can be a challenge for low resource languages, primarily due to the expensive nature of the task coupled with the lack of commercial demand for such data. Thus, collecting this data at low cost becomes an important priority. Project Karya is a crowdsourcing platform which provides digital work to low-income workers. Although the data quality can be a concern, promising results have shown otherwise. BIBREF18 tested the quality of crowdsourced data in rural regions of India, tasking individuals with the digitization of Hindi/Marathi handwritten documents. This yielded an annotation accuracy of 96.7%, proving that there is potential in this area. Recently, collection of Marathi speech data is also being conducted. In a similar fashion, Navana Tech, a startup, has been collecting verbal banking queries in mid- and low-resource languages so that they can be integrated into various banking application platforms for financial inclusion. Such crowdsourcing platforms not only act as a potential source of data for low-resource communities, they also benefit low-income workers by increasing their current daily wage. Such ventures would enhance the inclusion of such workers in the digitization process, something which aligns with the aims of the Digital India mission.
The collection of data in an extremely low-resource language like Gondi can be particularly tricky, additionally considering the fact that Gondi does not have an official script. Pratham Books is a non-profit organization which aims to democratize access to books for children. They recently hosted a workshop where they trained members of the local community to translate books on StoryWeaver, their open-source publication platform. At the end of this workshop, approximately 200 books were translated from Hindi to Gondi (Devanagiri script). This was the first time children's books were made available in Gondi, and it also sparked the creation of parallel data for Hindi-Gondi translation systems.
<<</Crowdsourcing>>>
<<</Generation>>>
<<</Information Exchange>>>
<<<Interface>>>
The design of a user-friendly interface plays a very crucial role in ensuring that the deployed technology encompasses all strata of society. It is often seen that a majority of target users have not had the privilege of education, and show varying levels of literacy, both foundational and digital. In such scenarios, text-based modalities pose several limitations from both the user and designer perspectives, and graphical user interfaces have been the preferred choice in these applications. BIBREF19 reports that text-based interfaces were completely redundant for illiterate users and severely error-prone for literate but novice users. Further, several languages do not have unique keyboard standards or fonts, and some do not have a script at all BIBREF20.
To overcome these issues with text, speech as a modality has also been deployed with varying success. `CGNet Swara', a citizen-run journalism portal, uses a phone-based IVR system to educate illiterate users BIBREF21. ’Avaaj Utalo’ allows users to make simple phone calls to ask questions or browse questions and answers asked on agricultural topics BIBREF10. ’Spoken Web’ is another application wherein users can create ’voice sites’ analogous to ’websites’, which can then be easily accessed through voice interaction on mobile phones BIBREF22. These serve to provide farmers with relevant crop and market information. An attempt to leverage the complementarity of voice and graphic-based inputs was made by VideoKheti, a mobile system with a multi-modal interface for low-literate farmers providing agricultural extension videos on command in their own language or dialect BIBREF9. They report that people in these communities find it difficult to use softkey-type keyboards that are extremely common on modern smartphones. Instead, they proposed a system comprising large buttons, graphics and some voice input. Such a system for delivering information to farmers was built, and they showed that the farmers were very comfortable using it. Their results also show that a speech interface alone was not enough for that scenario, except in cases where the search list was long and the results were dependent on keywords or short phrases. Similarly, the Adivasi Radio App, based on text-to-speech (TTS) technology, was developed to read out written reports in Gondi, one of the main tribal languages in Chhattisgarh. Bolo is another mobile application which uses a very simple interface to improve children's literacy in India. Project Karya also proposes to divide massive digital tasks into “microwork” and crowdsource this work to millions of people in rural India via phones.
While voice might solve the foundational literacy problems, the lack of digital literacy is often more challenging to overcome. BIBREF23 demonstrate the use of an app to teach the Mundari language to children. The app comprised a series of games designed with the help of the community. The content was delivered in the Bangla script, which was what the children were taught in school. Their study noted that children from such communities found the usage of a smartphone to be difficult.
Relying on voice-based systems also poses a few challenges. It is not easy to build robust ASR systems for these languages due to a severe lack of data, dialect variation and other such constraints. An attempt to resolve this was made with the development of the SALAAM ASR BIBREF24, which uses the acoustic model of an existing ASR and performs a cross-lingual phoneme mapping between the source and target language. This, however, is limited to recognition of a very small vocabulary, but finds use due to its cost-effective and low-resource setting.
<<</Interface>>>
<<<Deployment and Impact>>>
After developing technologies to provide information, and ensuring that the applications are designed in such a way that they are accessible to the population, the technology must be effectively deployed. Specialized applications are useless if they are not deployed properly in a way that accesses their end-users. When deploying a specially developed technology, the application must be deployed with consideration of the existing community dynamics. For any deployment to be successful, it usually must be able to be purposefully integrated into the lifestyle of community members - or have strong utilization incentives if it is a transformative technology. In this section, we will review examples of technology dissemination to low-resource/rural communities, and the impacts of effective deployment. While some technologies that we examine are not deployed utilizing low-resource languages specifically, the types of rural communities and villages in which they are deployed are analogous to the contexts in which low-resource languages exist, and clear parallels can be drawn.
Integrating the usage of a language technology intervention into a community in a low-resource context requires much more than simply introducing the technology. Unlike hardware interventions and innovations like solar panels or new agricultural tools, language technologies often rely on the delivery, exchange, and utilization of information, which is much less tangible than physical solutions. This is especially true for people with limited previous exposure to digital technology. Upon observing a selection of language-based interventions that were deployed in low-resource contexts, we observed that the most successful deployments of technologies tended to have three components of success. They: 1) Initially launched by seeding with target communities, 2) Worked closely to engage the community itself with the technology and information, and 3) Provided a strong incentive structure to adopt the technology; this incentive could be as simple as payments or as complex as communicated benefits from the technology.
<<<Case Studies>>>
In this section, we will be reviewing and comparing three separate technological systems, Learn2Earn BIBREF25, Mobile Vaani BIBREF13, and the Climate and Agriculture Information Service (CAIS) BIBREF26, and examine how they utilized the rules of successful deployment outlined above. Learn2Earn, developed by Microsoft Research, is a simple IVR-based mobile language technology app which uses quizzes to educate people and spread public awareness campaigns, launched initially in rural central India. Mobile Vaani is a large-scale and broad community-based IVR media exchange platform, developed by the NGO Gram Vaani. It currently has over 100,000 unique monthly active users, and processes 10,000 calls per day across the three Indian states of Bihar, Jharkhand, and Madhya Pradesh. Finally, the CAIS system is an SMS-based information delivery system designed for farmers who live in a rural, low-resource and no-connectivity agricultural village on the Char Islands in the Bangladeshi Chalan Beel Wetland. This application provides weather data and agricultural advice to farmers on a periodic basis and was developed by a collaboration between mPower and two local NGOs. After designing the platform in accessible ways, each deployment process began with seeding within a target community itself. All examples that we studied became successful only after a small-scale launch of their product. These launches occurred in different ways but were all based on targeting a starting group of users and incentivizing them to utilize and share the product. The initial users were people who were somewhat fluent in the technology (either through training or existing knowledge), and who knew of or had specialized needs that the technology could address.
<<<Learn2Earn>>>
Learn2Earn was built as a tech-enabled information dissemination system; its original information awareness campaign centred on informing farmers about their rights as guaranteed in India’s Forest Rights act. Because of the nature of their message content and delivery, the researchers decided to seed the platform with a single advertisement on an existing IVR channel already utilized by farmers. This advertisement reached 150 people, and provided them with distinct financial incentives to both call the platform, and invite friends to the platform. While only 17 of the original listeners of the advertisement went on to call the number, those respondents were members of the relevant community (farmers who were familiar with IVR technology) and were networked through family and friendships to additional ideal users of the platform. Within 7 weeks, the incentive structure allowed the platform to spread from the original 17 users to over 17,000, with little influence from the platform respondents. BIBREF25
<<</Learn2Earn>>>
<<<Mobile Vaani>>>
Mobile Vaani initially tried to launch in 2011 by encouraging employees from their partner NGOs to distribute their platform. The platform was initially imagined as a voice-based, inclusive medium for communities to express their grievances and communicate with each other digitally. The initial employees who recruited for the platform were not from the community but did work closely with them regularly. While there was some initial success in the launch, the mobile Vaani Team were unable to grow at a significant pace because they informed the end-users about their intended design and usage of the technology, which “set unrealistic expectations of the platform in the minds of the participating users” after the technology could not be used in the exact way that it was encouraged. A few months later, the platform decided to re-launch and expand by recruiting a series of trained and compensated volunteers from a variety of communities that they hoped to engage. During the second “launch,” the community members were able to learn about the platform, and adapt it to their specific use cases. The platform began to gain popularity during a teacher’s strike in the state of Jharkhand – where a specific use case for expressing grievances powered by the community arose.
<<</Mobile Vaani>>>
<<<CAIS>>>
The CAIS platform launched in direct collaboration with the NGO partner for the village (every available farmer registered their name, number, and crop type with the NGO partner), and consequently the target population was integrated from the start. As the programs grew, each engaged with the community on a high level. In the case of all three platforms, a specific population and a very specific understanding of that population's needs had to be identified before the platform could be relatively effective. Even after the deployment of the platform, care had to be taken to integrate closely with community systems. The village in which CAIS worked had a system of self-empowerment groups that had been organized by the NGO. Each group had a leader, and while not every villager in each group had a basic phone, every group leader did. Consequently, the researchers behind CAIS worked to ensure that every group leader was engaged with CAIS and would relay the CAIS informational updates to the villagers that they led. Similarly, the CAIS researchers worked closely with village leaders to determine who was able to access the information in SMS form and deployed informational physical posters as a substitute for those portions of the village population who could not. This intensive work led to the successful adaptation of technology to the benefit of the farmers' yields. The researchers behind the Vaani system also continued to expand the system through a local network of volunteers. BIBREF13
<<</CAIS>>>
<<<CGNET Swara>>>
CGNet Swara, which we introduced earlier in this paper, also increased its initial participation by engaging with the wider community through in-person training and awareness sessions. They have conducted over 50 workshops, and have trained more than 2,000 members of various communities BIBREF12. Outreach activities such as these also allowed an increased spread of awareness via word-of-mouth. From these examples, it is clear that community engagement is the absolute key to spreading technology. Incentives, both monetary and situational, were a major way that these platforms were able to engage their initial users. Incentives served to empower individuals to become champions of the platform and enabled them to use their knowledge of the community and existing peer networks to deliver the technology where it was needed. All platforms used incentives of some sort; Learn2Earn used direct payments for recruitment and participation, and also delivered relevant topics to the users. Mobile Vaani provided financial incentives to the volunteers who mobilized to evangelize the product. CAIS did not provide monetary incentives but instead brought technology that had an actionable and tangible impact on the daily lives of farmers. With the deployment of these technologies, direct needs of the population were met.
<<</CGNET Swara>>>
<<</Case Studies>>>
<<</Deployment and Impact>>>
<<<Conclusion>>>
The boost in recent advancements in NLP research has started breaking down communication and information barriers. This, coupled with in-depth studies on the socio-economic benefits of enabling less-connected communities with technology, provides a strong argument for increasing investment in this area. It is promising to observe increased innovation and steady progress in the empowerment of rural communities using language tools. Increased exposure to the challenges and works in this area can catalyse developments in improving inclusion and information dissemination. We hope that this paper will provide pointers in the right direction for potential ventures that plan on introducing disruptive language technologies to marginalized communities.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nInformation Exchange\nAccess\nMaking information accessible to people\nMaking more digital content available\nMaking NLP models more accessible to low resource languages\nGeneration\nDigitization of Documents\nCrowdsourcing\nInterface\nDeployment and Impact\nCase Studies\nLearn2Earn\nMobile Vaani\nCAIS\nCGNET Swara\nConclusion"
],
"type": "outline"
}
|
1910.11491
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Attention Optimization for Abstractive Document Summarization
<<<Abstract>>>
Attention plays a key role in the improvement of sequence-to-sequence-based document summarization models. To obtain a powerful attention helping with reproducing the most salient information and avoiding repetitions, we augment the vanilla attention model from both local and global aspects. We propose an attention refinement unit paired with local variance loss to impose supervision on the attention model at each decoding step, and a global variance loss to optimize the attention distributions of all decoding steps from the global perspective. The performances on the CNN/Daily Mail dataset verify the effectiveness of our methods.
<<</Abstract>>>
<<<Introduction>>>
Abstractive document summarization BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 attempts to produce a condensed representation of the most salient information of the document, aspects of which may not appear as parts of the original input text. One popular framework used in abstractive summarization is the sequence-to-sequence model introduced by BIBREF5. The attention mechanism BIBREF6 is proposed to enhance the sequence-to-sequence model by allowing salient features to dynamically come to the forefront as needed to make up for the incapability of memorizing the long input source.
However, when it comes to longer documents, basic attention mechanism may lead to distraction and fail to attend to the relatively salient parts. Therefore, some works focus on designing various attentions to tackle this issue BIBREF2, BIBREF7. We follow this line of research and propose an effective attention refinement unit (ARU). Consider the following case. Even with a preliminary idea of which parts of source document should be focused on (attention), sometimes people may still have trouble in deciding which exact part should be emphasized for the next word (the output of the decoder). To make a more correct decision on what to write next, people always adjust the concentrated content by reconsidering the current state of what has been summarized already. Thus, ARU is designed as an update unit based on current decoding state, aiming to retain the attention on salient parts but weaken the attention on irrelevant parts of input.
The de facto standard attention mechanism is a soft attention that assigns attention weights to all input encoder states, while according to previous work BIBREF8, BIBREF9, a well-trained hard attention on exactly one input state is conducive to more accurate results compared to soft attention. To retain the good performance of hard attention as well as the end-to-end trainability of soft attention, we introduce a local variance loss to encourage the model to put most of the attention on just a few parts of the input states at each decoding step. Additionally, we propose a global variance loss to directly optimize the attention from the global perspective by preventing high weights from being assigned to the same locations multiple times. The global variance loss is somewhat similar to the coverage mechanism BIBREF10, BIBREF11, which is also designed for solving the repetition problem. The coverage mechanism introduces a coverage vector to keep track of previous decisions at each decoding step and adds it into the attention calculation. However, when high attention is wrongly assigned to a certain position during previous timesteps, the coverage mechanism hinders the correct assignment of attention in later steps.
We conduct our experiments on the CNN/Daily Mail dataset and achieve comparable results on ROUGE BIBREF12 and METEOR BIBREF13 with the state-of-the-art models. Our model surpasses the strong pointer-generator baseline (w/o coverage) BIBREF11 on all ROUGE metrics by a large margin. As far as we know, we are the first to introduce explicit loss functions to optimize the attention. More importantly, the idea behind our model is simple but effective. Our proposal could be applied to improve other attention-based models, which we leave these explorations for the future work.
<<</Introduction>>>
<<<Proposed model>>>
<<<Model Architecture>>>
We adopt the Pointer-Generator Network (PGN) BIBREF11 as our baseline model, which augments the standard attention-based seq2seq model with a hybrid pointer network BIBREF14. An input document is firstly fed into a Bi-LSTM encoder, then an uni-directional LSTM is used as the decoder to generate the summary word by word. At each decoding step, the attention distribution $a_t$ and the context vector $c_t$ are calculated as follows:
where $h_i$ and $s_t$ are the hidden states of the encoder and decoder, respectively. Then, the token-generation softmax layer reads the context vector $c_t$ and current hidden state $s_t$ as inputs to compute the vocabulary distribution. To handle OOVs, we inherit the pointer mechanism to copy rare or unseen words from the input document (refer to BIBREF11 for more details).
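For reference, the following PyTorch sketch implements the standard additive attention and context vector of the pointer-generator baseline BIBREF11; the module layout and dimension names are illustrative and not taken from this paper's own implementation.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Standard additive attention as used in the pointer-generator baseline (sketch)."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_h = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_s = nn.Linear(dec_dim, attn_dim, bias=True)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.W_h(enc_states)
                                   + self.W_s(dec_state).unsqueeze(1))).squeeze(-1)
        a_t = torch.softmax(scores, dim=-1)                       # attention distribution
        c_t = torch.bmm(a_t.unsqueeze(1), enc_states).squeeze(1)  # context vector
        return a_t, c_t
```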
To augment the vanilla attention model, we propose the Attention Refinement Unit (ARU) module to retain the attention on the salient parts while weakening the attention on the irrelevant parts of input. As illustrated in Figure FIGREF5, the attention weight distribution $a_t$ at timestep $t$ (the first red histogram) is fed through the ARU module. In the ARU module, current decoding state $s_t$ and attention distribution $a_t$ are combined to calculate a refinement gate $r_t$:
where $\sigma $ is the sigmoid activation function, and $W_{s}^{r}$, $W_{a}^r$ and $b_r$ are learnable parameters. $r_t$ represents to what degree the current attention should be updated. A small value of $r_{ti}$ indicates that the content at the $i$-th position is not very relevant to the current decoding state $s_t$, and the attention on the $i$-th position should be weakened to avoid confusing the model. The attention distribution is then updated by taking the element-wise product (denoted $\odot $) of $r_t$ and $a_t$.
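One plausible instantiation of the refinement gate and the element-wise update is sketched below; the per-position gating form and parameter shapes are assumptions for illustration rather than the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class AttentionRefinementUnit(nn.Module):
    """Sketch of ARU: gate each attention weight using the current decoder state s_t."""
    def __init__(self, dec_dim):
        super().__init__()
        self.w_s = nn.Linear(dec_dim, 1, bias=True)  # contribution of the decoder state
        self.w_a = nn.Parameter(torch.ones(1))       # contribution of the attention weight

    def forward(self, a_t, s_t):
        # a_t: (batch, src_len); s_t: (batch, dec_dim)
        r_t = torch.sigmoid(self.w_s(s_t) + self.w_a * a_t)  # refinement gate, broadcast
        return r_t * a_t                                      # element-wise refinement
```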
<<</Model Architecture>>>
<<<Local Variance Loss>>>
As discussed in section SECREF1, an attention model that puts most of the attention weight on just a few parts of the input tends to achieve good performance. Mathematically, when only a small number of values are large, the shape of the distribution is sharp and the variance of the attention distribution is large. Drawing on the concept of variance in mathematics, the local variance loss is defined as the reciprocal of the variance of the attention distribution, encouraging the attention model to focus on the more salient parts. The standard variance calculation is based on the mean of the distribution. However, since previous work BIBREF15, BIBREF16 noted that the median value is more robust to outliers than the mean value, we use the median to calculate the variance of the attention distribution. Thus, the local variance loss can be calculated as:
where $\hat{\cdot }$ is a median operator and $\epsilon $ is utilized to avoid zero in the denominator.
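A plausible implementation of this median-based local variance loss is sketched below; averaging over decoding steps and the exact variance normalization are assumptions.

```python
import torch

def local_variance_loss(attn, eps=1e-6):
    """attn: (batch, dec_steps, src_len) attention distributions.
    The variance is computed around the median (more robust to outliers),
    and the loss is its reciprocal, averaged over decoding steps."""
    median = attn.median(dim=-1, keepdim=True).values
    var = ((attn - median) ** 2).mean(dim=-1)   # (batch, dec_steps)
    return (1.0 / (var + eps)).mean()
```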
<<</Local Variance Loss>>>
<<<Global Variance Loss>>>
To avoid the model attending to the same parts of the input states repeatedly, we propose another variance loss to adjust the attention distribution globally. Ideally, the same locations should be assigned a relatively high attention weight at most once. Different from the coverage mechanism BIBREF11, BIBREF10, which tracks the attention distributions of previous timesteps, we maintain the sum of attention distributions over all decoder timesteps, denoted as $A$. The $i$-th value of $A$ represents the accumulated attention that the input state at the $i$-th position has received throughout the whole decoding process. Without repeated high attention being paid to the same location, the difference between the accumulated attention weight and the maximum attention weight of the $i$-th input state over all timesteps should be small. Moreover, the whole distribution of the difference over all input positions should have a flat shape. Similar to the definition of the local variance loss, the global variance loss is formulated as:
where $g_i$ represents the difference between the accumulated attention weight and maximum attention weight at $i$-th position.
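A hedged sketch of the global variance loss follows; the use of a median-based variance over the differences $g_i$ matches the description above, but the exact formulation is an assumption.

```python
import torch

def global_variance_loss(attn):
    """attn: (batch, dec_steps, src_len) attention distributions.
    g_i = accumulated attention at position i minus its maximum single-step weight;
    the loss is the (median-based) variance of g over positions, which is small
    when no position receives high attention more than once."""
    accumulated = attn.sum(dim=1)            # A: (batch, src_len)
    peak = attn.max(dim=1).values            # max single-step weight per position
    g = accumulated - peak
    median = g.median(dim=-1, keepdim=True).values
    return ((g - median) ** 2).mean()
```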
<<</Global Variance Loss>>>
<<<Model Training>>>
The model is firstly pre-trained to minimize the maximum-likelihood loss, which is widely used in sequence generation tasks. We define $y^* = \lbrace y^*_1, \cdots , y_T^*\rbrace $ as the ground-truth output sequence for a given input sequence $x$, then the loss function is formulated as:
After converging, the model is further optimized with local variance loss and global variance loss. The mix of loss functions is:
where $\lambda _1$ and $\lambda _2$ are hyper-parameters.
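Reusing the two loss sketches above, the fine-tuning objective presumably combines the terms additively; the form below is an assumption consistent with this description, with the default λ values taken from the implementation details reported later.

```python
def total_loss(mle_loss, attn, lambda1=0.3, lambda2=0.1):
    """Fine-tuning objective: maximum-likelihood loss plus the weighted attention losses
    (local_variance_loss and global_variance_loss are the sketches defined above)."""
    return (mle_loss
            + lambda1 * local_variance_loss(attn)
            + lambda2 * global_variance_loss(attn))
```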
<<</Model Training>>>
<<</Proposed model>>>
<<<Experiments>>>
<<<Preliminaries>>>
<<<Dataset and Metrics.>>>
We conduct our model on the large-scale dataset CNN/Daily Mail BIBREF19, BIBREF1, which is widely used in the task of abstractive document summarization with multi-sentences summaries. We use the scripts provided by BIBREF11 to obtain the non-anonymized version of the dataset without preprocessing to replace named entities. The dataset contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs in total. We use the full-length ROUGE F1 and METEOR as our main evaluation metrics.
<<</Dataset and Metrics.>>>
<<<Implementation Details.>>>
The data preprocessing is the same as PGN BIBREF11, and we randomly initialize the word embeddings. The hidden states of the encoder and the decoder are both 256-dimensional and the embedding size is also 256. Adagrad with learning rate 0.15 and an accumulator with initial value 0.1 are used to train the model. We conduct experiments on a single Tesla P100 GPU with a batch size of 64, and it takes about 50000 iterations for pre-training and 10000 iterations for fine-tuning. Beam search size is set to 4 and trigram avoidance BIBREF17 is used to avoid trigram-level repetition. Tuned on the validation set, $\lambda _1$ and $\lambda _2$ in the loss function (Equation DISPLAY_FORM12) are set to 0.3 and 0.1, respectively.
<<</Implementation Details.>>>
<<</Preliminaries>>>
<<<Automatic Evaluation Result>>>
As shown in Table TABREF13 (the performance of other models is collected from their papers), our model exceeds the PGN baseline by 3.85, 2.1 and 3.37 in terms of R-1, R-2 and R-L respectively, and receives a boost of over 3.23 points on METEOR. FastAbs BIBREF3 regards ROUGE scores as reward signals with reinforcement learning, which brings a great performance gain. DCA BIBREF4 proposes deep communicating agents in a reinforcement learning setting and achieves the best results on CNN/Daily Mail. Although our experimental results do not outperform the state-of-the-art models, our model has a much simpler structure with fewer parameters. Besides, these simple methods do yield a boost in performance compared with the PGN baseline and may be applied to other models with an attention mechanism.
We further evaluate how these optimization approaches work. The results at the bottom of Table TABREF13 verify the effectiveness of our proposed methods. The ARU module has achieved a gain of 0.97 ROUGE-1, 0.35 ROUGE-2, and 0.64 ROUGE-L points; the local variance loss boosts the model by 3.01 ROUGE-1, 1.6 ROUGE-2, and 2.58 ROUGE-L. As shown in Figure FIGREF22, the global variance loss helps with eliminating n-gram repetitions, which verifies its effectiveness.
<<</Automatic Evaluation Result>>>
<<<Human Evaluation and Case Study>>>
We also conduct human evaluation on the generated summaries. Similar to the previous work BIBREF3, BIBREF20, we randomly select 100 samples from the test set of the CNN/Daily Mail dataset and ask 3 human testers to measure the relevance and readability of each summary. Relevance is based on how much salient information the summary contains, and readability is based on how fluent and grammatical the summary is. Given an article, different people may have different understandings of its main content; the ideal situation is that more than one reference is paired with each article. However, most summarization datasets pair each article with a single reference summary due to the cost of annotating multiple references. Since we use the reference summaries as target sequences to train the model and assume that they are the gold standard, we give both the articles and the reference summaries to the annotators to score the generated summaries. In other words, we compare the generated summaries against the reference ones and the original article to obtain the (relative) scores in Table 3. Each perspective is assessed with a score from 1 (worst) to 5 (best). The results in Table TABREF21 demonstrate that our model performs better under both criteria w.r.t. BIBREF11. Additionally, we show examples of summaries generated by our model and the baseline model in Table TABREF23. As can be seen from the table, PGN suffers from repetition and fails to obtain the salient information. Though the coverage mechanism addresses the saliency and repetition problems, it generates many trivial facts. With ARU, the model successfully concentrates on the salient information; however, it also suffers from a serious repetition problem. Further optimized by the variance losses, our model can avoid repetition and generate a summary with the salient information. Besides, our generated summary contains fewer trivial facts compared to the PGN+Coverage model.
<<</Human Evaluation and Case Study>>>
<<</Experiments>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nProposed model\nModel Architecture\nLocal Variance Loss\nGlobal Variance Loss\nModel Training\nExperiments\nPreliminaries\nDataset and Metrics.\nImplementation Details.\nAutomatic Evaluation Result\nHuman Evaluation and Case Study"
],
"type": "outline"
}
|
1909.11297
|
Please extract the outline of the given paper.
You just need to output the section names (without details in sections' content) in the correct order without any additional explanation,
like "Abstract
Introduction
Related Work
Method
<Subsection1 of Method>
<Subsection2 of Method>
Experiments
Conclusion".
Context: <<<Title>>>
Learning to Detect Opinion Snippet for Aspect-Based Sentiment Analysis
<<<Abstract>>>
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity towards a particular aspect in a sentence. Recently, this task has been widely addressed by the neural attention mechanism, which computes attention weights to softly select words for generating aspect-specific sentence representations. The attention is expected to concentrate on opinion words for accurate sentiment prediction. However, attention is prone to be distracted by noisy or misleading words, or opinion words from other aspects. In this paper, we propose an alternative hard-selection approach, which determines the start and end positions of the opinion snippet, and selects the words between these two positions for sentiment prediction. Specifically, we learn deep associations between the sentence and aspect, and the long-term dependencies within the sentence, by leveraging the pre-trained BERT model. We further detect the opinion snippet by self-critical reinforcement learning. Experimental results demonstrate the effectiveness of our method and show that our hard-selection approach outperforms soft-selection approaches when handling multi-aspect sentences.
<<</Abstract>>>
<<<Introduction>>>
Aspect-based sentiment analysis BIBREF0, BIBREF1 is a fine-grained sentiment analysis task which has gained much attention from research and industry. It aims at predicting the sentiment polarity of a particular aspect of the text. With the rapid development of deep learning, this task has been widely addressed by attention-based neural networks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. To name a few, wang2016attention learn to attend on different parts of the sentence given different aspects and then generate aspect-specific sentence representations for sentiment prediction. tay2018learning learn to attend on the correct words based on associative relationships between sentence words and a given aspect. These attention-based methods have brought remarkable performance improvements to the ABSA task.
Previous attention-based methods can be categorized as soft-selection approaches since the attention weights scatter across the whole sentence and every word is taken into consideration with different weights. This usually results in attention distraction BIBREF7, i.e., attending on noisy or misleading words, or opinion words from other aspects. Take Figure FIGREF1 as an example, for the aspect place in the sentence “the food is usually good but it certainly is not a relaxing place to go”, we visualize the attention weights from the model ATAE-LSTM BIBREF2. As we can see, the words “good” and “but” are dominant in attention weights. However, “good” is used to describe the aspect food rather than place, “but” is not so related to place either. The true opinion snippet “certainly is not a relaxing place” receives low attention weights, leading to the wrong prediction towards the aspect place.
Therefore, we propose an alternative hard-selection approach by determining two positions in the sentence and selecting words between these two positions as the opinion expression of a given aspect. This is also based on the observation that opinion words of a given aspect are usually distributed consecutively as a snippet BIBREF8. As a consecutive whole, the opinion snippet may gain enough attention weights, avoid being distracted by other noisy or misleading words, or distant opinion words from other aspects. We then predict the sentiment polarity of the given aspect based on the average of the extracted opinion snippet. The explicit selection of the opinion snippet also brings us another advantage that it can serve as justifications of our sentiment predictions, making our model more interpretable.
To accurately determine the two positions of the opinion snippet of a particular aspect, we first model the deep associations between the sentence and aspect, and the long-term dependencies within the sentence by BERT BIBREF9, which is a pre-trained language model and achieves exciting results in many natural language tasks. Second, with the contextual representations from BERT, the two positions are sequentially determined by self-critical reinforcement learning. The reason for using reinforcement learning is that we do not have the ground-truth positions of the opinion snippet, but only the polarity of the corresponding aspect. Then the extracted opinion snippet is used for sentiment classification. The details are described in the model section.
The main contributions of our paper are as follows:
We propose a hard-selection approach to address the ABSA task. Specifically, our method determines two positions in the sentence to detect the opinion snippet towards a particular aspect, and then uses the framed content for sentiment classification. Our approach can alleviate the attention distraction problem in previous soft-selection approaches.
We model deep associations between the sentence and aspect, and the long-term dependencies within the sentence by BERT. We then learn to detect the opinion snippet by self-critical reinforcement learning.
The experimental results demonstrate the effectiveness of our method and show that our approach significantly outperforms soft-selection approaches in handling multi-aspect sentences.
<<</Introduction>>>
<<<Related Work>>>
Traditional machine learning methods for aspect-based sentiment analysis focus on extracting a set of features to train sentiment classifiers BIBREF10, BIBREF11, BIBREF12, which is usually labor-intensive. With the development of deep learning technologies, the neural attention mechanism BIBREF13 has been widely adopted to address this task BIBREF14, BIBREF2, BIBREF15, BIBREF3, BIBREF16, BIBREF4, BIBREF17, BIBREF6, BIBREF5, BIBREF18, BIBREF19, BIBREF20, BIBREF21. wang2016attention propose attention-based LSTM networks which attend on different parts of the sentence for different aspects. Ma2017Interactive utilize interactive attention to capture the deep associations between the sentence and the aspect. Hierarchical models BIBREF4, BIBREF17, BIBREF6 are also employed to capture multiple levels of emotional expression for more accurate prediction, given the complexity of sentence structure and semantic diversity. tay2018learning learn to attend based on associative relationships between sentence words and the aspect.
All these methods use normalized attention weights to softly select words for generating aspect-specific sentence representations, while the attention weights scatter across the whole sentence and can easily result in attention distraction. wang2018learning propose a hard-selection method to learn segmentation attention which can effectively capture the structural dependencies between the target and the sentiment expressions with a linear-chain conditional random field (CRF) layer. However, it can only address aspect-term level sentiment prediction which requires annotations for aspect terms. Compared with it, our method can handle both aspect-term level and aspect-category level sentiment prediction by detecting the opinion snippet.
<<</Related Work>>>
<<<Model>>>
We first formulate the problem. Given a sentence $S=\lbrace w_1,w_2,...,w_N\rbrace $ and an aspect $A=\lbrace a_1,a_2,...,a_M\rbrace $, the ABSA task is to predict the sentiment of $A$. In our setting, the aspect can be either aspect terms or an aspect category. As aspect terms, $A$ is a snippet of words in $S$, i.e., a sub-sequence of the sentence, while as an aspect category, $A$ represents a semantic category with $M=1$, containing just an abstract token.
In this paper, we propose a hard-selection approach to solve the ABSA task. Specifically, we first learn to detect the corresponding opinion snippet $O=\lbrace w_{l},w_{l+1}...,w_{r}\rbrace $, where $1\le l\le r\le N$, and then use $O$ to predict the sentiment of the given aspect. The network architecture is shown in Figure FIGREF5.
<<<Word-Aspect Fusion>>>
Accurately modeling the relationships between sentence words and an aspect is the key to the success of the ABSA task. Many methods have been developed to model word-aspect relationships. wang2016attention simply concatenate the aspect embedding with the input word embeddings and sentence hidden representations for computing aspect-specific attention weights. Ma2017Interactive learn the aspect and sentence interactively by using two attention networks. tay2018learning adopt circular convolution of vectors for performing the word-aspect fusion.
In this paper, we employ BERT BIBREF9 to model the deep associations between the sentence words and the aspect. BERT is a powerful pre-trained model which has achieved remarkable results in many NLP tasks. The architecture of BERT is a multi-layer bidirectional Transformer Encoder BIBREF22, which uses the self-attention mechanism to capture complex interaction and dependency between terms within a sequence. To leverage BERT to model the relationships between the sentence and the aspect, we pack the sentence and aspect together into a single sequence and then feed it into BERT, as shown in Figure FIGREF5. With this sentence-aspect concatenation, both the word-aspect associations and word-word dependencies are modeled interactively and simultaneously. With the contextual token representations $T_S=T_{[1:N]}\in \mathbb {R}^{N\times {H}}$ of the sentence, where $N$ is the sentence length and $H$ is the hidden size, we can then determine the start and end positions of the opinion snippet in the sentence.
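As a concrete illustration of this packing step, the sketch below (not the authors' released code) shows how a sentence-aspect pair could be fed to a pre-trained BERT model to obtain the contextual token representations $T_S$. The Hugging Face transformers API and all variable names are assumptions, since the implementation is not detailed here.

```python
import torch
from transformers import BertTokenizer, BertModel

# Pre-trained model used in the paper (bert-base-uncased, hidden size H = 768).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

sentence = "the food is usually good but it certainly is not a relaxing place to go"
aspect = "place"  # aspect terms or an abstract aspect-category token

# Pack sentence and aspect into one sequence: [CLS] sentence [SEP] aspect [SEP],
# so word-aspect and word-word interactions are modeled jointly by self-attention.
inputs = tokenizer(sentence, aspect, return_tensors="pt")
with torch.no_grad():
    outputs = bert(**inputs)

# Contextual representations of all tokens, shape (1, seq_len, 768);
# slicing out the sentence tokens gives T_S of shape (N, H).
hidden = outputs.last_hidden_state
```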
<<</Word-Aspect Fusion>>>
<<<Soft-Selection Approach>>>
To fairly compare the performance of soft-selection approaches with hard-selection approaches, we use the same word-aspect fusion results $T_{S}$ from BERT. We implement the attention mechanism by adopting an approach similar to the work BIBREF23.
where $v_1\in \mathbb {R}^{H}$ and $W_1\in \mathbb {R}^{H\times {H}}$ are the parameters. The normalized attention weights $\alpha $ are used to softly select words from the whole sentence and generate the final aspect-specific sentence representation $g$. Then we make sentiment prediction as follows:
where $W_2\in \mathbb {R}^{C\times {H}}$ and $b\in \mathbb {R}^{C}$ are the weight matrix and bias vector respectively. $\hat{y}$ is the probability distribution over $C$ polarities. The polarity with the highest probability is selected as the prediction.
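The display equations of this attention layer are not reproduced in the text above; the following is a minimal sketch of one standard instantiation consistent with the stated parameter shapes ($v_1\in \mathbb {R}^{H}$, $W_1\in \mathbb {R}^{H\times H}$, $W_2\in \mathbb {R}^{C\times H}$). The tanh nonlinearity and the exact pooling form are assumptions, not necessarily the authors' exact equations.

```python
import torch
import torch.nn as nn

class SoftSelection(nn.Module):
    """Attention pooling over T_S followed by a linear sentiment classifier."""

    def __init__(self, hidden_size: int = 768, num_classes: int = 3):
        super().__init__()
        self.W1 = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v1 = nn.Linear(hidden_size, 1, bias=False)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, T_S: torch.Tensor) -> torch.Tensor:
        # T_S: (batch, N, H) contextual token representations from BERT.
        scores = self.v1(torch.tanh(self.W1(T_S))).squeeze(-1)   # (batch, N)
        alpha = torch.softmax(scores, dim=-1)                    # normalized attention weights
        g = torch.bmm(alpha.unsqueeze(1), T_S).squeeze(1)        # (batch, H) weighted sentence vector
        return torch.softmax(self.classifier(g), dim=-1)         # distribution over C polarities
```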
<<</Soft-Selection Approach>>>
<<<Hard-Selection Approach>>>
Our proposed hard-selection approach determines the start and end positions of the opinion snippet and selects the words between these two positions for sentiment prediction. Since we do not have the ground-truth opinion snippet, but only the polarity of the corresponding aspect, we adopt reinforcement learning BIBREF24 to train our model. To make sure that the end position comes after the start position, we determine the start and end sequentially as a sequence training problem BIBREF25. The parameters of the network, $\Theta $, define a policy $p_{\Theta }$ and output an “action” that is the prediction of the position. For simplicity, we only generate two actions for determining the start and end positions respectively. After determining the start position, the “state” is updated and then the end is conditioned on the start.
Specifically, we define a start vector $s\in \mathbb {R}^{H}$ and an end vector $e\in \mathbb {R}^{H}$. Similar to the prior work BIBREF9, the probability of a word being the start of the opinion snippet is computed as a dot product between its contextual token representation and $s$ followed by a softmax over all of the words of the sentence.
We then sample the start position $l$ based on the multinomial distribution $\beta _l$. To guarantee the end comes after the start, the end is sampled only in the right part of the sentence after the start. Therefore, the state is updated by slicing operation ${T_S}^r=T_S[l:]$. Same as the start position, the end position $r$ is also sampled based on the distribution $\beta _r$:
Then we have the opinion snippet $T_O=T_S{[l:r]}$ to predict the sentiment polarity of the given aspect in the sentence. The probabilities of the start position at $l$ and the end position at $r$ are $p(l)=\beta _l[l]$ and $p(r)=\beta _r[r]$ respectively.
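The following sketch illustrates this sequential sampling of the start and end positions. The function and variable names are hypothetical, and the inclusive slicing of the snippet is an assumption based on the notation $O=\lbrace w_{l},\dots ,w_{r}\rbrace $.

```python
import torch

def sample_snippet(T_S, s, e):
    """Sequentially sample the start and end positions of the opinion snippet.

    T_S: (N, H) contextual token representations of the sentence.
    s, e: learnable start / end vectors of size H.
    Returns the snippet, both positions, and the log-probability of the two actions.
    """
    beta_l = torch.softmax(T_S @ s, dim=-1)          # start distribution over all N positions
    l = torch.multinomial(beta_l, 1).item()          # sample start index

    T_right = T_S[l:]                                # state update: only look to the right of l
    beta_r = torch.softmax(T_right @ e, dim=-1)      # end distribution over positions >= l
    r = l + torch.multinomial(beta_r, 1).item()      # sample end index (absolute position)

    log_p = torch.log(beta_l[l]) + torch.log(beta_r[r - l])
    return T_S[l:r + 1], l, r, log_p
```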
<<<Reward>>>
After we get the opinion snippet $T_O$ by the sampling of the start and end positions, we compute the final representation $g_o$ by the average of the opinion snippet, $g_o=avg(T_O)$. Then, equation DISPLAY_FORM9 with different weights is applied for computing the sentiment prediction $\hat{y_o}$. The cross-entropy loss function is employed for computing the reward.
where $c$ is the index of the polarity class and $y$ is the ground truth.
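A minimal sketch of this reward computation is given below, assuming the reward is the negative cross-entropy of the snippet-based prediction (the exact sign convention is not stated in the text above); `classifier` stands for the separate prediction weights mentioned in the paragraph.

```python
import torch
import torch.nn.functional as F

def snippet_reward(T_O, classifier, y):
    """Reward for a sampled snippet: negative cross-entropy of the
    snippet-based sentiment prediction against the gold polarity y."""
    g_o = T_O.mean(dim=0, keepdim=True)              # average of the opinion snippet, (1, H)
    logits = classifier(g_o)                         # (1, C)
    loss = F.cross_entropy(logits, torch.tensor([y]))
    return -loss                                     # higher reward = lower classification loss
```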
<<</Reward>>>
<<<Self-Critical Training>>>
In this paper, we use reinforcement learning to learn the start and end positions. The goal of training is to minimize the negative expected reward as shown below.
where $\Theta $ is all the parameters in our architecture, which includes the base method BERT, the position selection parameters $\lbrace s,e\rbrace $, and the parameters for sentiment prediction and then for reward calculation. Therefore, the state in our method is the combination of the sentence and the aspect. For each state, the action space is every position of the sentence.
To reduce the variance of the gradient estimation, the reward is associated with a reference reward or baseline $R_b$ BIBREF25. With the likelihood ratio trick, the objective function can be transformed as.
The baseline $R_b$ is computed based on the snippet determined by the baseline policy, which selects the start and end positions greedily by the $argmax$ operation on the $softmax$ results. As shown in Figure FIGREF5, the reward $R$ is calculated by sampling the snippet, while the baseline $R_b$ is computed by greedily selecting the snippet. Note that in the test stage, the snippet is determined by $argmax$ for inference.
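Building on the two sketches above, the snippet below illustrates the self-critical policy-gradient objective: the sampled snippet's reward is baselined by the reward of the greedily selected snippet. It shows only the policy-gradient term; how the classifier weights themselves are updated is not detailed in the text.

```python
import torch

def self_critical_loss(T_S, s, e, classifier, y):
    """Self-critical REINFORCE loss (sketch), reusing sample_snippet and snippet_reward."""
    # Sampled rollout (exploration).
    T_O, l, r, log_p = sample_snippet(T_S, s, e)
    R = snippet_reward(T_O, classifier, y)

    # Greedy rollout (baseline): argmax instead of sampling; also used at test time.
    with torch.no_grad():
        l_b = torch.argmax(T_S @ s).item()
        r_b = l_b + torch.argmax(T_S[l_b:] @ e).item()
        R_b = snippet_reward(T_S[l_b:r_b + 1], classifier, y)

    # Encourage samples whose reward beats the greedy baseline.
    return -(R - R_b).detach() * log_p
```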
<<</Self-Critical Training>>>
<<</Hard-Selection Approach>>>
<<</Model>>>
<<<Experiments>>>
In this section, we compare our hard-selection model with various baselines. To assess the ability of alleviating the attention distraction, we further conduct experiments on a simulated multi-aspect dataset in which each sentence contains multiple aspects.
<<<Datasets>>>
We use the same datasets as the work by tay2018learning, which are already processed to token lists and released in Github. The datasets are from SemEval 2014 task 4 BIBREF26, and SemEval 2015 task 12 BIBREF27, respectively. For aspect term level sentiment classification task (denoted by T), we apply the Laptops and Restaurants datasets from SemEval 2014. For aspect category level sentiment prediction (denoted by C), we utilize the Restaurants dataset from SemEval 2014 and a composed dataset from both SemEval 2014 and SemEval 2015. The statistics of the datasets are shown in Table TABREF20.
<<</Datasets>>>
<<<Implementation Details>>>
Our proposed models are implemented in PyTorch. We utilize the bert-base-uncased model, which contains 12 layers and about 100M parameters in total. The dimension $H$ is 768. The BERT component is initialized from the pre-trained model, and the other parameters are initialized by sampling from the normal distribution $\mathcal {N}(0,0.02)$. In our experiments, the batch size is 32. The reported results are the test scores obtained after fine-tuning for 7 epochs with a learning rate of 5e-5.
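The hyper-parameters above can be summarized in a short configuration sketch. Note that the optimizer is not specified in the text, so AdamW is only an assumed, common choice for BERT fine-tuning.

```python
import torch
from torch.optim import AdamW

BATCH_SIZE = 32        # from the paper
NUM_EPOCHS = 7         # fine-tuning epochs reported in the paper
LEARNING_RATE = 5e-5   # learning rate reported in the paper

def init_non_bert_params(module: torch.nn.Module) -> None:
    """Initialize the non-BERT parameters from N(0, 0.02), as stated in the paper."""
    for p in module.parameters():
        torch.nn.init.normal_(p, mean=0.0, std=0.02)

# Assumed optimizer setup (not stated in the paper):
# optimizer = AdamW(model.parameters(), lr=LEARNING_RATE)
```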
<<</Implementation Details>>>
<<<Compared Models>>>
LSTM: it uses the average of all hidden states as the sentence representation for sentiment prediction. In this model, aspect information is not used.
TD-LSTM BIBREF14: it employs two LSTMs and both of their outputs are applied to predict the sentiment polarity.
AT-LSTM BIBREF2: it utilizes the attention mechanism to produce an aspect-specific sentence representation. This method is a kind of soft-selection approach.
ATAE-LSTM BIBREF2: it also uses the attention mechanism. The difference with AT-LSTM is that it concatenates the aspect embedding to each word embedding as the input to LSTM.
AF-LSTM(CORR) BIBREF5: it adopts circular correlation to capture the deep fusion between sentence words and the aspect, which can learn rich, higher-order relationships between words and the aspect.
AF-LSTM(CONV) BIBREF5: compared with AF-LSTM(CORR), this method applies circular convolution of vectors for performing word-aspect fusion to learn relationships between sentence words and the aspect.
BERT-Original: it makes sentiment prediction by directly using the final hidden vector $C$ from BERT with the sentence-aspect pair as input.
<<</Compared Models>>>
<<<Our Models>>>
BERT-Soft: as described in Section SECREF7, the contextual token representations from BERT are processed by self attention mechanism BIBREF23 and the attention-weighted sentence representation is utilized for sentiment classification.
BERT-Hard: as described in Section SECREF10, it takes the same input as BERT-Soft. It is called a hard-selection approach since it employs reinforcement learning techniques to explicitly select the opinion snippet corresponding to a particular aspect for sentiment prediction.
<<</Our Models>>>
<<<Experimental Results>>>
In this section, we evaluate the performance of our models by comparing them with various baseline models. Experimental results are illustrated in Table TABREF21, in which 3-way represents 3-class sentiment classification (positive, negative and neutral) and Binary denotes binary sentiment prediction (positive and negative). The best score of each column is marked in bold.
Firstly, we observe that BERT-Original, BERT-Soft, and BERT-Hard outperform all soft attention baselines (in the first part of Table TABREF21), which demonstrates the effectiveness of fine-tuning the pre-trained model on the aspect-based sentiment classification task. Particularly, BERT-Original outperforms AF-LSTM(CONV) by 2.63%$\sim $9.57%, BERT-Soft outperforms AF-LSTM(CONV) by 2.01%$\sim $9.60% and BERT-Hard improves AF-LSTM(CONV) by 3.38%$\sim $11.23% in terms of accuracy. Considering the average score across eight settings, BERT-Original outperforms AF-LSTM(CONV) by 6.46%, BERT-Soft outperforms AF-LSTM(CONV) by 6.47% and BERT-Hard outperforms AF-LSTM(CONV) by 7.19% respectively.
Secondly, we compare the performance of three BERT-related methods. The performance of BERT-Original and BERT-Soft are similar by comparing their average scores. The reason may be that the original BERT has already modeled the deep relationships between the sentence and the aspect. BERT-Original can be thought of as a kind of soft-selection approach as BERT-Soft. We also observe that the snippet selection by reinforcement learning improves the performance over soft-selection approaches in almost all settings. However, the improvement of BERT-Hard over BERT-Soft is marginal. The average score of BERT-Hard is better than BERT-Soft by 0.68%. The improvement percentages are between 0.36% and 1.49%, while on the Laptop dataset, the performance of BERT-Hard is slightly weaker than BERT-Soft. The main reason is that the datasets only contain a small portion of multi-aspect sentences with different polarities. The distraction of attention will not impact the sentiment prediction much in single-aspect sentences or multi-aspect sentences with the same polarities.
<<</Experimental Results>>>
<<<Experimental Results on Multi-Aspect Sentences>>>
On the one hand, the attention distraction issue becomes worse in multi-aspect sentences. In addition to noisy and misleading words, the attention is also prone to be distracted by opinion words from other aspects of the sentence. On the other hand, the attention distraction impacts the performance of sentiment prediction more in multi-aspect sentences than in single-aspect sentences. Hence, we evaluate the performance of our models on a test dataset with only multi-aspect sentences.
A multi-aspect sentence can be categorized by two dimensions: the Number of aspects and the Polarity dimension which indicates whether the sentiment polarities of all aspects are the same or not. In the dimension of Number, we categorize the multi-aspect sentences as 2-3 and More. 2-3 refers to the sentences with two or three aspects while More refers to the sentences with more than three aspects. The statistics in the original dataset shows that there are much more sentences with 2-3 aspects than those with More aspects. In the dimension Polarity, the multi-aspect sentences can be categorized into Same and Diff. Same indicates that all aspects in the sentence have the same sentiment polarity. Diff indicates that the aspects have different polarities.
Multi-aspect test set. To evaluate the performance of our models on multi-aspect sentences, we construct a new multi-aspect test set by selecting all multi-aspect sentences from the original training, development, and test sets of the Restaurants term-level task. The details are shown in Table TABREF37.
Multi-aspect training set. Since we use all multi-aspect sentences for testing, we need to generate some “virtual” multi-aspect sentences for training. The simulated multi-aspect training set includes the original single-aspect sentences and the newly constructed multi-aspect sentences, which are generated by concatenating multiple single-aspect sentences with different aspects. We keep the balance of each subtype in the new training set (see Table TABREF38). The number of Neutral sentences is the least among three sentiment polarities in all single-aspect sentences. We randomly select the same number of Positive and Negative sentences. Then we construct multi-aspect sentences by combining single-aspect sentences in different combinations of polarities. The naming for different combinations is simple. For example, 2P-1N indicates that the sentence has two positive aspects and one negative aspect, and P-N-Nu means that the three aspects in the sentence are positive, negative, and neutral respectively. For simplicity, we only construct 2-asp and 3-asp sentences which are also the majority in the original dataset.
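As a rough illustration of this construction (not the authors' exact preprocessing), the sketch below concatenates single-aspect examples with distinct aspects into a virtual multi-aspect sentence for a given polarity combination such as 2P-1N.

```python
import random

def make_virtual_example(singles):
    """Concatenate single-aspect examples (assumed to have distinct aspects) into one
    virtual multi-aspect sentence; each aspect keeps its own polarity label."""
    sentence = " ".join(ex["sentence"] for ex in singles)
    return [{"sentence": sentence, "aspect": ex["aspect"], "polarity": ex["polarity"]}
            for ex in singles]

def sample_combination(pos, neg, neu, combo=("P", "P", "N")):
    """Build one virtual sentence for a polarity combination, e.g. 2P-1N."""
    pools = {"P": pos, "N": neg, "Nu": neu}
    picked = [random.choice(pools[tag]) for tag in combo]
    return make_virtual_example(picked)
```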
Results and Discussions. The results on different types of multi-aspect sentences are shown in Table TABREF40. The performance of BERT-Hard is better than BERT-Original and BERT-Soft over all types of multi-aspect sentences. BERT-Hard outperforms BERT-Soft by 2.11% when the aspects have the same sentiment polarities. For multi-aspect sentences with different polarities, the improvements are more significant: BERT-Hard outperforms BERT-Soft by 7.65% on the Diff subset overall. The improvements are 5.07% and 12.83% for the types 2-3 and More respectively, which demonstrates the ability of our model to handle sentences with More aspects. Notably, BERT-Soft has the poorest performance on the subset Diff among the three methods, which proves that soft attention is more likely to cause attention distraction.
Intuitively, when multiple aspects in the sentence have the same sentiment polarity, even if the attention is distracted to opinion words of other aspects, the model can still predict correctly to some extent. In such sentences, the impact of attention distraction is not obvious and is difficult to detect. However, when the aspects have different sentiment polarities, attention distraction leads to catastrophic prediction errors, which clearly decreases the classification accuracy. As shown in Table TABREF40, the accuracy on Diff is much worse than on Same for all three methods, which means that the type Diff is difficult to handle. Even so, the significant improvement proves that our hard-selection method can alleviate attention distraction to a certain extent. For soft-selection methods, attention distraction is inevitable due to the way they calculate attention weights for every single word. Noisy or irrelevant words may seize more attention weight than the ground-truth opinion words. Our method considers the opinion snippet as a consecutive whole, which is more resistant to attention distraction.
<<</Experimental Results on Multi-Aspect Sentences>>>
<<<Visualization>>>
In this section, we visualize the attention weights for BERT-Soft and the opinion snippets for BERT-Hard. As demonstrated in Figure FIGREF39, the multi-aspect sentence “the appetizers are OK, but the service is slow” belongs to the category Diff. Firstly, the attention weights of BERT-Soft scatter across the whole sentence and can attend to irrelevant words. For the aspect service, BERT-Soft attends to the word “ok” with a relatively high score even though it does not describe the aspect service. This problem also exists for the aspect appetizers. Furthermore, the attention distraction can cause prediction errors. For the aspect appetizers, “but” and “slow” gain high attention scores and cause the wrong sentiment prediction Negative.
Secondly, our proposed method BERT-Hard can detect the opinion snippet for a given aspect. As illustrated in Figure FIGREF39, the opinion snippets are selected by BERT-Hard accurately. In the sentence “the appetizers are ok, but the service is slow”, BERT-Hard can exactly locate the opinion snippets “ok” and “slow” for the aspect appetizers and service respectively.
Finally, we list some opinion snippets detected by BERT-Hard in Table TABREF42. Our method can precisely detect snippets even for latent opinion expressions and alleviate the influence of noisy words. For instance, “cannot be beat for the quality” is hard to predict using soft attention because the sentiment polarity is flipped by the negative word “cannot”. Our method selects the whole snippet without bias toward any single word, and in this way the attention distraction can be alleviated. We also list some inaccurate snippets in Table TABREF43. Some meaningless words around the true snippet are included, such as “are”, “and” and “at”. These words do not affect the final prediction. A possible explanation for these inaccurate words is that the true snippets are unlabeled and our method predicts them only from the supervisory signal of the sentiment labels.
<<</Visualization>>>
<<</Experiments>>>
<<<Conclusion>>>
In this paper, we propose a hard-selection approach for aspect-based sentiment analysis, which determines the start and end positions of the opinion snippet for a given input aspect. The deep associations between the sentence and aspect, and the long-term dependencies within the sentence are taken into consideration by leveraging the pre-trained BERT model. With the hard selection of the opinion snippet, our approach can alleviate the attention distraction problem of traditional attention-based soft-selection methods. Experimental results demonstrate the effectiveness of our method. Especially, our hard-selection approach outperforms soft-selection approaches significantly when handling multi-aspect sentences with different sentiment polarities.
<<</Conclusion>>>
<<</Title>>>
|
{
"references": [
"Title\nAbstract\nIntroduction\nRelated Work\nModel\nWord-Aspect Fusion\nSoft-Selection Approach\nHard-Selection Approach\nReward\nSelf-Critical Training\nExperiments\nDatasets\nImplementation Details\nCompared Models\nOur Models\nExperimental Results\nExperimental Results on Multi-Aspect Sentences\nVisualization\nConclusion"
],
"type": "outline"
}
|